Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2650–2663 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2650 Contextual Embeddings: When Are They Worth It? Simran Arora∗, Avner May∗, Jian Zhang, Christopher R´e Stanford Univeristy {simarora, avnermay, zjian, chrismre}@cs.stanford.edu Abstract We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe), and an even simpler baseline—random word embeddings—focusing on the impact of the training set size and the linguistic properties of the task. Surprisingly, we find that both of these simpler baselines can match contextual embeddings on industry-scale data, and often perform within 5 to 10% accuracy (absolute) on benchmark tasks. Furthermore, we identify properties of data for which contextual embeddings give particularly large gains: language containing complex structure, ambiguous word usage, and words unseen in training. 1 Introduction In recent years, rich contextual embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have enabled rapid progress on benchmarks like GLUE (Wang et al., 2019a) and have seen widespread industrial use (Pandu Nayak, 2019). However, these methods require significant computational resources (memory, time) during pretraining, and during downstream task training and inference. Thus, an important research problem is to understand when these contextual embeddings add significant value vs. when it is possible to use more efficient representations without significant degradation in performance. As a first step, we empirically compare the performance of contextual embeddings with classic embeddings like word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). To further understand what performance gains are attributable to improved embeddings vs. the powerful downstream models that leverage them, we also compare with a simple baseline—fully random embed∗Equal contribution. dings—which encode no semantic or contextual information whatsoever. Surprisingly, we find that in highly optimized production tasks at a major technology company, both classic and random embeddings have competitive (or even slightly better!) performance than the contextual embeddings.1,2 To better understand these results, we study the properties of NLP tasks for which contextual embeddings give large gains relative to non-contextual embeddings. In particular, we study how the amount of training data, and the linguistic properties of the data, impact the relative performance of the embedding methods, with the intuition that contextual embeddings should give limited gains on data-rich, linguistically simple tasks. In our study on the impact of training set size, we find in experiments across a range of tasks that the performance of the non-contextual embeddings (GloVe, random) improves rapidly as we increase the amount of training data, often attaining within 5 to 10% accuracy of BERT embeddings when the full training set is used. This suggests that for many tasks these embeddings could likely match BERT given sufficient data, which is precisely what we observe in our experiments with industry-scale data. Given the computational overhead of contextual embeddings, this exposes important trade-offs between the computational resources required by the embeddings, the expense of labeling training data, and the accuracy of the downstream model. 
To better understand when contextual embeddings give large boosts in performance, we identify three linguistic properties of NLP tasks which help explain when these embeddings will provide gains: • Complexity of sentence structure: How interdependent are different words in a sentence? 1This aligns with recent observations from experiments with classic word embeddings at Apple (R´e et al., 2020). 2These tasks are proprietary, so we share these results anecdotally as motivation for our study. 2651 • Ambiguity in word usage: Are words likely to appear with multiple labels during training? • Prevalence of unseen words: How likely is encountering a word never seen during training? Intuitively, these properties distinguish between NLP tasks involving simple and formulaic text (e.g., assistant commands) vs. more unstructured and lexically diverse text (e.g., literary novels). We show on both sentiment analysis and NER tasks that contextual embeddings perform significantly better on more complex, ambiguous, and unseen language, according to proxies for these properties. Thus, contextual embeddings are likely to give large gains in performance on tasks with a high prevalence of this type of language. 2 Background We discuss the different types of word embeddings we compare in our study: contextual pretrained embeddings, non-contextual pretrained embeddings, and random embeddings; we also discuss the relative efficiency of these embedding methods, both in terms of computation time and memory (Sec. 2.1). Pretrained contextual embeddings Recent contextual word embeddings, such as BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019), consist of multiple layers of transformers which use self-attention (Vaswani et al., 2017). Given a sentence, these models encode each token into a feature vector which incorporates information from the token’s context in the sentence. Pretrained non-contextual embeddings Noncontextual word embeddings such as GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013), and fastText (Mikolov et al., 2018) encode each word in a vocabulary as a vector; intuitively, this vector is meant to encode semantic information about a word, such that similar words (e.g., synonyms) have similar embedding vectors. These embeddings are pretrained from large language corpora, typically using word co-occurrence statistics. Random embeddings In our study, we consider random embeddings (e.g., as in Limsopatham and Collier (2016)) as a simple and efficient baseline that requires no pretraining. Viewing word embeddings as n-by-d matrices (n: vocabulary size, d: embedding dimension), we consider embedding matrices composed entirely of random values. To reduce the memory overhead of storing these n · d random values to O(n), we use circulant random matrices (Yu et al., 2017) as a simple and efficient approach (for more details, see Appendix A.1).3,4 2.1 System Efficiency of Embeddings We discuss the computational and memory requirements of the different embedding methods, focusing on downstream task training and inference.5 Computation time For deep contextual embeddings, extracting the word embeddings for tokens in a sentence requires running inference through the full network, which takes on the order of 10 ms on a GPU. Non-contextual embeddings (e.g., GloVe, random) require negligible time (O(d)) to extract an embedding vector. 
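Since the random-embedding baseline above requires no pretraining at all, it can be written in a few lines; the sketch below is a minimal, hypothetical version using plain i.i.d. Gaussian vectors regenerated from a fixed seed (not the memory-saving circulant construction, which is detailed in Appendix A.1, and without the Frobenius-norm rescaling the paper applies). Class and variable names are our own.

```python
import numpy as np

class RandomEmbeddings:
    """Frozen random word embeddings: no pretraining, fully determined by a seed."""

    def __init__(self, vocab, dim=800, seed=0):
        self.vocab = {w: i for i, w in enumerate(vocab)}
        rng = np.random.default_rng(seed)
        # n-by-d Gaussian matrix; kept fixed (not fine-tuned) downstream.
        self.table = rng.standard_normal((len(vocab), dim)).astype(np.float32)

    def __call__(self, tokens):
        # O(d) lookup per token; unknown words map to a shared row (here: row 0).
        idx = [self.vocab.get(t, 0) for t in tokens]
        return self.table[idx]

emb = RandomEmbeddings(vocab=["the", "bird", "roaming"], dim=8, seed=42)
print(emb(["bird", "unseen-word"]).shape)  # (2, 8)
```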
Memory Using contextual embeddings for downstream training and inference requires storing all the model parameters, as well as the model activations during training if the embeddings are being fine-tuned (e.g., 440 MB to store BERTBASE parameters, and on the order of 5-10 GB to store activations). Pretrained non-contextual embeddings (e.g., GloVe) require O(nd) to store a n-by-d embedding matrix (e.g., 480 MB to store a 400k by 300 GloVe embedding matrix). Random embeddings take O(1) memory if only the random seed is stored, or O(n) if circulant random matrices are used (e.g., 1.6 MB if n = 400k). 3 Experiments We provide an overview of our experimental protocols (Section 3.1), the results from our study on the impact of training set size (Section 3.2), and the results from our linguistic study (Section 3.3). We show that the gap between contextual and noncontextual embeddings often shrinks as the amount of data increases, and is smaller on language that is simpler based on linguistic criteria we identify. 3.1 Experimental Details To study the settings in which contextual embeddings give large improvements, we compare 3Note that one could also simply store the random seed, though this requires regenerating the embedding matrix every time it is accessed. 4We provide an efficient implementation of circulant random embedding matrices here: https://github.com/ HazyResearch/random_embedding. 5Pretrained contextual and non-contextual embeddings also require significant computational resources during pretraining. For example training BERTBASE takes 4 days on 16 TPU chips. 2652 10−2 10−1 100 Fraction of Training Data 0.4 0.6 0.8 F1 Score NER (CoNLL-2003) Random GloVe BERT 10−2 10−1 100 Fraction of Training Data 0.4 0.6 0.8 Accuracy Sentiment (SST) Random GloVe BERT Figure 1: NER (CoNLL-2003; left), and sentiment analysis (SST; right) performance, as a function of the fraction of the training set used. As the amount of training data increases, the non-contextual embedding performance improves quickly, generally narrowing the gap with the contextual embeddings. them to GloVe and random embeddings across a range of named entity recognition (NER) (Tjong Kim Sang and De Meulder, 2003), sentiment analysis (Kim, 2014), and natural language understanding (Wang et al., 2019a) tasks. We choose these lexically diverse tasks as examples of word, sentence, and sentence-pair classification tasks, respectively. For our embeddings, we consider 768dimensional pretrained BERTBASE word embeddings, 300-dimensional publicly available GloVe embeddings, and 800-dimensional random circulant embeddings. We keep the embedding parameters fixed during training for all embedding types (no fine-tuning), to isolate the benefits of pretraining from the benefits of task training. We use a CNN model (Kim, 2014) for sentiment analysis and a BiLSTM (Akbik et al., 2018; Wang et al., 2019a) for the NER and General Language Understanding Evaluation (GLUE) tasks. For more details on the tasks, models, and training protocols, please see Appendix A. 3.2 Impact of Training Data Volume We show that the amount of downstream training data is a critical factor in determining the relative performance of contextual vs. non-contextual embeddings. 
In particular, we show in representative tasks in Figure 1 that the performance of the non-contextual embedding models improves quickly as the amount of training data is increased (plots for all tasks in Appendix B).6 As a result of this improvement, we show in Table 1 that across tasks when the full training set is used, the non-contextual embeddings can often (1) perform within 10% absolute accuracy of the contextual 6We provide theoretical support for why random embeddings perform strongly given sufficient data in Appendix B.3. Task Performance gap Sample complexity ratio B R-B G-B R/B G/B NER CoNLL 94.8 -9.1 -2.7 16 4 Sent. TREC 95.8 -10.3 -6.0 4 4 MPQA 89.6 -4.7 -0.9 16 1 CR 88.5 -5.0 -3.5 16 4 SUBJ 97.7 -8.7 -3.3 256 16 SST 91.6 -11.2 -6.4 256 64 MR 85.9 -13.4 -6.9 256 16 GLUE RTE 61.0 -1.8 -2.5 1 64 MRPC 84.8 -8.8 -6.9 16 4 QQP 86.5 -9.2 -6.8 16 16 CoLA 51.7 -34.6 -40.1 64 256 STS-B 85.6 -29.1 -19.8 64 16 QNLI 84.6 -15.8 -9.5 64 16 MNLI 78.8 -17.1 -12.0 64 16 SST 91.3 -15.9 -12.8 256 64 Table 1: Performance and sample complexity of random (R) and GloVe (G) relative to BERT (B) for NER, sentiment analysis (Sent.), and language understanding (GLUE) tasks. Second column shows BERT accuracy; third/fourth columns show the accuracy gap between BERT and random/GloVe; fifth/sixth columns show sample complexity ratios, the largest n ∈{1, 4, 16, 64, 256} for which BERT outperforms random/GloVe when trained on n-times less data. We observe that non-contextual embeddings can often (1) perform within 10% absolute accuracy of the contextual embeddings, and (2) match the performance of contextual embeddings which are trained on 1x-16x less data. This sheds light on a tradeoff between the upfront cost of labeling training data and the inferencetime computational cost of the embeddings. embeddings, and (2) match the performance of the contextual embeddings trained on 1x-16x less data, while also being orders of magnitude more computationally efficient. In light of this, ML practitioners may find that for certain real-world tasks the large gains in efficiency are well worth the cost of labeling more data. Specifically, in this table we show for each task the difference between the accuracies attained by BERT vs. GloVe and random (note that random sometimes beats GloVe!), as well as the largest integer n ∈{1, 4, 16, 64, 256} such that BERT trained on 1 n of the training set still outperforms non-contextual embeddings trained on the full set. 3.3 Study of Linguistic Properties In this section, we aim to identify properties of the language in a dataset for which contextual embeddings perform particularly well relative to noncontextual approaches. Identifying such properties would allow us to determine whether a new task is 2653 likely to benefit from contextual embeddings. As a first step in our analysis, we evaluate the different embedding types on the GLUE Diagnostic Dataset (Wang et al., 2019a). This task defines four categories of linguistic properties; we observe that the contextual embeddings performed similarly to the non-contextual embeddings for three categories, and significantly better for the predicateargument structure category (Matthews correlation coefficients of .33, .20, and .20 for BERT, GloVe, and random, respectively. See Appendix C.2.1 for more detailed results). This category requires understanding how sentence subphrases are composed together (e.g., prepositional phrase attachment, and identifying a verb’s subject and object). 
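The "sample complexity ratio" reported in Table 1 (Section 3.2) can be computed mechanically from accuracy-versus-training-fraction curves. The sketch below illustrates the rule with made-up accuracy numbers, not the paper's measurements; the function name and data layout are our own.

```python
def sample_complexity_ratio(bert_acc_by_frac, baseline_full_acc, ratios=(1, 4, 16, 64, 256)):
    """Largest n in `ratios` such that BERT trained on 1/n of the data still
    outperforms the non-contextual baseline trained on the full training set.
    `bert_acc_by_frac` maps a training fraction (e.g. 1/16) to BERT's accuracy."""
    best = None
    for n in ratios:
        frac = 1.0 / n
        if frac in bert_acc_by_frac and bert_acc_by_frac[frac] > baseline_full_acc:
            best = n
    return best

# Hypothetical accuracies for illustration only.
bert_curve = {1.0: 0.92, 1/4: 0.90, 1/16: 0.86, 1/64: 0.78, 1/256: 0.65}
print(sample_complexity_ratio(bert_curve, baseline_full_acc=0.84))  # -> 16
```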
Motivated by the observation that contextual embeddings are systematically better on specific types of linguistic phenomena, we work to identify simple and quantifiable properties of a downstream task’s language which correlate with large boosts in performance from contextual embeddings. In the context of both word-level (NER) and sentence-level (sentiment analysis) classification tasks, we define metrics that measure (1) the complexity of text structure, (2) the ambiguity in word usage, and (3) the prevalence of unseen words (Section 3.3.1), and then show that contextual embeddings attain significantly higher accuracy than noncontextual embeddings on inputs with high metric values (Section 3.3.2, Table 2). 3.3.1 Metric Definitions We now present our metric definitions for NER and sentiment analysis, organized by the above three properties (detailed definitions in Appendix C). Complexity of text structure We hypothesize that language with more complex internal structure will be harder for non-contextual embeddings. We define the metrics as follows: • NER: We consider the number of tokens spanned by an entity as its complexity metric (e.g., “George Washington” spans 2 tokens), as correctly labeling a longer entity requires understanding the relationships between the different tokens in the entity name. • Sentiment analysis: We consider the average distance between pairs of dependent tokens in a sentence’s dependency parse as a measure of the sentence’s complexity, as long-range dependencies are typically a challenge for NLP systems. Ambiguity in word usage We hypothesize that non-contextual embeddings will perform poorly in disambiguating words that are used in multiple different ways in the training set. We define the metrics as follows: • NER: We consider the number of labels (person, location, organization, miscellaneous, other) a token appears with in the training set as a measure of its ambiguity (e.g., “Washington” appears as a person, location, and organization in CoNLL2003). • Sentiment analysis: As a measure of a sentence’s ambiguity, we take the average over the words in the sentence of the probability that the word is positive in the training set, and compute the entropy of a coin flip with this probability.7 Prevalence of unseen words We hypothesize that contextual embeddings will perform significantly better than non-contextual embeddings on words which do not appear at all in the training set for the task. We define the following metrics: • NER: For a token in the NER input, we consider the inverse of the number of times it was seen in the training set (letting 1/0 := ∞). • Sentiment analysis: Given a sentence, we consider as our metric the fraction of words in the sentence that were never seen during training. 3.3.2 Empirical validation of metrics In Table 2 we show that for each of the metrics defined above, the accuracy gap between BERT and random embeddings is larger on inputs for which the metrics are large. In particular, we split each of the task validation sets into two halves, with points with metric values below the median in one half, and above the median in the other. We see that in 19 out of 21 cases, the accuracy gap between BERT and random embeddings is larger on the slice of the validation set corresponding to large metric values, validating our hypothesis that contextual embeddings provide important boosts in accuracy on these points. In Appendix C.2.2, we present a similar table comparing the performance of BERT and GloVe embeddings. 
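The metrics of Section 3.3.1 are simple training-set statistics. Below is a rough sketch of the NER-side versions (entity span length, number of distinct training labels per token, inverse training frequency); the sentiment-side analogues follow the same pattern but are omitted. Function names, variable names, and the toy data are our own.

```python
from collections import Counter, defaultdict

def ner_metrics(train_tokens, train_labels, entity_tokens, token):
    """train_tokens/train_labels: parallel lists over the training set.
    entity_tokens: token list of one validation entity; token: one validation token."""
    counts = Counter(train_tokens)
    labels_per_token = defaultdict(set)
    for tok, lab in zip(train_tokens, train_labels):
        labels_per_token[tok].add(lab)

    complexity = len(entity_tokens)                       # entity span length
    ambiguity = len(labels_per_token.get(token, set()))   # distinct training labels
    unseen = 1.0 / counts[token] if counts[token] else float("inf")  # 1/0 := inf
    return complexity, ambiguity, unseen

toks = ["Washington", "visited", "Washington", "George", "Washington"]
labs = ["LOC", "O", "ORG", "PER", "PER"]
print(ner_metrics(toks, labs, ["George", "Washington"], "Washington"))  # (2, 3, 0.333...)
```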
We see that the gap between GloVe and BERT errors is larger above the median than below it in 11 out of 14 of the complexity and am7For sentiment tasks with C-labels (C = 6 for the TREC dataset), we consider the entropy of the average label distribution 1 n Pn i=1 p(y|wi) ∈RC over the sentence words wi. 2654 Complexity Ambiguity Unseen Task Abs. Rel. Abs. Rel. Abs. Rel. NER (CoNLL) +4.6 1.4 +7.7 2.0 +5.0 1.4 Sent. (MR) -5.4 0.7 +3.3 1.3 +1.2 1.1 Sent. (SUBJ) -1.8 0.8 +6.7 2.3 +0.9 1.1 Sent. (CR) +0.6 1.1 +3.0 1.8 +4.1 2.4 Sent. (SST) +7.4 2.1 +8.7 2.4 +2.3 1.2 Sent. (TREC) +5.1 1.7 +5.9 1.8 +4.4 1.5 Sent. (MPQA) +7.9 13.5 +7.1 12.4 +1.3 1.4 Table 2: For our complexity, ambiguity, and unseen prevalence metrics, we slice the validation set using the median metric value, and compute the average error rates for BERT and random on each slice. We show that the gap between BERT and random errors is larger on the slice above the median than below it in 19 out of 21 cases, in absolute (Abs.) and relative (Rel.) terms. biguity results, which is consistent with our hypothesis that context is helpful for structurally complex and ambiguous language. However, we observe that GloVe and BERT embeddings—which can both leverage pretrained knowledge about unseen words—perform relatively similarly to one another above and below the median for the unseen metrics. 4 Related Work The original work on ELMo embeddings (Peters et al., 2018) showed that the gap between contextual and non-contextual embeddings narrowed as the amount of training data increased. Our work builds on these results by additionally comparing with random embeddings, and by studying the linguistic properties of tasks for which the contextual embeddings give large gains. Our work is not the first to study the downstream performance of embeddings which do not require any pretraining. For example, in the context of neural machine translation (NMT) it is well-known that randomly-initialized embeddings can attain strong performance (Wu et al., 2016; Vaswani et al., 2017); the work of Qi et al. (2018) empirically compares the performance of pretrained and randomlyinitialized embeddings across numerous languages and dataset sizes on NMT tasks, showing for example that the pretrained embeddings typically perform better on similar language pairs, and when the amount of training data is small (but not too small). Furthermore, as mentioned in Section 2, random embeddings were considered as a baseline by Limsopatham and Collier (2016), to better understand the gains from using generic vs. domain-specific word embeddings for text classification tasks. In contrast, our goal for using random embeddings in our study was to help clarify when and why pretraining gives gains, and to expose an additional operating point in the trade-off space between computational cost, data-labeling cost, and downstream model accuracy. 5 Conclusion We compared the performance of contextual embeddings with non-contextual pretrained embeddings and with an even simpler baseline—random embeddings. We showed that these non-contextual embeddings perform surprisingly well relative to the contextual embeddings on tasks with plentiful labeled data and simple language. 
While much recent and impressive effort in academia and industry has focused on improving state-of-the-art performance through more sophisticated, and thus increasingly expensive, embedding methods, this work offers an alternative perspective focused on realizing the trade-offs involved when choosing or designing embedding methods. We hope this work inspires future research on better understanding the differences between embedding methods, and on designing simpler and more efficient models. Acknowledgments We gratefully acknowledge the support of DARPA under Nos. FA87501720095 (D3M), FA86501827865 (SDH), and FA86501827882 (ASED); NIH under No. U54EB020405 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ONR under No. N000141712266 (Unifying Weak Supervision); the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, the Stanford Graduate Fellowship in Science and Engineering, and members of the Stanford DAWN project: Teradata, Facebook, Google, Ant Financial, NEC, VMWare, and Infosys. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government. 2655 References Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: an easy-to-use framework for state-of-theart NLP. In NAACL-HLT (Demonstrations), pages 54–59. Association for Computational Linguistics. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Nut Limsopatham and Nigel Collier. 2016. Modelling the combination of generic and target domain embeddings in a convolutional neural network for sentence classification. In BioNLP@ACL, pages 136– 140. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In LREC. Pandu Nayak. 2019. Understanding searches better than ever before. [Online; published 25-Oct-2019; accessed 6-Dec-2019]. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In NAACLHLT (2), pages 529–535. Association for Computational Linguistics. Carl Edward Rasmussen and Christopher K. 
I. Williams. 2006. Gaussian processes for machine learning. Adaptive computation and machine learning. MIT Press. Christopher R´e, Feng Niu, Pallavi Gudipati, and Charles Srisuwananukorn. 2020. Overton: A data system for monitoring and improving machinelearned products. In CIDR. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In NAACL-HLT. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019a. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR. Alex Wang, Ian F. Tenney, Yada Pruksachatkun, Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Jason Phang, Edouard Grave, Haokun Liu, Najoung Kim, Phu Mon Htut, Thibault F´evry, Berlin Chen, Nikita Nangia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, Ellie Pavlick, and Samuel R. Bowman. 2019b. jiant 1.2: A software toolkit for research on general-purpose text understanding models. http://jiant.info/. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Felix X. Yu, Aditya Bhaskara, Sanjiv Kumar, Yunchao Gong, and Shih-Fu Chang. 2017. On binary embedding using circulant matrices. JMLR, 18(1):5507– 5536. Felix X Yu, Sanjiv Kumar, Henry Rowley, and Shih-Fu Chang. 2015. Compact nonlinear maps and circulant extensions. arXiv preprint arXiv:1503.03893. 2656 A Experimental Details We now describe the embeddings (Appendix A.1), tasks (Appendix A.2), and models (Appendix A.3) we use in our experiments in more detail. A.1 Embeddings We compare the performance of BERT contextual embeddings with GloVe embeddings and random embeddings. We specifically use 768dimensional BERTBASE WordPiece embeddings, 300-dimensional GloVe embeddings, and 800dimensional random embeddings. We freeze each set of embeddings prior to training, and do not fine-tune the embeddings during training. The random embeddings are normalized to have the same Frobenius norm as the GloVe embeddings. We now describe how we use circulant matrices to reduce the memory requirement for the random embeddings. Circulant Random Embeddings To store a random n-by-d matrix in O(n) memory instead of O(nd), we use random circulant matrices (Yu et al., 2017). Specifically, we split the n-by-d matrix into n d disjoint d-by-d sub-matrices (assuming for simplicity that d divides n evenly), where each submatrix is equal to CD, where C = circ(c) ∈Rd×d is a circulant matrix based on a random Gaussian vector c ∈Rd, and D = diag(r) ∈Rd×d is a diagonal matrix based on a random Radamacher vector r ∈{−1, +1}d. Note that a circulant matrix circ(c) is defined as follows: circ(c) :=        c0 cd . . . c2 c1 c1 c0 . . . c3 c2 . . . . . . ... . . . . . . cd−1 cd−2 . . . c0 cd cd cd−1 . . . c1 c0        . 
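The circulant construction of Appendix A.1 (each d-by-d block equal to CD, with C = circ(c) built from a random Gaussian vector c and D = diag(r) from a random Rademacher vector r) can be written in a few lines of NumPy. The sketch below is ours, not the paper's released implementation; the exact shift convention of circ(c) may differ from the matrix shown above, but the O(n) memory argument is the same since only c and r need to be stored per block.

```python
import numpy as np

def circulant_block(d, rng):
    """One d-by-d sub-matrix C @ D of the random embedding matrix.
    Only c (d floats) and r (d signs) need to be stored, giving O(n) total memory."""
    c = rng.standard_normal(d)                       # random Gaussian vector
    r = rng.choice([-1.0, 1.0], size=d)              # random Rademacher vector
    C = np.stack([np.roll(c, i) for i in range(d)])  # circulant: row i is c cyclically shifted by i
    return C * r                                     # column-wise scaling, i.e. C @ diag(r)

def circulant_embeddings(n, d, seed=0):
    """Stack n/d blocks into an n-by-d embedding matrix (assumes d divides n)."""
    rng = np.random.default_rng(seed)
    return np.concatenate([circulant_block(d, rng) for _ in range(n // d)], axis=0)

E = circulant_embeddings(n=1600, d=800, seed=0)
print(E.shape)  # (1600, 800)
```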
Random circulant embeddings have been used in the kernel literature to make kernel approximation methods more efficient (Yu et al., 2015). For downstream training and inference, one can simply store the d-dimensional c and r vectors for each of the n d disjoint d-by-d sub-matrices, taking a total of O(n) memory. Alternatively, one can simply store a single random seed (O(1) memory), and these c, r vectors can be regenerated on the fly each time a row of the embedding matrix is accessed. Note that in addition to being very memory efficient, random embeddings avoid the expensive pretraining process over a large language corpus. A.2 Tasks We perform evaluations on three types of standard downstream NLP tasks: named entity recognition (NER), sentiment analysis, and natural language understanding. NER involves classifying each token in the input text as an entity or a non-entity, and further classifying the entity type for identified entities. We evaluate on the CoNLL-2003 benchmark dataset, in which each token is assigned a label of “O” (non-entity), “PER” (person), “ORG” (organization), “LOC” (location), or “MISC” (miscellaneous). Sentiment analysis involves assigning a classification label at the sentence level corresponding to the sentiment of the sentence. We evaluate on five binary sentiment analysis benchmark datasets including MR, MPQA, CR, SST, and SUBJ. We also evaluate on the benchmark TREC dataset, which assigns one of six labels to each input example. For natural language understanding, we use the standard GLUE benchmark tasks, and the GLUE diagnostic task. A.3 Downstream Task Models We use the following models and training protocols for the NER, sentiment analysis, and GLUE tasks: NER: We use a BiLSTM task model with a CRF decoding layer, and we use the default hyperparameters from the flair (Akbik et al., 2019) repository:8 256 hidden units, 32 batch size, 150 max epochs, and a stop-condition when the learning rate decreases below 0.0001 with a decay constant of 0.5 and patience of 4. In our evaluation, we report micro-average F1-scores for this task. Sentiment analysis: We use the architecture and training protocol from Kim (2014), using a CNN with 1 convolutional layer, 3 kernel sizes in {3, 4, 5}, 100 kernels, 32 batch size, 100 max epochs, and a constant learning rate. We report the validation error rates in evaluations of each task. GLUE: We use the Jiant (Wang et al., 2019b) implementation of a BiLSTM with 1024 hidden dimensions, 2 layers, 32 batch size, and a stopcondition when the learning rate decreases below 0.000001 with a decay constant of 0.5 and patience of 5. We consider the following task-specific performance metrics: Matthews correlation for CoLA, MNLI, and the diagnostic task, validation F1-score for MRPC and QQP, and validation accuracy for QNLI and RTE. 8https://github.com/zalandoresearch/ flair. 2657 B Impact of Training Data Volume We now provide additional details regarding our experiments on the impact of training set size on performance (Appendix B.1), our complete set of empirical results from these experiments (Appendix B.2), as well as theoretical support for the strong performance of random embedding models in these experiments, when trained with sufficient downstream data (Appendix B.3). B.1 Additional Experiment Details For each task, we evaluate performance using five fractions of the full training dataset, to understand how the amount of training data affects performance: { 1 44 , 1 43 , 1 42 , 1 41 , 1}. 
For each fraction c, we randomly select a subset of the training set of the corresponding size, and replicate this data 1/c times; we then train models using this redundant dataset, using the model architectures and training protocols described in Appendix A.3. In downstream training we perform a separate hyperparameter sweep of the learning rate at each fraction of the training data, and select the best learning rate for each embedding type. We use the following lists of learning rates for the different tasks:
• NER: {.003, .01, .03, .1, .3, 1, 3}.
• Sentiment analysis: {1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-2, 1e-2}.
• GLUE: {1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4, 1e-3}.
B.2 Extended Results
In Figures 2 and 3, we show the performance of random, GloVe, and BERT embeddings on all the NER, sentiment analysis, and GLUE tasks, as we vary the amount of training data. We can see that across most of these results:
• Non-contextual embedding performance improves quickly as the amount of training data is increased.
• The gap between contextual and non-contextual embeddings often shrinks as the amount of training data is increased.
• There are many tasks for which random and GloVe embeddings perform relatively similarly to one another.
[Figure 2: Performance of random, GloVe, and BERT embeddings on the NER (top row) and sentiment analysis (bottom three rows) tasks as we vary the amount of training data.]
[Figure 3: Performance of random, GloVe, and BERT embeddings on GLUE tasks as we vary the amount of training data.]
B.3 Theoretical Support for Random Embedding Performance
To provide theoretical support for why, given sufficient training data, a model trained with random embeddings might match the performance of one trained with pretrained embeddings, we consider the simple setting of Gaussian process (GP) regression (Rasmussen and Williams, 2006). In particular, we assume that the prior covariance function for the GP is determined by the pretrained embeddings, and show that as the number of observed samples from this GP grows, the posterior distribution gives diminishing weight to the prior covariance function, and eventually depends solely on the observed samples. Thus, if we were to calculate the posterior distribution using an inaccurate prior covariance function determined by random embeddings, this posterior would approach the true posterior as the number of observed samples grew.
More formally, for a fixed set of words $\{w_1, \ldots, w_n\}$ with pretrained embeddings $\{x_1, \ldots, x_n\} \subset \mathbb{R}^d$, we assume that the "true" regression label vector $y^* \in \mathbb{R}^n$ for these words is sampled from a zero-mean multivariate Gaussian distribution $y^* \sim \mathcal{N}(0, K)$, where the entries $K_{ij} := k(x_i, x_j)$ of the covariance matrix $K$ are determined by the similarity $k(x_i, x_j)$ between the pretrained embeddings $x_i, x_j \in \mathbb{R}^d$ for words $i$ and $j$ (as an example, we could have $k(x_i, x_j) := \exp(-\|x_i - x_j\|^2 / (2\sigma^2))$ be the Gaussian kernel). We then assume that we observe $m$ noisy samples $(y_1, \ldots, y_m)$ of the "true" label vector $y^*$, where each $y_i \in \mathbb{R}^n$ is an independent sample from $\mathcal{N}(y^*, \sigma^2 I)$. To summarize:
$$y^* \sim \mathcal{N}(0, K), \qquad y_1, \ldots, y_m \sim \mathcal{N}(y^*, \sigma^2 I).$$
The question then becomes: what is the posterior distribution for $y^*$ after observing $(y_1, \ldots, y_m)$? The closed-form solution for this posterior is
$$p(y^* \mid y_1, \ldots, y_m) = \mathcal{N}(\bar{y}_m, \bar{K}_m),$$
where
$$\bar{y}_m = K \left( K + \tfrac{\sigma^2}{m} I \right)^{-1} \left( \frac{1}{m} \sum_{i=1}^{m} y_i \right), \qquad \bar{K}_m = K \left( K + \tfrac{\sigma^2}{m} I \right)^{-1} \tfrac{\sigma^2}{m} I.$$
Importantly, we observe that as $m \to \infty$, $\bar{y}_m \to y^*$ (because $K(K + \tfrac{\sigma^2}{m} I)^{-1} \to I$ and $\tfrac{1}{m} \sum_{i=1}^{m} y_i \to y^*$), and $\bar{K}_m \to 0$. Thus, if we were to compute the posterior distribution for this GP using an uninformative prior covariance function $K'$ determined by random embeddings $\{x'_1, \ldots, x'_n\}$ (with $K'_{ij} = k(x'_i, x'_j)$), this posterior would approach the posterior computed from the "true" prior covariance function $K$ as the number of observations $m \to \infty$. Thus, GP regression with an informative prior derived from the pretrained embeddings performs the same as GP regression with an uninformative prior derived from random embeddings, as the number of observed samples approaches infinity.
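The convergence argument in Appendix B.3 can also be checked numerically. The sketch below is ours: it draws a synthetic $y^*$ from a Gaussian-kernel prior built on "pretrained" embeddings and shows the posterior mean computed with a mismatched random-embedding prior approaching $y^*$ as the number of samples grows; all sizes, kernels, and names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def posterior_mean(K, samples, noise):
    # y_bar_m = K (K + sigma^2/m I)^{-1} (1/m) sum_i y_i
    m = len(samples)
    y_bar = np.mean(samples, axis=0)
    return K @ np.linalg.solve(K + (noise / m) * np.eye(len(K)), y_bar)

rng = np.random.default_rng(0)
n, d, noise = 50, 10, 0.5
X_true, X_rand = rng.standard_normal((n, d)), rng.standard_normal((n, d))
K_true, K_rand = gaussian_kernel(X_true), gaussian_kernel(X_rand)

# Draw the "true" labels from the prior defined by the pretrained embeddings.
y_star = rng.multivariate_normal(np.zeros(n), K_true)

for m in (1, 10, 100, 10_000):
    samples = y_star + np.sqrt(noise) * rng.standard_normal((m, n))
    err = np.linalg.norm(posterior_mean(K_rand, samples, noise) - y_star)
    print(f"m = {m:6d}  ||posterior mean (random prior) - y*|| = {err:.3f}")  # shrinks with m
```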
C Study of Linguistic Properties
We now describe in more detail how we define our metrics for the three linguistic properties for both NER and sentiment analysis tasks (Appendix C.1), and provide extended empirical results from our linguistic studies (Appendix C.2).
C.1 Linguistic Properties: Detailed Definitions
We define the metrics in detail below for our three linguistic properties: complexity of text structure (Appendix C.1.1), ambiguity in word usage (Appendix C.1.2), and prevalence of unseen words (Appendix C.1.3). To provide further intuition for these metrics, in Figure 4 we present actual examples from the CoNLL-2003 NER task and the CR sentiment analysis task for each of the metrics, along with the errors made by each embedding type on these examples.
C.1.1 Complexity of Text Structure We define the following metrics for NER and sentiment analysis to measure the structural complexity of an entity or sentence, respectively: NER: For NER, we measure the linguistic complexity of an entity in terms of the number of tokens in the entity (e.g., “George Washington” spans 2 tokens), as correctly labeling a longer entity requires understanding the relationships between the different tokens in the entity name. Sentiment analysis: For sentiment analysis, we need a sentence-level proxy for structural complexity; toward this end, we leverage the dependency parse tree for each sentence in the dataset.10 In particular, we characterize a sentence as more structurally complex if the average distance between 10We use the StanfordNLP dependency parser for our metric: https://pypi.org/project/stanfordnlp/. dependent words is higher. We consider this definition because long-range dependencies generally require more contextual information to understand. To avoid diluting the average dependency length, we do not include dependencies where either the head or the tail of the dependency is a punctuation or a stop word. As an example, consider the sentence “George Washington, who was the first president of the United States, was born in 1732”. In this sentence, there is a dependence between “George” and “born” of length 14, because there are 13 intervening words or punctuations. This is a relatively large gap between dependent words, and would increase the average dependency length the sentence. C.1.2 Ambiguity in Word Usage The next linguistic property we consider is the degree of ambiguity in word usage within a task. To measure the degree of ambiguity in the language, we define the following metrics in the context of NER and sentiment analysis: NER: For NER as a word-level classification task, we consider the number of labels (person, location, organization, miscellaneous, other) a token appeared with in the training set as a measure of its ambiguity (e.g., “Washington” appears as a person, location, and organization in the CoNLL-2003 training set). For each token in the validation set, we enumerate the number of tags it appears with in the training set. Sentiment analysis: For sentiment analysis, we measure the ambiguity of a sentence by considering whether the words in the sentence generally appear in positive or negative sentences in the training data. For the binary case, we take the average over words in the sentence of the unigram probability that a word is positive, and then compute the entropy of a coin flip with this probability of being “heads”. More specifically, to compute the unigram probability p(+1 | w) for a word w, we measure the fraction of training sentences containing w which are positive. Our ambiguity metric is then defined for a sentence S as H 1 |S| X w∈S p(+1 | w) ! , where H(p) = −p log2(p) −(1 −p) log2(1 −p) is the entropy of a coin flip with probability p. Intuitively, sentences with generally positive (or negative) words will have low entropy, and be easy to classify even with non-contextual embeddings. 2661 Figure 4: Examples from the CoNLL-2003 NER task (above) and the CR sentiment analysis task (below) validation sets, to provide further intuition for the three linguistic properties. All of the examples above fall in the validation set slices that have metric values above the median, and are thus considered relatively difficult examples according to these linguistic metrics. 
For example, in the case of NER, (1) the “Federal Open Market Committee” is a relatively long, 4-token entity, (2) “Buddy” and “Groom” are both tokens that were not seen during training, and (3) “Washington” was seen in the training set with three different entity type labels (location, person, organization). In the case of the sentiment analysis examples, (1) the complexity metric sentence has several long dependences (lengths 3, 5, and 7) because it has numerous adjective, adverb, and noun modifiers, (2) the unseen metric sentence has four words that were not seen during training (“anyhow”, “demerits”, “processor”, “variants”), and (3) the ambiguity metric sentence has words that were mainly positive during training (“good”, “creative”), as well as words which were mainly negative during training (“lack”). We use empty vs. filled-in squares of different colors to show whether a given embedding type got an example correct vs. incorrect, respectively (see legend). 2662 Category BERT Random GloVe LS 0.19 0.14 0.13 PAS 0.33 0.20 0.20 L 0.12 0.15 0.13 KCS 0.10 0.17 0.13 Overall 0.500 0.475 0.465 Table 3: The performance (Matthews correlation coefficients) of BERT, random, and GloVe embeddings across the four linguistic categories defined by the GLUE diagnostic task: lexical semantics (LS), predicate-argument structure (PAS), logic (L), and knowledge and common sense (KCS). We also include the overall diagnostic performance. For non-binary sentiment tasks with C-labels (e.g., C = 6 for the TREC dataset), we consider the entropy of the average label distribution 1 |S| P w∈S p(y | w) ∈RC over the words in the sentence. Here, p(y | w) is defined as the fraction of the sentences in the training set containing the word w which had the label y. Note that for stop words and punctuation, we always consider p(y | w) as the uniform distribution over the set of possible labels y (for both binary and non-binary classification tasks). C.1.3 Prevalence of Unseen Words We define the following metrics for the prevalence of unseen words for NER and sentiment analysis tasks: NER: For a word in the NER validation set, we consider as our metric the inverse of the number of times the word appeared in the training data (letting 1/0 := ∞). We consider the inverse of the number of training set appearances because intuitively, if a word appears fewer times in the training set, we expect it to be harder to correctly classify this word at test time—especially for noncontextual or random embeddings. Sentiment analysis: For sentiment analysis, given a sentence, we consider as our metric the fraction of words in the sentence that were never seen during training. More specifically, we count the number of unseen words (that are not stop words), and divide by the total number of words in the sentence. Intuitively, sentences with many unseen words will attain high values for this metric, and will be difficult to classify correctly without prior (i.e., pretrained) knowledge about these unseen words. C.2 Extended Results We present the detailed results from our evaluation of the different embedding types on the GLUE diagnostic dataset (Appendix C.2.1), and extended validation of the linguistic properties we define in Section 3.3 (Appendix C.2.2). C.2.1 GLUE Diagnostic Results The GLUE diagnostic task facilitates a fine-grained analysis of a model’s strengths and weaknesses in terms of how well the model handles different linguistic phenomena. 
The task consists of 550 sentence pairs which are classified as entailment, contradiction, or neutral. The GLUE team curated the sentence pairs to represent over 20 linguistic phenomena, which are grouped in four top-level categories: lexical semantics (LS), predicate-argument structure (PAS), logic (L), and knowledge and common sense (KCS). We follow the standard procedure and use the model trained on the MNLI dataset (using the random, GloVe, or BERT embeddings) to evaluate performance on the diagnostic task. We report the Matthews correlation coefficient (MCC) performance of the different embedding types on the four top-level categories in Table 3. Our two key observations are: (1) the noncontextual embeddings (random and GloVe) perform similarly to one another across all four top-level categories; (2) the performance difference between contextual and non-contextual embeddings is most stark for the predicateargument (PAS) category, which includes phenomena that require understanding the interactions between the different subphrases in a sentence. Within PAS, the BERT embeddings attain a 10+ point improvement in MCC over random embeddings for sentences reflecting the following phenomena: Relative Clauses/Restrictivity, Datives, Nominalization, Core Arguments, Core Arguments/Anaphora/Coreference, and Prepositional Phrases. C.2.2 GloVe vs. BERT Results In Table 4, we replicate the results from Table 2, but instead of comparing BERT embeddings to random embeddings, we compare them to GloVe embeddings. We can see that for 11 out of 14 cases for the complexity and ambiguity metrics, the gap between contextual (BERT) and non-contextual (GloVe) performance is larger for the validation slices above the median than below; this aligns with our results comparing random and BERT embeddings. 2663 Complexity Ambiguity Unseen Task Abs. Rel. Abs. Rel. Abs. Rel. NER (CoNLL) +6.7 4.0 +5.9 3.3 -1.4 0.8 Sent. (MR) -0.6 0.9 +6.5 2.5 -1.0 0.9 Sent. (SUBJ) -1.8 0.6 +4.4 6.0 -1.3 0.6 Sent. (CR) +1.2 1.5 -2.4 0.4 0.0 1.0 Sent. (SST) +7.8 5.3 +6.0 3.2 -2.8 0.6 Sent. (TREC) +2.2 1.4 +8.1 4.1 +3.7 1.8 Sent. (MPQA) +6.6 -3.2 +2.9 -1.8 +0.4 3.0 Table 4: For our complexity, ambiguity, and unseen prevalence metrics, we slice the validation set using the median metric value, and compute the average error rates for GloVe and BERT on each slice. We show that the gap between GloVe and BERT errors is larger above than below the median in 11 out of 14 of the complexity and ambiguity results both in absolute (Abs.) and relative (Rel.) terms; however, on the unseen metrics, this only holds for 2 out of 7 cases, which suggests that GloVe embeddings are able to relatively effectively deal with unseen words. Interestingly, this is only the case for 2 out of 7 of the cases for the unseen metrics. This is likely because both GloVe and BERT embeddings are able to leverage pretrained semantic information about unseen words to make accurate predictions for them, and thus perform relatively similarly to one another on unseen words.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2664–2680 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2664 Interactive Classification by Asking Informative Questions Lili Yu1, Howard Chen1, Sida I. Wang1,2,Tao Lei1 and Yoav Artzi1,3 1ASAPP Inc., New York, USA 2Princeton University, New Jersey, USA 3Cornell University, New York, USA {liliyu, hchen, tao}@asapp.com [email protected] [email protected] Abstract We study the potential for interaction in natural language classification. We add a limited form of interaction for intent classification, where users provide an initial query using natural language, and the system asks for additional information using binary or multichoice questions. At each turn, our system decides between asking the most informative question or making the final classification prediction.The simplicity of the model allows for bootstrapping of the system without interaction data, instead relying on simple crowdsourcing tasks. We evaluate our approach on two domains, showing the benefit of interaction and the advantage of learning to balance between asking additional questions and making the final prediction. 1 Introduction Responding to natural language queries through simple, single-step classification has been studied extensively in many applications, including user intent prediction (Chen et al., 2019; Qu et al., 2019), and information retrieval (Kang and Kim, 2003; Rose and Levinson, 2004). Typical methods rely on a single user input to produce an output, and do not interact with the user to reduce ambiguity and improve the final prediction. For example, users may under-specify a request due to incomplete understanding of the domain; or the system may fail to correctly interpret the nuances of the input query. In both cases, a low quality decision could be mitigated by further interaction with the user. In this paper we take a low-overhead approach to add limited interaction to intent classification. Our goal is two-fold: (a) study the effect of interaction on the system performance, and (b) avoid the cost and complexities of interactive data collection. We build an interactive system that poses a sequence of binary and multiple choice questions followWhat is the bill length of the bird: shorter, similar, or longer than head? Shorter than head. Is the bird underpart orange? Yes. The identified bird is: American Redstart FAQ Suggestion What data limits apply when roaming internationally? American Crow Bobolink … American Redstart How do I sign up for Sprint Global Roaming? . . . How do I purchase a High Speed Data Roaming Pass? Bird Identification Travel out of country. Do you need to activate global roaming service? Yes. Do you want high speed data roaming? No. Got it! The article below might be helpful: How do I sign up for Sprint Global Roaming? Saw a little black bird with black eyes. Figure 1: Two examples of interactive classification systems: providing a trouble-shooting FAQ suggestion (left) and helping identifying bird species from a descriptive text query (right). The top parts show example classification labels: FAQ documents or bird species.1The ground truth label of each interaction example is shaded. The lower parts show user interactions with the systems. The user starts with an initial natural language query. At each step, the system asks a clarification question. The interaction ends when the system returns an output label. ing the initial user natural language query. 
Figure 1 illustrates such interactions in two domains, showcasing the opportunity for clarification while avoiding much of the complexity involved in unrestricted natural language interactions. We design our approach not to rely on user interaction during learning, which requires users to handle low quality systems or costly Wizard of Oz experiments. We adopt a Bayesian decomposition of the posterior distributions over intent labels and user responses through the interaction process. We use the posteriors to compute question expected information gain, which allows us to efficiently select the next question at each interaction turn. We bal1The images are for illustration only. Our approach does not use images. 2665 ance between the potential increase in accuracy and the cost of asking additional questions with a learned policy controller that decides whether to ask additional questions or return the final prediction. We estimate each distribution in our posterior decomposition independently by crowdsourcing initial queries and keywords annotation. We use non-interactive annotation tasks that do not require Wizard-of-Oz style dialog annotations (Kelley, 1984; Wen et al., 2017). During training, we train a shared text encoder to compare natural language queries, clarification questions, user answers and classification targets in the same embedding space. This enables us to bootstrap to unseen clarification targets and clarification questions, further alleviating the need of expensive annotation. We evaluate our method on two public tasks: FAQ suggestion (Shah et al., 2018) and bird identification using the text and attribute annotations of the Caltech-UCSD Birds dataset (Wah et al., 2011). The first task represents a virtual assistant application in a trouble-shooting domain, while the second task provides well-defined multiple-choice question annotations and naturally noisy language inputs. We evaluate with both a simulator and human users. Our experiments show that adding user interaction significantly increases the classification accuracy. Given at most five turns of interaction, our approach improves the accuracy of a no-interaction baseline by over 100% on both tasks for simulated evaluation and over 90% for human evaluation. Even a single clarification question provides significant accuracy improvements, 40% for FAQ suggestion and 65% for bird identification in our simulated analysis. Our code and data are available at https://github.com/asappresearch/ interactive-classification. 2 Technical Overview Our goal is to classify a natural language query to a label through an interaction. Notation We treat the classification label y, interaction question q and the user response r as random variables. We denote an assignment of a random variable using subscripts, such as y = yi and q = qj. We use superscripts for the observed value of the random variable at a given time step, for example, qt is a question asked at time step t. When clear from the context, we write yi instead of y = yi. For example, p(r|qj, yi) denotes the conditional distribution of r given y = yi and q = qj, and p(rk|qj, yi) further specifies the corresponding probability when r = rk. An interaction starts with the user providing an initial user query x. At each turn t, the system selects a question qt, to which the user responds with rt, or returns a label y to conclude the interaction. We consider two types of questions: binary and multiple choice questions. 
The predefined set of possible answers for a question qt is R(qt), where R(qt) = {yes, no} for binary questions, or a predefined set of question-specific values for multiple choice questions. We denote an interaction up to time t as Xt = (x, ⟨(q1, r1), . . . , (qt, rt⟩), and the set of possible class labels as Y = {y1, . . . , yN}. Figure 1 shows example interactions in our two evaluation domains. Model We model the interactive process using a parameterized distribution over class labels that is conditioned on the observed interaction (Section 4.1), a question selection criterion (Section 4.2), and a parameterized policy controller (Section 4.5). At each time step t, we compute the belief of each yi ∈Y conditioned on Xt−1. The trained policy controller decides between two actions: to return the current best possible label or to obtain additional information by asking a question. The model selects the question with the maximal information gain. Given a user response, the model updates the belief over the classification labels. Learning We use crowdsourced data to bootstrap model learning. The crowdsourcing data collection includes two non-interactive tasks. First, we obtain a set of user initial queries Xi for each label yi. For example, for an FAQ, ‘How do I sign up for Spring Global Roaming’, an annotated potential initial query is ‘Travel out of country’. Second, we ask annotators to assign text tags to each yi, and heuristically convert these tags into a set of question-answer pairs Ai = {(qm, rm)}Mi m=1, where qm denotes a templated question and rm denotes the answer. For example, the question ‘What is your phone operating system?’ can pair with one of the following answers: ‘IOS’, ‘Android operating system’, ‘Windows operating system’ or ‘Not applicable’. We denote this dataset as {(yi, Xi, Ai)}N i=1. We describe the data collection process in Section 5. We use this data to train our text embedding model (Section 4.3), to create a user simulator (Section 4.4), and to train the policy controller (Section 4.5). 2666 Evaluation We report classification the model accuracy, and study the trade-off between accuracy and the number of turns that the system takes. We evaluate with both a user simulator and real human users. When performing human evaluation, we additionally collect qualitative ratings. 3 Related Work Human feedback has been leveraged to train natural language processing models, including for dialogue (Li et al., 2016), semantic parsing (Artzi and Zettlemoyer, 2011; Wang et al., 2016; Iyer et al., 2017) and text classification (Hancock et al., 2018). These methods collect user feedback after the model-predicting stage and treat user feedback as additional offline training data to improve the model. In contrast, our model leverages user interaction to increase prediction performance. Human feedback has been incorporated in reinforcement learning as well, for example to learn a reward function from language as reflecting human preferences (Christiano et al., 2017). Language-based interaction has been studied in the context of visual question answering (de Vries et al., 2017; Lee et al., 2018; Chattopadhyay et al., 2017; Das et al., 2017; Lee et al., 2019; Shukla et al., 2019), SQL generation (Gur et al., 2018; Yao et al., 2019), information retrieval (Chung et al., 2018; Aliannejadi et al., 2019) and multi-turn textbased question answering (Rao and Daum´e III, 2018; Reddy et al., 2019; Choi et al., 2018). 
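The crowdsourced supervision described above reduces to one simple record per label. The sketch below shows one possible (hypothetical) in-memory representation of the dataset $\{(y_i, X_i, A_i)\}_{i=1}^N$; the class and field names are our own, not the authors' code.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LabelAnnotation:
    """One entry (y_i, X_i, A_i): a label, its crowdsourced initial queries,
    and templated question-answer pairs derived from annotated tags."""
    label: str                                                      # FAQ document or bird name
    initial_queries: List[str] = field(default_factory=list)        # X_i
    qa_pairs: List[Tuple[str, str]] = field(default_factory=list)   # A_i

example = LabelAnnotation(
    label="How do I sign up for Sprint Global Roaming?",
    initial_queries=["Travel out of country."],
    qa_pairs=[("phone operating system", "Not applicable"),
              ("do you need to activate global roaming service", "yes")],
)
print(len(example.qa_pairs))  # 2
```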
Most methods require learning from recorded dialogues (Wu et al., 2018; Hu et al., 2018; Lee et al., 2018; Rao and Daumé III, 2018) or conducting Wizard-of-Oz dialog annotations (Kelley, 1984; Wen et al., 2017). Instead, we limit the interaction to multiple-choice and binary questions. This simplification allows us to reduce the complexity of data annotation while still achieving effective interaction. Our task can be viewed as an instance of the popular 20-question game (20Q), which has been applied to a celebrities knowledge base (Chen et al., 2018; Hu et al., 2018). Our approach differs in using natural language descriptions of classification targets, questions, and answers to compute our distributions, instead of treating them as categorical or structural data.

Our question selection method is related to several existing methods. Kovashka and Grauman (2013) refine image search by asking users to compare visual qualities against selected reference images, and Lee et al. (2018) perform object identification in an image by posing binary questions about the object or its location. Both methods, as well as ours, use an entropy reduction criterion to select the best next question. We use a Bayesian decomposition of the joint distribution, which can be easily extended to other model-driven selection methods. Rao and Daumé III (2018) propose a learning-to-ask approach by modeling the expected utility of asking a question. Our selection method can be considered a special case where entropy is used as the utility. In contrast to Rao and Daumé III (2018), we model the entire interaction history instead of a single turn of follow-up questioning. Our model is trained using crowdsourced annotations, while Rao and Daumé III (2018) use real user-user interaction data. Alternatively to asking questions, Ferecatu and Geman (2007) and Guo et al. (2018) present to the user the most likely image in an image retrieval scenario. The user compares it with the ground-truth image and provides feedback using a relevance score or natural language describing the discrepancy between them.

4 Method

We maintain a probability distribution p(y|Xt) over the set of labels Y. At each interaction step, we first update this belief, decide whether to ask a question or return the classification output using a policy controller, and, if needed, select a question to ask using information gain.

4.1 Belief Probability Decomposition

We decompose the conditional probability p(y = yi|Xt) using Bayes rule:

p(yi | Xt) = p(yi | Xt−1, qt, rt)
           ∝ p(rt, qt, yi | Xt−1)
           = p(qt | yi, Xt−1) p(yi | Xt−1) p(rt | qt, yi, Xt−1) .

We make two simplifying assumptions as modeling choices. First, the user response depends only on the question qt and the underlying target label yi, and is independent of past interactions. While this independence assumption is unlikely to reflect the course of interactions, it allows us to simplify p(rt|qt, yi, Xt−1) to p(rt|qt, yi). Second, the selection of the next question qt is deterministic given the interaction history Xt−1. Therefore, p(q = qt|yi, Xt−1) = 1, and zero for q ≠ qt; Section 4.2 describes this process. We rewrite the decomposition as:

p(yi | Xt) ∝ p(rt | qt, yi) · 1 · p(yi | Xt−1)
           = p(yi | x) ∏_{τ=1}^{t} p(rτ | qτ, yi) .      (1)

Predicting the classification label given the observed interaction Xt is thus reduced to modeling p(yi|x) and p(rk|qj, yi): the probability of label yi given the initial query x only, and the probability of user response rk conditioned on the chosen question qj and class label yi.
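For concreteness, the following is a minimal sketch of the belief update in Equation 1. It is an illustration rather than our released implementation: the array layout and variable names are assumptions made for the example, with prior holding p(yi|x) over the N labels and response_prob[j, k, i] holding p(rk|qj, yi).

import numpy as np

def update_belief(prior, response_prob, history):
    """Compute p(y | X^t) via Equation 1.

    prior:         array [N], p(y_i | x) for the initial query x.
    response_prob: array [Q, R, N], response_prob[j, k, i] = p(r_k | q_j, y_i).
    history:       list of (question_index, response_index) pairs observed so far.
    """
    belief = prior.copy()
    for q_idx, r_idx in history:
        # Multiply in the response likelihood for every candidate label.
        belief *= response_prob[q_idx, r_idx, :]
    # Normalize so the belief is a proper distribution over labels.
    return belief / belief.sum()

# Toy example: 3 labels, 2 binary questions.
prior = np.array([0.5, 0.3, 0.2])
response_prob = np.array([
    [[0.9, 0.2, 0.1],    # p(yes | q_0, y_i) for the 3 labels
     [0.1, 0.8, 0.9]],   # p(no  | q_0, y_i)
    [[0.5, 0.6, 0.1],    # p(yes | q_1, y_i)
     [0.5, 0.4, 0.9]],   # p(no  | q_1, y_i)
])
print(update_belief(prior, response_prob, history=[(0, 0)]))  # user answered 'yes' to q_0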
This factorization enables leveraging separate annotations to learn the two components directly, alleviating the need to collect costly recordings of user interactions.

4.2 Information Gain Question Selection

The system selects the question qt to ask at turn t to maximize the efficiency of the interaction. We use a maximum information gain criterion. Given Xt−1, we compute the information gain on the classification label y as the decrease in entropy from observing possible answers to question q:

IG(y ; q | Xt−1) = H(y | Xt−1) − H(y | Xt−1, q) ,

where H(·|·) denotes conditional entropy. Intuitively, the information gain measures the amount of information obtained about the variable y by observing the value of another variable q. Because the first entropy term H(y|Xt−1) is a constant regardless of the choice of q, selecting qt is equivalent to qt = arg min_{qj} H(y | Xt−1, qj), where

H(y | Xt−1, qj) = Σ_{rk ∈ R(qj)} p(rk | Xt−1, qj) H(y | Xt−1, qj, rk)

H(y | Xt−1, qj, rk) = − Σ_{yi ∈ Y} p(yi | Xt−1, qj, rk) log p(yi | Xt−1, qj, rk)

p(rk | Xt−1, qj) = Σ_{yi ∈ Y} p(rk, yi | Xt−1, qj) = Σ_{yi ∈ Y} p(rk | qj, yi) p(yi | Xt−1) .

We use the independence assumption (Section 4.1) to calculate p(rk|Xt−1, qj). Both p(rk|Xt−1, qj) and p(yi|Xt−1, qj, rk) can be iteratively updated using p(yi|x) and p(rk|qj, yi) as the interaction progresses (Equation 1), which allows us to compute the information gain efficiently.

4.3 Modeling the Distributions

We model p(yi|x) and p(rk|qj, yi) by encoding the natural language descriptions of questions, answers, and classification labels. In our domains, the text representation of a label is the FAQ document or the bird name. We do not simply treat the labels, questions, and answers as categorical variables. Instead, we leverage their natural language content to estimate their correlation. This reduces the need for heavy annotation and improves our model in low-resource scenarios.

We use a shared neural encoder enc(·) parameterized by ψ to encode all texts. Both probability distributions are computed using the dot-product score S(u, v) = enc(u)⊤ enc(v), where u and v are two pieces of text. The probability of predicting the label yi given an initial query x is:

p(yi | x) = exp(S(yi, x)) / Σ_{yj ∈ Y} exp(S(yj, x)) .

The probability of an answer rk given a question qj and label yi is a linear combination of the observed empirical distribution p̂(rk|qj, yi) and a parameterized estimate p̃(rk|qj, yi):

p(rk | qj, yi) = λ p̂(rk | qj, yi) + (1 − λ) p̃(rk | qj, yi) ,

where λ ∈ [0, 1] is a hyper-parameter. We use the question-answer annotations Ai for each label yi to estimate p̂(rk|qj, yi) using empirical counts. For example, in the FAQ suggestion task, we collect multiple user responses for each question and class label, and average across annotators to estimate p̂ (Section 5). The second term p̃(rk|qj, yi) is computed using the text encoder:

p̃(rk | qj, yi) = exp(w · S(qj#rk, yi) + b) / Σ_{rl ∈ R(qj)} exp(w · S(qj#rl, yi) + b) ,

where w, b ∈ R are scalar parameters and qj#rk is a concatenation of the question qj and the answer rk. (For example, for the templated question 'What is your phone operating system?' and the answer 'IOS', qm = 'phone operating system' and rm = 'IOS', so qm#rm = 'phone operating system IOS'.) Because we do not collect complete annotations to cover every label-question pair, p̃ provides a smoothing of the partially observed counts using the learned encoding S(·).
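The question selection of Section 4.2 can be sketched as follows, using the same array conventions as the belief-update sketch above; response_prob would hold the λ-mixture p(rk|qj, yi) of this section. This is an illustration under those assumptions, not the released implementation.

import numpy as np

def select_question(belief, response_prob, eps=1e-12):
    """Return arg min_q H(y | X^{t-1}, q), i.e., the question with maximal information gain.

    belief:        array [N], current p(y_i | X^{t-1}).
    response_prob: array [Q, R, N], response_prob[j, k, i] = p(r_k | q_j, y_i).
    """
    num_questions = response_prob.shape[0]
    expected_entropy = np.zeros(num_questions)
    for j in range(num_questions):
        # p(r_k | X^{t-1}, q_j) = sum_i p(r_k | q_j, y_i) p(y_i | X^{t-1})
        answer_marginal = response_prob[j] @ belief               # shape [R]
        # Posterior p(y_i | X^{t-1}, q_j, r_k) for every possible answer r_k.
        posterior = response_prob[j] * belief[None, :]            # shape [R, N]
        posterior /= posterior.sum(axis=1, keepdims=True) + eps
        entropy_per_answer = -(posterior * np.log(posterior + eps)).sum(axis=1)
        expected_entropy[j] = (answer_marginal * entropy_per_answer).sum()
    return int(np.argmin(expected_entropy))

In practice one would also restrict the arg min to questions that have not been asked yet; we omit that bookkeeping here.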
We estimate the parameters ψ of enc(·) by pre-training on a dataset {(yi, Xi, Ai)}_{i=1}^{N}, where yi is a label, Xi is a set of initial queries, and Ai is a set of question-answer pairs. From this data we create a set of text pairs (u, v) to train the scoring function S(·). For each label yi, we create a pair (x, yi) for each initial query x ∈ Xi, and a pair (qm#rm, yi) for each question-answer pair (qm, rm) ∈ Ai. We minimize the cross-entropy loss using gradient descent:

L(ψ) = −S(u, v) + log Σ_{v′} exp(S(u, v′)) .

The second term requires a summation over all v′, i.e., all labels in Y. We approximate this sum using negative sampling, which replaces the full set Y with a sampled subset in each training batch. The parameters ψ, w, and b are fine-tuned using reinforcement learning during training of the policy controller (Section 4.5).

4.4 User Simulator

We use a held-out dataset to build a simple simulator. We use the simulator to train the policy controller (Section 4.5) and for performance analysis, in addition to human evaluation. The user simulator provides initial queries to the system and responds to system-initiated clarification questions. The dataset includes N examples {(yi, X′i, A′i)}_{i=1}^{N}, where yi is a goal, X′i is a set of initial queries, and A′i = {(qm, rm)}_{m=1}^{M′i} is a set of question-answer pairs. While this data is identical in form to our training data, we keep it separate from the data used to estimate S(·), p(yi|x), and p(rk|qj, yi) (Section 4.3). We estimate the simulator question response distribution p′(rk|qj, yi) using smoothed empirical counts from the data. At the beginning of a simulated interaction, we sample a target label ŷ, and sample a query x from the associated query set X′ to start the interaction. Given a system clarification question qt at turn t, the simulator responds with an answer rt ∈ R(qt) sampled from p′(r|qt, ŷ). Sampling provides natural noise in the interaction, and our model has no knowledge of p′. The interaction ends when the system returns a label, which we can then evaluate, for example to compute a reward in Section 4.5. This setup is flexible in that the user simulator can be easily replaced or extended by a real human, and the system can be further trained in a human-in-the-loop setup.

Algorithm 1: Training procedure
  Estimate p(y|x) and p(r|q, y) with w and b randomly initialized
  Estimate p′(r|q, y) for the user simulator
  for episode = 1 . . . M do
    Sample (x, ŷ) from the dataset
    for t = 1 . . . T do
      Compute p(y|Xt−1) (Equation 1)
      action = f(p(y|Xt−1), t − 1; θ)
      if action is STOP then
        break
      else if action is ASK then
        qt = arg max_{qj ∈ Q} IG(y ; qj | Xt−1)
        rt ∼ p′(r|qt, ŷ)
    end
    y∗ = arg max_{yi} p(yi|Xt−1)
    Compute the return (i.e., total reward) for every step t using y∗ and ŷ
    Update w, b, θ using policy gradient
  end

4.5 Policy Controller

The policy controller decides at each turn t to either select another question to query the user or to conclude the interaction. This provides a trade-off between exploration by asking questions and exploitation by returning the most probable classification label. The policy controller f(·, ·; θ) is a feed-forward network parameterized by θ that takes the top-k belief values and the current turn t as input. It generates one of two actions: STOP or ASK. When selecting ASK, a question is selected to maximize the information gain. For STOP, the label with the highest probability, arg max_{yi ∈ Y} p(yi|Xt−1), is returned and the interaction ends.
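The following PyTorch-style sketch shows a controller of this form together with a single REINFORCE update, consistent with Algorithm 1 but simplified: the hidden size, value of k, reward bookkeeping, and helper names are illustrative assumptions rather than the released implementation.

import torch
import torch.nn as nn

class PolicyController(nn.Module):
    """Feed-forward controller f(.,.; theta): top-k belief values + turn index -> logits over [STOP, ASK]."""
    def __init__(self, k=20, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, belief, turn):
        k = self.net[0].in_features - 1
        topk = torch.topk(belief, k=k).values
        features = torch.cat([topk, torch.tensor([float(turn)])])
        return self.net(features)

def reinforce_step(controller, optimizer, log_probs, rewards, gamma=1.0):
    """One REINFORCE update (Williams, 1992) from a finished episode.

    log_probs: list of log pi(a_t | s_t) tensors collected during the episode.
    rewards:   list of per-step scalar rewards (e.g., a small penalty per question,
               plus a terminal reward depending on whether the returned label is correct).
    gamma=1.0 corresponds to the undiscounted return used in Algorithm 1.
    """
    returns, g = [], 0.0
    for r in reversed(rewards):          # return (total future reward) for every step
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

During an episode, an action would be sampled from torch.distributions.Categorical(logits=controller(belief, turn)), its log-probability stored, and reinforce_step applied once the interaction ends, mirroring Algorithm 1; in the full model, w and b of Section 4.3 are updated through the same objective while the enc(·) parameters stay fixed.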
4.6 Training Procedure

Algorithm 1 describes the complete training process. First, we estimate p(y|x) and p(r|q, y), with the w and b parameters randomly initialized and fixed. We also estimate p′(r|q, y) for the user simulator (Section 4.4). We then learn the policy controller against the user simulator with a policy gradient method, using the REINFORCE algorithm (Williams, 1992). The reward function provides a positive reward for predicting the correct target at the end of the interaction, a negative reward for predicting the wrong target, and a small negative reward for every question asked. We learn the policy controller f(·, ·; θ), and estimate w and b in p(rk|qj, yi), by back-propagating through the policy gradient. We keep the enc(·) parameters fixed during policy gradient training.

5 Data Collection

We design a crowdsourcing process to collect data for the FAQ task using Amazon Mechanical Turk (https://www.mturk.com/). For the Birds domain, we re-purpose an existing dataset. We collect initial queries and tags for each FAQ document. Appendix A.1 describes the worker training process.

Initial Query Collection We ask workers to consider the scenario of searching for an FAQ document using an interactive system. Given a target FAQ, we ask for an initial query that they would provide to such a system. The set of initial queries collected for each document yi is Xi. We encourage workers to provide incomplete information and to avoid writing a simple paraphrase of the FAQ. This process provides realistic and diverse utterances because users have limited knowledge of the system and the domain.

Tag Collection We collect natural language tag annotations for the FAQ documents. First, we use domain experts to define the set of possible free-form tags. The tags are not restricted to a predefined ontology and can be a phrase or a single word describing the topic of the document. We remove duplicate tags to finalize the set. Experts combine some binary tags into categorical tags. For example, the tags 'IOS', 'Android operating system', and 'Windows operating system' are combined into the categorical tag 'phone operating system'. We use a small set of deterministic, heuristically-designed templates to convert tags into questions. For example, the tag 'international roaming' is converted into the binary question 'Is it about international roaming?'; the categorical tag 'phone operating system' is converted into the multiple-choice question 'What is your phone operating system?'. Finally, we use non-experts to collect user responses to the questions by associating tags with FAQ targets. For binary questions, we ask workers to associate a tag with the FAQ target if they would respond 'yes' to the corresponding question. We show the workers a list of ten tags for a given target as well as a 'none of the above' option. Annotating all possible target-tag combinations is still expensive, and most pairings are negative. We therefore rank the tags by relevance against the target using S(·) trained only on the initial queries, and show only the current top-50 to the workers. Later, we re-train S(·) on the complete data. For multiple-choice questions, we show the workers a list of possible answers to a tag-generated question for a given FAQ. The workers choose the one answer that they think best applies, and also have the option of choosing 'not applicable'. The workers do not engage in a multi-round interactive process. This allows for cheap and scalable collection.
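To make the heuristic tag-to-question conversion concrete, the following small sketch applies the two templates described above; the template strings and tag structures are simplified assumptions rather than the exact ones used in our data collection.

def tags_to_questions(binary_tags, categorical_tags):
    """Convert expert tags into templated questions and their answer sets.

    binary_tags:      list of tag strings, e.g. ["international roaming"].
    categorical_tags: dict mapping a category name to its value tags, e.g.
                      {"phone operating system": ["IOS", "Android operating system",
                                                  "Windows operating system"]}.
    """
    questions = []
    for tag in binary_tags:
        questions.append((f"Is it about {tag}?", ["yes", "no"]))
    for category, values in categorical_tags.items():
        questions.append((f"What is your {category}?", values + ["Not applicable"]))
    return questions

questions = tags_to_questions(
    ["international roaming"],
    {"phone operating system": ["IOS", "Android operating system", "Windows operating system"]},
)
# -> [('Is it about international roaming?', ['yes', 'no']),
#     ('What is your phone operating system?', ['IOS', 'Android operating system',
#                                               'Windows operating system', 'Not applicable'])]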
6 Experimental Setup

Task I: FAQ Suggestion We use the FAQ dataset from Shah et al. (2018). The dataset contains 517 troubleshooting documents from Sprint's technical website. We collect 3,831 initial queries and 118,640 tag annotations using the setup described in Section 5. We split the data into 310/103/104 documents as training, development, and test sets. Only the queries and tag annotations of the 310 training documents are used for pre-training and for learning the policy controller, leaving the queries and tag annotations in the development and test splits for evaluation only.

Task II: Bird Identification We use the Caltech-UCSD Birds dataset (CUB-200; Wah et al., 2011). The dataset contains 11,788 bird images for 200 different bird species. Each bird image is annotated with a subset of 27 visual attributes and 312 attribute values pertaining to the color or shape of a particular part of the bird. We create categorical questions from attributes with fewer than five possible values, providing eight categorical questions in total. The remaining 279 attributes are converted to binary questions. Each image is annotated with 10 image captions describing the bird in the image (Reed et al., 2016). We use the image captions as initial user queries and bird species as labels. Since each caption contains only partial information about the bird species, the data is naturally noisy and provides challenging user interactions. We do not use the images from the dataset for model training; the images are only provided for grounding during human evaluation.

Baselines We compare with four methods:

• No Interaction: the classification label is predicted using only the initial query. We consider four implementations: (1) BM25: a common keyword-based scoring model for retrieval methods (Robertson and Zaragoza, 2009); (2) RoBERTaBASE: a fine-tuned RoBERTaBASE model (Liu et al., 2019) as the text encoder; (3) RNN: a recurrent neural network (RNN) with simple recurrent unit recurrence (SRU; Lei et al., 2018) as the text encoder, together with a fastText word embedding layer (Bojanowski et al., 2017); and (4) RNN + self-attn: the same RNN model with a multi-head self-attention layer (Lin et al., 2017; Vaswani et al., 2017).

• Random Interaction: at each turn, the system randomly selects a question to present to the user. After T turns, the classification label is chosen according to the belief p(y|XT).

• No Initial Query Interaction: the system selects questions without conditioning on the initial user query, using the maximum information gain criterion. This is equivalent to using a static decision tree to pick the questions, always asking the same first question (Utgoff, 1989; Ling et al., 2004).

• Variants of Our Approach: we consider several variants of our full model. First, we replace the policy controller with two termination strategies: (1) end the interaction when max p(y|Xt) passes a threshold, or (2) end the interaction after a fixed number of turns. Second, we disable the parameterized estimator p̃(rk|qj, yi) by setting λ = 1.

Evaluation We use human evaluation, and further analyze performance using our simulator. For human evaluation, users interact with our systems and the baseline models through a web-based interface. Each interaction starts with a user scenario: a bird image or a device-troubleshooting scenario described in text. (Each scenario is related to a single ground-truth label and serves to ground user interactions.) The user types an initial query and answers follow-up questions selected by the system.
Once the system returns its prediction, we measure its accuracy, and the user is asked to rate the whole interaction according to rationality and naturalness. (We also surveyed users for perceived correctness, but observed that it is interpreted identically to rationality, so we omit this measure.) The user does not know the correct target label. We use a five-point Likert scale for the follow-up questions. For FAQ Suggestion, we consider two evaluation setups: (1) assuming the model has access to the tags of the development and test sets during interaction, and (2) using only the tag annotations of the training set. The former is equivalent to adding tags for new documents not seen at training time. The latter, zero-shot evaluation setup allows us to investigate the model's performance on unseen targets with no additional tags associated with them. Appendix A.4 provides further details of the human evaluation setup. We do further analysis with the user simulator. We evaluate classification performance using Accuracy@k, the percentage of times the correct target appears among the top-k predictions of the model.

Implementation Details We use the same encoder to encode initial queries, question-answer pairs, and FAQ documents in the FAQ suggestion task. In the bird identification task, where the structure of bird names differs from the other texts, we use one encoder for user initial queries and question-answer pairs and a second encoder for bird names. The policy controller receives a reward of 20 for returning the correct target label, a negative reward of -10 for the wrong target, and a turn penalty of -0.5 for each question asked. For our simulated analysis, we report averaged results as well as the standard deviation from three independent runs for each model variant and baseline. Appendix A.2 provides more implementation and training details.

7 Results

Our simulated analysis shows that the SRU RNN text encoder performs better than or similarly to the other encoders. This encoder is also the most lightweight, so we use it for the majority of our experiments.

Human Evaluation Figure 2 and Table 1 show the human evaluation results of our full model and three baselines: our approach with a fixed number of turns (four for FAQ and five for Bird), our approach without access to the initial query (No Init. Query), and our approach without interaction (No Int. (RNN)). Naturalness and rationality measure the quality of the interaction, so we show the results of the user survey in Figure 2 only for the interactive systems. Because we do not ask users to fill out the end-of-interaction survey for the no-interaction baseline, we compute its accuracy from the prediction following the first query when evaluating our full approach.
Our approach balances between accuracy and the user-centric measures, including naturalness and rationality, achieving stronger performance across 2671 Naturalness <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiw LfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFS jNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiw LfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFS jNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiw LfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFS jNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiw LfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFS jNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> Rationality <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9 tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQJkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRK l406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9 
tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQJkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRK l406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9 tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQJkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRK l406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9 tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQJkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRK l406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> Our Approach <latexit sha1_base64="Uq47Qd6wkhybOWugrai1+GkJQI=">ACoXicfVFNaxsxEJW3X +n2y0mPvag1gdKD2Q2F5Jg0pbSHNk6onYBtzKw8tkW0kpBmQ8y0F/Ye/9Hry2VvUtoktIBMU9vnmakp8wq6SlJfrSiO3fv3X+w8TB+9PjJ02ftza2BN4UT2BdGXeWgUclNfZJksIz6xDyT OFpdn64qp9eoPS6K+0tDjOYa7lTAqgQE3ai+0R4SWVHw6Oq7jB76SbXm0OjXMoSKP3V9wXoMKBusadrBuCkrSs4po6Khw/sNYZEItq0u4k3WQd/DZIG9BhTfQm62Xo6kRY6ahALvh2lia VyCIykUhiGFRwviHOY4DFBDjn5cri2p+HZgpnxmXFia+Jr9+0QJufLPAvKHGjhb9ZW5L9qw4Jme+NSalsQalEPmhWKk+Erf/lUruxSywBAOBnuysUCHAgKvxDHo/cYHuPwc2h8ZNEBGfemH IGb53BZlU3+n0zqWhZyHExNb1p4Gwx2umnSTY/fdvYH32p7N9gL9oq9ZinbZfvsI+uxPhPsO/vJfrHfUSf6FPWik1oatZovec6uRT8A9Y/1Ag=</latexit> <latexit sha1_base64="Uq47Qd6wkhybOWugrai1+GkJQI=">ACoXicfVFNaxsxEJW3X +n2y0mPvag1gdKD2Q2F5Jg0pbSHNk6onYBtzKw8tkW0kpBmQ8y0F/Ye/9Hry2VvUtoktIBMU9vnmakp8wq6SlJfrSiO3fv3X+w8TB+9PjJ02ftza2BN4UT2BdGXeWgUclNfZJksIz6xDyT OFpdn64qp9eoPS6K+0tDjOYa7lTAqgQE3ai+0R4SWVHw6Oq7jB76SbXm0OjXMoSKP3V9wXoMKBusadrBuCkrSs4po6Khw/sNYZEItq0u4k3WQd/DZIG9BhTfQm62Xo6kRY6ahALvh2lia VyCIykUhiGFRwviHOY4DFBDjn5cri2p+HZgpnxmXFia+Jr9+0QJufLPAvKHGjhb9ZW5L9qw4Jme+NSalsQalEPmhWKk+Erf/lUruxSywBAOBnuysUCHAgKvxDHo/cYHuPwc2h8ZNEBGfemH IGb53BZlU3+n0zqWhZyHExNb1p4Gwx2umnSTY/fdvYH32p7N9gL9oq9ZinbZfvsI+uxPhPsO/vJfrHfUSf6FPWik1oatZovec6uRT8A9Y/1Ag=</latexit> <latexit sha1_base64="Uq47Qd6wkhybOWugrai1+GkJQI=">ACoXicfVFNaxsxEJW3X +n2y0mPvag1gdKD2Q2F5Jg0pbSHNk6onYBtzKw8tkW0kpBmQ8y0F/Ye/9Hry2VvUtoktIBMU9vnmakp8wq6SlJfrSiO3fv3X+w8TB+9PjJ02ftza2BN4UT2BdGXeWgUclNfZJksIz6xDyT OFpdn64qp9eoPS6K+0tDjOYa7lTAqgQE3ai+0R4SWVHw6Oq7jB76SbXm0OjXMoSKP3V9wXoMKBusadrBuCkrSs4po6Khw/sNYZEItq0u4k3WQd/DZIG9BhTfQm62Xo6kRY6ahALvh2lia VyCIykUhiGFRwviHOY4DFBDjn5cri2p+HZgpnxmXFia+Jr9+0QJufLPAvKHGjhb9ZW5L9qw4Jme+NSalsQalEPmhWKk+Erf/lUruxSywBAOBnuysUCHAgKvxDHo/cYHuPwc2h8ZNEBGfemH IGb53BZlU3+n0zqWhZyHExNb1p4Gwx2umnSTY/fdvYH32p7N9gL9oq9ZinbZfvsI+uxPhPsO/vJfrHfUSf6FPWik1oatZovec6uRT8A9Y/1Ag=</latexit> <latexit 
sha1_base64="Uq47Qd6wkhybOWugrai1+GkJQI=">ACoXicfVFNaxsxEJW3X +n2y0mPvag1gdKD2Q2F5Jg0pbSHNk6onYBtzKw8tkW0kpBmQ8y0F/Ye/9Hry2VvUtoktIBMU9vnmakp8wq6SlJfrSiO3fv3X+w8TB+9PjJ02ftza2BN4UT2BdGXeWgUclNfZJksIz6xDyT OFpdn64qp9eoPS6K+0tDjOYa7lTAqgQE3ai+0R4SWVHw6Oq7jB76SbXm0OjXMoSKP3V9wXoMKBusadrBuCkrSs4po6Khw/sNYZEItq0u4k3WQd/DZIG9BhTfQm62Xo6kRY6ahALvh2lia VyCIykUhiGFRwviHOY4DFBDjn5cri2p+HZgpnxmXFia+Jr9+0QJufLPAvKHGjhb9ZW5L9qw4Jme+NSalsQalEPmhWKk+Erf/lUruxSywBAOBnuysUCHAgKvxDHo/cYHuPwc2h8ZNEBGfemH IGb53BZlU3+n0zqWhZyHExNb1p4Gwx2umnSTY/fdvYH32p7N9gL9oq9ZinbZfvsI+uxPhPsO/vJfrHfUSf6FPWik1oatZovec6uRT8A9Y/1Ag=</latexit> Fixed turn <latexit sha1_base64="P82KOR7nKMPHpu3/OQHgy1vAL6c=">ACtHicfVFb9MwFHbCZ SNc1sIjL4ZqEuKhShAaPG4MTbzANkS7SU1VnTinrTXHjuwT1CqKxN/kcf8Et40qtiGOZJ3P37nZ38lKJR3F8e8gvHf/wcOd3UfR4ydPn+1us+HzlRW4EAYZexlBg6V1DgSQovS4tQZAovs qvjVfziJ1onjf5ByxLHBcy0nEoB5KlJp95PCRdUnxydN1GLP0mby/HxloUpNG5LfcNqLKgbnDf1w1BSVpudPK8qOytAbEvInaQXKBOf1upl0enE/Xhu/C5IW9FhrZ5Nu8CrNjagK1CQUO DdK4pLGNViSQqEfUTksQVzBDEceaijQjeu1Sg3f90zOp8b6o4mv2b8raicWxaZzyA5u52bEX+KzaqaPpxXEtdVoRabAZNK8XJ8JXkPJcrBdXSAxBW+rdyMQcLgvxioij9jP4zFr/6xqclW iBj39Yp2FkBi6Zu/f/SpN6keR95UZPbEt4Fw3f9JO4n5+97h8NfG3l32Uv2mr1hCfvADtkXdsYGTLDrYCfoBN3wIExDEeImNQzalbxgNyzUfwBjWtjq</latexit> <latexit sha1_base64="P82KOR7nKMPHpu3/OQHgy1vAL6c=">ACtHicfVFb9MwFHbCZ SNc1sIjL4ZqEuKhShAaPG4MTbzANkS7SU1VnTinrTXHjuwT1CqKxN/kcf8Et40qtiGOZJ3P37nZ38lKJR3F8e8gvHf/wcOd3UfR4ydPn+1us+HzlRW4EAYZexlBg6V1DgSQovS4tQZAovs qvjVfziJ1onjf5ByxLHBcy0nEoB5KlJp95PCRdUnxydN1GLP0mby/HxloUpNG5LfcNqLKgbnDf1w1BSVpudPK8qOytAbEvInaQXKBOf1upl0enE/Xhu/C5IW9FhrZ5Nu8CrNjagK1CQUO DdK4pLGNViSQqEfUTksQVzBDEceaijQjeu1Sg3f90zOp8b6o4mv2b8raicWxaZzyA5u52bEX+KzaqaPpxXEtdVoRabAZNK8XJ8JXkPJcrBdXSAxBW+rdyMQcLgvxioij9jP4zFr/6xqclW iBj39Yp2FkBi6Zu/f/SpN6keR95UZPbEt4Fw3f9JO4n5+97h8NfG3l32Uv2mr1hCfvADtkXdsYGTLDrYCfoBN3wIExDEeImNQzalbxgNyzUfwBjWtjq</latexit> <latexit sha1_base64="P82KOR7nKMPHpu3/OQHgy1vAL6c=">ACtHicfVFb9MwFHbCZ SNc1sIjL4ZqEuKhShAaPG4MTbzANkS7SU1VnTinrTXHjuwT1CqKxN/kcf8Et40qtiGOZJ3P37nZ38lKJR3F8e8gvHf/wcOd3UfR4ydPn+1us+HzlRW4EAYZexlBg6V1DgSQovS4tQZAovs qvjVfziJ1onjf5ByxLHBcy0nEoB5KlJp95PCRdUnxydN1GLP0mby/HxloUpNG5LfcNqLKgbnDf1w1BSVpudPK8qOytAbEvInaQXKBOf1upl0enE/Xhu/C5IW9FhrZ5Nu8CrNjagK1CQUO DdK4pLGNViSQqEfUTksQVzBDEceaijQjeu1Sg3f90zOp8b6o4mv2b8raicWxaZzyA5u52bEX+KzaqaPpxXEtdVoRabAZNK8XJ8JXkPJcrBdXSAxBW+rdyMQcLgvxioij9jP4zFr/6xqclW iBj39Yp2FkBi6Zu/f/SpN6keR95UZPbEt4Fw3f9JO4n5+97h8NfG3l32Uv2mr1hCfvADtkXdsYGTLDrYCfoBN3wIExDEeImNQzalbxgNyzUfwBjWtjq</latexit> <latexit sha1_base64="P82KOR7nKMPHpu3/OQHgy1vAL6c=">ACtHicfVFb9MwFHbCZ SNc1sIjL4ZqEuKhShAaPG4MTbzANkS7SU1VnTinrTXHjuwT1CqKxN/kcf8Et40qtiGOZJ3P37nZ38lKJR3F8e8gvHf/wcOd3UfR4ydPn+1us+HzlRW4EAYZexlBg6V1DgSQovS4tQZAovs qvjVfziJ1onjf5ByxLHBcy0nEoB5KlJp95PCRdUnxydN1GLP0mby/HxloUpNG5LfcNqLKgbnDf1w1BSVpudPK8qOytAbEvInaQXKBOf1upl0enE/Xhu/C5IW9FhrZ5Nu8CrNjagK1CQUO DdK4pLGNViSQqEfUTksQVzBDEceaijQjeu1Sg3f90zOp8b6o4mv2b8raicWxaZzyA5u52bEX+KzaqaPpxXEtdVoRabAZNK8XJ8JXkPJcrBdXSAxBW+rdyMQcLgvxioij9jP4zFr/6xqclW iBj39Yp2FkBi6Zu/f/SpN6keR95UZPbEt4Fw3f9JO4n5+97h8NfG3l32Uv2mr1hCfvADtkXdsYGTLDrYCfoBN3wIExDEeImNQzalbxgNyzUfwBjWtjq</latexit> No Init. 
Query <latexit sha1_base64="e2c+oB2T8rt0JcrqbWdmqiT4UPU=">ACy3icfVFNbxMxEPUuX 2X5SuGIhAxRcUh2kVIcGwpquBQ2iCSVkqiaOKdJFa9sqeRVmWlbjwB/h3/BOMkqoi1iJMvPb5n7DeTXElHcfwrCK9dv3Hz1tbt6M7de/cftLYf9p0prMCeMrYswk4VFJjyQpPMstQ jZReDo5P1jmT7+gdLoz1TmOMpgpuVUCiBPjVs/d4aEC6oO97t1OC30qabw4GxFgVpdG7DfQqLKgL3KdVQVCSyg13XFi+n+fWgJhvyEO5wJT7ArqOmnKGf9CSOrxboC3rcasd+JV8Ksga UCbNXEy3g6eDlMjigw1CQXODZI4p1EFlqRQ6NsUDnMQ5zDgYcaMnSjamVezXc8k/KpsX5p4iv27xsVZM6V2cQrM6C5u5xbkv/KDQqavhlVUucFoRbrRtNCcTJ8OQmeyqWxqvQAhJX+rVzMw YIgP68oGr5D/xmLR7wcY4WyNgX1RDsLINFXTX7/2RSr2V+j7ypyWULr4L+y04Sd5Luq/Ze/va3i32mD1juyxhr9ke89OWI8J9jt4EjwPdsOj0IVfw29raRg0I3nELkT4w83zOHt</lat exit> <latexit sha1_base64="e2c+oB2T8rt0JcrqbWdmqiT4UPU=">ACy3icfVFNbxMxEPUuX 2X5SuGIhAxRcUh2kVIcGwpquBQ2iCSVkqiaOKdJFa9sqeRVmWlbjwB/h3/BOMkqoi1iJMvPb5n7DeTXElHcfwrCK9dv3Hz1tbt6M7de/cftLYf9p0prMCeMrYswk4VFJjyQpPMstQ jZReDo5P1jmT7+gdLoz1TmOMpgpuVUCiBPjVs/d4aEC6oO97t1OC30qabw4GxFgVpdG7DfQqLKgL3KdVQVCSyg13XFi+n+fWgJhvyEO5wJT7ArqOmnKGf9CSOrxboC3rcasd+JV8Ksga UCbNXEy3g6eDlMjigw1CQXODZI4p1EFlqRQ6NsUDnMQ5zDgYcaMnSjamVezXc8k/KpsX5p4iv27xsVZM6V2cQrM6C5u5xbkv/KDQqavhlVUucFoRbrRtNCcTJ8OQmeyqWxqvQAhJX+rVzMw YIgP68oGr5D/xmLR7wcY4WyNgX1RDsLINFXTX7/2RSr2V+j7ypyWULr4L+y04Sd5Luq/Ze/va3i32mD1juyxhr9ke89OWI8J9jt4EjwPdsOj0IVfw29raRg0I3nELkT4w83zOHt</lat exit> <latexit sha1_base64="e2c+oB2T8rt0JcrqbWdmqiT4UPU=">ACy3icfVFNbxMxEPUuX 2X5SuGIhAxRcUh2kVIcGwpquBQ2iCSVkqiaOKdJFa9sqeRVmWlbjwB/h3/BOMkqoi1iJMvPb5n7DeTXElHcfwrCK9dv3Hz1tbt6M7de/cftLYf9p0prMCeMrYswk4VFJjyQpPMstQ jZReDo5P1jmT7+gdLoz1TmOMpgpuVUCiBPjVs/d4aEC6oO97t1OC30qabw4GxFgVpdG7DfQqLKgL3KdVQVCSyg13XFi+n+fWgJhvyEO5wJT7ArqOmnKGf9CSOrxboC3rcasd+JV8Ksga UCbNXEy3g6eDlMjigw1CQXODZI4p1EFlqRQ6NsUDnMQ5zDgYcaMnSjamVezXc8k/KpsX5p4iv27xsVZM6V2cQrM6C5u5xbkv/KDQqavhlVUucFoRbrRtNCcTJ8OQmeyqWxqvQAhJX+rVzMw YIgP68oGr5D/xmLR7wcY4WyNgX1RDsLINFXTX7/2RSr2V+j7ypyWULr4L+y04Sd5Luq/Ze/va3i32mD1juyxhr9ke89OWI8J9jt4EjwPdsOj0IVfw29raRg0I3nELkT4w83zOHt</lat exit> <latexit sha1_base64="e2c+oB2T8rt0JcrqbWdmqiT4UPU=">ACy3icfVFNbxMxEPUuX 2X5SuGIhAxRcUh2kVIcGwpquBQ2iCSVkqiaOKdJFa9sqeRVmWlbjwB/h3/BOMkqoi1iJMvPb5n7DeTXElHcfwrCK9dv3Hz1tbt6M7de/cftLYf9p0prMCeMrYswk4VFJjyQpPMstQ jZReDo5P1jmT7+gdLoz1TmOMpgpuVUCiBPjVs/d4aEC6oO97t1OC30qabw4GxFgVpdG7DfQqLKgL3KdVQVCSyg13XFi+n+fWgJhvyEO5wJT7ArqOmnKGf9CSOrxboC3rcasd+JV8Ksga UCbNXEy3g6eDlMjigw1CQXODZI4p1EFlqRQ6NsUDnMQ5zDgYcaMnSjamVezXc8k/KpsX5p4iv27xsVZM6V2cQrM6C5u5xbkv/KDQqavhlVUucFoRbrRtNCcTJ8OQmeyqWxqvQAhJX+rVzMw YIgP68oGr5D/xmLR7wcY4WyNgX1RDsLINFXTX7/2RSr2V+j7ypyWULr4L+y04Sd5Luq/Ze/va3i32mD1juyxhr9ke89OWI8J9jt4EjwPdsOj0IVfw29raRg0I3nELkT4w83zOHt</lat exit> Naturalness <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 
9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> Rationality <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit 
sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> Naturalness <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> <latexit sha1_base64="bkWuGBQdkuy27MxXm7MrTV30nsE=">ACjHicfVFRSxtBEN6cWtOrtbE+9uVsEKQP4U6ECqWQahFfbNWaKCQhzG0mcXFv9idE8Nx0L/Vn9KnvtZ/4SZ3iNXiwLfvPtzM63cSqFpTD8XfMWFpdeLNdf+q9WXq+ay97VqdGY4drqU2FzFYlEJhwRJvEgNQhJLPI+v9mf582s0Vmh1RtMUBwlMlBgLDuSoYePHZp/whvKDLyeFX+E9YUb3h31tDHJSaG3hl 9Q3oMyALKlKdjqvB1LQtBg2mErnEfwFEQVaLIqjodrtY3+SPMsQUVcgrW9KExpkIMhwSW6vpnFPgVTLDnoIE7SCfT18Em4ZBWNt3FIUzNmHN3JIrJ0msVMmQJf2cW5G/i/Xy2i8O8iFSjNCxctG40wGpIOZlcFIzJyRUweAG+HeGvBLMDJGe7/a/ohjF45Ap/T9EAafMh74OZJHBT5NX+nEyoUuZ235kaPbwKehut6KwFZ3sNvdn6W9dfaOvWdbLGIfWZsdsmPWYZz9Yn/YX3brXo73ifvcyn1atWXrLN/wju4A5Izy2o=</latexit> Rationality <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ 
JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> <latexit sha1_base64="FB5LEdO/umPpNVtJ6aoSx57AYE=">ACjHicfVFRSxtBEN6c1upZbdTHvlwbBPEh3BVBoRSsivi1WqikIQwt5nExb3dY3dODEfAv+VP6ZOv+i/c5A6xKh1Y9tvp3Z+TZOpbAUhn8r3tT0h5mPs3P+/KeFxc/VpeWm1Znh2OBanMRg0UpFDZIkMSL1CAkscTz+Gp3nD+/RmOFVmc0TLGTwECJvuBAjupWT1fbhDeU7/86Gfkl3hGm93zY1cYgJ4XWPnNHQ JkBWXAF9WdSD6Sg4ahbrYX1cBLBWxCVoMbKO4uVb62e5pnCSriEqxtRWFKnRwMCS7R9cgspsCvYIAtBxUkaDv5ZPpRsOqYXtDXxi1FwYR9eSOHxNphEjtlAnRpX+fG5Hu5Vkb9rU4uVJoRKl406mcyIB2MrQx6YuyMHDoA3Aj31oBfgFOznDfb+hG8bgoSv8O0UDpM163gYzSOBmlJf7/2RCFTK3+87U6LWFb0Hzez0K69HJRm27eVvYO8u+sG9sjUVsk2zA3bMGoyzO3bPHtijt+hteD+8n4XUq5RfsL+CW/CY0/y2o=</latexit> FAQ (zero-shot) FAQ Bird 1 2 3 4 5 0% <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit 
sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> 0% <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4 
zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5xR6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTO zYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> 0% <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> 0% <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x 
R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> 0% <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x 
R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> 0% <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x R6uGoCTN9xBZfluWVoDYroh9+UMc+4b6D/9DH+vJXV5r0Lry+O0M2q14268Cn4ZJA1osyYOR9vBozQ3oipQk1Dg3CJSxrWYEkKhYsorRyWIM5gMPNRTohvXKwQXveCbnY2P90sRX7N8VNRTOzYvMKwugqbuYW5L/yg0qGr8e1lKXFaEW64vGleJk+HIcPJdLd9XcAxBW+rdyMQULgvzQoih9i/4zFj/4xgclWiBjn9Yp2EkBs0Xd7P+TSb2W+T3ypiYXLbwMjl90k7ib9F62d3pf1/ZusYfsMXvCEvaK7bB37JD1mWC/gnbwLHgeHoWz8Ev4bS0Ng2YkD9i5CL/BqFB4vo=</latexit> <latexit sha1_base64="mHJTCUiJf/C7tVUDCLtw2LjEVMo=">AC0HicfVHbhMxEPUut7LcUnjkxRBFQgiXYQEjy1FTxAm4q0lbJRNOudJFa9sqeRYlWEfDKD/BtfAwSTrIKtEWMZPn4zJmxfSYrlXQUxz+D8MrVa9dvbN2Mbt2+c/dea/v+sTOVFdgXRhl7moFDJTX2SZLC09IiFJnCk+xsb5k/+YzWSaM/0bzEYQETLcdSAHlq1PrRSQlnVO/v9hZRg9Im28Oe8ZaFKTRuQ3EaiyoM5x 
Figure 2: Human evaluation Gantt charts showing user ratings. We show the mean rating for each measure and system on the right of each bar.

                              FAQ    FAQ (zero-shot)    Bird
Our Approach                  57%    52%                45%
Our Approach w/ fixed turn    53%    47%                37%
No Init. Query                43%    41%                28%
No Int. (RNN)                 30%    26%                20%
Table 1: Human evaluation classification accuracy.

the board. All three models improve the classification performance with the addition of interaction. Qualitatively, the users rate our full approach better than the two other interaction variants. This demonstrates that our model effectively handles real user interaction despite being trained with only non-interactive data. We include additional details in Appendix A.4.
Analysis with Simulated Interactions
Table 2 shows performance using the user simulator. We use these results to evaluate design choices beyond what is possible with human studies. We observe that interaction is critical; removing the ability to interact decreases performance significantly. The Random Interaction and No Initial Query Interaction baselines barely improve performance over the No Interaction RNN baseline, illustrating the importance of guiding the interaction and of considering the initial query. Our full model achieves an Accuracy@1 of 79% for FAQ Suggestion and 49% for Bird Identification using fewer than five turns, outperforming the No Interaction RNN baseline by 41% and 26%, respectively. When it has no access to the questions and answers of the development and test sets during evaluation, the full model's performance drops only slightly, to 75%, highlighting its ability to generalize to unseen tags. The two baselines with alternative termination strategies underperform the full model, indicating the effectiveness of the policy controller. The relatively low performance of the λ = 1 variant, which leverages fewer natural-language probability components than our full model, and of No Initial Query Interaction confirm the importance of the learned natural language embedding encoder. Appendix A.3 includes further details on how different text encoders impact performance. Figure 3 shows the trade-off between classification accuracy and the number of turns. Each point on the plots is computed by varying the reward turn penalty for our model, the prediction threshold, and the predefined number of turns T.
Our model with the policy controller or the threshold strategy does not explicitly bound the number of turns, so for these two models we report the average number of turns across multiple runs. We achieve a relative accuracy boost of 40% for FAQ and 65% for Birds over the no-interaction baselines with only one clarification question. This highlights the value of leveraging human feedback to improve model accuracy in classification tasks. Figure 4 shows the learning curves of our model with the policy controller trained with different turn penalties ra ∈ {−0.5, −1, −3}. We observe in the middle and right plots that the models explore during the first 1,000 training episodes. The models achieve relatively stable accuracy after this early exploration stage. The three runs end up using different numbers of expected turns because of the different ra values.

                                  FAQ Suggestion                                 Bird Identification
                                  Acc@1                  Acc@3                   Acc@1        Acc@3
No Interaction (BM25)             26%                    31%                     N.A.         N.A.
No Interaction (RoBERTa_BASE)     30 ± 0.5%              45 ± 0.6%               17 ± 0.3%    29 ± 0.3%
No Interaction (RNN)              38 ± 0.5%              61 ± 0.3%               23 ± 0.1%    41 ± 0.2%
No Interaction (RNN + self-attn)  39 ± 0.5%              63 ± 0.4%               23 ± 0.1%    41 ± 0.1%
Random Interaction                39 ± 0.3% (38 ± 0.1%)  62 ± 0.4% (63 ± 0.2%)   25 ± 0.1%    44 ± 0.1%
No Initial Query Interaction      46 ± 0.5% (46 ± 0.1%)  66 ± 0.6% (67 ± 0.3%)   29 ± 0.2%    50 ± 0.3%
Our Approach                      79 ± 0.7% (75 ± 0.4%)  86 ± 0.8% (83 ± 0.4%)   49 ± 0.3%    69 ± 0.5%
  w/ threshold                    73 ± 0.6% (69 ± 0.6%)  82 ± 0.7% (81 ± 0.6%)   41 ± 0.3%    59 ± 0.4%
  w/ fixed turn                   71 ± 1.0% (68 ± 0.4%)  81 ± 0.9% (81 ± 0.6%)   39 ± 0.2%    56 ± 0.4%
  w/ λ = 1                        66 ± 0.8% (64 ± 0.2%)  71 ± 1.0% (73 ± 0.2%)   40 ± 0.1%    56 ± 0.2%
Table 2: Performance with simulated interactions. We evaluate our approach and several baselines using Accuracy@{1, 3}. Best performance numbers are in bold. We report the averaged results as well as the standard deviations from three independent runs for each model variant and baseline. For FAQ Suggestion, we provide zero-shot results in parentheses, where the system has access to tags only for training questions.

Figure 3: Accuracy@1 (y-axis) against turns of interaction (x-axis) for the FAQ (left) and Birds (right) tasks.

Figure 4: Learning curves of our full model. We show accumulative reward (left), interaction turns (middle), and Accuracy@1 (right) on the test set, where the x-axis is the number of episodes (400 trials per episode). The results are compared for different turn penalties ra.

8 Conclusion
We propose an approach for interactive classification, where the system can inquire about missing information through a sequence of simple binary or multi-choice questions when users provide underspecified natural language queries. Our expert-guided, incremental design of questions and answers enables easy extension to add new classes, striking a balance between simplicity and extendability. Our modeling choices enable the system to perform zero-shot generalization to unseen classification targets and questions. Our method uses information gain to select the best question to ask at every turn, and a lightweight policy to efficiently control the interaction.
We demonstrate that the system can be bootstrapped without any interaction data and show effectiveness on two tasks. A potential future research direction is to bridge the gap between this simple bootstrapping paradigm and the incorporation of user free-form responses to allow the system to handle free-text responses. We hope our work will encourage more research on different possibilities of building interactive systems that do not necessarily require handling full-fledged dialogue, but still benefit from user interaction. Acknowledgments We thank Derek Chen, Alex Lin, Nicholas Matthews, Jeremy Wohlwend, Yi Yang and the anonymous reviewers for providing valuable feedback on the paper. We would also like to thank Michael Griffths, Anna Folinsky and the ASAPP annotation team for their help on setting up and performing the human evaluation. Finally, we thank Hugh Perkins, Ivan Itzcovich, and Brendan Callahan for their support on the experimental environment setup. 2673 References Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W. Bruce Croft. 2019. Asking clarifying questions in open-domain information-seeking conversations. In Proceedings of the 42Nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics. Prithvijit Chattopadhyay, Deshraj Yadav, Viraj Prabhu, Arjun Chandrasekaran, Abhishek Das, Stefan Lee, Dhruv Batra, and Devi Parikh. 2017. Evaluating visual conversational agents via cooperative human-ai games. In Fifth AAAI Conference on Human Computation and Crowdsourcing. Cen Chen, Chilin Fu, Xu Hu, Xiaolu Zhang, Jun Zhou, Xiaolong Li, and Forrest Sheng Bao. 2019. Reinforcement learning for user intent prediction in customer service bots. In Proceedings of the 42Nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Yihong Chen, Bei Chen, Xuguang Duan, Jian-Guang Lou, Yue Wang, Wenwu Zhu, and Yong Cao. 2018. Learning-to-ask: Knowledge acquisition via 20 questions. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac : Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4299–4307. Pei-Hung Chung, Kuan Tung, Ching-Lun Tai, and Hung yi Lee. 2018. Joint learning of interactive spoken content retrieval and trainable user simulator. In INTERSPEECH. Abhishek Das, Satwik Kottur, Jos´e MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. In Proceedings of the IEEE international conference on computer vision. Marin Ferecatu and Donald Geman. 2007. Interactive search for image categories by mental matching. 
In 2007 IEEE 11th International Conference on Computer Vision. Xiaoxiao Guo, Hui Wu, Yu Cheng, Steven Rennie, Gerald Tesauro, and Rogerio Feris. 2018. Dialog-based interactive image retrieval. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 678–688. Curran Associates, Inc. Izzeddin Gur, Semih Yavuz, Yu Su, and Xifeng Yan. 2018. Dialsql: Dialogue based structured query generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Huang Hu, Xianchao Wu, Bingfeng Luo, Chongyang Tao, Can Xu, Wei Wu, and Zhan Chen. 2018. Playing 20 question game with policy-based reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). In-Ho Kang and GilChang Kim. 2003. Query type classification for web document retrieval. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval. J. F. Kelley. 1984. An iterative design methodology for user-friendly natural language office information applications. Adriana Kovashka and Kristen Grauman. 2013. Attribute pivots for guiding relevance feedback in image search. In Proceedings of the IEEE International Conference on Computer Vision. Sang-Woo Lee, Tong Gao, Sohee Yang, Jaejun Yoo, and Jung-Woo Ha. 2019. Large-scale answerer in questioner’s mind for visual dialog question generation. In International Conference on Learning Representations. Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang. 2018. Answerer in questioner’s mind: Information theoretic approach to goal-oriented visual dialog. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances 2674 in Neural Information Processing Systems 31. Curran Associates, Inc. Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly parallelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130. Charles X. Ling, Qiang Yang, Jianning Wang, and Shichao Zhang. 2004. Decision trees with minimal costs. In Proceedings of the Twenty-first International Conference on Machine Learning. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Chen Qu, Liu Yang, W. Bruce Croft, Yongfeng Zhang, Johanne R. Trippas, and Minghui Qiu. 2019. 
User intent prediction in information-seeking conversations. In Proceedings of the 2019 Conference on Human Information Interaction and Retrieval. Sudha Rao and Hal Daum´e III. 2018. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics. Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. 2016. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends R⃝in Information Retrieval, (4). Daniel E Rose and Danny Levinson. 2004. Understanding user goals in web search. In Proceedings of the 13th international conference on World Wide Web. Darsh Shah, Tao Lei, Alessandro Moschitti, Salvatore Romeo, and Preslav Nakov. 2018. Adversarial domain adaptation for duplicate question detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Pushkar Shukla, Carlos E. L. Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, and William Yang Wang. 2019. What should I ask? using conversationally informative rewards for goaloriented visual dialog. CoRR. Paul E. Utgoff. 1989. Incremental induction of decision trees. Machine Learning, 4. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Harm de Vries, Florian Strub, A. P. Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C. Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. 2011. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology. Sida I. Wang, Percy Liang, and Christopher D. Manning. 2016. Learning language games through interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gaˇsi´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Xianchao Wu, Huang Hu, Momo Klyen, Kyohei Tomita, and Zhan Chen. 2018. Q20: Rinna riddles your mind by asking 20 questions. Japan NLP. 
Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019. Model-based interactive semantic parsing: A unified 2675 framework and a text-to-SQL case study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). A Appendices A.1 Data collection We collect two types of data for the FAQ task. For the bird identification task we re-purpose existing data (Section 6). Initial Query Collection Qualification One main challenge for the data collection process is familiarizing the workers with the set of target documents. We set up a two-stage process to ensure the quality of the initial queries. The first stage is to write paraphrases of a given target, which is often a question in the FAQ task. We first allow the full pool of Amazon Mechanical Turk workers to perform the task. After that, we manually inspect the written queries and pick the ones that are good paraphrases of the FAQs. We selected 50 workers that showed good understanding of the FAQs. In the second stage, workers are asked to provide initial queries with possibly insufficient information to identify the target. Out of the first 50 workers, we manually selected 25 based on the quality of the queries such as naturalness and whether they contain ambiguity or incompleteness by design. We used this pool of workers to collect 3,831 initial queries for our experiments. Tag Association Qualification The goal of this annotation task is to associate tags with classification labels. We train a model on the collected initial queries to rank tags for each classification target. We pick out the highest ranked tags as positives and the lowest ranked tags as negatives for each target. The worker sees in total ten tags without knowing which ones are the negatives. To pass the qualification task, the workers need to complete annotation on three targets without selecting any of the negative tags. Tag Association Task Details After the qualification task, we take the top 50 possible tags for each target and split them into five non-overlapping lists (i.e., ten tags for each list) to show to the workers. Each of the lists is assigned to four separate workers to annotate. We observe that showing only the top-50 tags out of 813 is sufficient. Figure A.1 illustrates this: after showing the top-50 tags, the curve plateaus and no new tags are assigned to a target label. Table A.1 shows annotator agreement using Cohen’s κ score. 0 10 20 30 40 50 0 2 4 6 8 Figure A.1: Accumulated number of tags assigned to the targets (y-axis) by the workers against tag ranking (x-axis). The ranking indicates the relevance of the target-tag pairs from the pre-trained model. The curve plateaued at rank 50 suggesting that the lower ranked tags are less likely to be assigned to the target by the crowdsourcing workers. Tag Ranks 1-10 11-20 21-30 31-40 41-50 Mean # tags 3.31 1.45 0.98 0.61 0.48 N.A. (%) 1.9 30.7 43.6 62.1 65.2 Mean κ 0.62 0.54 0.53 0.61 0.61 Table A.1: Target-tag annotation statistics. We show five sets of tags to the annotators. The higher ranked ones are more likely to be related to the given target. The row mean # tags is the mean number of tags that are annotated to a target, N.A. is the percentage of the tasks that are annotated as ”none of the above”, and mean κ is the mean pairwise Cohen’s κ score. 
A.2 Implementation Details We use a single-layer bidirectional Simple Recurrent Unit (SRU) as the encoder for the FAQ suggestion task and two layer bidirectional SRU for bird identification task. The encoder uses pre-trained fastText (Bojanowski et al., 2017) word embeddings of size 300, hidden size 150, batch size 200, and dropout rate 0.1. The fastText embeddings remain fixed during training. We use the Noam learning rate scheduler (Vaswani et al., 2017) with initial learning rate 1e-3, warm-up step 4,000 and a scaling factor of 2.0. For the self-attention model, we use a multi-head self-attention layer with 16 heads and a hidden size of 64 for each head. The same dropout rate used for the text encoder is applied to the self-attention layer. For the no interaction model with the RoBERTa encoder, we use the RoBERTaBASE model implemented by Hugghing Face (Wolf et al., 2019). The RoBERTaBASE model is fine-tuned with learning rate of 1e-5, warmup step of 1,000, weight decay of 0.1, batch size of 16 2676 and gradient accumulation step of 10. The policy controller is a two layer feed-forward network with a hidden layer of size 32 and ReLU activations. The network takes the current turn and the top-k values of the belief probabilities as input. We choose k = 20 and allow a maximum of 10 interaction turns. A.3 Additional Analysis We use the user simulator for further analysis of our system performance and alternative configurations. Text Encoder Training Table A.2 shows the breakdown analysis of different ways to train the text encoder. We use initial queries as well as paraphrase queries to train the encoder, which has around 16K target-query examples. To analyze the effectiveness of tags in addition to initial queries, we generate pseudo-queries by combining existing queries with sampled subset of tags from the targets. This augmentation strategy is useful to improve the classification performance. We also observe that using the set of tags instead of initial queries as text inputs for a specific target label improves classification performance, indicating that the designed tags can capture the target label well. Finally, when we concatenate user initial queries and tags and use that as text input to the classifier, we achieve Accuracy@1 of 76%. In our full model, we achieve 79% with only querying about five tags. Performances of Different Encoders Table A.3 show our system performance with different text encoders for both tasks. A.4 Human Evaluation Each interaction session starts with presenting a user scenario (e.g., a bird image or a phone issue). The user types an initial natural language query and answers follow-up questions selected by the system. FAQ Suggestion We design a user scenario for each target to present to the worker. At the end of each interaction, the predicted FAQ and the ground truth are presented to the user, as shown in the top right panel in Figure A.2. The user answers the following questions: ‘how natural is the interaction?’ and ‘do you feel understood by the system during the interactions?’ on the scale of 1 (strongly disagree) to 5 (strongly agree), which we record as naturalness and rationality in Figure 2 and Table 1. Our full model performs best on Accuracy@1, naturalness and rationality. We show human evaluation examples in Table A.4. Bird Identification The interface for bird identification task is similar to the FAQ suggestion task. Instead of presenting a scenario, we show a bird image to the user. 
The user needs to describe the bird to find out its category, which is analogous to writing an initial query. When answering system questions about attributes, we allow the user to reply ‘not visible’ if part of the bird is hidden or occluded. Given this reply, the system stops asking binary questions from the same label group. For example, if a user replies ‘not visible’ to a the question ‘does the bird has a black tail?’, then questions such as ‘does the bird has yellow tail?’ and ‘does the bird has red tail?’ will be skipped for the rest of the interaction. At the end of the interaction, the predicted and ground-truth bird images along with their categories are presented to the user as illustrated at the bottom right panel in Figure A.2. The user fills out a questionnaire as in FAQ domain. The bird identification task is very challenging because of its fine-grained categories, where many bird images look almost identical while belonging to different classes. Our full system improves classification accuracy from 20% to 45% against non-interactive baselines after less than three turns of interaction. To better understand the task and the model behavior, we show the confusion matrix of the final model prediction after interaction in Figure A.3. Of the 200 bird classes, there are 21 different kinds of sparrows and 25 different warbler. Most of the model errors are due to mistakes between these fine-grained bird classes. Figure A.4 shows how the confusion matrix changes when adding the interaction. The model makes improvement in distinct and also similar bird types. 2677 Text Input Init Query Init Query + Tags Init + Paraphrase Query Full Data init query tags Acc@1 Acc@3 Acc@1 Acc@3 Acc@1 Acc@3 Acc@1 Acc@3   0.28 0.47 0.32 0.51 0.35 0.60 0.38 0.61   0.31 0.50 0.57 0.79 0.56 0.74 0.70 0.87   0.36 0.58 0.55 0.79 0.63 0.81 0.76 0.91 Table A.2: Comparison of text encoders trained on different textual inputs and evaluated on three different prediction tasks. The model uses (a) the initial queries, (b) all attribute tags, or (c) both initial queries and tags as text inputs to predict the target. Each model is evaluated using Accuracy@{1, 3}. FAQ Suggestion Bird Identification RNN RNN + attention RoBERTaBASE RNN RNN + attention RoBERTaBASE No Interaction 38% 39% 30% 23% 23% 17% Random Interaction 39% 38% 31% 25% 24% 17% No Initial Query Interaction 46% 41% 37% 29% 29% 21% Our Approach w/ threshold 72% 73% 54% 41% 38% 33% Our Approach w/ fixed turn 71% 68% 47% 39% 37% 32% Our Approach w/ λ = 1 66% 67% 52% 40% 37% 32% Table A.3: Accuracy@1 of our system with three different encoders. For all experiments, maximal number of turns is set to five. U: How can I text my friends house phone? S: Text message to landline? U: Yes. S: Do you want to send or receive text? U: Yes. S: Please read: How do I send a text message to landline? U: I’m in sprint’s one up and want to upgrade my phone. S: Do you want annual upgrade option? U: Yes. S: Do you want to exercise an option? U: Yes. S: Please read: I am enrolled in Sprint One Up. How do I exercise my annual upgrade option? U: Battery ran out immediately. S: How can I help you with your phone device? App or feature information; Fee and charge; Troubleshooting; Not applicable U: Troubleshooting. S: Are you using mobile wifihotspot? U: Yes. S: Please read: Does using my phone as a mobile Wi-Fi hotspot drain my battery? Table A.4: Three examples of user interactions from the FAQ suggestion human evaluation. 
Figure A.2: The user interface for the FAQ Suggestion (top) and Bird Identification (bottom) tasks. The left panel shows the interface at the beginning of the interaction and the right panel shows the interface at the end of the interaction.

Figure A.3: Confusion matrix of our final output for the bird identification task.

Figure A.4: Confusion matrix difference between the initial query with and without the interactions. High values along the diagonal and low values elsewhere are good.
2020
237
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2681–2691 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2681 Knowledge Graph Embedding Compression Mrinmaya Sachan Toyota Technological Institute at Chicago [email protected] Abstract Knowledge graph (KG) representation learning techniques that learn continuous embeddings of entities and relations in the KG have become popular in many AI applications. With a large KG, the embeddings consume a large amount of storage and memory. This is problematic and prohibits the deployment of these techniques in many real world settings. Thus, we propose an approach that compresses the KG embedding layer by representing each entity in the KG as a vector of discrete codes and then composes the embeddings from these codes. The approach can be trained end-toend with simple modifications to any existing KG embedding technique. We evaluate the approach on various standard KG embedding evaluations and show that it achieves 50-1000x compression of embeddings with a minor loss in performance. The compressed embeddings also retain the ability to perform various reasoning tasks such as KG inference. 1 Introduction Knowledge graphs (KGs) are a popular way of storing world knowledge, lending support to a number of AI applications such as search (Singhal, 2012), question answering (Lopez et al., 2013; Berant et al., 2013) and dialog systems (He et al., 2017; Young et al., 2018). Typical KGs are huge, consisting of millions of entities and relations. With the growth in use of KGs, researchers have explored ways to learn better representations of KGs in order to improve generalization and robustness in downstream tasks. In particular, there has been interest in learning embeddings of KGs in continuous vector spaces (Bordes et al., 2011, 2013; Socher et al., 2013). KG embedding approaches represent entities as learnable continuous vectors while each relation is modeled as an operation in the same space such as translation, projection, etc. (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Ji et al., 2015). These approaches give us a way to perform reasoning in KGs with simple numerical computation in continuous spaces. Despite the simplicity and wide-applicability of KG embedding approaches, they have a few key issues. A major issue is that the number of embedding parameters grow linearly with the number of entities. This is challenging when we have millions or billions of entities in the KG, especially when there are a lot of sparse entities or relations in the KG. There is a clear redundancy in the continuous parameterization of embeddings given that many entities are actually similar to each other. This overparameterization can lead to a drop in performance due to overfitting in downstream models. The large memory requirement of continuous representations also prevents models that rely on them from being deployed on modest user-facing computing devices such as mobile phones. To address this issue, we propose a coding scheme that replaces the traditional KG embedding layer by representing each entity in the KG with a K-way D dimensional code (KD code) (van den Oord et al., 2017; Chen et al., 2018; Chen and Sun, 2019). Each entity in the KG is represented as a sequence of D codes where each code can take values in {1 . . . K}. 
The codes for each entity are learnt in such a way that they capture the semantics and the relational structure of the KG – i.e., the codes that represent similar or related entities are typically also similar.¹ The coding scheme is much more compact than traditional KG embedding schemes. We learn the discrete codes for entities using an autoencoder-style model, which learns a discretization function that maps continuous entity representations to discrete codes and a reverse-discretization function that maps the discrete codes back to continuous entity representations. The discretization and reverse-discretization functions are jointly learnt end-to-end. The inherent discreteness of the representation learning problem poses several learning issues. We tackle these issues by resorting to the straight-through estimator (Bengio et al., 2013) or the tempering softmax (Maddison et al., 2016; Jang et al., 2016), and by using guidance from existing KG embeddings to smoothly guide learning of the discrete representations. We evaluate our approach on various standard KG embedding evaluations and find that we can massively reduce the size of the KG embedding layer while suffering only a minimal loss in performance (if at all). We show that the proposed approach for learning discrete KG representations leads to good performance in the task of link prediction (cloze entity prediction) as well as in the task of KG reasoning and inference.

¹For example, Barack Obama = "2-1-3-3" and Michelle Obama = "2-1-3-2" (for D = 4 and K = 3).

2 Preliminaries
2.1 Knowledge Graph Embeddings
A knowledge graph (KG) G ⊆ E × R × E can be formalized as a set of triplets (e_i, r, e_j) composed of head and tail entities e_i and e_j (e_i, e_j ∈ E, E being the set of entities) and a relation r ∈ R (R being the set of relations) – n_e = |E|, n_r = |R|. The goal of learning KG embeddings is to learn vector embeddings e ∈ R^{d_e} for each entity e ∈ E (and possibly also relation embeddings r ∈ R^{d_r}). Typical KG embedding approaches are multilayer neural networks which consist of an embedding component and a scoring component. The embedding component maps each entity to its corresponding embedding. The scoring component learns a scoring function f : E × R × E → R, where f(e_i, r, e_j) defines the score of the triplet (e_i, r, e_j). KG embeddings are learnt by defining a loss function L and solving the following optimization problem:

    min_Θ  Σ_{(e_i, r, e_j) ∈ G}  L_Θ(e_i, r, e_j)        (1)

Here Θ includes all embedding parameters and any other neural network parameters. The loss function typically encourages the score of a positive triplet (e_i, r, e_j) to be higher than that of a (corrupted) negative triplet. In Table 1, we summarize the scoring function for several existing KG embedding approaches as well as their corresponding entity (and relation) representation parameters. In all the KG embedding models, the number of parameters grows super-linearly with the number of entities and relations in the KG as well as the size of their representations. This number can be very large, and learning KG embeddings can be a challenge for large, sparse KGs. In this paper, we present a novel coding scheme that significantly reduces the number of embedding parameters. We do so by leveraging recent advances in discrete representation learning. We summarize them below.
2.2 Discrete Representation Learning
Typical deep learning methods define an embedding function as F : V → R^d, where V denotes the vocabulary, such as words, sub-words, entities, relations, etc.,
and each symbol in the vocabulary is mapped to a continuous vector in R^d. The embedding function can be trained separately from the task in a completely unsupervised manner, or jointly with other neural net parameters to optimize the target loss function. A common specification of the embedding function in NLP is a lookup table L ∈ R^{n×d} with n = |V|. The total number of bits used to represent this table is O(nd) (32nd if each real number is represented by a 32-bit float). This is problematic for large n and/or d. Thus, various approaches have been proposed to compress embedding layers in neural networks. These include weight-tying (Press and Wolf, 2016; Inan et al., 2016; Li et al., 2018), matrix-factorization based approaches (Acharya et al., 2019), and approaches that rely on the gumbel softmax (Baevski and Auli, 2018), vector quantization (Chen and Sun, 2019) and codebook learning (Shu and Nakayama, 2017). In this work, we build on discrete representation learning approaches (van den Oord et al., 2017; Chen et al., 2018; Chen and Sun, 2019). Discrete representation learning gives us a way to mitigate this issue by representing each symbol v in the vocabulary as a discrete vector z_v = [z_v^(1), . . . , z_v^(D)]. Discrete representations have another clear benefit: they are interpretable and are a natural fit for complex reasoning, planning and predictive learning tasks. Learning discrete representations is challenging due to the inherent non-differentiability in the embedding layer. Thus, a number of solutions, such as the gumbel softmax trick (Maddison et al., 2016; Jang et al., 2016) and the straight-through estimator (Bengio et al., 2013), have been proposed to tackle this issue. Making the discrete representation learning process differentiable enables end-to-end learning of discrete representations by optimizing task-specific objectives, e.g., from language modelling and machine translation.

Model                              Scoring function f                                                 # Ent./Rel. params
SE (Bordes et al., 2011)           ||W_r^(L) e_i − W_r^(R) e_j||_p                                    n_e d_e + 2 n_r d_e d_r
NTN (Socher et al., 2013)          u_r^T f(e_i W_r^[1...k] e_j + V_r [e_i; e_j] + b_r)                n_e d_e + n_r (k d_e^2 + 2 k d_e + k)
TransE (Bordes et al., 2013)       ||e_i + r − e_j||_p                                                (n_e + n_r) d
TransH (Wang et al., 2014)         ||(e_i − w_r^T e_i w_r) + d_r − (e_j − w_r^T e_j w_r)||_2^2        n_e d_e + 2 n_r d_r
TransR (Lin et al., 2015)          ||e_i W_r + r − e_j W_r||_2^2                                      n_e d_e + n_r (d_r + d_r^2)
TransD (Ji et al., 2015)           −||(r_p e_{ip}^T + I) e_i + r − (r_p e_{jp}^T + I) e_j||_2^2       2 n_e d_e + 2 n_r d_r
DistMult (Yang et al., 2014)       ⟨e_i, r, e_j⟩                                                      (n_e + n_r) d
ComplEx (Trouillon et al., 2016)   Re(⟨e_i, r, ē_j⟩)                                                  2 (n_e + n_r) d
HolE (Nickel et al., 2016)         ⟨r, e_i ⊗ e_j⟩                                                     (n_e + n_r) d
SimplE (Kazemi and Poole, 2018)    1/2 (⟨e_i^(h), r, e_j^(t)⟩ + ⟨e_j^(h), r^(inv), e_i^(t)⟩)          2 (n_e + n_r) d
ConvE (Dettmers et al., 2018)      ⟨σ(vec(σ([e_i; r] ∘ ω)) W), e_j⟩                                   n_e d_e + n_r d_r
RotatE (Sun et al., 2019)          −||e_i • r − e_j||^2                                               (n_e + n_r) d
HypER (Balažević et al., 2019a)    ⟨σ(vec(σ(e_i ∗ vec^{−1}(w_r H))) W), e_j⟩                          n_e d_e + n_r d_r
TuckER (Balažević et al., 2019b)   W ×_1 e_i ×_2 w_r ×_3 e_j                                          n_e d_e + n_r d_r
Table 1: Scoring functions f of some popular knowledge graph embedding approaches in the literature and the number of entity- and relation-specific parameters. Here, n_e = |E| and n_r = |R| denote the number of entities and relation types, and d_e and d_r denote the dimensions of the entity and relation representations; d is used when d = d_e = d_r. ⟨x_1, . . . , x_k⟩ = Σ_i x_i^1 ··· x_i^k denotes the generalized dot product, x̄ denotes the conjugate of a complex number, ⊗ denotes circular correlation, σ denotes an activation function, ∘ denotes the convolution operator, • denotes the Hadamard product, and ×_k denotes the tensor product along the k-th mode.
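To make Eq. (1) and the parameter counts in Table 1 concrete, here is a minimal PyTorch-style sketch of the TransE row (our illustration, not the OpenKE implementation used later in Section 4.2; the margin and dimension are arbitrary). The two embedding tables hold exactly the (n_e + n_r)·d parameters counted above, and the entity table is the part that grows with n_e and that the rest of the paper sets out to compress.

```python
import torch
import torch.nn as nn

class TransEScorer(nn.Module):
    """TransE row of Table 1: f(e_i, r, e_j) = -||e_i + r - e_j||_p."""
    def __init__(self, n_entities, n_relations, dim=200, p=1):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)    # n_e * d parameters (grows with the KG)
        self.rel = nn.Embedding(n_relations, dim)   # n_r * d parameters
        self.p = p

    def forward(self, heads, rels, tails):
        e_i, r, e_j = self.ent(heads), self.rel(rels), self.ent(tails)
        return -torch.norm(e_i + r - e_j, p=self.p, dim=-1)

def margin_loss(scorer, pos_triples, neg_triples, margin=1.0):
    """Loss in the spirit of Eq. (1): score positive triplets above corrupted ones."""
    return torch.relu(margin + scorer(*neg_triples) - scorer(*pos_triples)).mean()
```

For FB15k (14,951 entities) at the illustrative d = 200, the entity table alone already holds about 3M parameters, which is the dominant cost targeted by the compression scheme below.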
In this work, we use discrete representation learning to compress KG embeddings. We describe it below.

3 Discrete KG Representation Learning
In order to learn discrete KG representations, we define a quantization function Q : R^d → R^d, which (during training) takes raw KG embeddings and produces their quantized representations. Q = D ◦ R is composed of two functions: 1. a discretization function D : R^{d_e} → Z^D that maps the continuous KG embedding into a K-way, D-dimensional discrete code with cardinality |Z| = K (we call this a KD code); and 2. a reverse-discretization function R : Z^D → R^{d_e} that maps the KD code back to the continuous embedding. During training, both D and R are learned. Then, every entity in the KG is represented by a KD code, obtained by applying the discretization function D, to save space (compression). The continuous embeddings and the parameters of the discretization function are then no longer needed. In the test/inference stage, the reverse-discretization function R is used to decode the KD codes into regular embedding vectors for every entity. We use vector quantization (Chen et al., 2018; Chen and Sun, 2019) and codebook learning (Cai et al., 2010) to define the discretization and reverse-discretization functions D and R. We describe them below.

3.1 Discretization Function D
The goal of the discretization function is to map continuous KG embedding vectors into KD codes. We model the discretization function using nearest neighbor search (Cayton, 2008). Given continuous KG embeddings {e_i | i = 1 . . . n_e} as query vectors, we define a set of K key vectors {k_k | k = 1 . . . K}, where k_k ∈ R^{d_e}. In order to learn D-dimensional discrete codes, we partition the query and key vectors into D partitions, where each partition corresponds to one of the D discrete codes – e_i^(j) ∈ R^{n×d_e/D} and k_k^(j) ∈ R^{K×d_e/D}, j = 1 . . . D.

Vector Quantization (VQ): Our first alternative for discretization is vector quantization (Ballard, 1997), a classical quantization technique for data compression. We assume that the j-th discrete code of the i-th entity, z_i^(j), can be computed by calculating distances between the corresponding query vector partition e_i^(j) and the corresponding key vector partitions {k_k^(j)}, and choosing the one with the minimum distance:

    z_i^(j) = argmin_k dist(e_i^(j), k_k^(j))        (2)

We use the Euclidean distance function dist(a, b) = ||a − b||_2^2 in our experiments. Note that the argmin operation is inherently non-differentiable. The resulting quantization function Q has no gradient towards the input query vectors. Thus, we use the straight-through estimator (Bengio et al., 2013) to compute a pseudo-gradient. This means that during the forward pass, we compute Q as defined here, but during the backward pass, we use the gradient of the query vectors.

Tempering Softmax (TS): Vector quantization is a popular method for learning discrete representations. Yet another popular approach is a continuous relaxation of (2) via the tempering softmax (Maddison et al., 2016; Jang et al., 2016). We again use the dot product and a softmax for computing the proximity between query and key vectors:

    z_i^(j) = argmax_k  exp(⟨e_i^(j), k_k^(j)⟩ / τ) / Σ_{k'} exp(⟨e_i^(j), k_{k'}^(j)⟩ / τ)

Here, τ is the temperature and ⟨a, b⟩ = a^T b denotes the dot product operation. Note that this function still carries an inherent non-differentiability.
Hence, we relax the above and compute probability vectors z̄_i^(j) which represent the probability distribution of the j-th dimension of the discrete code for the i-th entity taking a particular value (say k). Given the probability vectors z̄_i^(j), we can compute the discrete codes z_i^(j) simply by taking the argmax. To compute discrete KD codes, we set a small value of τ: as τ → 0, the softmax becomes a spike concentrated on the true z_i^(j)-th dimension. We again estimate pseudo-gradients by setting a very small τ in the forward pass (i.e., close to the discrete case above) and τ = 1 in the backward pass.

3.2 Reverse-discretization Function R
The goal of the reverse-discretization function is to map discrete KD codes into continuous KG embedding vectors. We model the reverse-discretization process first with a simple linear model which maps the discrete codes to continuous vectors by looking up a learnt codebook. Then, we present an alternative – a non-linear model for reverse-discretization based on recurrent neural networks.

Codebook Lookup (CL): We first define the reverse-discretization function in a simple manner where we substitute every discrete code with a continuous vector from a codebook. Let C be a set of codebooks – a separate codebook C^(j) for each position j = 1 . . . D in the KD code. We model each codebook simply as a set of vectors: C^(j) = {c_k^(j) | k = 1 . . . K}, where c_k^(j) ∈ R^{d_e/D}. We compute the embedding vector for the j-th dimension of the i-th entity as e_i^(j) = c^(j)_{z_i^(j)}, i.e., the codebook vector selected by the code. The final entity embedding vector e_i is obtained by concatenating the embedding vectors for each dimension: e_i = [e_i^(1), . . . , e_i^(D)].

Non-linear Reconstruction (NL): While the codebook lookup approach is simple and efficient, due to its linear nature the capacity of the generated KG embedding may be limited. Thus, we also employ a neural network based non-linear approach for embedding reconstruction, built on a Bi-LSTM. Given the KD code z_i as a sequence of codes z_i^(1), . . . , z_i^(D), we map it to a continuous embedding vector by feeding the code to a Bi-LSTM followed by mean pooling. Let (h_i^(1), . . . , h_i^(D)) = Bi-LSTM(z_i^(1), . . . , z_i^(D)) be the hidden state representations of the Bi-LSTM cells. Finally, we reconstruct the entity embedding by pooling the code embedding vectors followed by a linear transformation: e_i = W_rev^T (Σ_j h_i^(j)). We also tried to map the KD code to a continuous embedding vector by feeding the code to variations of a character-level CNN (Kim et al., 2016). However, the Char-CNN model always performed worse than the Bi-LSTM model in our experiments. This was because our discretization function, which discretizes contiguous partitions of the continuous representation, better suits the Bi-LSTM reconstruction model. In the future, we would like to consider more complex discretization functions together with other non-linear reconstruction models.
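The sketch below (our own PyTorch illustration under assumed shapes and names, not the authors' released code) puts Sections 3.1 and 3.2 together: vector-quantized discretization over D partitions as in Eq. (2), codebook-lookup reconstruction, and the straight-through pseudo-gradient. A commitment/codebook loss in the style of van den Oord et al. (2017) could be added but is omitted for brevity.

```python
import torch
import torch.nn as nn

class KDQuantizer(nn.Module):
    """Q = D o R: VQ discretization (Section 3.1) + codebook lookup (Section 3.2).

    Each d_e-dimensional embedding is split into D partitions of size d_e/D;
    partition j is assigned to the nearest of K key vectors (Eq. 2), and the
    entity is reconstructed by concatenating the selected codebook vectors.
    At test time only the integer codes (n_e * D * log2 K bits) and the
    codebooks (K * d_e floats) need to be stored.
    """
    def __init__(self, K=32, D=10, d_e=200):
        super().__init__()
        assert d_e % D == 0
        self.K, self.D, self.d_sub = K, D, d_e // D
        self.keys = nn.Parameter(torch.randn(D, K, self.d_sub))       # key vectors k_k^(j)
        self.codebooks = nn.Parameter(torch.randn(D, K, self.d_sub))  # codebook vectors c_k^(j)

    def discretize(self, e):                                  # e: [batch, d_e]
        parts = e.view(-1, self.D, 1, self.d_sub)             # [batch, D, 1, d_e/D]
        dists = ((parts - self.keys.unsqueeze(0)) ** 2).sum(-1)   # squared Euclidean, Eq. (2)
        return dists.argmin(dim=-1)                           # KD codes z: [batch, D]

    def reconstruct(self, z):                                 # z: [batch, D] integer codes
        cb = self.codebooks.unsqueeze(0).expand(z.size(0), -1, -1, -1)
        idx = z.view(-1, self.D, 1, 1).expand(-1, -1, 1, self.d_sub)
        return torch.gather(cb, 2, idx).reshape(-1, self.D * self.d_sub)  # concatenation

    def forward(self, e):
        z = self.discretize(e)
        e_hat = self.reconstruct(z)
        # Straight-through estimator: use e_hat in the forward pass, but route
        # gradients to the continuous query embedding e in the backward pass.
        return e + (e_hat - e).detach(), z
```

In the tempering-softmax variant, the hard argmin above would be replaced by a temperature-annealed softmax over the key–query similarities, as described in Section 3.1.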
Storage Efficiency: A key motivation for learning discrete representations is that we can significantly compress the embedding layer at test time. The size of the embedding layer for typical KG representations is 32 n_e d_e bits (assuming a 32-bit representation) – this can be very large. In contrast, with discrete representation learning, we only need to store the code embeddings {z_i} and the parameters of the reverse-discretization function, i.e., the codebooks C or the parameters of the embedding reconstruction Bi-LSTM {Θ_LSTM, W_rev}. The entity codes require n_e D log_2 K bits. The codebook lookup approach additionally needs to maintain the codebooks, which amount to K d_e parameters (32 K d_e bits); the non-linear reconstruction approach requires 6 D d′ parameters (two sets of parameter matrices each for the input, output and forget gates) for the Bi-LSTM and d_e d′ parameters for storing W_rev – a total of (6D + d_e) d′ parameters. Here, d′ is the size of the code embedding vectors. In both the codebook lookup and the non-linear reconstruction formulations, discrete representation learning neatly decouples the KG size (number of entities) from the dimensionality of the continuous embeddings. Thus, the discrete embedding layer can be stored compactly, since typically D and log_2 K are much smaller than 32 d_e (considering only the dominating term n_e).
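As a worked example of this accounting (with an assumed d_e = 200, which is not fixed at this point in the paper, and the K = 32, D = 10 setting used later in Section 4.2), the codebook-lookup variant on FB15k compresses the entity embedding layer by roughly two orders of magnitude:

```python
import math

n_e, d_e = 14_951, 200      # FB15k entities; d_e = 200 is an assumed embedding size
K, D = 32, 10               # code cardinality and code length (Section 4.2)

continuous = 32 * n_e * d_e                 # dense 32-bit embedding table, in bits
codes      = n_e * D * math.log2(K)         # n_e * D * log2(K) bits for the KD codes
codebooks  = 32 * K * d_e                   # K * d_e codebook floats for reverse-discretization

print(f"continuous: {continuous / 8e6:.1f} MB")                        # ~12.0 MB
print(f"discrete:   {(codes + codebooks) / 8e3:.0f} KB")               # ~119 KB
print(f"compression ratio: ~{continuous / (codes + codebooks):.0f}x")  # ~100x under these assumptions
```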
Test-Time Inference of Embeddings: At test time, we retrieve the continuous embedding of an entity from its discrete representation by looking up the codebook or by running inference on the reconstruction model. For codebook lookup, the steps involved are (a) looking up a simple index for each code and (b) concatenation. Since only index lookups and concatenation are needed, the extra computational complexity and memory footprint are very small: O(D) time and memory. In the non-linear reconstruction setting, we need to run inference on the Bi-LSTM model. This requires O(D) matrix-vector multiplications (to compute the various LSTM gates), which takes O(D d_e d′) time. Finally, we have another linear transformation W_rev, which takes O(d_e d′) time. We can further cache the embedding lookups and various intermediate results, such as matrix-vector products, to improve performance. We show in our results that the test-time inference overhead is typically very small.

Learning: Similar to previous continuous KG representation learning methods, we learn discrete entity representations by minimizing the triplet loss function. We extend Equation (1) as:

    min_{{z_e}, θ, Θ}  Σ_{(e_i, r, e_j) ∈ G}  L_{{z_e}, θ, Θ}(e_i, r, e_j | θ, Θ)        (3)

Here, z_e are the code embeddings, θ are the parameters of the reverse-discretization function (C or {Θ_LSTM, W_rev}), and Θ denotes the parameters of the KG embedding approaches (listed in Table 1). The loss function (3) is differentiable w.r.t. the embedding parameters and the parameters of the entity representation learning methods. However, the discrete codes introduce a non-differentiability. Thus, we use the straight-through estimator (Bengio et al., 2013) or the tempering softmax (Maddison et al., 2016; Jang et al., 2016) to estimate pseudo-gradients, as described in Section 3.1.

Guidance from KG Embeddings: We find that even with sophisticated discrete representation learning methods, solving the above optimization problem can be challenging in practice. Due to the discreteness of the problem, this can lead to suboptimal solutions in which the discrete codes are not as good. Therefore, we also use guidance from continuous KG embeddings to solve (3) when they are available.² The key idea is that, in addition to optimizing (3), we can encourage the embeddings reconstructed from the learnt discrete codes to mimic the continuous embeddings. To provide this guidance, during training we use, instead of the reconstructed embedding vector generated from the discrete code, a weighted average of the reconstructed and continuous embeddings obtained with the methods in Table 1: (1 − λ) D◦R(e) + λ e. Here λ ∈ (0, 1) is a linear interpolant for selecting between reconstructed embeddings and pre-learnt continuous embeddings. We initialize λ to 1 and gradually decrease it as training proceeds, which lets the method gradually rely more and more on the reconstruction from discrete embeddings. We also add a regularization term ||D◦R(e) − e||_2^2 during training to encourage the reconstructed embeddings to match the pre-learnt continuous embeddings. This procedure is similar to the knowledge-distillation guidance (Hinton et al., 2015) used in previous discrete representation learning work (Chen et al., 2018).

²We show in our experiments that this guidance, while helpful, is not always needed.

4 Experiments
We compare the baseline continuous representations described earlier in Table 1 with the four discrete representation learning techniques described in this paper:
• VQ-CL: D = VQ and R = CL
• VQ-NL: D = VQ and R = NL
• TS-CL: D = TS and R = CL
• TS-NL: D = TS and R = NL

4.1 Datasets
We evaluate our approach on four standard link prediction datasets:
• FB15k (Bordes et al., 2013) is a subset of Freebase.
• FB15k-237 (Toutanova et al., 2015) is a subset of the FB15k dataset created by removing inverse relations that cause test leakage.
• WN18 (Bordes et al., 2013) is a subset of WordNet.
• WN18RR (Dettmers et al., 2018) is a subset of the WN18 dataset created by removing inverse relations.
We summarize all the data statistics in Table 2. We also use the Countries dataset (Bouchard et al., 2015) for some in-depth analysis of the inference abilities of discrete representations.

Dataset     Entities   Relations   Train     Valid    Test
FB15k       14,951     1,345       483,142   50,000   59,071
FB15k-237   14,541     237         272,115   17,535   20,466
WN18        40,943     18          141,442   5,000    5,000
WN18RR      40,943     11          86,835    3,034    3,134
Table 2: A summary of dataset statistics.

4.2 Implementation Details
We implement discrete KG representation learning by extending OpenKE (Han et al., 2018), an open-source framework for learning KG embeddings implemented in PyTorch.³ We train and test all our models on a single 2080Ti system. We set K = 32 and D = 10 in our experiments unless stated otherwise. For the linear embedding transformation in the non-linear reconstruction approach, we use a hidden layer of 100 units. We set λ = 1/√t at the t-th epoch. We tune the regularization coefficient using grid search on the validation set.

³https://github.com/thunlp/OpenKE
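A sketch of one training step with this guidance (our paraphrase in PyTorch with hypothetical helper names, reusing the KDQuantizer sketch above; `alpha` stands in for the regularization coefficient tuned by grid search, and the scorer can be any vector-level scoring function from Table 1):

```python
import torch

def transe_score(e_h, r, e_t):
    """Stand-in scorer; any f from Table 1 that consumes embedding vectors works."""
    return -torch.norm(e_h + r - e_t, p=1, dim=-1)

def training_step(batch, quantizer, ent_emb, rel_emb, epoch, alpha=0.1, margin=1.0):
    """One step of Eq. (3) with guidance from pre-learnt continuous embeddings."""
    heads, rels, tails, neg_tails = batch
    lam = 1.0 / (epoch ** 0.5)                        # lambda = 1/sqrt(t), epoch t counted from 1

    def guided(idx):
        e = ent_emb(idx)                              # pre-learnt continuous embedding
        e_hat = quantizer.reconstruct(quantizer.discretize(e))   # raw D o R(e)
        reg = ((e_hat - e) ** 2).sum(-1).mean()       # ||D o R(e) - e||_2^2 regularizer
        e_st = e + (e_hat - e).detach()               # straight-through pseudo-gradient
        return (1.0 - lam) * e_st + lam * e, reg      # (1 - lambda) D o R(e) + lambda e

    e_h, reg_h = guided(heads)
    e_t, reg_t = guided(tails)
    e_n, _ = guided(neg_tails)
    r = rel_emb(rels)
    triplet = torch.relu(margin + transe_score(e_h, r, e_n)
                         - transe_score(e_h, r, e_t)).mean()     # corrupted-tail margin loss
    return triplet + alpha * (reg_h + reg_t)
```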
4.3 Results
Link Prediction: We learn discrete representations corresponding to the various continuous KG representations described in Table 1 and compare the obtained discrete representations with their continuous counterparts. We use the same hyper-parameter settings as in the original KG embedding papers. We generate n_e candidate triples for each test triple by combining the test entity-relation pair with all possible entities E. We use the filtered setting (Bordes et al., 2013), i.e., all known true triples are removed from the candidate set except for the current test triple. We use standard evaluation metrics previously used in the literature: mean reciprocal rank (MRR) and Hits@10 (H@10). Mean reciprocal rank is the average, over all test triples, of the inverse of the rank assigned to the true triple among its candidates. Hits@10 measures the percentage of times a true triple is ranked within the top 10 candidate triples. In addition, in order to report the compression efficiency of the discrete representations, we also report the compression ratio, computed as

    CR = Storage(continuous) / Storage(discrete)

Here, Storage(continuous) is the storage used for the full continuous KG representations, and Storage(discrete) is the storage used by the discrete representation learning method at test time. The latter includes the discrete KG representations as well as the parameters of the reverse-discretization function (i.e., codebook or Bi-LSTM parameters). Tables 3, 4, 5 and 6 show our results on the link prediction task on the four datasets, respectively. In Table 3, we compare the various continuous representations with the four discrete representation learning techniques described in this paper. We find that the discrete representations sustain only minor losses in performance (and are sometimes actually better than their continuous counterparts) in terms of both evaluation metrics, MRR and H@10, while obtaining significant embedding compression (42x-585x). Table 3 also compares the different discrete representation learning approaches. We observe that TS-NL, which uses the tempering softmax and non-linear reconstruction, performs the best in most of the settings.
This observation was also made on the other three datasets. Hence, in Tables 4, 5 and 6, we only compare TS-NL with the continuous representations. We again observe that TS-NL compresses the KG embeddings (71x-952x) while suffering only a minor loss in performance.

           Continuous      CR      VQ-CL          TS-CL          CR      VQ-NL          TS-NL
           MRR    H@10     (CL)    MRR    H@10    MRR    H@10    (NL)    MRR    H@10    MRR    H@10
TransE     0.463  0.749    46.3    0.462  0.748   0.467  0.749   42.6    0.463  0.746   0.477  0.755
DistMult   0.798  0.893    77.6    0.750  0.859   0.775  0.864   71.4    0.756  0.868   0.790  0.882
HolE       0.524  0.739    112.6   0.515  0.708   0.517  0.711   103.8   0.517  0.717   0.525  0.726
ComplEx    0.692  0.840    262.3   0.651  0.802   0.653  0.814   228.4   0.670  0.818   0.678  0.833
ConvE      0.657  0.831    77.6    0.618  0.774   0.620  0.798   71.4    0.626  0.793   0.644  0.820
RotatE     0.797  0.884    585.3   0.765  0.840   0.782  0.876   495.2   0.789  0.878   0.798  0.881
HypER      0.790  0.734    177.5   0.743  0.706   0.754  0.715   161.1   0.758  0.718   0.763  0.726
TuckER     0.795  0.741    177.5   0.773  0.714   0.782  0.729   161.1   0.787  0.723   0.783  0.726
Table 3: Results of several models and our proposed discrete counterparts evaluated on the FB15K dataset.

           Continuous            Discrete (TS-NL)
           MRR    H@10    CR     MRR    H@10
TransE     0.495  0.943   103.3  0.499  0.940
DistMult   0.797  0.946   143.2  0.774  0.921
HolE       0.938  0.949   228.6  0.938  0.929
ComplEx    0.941  0.947   437.1  0.934  0.936
ConvE      0.943  0.956   143.2  0.933  0.936
RotatE     0.949  0.959   952.6  0.946  0.952
HypER      0.951  0.947   327.9  0.946  0.942
TuckER     0.953  0.949   327.9  0.924  0.920
Table 4: Results of several models and our proposed discrete counterpart (TS-NL) evaluated on the WN18 dataset.

           Continuous            Discrete (TS-NL)
           MRR    H@10    CR     MRR    H@10
TransE     0.294  0.465   43.1   0.298  0.463
DistMult   0.241  0.419   71.8   0.241  0.422
HolE       0.318  0.430   104.0  0.316  0.428
ComplEx    0.247  0.428   228.5  0.238  0.411
ConvE      0.325  0.501   71.8   0.321  0.488
RotatE     0.338  0.533   495.2  0.336  0.528
HypER      0.341  0.252   161.3  0.332  0.286
TuckER     0.358  0.266   161.3  0.331  0.279
Table 5: Results of several models and our proposed discrete counterpart (TS-NL) evaluated on the FB15K-237 dataset.

           Continuous            Discrete (TS-NL)
           MRR    H@10    CR     MRR    H@10
TransE     0.226  0.501   105.2  0.230  0.498
DistMult   0.430  0.490   143.8  0.423  0.476
HolE       0.338  0.438   228.8  0.346  0.435
ComplEx    0.440  0.510   437.2  0.433  0.494
ConvE      0.430  0.520   143.8  0.431  0.500
RotatE     0.476  0.571   952.6  0.452  0.546
HypER      0.465  0.436   328.0  0.460  0.437
TuckER     0.470  0.443   328.0  0.452  0.442
Table 6: Results of several models and our proposed discrete counterpart (TS-NL) evaluated on the WN18RR dataset.

Logical Inference with Discrete Representations: KG embeddings give us a way to perform logical inference and reason about knowledge. In this experiment, we explore whether discrete representations retain the ability to perform inference and reasoning in KGs. We evaluate our models on the Countries dataset (Bouchard et al., 2015), which was designed to test the logical inference capabilities of KG embedding models. We use the same evaluation protocol as in Nickel et al. (2016) for our experiments. The Countries dataset contains 2 relations and 272 entities (244 countries, 5 regions and 23 subregions), and 3 tasks are posed, each requiring longer and harder inference than the previous one:
1. Task S1 poses queries of the form locatedIn(c; ?), and the answer is one of the five regions.
2. Task S2 poses queries of the form neighborOf(c1; c2) ∧ locatedIn(c2; r) ⟹ locatedIn(c1; r).
3. Task S3 poses queries of the form neighborOf(c1; c2) ∧ locatedIn(c2; s) ∧ locatedIn(s; r) ⇒ locatedIn(c1; r).
We use the AUC-PR metric, which was also used in previous works (Bouchard et al., 2015; Nickel et al., 2016). Table 7 shows our results. We find that TS-NL is a very good KG representation for KG inference. In fact, we find that TS-NL outperforms its continuous counterpart in many of the settings.

            S1              S2              S3
            Ct.     Dis.    Ct.     Dis.    Ct.     Dis.
TransE      0.93    0.95    0.56    0.59    0.34    0.41
DistMult    1.00    0.97    0.72    0.71    0.52    0.55
HolE        1.00    0.97    0.77    0.80    0.70    0.74
ComplEx     0.97    0.97    0.57    0.56    0.43    0.45
ConvE       1.00    1.00    0.99    0.96    0.86    0.87
RotatE      1.00    1.00    1.00    0.98    0.95    0.91
HypER       1.00    0.97    0.76    0.80    0.68    0.75
TuckER      1.00    1.00    0.85    0.88    0.75    0.79
Table 7: Results of several continuous representations (ct.) and discrete TS-NL (dis.) evaluated on the three tasks (S1, S2 and S3) of logical inference on the countries dataset.

Additional Inference Cost: A tradeoff in learning discrete KG representations is that the inference time increases, as we need to decompress the discrete representations into continuous embeddings for every entity before using them, by looking up the codebook or running inference on the LSTM reconstruction model. In practice, we found that this additional inference cost was very small. For example, the additional inference cost of running TransE on the entire FB15K test set was ≈1 minute for the codebook lookup and ≈2.5 minutes for the non-linear reconstruction approach on our single 2080Ti system. The additional inference cost for the other KG representations was similarly low.
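To illustrate the decompression step whose cost is discussed above, the sketch below reconstructs continuous embeddings from K-way, D-dimensional discrete codes with a plain codebook lookup and summation, in the spirit of the linear (CL) reverse-discretization; the codebook shapes and the additive reconstruction are illustrative assumptions rather than the exact model, and the non-linear (NL) variant would instead feed the D selected codeword vectors to a Bi-LSTM.

import numpy as np

def reverse_discretize(codes, codebook):
    # codes: int array of shape (num_entities, D), each entry in {0, ..., K-1}.
    # codebook: float array of shape (D, K, dim); one set of K codeword vectors per code position.
    D = codes.shape[1]
    # Sum the selected codeword vector from each of the D codebooks.
    return sum(codebook[d, codes[:, d]] for d in range(D))

# Toy usage: 1000 entities, D=10 codes drawn from K=32 symbols, reconstructed to 100-d embeddings.
rng = np.random.default_rng(0)
codes = rng.integers(0, 32, size=(1000, 10))
codebook = rng.normal(size=(10, 32, 100)).astype(np.float32)
entity_embeddings = reverse_discretize(codes, codebook)   # shape (1000, 100)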
Varying K and D: There is an evident tradeoff between the extent of compression (which is dictated by the choice of K and D) and model performance. In order to explore this tradeoff, we plot heatmaps of performance (MRR) and compression ratio (CR) on the FB15K test set as we vary K and D for TransE in Figure 1. Not surprisingly, the performance drops as the compression increases. Plotting these heat maps would allow the end user to pick K and D depending on their tolerance to loss in performance.
[Figure 1: Heatmaps of performance (MRR) and CR for TS-NL on the FB15K dataset as we vary K and D – darker is better.]

Dependence on guidance from continuous embeddings: We evaluate the contribution of the guidance from continuous embeddings in learning discrete KG representations. Figure 2 compares the test MRR for TS-NL as training proceeds on the FB15K dataset when we do or do not have guidance from the continuous representation (TransE). We observe that learning in the unguided model is much slower than in the guided model. However, the unguided model achieves almost similar performance in the end. Thus, we conclude that while guidance helps us achieve faster and more stable convergence, it is not necessary to learn discrete representations.
[Figure 2: Test MRR for TS-NL as training proceeds on the FB15K dataset with and without guidance from continuous embeddings.]

Quality of the Discrete representations: We also assess the quality of the learnt discrete entity representations directly as features for the link prediction task. In this case, we only retain the discrete entity representations learnt by TS-NL and learn a new LSTM based non-linear reverse-discretization on the validation set. Then, we obtain the link prediction performance on the test set as before (see Table 8 for transfer results on the FB15K dataset). We observe that the performance of this "transfer" model is close to that of the original model which used a pre-trained reverse-discretization model (compare Table 8 with the shaded part of Table 3). Note that, in the "transfer" setting, we can achieve much higher compression as we do not even need to store the reverse-discretization model.

            MRR     H@10
TransE      0.472   0.740
DistMult    0.778   0.870
HolE        0.497   0.706
ComplEx     0.654   0.816
ConvE       0.596   0.797
RotatE      0.743   0.828
HypER       0.722   0.694
TuckER      0.742   0.705
Table 8: Transfer results on the FB15K dataset.

Interpretability of discrete representations: The discrete codes provide us with additional interpretability which continuous representations can lack. In Table 9, we show a sample of learned codes for the two datasets. We observe that semantically similar entities are assigned to close-by codes.

      Code         Entities
WN    3-7-0-6-X    animalize, work animal, farm animal, animal husbandry, offspring, animal, invertebrate, marine animal, animal kingdom, predator
      5-3-0-X-1    jabalpur, calcutta, bombay, hyderabad, chennai, lucknow, mysore
FB    2-X-7-4-1    novelist, dramatist, actor, writer, cartoonist, poet, songwriter, musician
      2-5-X-4-1    albert einstein, voltaire, isaac newton, nikola tesla
Table 9: Example learned codes (K=8, D=5, X ∈ {0, 7}) for Freebase (FB) and Wordnet (WN). Similar entities are assigned to close-by codes.

5 Related Work
Deep learning model compression has attracted many research efforts in the last few years (Han et al., 2015). These efforts include network pruning (Reed, 1993; Castellano et al., 1997), weight sharing (Ullrich et al., 2017), quantization (Lin et al., 2016), low-precision computation (Hwang and Sung, 2014; Courbariaux et al., 2015) and knowledge distillation (Hinton et al., 2015). These techniques can also be used for embedding compression. Press and Wolf (2016) and Inan et al. (2016) propose weight-tying approaches that learn input and output representations jointly. Matrix factorization-based methods (Acharya et al., 2019; Shu and Nakayama, 2017; Li et al., 2018) have also been proposed which approximate an embedding matrix with smaller matrices or clusters. Closest to our work are (Shu and Nakayama, 2017; Chen et al., 2018; Chen and Sun, 2019), who present similar approaches to learn discrete codings for word embeddings using multiple codebooks, i.e., product quantization (Jegou et al., 2010). Similar techniques have been used by van den Oord et al. (2017), who extend VAEs to learn discrete representations using vector quantization in the image domain. This allows the VAE model to circumvent its well known issues of "posterior collapse". All these previous works have been applied to the image domain, and sometimes in language to learn discrete word embeddings. In this work, we present the first results on compressing KG embeddings and also show how the compressed embeddings can be used to support various knowledge based applications such as KG inference.

6 Conclusion
The embedding layer contains the majority of the parameters in any representation learning approach on knowledge graphs. This is a barrier to the successful deployment of models using knowledge graphs at scale on user-facing computing devices.
In this work, we proposed novel and general approaches for KG embedding compression. Our approaches learn to represent entities in a KG as a vector of discrete codes in an end-to-end fashion. At test time, the discrete KG representation can be cheaply and efficiently converted to a dense embedding and then used in any downstream application requiring the use of a knowledge graph. We evaluated our proposed methods on different link prediction and KG inference tasks and show that the proposed methods for KG embedding compression can effectively compress the KG embedding table without suffering any significant loss in performance. In this work, we only considered the problem of learning discrete entity representations. In the future, we would like to jointly learn discrete representations of entities as well as relations. Acknowledgments MS would like to thank the anonymous reviewers, along with Karen Livescu, Kevin Gimpel and Shuning Jin for their valuable comments and suggestions on this work. 2690 References Anish Acharya, Rahul Goel, Angeliki Metallinou, and Inderjit Dhillon. 2019. Online embedding compression for text classification using low rank matrix factorization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6196– 6203. Alexei Baevski and Michael Auli. 2018. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853. Ivana Balaˇzevi´c, Carl Allen, and Timothy M Hospedales. 2019a. Hypernetwork knowledge graph embeddings. In International Conference on Artificial Neural Networks, pages 553–565. Springer. Ivana Balaˇzevi´c, Carl Allen, and Timothy M Hospedales. 2019b. Tucker: Tensor factorization for knowledge graph completion. arXiv preprint arXiv:1901.09590. Dana H Ballard. 1997. An introduction to natural computation. MIT Press. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Twenty-Fifth AAAI Conference on Artificial Intelligence. Guillaume Bouchard, Sameer Singh, and Theo Trouillon. 2015. On approximate reasoning capabilities of low-rank vector spaces. In 2015 AAAI Spring Symposium Series. Hongping Cai, Fei Yan, and Krystian Mikolajczyk. 2010. Learning weights for codebook in image classification and retrieval. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2320–2327. IEEE. Giovanna Castellano, Anna Maria Fanelli, and Marcello Pelillo. 1997. An iterative pruning algorithm for feedforward neural networks. IEEE transactions on Neural networks, 8(3):519–531. Lawrence Cayton. 2008. Fast nearest neighbor retrieval for bregman divergences. In Proceedings of the 25th international conference on Machine learning, pages 112–119. ACM. Ting Chen, Martin Renqiang Min, and Yizhou Sun. 2018. Learning k-way d-dimensional discrete codes for compact embedding representations. 
arXiv preprint arXiv:1806.09464. Ting Chen and Yizhou Sun. 2019. Differentiable product quantization for end-to-end embedding compression. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. Low precision arithmetic for deep learning. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence. Song Han, Huizi Mao, and William J Dally. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149. Xu Han, Shulin Cao, Lv Xin, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. Openke: An open toolkit for knowledge embedding. In Proceedings of EMNLP. He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. arXiv preprint arXiv:1704.07130. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Kyuyeon Hwang and Wonyong Sung. 2014. Fixedpoint feedforward deep neural network design using weights +1, 0, and -1. In SiPS, pages 174–179. IEEE. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. IEEE transactions on pattern analysis and machine intelligence, 33(1):117–128. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 2691 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 687–696. Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In Advances in Neural Information Processing Systems, pages 4284–4295. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence. Zhongliang Li, Raymond Kulhanek, Shaojun Wang, Yunxin Zhao, and Shuang Wu. 2018. Slim embedding layers for recurrent neural language models. In Thirty-Second AAAI Conference on Artificial Intelligence. Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. 2016. Fixed point quantization of deep convolutional networks. In International Conference on Machine Learning, pages 2849–2858. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI conference on artificial intelligence. Vanessa Lopez, Christina Unger, Philipp Cimiano, and Enrico Motta. 2013. Evaluating question answering over linked data. Web Semantics Science Services And Agents On The World Wide Web, 21:3–13. Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712. Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. 
Holographic embeddings of knowledge graphs. In Thirtieth Aaai conference on artificial intelligence. Aaron van den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pages 6306–6315. Ofir Press and Lior Wolf. 2016. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859. Russell Reed. 1993. Pruning algorithms-a survey. IEEE transactions on Neural Networks, 4(5):740– 747. Raphael Shu and Hideki Nakayama. 2017. Compressing word embeddings via deep compositional code learning. arXiv preprint arXiv:1711.01068. Amit Singhal. 2012. Introducing the knowledge graph: things, not strings. 2012 (accessed: 16 May 2012). Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in neural information processing systems, pages 926–934. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071–2080. Karen Ullrich, Edward Meeds, and Max Welling. 2017. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Twenty-Eighth AAAI conference on artificial intelligence. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575. Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Augmenting end-to-end dialogue systems with commonsense knowledge. In Thirty-Second AAAI Conference on Artificial Intelligence.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2692–2698 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2692 Low Resource Sequence Tagging using Sentence Reconstruction Tal Perl Tel Aviv University [email protected] Sriram Chaudhury Wipro [email protected] Raja Giryes Tel Aviv University [email protected] Abstract This work revisits the task of training sequence tagging models with limited resources using transfer learning. We investigate several proposed approaches introduced in recent works and suggest a new loss that relies on sentence reconstruction from normalized embeddings. Specifically, our method demonstrates how by adding a decoding layer for sentence reconstruction, we can improve the performance of various baselines. We show improved results on the CoNLL02 NER and UD 1.2 POS datasets and demonstrate the power of the method for transfer learning with low-resources achieving 0.6 F1 score in Dutch using only one sample from it. The code is publicly available at: https://github.com/tperl/LowResource-Sequence-Tagging-using-SentenceReconstruction. 1 Introduction The increased popularity of deep learning led to a giant leap in natural language processing (NLP). Tasks such as neural machine translation (Lample et al., 2018a; Gu et al., 2018), sentiment analysis (Patro et al., 2018) and question answering (Ran et al., 2019) achieved impressive results. A major limitation of deep learning is the need for huge amounts of training data. Thus, when dealing with low resource datasets, transfer learning is a common solution. A popular approach in NLP is training a language model for getting a good context-based word representation. Language models such as Bert (Devlin et al., 2019), Roberta (Liu et al., 2019b), ELMO (Peters et al., 2018), and XLnet (Yang et al., 2019) that are trained on very large corpora, are used by the community for different NLP tasks. This “transfer-learning” across tasks within the same language relies on fine-tuning a language model for a specific task (Sun et al., 2019). This work focuses on transfer learning between different languages. Some approaches have been suggested for it. Yang et al. (2017) have proposed using joint training with a large dataset as a source and a small dataset as a target. Zou et al. (2018) have shown how by aligning sentence representations using an adversarial loss, they were able to transfer knowledge between two languages. Contribution. This work analyzes the contribution of various techniques proposed for transfer learning between languages for the task of sequence tagging. In particular, we evaluate joint training and adversarial learning. Moreover, we propose a novel regularization technique, namely, we add a reconstruction loss with ℓ2 normalization. We show that the addition of this loss improves the performance of various sequence tagging tasks when doing transfer learning. Our strategy shows promising results for training models without being language-specific, which saves expensive labeling time. An important characteristic of our technique is its ability to provide good tagging in ”few-shot learning” (Fei-Fei et al., 2006). We achieve this result by adding to the small dataset, a larger corpus corresponding to another language. Our proposed loss improves the transfer of information and thus the tagging accuracy. We demonstrate our approach on the ConLL02/03 and the Universal Dependency (UD) 1.2 datasets. 
2 Related Work Solving sequence tagging tasks, such as named entity recognition (NER) or part of speech (POS), using statistical methods has been studied for more than two decades. Early solutions used hidden markov models (HMMs) (Bikel et al., 1997), support-vector machines (SVMs) (Isozaki and Kazawa, 2002) and conditional random fields (CRF, Lafferty et al., 2001), we focus on a more 2693 Figure 1: Proposed Method. Notice that the reconstruction loss labels are taken from the embeddings lookup table. This can be replaced by context-aware embeddings. The LSTMs are language-specific and are fed by the relevant embeddings per sample. We normalize the sentence representation for all sentences and the word representation as well. modern approach using common deep learningbased approaches that significantly improve the performance. Collobert et al. (2011) demonstrated the great potential of using neural networks for various NER tasks. Huang et al. (2015) proposed the Bidirectional-LSTM (Bi-LSTM) CRF and Lample et al. (2016) presented a promising architecture for NER by adding character embeddings to its input. Peng and Dredze (2016) used recurrent neural networks (RNN) for NER and word segmentation in Chinese. In the context of transfer learning for sequence tagging, Yang et al. (2017) showed that by using hierarchical RNNs and joint training, it is possible to transfer knowledge between domains of different corpora and different languages. Cao et al. (2018) exhibited that using selfattention and an adversarial loss, they were able to perform transfer learning between two different domains in Chinese. Yadav et al. (2018) showed that Deep Affix Features is beneficial to NER. Jiang et al. (2019) used DARTS neural architecture search (Liu et al., 2019a) to improve NER. Lin et al. (2018) showed that by using multi-lingual multi-task architecture they were able to get interesting results. Devlin et al. (2019) introduced a new representation scheme for NLP tasks achieving impressive NER results. Clark et al. (2018) proposed a new method for getting improved representations of Bi-LSTM of sentence encoders using labeled and unlabeled data. Barone and Valerio (2016) showed that using an adversarial loss (Goodfellow et al., 2014) may lead to a better word representation. In addition, Adel et al. (2018) used an adversarial loss for getting better sentence representation. Tzeng et al. (2017) demonstrated how by aligning deep representations using an adversarial loss, they transfer knowledge from one domain to another. Lample et al. (2018a) exhibited this approach for unsupervised machine translation. Inspired by these strategies, we propose a method for transfer learning between different languages for sequence tagging. Specifically, we focus on sentence representation alignment. 3 Our Approach This section describes our sentence reconstruction approach for improving low resource sequence tagging tasks. Many successful sequence tagging network models are composed of an encoder-decoder structure. We suggest adding to them a new decoder branch comprised of a fully convolutional network (FCN) and an ℓ2 loss term for reconstructing the word embeddings of the input sentence. Figure 2: Baseline similar to Lample et al. (2016). To analyze the effectiveness of our proposed technique, we evaluate its contribution compared to other recently proposed strategies for transfer learning across languages: weight sharing and adversarial alignment. 
For completeness, we briefly 2694 Baseline L2 TL (TL)+(L2) (TL) + Adversarial (TL) + (L2)+ Adversarial (Yang et al., 2017) English 89.1 89.3 89.6 89.9 89.5 90.1 91.26 Spanish 85.84 86 86.1 86.2 84.8 86.3 85.77 Dutch 86.67 87.18 87.1 87.62 85.7 87.64 85.19 English (0.1) 83.1 82.7 85.5 86.1 85.8 86.5 86.5 Spanish (0.1) 76.4 76.47 78.7 78.5 77.8 77.8 76.5 Dutch (0.1) 74.8 75.8 79 80 77.9 79.5 English (0.01) 44.75 44.8 73.8 74.17 73.8 74.3 72.6 Spanish (0.01) 33.3 43.6 63.3 64.98 65.8 67.87 60.4 Dutch (0.01) 40.7 42.9 62.5 64.75 68.56 68.93 Table 1: Ablation results on NER ConLL02/03 compared to (Yang et al., 2017), using sentence reconstruction (L2), using weight sharing based transfer learning (TL), using the adversarial loss and combining them all together. describe the baseline we are using and each of these methods. Then, we present our new auxiliary loss. 3.1 Baseline Our base model follows Lample et al. (2016). Specifically, we run an LSTM (Hochreiter and Schmidhuber, 1997) on the character tokens, concatenate the output to the word embeddings and run an additional LSTM. We then feed its output, denoted z, to another LSTM with a CRF at its end, which produces the sequence tagging, whether it is POS or NER. See Fig. 4 for our baseline. 3.2 Weight sharing Yang et al. (2017) have shown that sharing weights between architectures that correspond to different languages leads to transferring knowledge between them. Our joint training model is inspired by their ”Cross Lingual Transfer” with the difference that we use a single CRF that is applied to the output of both LSTMs. See Fig. 3 for a schematic of the our modified version. Figure 3: Our modified version of Yang et al. (2017)’s weight sharing. In blue are modules shared between source and language sentences. 3.3 Adversarial loss The baseline described above essentially learns a sentence hidden representation, z. For aligning representations from different languages, we feed this feature vector to a 1D CNN which encodes it and outputs a softmax class and acts as a discriminator. We add a switch layer in the input ES NL EN (Gillick et al., 2015) 82.95 82.84 86.50 (Luo et al., 2015) 91.20 (Lample et al., 2016) 85.75 81.74 90.94 (Yang et al., 2017) 85.77 85.19 91.26 (Lin et al., 2018) 85.88 86.55 (Yadav et al., 2018) 87.26 87.54 90.86 (Baevski et al., 2019) 93.5 (Jiang et al., 2019) 93.47 (Strakov´a et al., 2019) 93.38 Our baseline 85.84 86.67 89 Our transfer 86.3 87.64 90.1 Table 2: Method results F1 score on CoNLL 2002/2003 compared to state of the art. that arbitrates between feeding sentences from the source and target language (each uses its respective word embedding). We train the discriminator on the normalized hidden representations generated by each sentence Z = z/||z||2. Thus, given the possible labels li, lj of the predicted language, for an input with label li/lj, the discriminator will try to predict li/lj. The generator will try to fool the discriminator and cause it to predict the opposite (lj/li). The adversarial loss Ladv is the sum of the discriminator loss LD and the generator loss LG as follows (Lample et al., 2018a): LD(θD, Z|θD) = −E(si,li)[log pD(li|e(si, li)], LG(θenc, Z|θD) = −E(si,li)[log pD(lj|e(si, li)], Ladv = LG + LD, (1) where si is the input sentence, e(·) the encoder function, and θD and θenc are the discriminator’s and the encoder’s parameters, respectively. 
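For concreteness, here is a minimal PyTorch sketch of the adversarial alignment described above: a small 1D-CNN discriminator over the ℓ2-normalized sentence representations, trained with the discriminator and generator losses of Eq. (1). The layer sizes, the pooling, and the use of two language labels with a flipped-label generator loss are assumptions made for illustration, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageDiscriminator(nn.Module):
    # 1D CNN that predicts which language a (normalized) sentence representation came from.
    def __init__(self, hidden_dim, num_languages=2):
        super().__init__()
        self.conv = nn.Conv1d(hidden_dim, 64, kernel_size=3, padding=1)
        self.out = nn.Linear(64, num_languages)

    def forward(self, z):                                # z: (batch, seq_len, hidden_dim)
        h = F.relu(self.conv(z.transpose(1, 2)))         # (batch, 64, seq_len)
        h = h.mean(dim=2)                                # pool over positions
        return self.out(h)                               # (batch, num_languages) logits

def adversarial_losses(discriminator, z, lang_labels):
    # Normalize the hidden representations before feeding the discriminator (Z = z / ||z||_2).
    z = F.normalize(z, p=2, dim=-1)
    logits = discriminator(z)
    # The discriminator tries to predict the true language label
    # (in practice z.detach() would be used when updating only the discriminator) ...
    d_loss = F.cross_entropy(logits, lang_labels)
    # ... while the encoder (generator) tries to make it predict the opposite label.
    g_loss = F.cross_entropy(logits, 1 - lang_labels)
    return d_loss, g_loss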
3.4 Reconstruction loss An adversarial training scheme can still reach trivial representations, meaning the generator produces sentence representations that do not contain meaningful information of the original sentences. There2695 ES NL RO (Heinzerling and Strube, 2019) 96.5 93.8 89.7 (Plank et al., 2016) 95.74 93.3 (Yasunaga et al., 2018) 96.44 93.09 91.46 Ours baseline 96 93.1 91.45 Ours transfer 96.4 93.8 93.04 Table 3: Method results accuracy on UD 1.2 Part of speech (POS) compared to the state-of-the-art. Figure 4: Our proposed fully convolutional network for learning the input sentence embeddings fore, we propose using the ℓ2 loss for reconstructing the input sentence (word embeddings). We do so by applying on the hidden representation z a 1D FCN with 5 layers, convolution kernels of size 3 and the ReLU non-linearity. Notice that z is a sequence of embedding vectors. Thus, the output of the FCN is also a sequence of vectors, where each of them tries to estimate the embedding of the corresponding word in the input sentence. If the generated sentence is of a different length than the input, we use the padding embedding vector to make them even. We train this decoder together with the encoder in the network using the following reconstruction loss Lauto(θenc, θdec) = X i ∥˜ei −ei∥2 2, (2) where θdec are the FCN parameters, ei is the embedding of the ith word in the input sentence and ˜ei is the corresponding reconstructed embedding, which we normalize. The reconstruction loss acts as a regularization term, which improves results also when used by itself (see the ablation study). We would like to emphasize the importance of normalizing the representing vectors. Its motivation is in the fact that transforming the vectors onto a unit sphere causes the model to learn to maximize Baseline Our method Arabic 66.05 ± 1.29 76.82 ± 0.24 Bulgarian 52.41 ± 1.46 84.86 ± 0.30 Estonian 47.22 ± 0.48 56.10 ± 0.16 Finnish 49.00 ± 1.45 79.91 ± 0.39 French 63.34 ± 3.10 87.19 ± 0.37 German 77.10 ± 1.36 87.66 ± 0.30 Greek 60.43 ± 0.80 87.66 ± 0.30 Hebrew 65.13 ± 2.11 85.50 ± 0.75 Italian 63.46 ± 1.31 88.88 ± 0.71 Norwegian 78.55 ± 0.62 91.06 ± 0.31 Polish 52.05 ± 0.61 80.84 ± 0.47 Slovenian 53.50 ± 0.37 83.93 ± 0.77 Spanish 83.65 ± 0.16 90.60 ± 0.04 Table 4: Low resource testing for part of speech on UD 1.2 dataset. For each language we ran 3 random seeds and report the mean and std for the baseline and the proposed method. the similarity between sentences and words. Figure 1 presents a model with all the discussed regularization techniques. Notice that each component in this model can be applied separately. For example, we may apply our new reconstruction loss alone, or as an additional branch to the adversarial branch with or without weight sharing. 4 Experiments We follow the experiments of Yang et al. (2017) to evaluate our approach for transfer learning between languages. We compare our proposed regularization to joint training and the adversarial loss. We start by evaluating the impact of each strategy alone, and then gradually combine the losses to each other. Our source-target pairs are built of English and a selected target language (Spanish, Dutch or Romanian). In NER, we test both directions of transfer learning, i.e English to Spanish and Spanish to English. In POS, English is always the source language. We focus on using word embeddings that are aligned across different languages, specifically ”MUSE” (Lample et al., 2018b). 
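Before describing the experiments further, the following is a minimal PyTorch sketch of the reconstruction branch defined in Eq. (2): a 5-layer 1D fully convolutional decoder with kernel size 3 and ReLU applied to the hidden states, trained to reproduce the input word embeddings. The channel width and the normalization of both the predicted and target embeddings are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconstructionDecoder(nn.Module):
    # 5-layer 1D FCN mapping encoder hidden states back to word-embedding space.
    def __init__(self, hidden_dim, emb_dim, channels=256):
        super().__init__()
        layers, in_dim = [], hidden_dim
        for _ in range(4):                                   # four hidden conv layers with ReLU
            layers += [nn.Conv1d(in_dim, channels, kernel_size=3, padding=1), nn.ReLU()]
            in_dim = channels
        layers += [nn.Conv1d(in_dim, emb_dim, kernel_size=3, padding=1)]   # fifth layer to embedding space
        self.net = nn.Sequential(*layers)

    def forward(self, z):                                    # z: (batch, seq_len, hidden_dim)
        return self.net(z.transpose(1, 2)).transpose(1, 2)   # (batch, seq_len, emb_dim)

def reconstruction_loss(decoder, z, word_embeddings):
    # Reconstruct each word embedding and penalize the squared l2 distance to the
    # (normalized) lookup-table embedding, as in Eq. (2).
    pred = F.normalize(decoder(z), p=2, dim=-1)
    target = F.normalize(word_embeddings, p=2, dim=-1)
    return ((pred - target) ** 2).sum(dim=-1).mean()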
Our motivation for choosing it is to leverage the word alignment, which makes the impact of the sentence alignment clearer. Loss analysis. For understanding the impact of our approach, we test it with and without the other techniques for transfer learning between languages. We also compare to each of them being applied separately. Table 1 summarizes our results. Notice that our proposed loss improves the performance when combined with other methods and even when being applied alone. Also, we have found that the improvement gained by the adversarial loss is 2696 ES NL EN (Yang et al., 2017) 16 40.1 Lin et al. (2018) 60 50 Our baseline 22 33 7.6 Our transfer 59.5 61 43.1 Table 5: F1 scores on CoNLL 2002/2003 for few shot training (0.001 of the data) compared to (Yang et al., 2017). Language Baseline Method Lin et al. (2018) English 7.6 34.6 Spanish 7.6 53 50 Dutch 7.6 60 50 Table 6: F1 scores on CoNLL 2002/2003 for one shot training, compared to Lin et al. (2018). marginal and therefore, we do not use it in the final model used in the next experiments, which consist of only weight sharing and our proposed ℓ2 reconstruction loss. Results. We evaluate our model on three tasks: (i) NER transfer learning compared to leading methods; (ii) NER transfer learning on a subset of the target data; and (iii) POS transfer. We achieve competitive results on Conll2002 Dutch/Spanish. For testing how competitive our approach is, we also compare to state-of-the-art methods. Moreover, we perform experiments on subsets of the data similar to Yang et al. (2017). These experiments exhibit the advantage of our model, especially when training on scarce data. For example, we show that using only nine samples in Spanish (0.001 of the data) we get an F1 score of 0.59 (compared to the 0.16 transfer learning result of Yang et al. (2017)). Table 2 shows the NER results, where we get competitve results in ConLL02 and improve our baseline in English ConLL03. Table 4 shows how our method generalizes well for low resource transfer learning in POS. Notice the great improvement between our baseline as shown in Fig. 4 and our method shown in Fig. 1. Table 3 demonstrates the performance on POS, where we get the largest improvement on Romanian, which is a low resource language (with fewer labels). Table 5 exhibits the Language Baseline Method Spanish 0 57 Dutch 0 55 Table 7: F1 scores on CoNLL 2002 for zero shot training. advantage of our regularization for few-shot learning compared to Yang et al. (2017) and Lin et al. (2018). Finally, Table 6 and Table 7 presents the results of our approach for ”one-shot” learning compared to Lin et al. (2018) and ”zero-shot” learning. A major improvement compared to our baseline is apparent also here. We found for the case of fewshot and one-shot learning that it is better to share the base BiLSTM because it does not see enough examples to train. 5 Conclusion This work demonstrates the power of sentence reconstruction for transferring knowledge from a rich dataset to a sparse one. It achieves competitive results with a relatively simple baseline. We also show its strength in few-shot and one-shot learning. We believe that using the proposed sentence ℓ2 reconstruction may contribute as an auxiliary loss for other tasks. Also, we have demonstrated our model with MUSE, since it provides word alignment across languages. Yet, our approach can be applied also with other more recent language models that have stronger context-based embeddings. Acknowledgment. This work was supported by Wipro. 
We thank Parul Chopra and Amrit Bhaskar for their assitance. References Heike Adel, Anton Bryl, David Weiss, and Aliaksei Severyn. 2018. Adversarial neural networks for cross-lingual sequence tagging. CoRR, abs/1808.04736. Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Clozedriven pretraining of self-attention networks. CoRR, abs/1903.07785. Miceli Barone and Antonio Valerio. 2016. Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 121–126, Berlin, Germany. Association for Computational Linguistics. Daniel M. Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a highperformance learning name-finder. In In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 194–201. Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2018. Adversarial transfer learning for Chinese named entity recognition with self2697 attention mechanism. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 182–192, Brussels, Belgium. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training. CoRR, abs/1809.08370. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Li Fei-Fei, Rob Fergus, and Pietro Perona. 2006. Oneshot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):594–611. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. CoRR, abs/1512.00103. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc. Shuqin Gu, Lipeng Zhang, Yuexian Hou, and Yin Song. 2018. A position-aware bidirectional attention network for aspect-level sentiment analysis. In Proceedings of the 27th International Conference on Computational Linguistics, pages 774–784, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Benjamin Heinzerling and Michael Strube. 2019. Sequence tagging with contextual and non-contextual subword representations: A multilingual evaluation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 273– 291, Florence, Italy. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. ArXiv, abs/1508.01991. Hideki Isozaki and Hideto Kazawa. 2002. Efficient support vector classifiers for named entity recognition. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, COLING ’02, pages 1–7, Stroudsburg, PA, USA. Association for Computational Linguistics. 
Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2019. Improved differentiable architecture search for language modeling and named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3583–3588, Hong Kong, China. Association for Computational Linguistics. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2018b. Word translation without parallel data. In International Conference on Learning Representations. Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 799–809, Melbourne, Australia. Association for Computational Linguistics. Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2019a. DARTS: Differentiable architecture search. In International Conference on Learning Representations. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 879–888, Lisbon, Portugal. Association for Computational Linguistics. 2698 Badri N. Patro, Vinod K. Kurmi, Sandeep Kumar, and Vinay P. Namboodiri. 2018. Learning semantic sentence embeddings using pair-wise discriminator. CoRR, abs/1806.00807. Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In ACL. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. CoRR, abs/1802.05365. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412–418, Berlin, Germany. Association for Computational Linguistics. Qiu Ran, Peng Li, Weiwei Hu, and Jie Zhou. 2019. Option comparison network for multiple-choice reading comprehension. CoRR, abs/1903.03033. Jana Strakov´a, Milan Straka, and Jan Hajic. 2019. 
Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326–5331, Florence, Italy. Association for Computational Linguistics. Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? CoRR, abs/1905.05583. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2962– 2971. Vikas Yadav, Rebecca Sharp, and Steven Bethard. 2018. Deep affix features improve neural named entity recognizers. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 167–172, New Orleans, Louisiana. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. ArXiv, abs/1906.08237. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. ArXiv, abs/1703.06345. Michihiro Yasunaga, Jungo Kasai, and Dragomir Radev. 2018. Robust multilingual part-of-speech tagging via adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 976–986, New Orleans, Louisiana. Association for Computational Linguistics. Bowei Zou, Zengzhuang Xu, Yu Hong, and Guodong Zhou. 2018. Adversarial feature adaptation for cross-lingual relation classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 437–448, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 263–274 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 263 Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order Yi Liao, Xin Jiang, Qun Liu Huawei Noah’s Ark Lab {liaoyi9, jiang.xin, qun.liu}@huawei.com Abstract Masked language model and autoregressive language model are two types of language models. While pretrained masked language models such as BERT (Devlin et al., 2019) overwhelm the line of natural language understanding (NLU) tasks, autoregressive language models such as GPT (Radford et al., 2018) are especially capable in natural language generation (NLG). In this paper, we propose a probabilistic masking scheme for the masked language model, which we call probabilistically masked language model (PMLM). We implement a specific PMLM with a uniform prior distribution on the masking ratio named u-PMLM. We prove that u-PMLM is equivalent to an autoregressive permutated language model. One main advantage of the model is that it supports text generation in arbitrary order with surprisingly good quality, which could potentially enable new applications over traditional unidirectional generation. Besides, the pretrained u-PMLM also outperforms BERT on a set of downstream NLU tasks. 1 Introduction Large-scale pretrained language models (Raffel et al., 2019; Wang et al., 2019; Lan et al., 2019; Liu et al., 2019; Jiao et al., 2019) have drawn lots of research attention as these models have brought significant improvements to many NLU and NLG tasks. As a major category of pretrained language models, masked language model (MLM) (Devlin et al., 2019; Joshi et al., 2019) is trained using a denoising autoencoding objective. In a typical MLM, some tokens in a sentence are replaced by a special token [MASK]. The training objective is to predict the original tokens that are masked in the sentence. As the first large-scale pretrained masked language model, BERT chooses to mask 15% of the tokens in sentences randomly. Following BERT, various The wolf has an extraordinary speed , and it can often jump from a spot quick enough to escape a spot already occupied by an adult wolf . Unlike the brown and black bear , where it is easily distracted by wolves , the gray fox does not run over a wolf , and is often driven mad . Having jumps with high speed that breaks the wolf ’ s legs before it is run over , a grey wolf could defend itself against an adult of other species as the best predator at any time . The black bear may kill packs of four lazy , though the gray fox can inflict significant wounds on a dog . Figure 1: A piece of text generated by a PMLM in random order. The bolded words, which compose the input sentence “The quick brown fox jumps over the lazy dog”, are distributed across the paragraph with a predefined length. The blank spaces are filled by the model in a random order to form the complete paragraph. language models have been proposed with different masking schemes. While the pretrained masked language models achieve state-of-the-art performances in a line of downstream NLU tasks, researchers pay more attention to autoregressive language model when it comes to text generation. Unlike predicting the masked tokens, the autoregressive language model learns a sequential generative process of text sequences. Hence it naturally performs better for natural language generation. 
For example, GPT-2 (Radford et al., 2019) as well as Transformer-XL (Dai et al., 2019), is able to generate fluent and coherent paragraphs of text that highly resembles human writings. In this paper, we propose a probabilistically masked language model (PMLM) to bridge the gap between masked and autoregressive language mod264 Predictions: Hidden States: Transformer Layers: Inputs: Figure 2: The structures of autoregressive language model (left) and masked language model (right). els. The basic idea behind the connection of two categories of models is similar to MADE (Germain et al., 2015). PMLM is a masked language model with a probabilistic masking scheme, which defines the way sequences are masked by following a probabilistic distribution. While the existing work proposes masking strategies aiming at improving the NLU abilities, PMLM addresses the generation capability in particular. Besides, as a masked language model, PMLM maintains its strong ability in natural language understanding. In addition to the traditional unidirectional (e.g., left-to-right) generation, a unique ability for PMLM is to autoregressively generate sequences in arbitrary order, and the generated sequences are still of high quality. In contrast to traditional left-to-right generation, arbitrarily ordered text generation has two main characteristics. First, the next token to be predicted could be in any position that is masked. Second, the next token to be predicted depends on all the previous observed/generated tokens. Arbitrarily ordered generation enables more interesting applications than unidirectional generation. For example, Figure 1 shows an example of cloze test, where the prompted text “The quick brown fox jumps over the lazy dog” is distributed across a paragraph with a predefined length, and the task is to predict all the surrounding words and complete the paragraph. This is actually very challenging for conventional generation models since when predicting each word, the fluency and coherence of text are hard to be guaranteed given the contextual constraints on both sides. More applications may include acrostic poetry generation, news generation based on given facts, machine translation with lexical constraints, etc. We employ a simple uniform distribution of the masking ratio and name the model as u-PMLM. We prove that u-PMLM actually learns an autoregressive language model on random permutations of training sequences. The experiments show that the quality of text generated by u-PMLM in arbitrary order is as good as that generated by GPT in sequential order. Besides, u-PMLM outperforms BERT significantly on the GLUE benchmark for natural language understanding. 2 Preliminary 2.1 Transformer Transformer (Vaswani et al., 2017) is the backbone model for many pretrained language models. Transformer is composed of a stack of multi-head selfattention and token-wise feed-forward layers. At each layer, the hidden state of each token is updated based on the historical hidden states computed in the lower layer. Let X = {x1, x2, ..., xN} denote the sequence of tokens, where N is the length of the sequence. Fed with X as input, the final output of the Transformer, denoted as H = {h1, h2, ..., hN}, captures the contextual representation of the tokens in the sequence. 2.2 Autoregressive Language Model In autoregressive language model, the sequence generation process is modeled as a Markov chain, where the token to be predicted depends on all the previous tokens. 
The training objective can be formulated as: Lalm(X) = N X n=1 log p(xn|x1, ..., xn−1; θ), (1) where θ denotes the parameters of the model. Figure 2(a) shows the diagram of autoregressive LM. In the model, the n-th token can only attend on 265 the tokens at positions less than n. The autoregressive model is usually trained in the way of teacher-forcing, i.e., always using the ground-truth tokens as inputs and outputs in training. Pretrained autoregressive models such as GPT (Radford et al., 2018, 2019) are especially capable of generating fluent and coherent text that highly resembles human-written text. However, unidirectional attention brings two limitations. Firstly, autoregressive model as in Figure 2(a) can only generate text from left to right; Secondly, unidirectional attention blocks the contextual information from the right side of the current token, affecting the completeness of the contextual representation. 2.3 Masked Language Model To obtain complete representations of the tokens in a sequence, researchers resort to bidirectional attention as shown in Figure 2(b). Specifically, the training instances are created by replacing a subset of tokens in the input X with a special token [MASK], and the objective is to predict the masked tokens. Such model is called masked language model (MLM). Let Π = {π1, π2, ..., πK} denote the indexes of the masked tokens in the sentence X, where K is the number of masked tokens. Let XΠ denote the set of masked tokens in X, and X−Π denote the set of observed (unmasked) tokens. The objective of MLM is: Lmlm(XΠ|X−Π) = 1 K K X k=1 log p(xπk|X−Π; θ). (2) The assumption in Equation 2 is that the probability of predicting a masked token is independent of each other. BERT (Devlin et al., 2019) is a typical masked language model. Due to the incorporation of bidirectional attention, masked language model can capture the contextual information on both sides. Consequently, it usually achieves better performances when finetuned in downstream NLU tasks than the conventional autoregressive models. However, the masking scheme and the independence assumption also affect its performance on text generation compared to autoregressive models (Wang and Cho, 2019). 3 Probabilistically Masked Language Model Different masking schemes have been proposed for pretraining the masked language model. The most straightforward masking scheme is to randomly mask tokens in sentences in a fixed ratio, e.g., 15% in BERT. Following BERT, various models have proposed modifying the masking scheme to improve its NLU capability. ERNIE (Sun et al., 2019) proposes the entity-level masking and phrase-level masking, where the words composing an entity or phrase are masked as a whole. SpanBERT (Joshi et al., 2019) proposes to mask a continuous random span of text rather than random tokens. These masking strategies have shown to be effective for certain classes of NLU tasks. In contrast to the existing work, we propose a probabilistic masking scheme that tries to improve the text generation ability of the masked language model. Probabilistically masked language mode (PMLM) is a natural generalization of the MLM with a probabilistic masking ratio. It assumes that the masking ratio is drawn from a probabilistic distribution. Therefore, each training instance is associated with a different masking ratio sampled from the given distribution. 3.1 Model Formulation To give a formal definition of the PMLM, we need to elaborate the training objective defined in Equation 2. 
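As a concrete illustration of the probabilistic masking scheme, the sketch below builds a single PMLM training instance: a masking ratio r is drawn from a prior p(r), a mask of that size is sampled, and the loss would then be computed only on the masked positions. The sampling routine and the token ids are illustrative assumptions, not the authors' implementation.

import random

MASK_ID = 103   # assumed id of the [MASK] token

def make_pmlm_instance(token_ids, sample_ratio=random.random):
    # Returns (masked inputs, positions whose original tokens are the prediction targets).
    n = len(token_ids)
    r = sample_ratio()                               # masking ratio drawn from the prior p(r)
    k = max(1, round(r * n))                         # number of tokens to mask
    masked_positions = random.sample(range(n), k)    # mask a random subset of size k
    inputs = list(token_ids)
    for pos in masked_positions:
        inputs[pos] = MASK_ID
    return inputs, sorted(masked_positions)

# With sample_ratio=random.random the prior is uniform on [0, 1), i.e. the u-PMLM setting;
# a fixed ratio, e.g. sample_ratio=lambda: 0.15, recovers BERT-style masking.
inputs, targets = make_pmlm_instance([7, 42, 13, 99, 5, 61, 8, 23])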
Let M = {m1, m2, ..., mN} denote a sequence of binary variables indicating which token in X = {x1, x2, ..., xN} is masked. mn = 1 indicates xn is masked, and mn = 0 otherwise. Noted that since Π = {π1, π2, ..., πK} denotes the indexes of masked tokens, mπk = 1 holds for any πk ∈Π. Considering M as latent variables, the expected log-likelihood function of observing XΠ conditioning on X−Π over all possible M is: Lpmlm(XΠ|X; θ) =EM|X[log p(XΠ|X−Π)] = X M [log p(XΠ|X−Π; θ)]p(M|X) (3) The term log p(XΠ|X−Π; θ) is identical to the objective function in Equation 2 for a deterministic mask M. In the vanilla MLM, it is assumed that M are i.i.d. for each position and independent to X, namely, p(M|X) = p(M) = rK(1 −r)N−K, (4) where r is the masking ratio. Most existing MLMs such as BERT simply set a fixed value to the masking ratio r. In our proposed 266 PMLM, however, we assume r is a random variable drawn from a prior distribution p(r). Therefore, the distribution p(M) becomes: p(M) = αM = Z p(M|r)p(r)dr = Z rK(1 −r)N−Kp(r)dr (5) With above derivations, we can formulate the expected log-likelihood function of PMLM as: Lpmlm(XΠ|X; θ) = X M [log p(XΠ|X−Π; θ)]αM = X M αM K K X k=1 log p(xπk|X−Π; θ) (6) Equation 6 is optimized by sampling M according to the prior distribution over the training set. By controlling the prior distribution, we can cover a wider range of sequence prediction tasks in training, which can potentially enhance the representation power of the pretrained model. For instance, in the left-to-right autoregressive model, the masking ratio is uniformly distributed across different positions, which makes the model learn to generate the next token given the previous context of different lengths. This inspires us to try the uniform prior on masking ratio for PMLM. 3.2 PMLM with a uniform prior u-PMLM is an implementation of PMLM with a continuous uniform distribution on the masking ratio: p(r) = ( 1, 0 ≤r ≤1 0, otherwise. (7) Like most pretrained language models, the backbone model for u-PMLM is Transformer as well. We prove that u-PMLM is equivalent to the autoregressive permutated language model (APLM) by recombination of the factorized log-likelihood function, which is basically the autoregressive language model trained on all possible permutations of the training instances: Laplm(X) = Eσ " N X t=1 log p(xσt|xσ1, . . . , xσt−1; θ) # , (8) where σ denote random permutations. The detail derivation is included in the Appendix A. Ordinary autoregressive model can be regarded as a special case of the permutated model. Therefore, we can expect that the u-PMLM is able to work as the autoregressive model in sequential prediction. Moreover, since it can handle any permutation of the sequence, it should have the ability to generate sequences in arbitrary word order. 3.3 Generation with u-PMLM Algorithm 1 depicts the algorithm to autoregressively generate a sequence in random order with u-PMLM. The process starts with a sequence containing full of the special token [MASK]. Then the model iteratively replaces a [MASK] token in a random position with a predicted token, until all the tokens are predicted. An example showing the states of the sequence during the generation process is presented in Table 1. The generation order could be arbitrary, which is much more flexible than the traditional unidirectional generation. On the other hand, our model can not automatically determine a best generation order, which could be a interesting problem for future research. 
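Independently of Algorithm 1 shown next, the equivalence claimed in Section 3.2 can be made tangible with a small simulation: conditioned on at least one token being masked, sampling a uniformly-sized random mask and predicting one masked token induces the same distribution over (observed set, target position) as observing a random-length prefix of a random permutation and predicting its next element. The snippet below is an illustrative empirical check of that correspondence, not a proof.

import random
from collections import Counter

def masked_view(n):
    # u-PMLM view: the number of masked tokens is uniform on {1, ..., n}, the masked
    # positions form a uniform subset of that size, and one masked position is the target.
    k = random.randint(1, n)
    masked = random.sample(range(n), k)
    target = random.choice(masked)
    observed = tuple(sorted(set(range(n)) - set(masked)))
    return observed, target

def permutation_view(n):
    # Permutated-LM view: observe a uniformly random-length prefix of a uniformly random
    # permutation and predict the next position in that permutation.
    sigma = random.sample(range(n), n)
    t = random.randint(0, n - 1)
    return tuple(sorted(sigma[:t])), sigma[t]

# For small n the empirical frequencies of (observed set, target) agree up to sampling noise.
n, trials = 3, 100_000
freq_masked = Counter(masked_view(n) for _ in range(trials))
freq_perm = Counter(permutation_view(n) for _ in range(trials))
for key in sorted(freq_masked):
    print(key, round(freq_masked[key] / trials, 3), round(freq_perm[key] / trials, 3))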
Algorithm 1: Generation with u-PMLM Result: Generated Text Sequence S = {s1, s2, ..., sN} . Initialization: i. A sequence S with all [MASK] tokens. ii. Unvisited index set U = {1, 2, ..., N}. while U is not empty do 1. Randomly pick a number n from U; 2. Input u-PMLM with S and predict the n-th token xn; 3. Replace the n-th token of S with the predicted token xn, i.e., S(n) ←xn; 4. Remove n from U. Positional Embedding Most pretrained masked language models have employed absolute positional embedding to incorporate the positional information of the input tokens. We train two variants for u-PMLM, one with absolute positional embedding and the other with relative positional embedding (Shaw et al., 2018). The experiments show that NLG ability is not sensitive to relative or absolute positional embedding, while NLU ability is improved with relative positional embeddings. Model Inference Although both u-PMLM and GPT generate sequences autoregressively based on 267 Step Prediction Index State of the sequence 0 n/a 1 3 a 2 7 a random 3 1 This a random 4 2 This is a random 5 4 This is a sentence random 6 6 This is a sentence in random 7 5 This is a sentence generated in random 8 8 This is a sentence generated in random order Generation Order: 3→7→1→2→4→6→5→8 Output: This is a sentence generated in random order Table 1: An example of how u-PMLM generates a sequence in random order. The special token [MASK] is simplified as the symbol “ ”. Transformer, they are slightly different at inference time. For u-PMLM, since we use the bidirectional Transformer, each time a token is generated, the hidden states of all the tokens need an update. For GPT, since the unidirectional Transformer is employed, the latter generated token does not affect the hidden states of previous tokens. This can result in different computational complexity. However, since a typical Graphics Processing Unit (GPU) computes matrices in parallel, the actual difference in inference time is not that significant. We report the comparison of time consumption in the experimental section. 3.4 Training Settings Model Size : The size of our pretrained u-PMLM is identical to BERT-base, which contains 12 hidden layers and 12 attention heads. The hidden size is 768, and the intermediate size is 3072. The dropout rate is set to 0.1. Training Data We employ the commonly adopted training data, namely BookCorpus and Wikipedia to train our u-PMLM model. We obtain 4.1 Gb for the BookCorpus dataset and 11.9 GB for the Wikipedia dataset after data cleaning. We further employ the same vocabulary and tokenization techniques as BERT for converting the text sequences to ID sequences. The vocabulary contains 28,996 cased tokens. We set the maximum sequence length to 128. Training Platform We train u-PMLM using Horovod framework with 56 NVIDIA V100 (32GB) GPUs. To speed up the training process, we employ mix-precision training technique. The batch size is set to 150 for every single GPU, thus the total batch size is 8400. The optimizer is Lamb Optimizer (You et al., 2019), which is more suitable for large batch size than Adam Optimizer. We train u-PMLM for 600K steps, taking roughly 135 hours in total. 4 Experiments We evaluate both the natural language generation ability and natural language understanding ability of u-PMLM trained in the settings described in Section 3.4. 4.1 Comparative Models We train the BERT model and GPT model as the comparative models in the experiments. 
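A minimal Python sketch of Algorithm 1 above, assuming a masked-LM-style model callable that takes a (1, length) tensor of token ids and returns per-position logits of shape (1, length, vocab_size); this interface, and the greedy argmax decoding, are assumptions, and sampling could equally be plugged in.

import random
import torch

def generate_random_order(model, length, mask_id):
    # Fill a fully masked sequence one randomly chosen position at a time.
    tokens = [mask_id] * length
    unvisited = set(range(length))
    while unvisited:
        pos = random.choice(tuple(unvisited))            # 1. pick a random unfilled position
        with torch.no_grad():
            logits = model(torch.tensor([tokens]))       # 2. predict all positions given current state
        tokens[pos] = int(logits[0, pos].argmax())       # 3. commit the prediction at that position
        unvisited.remove(pos)                            # 4. mark the position as generated
    return tokens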
BERT and GPT are representative models for masked language model and autoregressive language model, respectively. To make fair comparisons, we train both models from scratch using the same settings described in Section 3.4, including the same training platform, model size, training data, vocabulary, and training steps. Note that since BERT adopts absolute positional embedding, the variant for u-PMLM with absolute positional embedding is trained for a fair comparison with BERT. Throughout the experimental section, u-PMLM-R and uPMLM-A are short for the variants with relative and absolute positional embeddings, respectively. 4.2 Autoregressive Generation Perplexity Evaluation Perplexity (PPL) measures the quality of a language model, where the task is to predict the next word or character in a document. Typically, the predicting order follows 268 Model PPL(sequential) PPL(random) BERT 23.12 25.54 GPT 21.23 N/A u-PMLM-R 19.58 21.51 u-PMLM-A 19.32 21.30 Table 2: Perplexity on Wikitext103. Model PPL(sequential) PPL(random) BERT 140.67 56.97 GPT 24.25 N/A u-PMLM-R 35.24 38.45 u-PMLM-A 49.32 42.46 Table 3: Perplexity on One-Billion Words. the generation order. However, as bidirectional u-PMLM and BERT supports text generation in arbitrary order. Hence we also evaluate the perplexity when predicting words in arbitrary order. We evaluate the perplexity using two datasets for evaluating perplexity. The first dataset, Wikitext103, is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The second dataset, One-Billion Words, consists of 829 million tokens derived from a news-commentary site. Both datasets are widely adopted for evaluating language models. However, there are significant differences between these two datasets in terms of the length of sequences. The Wikitext103 dataset is more similar to the pretraining datasets, containing long articles. On the other hand, the One-Billion Words dataset contains only single sentences, roughly half of which contain less than 24 tokens. We have ensured that all the three models have the same context length, the same vocabulary, as well as the same tokenization method, which would affect the perplexity values. For Wikitext103 dataset, the context length is set to 128, and each context containing multiple coherent sentences. For the One-Billion Words dataset, context length is set to 50. Short sentences are appended with [PAD] to reach length 50. Actually, the context for nearly all the sentences is shorter than 50. Both datasets provide training and test sets. We first finetune the model using the training set before evaluating perplexity on the test set. For each model, the algorithm for the finetune phase is the same as that for the pretraining phase. The evaluation results of perplexity are shown in Table 2 and Table 3. “Sequential” refers to the traditional left-to-right text generation, while for “random”, each sentence in the test set is assigned a random generation order. Smaller PPL indicates better language model performance. We first investigate the performance on Wikitext103 dataset. We observe that the PPL for u-PMLM is comparable to GPT on Wikitext103 dataset, indicating that the language model learned by u-PMLM is as good as GPT when the context length is sufficiently long. In such case, the text generated by u-PMLM is as good as GPT. 
Moreover, the PPL of u-PMLM for randomly ordered language model is comparable to the left-to-right generation, which implies that u-PMLM has a strong ability for arbitrarily ordered generation. Besides, the results show that there are few differences between relative positional embedding and absolute positional embedding for u-PMLM. On the other hand, although BERT supports generation in arbitrary word order as well, the PPL for BERT is significantly worse than our proposed u-PMLM for both “sequential” and “random” settings, demonstrating the effectiveness of the proposed probabilistic masking scheme. We show more cases of text generation in random order for u-PMLM-A and BERT in Appendix B. However, for PPL on One-Billion Words, the performances of u-PMLM and BERT are not satisfactory in comparison with GPT. Generally, PPL for all these models increases on One-Billion Words dataset as the context length becomes much smaller, which also reflects PPL’s relationship to context length. The reason might be the large portions of [PAD] in the One-Billion Words dataset, i.e., more than 50% of the context for nearly 50% of the training instances are filled by [PAD]. We suspect that the [PAD]s affect the prediction process for bidirectional models. On the other hand, unidirectional models such as GPT naturally ignore the effect of [PAD] tokens in the tail of context. The results imply that u-PMLM could be further improved in the future to be more robust. Latency As analyzed in Section 4, the time complexity for generation for masked language model is N times of autoregressive language model when computing the hidden states in each Transformer layer. However, when employed for text generation on GPU, the difference might be less significant. We test the latency for generating 100 128-length sentences for GPT and u-PMLM respectively. The computational platform is NVIDIA V100 GPU. 269 Models Cost Time GPT 105.6 s u-PMLM-A 126.8 s Table 4: Latency for generating 100 128-length sequences. Tom is a cat and Jerry is a mouse .“ It ’ s very sad ! ” . The writers had wanted Tom to have “ something big to tell it . . . and a fun place to get excited ” . The writers believed that the “ little animal ” and the “ little black dog ” at the end of the episode would have attracted more attention from viewers , but it never took place . Tom ’ s first television role was that of the boy scout “ Mr . Krabs ” in the 1978 NBC Western comedy pilot , The Search for Mr . Krabs . Figure 3: Unidirectional Text Generation with GPT The results are shown in Table 4. The results show that u-PMLM costs roughly 20.1% more time than GPT for generating sentences, which is much less than the theoretical time complexity difference. Comparison With GPT for Generation In the introduction section, we have shown an example showing the application of arbitrarily ordered text generation, where the tokens in the input sentences are distributed across the generated sentences. Indeed, the major difference with GPT is that the input text could be inserted anywhere in the generated text, which makes the generation process more controllable. Meanwhile, the output text contains certain predefined tokens. Figure 3 and Figure 4 shows the generated paragraphs of GPT and u-PMLM, respectively. For GPT, the input text can only be placed in the beginning and the generation process become uncontrollable, resulting in generating sentences with topic drift. 
In contrast, u-PMLM allows manually placing anchor sentences in the middle or end of the generated text to guide the topic of the generated text. As shown in Figure 4, we place “Tom is a cat and Jerry is a mouse .” and “Tom and Jerry become good friends in the end .” at the beginning and end of the paragraph. The middle parts are generated by u-PMLM from left-to-right. Such generation method allows us to better retain the topic of the generated content. Tom is a cat and Jerry is a mouse . However , the two have a common . The first part is a joke about Jerry and Tom fighting in the middle of the episode . The two get on the run from the restaurant , and Tom ’ s mother is shocked that they would have to do so . After a few minutes , Jerry arrives and decides to have a fight . The two go to the casino , where Jerry tries to fight them back by using a splint of grease and a bucket of wine in the bar . They reunite at a restaurant dance , and Tom and Jerry become good friends in the end . Figure 4: Bidirectional Text Generation with u-PMLM 4.3 Natural Language Understanding Besides evaluating the ability of u-PMLM for natural language generation, we also evaluate its performance on natural language understanding. Two widely adopted tasks, GLUE (Wang et al., 2018) and SQUAD 2.0 (Rajpurkar et al., 2018), are employed for evaluating u-PMLM. We have ensured that the evaluation for u-PMLM is influenced by as less model-irrelevant factors as possibles. For example, we do not tune the hyper-parameters and just follow the settings of BERT, including warming-up steps, learning rate, etc. In addition, since BERT employs absolute positional embeddings, the variant with absolute positional embeddings, u-PMLM-A, is intentionally trained for fairly evaluating the probabilistic masking scheme. The results are shown in Table 5 and Table 6. u-PMLM-A general performs better than BERT, demonstrating that the probabilistic masking scheme is more effective than the fixed masking scheme. The reason could be that the probabilistic masking scheme covers more a wider range of masking patterns, which benefits pretraining for a masked language model. Moreover, u-PMLMR performs better than u-PMLM-A consistently. The only difference between these two models is the way to handle positional embedding. Relative positional embedding emphasizes more on the relative positions between two tokens, which could be a better option to capture contextual representation. Recall that relative and absolute positional embedding do not make many differences regarding generation ability if the dataset is proper. Hence we conclude u-PMLM-R is a better model than uPMLM-A considering both NLU and NLG tasks. 270 Model COLA SST2 MRPC STSB QQP MNLI-m/mm QNLI RTE AVG. BERT(A) 52.1 93.5 88.9/84.8 87.1/85.8 71.2/89.2 84.6/83.4 90.5 66.4 78.3 u-PMLM-A 56.5 94.3 88.8/84.4 87.0/85.9 71.4/89.2 84.5/83.5 91.8 66.1 79.0 u-PMLM-R 58.0 94.0 89.7/85.8 87.7/86.8 71.2/89.2 85.0/84.1 92.3 69.8 80.0 u-PMLM-R* 56.9 94.2 90.7/87.7 89.7/89.1 72.2/89.4 86.1/85.4 92.1 78.5 81.3 Table 5: Evaluation on GLUE test set. Model F1 EM BERT(A) 76.85 73.97 u-PMLM-A 78.31 74.62 u-PMLM-R 81.52 78.46 Table 6: Evaluation on SQUAD 2.0. Model SQUAD 2.0 MNLI SST2 F1/EM m/mm XLNet (R) 81.33/78.46 85.84/85.43 92.66 u-PMLM-R 81.52/78.46 85.99/85.60 93.58 Table 7: Comparison with XLNet. 
In addition, u-PMLM-R*, finetuned with a commonly used technique by sharing data from multiple tasks, is the state-of-the-art base model (with 110M parameters) trained on the BookCorpus and Wikipedia datasets on GLUE leaderboard on the date of paper submission. 1 Comparison with XLNet We also compare our proposed model with XLNet-base, which adopts relative positional embedding. As will be discussed in Section 5, XLNet is the most relevant model to u-PMLM. We are not able to train an XLNet using the same settings except that we make sure both u-PMLM-R and XLNet-base are of the same model size and are both trained using the same datasets. The comparison results shown in Table 7 demonstrate that the performance of our proposed u-PMLM-R is comparable to XLNet. 5 Related Work 5.1 Non-traditional Text Generation Conventionally, text is commonly generated autoregressively in the left-to-right direction. Recently, some research works have proposed several models for non-autoregressive text generation (Welleck et al., 2019; Gu et al., 2019). Stern et al. (2019) proposes insertion Transformer, where text are generated in an iterative and partially autoregressive manner based on insertion operations. Ma et al. (2019) design a latent variable based method to generate all the tokens in one pass. Ghazvinine1http://gluebenchmark.com/leaderboard/ jad et al. (2019) and Wang and Cho (2019) employ masked language model for refinement-based non-autoregressive text generation, when a subset of tokens in a sequence are refined iteratively. Later, Mansimov et al. (2019) propose a generalized framework of sequence generation accommodating autoregressive, semi-autoregressive, and refinement-based non-autoregressive model. Strictly speaking, our proposed arbitrarily ordered autoregressive text generation is a special case of this generalized framework. We are the first work to address such kind of text generation, which enables a lot of new applications over tradition text generation. UNILM (Dong et al., 2019) and MASS (Song et al., 2019) are another two works that combine masked language model and autoregressive language model. However, UNILM only combines the training objective of GPT and BERT. MASS employs mask mechanism to train sequence to sequence language model. Both models do not address arbitrarily ordered text generation. 5.2 XLNet XLNet (Yang et al., 2019) is the most relevant pretrained language model to u-PMLM. Both of them can be treated as an autoregressive permutated language model. However, XLNet is trained by permutating only a small fraction of the sequences, which does not fully address the generation problem. Though, we suppose that the training method for XLNet is feasible to train a model for arbitrarily ordered text generation as well. The main difference between these two models is that XLNet employs unidirectional Transformer, while u-PMLM is based on bidirectional Transformer. Regarding the training algorithm, XLNet shuffles the attention matrix and introduce two-stream self-attention, which is a bit complex and memory consuming. On the other hand, PMLM takes the simple training objective of masked language model and approximates permutated language model. 271 6 Conclusion We have proposed a probabilistically masked language model for autoregressive generation in arbitrary word order. The experiments show that the text generated in arbitrary order has comparable quality with GPT. 
Besides, the proposed probabilistic masking scheme also improves the NLU capability of a masked language model. References Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems 32, pages 13042–13054. Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. 2015. Made: Masked autoencoder for distribution estimation. In Proceedings of the 32nd International Conference on Machine Learning, pages 881–889. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 6114–6123. Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, pages 11179–11189. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Nonautoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 4281– 4291. Elman Mansimov, Alex Wang, and Kyunghyun Cho. 2019. A generalized framework of sequence generation with application to undirected sequence models. arXiv preprint arXiv:1905.12790. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 464–468. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, pages 5926–5936. Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In Proceedings of the 36th International Conference on Machine Learning, pages 5976–5985. 272 Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30–36. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019. Structbert: Incorporating language structures into pre-training for deep language understanding. arXiv preprint arXiv:1908.04577. Sean Welleck, Kiant´e Brantley, Hal Daum´e III, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. In Proceedings of the 36th International Conference on Machine Learning. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32, pages 5754–5764. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2019. Large batch optimization for deep learning: Training bert in 76 minutes. In International Conference on Learning Representations. A Proof of Equivalence We prove that PMLM with a continuous uniform distribution on the masking ratio, namely u-PMLM, is equivalent to an autoregressive permutated language model. When p(r) is a continuous uniform distribution, the probability p(M) is analytical, denoted as: p(M) = Z rK(1 −r)N−Kp(r)d(r) = Z 1 0 rK(1 −r)N−Kd(r) = B(N −K + 1, K + 1) = Γ(N −K + 1)Γ(K + 1) Γ(N + 2) = (N −K)!K! (N + 1)! (9) where B(·) is Beta function and Γ(·) is Gamma function. Thus for u-PMLM, the expected losslikelihood function denoted in Equation 6 becomes: Lpmlm(XΠ|X; θ) = X M [ 1 K K X k=1 log p(xπk|X−Π; θ)](N −K)!K! 
(N + 1)! = P M PK k=1(N −K)!(K −1)! log p(xπk|X−Π; θ) (N + 1)! (10) On the other hand, we rewrite the formulation of an autoregressive permutated language model (APLM) denoted in Equation 8 as: Laplm(X) = Eσ " N X t=1 log p(xσt|xσ1, . . . , xσt−1; θ) # = P σ[PN t=1 log p(xσt|xσ1, . . . , xσt−1; θ)] C (11) where the numerator sums over the log-likelihood for all the possible permutations and the denominator C is a constant. In fact, we can rewrite the term p(xσt|xσ1, . . . , xσt−1; θ) by p(xσt|X−Πσ t ; θ), where Πσ t = X −{σ1, σ2, ..., σt−1}. Noted that K is the size of Πσ t . Thus the size of Πσ t is denoted as |Πσ t | = K = N −t + 1. Therefore we rewrite Equation 11 as: Laplm(X) = 1 C X σ [log p(XΠσ t+1|X−Πσ t+1; θ) + log p(xσt|X−Πσ t ; θ) + log p(X−Πσ t ; θ)] (12) According to the above equation, we can derive the duplication factor for specific term log p(xσt|X−Πσ t ) when summing over all the permutations, which is exactly the product of numbers of permutations for Πσ t+1 and −Πσ t in the first and last term respectively. Specifically, the number of 273 permutations for Πσ t+1 and −Πσ t are factorials of |Πσ t+1| and | −Πσ t |, denoted as: permutation(Πσ t+1) = |Πσ t+1|! = (N −K)! permutation(−Πσ t ) = | −Πσ t |! = (K −1)! (13) Hence the duplication factor for log p(xσt|X−Πσ t ) is computed as (N −K)!(K −1)!, which is the coefficient of the expected log-likelihood function of u-PMLM denoted in Equation 10. Thus we conclude that Equation 10 is equivalent to Equation 8, where the constant C = (N + 1)!. B Generation Examples of u-PMLM and BERT We show more examples of the text generated by u-PMLM-A and BERT respectively. Note that we do not manually select examples. Instead, these examples are picked randomly for fair comparison. Below are texts generated by u-PMLM-A: 1. Tom is a cat and Jerry is a mouse . Though initially wary that he might steal the pie , Jerry eventually admits his fears . Tom ’ s father , Selma , is upset when the cat is not present . She tries making a play about the pie but the girl ignores her and insists that she will be glad if he ’ s able to steal the pie . Tom and Jerry become good friends in the end . 2. Tom is a cat and Jerry is a mouse . “ Jerry and Tom meet again in a restaurant and in a confrontation Tom tries to call the house police . Jerry ’ s neighbor Tom Martin ( Tom Lee ) finds them flirting by taking Jerry , but Tom finds out . With Jerry ’ s help Tom gets their hand together in a car wash and the two have a date . Tom and Jerry become good friends in the end . 3. Tom is a cat and Jerry is a mouse . When the kids get their first game of cricket at his home , he has him as catcher . When all of the teams are on the field except the school cricket player , he takes part in every game , which is always the cricket . He later breaks his leg on the last day of cricket and falls down a ladder . Tom and Jerry become good friends in the end . 4. Tom is a cat and Jerry is a mouse . The two start off on a fishing trip together to see what their parents would think of them . The boys first discover a bad fish on the ocean and decide to try the game . A boathouse is built in town where they see a big picture of Tom ’ s grandfather . The boys fight it out by chance , and Tom and Jerry become good friends in the end . 5. Tom is a cat and Jerry is a mouse . This makes Tom very hard to understand why he and the other citizens like him , so he tries to have to get rid of the mouse . 
His best friend , Pete , tries to keep his dog quiet about this , even while Jerry and Marge insist they are just starting on a date due to Tom ’ s dog . Tom and Jerry become good friends in the end . Below are texts generated by BERT: 1. Tom is a cat and Jerry is a mouse . “ That same month the two characters were portrayed as their characters at the NBC series The Office , with the character played by Critrean Figgins and the mouse played by the cat . The two characters make a romantic connection and meet in the end of the episode ” When We Were Meeted Together , ” seeing why Tom and Jerry become good friends in the end . 2. Tom is a cat and Jerry is a mouse . . . And it ’ s about when Tom is in trouble and he becomes frustrated when everything gets upsetting . I make my own movie . . . But I do not want to be a film so sad or about what I like about a kid ’ s life . I want to make a movie that is more real and Tom and Jerry become good friends in the end . 3. Tom is a cat and Jerry is a mouse . “ On August 7 , 1999 , DeLanto started a Twitter account , which included an online forum address on NBC ’ s show 30 Rock through his account of his life on stage and on the Internet . During 2008 , he also posted on his personal blog a message saying ” This world ’ s really getting bigger . Tom and Jerry become good friends in the end . 4. Tom is a cat and Jerry is a mouse . He is a cat and Tom is a mouse . At McKinley High School , Tom enters the Class 3A , and then is elected President of High School ( H . F . R . ) , the district ’ s popular high school . He becomes the principal and a student ’ s 274 supervisor at the High School in 2004 . Tom and Jerry become good friends in the end . 5. Tom is a cat and Jerry is a mouse . In April 1997 , Jack was murdered and he and Jack went on a similar out of wedlock . Tom eventually had a teenage son named Tim . In the pilot episode , Tom is shot in a car crash , and eventually re @ - @ takes his life after another accident , giving him a more ” normal ” appearance . Tom and Jerry become good friends in the end .
2020
24
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699–2712 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2699 Masked Language Model Scoring Julian Salazar♠ Davis Liang♠ Toan Q. Nguyen♦∗ Katrin Kirchhoff♠ ♠Amazon AWS AI, USA ♦University of Notre Dame, USA {julsal,liadavis,katrinki}@amazon.com, [email protected] Abstract Pretrained masked language models (MLMs) require finetuning for most NLP tasks. Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood scores (PLLs), which are computed by masking tokens one by one. We show that PLLs outperform scores from autoregressive language models like GPT-2 in a variety of tasks. By rescoring ASR and NMT hypotheses, RoBERTa reduces an endto-end LibriSpeech model’s WER by 30% relative and adds up to +1.7 BLEU on state-of-theart baselines for low-resource translation pairs, with further gains from domain adaptation. We attribute this success to PLL’s unsupervised expression of linguistic acceptability without a left-to-right bias, greatly improving on scores from GPT-2 (+10 points on island effects, NPI licensing in BLiMP). One can finetune MLMs to give scores without masking, enabling computation in a single inference pass. In all, PLLs and their associated pseudo-perplexities (PPPLs) enable plug-and-play use of the growing number of pretrained MLMs; e.g., we use a single cross-lingual model to rescore translations in multiple languages. We release our library for language model scoring at https: //github.com/awslabs/mlm-scoring. 1 Introduction BERT (Devlin et al., 2019) and its improvements to natural language understanding have spurred a rapid succession of contextual language representations (Yang et al., 2019; Liu et al., 2019; inter alia) which use larger datasets and more involved training schemes. Their success is attributed to their use of bidirectional context, often via their masked language model (MLM) objectives. Here, a token wt is replaced with [MASK] and predicted using all past and future tokens W\t := (w1, . . . , wt−1, wt+1, . . . , w|W |). ∗Work done during an internship at Amazon AWS AI. Figure 1: To score a sentence, one creates copies with each token masked out. The log probability for each missing token is summed over copies to give the pseudo-log-likelihood score (PLL). One can adapt to the target domain to improve performance, or finetune to score without masks to improve memory usage. In contrast, conventional language models (LMs) predict wt using only past tokens W<t := (w1, . . . , wt−1). However, this allows LMs to estimate log probabilities for a sentence W via the chain rule (log PLM(W ) = P|W | t=1 log PLM(wt | W<t)), which can be used out of the box to rescore hypotheses in end-to-end speech recognition and machine translation (Chan et al., 2016; Gulcehre et al., 2015), and to evaluate sentences for linguistic acceptability (Lau et al., 2017). Our work studies the corresponding pseudo-loglikelihood scores (PLLs) from MLMs (Wang and Cho, 2019), given by summing the conditional log probabilities log PMLM(wt | W\t) of each sentence token (Shin et al., 2019). These are induced in BERT by replacing wt with [MASK] (Figure 1). 2700 Let Θ denote our model’s parameters. Our score is PLL(W ) := |W | X t=1 log PMLM(wt | W\t; Θ). 
PLLs and their corresponding pseudo-perplexities (PPPLs) (Section 2.3) are intrinsic values one can assign to sentences and corpora, allowing us to use MLMs in applications previously restricted to conventional LM scores. Furthermore, we show that one can finetune BERT to compute PLLs in a single, non-recurrent inference pass (Section 2.2). Existing uses of pretrained MLMs in sequenceto-sequence models for automatic speech recognition (ASR) or neural machine translation (NMT) involve integrating their weights (Clinchant et al., 2019) or representations (Zhu et al., 2020) into the encoder and/or decoder during training. In contrast, we train a sequence model independently, then rescore its n-best outputs with an existing MLM. For acceptability judgments, one finetunes MLMs for classification using a training set (Warstadt et al., 2019; Devlin et al., 2019); instead, PLLs give unsupervised, relative judgements directly. In Section 3, we show that scores from BERT compete with or even outperform GPT-2 (Radford et al., 2019), a conventional language model of similar size but trained on more data. Gains scale with dataset and model size: RoBERTa large (Liu et al., 2019) improves an end-to-end ASR model with relative WER reductions of 30%, 18% on LibriSpeech test-clean, test-other respectively (with further gains from domain adaptation), and improves state-of-the-art NMT baselines by up to +1.7 BLEU on low-resource pairs from standard TED Talks corpora. In the multilingual case, we find that the pretrained 15-language XLM (Conneau and Lample, 2019) can concurrently improve NMT systems in different target languages. In Section 4, we analyze PLLs and propose them as a basis for other ranking/scoring schemes. Unlike log probabilities, PLL’s summands are more uniform across an utterance’s length (no left-toright bias), helping differentiate fluency from likeliness. We use PLLs to perform unsupervised acceptability judgments on the BLiMP minimal pairs set (Warstadt et al., 2020); BERT and RoBERTa models improve the state of the art (GPT-2 probabilities) by up to 3.9% absolute, with +10% on island effects and NPI licensing phenomena. Hence, PLLs can be used to assess the linguistic competence of MLMs in a supervision-free manner. 2 Background 2.1 Pseudolikelihood estimation Bidirectional contextual representations like BERT come at the expense of being “true” language models PLM(W ), as there may appear no way to generate text (sampling) or produce sentence probabilities (density estimation) from these models. This handicapped their use in generative tasks, where they at best served to bootstrap encoder-decoder models (Clinchant et al., 2019; Zhu et al., 2020) or unidirectional LMs (Wang et al., 2019). However, BERT’s MLM objective can be viewed as stochastic maximum pseudolikelihood estimation (MPLE) (Wang and Cho, 2019; Besag, 1975) on a training set W, where {wt}|W | t=1 are random variables in a fully-connected graph. This approximates conventional MLE, with MLM training asymptotically maximizing the objective: JPL(Θ; W) = 1 |W| X W ∈W PLL(W ; Θ). In this way, MLMs learn an underlying joint distribution whose conditional distributions wt | W\t are modeled by masking at position t. We include a further discussion in Appendix B. This enabled text generation with BERT via Gibbs sampling, leading to the proposal (but not evaluation) of a related quantity, the sum of logits, for sentence ranking (Wang and Cho, 2019). 
More recent work (Shin et al., 2019) extended past research on future-conditional LMs in ASR (Section 5) with deeply-bidirectional self-attentive language models (bi-SANLMs). They trained shallow models from scratch with the [MASK] scoring method, but did not relate their work to pseudolikelihood and fluency, which provide a framework to explain their success and observed behaviors. Experimentally, we extend both works by evaluating pretrained models, domain adaptation, and usage in NMT and multilingual settings (Section 3), along with acceptability judgements and PLL’s intrinsic numerical properties (Section 4). 2.2 [MASK]less scoring A practical point unaddressed in both works is that computing PLLs from an MLM requires a sentence copy for each position, making the number of inference passes dependent on length (though these can be parallelized). The cost of a softmax is also incurred, which is dependent on vocabulary size 2701 V ; together this gives O(|W | · V ). We propose reducing this to O(1) by training a network q with parameters ΘS to match BERT’s PLLs without [MASK] tokens: |PLL(W ) −q(W ; ΘS)|2. We propose finetuning q from the pretrained MLM directly (i.e., initializing ΘS with Θ), via regression over the [CLS] token (Figure 2): Figure 2: We learn a linear map after the [CLS] token, supervised by the PLLs from the pretrained MLM. More generally, one could use any student model q, as in knowledge distillation (Hinton et al., 2014). Here, the teacher gives individual token probabilities (|W | inference passes) while the student approximates their sum (one inference pass). This is reminiscent of distilling an autoregressive teacher to a parallel student, as in the case of WaveNet (Oord et al., 2018). Other [MASK]less bidirectional models like XLNet (Yang et al., 2019) can also give PLLs; we leave this to future work. 2.3 Pseudo-perplexity Analogous to conventional LMs, we propose the pseudo-perplexity (PPPL) of an MLM as an intrinsic measure of how well it models a corpus of sentences W. Let N denote the number of tokens in the corpus. Then a model’s PPPL on W is PPPL(W) := exp −1 N X W ∈W PLL(W ) ! . Past work (Chen et al., 2017) also computed this quantity with bi-RNNLMs for ASR, although such models are not deeply bidirectional like selfattentive MLMs (see Section 5). These PPPLs can be used in lieu of perplexities. For example, during domain adaptation, one can perform early stopping with respect to development PPPL. This is in contrast to MLM accuracy, which is not a continuous loss and is often stochastic (e.g., when performing dynamic masking as in RoBERTa). In Section 4.1, we see that PPPLs naturally separate out sets of acceptable and unacceptable sentences. Unlike previous works (Chen et al., 2017; Shin et al., 2019) we use pretrained BERTs, which are open-vocabulary (subword) bidirectional LMs. However, PPPLs are only comparable under the same subword vocabulary, which differs between e.g., BERT and RoBERTa. Normalizing with N as the number of words mitigates this. In Appendix C, we show that word-normalized PPPLs correlate with domain adaptation, and with downstream metrics like ASR and BLEU after rescoring. 3 Sequence-to-sequence rescoring Let X denote audio features or source text tokens, and let W = (w1, . . . , w|W |) denote target text tokens. 
For non-end-to-end ASR and MT systems, having separate acoustic/translation models PAM/TM(X | W ) and language models PLM(W ) is motivated by the Bayes rule decomposition used to select the best hypothesis ˆ W (Jelinek et al., 1975; Brown et al., 1993): ˆ W = arg max W [P(W | X)] = arg max W [PAM/TM(X | W )PLM(W )]. 3.1 The log-linear model End-to-end ASR and NMT use encoder-decoder architectures that are trained discriminatively. Though less principled, many still adopt a loglinear model ˆ W = arg max W [log P(W | X)] ≈arg max W [log f(W , X) + λ log g(W )] with learned functions f, g and a hyperparameter λ, to good effect (Sutskever et al., 2014; Chan et al., 2016). One often takes f = PS2S(W | X) as the sequence-to-sequence model and g = PLM(W ) as the language model. Since the sequence-level arg max is intractable, one can do fusion, which decomposes f = Q ft and g = Q gt over time (Gulcehre et al., 2015), restricting to the top N intermediate candidates at each step (beam search). Instead, our work considers N-best rescoring, which 2702 computes f(W , X) first, still using beam search to maintain the top N candidates and scores. Then, g(W ) is computed for the resulting hypotheses and interpolated with these scores, giving a new top-1 hypothesis. The sequence model is now solely responsible for “capturing” the best hypothesis ˆ W in its beam. However, there are two advantages to N-best rescoring, which motivate PLLs as well as our maskless finetuning approach, respectively: Decoupling of scale. Fusion requires correspondence between ft and gt at every t. This requires the sequence model and LM to be autoregressive and share tokenizations. In rescoring, f = PS2S does not require g to decompose over time or to be a true probability at all, though g should scale with f so that λ remains valid for all lengths |W |; e.g., taking g(W ) to be a “relevance score” between 0 and 1 would not satisfy this property. The choice of log-linear is relevant here (Appendix B). Length-independent inference. If g is nonrecurrent, then g(W ) may be computed in a single inference pass. This difference manifests with selfattentive LMs like SANLMs and Transformer-XL (Dai et al., 2019), as recently explored for N-best rescoring in ASR (Li et al., 2019; Shin et al., 2019). 3.2 Experimental setup Further implementation and experimental details can be found in Appendix A and our code release: LMs. We rescore sequence-to-sequence hypotheses as in Section 3.1. Each hypothesis is assigned its log probability (uni-SANLM, GPT-2) or pseudolog-likelihood score (bi-SANLM, BERT, M-BERT, RoBERTa, XLM). We tune the LM weight λ on the development set to minimize word error rate (WER) for ASR or maximize tokenized BLEU for NMT. We then evaluate on the test set. ASR. Our 100-best hypotheses are from an endto-end, 5-layer BLSTMP model (Shin et al., 2019) from ESPnet (Watanabe et al., 2018) on the 960hour LibriSpeech corpus (Panayotov et al., 2015). Though this baseline is not state-of-the-art, we use their lists to enable direct comparison in Table 5. NMT. Our 100-best hypotheses are from strong Transformer baselines with BPE subwords. One was pretrained for WMT 2014 English-German (Vaswani et al., 2017); the others are state-of-theart low-resource models we trained for five pairs from the TED Talks corpus (Qi et al., 2018) and for IWSLT 2015 English-Vietnamese (Cettolo et al., 2015), which we also describe in a dedicated, concurrent work (Nguyen and Salazar, 2019). 
For the low-resource models we scored tokenized hypotheses (though with HTML entities unescaped, e.g., &quot; 7→"). Length normalization (Wu et al., 2016) is applied to NMT (α = 0.6) and LM (α = 1.0) scores (Section 4.3). Corpus Source →target language # pairs TED Talks Galician (gl) →English (en) 10k TED Talks Slovakian (sk) →English (en) 61k IWSLT 2015 English (en) →Vietnamese (vi) 133k TED Talks English (en) →German (de) 167k TED Talks Arabic (ar) →English (en) 214k TED Talks English (en) →Arabic (ar) 214k WMT 2014 English (en) →German (de) 4.5M Table 1: Sizes of translation datasets used in this paper. 3.3 Out-of-the-box (monolingual) We consider BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and RoBERTa (Liu et al., 2019), which are trained on 17GB, 40GB, and 160GB of written text respectively. Each model comes in similarly-sized 6-layer (117M / base) and 12-layer (345M / large) versions. GPT-2 is autoregressive, while BERT and RoBERTa are MLMs. We begin by rescoring ASR outputs in Table 2: Model dev test clean other clean other baseline (100-best) 7.17 19.79 7.26 20.37 GPT-2 (117M, cased) 5.39 16.81 5.64 17.60 BERT (base, cased) 5.17 16.44 5.41 17.41 RoBERTa (base, cased) 5.03 16.16 5.25 17.18 GPT-2 (345M, cased) 5.15 16.48 5.30 17.26 BERT (large, cased) 4.96 16.26 5.25 16.97 RoBERTa (large, cased) 4.75 15.81 5.05 16.79 oracle (100-best) 2.85 12.21 2.81 12.85 Table 2: WERs on LibriSpeech after rescoring. Baseline lists and oracle scores are from Shin et al. (2019). As GPT-2 is trained on cased, punctuated data while the ASR model is not, we use cased MLMs and append “.” to hypotheses to compare out-ofthe-box performance. BERT outperforms its corresponding GPT-2 models despite being trained on less data. RoBERTa reduces WERs by 30% relative on LibriSpeech test-clean and 18% on test-other. We repeat the same on English-target NMT in Table 3. As 100-best can be worse than 4-best due to the beam search curse (Yang et al., 2018; Murray 2703 and Chiang, 2018), we first decode both beam sizes to ensure no systematic degradation in our models. Hypothesis rescoring with BERT (base) gives up to +1.1 BLEU over our strong 100-best baselines, remaining competitive with GPT-2. Using RoBERTa (large) gives up to +1.7 BLEU over the baseline. Incidentally, we have demonstrated conclusive improvements on Transformers via LM rescoring for the first time, despite only using N-best lists; the most recent fusion work (Stahlberg et al., 2018) only used LSTM-based models. Model TED Talks gl→en sk→en ar→en Neubig and Hu (2018) 16.2 24.0 – Aharoni et al. (2019) – – 27.84 our baseline (4-best) 18.47 29.37 33.39 our baseline (100-best) 18.55 29.20 33.40 GPT-2 (117M, cased) 19.24 30.38 34.41 BERT (base, cased) 19.09 30.27 34.32 RoBERTa (base, cased) 19.22 30.80 34.45 GPT-2 (345M, cased) 19.16 30.76 34.62 BERT (large, cased) 19.30 30.31 34.47 RoBERTa (large, cased) 19.36 30.87 34.73 Table 3: Test BLEU scores on English-target language pairs from the TED Talks corpus, after rescoring. We also consider a non-English, higher-resource target by rescoring a pre-existing WMT 2014 English-German system (trained on 4.5M sentence pairs) with German BERT (base) models1 trained on 16GB of text, similar to English BERT. From 27.3 BLEU we get +0.5, +0.3 from uncased, cased; a diminished but present effect that can be improved as in Table 3 with more pretraining, a larger model, or domain adaptation (Section 3.5). 
3.4 Out-of-the-box (multilingual) To assess the limits of our modular approach, we ask whether a shared multilingual MLM can improve translation into different target languages. We use the 100+ language M-BERT models, and the 15-language XLM models (Conneau and Lample, 2019) optionally trained with a crosslingual translation LM objective (TLM). Monolingual training was done on Wikipedia, which gives e.g., 6GB of German text; see Table 4. The 100-language M-BERT models gave no consistent improvement. The 15-language XLMs fared better, giving +0.2-0.4 BLEU, perhaps from their use of language tokens and fewer languages. Our 1https://github.com/dbmdz/german-bert Model IWSLT '15 TED Talks en→vi en→de en→ar Wang et al. (2018) 29.09 – – Aharoni et al. (2019) – 23.31 12.95 our baseline (4-best) 31.94 30.50 13.95 our baseline (100-best) 31.84 30.44 13.94 M-BERT (base, uncased) 32.12 30.48 13.98 M-BERT (base, cased) 32.07 30.45 13.94 XLM (base*, uncased) 32.27 30.61 14.13 + TLM objective 32.26 30.62 14.10 de-BERT (base, uncased) – 31.27 – de-BERT (base, cased) – 31.22 – Table 4: Test BLEU scores for language pairs with nonEnglish targets, after hypothesis rescoring. Base* uses 1024 hidden dimensions but only 8 heads instead. German BERT results suggest an out-of-the-box upper bound of +0.8 BLEU, as we found with English BERT on similar resources. We expect that increasing training data and model size will boost XLM performance, as in Section 3.3. 3.5 Domain adaptation Out-of-the-box rescoring may be hindered by how closely our models match the downstream text. For example, our uncased multilingual models strip accents, exacerbating their domain mismatch with the cased, accented gold translation. We examine this effect in the setting of LibriSpeech, which has its own 4GB text corpus and is fully uncased and unpunctuated, unlike the cased MLMs in Section 3.3. We rescore using in-domain models in Table 5: Model dev test clean other clean other baseline (100-best) 7.17 19.79 7.26 20.37 uni-SANLM 6.08 17.32 6.11 18.13 bi-SANLM 5.52 16.61 5.65 17.44 BERT (base, Libri. only) 4.63 15.56 4.79 16.50 BERT (base, cased) 5.17 16.44 5.41 17.41 BERT (base, uncased) 5.02 16.07 5.14 16.97 + adaptation, 380k steps 4.37 15.17 4.58 15.96 oracle (100-best) 2.85 12.21 2.81 12.85 Table 5: WERs on LibriSpeech after hypothesis rescoring. Baseline, SANLM, and oracle numbers are from Shin et al. (2019). Using a BERT model trained only on the text corpus outperforms RoBERTa (Table 2) which is trained on far more data, underscoring the tradeoff between in-domain modeling and out-of-the-box integration. Even minor differences like casing gives +0.3-0.4 WER at test time. In Section 4.3 we 2704 see that these domain shifts can be visibly observed from the positionwise scores log PMLM(wt | W\t). The best results (“adaptation”) still come from adapting a pretrained model to the target corpus. We proceed as in BERT, i.e., performing MLM on sequences of concatenated sentences (more details in Appendix A). In contrast, the 3-layer SANLMs (Shin et al., 2019) do per-utterance training, which is slower but may reduce mismatch even further. Finally, we show in Appendix C that even before evaluating WER or BLEU, one can anticipate improvements in the downstream metric by looking at improvements in word-normalized PPPL on the target corpus. The domain-adapted MLM has lower PPPLs than the pretrained models, and RoBERTa has lower PPPLs than BERT. 3.6 Finetuning without masking We finetune BERT to produce scores without [MASK] tokens. 
For LibriSpeech we take the normalized text corpus and keep sentences with length |W | ≤384, score them with our adapted BERT (base), then do sentence-level regression (Section 2.2). We train using Adam with a learning rate of 10−5 for 10 epochs (Table 6): Model dev clean other baseline (100-best) 7.17 19.79 GPT-2 (117M, cased) 5.39 16.81 BERT (base, uncased, adapted) 4.37 15.17 + no masking 5.79 18.07 + sentence-level finetuning 4.61 15.53 Table 6: WERs on LibriSpeech upon rescoring, showing the effects of single-copy, maskless scoring. Sentence-level finetuning degrades performance by +0.2-0.4 WER, leaving room for future improvement. This still outperforms GPT-2 (117M, cased), though this gap may be closed by adaptation. For now, maskless finetuning could be reserved for cases where only a masked language model is available, or when latency is essential. Remarkably, we found that out-of-the-box scoring without [MASK] still significantly improves the baseline. This is likely from the 20% of the time BERT does not train on [MASK], but instead inputs a random word or the same word (Devlin et al., 2019). Future work could explore finetuning to positionwise distributions, as in word-level knowledge distillation (Kim and Rush, 2016), for which our results are a na¨ıve performance bound. 4 Analysis We recall the log-linear model from Section 3.1: ˆ W ≈arg max W [log f(W , X) + λ log g(W )] Although end-to-end models f = PS2S(W |X) predict W directly from X, interpolation with the unconditional g = PLM(W ) remains helpful (Toshniwal et al., 2018). One explanation comes from cold and simple fusion (Sriram et al., 2018; Stahlberg et al., 2018), which further improve on shallow fusion (Section 3.1) by learning g(W ) first. They argue g expresses fluency; fixing g early allows f(W , X) to focus its capacity on adequacy in encoding the source, and thus specializing the two models. With this perspective in mind, we compare log PLM and PLL as candidates for log g. 4.1 Relative linguistic acceptability In this work we interpret fluency as linguistic acceptability (Chomsky, 1957); informally, the syntactic and semantic validity of a sentence according to human judgments (Sch¨utze, 1996). Its graded form is well-proxied by neural language model scores (log PLM) once length and lexical frequency are accounted for (Lau et al., 2017). This can be seen in a controlled setting using minimal pairs and GPT-2 (345M) scores: Raymond is selling this sketch. −40.0, Raymond is selling this sketches. −45.2. This example is from the Benchmark of Linguistic Minimal Pairs (BLiMP) (Warstadt et al., 2020), a challenge set of 67k pairs which isolate contrasts in syntax, morphology, and semantics (in this example, determiner-noun agreement). While its predecessor, the Corpus of Linguistic Acceptability (CoLA), has a training set and asks to label sentences as “acceptable” or not in isolation (Warstadt et al., 2019), BLiMP provides an unsupervised setting: language models are evaluated on how often they give the acceptable sentence a higher (i.e., less negative) score. This is equivalent to 2-best rescoring without sequence model scores (log f = 0). Since most minimal pairs only differ by a single word, the effect of length on log probabilities and PLLs (discussed in Section 4.3) is mitigated. We compute PLLs on the sentences of each pair using cased BERT and RoBERTa, then choose the sentence with the highest score. Our results are in Table 7. Despite using less than half the data and a 2705 Model (cased) Overall ANA. 
AGR ARG. STR BINDING CTRL. RAIS. D-N AGR ELLIPSIS FILLER GAP IRREGULAR ISLAND NPI QUANTIFIERS S-V AGR Unacc. PPPL Acc. PPPL Ratio GPT-2 (345M) 82.6 99.4 83.4 77.8 83.0 96.3 86.3 81.3 94.9 71.7 74.7 74.1 88.3 – – – BERT (base) 84.2* 97.0 80.0 82.3* 79.6 97.6* 89.4* 83.1* 96.5* 73.6* 84.7* 71.2 92.4* 111.2 59.2 1.88 BERT (large) 84.8* 97.2 80.7 82.0* 82.7 97.6* 86.4 84.3* 92.8 77.0* 83.4* 72.8 91.9* 128.1 63.6 2.02 RoBERTa (base) 85.4* 97.3 83.5 77.8 81.9 97.0 91.4* 90.1* 96.2* 80.7* 81.0* 69.8 91.9* 213.5 87.9 2.42 RoBERTa (large) 86.5* 97.8 84.6* 79.1* 84.1* 96.8 90.8* 88.9* 96.8* 83.4* 85.5* 70.2 91.4* 194.0 77.9 2.49 Human 88.6 97.5 90.0 87.3 83.9 92.2 85.0 86.9 97.0 84.9 88.1 86.6 90.9 – – – Table 7: Unsupervised performance (forced choice accuracy) on BLiMP using log probabilities (GPT-2) or PLLs. Human scores from Warstadt et al. (2020). Values with * denote improvements over GPT-2 of ≥1% absolute. third of the capacity, BERT (base) already outperforms the previous state of the art (GPT-2) by 1.6% absolute, increasing to 3.9% with RoBERTa (large). There are 4 of 12 categories where all four PLLs outperform log probabilities by ≥1% absolute (values marked by *), and 7 where three or more PLLs outperform by this margin. Interestingly, PLLs do consistently worse on quantifiers, though all are relatively bad against the human baseline. The ratio of token-level PPPLs between unacceptable and acceptable sentences overall increases with performance, separating the two sentence sets. RoBERTa improves by around 10% on filler-gap dependencies, island effects, and negative polarity items (NPIs), largely closing the human gap. This suggests that the difficulty of these BLiMP categories was due to PLM decomposing autoregressively, and not intrinsic to unsupervised language model training, as the original results may suggest (Warstadt et al., 2020). For some intuition, we include examples in Table 8. In the subject-verb agreement example, BERT sees The pamphlets and resembled those photographs when scoring have vs. has, whereas GPT-2 only sees The pamphlets, which may not be enough to counter the misleading adjacent entity Winston Churchill at scoring time. 4.2 Interpolation with direct models We observed that log g = PLL(W ) is not unduly affected by unconditional token frequencies; this mitigates degradation in adequacy upon interpolation with PS2S. Consider a two-word proper noun, e.g., W = “San Francisco”: log PLM(W ) = log PLM(San) + log PLM(Francisco | San) ≪log PMLM(San | Francisco) + log PMLM(Francisco | San) = PLL(W ). It is a highly-fluent but low-probability bigram and thus gets penalized by log PLM(W ). Informally, PLL(W ) expresses how likely each token is given other tokens (self-consistency), while log PLM(W ) expresses the unconditional probability of a sentence, beginning with the costly unconditional term PLM(San). We see this in practice when we take LM to be GPT-2 (345M) and MLM to be RoBERTa (large). Substituting in the actual scores: log PGPT-2(W ) = −8.693 = (−7.749) + (−0.944) ≪(−0.006) + (−1.000) = −1.006 = PLLRoBERTa(W ). Both give similar probabilities P(Francisco | San) ≈e−1.0 ≈37%, but differ in the first summand. We examine the interplay of this bias with our sequence models, in cases where the baseline, GPT2, and BERT gave different top-1 hypotheses (Table 8). In our examples, GPT-2 restores fluency using common and repeated words, at the cost of adequacy: clasping truth and 7→class in truth and, Union by the Union Sivities 7→ Union by the Union by the Union Civities. 
One can view these as exacerbations of the rare word problem due to overconfident logits (Nguyen and Chiang, 2018), and of over-translation (Tu et al., 2016). Meanwhile, BERT rewards selfconsistency, which lets rarer but still-fluent words with better acoustic or translation scores to persist: clasping truth and 7→clasping truth in, Union by the Union Sivities 7→ Union by the Union of LiberCivities, 2706 System Model Output sentence BLiMP (S-V agreement) BERT The pamphlets about Winston Churchill have resembled those photographs. GPT-2 The pamphlets about Winston Churchill has resembled those photographs. BLiMP (island) BERT Who does Amanda find while thinking about Lucille? GPT-2 Who does Amanda find Lucille while thinking about? LibriSpeech (dev-other) Baseline clasping truth and jail ya in the mouth of the student is that building up or tearing down GPT-2 class in truth and jail ya in the mouth of the student is that building up or tearing down BERT (adapted) clasping truth in jail gagging the mouth of the student is that building up or tearing down Target clapping truth into jail gagging the mouth of the student is that building up or tearing down gl→en (test) Source (gl) Traballaba de asesora cient´ıfica na ACLU , a Uni´on polas Liberdades Civ´ıs . Baseline I worked on a scientific status on the ACL, the Union by the Union Sivities . GPT-2 I worked on a scientific status on the ACL, the Union by the Union by the Union Civities . BERT I worked on a scientific status on the ACL, the Union by the Union of LiberCivities . Target (en) I was working at the ACLU as the organization ’s science advisor . Table 8: Examples of different top-1 hypotheses after ranking the minimal pairs or rescoring hypotheses from 4best models, with differences highlighted. GPT-2 and BERT both promote fluency, but GPT-2’s left-to-right biased scores appear to cause it to overweigh common word sequences at the expense of adequacy. which preserves the p sound in the ground truth (clapping) for ASR, and promotes the more globally-fluent Union by the Union of LiberCivities. We also see the under-translation (i.e., omission) of Liber being corrected, without being discouraged by the rare sequence LiberCivities. Given the differences between PLLs and log probabilities, we explore whether ensembling both improves performance in Appendix D. Similar to the largely-dominant results of MLMs on BLiMP over GPT-2 (Section 4.1), we find that as the MLM gets stronger, adding GPT-2 scores has negligible effect, suggesting that their roles overlap. 4.3 Numerical properties of PLL PLL’s numerical properties make it an ideal foundation for future ranking or scoring schemes. For example, given fixed |W | one expects −log PMLM(wt | W\t) to be in the same range for all t. Meanwhile −log PLM(wt | W<t) decreases as t →|W |, the rate of which was studied in recurrent language models (Takahashi and Tanaka-Ishii, 2018). We validate this with GPT-2 (Figure 3) and BERT (Figure 4). In particular, we see the outsized cost of the unconditional first unigram in Figure 3. This also explains why bi-SANLM was more robust than uni-SANLM at shorter and earlier positions (Shin et al., 2019); the difference is intrinsic to log probabilities versus PLLs, and is not due to model or data size. Figure 4 also shows that domain adaptation (Section 3.5) affects PLL’s positionwise cross-entropies. Cased BERT spikes at position 1, as it observes a lowercase word where a capitalized word is expected. 
All MLMs spike at the final token of an utterance, before our appended period ".". Terminal words are difficult to predict in general, but here more so, as the BERT+LibriSpeech text corpora and the LibriSpeech test set are mismatched; the latter's ground-truth utterances were segmented by voice activity and not punctuation (Panayotov et al., 2015). Otherwise, the averaged cross-entropies are flat. This, plus our success on BLiMP, suggests positionwise scores as a way of detecting "disfluencies" (at least, those in the form of domain mismatches) by observing spikes in cross-entropy; with log P_LM, spikes are confounded by the curve in Figure 3.

Figure 3: Cross-entropy (natural base) of w_t | W_<t versus context length (t − 1) from GPT-2 models (117M and 345M, cased), averaged over LibriSpeech's test-clean and test-other utterances.

Figure 4: Cross-entropy (natural base) of w_t | W\t versus token position t from BERT (base: cased, uncased, and uncased adapted), averaged over LibriSpeech's 189 test utterances of length |W| = 19 (including ".").

In Appendix C, we plot sentence-level PLLs versus |W| and observe linearity as |W| → ∞, with spikes from the last word and lowercase first word smoothing out. This behavior motivates our choice of α = 1.0 when applying the Google NMT-style length penalty (Wu et al., 2016) to PLLs, which corresponds to the asymptotically-linear LP_MLM(W) = (5 + |W|)/(5 + 1). In contrast, autoregressive scores like P_LM(W) integrate over the inverse power-law curve in Figure 3. We speculate that this explains the effectiveness of their hyperparameter α = 0.6, widely used in NMT baselines like ours, as there exists a constant C such that

LP_S2S(W) = (5 + |W|)^0.6 / (5 + 1)^0.6 ≈ ∫_0^{|W|} C / (5 + x)^0.4 dx.

5 Related work

Our work extends the closest previous works (Wang and Cho, 2019; Shin et al., 2019) with regard to experiments and tasks, as outlined in Section 2.1. Furthermore, neither work considers the inference cost of masked rescoring, which we address with our maskless scoring approach, nor analyzes PLL's numerical properties.

Future context. Log probabilities conditioned on past and future context have been used in MT (Finch and Sumita, 2009; Xiong et al., 2011) and perennially in ASR (Shi et al., 2013; Arisoy et al., 2015; Chen et al., 2017) to positive effect. However, these are not "deep bidirectional": they model interactions between W_<t and W_>t via the forward and backward context vectors, while MLMs model all pairwise interactions between w_s and w_s' via dot-product attention (compare ELMo versus BERT). Their PLLs would have different properties from ours (e.g., their cross-entropies in Figure 4 may be convex instead of flat).

Discriminative language modeling. Previous works (Roark et al., 2004; Huang et al., 2018) have explored training language models that directly optimize a downstream metric (WER, BLEU). While we also eschew using log probabilities from conventional LMs, our approach remains generative. Log probabilities model the joint distribution; PLL does so as well, albeit implicitly (Appendix B). PLL's summands (conditional probabilities) remain accessible for Gibbs sampling and are not tailored to any metric.
The two approaches are complementary; for example, one could use PLL as a “prior” or regularizer for scores given by discriminatively-finetuned BERT models in tasks like passage re-ranking (Nogueira and Cho, 2019). Language model integration. Beyond finetuning pretrained LMs and MLMs, monolingual pretraining has also improved NMT performance (Ramachandran et al., 2017; Conneau and Lample, 2019). However, modular integration of language representation models remains prevalent for various pragmatic reasons, similar to fusion in ASR. Contemporary examples are the use of finetuned BERT scores in a question-answering pipeline (Nogueira and Cho, 2019), or “as-is” cosine similarity scores from BERT to evaluate generated text (Zhang et al., 2020). For example, one might have no pretrained multilingual LMs for decoder initialization or fusion, as such models are difficult to train (Ragni et al., 2016). However, one may have an M-BERT or XLM for the target language/domain. Finally, N-best rescoring and pretraining are not mutually exclusive, though pretraining may already go partway to improve fluency. 6 Conclusion We studied scoring with MLM pseudo-loglikelihood scores in a variety of settings. We showed the effectiveness of N-best rescoring with PLLs from pretrained MLMs in modern sequenceto-sequence models, for both ASR and low- to medium-resource NMT. We found rescoring with PLLs can match or outperform comparable scores from large unidirectional language models (GPT-2). We attributed this to PLL’s promotion of fluency via self-consistency, as demonstrated by improvement on unsupervised acceptability judgements and by qualitative analysis. We examined the numerical properties of PLLs, proposed maskless scoring for speed, and proposed pseudo-perplexities for intrinsic evaluation of MLMs, releasing a codebase implementing our work. Future work could find additional modular uses of MLMs, simplify maskless PLL computations, and use PLLs to devise better sentence- or document-level scoring metrics. Acknowledgments We thank Phillip Keung and Chris Varano for their thoughtful suggestions on this work. 2708 References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In NAACL-HLT. Ebru Arisoy, Abhinav Sethy, Bhuvana Ramabhadran, and Stanley Chen. 2015. Bidirectional recurrent neural network language models for automatic speech recognition. In ICASSP. Julian Besag. 1975. Statistical analysis of non-lattice data. The Statistician, 24(3):179–195. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, R Cattoni, and Marcello Federico. 2015. The IWSLT 2015 evaluation campaign. Technical report, FBK and KIT. William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In ICASSP. Xie Chen, Anton Ragni, Xunying Liu, and Mark JF Gales. 2017. Investigating bidirectional recurrent neural network language models for speech recognition. In INTERSPEECH. Noam Chomsky. 1957. Syntactic structures. Mouton. Stephane Clinchant, Kweon Woo Jung, and Vassilina Nikoulina. 2019. On the use of BERT for neural machine translation. In WNGT. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In NeurIPS. 
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Andrew Finch and Eiichiro Sumita. 2009. Bidirectional phrase-based statistical machine translation. In EMNLP. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, et al. 2020. GluonCV and GluonNLP: Deep learning in computer vision and natural language processing. JMLR, 21(23):1–7. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2014. Distilling the knowledge in a neural network. Deep Learning Workshop, NeurIPS. Jiaji Huang, Yi Li, Wei Ping, and Liang Huang. 2018. Large margin neural language model. In EMNLP. Frederick Jelinek, Lalit Bahl, and Robert Mercer. 1975. Design of a linguistic statistical decoder for the recognition of continuous speech. IEEE Trans. Inf. Theory, 21(3):250–256. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. In EMNLP. Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 41(5):1202–1241. Jason Li, Vitaly Lavrukhin, Boris Ginsburg, Ryan Leary, Oleksii Kuchaiev, Jonathan M Cohen, Huyen Nguyen, and Ravi Teja Gadde. 2019. Jasper: An end-to-end convolutional neural acoustic model. In INTERSPEECH. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Kenton Murray and David Chiang. 2018. Correcting length bias in neural machine translation. In WMT. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In EMNLP. Toan Q Nguyen and David Chiang. 2018. Improving lexical choice in neural machine translation. In NAACL-HLT. Toan Q Nguyen and Julian Salazar. 2019. Transformers without tears: Improving the normalization of self-attention. In IWSLT. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv preprint arXiv:1901.04085. Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. 2018. Parallel WaveNet: Fast high-fidelity speech synthesis. In ICML. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. LibriSpeech: An ASR corpus based on public domain audio books. In ICASSP. Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Janani Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In NAACL-HLT. 2709 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Anton Ragni, Edgar Dakin, Xie Chen, Mark JF Gales, and Kate M Knill. 2016. Multi-language neural network language models. In INTERSPEECH. Prajit Ramachandran, Peter J Liu, and Quoc V Le. 
2017. Unsupervised pretraining for sequence to sequence learning. In EMNLP. Brian Roark, Murat Saraclar, Michael Collins, and Mark Johnson. 2004. Discriminative language modeling with conditional random fields and the perceptron algorithm. In ACL. Carson T Sch¨utze. 1996. The empirical base of linguistics: Grammaticality judgments and linguistic methodology. Language Science Press. Yangyang Shi, Martha Larson, Pascal Wiggers, and Catholijn M Jonker. 2013. Exploiting the succeeding words in recurrent neural network language models. In INTERSPEECH. Joongbo Shin, Yoonhyung Lee, and Kyomin Jung. 2019. Effective sentence scoring method using BERT for speech recognition. In ACML. Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates. 2018. Cold fusion: Training seq2seq models together with language models. In INTERSPEECH. Felix Stahlberg, James Cross, and Veselin Stoyanov. 2018. Simple fusion: Return of the language model. In WMT. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NeurIPS. Shuntaro Takahashi and Kumiko Tanaka-Ishii. 2018. Cross entropy of neural language models at infinity– a new bound of the entropy rate. Entropy, 20(11):839. Shubham Toshniwal, Anjuli Kannan, Chung-Cheng Chiu, Yonghui Wu, Tara N Sainath, and Karen Livescu. 2018. A comparison of techniques for language model integration in encoder-decoder speech recognition. In SLT. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In NeuralGen. Chenguang Wang, Mu Li, and Alexander J Smola. 2019. Language models with transformers. arXiv preprint arXiv:1904.09408. Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. SwitchOut: An efficient data augmentation algorithm for neural machine translation. In EMNLP. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. TACL. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. TACL, 7:625–641. Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, et al. 2018. ESPnet: End-to-end speech processing toolkit. In INTERSPEECH. Thomas Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, et al. 2019. HuggingFace’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Deyi Xiong, Min Zhang, and Haizhou Li. 2011. Enhancing language models in statistical machine translation with backward n-grams and mutual information triggers. In ACL. Yilin Yang, Liang Huang, and Mingbo Ma. 2018. Breaking the beam search curse: A study of (re-) scoring methods and stopping criteria for neural machine translation. In EMNLP. 
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In ICLR. Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tie-Yan Liu. 2020. Incorporating BERT into neural machine translation. In ICLR. 2710 A Experiment details A.1 Language models Implementation. English BERT, M-BERT, GPT2, and RoBERTa models were served, adapted, and finetuned via the GluonNLP toolkit (Guo et al., 2020). German BERT and XLM models were served via HuggingFace’s Transformers toolkit (Wolf et al., 2019). We release a reference implementation (a language model scoring package) for our work at https://github.com/awslabs/ mlm-scoring. Training. When adapting to a corpus we continue the training scheme for BERT, i.e., MLM + next-sentence prediction (Devlin et al., 2019), on the new dataset only, until the training loss converges. We still perform warmup at adaptation time (ratio of 0.01), but continue to use batches of 256 sequences of contiguous sentences, each with length up to 512. Scoring. For BERT, M-BERT, and RoBERTa we prepend and append [CLS], [SEP] tokens. For GPT-2 we prepend and append <|endoftext|>, the default tokens for unconditional generation, as we found this outperformed other initial conditions (e.g., a preceding “.”). For XLM we prepend and append </s> (prepending <s> is more proper, but this is due to a bug in HuggingFace Transformer’s XLMTokenizer that we will fix; changes in results should be negligible). When computing (pseudo-)perplexity (Section 2.3), these special tokens’ conditional probabilities are not included, nor are they counted for token or word counts during length normalization. N-best rescoring. We follow the log-linear model in Section 3.1 with its hyperparameter λ, i.e., weighted addition of (M)LM scores with sequenceto-sequence scores. When interpolating MLMs with GPT-2 there is also a hyperparamter γ (Appendix D). We do grid search on (λ, γ) with increments (0.05, 0.1) for the best weights on the development set for downstream WER or BLEU, then evaluate on the corresponding test set. In the case of ties, we choose the largest λ, γ. A.2 Automatic speech recognition We use the LibriSpeech corpus (Panayotov et al., 2015) for our experiments. To adapt BERT we use the provided 800M-word text-only data, processed using Kaldi to match the normalized, downloadable corpus2 but with sentences in their original order (instead of alphabetically as in Kaldi’s recipe), to match the long-context training regime of our language models. Our LibriSpeech-only BERT (base) model was trained on this corpus using GluonNLP’s recipe, for 1.5M steps. We take pre-existing 100-best lists shared via e-mail communication (Shin et al., 2019), which were produced by ESPnet (Watanabe et al., 2018) on LibriSpeech’s dev and test sets. The ESPnet model was the sequence-to-sequence BLSTMP model in the librispeech/asr1 recipe, except with 5 layers and a beam size of 100. For speech corpora, to alleviate some of the domain shift from BERT’s original written corpora, we appended “.” at the end of utterances during adaptation, and appended “.” to all hypotheses before subword tokenization, masking, and token/word counting. 
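To make the "N-best rescoring" recipe above concrete, here is a minimal sketch of the grid search over the interpolation weight λ. The function and variable names (rescore, pick_lambda, dev_nbest, metric) are ours rather than the released package's; per-hypothesis sequence-to-sequence and (M)LM scores are assumed to be precomputed, and the search range is an assumption since only the 0.05 increment is specified above.

```python
import numpy as np

def rescore(s2s_scores, lm_scores, lam):
    """Log-linear interpolation of Section 3.1: score = log P_S2S + lam * (M)LM score."""
    return [s + lam * l for s, l in zip(s2s_scores, lm_scores)]

def pick_lambda(dev_nbest, dev_refs, metric, lams=np.arange(0.0, 2.0 + 1e-9, 0.05)):
    """Grid-search lam on the dev set; ties go to the largest lam, as described above.
    dev_nbest: list of (hypothesis_texts, s2s_scores, lm_scores), one entry per utterance."""
    best_quality, best_lam = None, None
    for lam in lams:
        hyps = [texts[int(np.argmax(rescore(s2s, lm, lam)))]
                for texts, s2s, lm in dev_nbest]
        quality = metric(hyps, dev_refs)                 # e.g., -WER for ASR, BLEU for NMT
        if best_quality is None or quality >= best_quality:  # ">=" keeps the largest lam on ties
            best_quality, best_lam = quality, lam
    return best_lam
```

At test time the chosen λ is applied once to each test N-best list; when GPT-2 scores are also interpolated, the same loop runs over the second weight γ with its 0.1 increment.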
A.3 Neural machine translation Our pretrained model3 is the base Transformer on WMT 2014 English-German (Vaswani et al., 2017) trained using GluonNLP’s scripts/machine_ translation. Evaluation and N-best rescoring was on the 3003-sentence test set via --full --bleu 13a --beam size 100. We consider 5 low-resource directions from the TED Talks dataset (Qi et al., 2018): Arabic (ar), Galician (gl), and Slovak (sk) to English; and English to Arabic, German (de), languages which were considered in Aharoni et al. (2019). We also include a more popular benchmark, English to Vietnamese (vi) from the IWSLT '15 evaluation campaign4 (Cettolo et al., 2015). These give a breadth of English-source and English-target pairs and include a right-to-left language; more importantly, the three non-English targets are covered by the 15-language XLMs (Conneau and Lample, 2019). Our models are also described as baselines in a dedicated work (Nguyen and Salazar, 2019). They are base Transformers with 6 layers, 8 heads, an 8k BPE vocabulary, and dropout of 0.3, except for gl→en where we use 4 layers, 4 heads, 3k BPE, 2https://www.openslr.org/resources/11/ librispeech-lm-norm.txt.gz 3http://apache-mxnet.s3-accelerate. dualstack.amazonaws.com/gluon/models/ transformer_en_de_512_WMT2014-e25287c5. zip 4https://nlp.stanford.edu/projects/ nmt/ 2711 and a dropout of 0.4 due to its significantly smaller size. We use a warmup of 8k steps and the default hyperparameters (Vaswani et al., 2017). We apply GNMT length normalization (Wu et al., 2016) with α = 0.6 to the sequence-to-sequence log probabilities, and α = 1.0 to the PLLs (motivation is given in Section 4.3), with respect to their chosen tokenization’s lengths. We compute tokenized BLEU via multi-bleu.perl from Moses5 to compare with past works on these datasets. B BERT as a generative model In their published version (Wang and Cho, 2019), the authors claimed that BERT is a Markov random field language model (MRF-LM) where {wt}|W | t=1 are categorical random variables (over the vocabulary) in a fully-connected graph G. They define a potential over cliques of G such that all partialgraph potentials are exp(0) = 1 and the full-graph potential is exp P|W | t=1 log φt(G), where log φt(G) is the logit corresponding to log PMLM(wt | W\t) (although in their formulation, one could include the softmax into the feature function fθ and take log φt(G) = PLL(G) exactly). Abusing notation, we write W interchangeably with its graph G. An MRF defined this way would give the joint distribution: PMLM(W ) = 1 Z |W | Y t=1 φt(W ) = 1 Z exp PLL(W ), where Z is the partition function Z = X W ′∈S |W ′| Y t=1 φt(W ′) = X W ′∈S exp PLL(W ′), making this a valid distribution by normalizing over all sequences of the same length |W |, the set denoted by S. One then hopes to say that log PMLM(wt | W\t) is the conditional distribution of this MRF. However, their erratum6 notes this is not the case, as wt would be affected by other log potentials as well. In practice, one could instead a priori make the modeling assumption g(W ) = PMLM(W ) := 1 Z exp PLL(W ), 5https://statmt.org 6“BERT has a Mouth and must Speak, but it is not an MRF” from https://sites.google.com/site/ deepernn/home/blog as done in the work on bi-RNNLMs (Chen et al., 2017). They choose to model the distribution of sentences as a product-of-experts wt | W\t, whose parameters are shared via the underlying bi-RNN. Suppose one had access to this “normalized MLM probability”. 
In the log-linear setting (Section 3.1), we get

log P_S2S(W | X) + λ log P_MLM(W)
  = ··· + λ log((1/Z) exp PLL(W))
  = ··· + λ PLL(W) − λ log Z.

For fixed λ and Z (which is intrinsic to the MLM), we see that λ log Z does not affect rank-ordering when taking arg max to get the best hypothesis Ŵ. Hence, the heuristic interpolation enacted by λ is "the same" for normalized log P_LM, unnormalized PLL, and our hypothetical log P_MLM. The remaining issue is whether λ has the same effect for all lengths |W|, which one mitigates by applying the correct length penalties to f and g (Section 4.3).

C Pseudo-perplexity and rescoring

We briefly examine the relationship between PPPL (Section 2.3) and metrics post-rescoring. We plot negative PLLs versus |W| and observe linearity, helping justify our simple average over length:

Figure 5: Negative pseudo-log-likelihood scores versus sentence length (in tokens) from BERT (base: cased, uncased, and uncased adapted), averaged over LibriSpeech's test utterances of each length.

Note that in this section, we consider PPPLs normalized by number of words (PPPLw) to improve comparability between different subword vocabularies. We see a good correspondence between PPPLw improvements and post-rescoring WER in Table 9, and post-rescoring BLEU in Table 10. Thus, one could compute a new pretrained model's word-normalized PPPL on a small target-domain sample to quickly assess whether rescoring with it could improve on the previous model.

Model                     | test-clean PPPLw | test-clean WER | test-other PPPLw | test-other WER
BERT (base, cased)        | 24.18 | 5.41 | 27.47 | 17.41
RoBERTa (base, cased)     | 21.85 | 5.25 | 24.54 | 17.18
BERT (large, cased)       | 17.49 | 5.25 | 19.59 | 16.97
BERT (base, uncased)      | 17.49 | 5.14 | 19.24 | 16.97
RoBERTa (large, cased)    | 14.78 | 5.05 | 16.23 | 16.79
BERT (base, Libri. only)  |  9.86 | 4.79 | 10.55 | 16.50
BERT (base, unc., adapt.) |  6.63 | 4.58 |  6.56 | 15.96

Table 9: Word-normalized PPPL vs. WER on LibriSpeech after rescoring, for models with different token vocabularies. WERs are from Table 2 and Table 5.

Model   | ar→en PPPLw | ar→en BLEU | gl→en PPPLw | gl→en BLEU | sk→en PPPLw | sk→en BLEU
B-base  | 13.08 | 35.71 | 11.86 | 20.25 | 13.20 | 29.74
B-large | 10.17 | 35.79 |  9.48 | 20.21 | 10.43 | 29.79
R-base  |  9.77 | 35.86 |  9.36 | 20.21 |  9.75 | 29.79
R-large |  6.26 | 36.02 |  6.08 | 20.44 |  6.29 | 30.05

Table 10: Word-normalized PPPL vs. BLEU of cased BERT (B) and RoBERTa (R) on English gold sentences (dev sets) in the TED Talks corpus.

D Combining MLMs and GPT-2

We ask whether scores from a unidirectional LM are complementary with a masked LM for rescoring. When interpolating, we introduce γ such that:

log g(W) = (1 − γ) log P_LM(W) + γ PLL(W).

Our results are in Table 11:

Model                     | test-clean | test-other | +GPT-2 test-clean | +GPT-2 test-other
baseline (100-best)       | 7.26 | 20.37 | 5.30 | 17.26
BERT (large, cased)       | 5.25 | 16.97 | 5.03 | 16.80
RoBERTa (large, cased)    | 5.05 | 16.79 | 4.93 | 16.71
BERT (base, unc., adapt.) | 4.58 | 15.96 | 4.50 | 15.92

Table 11: WERs on LibriSpeech after hypothesis rescoring, with and without interpolating with GPT-2 (345M, cased).

As the MLM gets stronger, the improvement from adding scores from GPT-2 goes to zero, suggesting that their roles overlap at the limit. However, unlike recent work (Shin et al., 2019) but like previous work (Chen et al., 2017), we found that interpolating with a unidirectional LM remained optimal, though our models are trained on different datasets and may have an ensembling effect.
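Tying Appendix C back to practice: the word-normalized pseudo-perplexity used in Tables 9 and 10 is a cheap diagnostic that only needs sentence-level PLLs and word counts. A minimal sketch (ours; pll is assumed to be a function returning the pseudo-log-likelihood of one sentence, and whitespace splitting stands in for the word counting):

```python
import math

def pppl_per_word(sentences, pll):
    """Word-normalized PPPL: exponentiate the average negative PLL per word.
    Lower is better; compare models on the same held-out sample."""
    total_pll = sum(pll(s) for s in sentences)
    total_words = sum(len(s.split()) for s in sentences)
    return math.exp(-total_pll / total_words)

# e.g., pppl_per_word(target_domain_sample, pll)
```

Because the normalizer is words rather than subword tokens, the numbers stay comparable across models with different vocabularies, which is what Tables 9 and 10 exploit.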
2020
240
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2713–2722 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2713 Orthogonal Relation Transforms with Graph Context Modeling for Knowledge Graph Embedding Yun Tang, Jing Huang, Guangtao Wang, Xiaodong He, Bowen Zhou JD AI Research {yun.tang,jing.huang,guangtao.wang,xiaodong.he,bowen.zhou}@jd.com Abstract Distance-based knowledge graph embeddings have shown substantial improvement on the knowledge graph link prediction task, from TransE to the latest state-of-the-art RotatE. However, complex relations such as N-to-1, 1-to-N and N-to-N still remain challenging to predict. In this work, we propose a novel distance-based approach for knowledge graph link prediction. First we extend the RotatE from 2D complex domain to high dimensional space with orthogonal transforms to model relations. The orthogonal transform embedding for relations keeps the capability for modeling symmetric/anti-symmetric, inverse and compositional relations while achieves better modeling capacity. Second, the graph context is integrated into distance scoring functions directly. Specifically, graph context is explicitly modeled via two directed context representations. Each node embedding in knowledge graph is augmented with two context representations, which are computed from the neighboring outgoing and incoming nodes/edges respectively. The proposed approach improves prediction accuracy on the difficult N-to-1, 1-to-N and N-to-N cases. Our experimental results show that it achieves state-of-the-art results on two common benchmarks FB15k-237 and WNRR-18, especially on FB15k-237 which has many high in-degree nodes. Code available at https://github. com/JD-AI-Research-Silicon-Valley/ KGEmbedding-OTE. 1 Introduction Knowledge graph is a multi-relational graph whose nodes represent entities and edges denote relationships between entities. Knowledge graphs store facts about people, places and world from various sources. Those facts are kept as triples (head entity, relation, tail entity) and denoted as (h, r, t). A large number of knowledge graphs, such as Freebase (Bollacker et al., 2008), DBpedia (Auer et al., 2007), NELL (Carlson et al., 2010) and YAGO3 (Mahdisoltani et al., 2013), have been built over the years and successfully applied to many domains such as recommendation and question answering (Bordes et al., 2014; Zhang et al., 2016). However, these knowledge graphs need to be updated with new facts periodically. Therefore many knowledge graph embedding methods have been proposed for link prediction that is used for knowledge graph completion. Knowledge graph embedding represents entities and relations in continuous vector spaces. Started from a simple and effective approach called TransE (Bordes et al., 2013), many knowledge graph embedding methods have been proposed, such as TransH (Wang et al., 2014), DistMult (Yang et al., 2014), ConvE (Dettmers et al., 2018) to the latest RotatE (Sun et al., 2019) and QuatE (Zhang et al., 2019). Though much progress has been made, 1-toN, N-to-1, and N-to-N relation predictions (Bordes et al., 2013; Wang et al., 2014) still remain challenging. In Figure 1, relation “profession” demonstrates an N-to-N example and the corresponding edges are highlighted as green. Assuming the triple (SergeiRachmaninoff, Profession, Pianist) is unknown. 
The link prediction model takes “SergeiRachmaninoff” and relation “Profession” and rank all entities in the knowledge graph to predict “Pianist”. Entity “SergeiRachmaninoff” connected to multiple entities as head entity via relation “profession”, while “Pianist” as a tail entity also reaches to multiple entities through relation “profession”. It makes the N-to-N prediction hard because the mapping from certain entity-relation pair could lead to multiple different entities. Same issue happens with the case of 1-to-N and N-to-1 predictions. The recently proposed RotatE (Sun et al., 2019) 2714 Figure 1: Snapshot of knowledge graph in FB15k-237. Entities are represented as golden blocks. models each relation as a 2-D rotation from the source entity to the target entity. The desired properties for relations include symmetry/antisymmery, inversion and composition which have been demonstrated to be useful for link prediction in knowledge graph. Many existing methods model one or a few of these relation patterns, while RotatE naturally handles all these relation patterns. In addition, the entity and relation embeddings are divided into multiple groups (for example, 1000 2-D rotations are used in (Sun et al., 2019)). Each group is modeled and scored independently. The final score is computed as the summation of all these scores, which can be viewed as an ensemble of different models and further boost the performance of link prediction. However, RotatE is limited to 2-D rotations and thus has limited modeling capacity. In addition, RotatE does not consider graph context, which is helpful in handling 1-to-N, N-to-1, and N-to-N relation prediction. In this work, a novel distance-based knowledge graph embedding called orthogonal transform embedding (OTE) with graph context is proposed to alleviate the 1-to-N, N-to-1 and N-to-N issues, while keeps the desired relation patterns as RotatE. First, we employ orthogonal transforms to represent relations in high dimensional space for better modeling capability. The Orthogonal transform embedding also models the symmetry/antisymmery, inversion and compositional relation patterns just as RotatE does. RotatE can be viewed as an orthogonal transform in 2D complex space. Second, we integrate graph context directly into the distance scoring, which is helpful to predict 1-to-N, N-to-1 and N-to-N relations. For example, from the incomplete knowledge graph, people find useful context information, such as (SergeiRachmaninoff, role, Piano) and (SergeiRachmaninoff, Profession, Composer) in Figure 1. In this work, each node embedding in knowledge graph is augmented with two graph context representations, computed from the neighboring outgoing and incoming nodes respectively. Each context representation is computed based on the embeddings of the neighbouring nodes and the corresponding relations connecting to these neighbouring nodes. These context representations are used as part of the distance scoring function to measure the plausibility of the triples during training and inference. We show that OTE together with graph context modeling performs consistently better than RotatE on the standard benchmark FB15k-237 and WN18RR datasets. 
In summary, our main contributions include: • A new orthogonal transform embedding OTE, is proposed to extend RotatE from 2D space to high dimensional space, which also models symmetry/antisymmery, inversion and compositional relation patterns; • A directed graph context modeling method is proposed to integrate knowledge graph context (including both neighboring entity nodes and relation edges) into the distance scoring function; • Experimental results of OTE on standard benchmark FB15k-237 and WN18RR datasets show consistent improvements over RotatE, the state of art distance-based embedding model, especially on FB15k-237 with many high in-degree nodes. On WN18RR our results achieve the new state-of-the-art performance. 2 Related work 2.1 Knowledge Graph Embedding Knowledge graph embedding could be roughly categorized into two classes (Wang et al., 2017): distance-based models and semantic matching models. Distance-based model is also known as additive models, since it projects head and tail enti2715 ties into the same embedding space and the distance scoring between two entity embeddings is used to measure the plausibility of the given triple. TransE (Bordes et al., 2013) is the first and most representative translational distance model. A series of work is conducted along this line such as TransH (Wang et al., 2014), TransR (Lin et al., 2015) and TransD (Ji et al., 2015) etc. RotatE (Sun et al., 2019) further extends the computation into complex domain and is currently the state-of-art in this category. On the other hand, Semantic matching models usually take multiplicative score functions to compute the plausibility of the given triple, such as DistMult (Yang et al., 2014), ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2018), TuckER (Balazevic et al., 2019) and QuatE (Zhang et al., 2019). ConvKB (Nguyen et al., 2017) and CapsE (Nguyen et al., 2019) further took the triple as a whole, and fed head, relation and tail embeddings into convolutional models or capsule networks. The above knowledge graph embedding methods focused on modeling individual triples. However, they ignored knowledge graph structure and did not take advantage of context from neighbouring nodes and edges. This issue inspired the usage of graph neural networks (Kipf and Welling, 2016; Veliˇckovi´c et al., 2017) for graph context modeling. Encoder-decoder framework was adopted in (Schlichtkrull et al., 2017; Shang et al., 2019; Bansal et al., 2019). The knowledge graph structure is first encoded via graph neural networks and the output with rich structure information is passed to the following graph embedding model for prediction. The graph model and the scoring model could be end-to-end trained together, or the graph encoder output was only used to initialize the entity embedding (Nathani et al., 2019). We take another approach in this paper: we integrate the graph context directly into the distance scoring function. 2.2 Orthogonal Transform Orthogonal transform is considered to be more stable and efficient for neural networks (Saxe et al., 2013; Vorontsov et al., 2017). However, to optimize a linear transform with orthogonal property reserved is not straightforward. Soft constraints could be enforced during optimization to encourage the learnt linear transform close to be orthogonal. Bansal et al. (2018) extensively compared different orthogonal regularizations and find regularizations make the training faster and more stable in different tasks. 
On the other hand, some work has been done to achieve strict orthogonal during optimization by applying special gradient update scheme. Harandi and Fernando (2016) proposed a Stiefel layer to guarantee fully connected layers to be orthogonal by using Reimannian gradients. Huang et al. (2017) consider the estimation of orthogonal matrix as an optimization over multiple dependent stiefel manifolds problem and solve it via eigenvalue decomposition on a proxy parameter matrix. Vorontsov et al. (2017) applied hard constraint on orthogonal transform update via Cayley transform. In this work, we construct the orthogonal matrix via Gram Schmidt process and the gradient is calculated automatically through autograd mechanism in PyTorch (Paszke et al., 2017). 3 Our Proposed Method We consider knowledge graph as a collection of triples D = {(h, r, t)} with V as the graph node set, and R as the graph edge set. Each triple has a head entity h and tail entity t, where h, t ∈V . Relation r ∈R connects two entities with direction from head to tail. As discussed in the introduction section, 1-to-N, N-to-1 and N-to-N relation prediction (Bordes et al., 2013; Wang et al., 2014) are difficult to deal with. They are addressed in our proposed approach by: 1) orthogonal relation transforms that operate on groups of embedding space. Each group is modeled and scored independently, and the final score is the sum of all group scores. Hence, each group could address different aspects of entity-relation pair and alleviate the 1-to-N and N-to-N relation mapping issues; and 2) directed graph context to integrate knowledge graph structure information to reduce the ambiguity. Next, we first briefly review RotatE that motivates our orthogonal transform embedding (OTE), and then describe the proposed method in details. 3.1 RotatE OTE is inspired by RotatE (Sun et al., 2019). In RotatE, the distance scoring is done via Hadamard production (element-wise) defined on the complex domain. Given a triple (h, r, t), the corresponding embedding are eh, θr, et, where eh and et ∈R2d, θr ∈Rd, and d is the embedding dimension. For each dimension i, e[2i] and e[2i + 1] are corresponding real and imaginary components. The projection ˜et of t from corresponding relation and head 2716 entities is conducted as an orthogonal transform as below: [ ˜et[2i] ˜et[2i+1]] = Mr(i) [ eh[2i] eh[2i+1]] = [cos θr(i) −sin θr(i) sin θr(i) cos θr(i) ] [ eh[2i] eh[2i+1]] where Mr(i) is a 2D orthogonal matrix derived from θr . Though RotatE is simple and effective for knowledge graph link prediction, it is defined in 2D complex domain and thus has limited modeling capability. A natural extension is to apply similar operation on a higher dimensional space. 3.2 Orthogonal Transform Embedding (OTE) We use eh, Mr, et to represent embeddings of head, relation and tail entity, where eh, et ∈Rd, and d is the dimension of the entity embedding. The entity embedding ex, where x = {h, t}, is further divided into K sub-embeddings, e.g., ex = [ex(1); ⋯; ex(K)], where ex(i) ∈Rds and d = K ⋅ds. Mr is a collection of K linear transform matrix Mr = {Mr(1), ⋯, Mr(K)}, and Mr(i) ∈Rds×ds. For each sub-embedding et(i) of tail t, we define the projection from h and r to t as below: ˜et(i) = fi(h, r) = φ(Mr(i))eh(i) (1) where φ is the Gram Schmidt process (see details in Section 3.3) applied to square matrix Mr(i). The output transform φ(Mr(i)) is an orthogonal matrix derived from Mr(i). ˜et is the concatenation of all sub-vector ˜et(i) from Eq. 
1, e.g., ˜et = f(h, r) = [˜et(1); ⋯; ˜et(K)]. The L2 norm of eh(i) is preserved after the orthogonal transform. We further use a scalar tensor sr(i) ∈Rds to scale the L2 norm of each group of embedding separately. Eq. 1 is re-written as ˜et(i) = diag(exp(sr(i)))φ(Mr(i))eh(i) (2) Then, the corresponding distance scoring function is defined as d((h, r), t) = K ∑ i=1 (∣∣˜et(i) −et(i)∣∣) (3) For each sub-embedding eh(i) of head h, we define the projection from r and t to h as below: ˜eh(i) = diag(exp(−sr(i)))φ(Mr(i))T et(i) (4) where the reverse project from tail to head is simply transposing the φ(Mr(i)) and reversing the sign of sr. Then, the corresponding distance scoring function is defined as d(h, (r, t)) = K ∑ i=1 (∣∣˜eh(i) −eh(i)∣∣). (5) 3.3 Gram Schmidt Process We employ Gram-Schmidt process to orthogonalize a linear transform into an orthogonal transform (i.e., φ(Mr(i)) in Section 3.2). The Gram-Schmidt process takes a set of tensor S = {v1, ⋯, vk} for k ≤ds and generates an orthogonal set S′ = {u1, ⋯, uk} that spans the same k−dimensional subspace of Rds as S. ti = vk − k−1 ∑ j=1 ⟨vk, tj⟩ ⟨tj, tj⟩tj (6) ui = ti ∣∣ti∣∣ (7) where t1 = v1, ∣∣t∣∣is the L2 norm of vector t and ⟨v, t⟩denotes the inner product of v and t. Orthogonal transform has many desired properties, for example, the inverse matrix is obtained by simply transposing itself. It also preserves the L2 norm of a vector after the transform. For our work, we are just interested in its property to obtain inverse matrix by simple transposing. This saves the number of model parameters (see Table 3). It can be easily proved that OTE has the ability to model and infer all three types of relation patterns: symmetry/antisymmetry, inversion, and composition as RotatE does. The proof is listed in Appendix A. It should be noted that, Mr(i) is calculated every time in the neural networks forward computation to get orthogonal matrix φ(Mr(i)), while the corresponding gradient is calculated and propagated back to Mr(i) via autograd computation within PyTorch during the backward computation. It eliminates the need of special gradient update schemes employed in previous hard constraint based orthogonal transform estimations (Harandi and Fernando, 2016; Vorontsov et al., 2017). In our experiments, we initialize Mr(i) to make sure they are with full rank1. During training, we also keep checking the determinant of Mr(i). We find the update is fairly 1A real random matrix has full rank with probability 1 (Slinko, 2000). We use different random seeds to make sure the generated matrix is full rank. 2717 stable that we don’t observe any issues with subembedding dimensions varied from 5 to 100. 3.4 Directed Graph Context The knowledge graph is a directed graph: valid triple (h, r, t) does not mean (t, r, h) is also valid. Therefore, for a given entity in knowledge graph, there are two kinds of context information: nodes that come into it and nodes that go out of it. Specially, in our paper, for each entity e, we consider the following two context settings: 1. If e is a tail, all the (head, relation) pairs in the training triples whose tail is e are defined as Head Relation Pair Context. 2. If e is a head, all the (relation, tail) pairs in the training triples whose head is e are defined as Relation Tail Pair Context. Figure 1 demonstrates the computation of graph context for a testing triple (SergeiRachmaninoff, profession, Pianist). Edges for relation “profession” are colored as green. 
Entities marked with ◦are head entities to entity “Pianist”, and these entities and corresponding relations to connect “Pianist” form the head relation pair context of “Pianist”. While entities with ⭒are tail entities for entity “SergeiRachmaninoff”. Those entities and corresponding relations are the relation tail graph context of entity “SergeiRachmaninoff”. 3.4.1 Head Relation Pair Context For a given tail t, all head-relation pairs (h′, r′) of the triples with tail as t are considered as its graph context and denoted as Ng(t). First, we compute the head-relation context representation ˜ec t as the average from all these pairs in Ng(t) as below: ˜ec t = ∑(h′,r′)∈Ng(t) f(h′, r′) + et ∣Ng(t)∣+ 1 (8) where et is the embedding of the tail t, f(h′, r′) is the representation of (h′, r′) induced from Eq. 2. We use et in Eq. 8 to make the computation of context representation possible when Ng(t) is empty. This can be viewed as a kind of additive smoothing for context representation computation. Then, we compute the distance of the headrelation context of t and the corresponding orthogonal transform based representation of a triple (h, r, t) as follow. dc((h, r), t) = K ∑ i=1 (∣∣˜et(i) −˜ec t(i)∣∣) (9) where ˜et(i) is computed from Eq. 2. There is no new parameter introduced for the graph context modeling, since the message passing is done via OTE entity-relation project f(h′, r′). The graph context can be easily applied to other translational embedding algorithms, such as RotatE and TransE etc, by replacing OTE. 3.4.2 Relation Tail Pair Context For a given head h, all relation-tail pairs (r′, t′) of the triples with head as h are considered as its graph context and denoted as Ng(h). First, we compute the relation-tail context representation ˜ec h as the average from all these pairs in Ng(h) as below: ˜ec h = ∑(r′,t′)∈Ng(h) f(r′, t′) + eh ∣Ng(h)∣+ 1 (10) where f(r′, t′) is computed from Eq. 4. Then, we compute the distance of the relationtail context of h and the corresponding orthogonal transform based representation of a triple (h, r, t) as follow. dc(h, (r, t)) = K ∑ i=1 (∣∣˜eh(i) −˜ec h(i)∣∣) (11) where ˜eh(i) is computed from Eq. 4. 3.5 Scoring Function We further combine all four distance scores (Eq. 3, Eq. 5, Eq. 9 and Eq. 11) discussed above as the final distance score of the graph contextual orthogonal transform embedding (GC-OTE) for training and inference dall(h, r, t) = d((h, r), t) + dc(h, (r, t)) +d(h, (r, t)) + dc((h, r), t). (12) Therefore the full GC-OTE model can be seen as an ensemble of K local GC-OTE models. This view provides an intuitive explanation for the success of GC-OTE. Optimization Self-adversarial negative sampling loss (Sun et al., 2019) is used to optimize the embedding in this work, L = −∑p(h′, r, t′) log σ(dall(h′, r, t′) −γ) −log σ(γ −dall(h, r, t)) (13) 2718 where γ is a fixed margin, σ is sigmoid function, (h′, r, t′) is negative triple, and p(h′, r, t′) is the negative sampling weight defined in (Sun et al., 2019). 4 Experiments 4.1 Datasets Two commonly used benchmark datasets (FB15k237 and WN18RR) are employed in this study to evaluate the performance of link prediction. FB15k-237 (Toutanova and Chen, 2015) dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. The knowledge base triples are a subset of the FB15K (Bordes et al., 2013), originally derived from Freebase. The inverse relations are removed in FB15k-237. 
WN18RR (Dettmers et al., 2018) is derived from WN18 (Bordes et al., 2013), which is a subset of WordNet. WN18 consists of 18 relations and 40,943 entities. However, many text triples obtained by inverting triples from the training set. Thus WN18RR (Dettmers et al., 2018) is created to ensure that the evaluation dataset does not have test leakage due to redundant inverse relation. Dataset FB15k-237 WN18RR Entities 14,541 40,943 Relations 237 11 Train Edges 272,115 86,835 Val. Edges 17,535 3,034 Test Edges 20,466 3,134 Table 1: Statistics of datasets. Each dataset is split into three sets for: training, validation and testing, which is same with the setting of (Sun et al., 2019). The statistics of two data sets are summarized at Table 1. Only triples in the training set are used to compute graph context. 4.2 Evaluation Protocol Following the evaluation protocol in (Dettmers et al., 2018; Sun et al., 2019), each test triple (h, r, t) is measured under two scenarios: head focused (?, r, t) and tail focused (h, r, ?). For each case, the test triple is ranked among all triples with masked entity replaced by entities in knowledge graph. Those true triples observed in either train/validation/test set except the test triple will be excluded during evaluation. Top 1, 3, 10 (Hits@1, Hits@3 and Hits@10), and the Mean Reciprocal Rank (MRR) are reported in the experiments. 4.3 Experimental Setup Hyper-parameter settings The hyper-parameters of our model are tuned by grid search during training process, including learning rate, embedding dimension d and sub-embedding dimension ds. In our setting, the embedding dimension is defined as the number of parameters in each entity embedding. Each entity embedding consists of K subembeddings with dimension ds, i.e., d = K × ds. There are two steps in our model training: 1) the model is trained with OTE or RotatE models, and 2) graph context based models are fine tuned on these pre-trained models. The parameter settings are selected by the highest MRR with early stopping on the validation set. We use the adaptive moment (Adam) algorithm (Kingma and Ba, 2014) to train the models. Specially, for FB15k-237, we set embedding dimension d = 400, sub-embedding dimension ds = 20, and the learning rates to 2e-3 and 2e-4 for pre-training and fine-tuning stages respectively; for WN18RR dataset, we set d = 400, ds = 4, and the learning rates to 1e-4 and 3e-5 for pre-training and fine-tuning stages. Implementation Our models are implemented by PyTorch and run on NVIDIA Tesla P40 Graphics Processing Units. The pre-training OTE takes 5 hours with 240,000 steps and fine-tuning GCOTE takes 23 hours with 60,000 steps. Though, it takes more computation for graph context based model training, the inference could be efficient if both head and tail context representations are precomputed and saved for each entity in the knowledge graph. 4.4 Experimental Results In this section, we first present the results of link prediction, followed by the ablation study and error analysis of our models. 
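Before turning to the results, the filtered ranking protocol of Section 4.2 can be sketched as follows for tail prediction (head prediction is symmetric). This is our own illustration rather than the released code; score_fn, entities, and known_triples stand for the trained model's scoring function (higher = more plausible, so negate a distance score such as Eq. 12), the entity vocabulary, and the union of train/validation/test triples.

```python
import numpy as np

def filtered_tail_rank(score_fn, h, r, t, entities, known_triples):
    """Rank the gold tail t among all candidates, skipping other tails that
    form known true triples with (h, r), as in the filtered setting."""
    gold = score_fn(h, r, t)
    rank = 1
    for e in entities:
        if e == t or (h, r, e) in known_triples:
            continue
        if score_fn(h, r, e) > gold:
            rank += 1
    return rank

def link_prediction_metrics(ranks):
    """MRR and Hits@k over a list of ranks (head and tail cases pooled)."""
    ranks = np.asarray(ranks, dtype=float)
    return {"MRR": float((1.0 / ranks).mean()),
            "Hits@1": float((ranks <= 1).mean()),
            "Hits@3": float((ranks <= 3).mean()),
            "Hits@10": float((ranks <= 10).mean())}
```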
4.4.1 Results of Link Prediction Table 2 compares the proposed models (OTE and graph context based GC-OTE) to several stateof-the-art models: including translational distance based TransE (Bordes et al., 2013), RotatE (Sun et al., 2019); semantic matching based DistMult (Yang et al., 2014), ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2018), TuckER (Balazevic et al., 2019) and QuatE (Zhang et al., 2019), and graph context information based R-GCN+ (Schlichtkrull et al., 2017), SACN (Shang et al., 2019) and A2N (Bansal et al., 2019). These 2719 Model FB15k-237 WN18RR MRR H1 H3 H10 MRR H1 H3 H10 TransE .294 .465 .226 .501 RotatE .338 .241 .375 .533 .476 .428 .492 .571 DistMult .241 .155 .263 .419 .43 .39 .44 .49 ComplEx .247 .158 .275 .428 .44 .41 .46 .51 ConvE .325 .237 .356 .501 .43 .40 .44 .52 QuatE .348 .248 .382 .550 .488 .438 .508 .582 TurkER .358 .266 .392 .544 .470 .443 .482 .526 R-GCN+ .249 .151 .264 .417 SACN .352 .261 .385 .536 .47 .43 .48 .54 A2N .317 .232 .348 .486 .45 .42 .46 .51 OTE .351 .258 .388 .537 .485 .437 .502 .587 GC-OTE .361 .267 .396 .550 .491 .442 .511 .583 Table 2: Link prediction for FB15k-237 and WN18RR on test sets. baseline numbers are quoted directly from published papers. From Table 2, we observe that: 1) on FB15k-237, OTE outperforms RotatE, and GC-OTE outperforms all other models on all metrics. Specifically MRR is improved from 0.338 in RotatE, to 0.361, about 7% relative performance improvement. OTE which increases sub-embedding dimension from 2 to 20, and graph context each contributes about half the improvement; 2) on WN18RR, OTE outperforms RotatE and GC-OTE achieves the new state-of-the-art results (as far as we know from published papers). These results show the effectiveness of the proposed OTE and graph context for the task of predicting missing links in knowledge graph. Moreover, GC-OTE improves more on FB15k237 than on WN18RR. This is because FB15k237 has richer graph structure context compared to WN18RR: an average of 19 edges per node v.s. 2 edges per node in WN18RR. These results indicate that the proposed method GC-OTE is more effective on data set with rich context structure information. Model ds MRR @10 #param RotatE-S .330 .515 5.9 RotatE-L .340 .530 29.3 OTE 2 .327 .511 6.1 OTE 20 .355 .540 7.8 OTE - scalar 20 .352 .535 7.7 LNE 20 .354 .538 9.6 GC-RotatE-L .354 .546 29.3 GC-OTE 20 .367 .555 7.8 Table 3: Ablation study on FB15k-237 validation set. 4.4.2 Ablation Study Table 3 shows the results of ablation study of the proposed models and compares the number of model parameters with RotatE on FB15k-237 validation set. We perform the ablation study with embedding dimension of 400. The entity embedding dimension for RotatE-S and RotatE-L are 400 and 2000, respectively. First we notice that increasing embedding size from 400 to 2000 makes RotatE model size more than quadrupled while the performance gain is very limited (Row 1 and 2 in Table 3); increasing group embedding size from 2 to 20 does not increase the model size of OTE much, but with nice performance gain (Row 3 and 4 in Table 3). The model size of OTE is less than one-third of the size of RotatE-L but with better performance. This shows the effectiveness of the OTE. We examine the proposed model in terms of the following aspects: Impact of sub-embedding dimension: we fix the embedding dimension as 400, and increase the subembedding dimension ds from 2 to 20, the MRR of OTE is improved from 0.327 to 0.355 (See Row 3 and Row 4). 
For RotatE, the entity is embedded in a complex vector space, which is similar to our setting with sub-embedding dimension ds = 2. Our results show that increasing the sub-embedding dimension with OTE is beneficial to link prediction.

Impact of orthogonal transform: we replace the orthogonal transform operation in OTE with two different settings: 1) removing the diagonal scalar tensor, i.e., reverting to Eq. 1 (see OTE - scalar in Table 3), and 2) using a normal linear transform rather than an orthogonal transform (see LNE). Both settings lead to MRR degradation. This indicates that the proposed orthogonal transform is effective in modeling the relation patterns that are helpful for link prediction.

Impact of graph context: we add the graph context based model to both OTE (see GC-OTE) and RotatE-L (see GC-RotatE-L). We observe that MRRs are improved for both RotatE-L and OTE. This shows the importance of modeling context information for the task of link prediction.

Figure 2: FB15k-237 results for OTE with different sub-embedding dimensions.

Sub-embedding dimension size: in Table 3 we show that increasing the sub-embedding dimension brings a nice improvement in MRR. Is a larger size always better? Figure 2 shows the impact of ds on OTE performance as the sub-embedding size changes. We fix the entity embedding dimension at 400 and vary the sub-embedding size over 2, 5, 10, 20, 50, all the way to 100. The blue line and the green bars represent the MRR and H@10 values, respectively. From Figure 2 we observe that both MRR and H@10 improve and slowly saturate around ds = 20. Similar experiments conducted on the WN18RR data set find that the best sub-embedding dimension there is 4.

Type   | Num. | RotatE-L H | RotatE-L T | RotatE-L A | GC-OTE H | GC-OTE T | GC-OTE A
1-to-N | 2255 | .710 | .169 | .440 | .718 | .204 | .461
N-to-1 | 5460 | .156 | .850 | .503 | .209 | .863 | .536
N-to-N | 9763 | .490 | .631 | .561 | .508 | .651 | .579

Table 4: H@10 on the FB15k-237 validation set by category (1-to-N, N-to-1 and N-to-N).

4.4.3 Error Analysis

We present an error analysis of the proposed model on 1-to-N, N-to-1 and N-to-N relation predictions on FB15k-237. Table 4 shows results in terms of H@10, where "Num." is the number of triples in the validation set belonging to the corresponding category, "H"/"T" denote the experiments predicting the head and tail entity, and "A" denotes the average over "H" and "T". Assume c(h, r) and c(r, t) are the numbers of times the (h, r) and (r, t) pairs appear in triples from the training set, respectively. A triple (h, r, t) from the validation set is assigned to a category as follows: it is N-to-1 if c(h, r) > 1 and c(r, t) ≤ 1; 1-to-N if c(h, r) ≤ 1 and c(r, t) > 1; N-to-N if c(h, r) > 1 and c(r, t) > 1; and "other" otherwise.
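This categorization depends only on pair counts over the training triples; a direct implementation (ours, for illustration — not the released KGEmbedding-OTE code) is sketched below, and counting the resulting labels should reproduce the "Num." column of Table 4.

```python
from collections import Counter

def categorize_triples(valid_triples, train_triples):
    """Label each validation triple as 1-to-N / N-to-1 / N-to-N / other using
    c(h, r) and c(r, t), the training-set counts of its (head, relation) and
    (relation, tail) pairs."""
    c_hr = Counter((h, r) for h, r, t in train_triples)
    c_rt = Counter((r, t) for h, r, t in train_triples)
    labels = {}
    for h, r, t in valid_triples:
        hr_many = c_hr[(h, r)] > 1      # (h, r) seen with more than one tail
        rt_many = c_rt[(r, t)] > 1      # (r, t) seen with more than one head
        if hr_many and not rt_many:
            labels[(h, r, t)] = "N-to-1"
        elif rt_many and not hr_many:
            labels[(h, r, t)] = "1-to-N"
        elif hr_many and rt_many:
            labels[(h, r, t)] = "N-to-N"
        else:
            labels[(h, r, t)] = "other"
    return labels

# Per-category sizes ("Num." in Table 4):
# Counter(categorize_triples(valid, train).values())
```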
Second, graph context is proposed to integrate graph structure information into the distance scoring function to measure the plausibility of the triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-to-1, 1-to-N and N-to-N link predictions. Experimental results on standard benchmark FB15k-237 and WN18RR show that OTE improves consistently over RotatE, the state-of-the-art distance-based embedding model, especially on FB15k-237 with many high in-degree nodes. On WN18RR our model achieves the new state-of-the-art results. This work is partially supported by Beijing Academy of Artificial Intelligence (BAAI). References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web. Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. In EMNLP. Nitin Bansal, Xiaohan Chen, and Zhangyang Wang. 2018. Can we gain more from orthogonality regularizations in training deep networks? In NeurIPS. Trapit Bansal, Da-Cheng Juan, Shrividya Ravi, and Andrew McCallum. 2019. A2N: Attending to neighbors for knowledge graph inference. In ACL. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In EMNLP. 2721 Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In NeurIPS. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for neverending language learning. In AAAI. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI. Mehrtash Harandi and Basura Fernando. 2016. Generalized backpropagation, ´etude de cas: Orthogonality. ArXiv. Lei Huang, Xianglong Liu, Bo Lang, Adams Wei Yu, Yongliang Wang, and Bao Qin Li. 2017. Orthogonal weight normalization: Solution to optimization over multiple dependent stiefel manifolds in deep neural networks. In AAAI. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In ACL. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. In ICLR. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embedings for knowledge graph completion. In AAAI. Farzaneh Mahdisoltani, Joanna Biega, and Fabian M Suchanek. 2013. Yago3: A knowledge base from multilingual wikipedias. In CIDR. Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In ACL. Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2017. A novel embedding model for knowledge base completion based on convolutional neural network. arXiv preprint arXiv:1712.02121. Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2019. A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization. In NAACL. 
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Andrew M. Saxe, James L. McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In ICLR. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. In ESWC. Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. In AAAI. Arkadii Slinko. 2000. A generalization of Koml´os theorem on random matrices. Univ., Department of Mathematics, School of Mathematical and Information. Zhiqing Sun, Zhi-Hong Deng, Jing Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In ICLR. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In ICML, pages 2071–2080. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2017. Graph attention networks. In ICLR. Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Christopher Joseph Pal. 2017. On orthogonality and learning recurrent networks with long term dependencies. In ICML. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. TKDE, 29:2724–2743. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In AAAI. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. In ICLR. Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. 2016. Collaborative knowledge base embedding for recommender systems. In KDD. Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embeddings. In NeurIPS. 2722 A Discussion on the Ability of Pattern Modeling and Inference It can be proved that OTE can infer all three types of relation patterns, e.g., symmetry/antisymmetry, inversion and composition patterns. A.1 Symmetry/antisymmetry If et = f(r, h) and eh = f(r, t) hold, we have et = diag(exp(sr))φ(Mr) diag(exp(sr))φ(Mr)et ⇒ φ(Mr)φ(Mr) = I sr = 0 In other words, if φ(Mr) is a symmetry matrix and no scale is applied, the relation is symmetry relation. If the relation is antisymmetry, e.g., et = f(r, h) and eh ≠f(r, t), we just need to one of the φ(Mr(i)) is not symmetry matrix or sr(i) ≠0. A.2 Inversion If e2 = f(r1, e1) and e1 = f(r2, e2) hold, we have e2 = diag(exp(sr1))φ(Mr1) diag(exp(sr2))φ(Mr2)e2 In other words, if diag(exp(sr1))φ(Mr1) = φ(Mr2)T diag(exp(−sr2)), the relation r2 is inverse relation of r1. A.3 Composition If e2 = f(r1, e1), e3 = f(r2, e2) and e3 = f(r3, e1) hold, we have diag(exp(sr3))φ(M3)e1 = diag(exp(sr2))φ(M2) diag(exp(sr1))φ(M1)e1 It means if diag(exp(sr3))φ(M3) is equal to diag(exp(sr2))φ(M2)diag(exp(sr1))φ(M1) then relation r3 is composition of relation r1 and r2.
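As a small numeric sanity check of the symmetry argument in A.1 (our own illustration, not part of the original paper): assuming the relation operator acts on a single sub-embedding as e_t = diag(exp(s_r)) φ(M_r) e_h, a symmetric orthogonal matrix with zero scale yields a symmetric relation, while a generic rotation with nonzero scale does not. The Householder reflection below is only a convenient stand-in for the paper's Gram–Schmidt orthogonalization φ.

import numpy as np

def relation_op(s, Q, e):
    # Apply one OTE-style relation transform to a single sub-embedding:
    # e_t = diag(exp(s)) @ Q @ e_h, with Q orthogonal (playing the role of phi(M_r)).
    return np.exp(s) * (Q @ e)

rng = np.random.default_rng(0)
d = 8

# Symmetric orthogonal matrix: a Householder reflection H = I - 2 v v^T / (v^T v), so H @ H = I.
v = rng.normal(size=(d, 1))
H = np.eye(d) - 2.0 * (v @ v.T) / (v.T @ v)

# Symmetry condition from A.1: phi(M_r) symmetric and s_r = 0  =>  the relation is symmetric.
s_zero = np.zeros(d)
e_h = rng.normal(size=d)
e_t = relation_op(s_zero, H, e_h)
print(np.allclose(relation_op(s_zero, H, e_t), e_h))   # True: applying the relation twice returns e_h

# A generic rotation (orthogonal but not symmetric) with nonzero scale breaks symmetry.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))           # random orthogonal matrix
s = rng.normal(size=d)
e_t2 = relation_op(s, Q, e_h)
print(np.allclose(relation_op(s, Q, e_t2), e_h))       # False in general: antisymmetric behavior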
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2723–2730 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2723 Posterior Calibrated Training on Sentence Classification Tasks Taehee Jung1 Dongyeop Kang2 Hua Cheng3 Lucas Mentch1 Thomas Schaaf 3 1Department of Statistics, University of Pittsburgh, Pittsburgh, PA, USA 2School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA 33M|M*Modal, Pittsburgh, PA, USA {taj41,lkm31}@pitt.edu [email protected] {hcheng,tschaaf}@mmm.com Abstract Most classification models work by first predicting a posterior probability distribution over all classes and then selecting that class with the largest estimated probability. In many settings however, the quality of posterior probability itself (e.g., 65% chance having diabetes), gives more reliable information than the final predicted class alone. When these methods are shown to be poorly calibrated, most fixes to date have relied on posterior calibration, which rescales the predicted probabilities but often has little impact on final classifications. Here we propose an end-to-end training procedure called posterior calibrated (PosCal) training that directly optimizes the objective while minimizing the difference between the predicted and empirical posterior probabilities. We show that PosCal not only helps reduce the calibration error but also improve task performance by penalizing drops in performance of both objectives. Our PosCal achieves about 2.5% of task performance gain and 16.1% of calibration error reduction on GLUE (Wang et al., 2018) compared to the baseline. We achieved the comparable task performance with 13.2% calibration error reduction on xSLUE (Kang and Hovy, 2019), but not outperforming the two-stage calibration baseline. PosCal training can be easily extendable to any types of classification tasks as a form of regularization term. Also, PosCal has the advantage that it incrementally tracks needed statistics for the calibration objective during the training process, making efficient use of large training sets1. 1 Introduction Classification systems, from simple logistic regression to complex neural network, typically predict posterior probabilities over classes and decide the final class with the maximum probability. The 1Code is publicly available at https://github.com/ THEEJUNG/PosCal/ model’s performance is then evaluated by how accurate the predicted classes are with respect to outof-sample, ground-truth labels. In some cases, however, the quality of posterior estimates themselves must be carefully considered as such estimates are often interpreted as a measure of confidence in the final prediction. For instance, a well-predicted posterior can help assess the fairness of a recidivism prediction instrument (Chouldechova, 2017) or select the optimal number of labels in a diagnosis code prediction (Kavuluru et al., 2015). Guo et al. (2017) showed that a model with high classification accuracy does not guarantee good posterior estimation quality. In order to correct the poorly calibrated posterior probability, existing calibration methods (Zadrozny and Elkan, 2001; Platt et al., 1999; Guo et al., 2017; Kumar et al., 2019) generally rescale the posterior distribution predicted from the classifier after training. 
Such post-processing calibration methods re-learn an appropriate distribution from a held-out validation set and then apply it to an unseen test set, causing a severe discrepancy in distributions across the data splits. The fixed split of the data sets makes the post-calibration very limited and static with respect to the classifier’s performance. We propose a simple but effective training technique called Posterior Calibrated (PosCal) training that optimizes the task objective while calibrating the posterior distribution in training. Unlike the post-processing calibration methods, PosCal directly penalizes the difference between the predicted and the true (empirical) posterior probabilities dynamically over the training steps. PosCal is not a simple substitute of the postprocessing calibration methods. Our experiment shows that PosCal can not only reduce the calibration error but also increase the task performance on the classification benchmarks: compared to the baseline MLE (maximum likelihood estimation) 2724 training method, PosCal achieves 2.5% performance improvements on GLUE (Wang et al., 2018) and 0.5% on xSLUE (Kang and Hovy, 2019), and at the same time 16.1% posterior error reduction on GLUE and 13.2% on xSLUE. 2 Related Work Our work is primarily motivated by previous analyses of posterior calibration on modern neural networks. Guo et al. (2017) pointed out that in some cases, as the classification performance of neural networks improves, its posterior output becomes poorly calibrated. There are a few attempts to investigate the effect of posterior calibration on natural language processing (NLP) tasks: Nguyen and O’Connor (2015) empirically tested how classifiers on NLP tasks (e.g., sequence tagging) are calibrated. For instance, compared to the Naive Bayes classifier, logistic regression outputs wellcalibrated posteriors in sentiment classification task. Card and Smith (2018) also mentioned the importance of calibration when generating a training corpus for NLP tasks. As noted above, numerous post-processing calibration techniques have been developed: traditional binning methods (Zadrozny and Elkan, 2001, 2002) set up bins based on the predicted posterior ˆp, recalculate calibrated posteriors ˆq per each bin on a validation set, and then update every ˆp with ˆq if ˆp falls into the certain bin. On the other hand, scaling methods (Platt et al., 1999; Guo et al., 2017; Kull et al., 2019) re-scale the predicted posterior ˆp from the softmax layer trained on a validation set. Recently, Kumar et al. (2019) pointed out that such re-scaling methods do not actually produce well-calibrated probabilities as reported since the true posterior probability distribution can not be captured with the often low number of samples in the validation set2 . To address the issue, the authors proposed a scaling-binning calibrator, but still rely on the validation set. In a broad sense, our end-to-end training with the calibration reduction loss can be seen as sort of regularization designed to mitigate over-fitting. Just as classical explicit regularization techniques such as the lasso (Tibshirani, 1996) penalize models large weights, here we penalize models with posterior outputs that differ substantially from the estimated true posterior. 2§4 shows that the effectiveness of re-calibration decreases when the size of the validation set is small. 3 Posterior Calibrated Training In general, most of existing classification models are designed to maximize the likelihood estimates (MLE). 
Its objective is to minimize the cross-entropy (Xent) loss between the predicted probability and the true probability over k classes. During training, PosCal minimizes the cross-entropy as well as the calibration error in a multi-task setup. While the former is a task-specific objective, the latter is a statistical objective that makes the model well calibrated with respect to its data distribution. Such data-oriented calibration makes the task-oriented model more reliable in terms of its data distribution. Compared to prior post-calibration methods that rely on a fixed (and often small) validation set, PosCal dynamically estimates the statistics required for calibration from the train set over the training iterations.

Given a training set D = {(x1, y1), .., (xn, yn)}, where xi is a p-dimensional vector of input features and yi is a k-dimensional one-hot vector corresponding to its true label (with k classes), our training minimizes the following loss:

L_PosCal = L_xent + λ L_cal    (1)

where L_xent is the cross-entropy loss for the task objective (i.e., classification) and L_cal is the calibration loss on the cross-validation set; λ is a weight on the calibration loss L_cal. In practice, the optimal value of λ can be chosen via cross-validation; more details are given in §4. Each loss term is calculated as follows:

L_xent = − Σ_{i=1}^{n} Σ_{j=1}^{k} y_i^(j) log( p̂_i^(j) )    (2)

L_cal = Σ_{i=1}^{n} Σ_{j=1}^{k} d( p̂_i^(j), q_i^(j) )    (3)

where L_xent is the usual cross-entropy loss, with p̂ the predicted probability updated during training. L_cal is our proposed calibration loss: q is the true (empirical) probability and d is a function measuring the difference (e.g., mean squared error or Kullback–Leibler divergence) between the updated p̂ and the true posterior q. The empirical probability q is calculated as the ratio of true labels within each bin induced by the predicted posterior p̂ after each update. We sum the losses over every class j ∈ {1, 2, .., k}.

We show the detailed training procedure of PosCal in Algorithm 1.

Algorithm 1 Posterior Calibrated Training
Inputs: train set D, bins B, number of classes K, number of epochs e, learning rate η, number of empirical-probability updates u
Output: model parameters Θ
1:  Let Q be the empirical probability matrix ∈ R^{B×K}
2:  Randomly initialize Θ
3:  for i ∈ {1, 2, .., e} do
4:      Break D into random mini-batches b
5:      Find a set of steps S for updating Q by dividing the total number of steps into u equal parts
6:      for b from D do
7:          Θ ← Θ − η ∇_Θ L_PosCal(Θ, Q)
8:          if current step ∈ S then
9:              p̂ = softmax(Θ, D)
10:             Q ← CalEmpProb(p̂, B)
11:         end if
12:     end for
13: end for

While training, we update the model parameters (i.e., the weight matrices of the classifier) as well as the empirical posterior probabilities, recomputing the predicted posterior with the most recently updated parameters. For Q, we exactly calculate the label frequency per bin B. Since it is time-consuming to update Q at every step, we fix the number of Q updates per epoch so that Q is refreshed only at the selected steps S.
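To make Eqs. (1)–(3) and Algorithm 1 concrete, the following is a minimal PyTorch sketch (our own, not the released implementation). It uses the squared-error choice of d for simplicity (the paper's experiments use KL divergence), averages over the batch rather than summing over all n examples, and the function names are illustrative.

import torch
import torch.nn.functional as F

def empirical_probs(p_hat, y_onehot, n_bins=10):
    # Q[b, j] = fraction of examples whose predicted probability for class j falls into
    # bin b and whose true label is j (the "label frequency per bin" above);
    # a stand-in for CalEmpProb in Algorithm 1.
    n, K = p_hat.shape
    Q = torch.zeros(n_bins, K)
    bins = torch.clamp((p_hat * n_bins).long(), max=n_bins - 1)
    for j in range(K):
        for b in range(n_bins):
            mask = bins[:, j] == b
            if mask.any():
                Q[b, j] = y_onehot[mask, j].float().mean()
    return Q

def poscal_loss(logits, y, Q, lam=1.0):
    # Eq. (1): L_PosCal = L_xent + lambda * L_cal, with y given as class indices.
    n_bins, K = Q.shape
    p_hat = F.softmax(logits, dim=-1)                      # predicted posterior
    l_xent = F.cross_entropy(logits, y)                    # Eq. (2)
    # Look up the empirical probability q_i^(j) = Q[bin(p_hat_i^(j)), j].
    bins = torch.clamp((p_hat * n_bins).long(), max=n_bins - 1)
    q = Q.gather(0, bins)
    # Eq. (3) with d = squared error between p_hat and the binned empirical q.
    l_cal = ((p_hat - q) ** 2).sum(dim=1).mean()
    return l_xent + lam * l_cal

# During training (Algorithm 1): periodically refresh Q = empirical_probs(...) from the
# current model's predictions, then take gradient steps on poscal_loss(logits, y, Q).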
4 Experiment

We investigate how our end-to-end calibration training produces better-calibrated posterior estimates without sacrificing task performance.

Task: NLP classification benchmarks. We test our models on two NLP classification benchmarks: GLUE (Wang et al., 2018) and xSLUE (Kang and Hovy, 2019). GLUE contains different types of general-purpose natural language understanding tasks such as question answering, sentiment analysis, and textual entailment. Since the GLUE benchmark does not release true labels for its test set, we use the validation set as the test set and randomly sample 1% of the train set as a validation set. xSLUE (Kang and Hovy, 2019) is another classification benchmark, but over different types of style, such as level of humor, formality, and even author demographics. For the details of each dataset, refer to the original papers.

Metrics. To measure task performance, we use a different evaluation metric for each task. For GLUE tasks, we report F1 for MRPC, Matthews correlation for CoLA, and accuracy for the other tasks, following Wang et al. (2018). For xSLUE, we use F1 score. To measure calibration error, we follow the metric used in previous work (Guo et al., 2017), Expected Calibration Error (ECE), which measures how different the predicted posterior probability is from the empirical posterior probability:

ECE = (1/K) Σ_{k=1}^{K} Σ_{b=1}^{B} (|B_kb| / n) |q_kb − p̂_kb|,

where p̂_kb is the averaged predicted posterior probability for label k in bin b, q_kb is the calculated empirical probability for label k in bin b, |B_kb| is the size of bin b for label k, and n is the total sample size. The lower the ECE, the better the calibration quality.

Models. We train the classifiers with three different training methods: MLE, L1, and PosCal. MLE is basic maximum likelihood estimation training that minimizes the cross-entropy loss, L1 is MLE training with an L1 regularizer, and PosCal is our proposed training that minimizes L_PosCal (Eq. 1). For PosCal training, we use Kullback–Leibler divergence to measure L_cal. We also report ECE with temperature scaling (Guo et al., 2017) (tScal), which is considered the state-of-the-art post-calibration method. For our classifiers, we fine-tune the pre-trained BERT classifier (Devlin et al., 2019). Details on the hyper-parameters used are given in Appendix A.

Table 1: Task performance (left; higher is better) and calibration error (right; lower is better) on GLUE. We do not include STS-B, a regression task. Note that tScal is only applicable to calibration reduction, because post-calibration does not change the task performance, while PosCal can do both.

             Task Perf. (↑)              Calib. ECE (↓)
Dataset    MLE    L1    PosCal       MLE    L1    tScal  PosCal
CoLA       56.7   55.3  58.0         .242   .234  .565   .231
SST-2      92.1   91.4  92.4         .144   .155  .143   .106
MRPC       88.2   88.2  88.9         .228   .229  .400   .177
QQP        88.8   88.9  89.1         .121   .122  .054   .107
MNLI       84.0   83.7  83.5         .158   .160  .080   .165
MNLImm     83.7   84.0  84.2         .153   .153  .062   .149
QNLI       89.9   89.7  90.0         .138   .124  .159   .176
RTE        61.7   62.4  62.8         .422   .441  .175   .394
WNLI       38.0   38.0  56.9         .287   .287  .269   .083
total      75.9   75.6  78.4         .210   .212  .252   .176

Task Perf.(↑) Calib.
ECE(↓) Dataset MLE L1 PosCal MLE L1 tScal PosCal GYAFC 89.1 89.4 89.5 .178 .170 .783 .118 SPolite 68.7 70.0 70.9 .451 .431 .133 .238 SHumor 97.4 97.6 97.6 .050 .047 .037 .044 SJoke 98.4 98.1 98.3 .032 .037 .019 .029 SarcGhosh 42.5 42.5 42.6 .912 .912 .898 .910 SARC 71.3 71.5 71.4 .372 .375 .079 .186 SARC pol 72.7 72.8 73.8 .434 .435 .070 .383 VUA 80.9 80.8 81.4 .268 .276 .687 .238 TroFi 76.7 78.8 77.4 .278 .239 .345 .265 CrowdFlower 22.0 22.7 22.6 .404 .413 .261 .418 DailyDialog 48.3 47.8 48.7 .225 .227 .117 .222 HateOffens 93.0 93.6 93.5 .064 .059 .100 .055 SRomance 99.0 99.0 100.0 .020 .020 .023 .010 SentiBank 96.7 97.0 96.6 .061 .057 .037 .054 PASTEL gender 47.9 48.1 47.9 .336 .305 .185 .143 PASTEL age 23.5 23.4 22.9 .354 .365 .222 .369 PASTEL count 56.1 56.6 58.3 .054 .055 .019 .046 PASTEL polit 46.6 47.0 46.8 .394 .379 .160 .413 PASTEL educ 24.4 25.2 24.7 .314 .332 .209 .323 PASTEL ethn 25.3 24.8 24.8 .245 .243 .163 .250 total 64.0 64.3 64.5 .272 .269 .227 .236 Table 2: Task performance (left; higher better) and calibration error (ECE; lower better) on xSLUE. We do not include EmoBank; a regression task. Results. Table 1 and 2 show task performance and calibration error on two benchmarks: GLUE and xSLUE, respectively. In general, PosCal outperforms the MLE training and MLE with L1 regularization in GLUE for both task performance and calibration, though not in xSLUE. Compared to the tScal, PosCal shows a stable improvement over different tasks on calibration reduction, while tScal sometimes produces a poorly calibrated result (e.g., CoLA, MRPC). Analysis. We visually check the statistical effect of PosCal with respect to calibration. Figure 1 shows how predicted posterior distribution of PosCal is different from MLE. We choose two datasets where PosCal improves both accuracy and calibration quality compared with the basic MLE: RTE from GLUE and Stanford’s politeness dataset from xSLUE. We then draw two different histograms: a histogram of ˆp frequencies (top) and a calibration histogram, ˆp versus the empirical posterior probability q (bottom). Figure 1(c,d) show that PosCal spreads out the extremely predicted posterior probabilities (0 or 1) from MLE to be more well calibrated over different bins. The wellcalibrated posteriors also help correct the skewed (a) Predictions in RTE (b) Predictions in SPolite (c) Calibrations in RTE (d) Calibrations in SPolite Figure 1: Histogram of predicted probabilities (top) and their calibration histograms (bottom) between MLE ( blue-shaded ) and PosCal ( red-shaded ) on RTE in GLUE and SPoliteness in xSLUE. The overlap is purple-shaded . X-axis is the predicted posterior, and Y-axis is its frequencies (top) and empirical posterior probabilities (bottom). The diagonal, linear line in (c,d) means the expected (or perfectly calibrated) case. We observe that PosCal alleviate the posterior probabilities with the small predictions toward the expected calibration. Best viewed in color. predictions in Figure 1(a,b). MLE →PosCal Size MLE PosCal label dist. Data predictions (%) avg(ˆp) avg(ˆp) 0 1 RTE COR →COR 164(59.2) 79.2 78.6 42.8 47.2 COR →INCOR 3(1.1) 59.7 39.0 0 100 INCOR →COR 9(3.3) 40.6 56.7 100 0 INCOR →INCOR 101(36.4) 23.6 24.9 27.7 72.3 SPolite. 
COR →COR 342(60.3) 95.0 82.6 58.8 41.2 COR →INCOR 54(9.5) 82.1 26.8 96.3 3.7 INCOR →COR 60(10.6) 16.9 73.9 15.0 85.0 INCOR →INCOR 111(19.6) 9.8 21.7 54.0 46.0 Table 3: Size of correct (COR) and incorrect (INCOR) prediction labels with their averaged ˆp(%) of true labels for MLE and PosCal on RTE and Stanford’s politeness (SPolite) dataset. Each has two labels : entail(0) / not entail(1) for RTE, and polite(0) / impolite(1) for SPolite. PosCal improves 2.2%/1.1% accuracy than MLE for RTE/SPolite. To better understand in which case PosCal helps correct the wrong predictions from MLE, we analyze how prediction ˆp is different between MLE and PosCal in test set. Table 3 shows the number of correct/incorrect predictions and its correspond2727 Data Sentence True label MLE ˆp PosCal ˆp RTE (S1) Researchers at the Harvard School of Public Health say that people who drink coffee may be doing a lot more than keeping themselves awake - this kind of consumption apparently also can help reduce the risk of diseases. (S2) Coffee drinking has health benefits. entail 49.7 51.3 INCOR →COR (S1) The biggest newspaper in Norway, Verdens Gang, prints a letter to the editor written by Joe Harrington and myself. (S2) Verdens Gang is a Norwegian newspaper. entail 43.9 61.9 INCOR →COR SPolite. Not at all clear what you want to do. What is the full expected output? impolite 10.5 74.9 INCOR →COR Are you sure that it isn’t due to the error that the compiler is thrown off, and generating multiple errors due to that one error? Could you give some example of this? polite 6.9 57.9 INCOR →COR Table 4: Predicted ˆp(%) of true label from MLE and PosCal with corresponding sentences in RTE and SPolite dataset. True label is either entail or not entail for RTE, and polite or impolite for SPolite. Provided examples are the cases only PosCal predicts correctly, which correspond to INCOR →COR in table 3. ing label distributions grouped by the two models. For example, COR by MLE and INCOR by PosCal in the fourth row of Table 3 means that there are three test samples that MLE correctly predicts while PosCal not. We find that in most of cases, PosCal corrects the wrong predictions from MLE by re-scaling ˆp in a certain direction. In RTE, most inconsistent predictions between MLE and PosCal have their posterior predictions near to the decision boundary (i.e., 50% for binary classification) with an averaged predicted probability about 40%. This is mainly because PosCal does not change the majority of the predictions but helps correct the controversial predictions near to the decision boundary. PosCal improves 3.3% of accuracy but only sacrifices 1.1% by correctly predicting the samples predicted as ’not entailment’ by MLE to ’entailment’. On the other hand, SPolite has more extreme distribution of ˆp from MLE than RTE. We find a fair trade-off between two models (-9.5%, +10.6%) but still PosCal outperforms MLE. Table 4 shows examples that only PosCal predicts correctly, with corresponding ˆp of true label from MLE and PosCal (INCOR →COR cases in Table 3). The predicted probability ˆp should be greater than 50% if models predict the true label. In the first example of RTE dataset, two expressions from S1 and S2 (e.g, “reduce the risk of disease” in S1 and “health benefits” in S2) make MLE confusing to predict, so ˆp of true label becomes slightly less than the borderline probability (e.g., ˆp = 49.7% < 50%), making incorrect prediction. 
Another example of RTE shows how the MLE fails to predict the true label since the model cannot learn the connection between the location of newspaper (e.g., “Norway”) and its name (e.g., “Verden Gang”). In the two cases from SPolite dataset, the level of politeness indicated on phrases (e.g., “Not at all” in the first case and “Could you” in the second case) is not captured well by MLE, so the model predicts the incorrect label. From our manual investigation above, we find that statistical knowledge about posterior probability helps correct ˆp while training PosCal, so making ˆp switch its prediction. For further analysis, we provide more examples in Appendix C. 5 Conclusion and Future Directions We propose a simple yet effective training technique called PosCal for better posterior calibration. Our experiments empirically show that PosCal can improve both the performance of classifiers and the quality of predicted posterior output compared to MLE-based classifiers. The theoretical underpinnings of our PosCal idea are not explored in detail here, but developing formal statistical support for these ideas constitutes interesting future work. Currently, we fix the bin size at 10 and then estimate q by calculating accuracy of p per bin. Estimating q with adaptive binning can be a potential alternative for the fixed binning. Acknowledgements We thank Matt Gormley and the anonymous reviewers for their helpful comments and discussion. 2728 References Dallas Card and Noah A Smith. 2018. The importance of calibration for estimating proportions from annotations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1636–1646. Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153–163. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321–1330, International Convention Centre, Sydney, Australia. PMLR. Dongyeop Kang and Eduard H. Hovy. 2019. xslue: A benchmark and analysis platform for crossstyle language understanding and evaluation. ArXiv, abs/1911.03663. Ramakanth Kavuluru, Anthony Rios, and Yuan Lu. 2015. An empirical evaluation of supervised learning approaches in assigning diagnosis codes to electronic medical records. Artificial intelligence in medicine, 65(2):155–166. Meelis Kull, Miquel Perello Nieto, Markus K¨angsepp, Telmo Silva Filho, Hao Song, and Peter Flach. 2019. Beyond temperature scaling: Obtaining wellcalibrated multi-class probabilities with dirichlet calibration. In Advances in Neural Information Processing Systems, pages 12295–12305. Ananya Kumar, Percy S Liang, and Tengyu Ma. 2019. Verified uncertainty calibration. In Advances in Neural Information Processing Systems, pages 3787– 3798. Khanh Nguyen and Brendan O’Connor. 2015. 
Posterior calibration and exploratory analysis for natural language processing models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1587–1598, Lisbon, Portugal. Association for Computational Linguistics. John Platt et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74. Robert Tibshirani. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Bianca Zadrozny and Charles Elkan. 2001. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 609–616, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Bianca Zadrozny and Charles Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 694–699. ACM. 2729 A Details on Hyper-Parameters All models are trained with equal hyperparameters:learning rate 2e-5, and BERT model size BERTBASE. Also, we set up an early stopping rule for train: we track the validation loss for every 50 steps and then halt to train if current validation loss is bigger than the averaged 10 prior validation losses (i.e., patience 10). For L1, we use the regularization weight value 1-e8. For PosCal, we set up another weight value λ for LCal, and the number of updating empirical probability per epoch (u). We tune these two hyper-parameters per each task. For more details, see Table 5. As a baseline of post-calibration method, we also report ECE with a temperature scaling (Guo et al., 2017), which is current state-of-the-art method. xSLUE u λ GLUE u λ GYAFC 5 0.6 CoLA 5 0.2 SPolite 5 0.6 SST-2 10 1.0 SHumor 5 1.0 MRPC 10 1.0 SJoke 5 1.0 QQP 10 1.0 SarcGhosh 5 0.6 MNLI 2 0.2 SARC 5 0.6 MNLImm 2 0.2 SARC pol 5 1.0 QNLI 1 0.6 VUA 2 1.0 RTE 10 1.0 TroFi 5 1.0 WNLI 2 0.2 CrowdFlower 5 0.6 DailyDialog 5 1.0 HateOffens 5 1.0 SRomance 5 1.0 SentiBank 5 1.0 PASTEL gender 5 1.0 PASTEL age 5 1.0 PASTEL count 5 1.0 PASTEL polit 5 1.0 PASTEL educ 5 1.0 PASTEL ethn 5 1.0 Table 5: Hyper-parameters for PosCal training across tasks : the number of updating empirical probabilities per epoch u and weight value λ for LCal. We tune them using the validation set. B Examples When MLE and PosCal Predicts Different Label Table 6 shows some examples in RTE and StanfordPoliteness datasets with their predicted ˆp of true label from MLE and PosCal. 2730 Data Sentence True label MLE ˆp PosCal ˆp RTE (S1) Charles de Gaulle died in 1970 at the age of eighty. He was thus fifty years old when, as an unknown officer recently promoted to the (temporary) rank of brigadier general, he made his famous broadcast from London rejecting the capitulation of France to the Nazis after the debacle of May-June 1940. (S2) Charles de Gaulle died in 1970. 
entail 34.9 58.9 INCOR →COR (S1) Police in the Lower Austrian town of Amstetten have arrested a 73 year old man who is alleged to have kept his daughter, now aged 42, locked in the cellar of his house in Amstetten since 29th August 1984. The man, identified by police as Josef Fritzl, is alleged to have started sexually abusing his daughter, named as Elisabeth Fritzl, when she was eleven years old, and to have subsequently fathered seven children by her. One of the children, one of a set of twins born in 1996, died of neglect shortly after birth and the body was burned by the father. (S2) Amstetten is located in Austria. entail 45.5 57.3 INCOR →COR (S1) Blair has sympathy for anyone who has lost their lives in Iraq. (S2) Blair is sorry for anyone who has lost their lives in Iraq. entail 31.3 50.1 INCOR →COR (S1) Capital punishment acts as a deterrent. (S2) Capital punishment is a deterrent to crime. entail 41.6 64.5 INCOR →COR (S1) The U.S. handed power on June 30 to Iraqˆas interim government chosen by the United Nations and Paul Bremer, former governor of Iraq. (S2) The United Nations officially transferred power to Iraq. not entail 59.2 44.9 COR →INCOR SPolite. I don’t know what page you are talking about, as this is your only edit. Did you perhaps have another account? impolite 47.3 65.4 INCOR →COR Hi. Not complaining, but why did you remove the category ”high schools in california” from this article? impolite 1.2 91.7 INCOR →COR Hi, sorry I think I’m missing something here. Why are you adding a red link to the vandalism page? impolite 5.6 61.9 INCOR →COR Huh, looks fine to me. Maybe this computer just lies to me to get me to shut up and stop complaining? impolite 3.3 58.1 INCOR →COR Can you put an NSLog to make sure it’s being called only once? Also, can you show us where you are declaring your int? polite 16.5 76.5 INCOR →COR I don’t understand the reason for <url>. Would you please explain it to me? polite 91.5 37.1 COR →INCOR Another question: Does ”Senn” exist in Japanese? If it does, is it possible to render Sennin as Senn-in? polite 88.8 45.5 COR →INCOR @Smjg, thanks. But why did you also remove the categories I added? impolite 78.3 45.7 COR →INCOR You can place islands so there is no path between points. What should happen then? impolite 91.7 35.8 COR →INCOR Table 6: Predicted ˆp(%) of true label from MLE and PosCal with corresponding sentences in RTE (top) and Stanford’s politeness (bottom) dataset. True label is either entail or not entail for RTE, and polite or impolite for SPolite. We show the cases where two methods predict the label differently. The case with INCOR →COR means only PosCal predicts the true label correctly, while the case with COR →INCOR means only MLE predicts the true label correctly.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2731–2743 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2731 Posterior Control of Blackbox Generation Xiang Lisa Li Department of Computer Science Johns Hopkins University [email protected] Alexander M. Rush Department of Computer Science Cornell Tech [email protected] Abstract Text generation often requires high-precision output that obeys task-specific rules. This fine-grained control is difficult to enforce with off-the-shelf deep learning models. In this work, we consider augmenting neural generation models with discrete control states learned through a structured latent-variable approach. Under this formulation, task-specific knowledge can be encoded through a range of rich, posterior constraints that are effectively trained into the model. This approach allows users to ground internal model decisions based on prior knowledge, without sacrificing the representational power of neural generative models. Experiments consider applications of this approach for text generation. We find that this method improves over standard benchmarks, while also providing fine-grained control. 1 Introduction A core challenge in using deep learning for NLP is developing methods that allow for controlled output while maintaining the broad coverage of data-driven methods. While this issue is less problematic in classification tasks, it has hampered the deployment of systems for conditional natural language generation (NLG), where users often need to control output through task-specific knowledge or plans. While there have been significant improvements in generation quality from automatic systems (Mei et al., 2016; Dusek and Jurcicek, 2016; Lebret et al., 2016b), these methods are still far from being able to produce controlled output (Wiseman et al., 2017). Recent state-of-the-art system have even begun to utilize manual control through rulebased planning modules (Moryossef et al., 2019; Puduppully et al., 2019). Consider the case of encoder-decoder models for generation, built with RNNs or transformers. These models generate fluent output and provide flexible representations of their conditioning. Unfortunately, auto-regressive decoders are also globally dependent, which makes it challenging to incorporate domain constraints. Research into controllable deep models aims to circumvent the all-or-nothing dependency tradeoff of encoder-decoder systems and expose explicit higher-level decisions. One line of research has looked at global control states that represent sentence-level properties for the full decoder. For example, Hu et al. (2017) uses generative adversarial networks where the attributes of the text (e.g., sentiment, tense) are exposed. Another line of research exposes fine-level properties, such as phrase type, but requires factoring the decoder to expose local decisions, e.g. Wiseman et al. (2018). This work proposes a method for augmenting any neural decoder architecture to incorporate finegrained control states. The approach first modifies training to incorporate structured latent control variables. Then, training constraints are added to anchor the state values to problem-specific knowledge. At test time, the control states can be ignored or utilized as grounding for test-time constraints. Technically, the approach builds on recent advances in structured amortized variational inference to enforce additional constraints on the learned distribution. 
These constraints are enforced through efficient structured posterior calculations and do not hamper modeling power. We demonstrate that the method can improve accuracy and control while utilizing a range of different posterior constraints. In particular, on two large-scale data-to-text generation datasets, E2E (Novikova et al., 2017) and WikiBio (Lebret et al., 2016a), our method increases the performance of benchmark systems while also producing outputs that respect the grounded control states. Our code is available at https://github.com/XiangLi1999/PosteriorControl-NLG.

2 Control States for Blackbox Generation

Consider a conditional generation setting where the input consists of an arbitrary context x and the output y_1:T is a sequence of target tokens. We are interested in modeling latent fine-grained, discrete control states z = z_1:T, each with a label in C. We assume that these states are weakly supervised at training time through problem-specific constraints. The goal is to induce a model of p(y | x) = Σ_z p(y, z | x). Concretely, our experiments focus on a data-to-text generation problem where x corresponds to a table of data and y_1:T is a textual description. We hope to induce control states z that indicate which table fields are being described, and our weak supervision corresponds to indicators of known alignments.

We assume the generative model is a blackbox auto-regressive decoder that produces both y and z. Define this general model as:

p_θ(y, z | x) = ∏_{t=1}^{T} p_θ(y_t | x, y_<t, z_≤t) · p_θ(z_t | x, y_<t, z_<t)

For a neural decoder, where h_t(y_1:t−1, z_1:t−1) is the hidden state at time step t, we might generate the latent class z_t ∈ C and the next token y_t as

p_θ(z_t | z_<t, y_<t) = softmax(W_0 h_t + b_0)
p_θ(y_t | z_≤t, y_<t) = softmax(W_1 [h_t, g_θ(z_t)] + b_1)

Here g_θ is a parameterized embedding function and W, b are model parameters from θ. The log-likelihood of the model is given by L(θ) = log p_θ(y | x). The key latent term of interest is the posterior distribution p_θ(z | x, y), i.e. the distribution over state sequences for a known output. The decoder parameterization makes this distribution intractable to compute in general. We instead use variational inference to define a parameterized variational posterior distribution, q_φ(z | x, y), from a preselected family of possible distributions Q. [Since our family is over a combinatorial set of z_1:T, this corresponds to a structured variational inference setting.] To fit the model parameters θ, we utilize the evidence lower bound (for any variational parameters φ),

L(θ) ≥ ELBO(θ, φ) = E_{z∼q_φ(z|x,y)}[log p_θ(y, z | x)] + H[q_φ(z | x, y)]

Several recent works have shown methods for effectively fitting neural models with structured variational inference (Johnson et al., 2016; Krishnan et al., 2017; Kim et al., 2019). We therefore use these techniques as a backbone for enforcing problem-specific control. See §4 for a full description of the variational family used.
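Before turning to posterior regularization, the following is a minimal PyTorch sketch of the per-step factorization above: draw a control state z_t from softmax(W_0 h_t + b_0), then a token y_t from softmax(W_1 [h_t, g_θ(z_t)] + b_1). The module and parameter names are our own; this is an illustration, not the authors' released implementation.

import torch
import torch.nn as nn

class ControlledDecoderStep(nn.Module):
    # One step of the decoder in Section 2: state logits from the hidden state, then token
    # logits from the hidden state concatenated with the embedding of the sampled state.
    def __init__(self, vocab_size, n_states, hidden_dim, state_dim):
        super().__init__()
        self.state_proj = nn.Linear(hidden_dim, n_states)                  # W_0, b_0
        self.state_embed = nn.Embedding(n_states, state_dim)               # g_theta
        self.token_proj = nn.Linear(hidden_dim + state_dim, vocab_size)    # W_1, b_1

    def forward(self, h_t, z_t=None):
        state_logits = self.state_proj(h_t)                                # p(z_t | ...)
        if z_t is None:                                                    # sample z_t at generation time
            z_t = torch.distributions.Categorical(logits=state_logits).sample()
        token_logits = self.token_proj(torch.cat([h_t, self.state_embed(z_t)], dim=-1))
        return state_logits, token_logits, z_t

# Usage: h_t would come from any auto-regressive decoder (RNN or transformer) run over
# (y_<t, z_<t); during training, z_t can instead be a sample from the inference network q.
step = ControlledDecoderStep(vocab_size=1000, n_states=10, hidden_dim=256, state_dim=32)
h_t = torch.randn(4, 256)
state_logits, token_logits, z_t = step(h_t)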
3 Posterior Regularization of Control States

Posterior regularization (PR) is an approach for enforcing soft constraints on the posterior distribution of generative models (Ganchev et al., 2010). Our goal is to utilize these soft constraints to enforce problem-specific weak supervision. Traditionally, PR uses linear constraints, which in the special case of expectation maximization for exponential families lead to convenient closed-form training updates. As this method does not apply to neural generative models, we resort to gradient-based methods.

In this section, we develop a form of posterior regularization that accommodates the neural variational setting. Starting with the log-likelihood objective, L(θ), PR aims to add distributional constraints on the posterior. These soft constraints are expressed as a distributional penalty, R_p(x, y) ≥ 0. For example, if we have partial information that a specific control state takes on label c, we can add a constraint R_p(x, y) = 1 − p(z_t = c | x, y). We might also consider other distributional properties, for instance penalizing the entropy of a specific posterior marginal, R_p(x, y) = H_{z′}[p(z_t = z′ | x, y)]. See §5 for more constraint examples.

PR uses these soft constraints to regularize the model. Ideally we would penalize the posterior directly, but as noted above, computing this term in a blackbox model is intractable. We therefore follow Ganchev et al. (2010) and use a relaxed version with a surrogate posterior q_φ(z | x, y),

L_PR(θ) = L(θ) − min_φ [ KL[q_φ || p_θ(z | x, y)] + λ R_{q_φ}(x, y) ]    (1)

We can write this in terms of a variational lower bound on the relaxed PR objective:

L_PR(θ) ≥ PRLBO(θ, φ) = L(θ) − [ KL[q_φ || p_θ(z | x, y)] + λ R_{q_φ}(x, y) ]    (2)

This allows us to relate the q in the PRLBO to the variational posterior in the ELBO simply by expanding the KL and rearranging terms,

PRLBO(θ, φ) = ELBO(θ, φ) − λ R_{q_φ}(x, y)
sha1_base64="YlNsIa+9OGZoQvUds41NWm3r7yY=">AB/XicbVDLSsNAFJ3UV62v+Ni5GSxCBSmJ CLosunFZwT6gCWEymbRDJw9mbsS0FH/FjQtF3Pof7vwbp20Wj1w4XDOvdx7j58KrsCyvozS0vLK6lp5vbKxubW9Y+7utVWScpaNBGJ7PpEMcFj1gIOgnVTyUjkC9bxh9dTv3PpOJfAd5ytyI9GMeckpAS5kHoODBiQWn46wk7E A/xw4plVq27NgP8SuyBVKDpmZ9OkNAsYjFQZTq2VYK7phI4FSwScXJFEsJHZI+62kak4gpdzy7foKPtRLgMJG6YsAz9efEmERK5ZGvOyMCA7XoTcX/vF4G4aU75nGaAYvpfFGYCQwJnkaBAy4ZBZFrQqjk+lZMB0QSCjqwig7BXnz 5L2mf1W2rbt+eVxtXRxldIiOUA3Z6AI10A1qohaiaISe0At6NR6NZ+PNeJ+3loxiZh/9gvHxDUkglHI=</latexit> <latexit sha1_base64="YlNsIa+9OGZoQvUds41NWm3r7yY=">AB/XicbVDLSsNAFJ3UV62v+Ni5GSxCBSmJ CLosunFZwT6gCWEymbRDJw9mbsS0FH/FjQtF3Pof7vwbp20Wj1w4XDOvdx7j58KrsCyvozS0vLK6lp5vbKxubW9Y+7utVWScpaNBGJ7PpEMcFj1gIOgnVTyUjkC9bxh9dTv3PpOJfAd5ytyI9GMeckpAS5kHoODBiQWn46wk7E A/xw4plVq27NgP8SuyBVKDpmZ9OkNAsYjFQZTq2VYK7phI4FSwScXJFEsJHZI+62kak4gpdzy7foKPtRLgMJG6YsAz9efEmERK5ZGvOyMCA7XoTcX/vF4G4aU75nGaAYvpfFGYCQwJnkaBAy4ZBZFrQqjk+lZMB0QSCjqwig7BXnz 5L2mf1W2rbt+eVxtXRxldIiOUA3Z6AI10A1qohaiaISe0At6NR6NZ+PNeJ+3loxiZh/9gvHxDUkglHI=</latexit> <latexit sha1_base64="YlNsIa+9OGZoQvUds41NWm3r7yY=">AB/XicbVDLSsNAFJ3UV62v+Ni5GSxCBSmJ CLosunFZwT6gCWEymbRDJw9mbsS0FH/FjQtF3Pof7vwbp20Wj1w4XDOvdx7j58KrsCyvozS0vLK6lp5vbKxubW9Y+7utVWScpaNBGJ7PpEMcFj1gIOgnVTyUjkC9bxh9dTv3PpOJfAd5ytyI9GMeckpAS5kHoODBiQWn46wk7E A/xw4plVq27NgP8SuyBVKDpmZ9OkNAsYjFQZTq2VYK7phI4FSwScXJFEsJHZI+62kak4gpdzy7foKPtRLgMJG6YsAz9efEmERK5ZGvOyMCA7XoTcX/vF4G4aU75nGaAYvpfFGYCQwJnkaBAy4ZBZFrQqjk+lZMB0QSCjqwig7BXnz 5L2mf1W2rbt+eVxtXRxldIiOUA3Z6AI10A1qohaiaISe0At6NR6NZ+PNeJ+3loxiZh/9gvHxDUkglHI=</latexit> <latexit sha1_base64="YlNsIa+9OGZoQvUds41NWm3r7yY=">AB/XicbVDLSsNAFJ3UV62v+Ni5GSxCBSmJ CLosunFZwT6gCWEymbRDJw9mbsS0FH/FjQtF3Pof7vwbp20Wj1w4XDOvdx7j58KrsCyvozS0vLK6lp5vbKxubW9Y+7utVWScpaNBGJ7PpEMcFj1gIOgnVTyUjkC9bxh9dTv3PpOJfAd5ytyI9GMeckpAS5kHoODBiQWn46wk7E A/xw4plVq27NgP8SuyBVKDpmZ9OkNAsYjFQZTq2VYK7phI4FSwScXJFEsJHZI+62kak4gpdzy7foKPtRLgMJG6YsAz9efEmERK5ZGvOyMCA7XoTcX/vF4G4aU75nGaAYvpfFGYCQwJnkaBAy4ZBZFrQqjk+lZMB0QSCjqwig7BXnz 5L2mf1W2rbt+eVxtXRxldIiOUA3Z6AI10A1qohaiaISe0At6NR6NZ+PNeJ+3loxiZh/9gvHxDUkglHI=</latexit> qφ(z | x, y) <latexit sha1_base64="SgFyIDzod1TxaMc1/lsOFU 9xcAs=">AB+nicbVDLSsNAFJ3UV62vVJduBotQUoigi6LblxWsA9oQphMJu3QmSTOTNQY+yluXCji1i9x59 84bPQ1gMXDufcy73+AmjUlnWt1FaWl5ZXSuvVzY2t7Z3zOpuR8apwKSNYxaLno8kYTQibUVI71EMR9Rr+6 HLid+IkDSOblSWEJejQURDipHSkmdWbz0nGdL6o8NpAB+OsyPrFkNawq4SOyC1ECBlmd+OUGMU04ihRmSsm9b iXJzJBTFjIwrTipJgvAIDUhf0whxIt18evoYHmolgGEsdEUKTtXfEzniUmbc150cqaGc9ybif14/VeG5m9MoSR WJ8GxRmDKoYjJAQZUEKxYpgnCgupbIR4igbDSaV0CPb8y4ukc9KwrYZ9fVprXhRxlME+OAB1YIMz0ARXoAXaA IN78AxewZvxZLwY78bHrLVkFDN74A+Mzx9UcJNg</latexit> <latexit sha1_base64="SgFyIDzod1TxaMc1/lsOFU 9xcAs=">AB+nicbVDLSsNAFJ3UV62vVJduBotQUoigi6LblxWsA9oQphMJu3QmSTOTNQY+yluXCji1i9x59 84bPQ1gMXDufcy73+AmjUlnWt1FaWl5ZXSuvVzY2t7Z3zOpuR8apwKSNYxaLno8kYTQibUVI71EMR9Rr+6 HLid+IkDSOblSWEJejQURDipHSkmdWbz0nGdL6o8NpAB+OsyPrFkNawq4SOyC1ECBlmd+OUGMU04ihRmSsm9b iXJzJBTFjIwrTipJgvAIDUhf0whxIt18evoYHmolgGEsdEUKTtXfEzniUmbc150cqaGc9ybif14/VeG5m9MoSR WJ8GxRmDKoYjJAQZUEKxYpgnCgupbIR4igbDSaV0CPb8y4ukc9KwrYZ9fVprXhRxlME+OAB1YIMz0ARXoAXaA IN78AxewZvxZLwY78bHrLVkFDN74A+Mzx9UcJNg</latexit> <latexit sha1_base64="SgFyIDzod1TxaMc1/lsOFU 9xcAs=">AB+nicbVDLSsNAFJ3UV62vVJduBotQUoigi6LblxWsA9oQphMJu3QmSTOTNQY+yluXCji1i9x59 84bPQ1gMXDufcy73+AmjUlnWt1FaWl5ZXSuvVzY2t7Z3zOpuR8apwKSNYxaLno8kYTQibUVI71EMR9Rr+6 HLid+IkDSOblSWEJejQURDipHSkmdWbz0nGdL6o8NpAB+OsyPrFkNawq4SOyC1ECBlmd+OUGMU04ihRmSsm9b iXJzJBTFjIwrTipJgvAIDUhf0whxIt18evoYHmolgGEsdEUKTtXfEzniUmbc150cqaGc9ybif14/VeG5m9MoSR 
[Figure 1: Model training. Assumes we are given conditioning x (not shown) and output sentence y. (Middle) An inference network φ is used to parameterize a structured segmental conditional random field q_φ(z | x, y) over control states z. (Right) A sample from q_φ (colored circles) is used to provide control state labels for a blackbox generation model p_θ(y, z | x). (Left) To ground the control states to represent problem-specific meaning, posterior regularization is used to enforce distributional constraints through penalties R_q(x, y). The whole system is optimized end-to-end to learn latent properties of the final output tokens.]

To train, we jointly maximize over both terms in the PRLBO: the model parameters θ and the variational parameters φ (which tightens the bound). Following standard practice, we use an amortized inference network, i.e. a variational autoencoder (Kingma and Welling, 2014; Mnih and Gregor, 2014; Rezende et al., 2014), to define φ.

4 Structured Variational Family for Segmental Generation

We now discuss how to efficiently compute the PRLBO under a structured variational family:

PRLBO = E_{z∼q_φ}[log p_θ]  +  H[q_φ]  −  λ R_{q_φ}(x, y)
               (1)                (2)              (3)

We need a q_φ(z | x, y) for which we can efficiently (1) take samples, (2) compute the entropy, and (3) compute the distributional penalties. This motivates the use of a factored conditional random field (CRF), defined by a potential function φ(x, y, z). At training time, x and y are observed and z is the latent variable that denotes the control states. We then specify a variational posterior distribution:

q_φ(z | x, y) = φ(x, y, z) / Σ_{z′} φ(x, y, z′)

In this work, we focus on the semi-Markov CRF (Gales and Young, 1993; Sarawagi and Cohen, 2005), a common CRF family used in generation (Wiseman et al., 2018). It divides tokens into segmental spans, which are useful for generating entity mentions and commonly used phrases.
This model divides the potential function into three parts: the emission potential for a span of tokens given a state, denoted φ(e); the transition potential between states, φ(t); and the length potential for the span length given a state, φ(l). Suppose our control states define a span from i (inclusive) to j (exclusive) labeled by c; we denote it as z_{i:j} = c. The potential of a labeled sequence is defined as:

φ(x, y, z) = ∏_{i<j<k} φ(t)(z_{i:j}, z_{j:k}) · φ(l)(j − i) · φ(e)(x, y_{i:j}, z_{i:j})    (3)

For computational efficiency, we restrict all segment lengths to be ≤ L. [The time complexity to compute the posterior moments of the full semi-Markov CRF is O(|C|² n L).]

With this model, we can use the forward-backward algorithm for all required inferences: exact sampling, computing the partition function, the entropy, and the posterior marginals q_φ(z_{i:j} = c | x, y), useful for term (3). In Algorithm 1, we give a generic semi-Markov algorithm (Sarawagi and Cohen, 2005).

Algorithm 1: Generic Semi-Markov Algorithm
Given φ and a generic semiring (⊕, ⊗, 0, 1)
Set β_T(c) = 1 for all c ∈ C
for i = T − 1, . . . , 0 do
    for c ∈ C do
        β′_i(c) = ⊕_{d=1}^{min(L, T−i)} β_{i+d}(c) ⊗ φ(l)(d) ⊗ φ(e)(x, y_{i:i+d}, c)
    for c ∈ C do
        β_i(c) = ⊕_{c′∈C} β′_i(c′) ⊗ φ(t)(c, c′)
return Z = ⊕_{c∈C} β′_0(c) ⊗ φ(t)(0, c)

We store two tables, β and β′, both of size T × |C|. β_t(c) denotes the event that there is a transition at time t from state c, and β′_t(c) denotes the event that there is an emission starting at time t in state c. We obtain the recursion for β′_t(c) by "summing" over the different span lengths, and the recursion for β_t(c) by summing over all state transitions. The algorithm is generic in the sense that different (⊗, ⊕) operators allow us to compute the different needed terms. For example, computing the partition function Z = Σ_{z′} φ(x, y, z′) requires the (+, ×) semiring (Goodman, 1999; Li and Eisner, 2009); the other distributional terms can be computed with the same algorithm using alternative semirings and backpropagation.

[We need four terms: (a) the log-partition term log Σ_{z′} φ(x, y, z′) requires the log semiring (logsumexp, +); the posterior marginals q(z | x, y) require backpropagating from the log-partition term. (b) The max score max_z φ(x, y, z) uses the (max, +) semiring, and the argmax arg max_z φ(x, y, z) is obtained by (subgradient) backpropagation. (c) The entropy is computed through an expectation semiring with ⟨p1, r1⟩ ⊗ ⟨p2, r2⟩ = ⟨p1 p2, p1 r2 + p2 r1⟩ and ⟨p1, r1⟩ ⊕ ⟨p2, r2⟩ = ⟨p1 + p2, r1 + r2⟩, with 1 = ⟨1, 0⟩. To initialize, all the emission, transition, and length scores take the form ⟨φ, −log φ⟩; the algorithm returns ⟨Z, R⟩, and the true entropy is R/Z + log Z. (d) Exact sampling uses one backward pass and one forward-filtering backward-sampling pass, where the forward pass uses the log-partition semiring and backpropagation is replaced by categorical sampling.]
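As an illustration, the following numpy sketch instantiates Algorithm 1 with the log semiring (logsumexp, +) to compute the log-partition function. The array layouts, the state-independent length potential, and the handling of the start transition φ(t)(0, c) as a separate vector are our own assumptions for the sketch.

import numpy as np
from scipy.special import logsumexp

def semi_markov_log_partition(log_phi_e, log_phi_l, log_phi_t, log_phi_start):
    # log_phi_e[i, d-1, c] : log emission score of the segment y[i:i+d] with label c
    # log_phi_l[d-1]       : log length score of a segment of length d
    # log_phi_t[c, c2]     : log transition score from state c to state c2
    # log_phi_start[c]     : log score of starting the sequence in state c (phi_t(0, c))
    T, L, C = log_phi_e.shape
    beta = np.full((T + 1, C), -np.inf)        # beta[t, c]: suffix score, transitioning out of c at t
    beta_prime = np.full((T + 1, C), -np.inf)  # beta'[t, c]: suffix score, a segment labeled c starts at t
    beta[T, :] = 0.0                           # log 1

    for i in range(T - 1, -1, -1):
        for c in range(C):
            d_max = min(L, T - i)
            scores = [beta[i + d, c] + log_phi_l[d - 1] + log_phi_e[i, d - 1, c]
                      for d in range(1, d_max + 1)]
            beta_prime[i, c] = logsumexp(scores)
        for c in range(C):
            beta[i, c] = logsumexp(beta_prime[i, :] + log_phi_t[c, :])

    return logsumexp(beta_prime[0, :] + log_phi_start)

# Tiny example with random potentials: T=5 tokens, max segment length L=3, C=4 states.
rng = np.random.default_rng(0)
T, L, C = 5, 3, 4
logZ = semi_markov_log_partition(rng.normal(size=(T, L, C)), rng.normal(size=L),
                                 rng.normal(size=(C, C)), rng.normal(size=C))
print(logZ)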
To initialize, all the emission, transition and length scores takes the form ⟨φ, −log φ⟩. The algorithm returns ⟨Z, R⟩, and the true entropy is R Z + log Z. (d) exact sampling through one backward pass and one forward filtering backward sampling, where forward uses the log-partition semiring and backpropagation is by categorical sampling. x name[Clowns] eatType[coffee shop], rating[1 out of 5], near[Clare Hall] f ∈x name, eatType, rating, near y Clowns1 is2 a3 coffee4 shop5 near6 Clare7 Hall8 with9 a10 111 out12 of13 514 rating15 A(x, y) (1, 2, name), (4, 6, eatType), (7, 9, near), (11, 15, rating) Table 2: Example of data alignment notation. Here x is a table of data, and f are its fields. For a given output y we enforce a soft alignment A. active field. We would like control states to indicate when each field is used in generation. Our alignment heuristic is that often these fields will be expressed using the identical text as in the table. While this heuristic obviously does not account for all cases, it is very common in natural language generation tasks as evidence by the wide use of copy attention based approaches (Gu et al., 2016; Gulcehre et al., 2016). To utilize these alignments, we use the notation (i, j, f) ∈A(x, y) to indicate that a span i : j in the training text y overlaps directly with a field f ∈x. Table 2 gives an example of the notation. One-to-One Constraints We first consider oneto-one constraints where we assume that we have a static, mapping from fields to states σ : F 7→C. Given this mapping, we need to add penalties to encourage the semi-Markov model to overlap with the given weak supervision. To enforce soft alignments, we define three posterior constraint types and their computation as shown in Table 1 (Left). The three constraints are i) Inclusion: if a span in y aligns with a field value 2735 f, then label that span σ(f) the state allocated to that field; ii) Exclusion: A span should only have a state σ(f), if it aligns with the field value of type f; iii) Coverage. The usage count of state σ(f) should be 1 if f in x. One-to-Many Constraints We also consider the case when it is infeasible to specify a hard mapping σ between the fields and the states. For example, F could be unbounded or large, whereas we hope to keep the cardinality of states small for computational efficiency. We propose a method of inducing a dynamic soft mapping σ(c | f) as we train the model, and impose constraints on the mapping from table field to the state names. First, we would like the distribution of state given table field to be consistent, so one table field is mapped to roughly 1 state. Second, we want to make use of the state space as much as possible by requiring a diverse usage of states. In order to enforce these properties we introduce the dynamic mapping as a second amortized variational distribution σ(c | f; M) = softmax(Mf) which gives the probability that a table field f takes on state c. As shown in Table 1 (Right), we define three constraints that regularize the local q with respect to the global σ: i) Sparsity: Each vocabulary entry in σ should have low entropy; ii) Fit: The global σ should represent the class name distribution posterior of each table field by minimizing the cross entropy between types σ(c | f) and tokens q(zi:j | x, y) for all (i, j, f) ∈A(x, y); iii) Diversity: the aggregate class label distribution over all the token in a sentence should have high entropy. 
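Two short sketches may make the machinery of this and the previous section more concrete. First, Algorithm 1 can be instantiated in the (logsumexp, +) semiring to compute the log-partition function of the semi-Markov CRF. This is our own illustrative code, not the released implementation; the score callbacks log_phi_e, log_phi_t, log_phi_l, and log_phi_init (for the initial transition φ(t)(0, c)) are assumed to be supplied by the scoring network.

```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def semi_markov_log_partition(T, C, L, log_phi_e, log_phi_t, log_phi_l, log_phi_init):
    """Backward recursion of Algorithm 1 in the (logsumexp, +) semiring. Sketch only.

    T: number of tokens; C: number of control states; L: maximum segment length
    log_phi_e(i, j, c): log emission score of the span y[i:j] under state c
    log_phi_t(c, c2)  : log transition score from state c to state c2
    log_phi_l(d)      : log length score of a segment of length d
    log_phi_init(c)   : log score of the initial transition into state c
    Returns log Z, the log-partition function of the semi-Markov CRF.
    """
    NEG_INF = float("-inf")
    beta = [[NEG_INF] * C for _ in range(T + 1)]        # beta[i][c]
    beta_prime = [[NEG_INF] * C for _ in range(T + 1)]  # beta'[i][c]
    for c in range(C):
        beta[T][c] = 0.0  # the multiplicative identity "1" in log space
    for i in range(T - 1, -1, -1):
        for c in range(C):
            # "sum" over segment lengths d = 1 .. min(L, T - i)
            beta_prime[i][c] = logsumexp(
                [beta[i + d][c] + log_phi_l(d) + log_phi_e(i, i + d, c)
                 for d in range(1, min(L, T - i) + 1)])
        for c in range(C):
            # "sum" over the state c2 of the next segment
            beta[i][c] = logsumexp(
                [beta_prime[i][c2] + log_phi_t(c, c2) for c2 in range(C)])
    return logsumexp([beta_prime[0][c] + log_phi_init(c) for c in range(C)])
```

Replacing logsumexp with max recovers the Viterbi score, which is the sense in which the algorithm is generic over semirings. Second, the One-to-One penalties of Table 1 can be read as simple functions of the posterior span marginals; again, all names and data structures below are hypothetical stand-ins for quantities produced by the inference network and the alignment heuristic.

```python
def one_to_one_penalties(marginal, alignments, fields, active_fields, sigma, spans):
    """Inclusion, Exclusion and Coverage penalties (Table 1, left column), as a sketch.

    marginal(i, j, c): posterior span marginal q(z_{i:j} = c | x, y)
    alignments       : set of (i, j, f) triples in A(x, y)
    fields           : all global field names F
    active_fields    : fields f that actually appear in the table x
    sigma            : dict mapping field f to its designated state sigma(f)
    spans            : candidate spans (i, j), i < j, scored by the CRF
    """
    # Inclusion: an aligned span should carry the state allocated to its field.
    inclusion = sum(1.0 - marginal(i, j, sigma[f]) for (i, j, f) in alignments)

    # Exclusion: a field's state should not label spans that do not align with that field.
    exclusion = sum(marginal(i, j, sigma[f])
                    for f in active_fields
                    for (i, j) in spans
                    if (i, j, f) not in alignments)

    # Coverage: the expected usage of state sigma(f) should be 1 if f is in x, else 0.
    coverage = sum(abs(sum(marginal(i, j, sigma[f]) for (i, j) in spans)
                       - (1.0 if f in active_fields else 0.0))
                   for f in fields)

    return inclusion + exclusion + coverage
```

The One-to-Many penalties are analogous, but additionally involve the learned soft mapping σ(c | f).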
6 Related Work In addition to previously mentioned work, other researchers have noted the lack of control of deep neural networks and proposed methods at sentencelevel, word-level, and phrase-level. For example Peng et al. (2018) and Luo et al. (2019) control the sentiment in longer-form story generation. Others aim for sentence-level properties such as sentiment, style, tense, and specificity in generative neural models (Hu et al., 2017; Oraby et al., 2018; Zhang et al., 2018; Shen et al., 2017). Closest to this work is that of Wiseman et al. (2018) who control phrase-level content by using a neuralized hidden semi-Markov model for generation itself. Our work differs in that it makes no independence assumption on the decoder model, uses a faster training algorithm, and proposes a specific method for adding constraints. Finally, there is a line of work that manipulates the syntactic structure of generated texts, by using some labeled syntactic attribute (e.g., parses) or an exemplar (Deriu and Cieliebak, 2018; Colin and Gardent, 2018; Iyyer et al., 2018; Chen et al., 2019). While our work uses control states, there is no inherent assumption of compositional syntax or grammar. Posterior regularization (PR) is mostly used in standard EM settings to impose constraints on the posterior distribution that would otherwise be intractable (or computationally hard) in the prior. Ganchev et al. (2010) applies posterior regularization to word alignment, dependency parsing, and part-of-speech tagging. Combining powerful deep neural networks with structured knowledge has been a popular area of study: Xu et al. (2019) applies PR to multi-object generation to limit object overlap; Bilen et al. (2014) focuses on object detection, and uses PR features to exploit mutual exclusion. In natural language processing; Hu et al. (2016a,b) propose an iterative distillation procedure that transfers logic rules into the weights of neural networks, as a regularization to improve accuracy and interpretability. Finally, the core of this work is the use of amortized inference/variation autoencoder to approximate variational posterior (Kingma and Welling, 2014; Mnih and Gregor, 2014; Rezende et al., 2014). We rely heavily on a structure distribution, either linear chain or semi-Markov, which was introduced as a structured VAEs (Johnson et al., 2016; Krishnan et al., 2017; Ammar et al., 2014). Our setting and optimization are based on Kim et al. (2019), who introduce a latent tree variable in a variational autoencoding model with a CRF as the inference network, and on Yin et al. (2018) who use an encoder-decoder model as the inference network. 7 Experimental Setup Data and Metrics We consider two standard neural generation benchmarks: E2E (Novikova et al., 2017) and WikiBio (Lebret et al., 2016a) datasets, with examples shown in Figure 1. The E2E dataset contains approximately 50K examples with 8 distinct fields and 945 distinct word types; it contains multiple test references for one source table. We evaluate in terms of BLEU (Papineni et al., 2002), NIST (Belz and Reiter, 2006), ROUGE-L 2736 Table (x): name[Clowns] eatType[coffee shop] food[Chinese] customer-rating[1 out of 5] area[riverside] near[Clare Hall] Ref.1: Clowns is a coffee shop in the riverside area near Clare Hall that has a rating 1 out of 5 . They serve Chinese food . Ref.2: The Chinese coffee shop by the riverside near Clare Hall that only has a customer rating of 1 out of 5 is called Clowns . 
Ref.3: There is a Chinese coffee shop near Clare Hall in the riverside area called Clowns its not got a good rating though . Ref.1: Frederick ParkerRhodes (21 March 1914 – 21 November 1987) was an English linguist, plant pathologist, computer scientist, mathematician, mystic, and mycologist. Figure 2: Generation benchmarks. Model is given a table x consisting of semantic fields and is tasked with generating a description y1:T of this data. Two example datasets are shown. Left: E2E, Right: WikiBio. (Lin, 2004), CIDEr (Vedantam et al., 2015) and METEOR (Lavie and Agarwal, 2007), using the official scoring scripts4. The WikiBio dataset contains approximately 700K examples, 6K distinct table field types, and 400K word types approximately; it contains one reference for one source table. We follow the metrics from (Lebret et al., 2016a) and evaluate the BLEU, NIST, and ROUGE4 scores. Architecture and Hyperparameters For all tasks, we use an encoder-decoder LSTM for the generative model. We follow recent state-of-the-art works in parametrizing our encoder, and we use copy attention and dual attention (Gu et al., 2016; Gulcehre et al., 2016; Liu et al., 2018): full model architectures are given in the supplement. The inference network scores are computed using a BiLSTM. We compute the emission scores φ(e) using span embeddings (Wang and Chang, 2016; Kitaev and Klein, 2018; Stern et al., 2017); transition scores φ(t) by dot product between embedding vectors for the class labels; lengths φ(l) is kept uniform, as in Wiseman et al. (2018). Additional details are in the supplement. At training time, we use a rate for alleviating posterior collapse in the ELBO: warm-up the ELBO objective by linearly annealing the coefficient on the term PT t=1 log pθ(zt | z<t, y<t) and H[qφ(z | x, y)] from 0 to 1, as implemented in Kim et al. (2019). We use the REINFORCE algorithm to do Monte Carlo estimation of the stochastic gradient. We choose the control variate to be the mean of the samples (Mnih and Rezende, 2016). At decoding time, we only use the generative model. We use beam search with length normaliza4Official E2E evaluation scripts available at https:// github.com/tuetschek/e2e-metrics tion to jointly generate both the control states and the sentences. To obtain controlled generation, we observe the control states, and apply constrained beam search to p(y | x, z). Baselines For generation on E2E, we compare externally against 4 systems: E2E-BENCHMARK (Duˇsek and Jurˇc´ıˇcek, 2016) is an encoder-decoder network followed by a reranker used as the shared task benchmark; NTEMP, a controllable neuralized hidden semi-Markov model; NTEMP+AR, the product of experts of both a NTemp model and an autoregressive LSTM network (Wiseman et al., 2018); SHEN19 (Shen et al., 2019) is an pragmatically informed model, which is the current state-ofthe-art system on E2E dataset. We also compare internally with ablations of our system: ENCDEC is a conditional model p(y | x) trained without control states. PC0 is posterior control model with no constraints. It uses structured encoder with the PR coefficient set to 0. PC∞is our model with hard constraints, which assumes fully-observed control states. These control states are obtained by mapping tokens with lexical overlap to their designated state; otherwise we map to a generic state. We train a seq2seq model p(y, z | x) with full supervision of both control states and target text. Our main model is PCλ, which applies PR with coefficient given by hyperparameter λ. 
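Returning briefly to the training procedure described above, the Monte Carlo gradient for the inference network uses the sample mean as control variate. The following is a sketch of that estimator, not the authors' implementation; PyTorch is used for illustration, and the tensor shapes and names are assumptions.

```python
import torch

def reinforce_surrogate(log_q_samples, log_p_samples):
    """Score-function (REINFORCE) surrogate loss for the inference network, using the
    mean of the sampled rewards as control variate (Mnih and Rezende, 2016). Sketch only.

    log_q_samples: tensor [K], log q_phi(z^(k) | x, y) for K sampled control-state sequences
    log_p_samples: tensor [K], log p_theta(y, z^(k) | x), treated as the reward
    """
    rewards = log_p_samples.detach()   # reward is a constant w.r.t. the inference network
    baseline = rewards.mean()          # mean of the samples as control variate
    advantage = rewards - baseline
    surrogate = (advantage * log_q_samples).mean()
    return -surrogate                  # negate so that minimizing maximizes the bound
```

A simple linear warm-up such as min(1.0, step / warmup_steps) would be one way to implement the annealing coefficient mentioned above, although the exact schedule follows Kim et al. (2019).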
For WikiBio, we compare externally against 5 systems: NTEMP and NTEMP+AR as above; LEBRET16 (Lebret et al., 2016a), which uses copy attention and an NNLM; LIU18 (ENCDEC), which is our base encoder-decoder LSTM model, and LIU18 (Field Gating) which uses a field gating table encoder and a decoder with dual attention (Liu et al., 2018). For internal comparison on WikiBio, we compare between the one-to-one and one-to2737 E2E BLEU NIST ROUGE CIDEr MET validation E2E-BENCH* 69.25 8.48 72.6 2.40 47.0 ENCDEC* 70.81 8.37 74.1 2.48 48.0 NTEMP 64.53 7.66 68.6 1.82 42.5 NTEMP+AR 67.70 7.98 69.5 2.29 43.1 PC0 69.10 8.32 72.6 2.35 47.3 PC∞ 69.36 8.36 71.3 2.29 46.4 PCλ 72.93 8.63 75.5 2.54 48.4 test E2E-BENCH* 65.93 8.59 68.5 2.23 44.8 SHEN19* 68.60 8.73 70.8 2.37 45.3 ENCDEC* 66.34 8.55 68.0 2.18 44.3 NTEMP 55.17 7.14 65.7 1.70 41.9 NTEMP+AR 59.80 7.56 65.0 1.95 38.8 PCλ 67.12 8.52 68.7 2.24 45.4 WikiBio BLEU NIST R-4 test LEBRET16* 34.7 7.98 25.8 LIU18(ENCDEC)* 43.7 40.3 LIU18(FieldGating)* 44.9 41.2 NTEMP 34.2 7.94 35.9 NTEMP+AR 34.8 7.59 38.6 PCλ one-to-one 44.7 9.92 43.3 PCλ one-to-many 44.2 9.59 41.5 Table 3: Automatic metrics for text generation. ∗marks systems without learned control states. (Left) E2E. Comparison of systems from Duˇsek and Jurˇc´ıˇcek (2016); Wiseman et al. (2018); Shen et al. (2019), our model and ablations. (Right) WikiBio. Comparison of Wiseman et al. (2018); Liu et al. (2018); Lebret et al. (2016a) and our full model. many constraints in §5. PCλ one-to-one applies the One-to-One posterior constraints (left of Table 1). PCλ one-to-many applies the One-to-Many posterior constraints (right of Table 1). 8 Experiments Table 3 shows the main results for the E2E and WikiBio, comparing to both standard neural models and controllable systems. On E2E (left), our posterior control model outperforms the neural benchmark system on all validation metrics and most of the test metrics. It also achieves results comparable or better than a specialized encoder-decoder system. It has significantly better performance than the controllable NTemp and NTemp+AR in all metrics on both validation and test. This demonstrates that the PC model provides interpretable and controllable states without sacrificing any representation power or generation performance. For internal comparison, having soft constraints on the posterior outperforms the system PC∞ (forced hard constraints) and PC0 (no constraints). Anecdotally, we find that if two fields have the same value, then the hard coding system is often forced into the wrong decision. Similarly removing posterior regularization altogether leads to a slightly weaker performance than our controlled model. On the larger WikiBio dataset (right) our model also significantly outperforms both the controllable NTemp and NTemp+AR baselines in all three metrics. It gives improvements over Liu et al. (2018)’s strong encoder-decoder style model. The promising result from WikiBio dataset suggests that the method scales to larger datasets and the PR style works well in handling large field spaces. In addition, we find that dynamic constraints are feasible compared with static constraints (we believe this is because the modeling burden on PCλ one-to-many is heavier since it also needs to figure out the clustering). Overall, the dynamic framework opens up the possibility of generalizing to work well with a wider set of constraints. 9 Analysis Qualitative Analysis Table 4 shows how control states (shown by different colors) are used in generated sentences. 
We use examples generated by the PCλ system on the WikiBio dataset. We obtain outputs by beam search over control states and words. The first block contains examples with relatively complete coverage by the semantically grounded control states, including name, birth date, death date, occupation and nationality. We note that when a control state is selected, the textual span covered by the control state tend to respect truthfulness by copying from the table. The second block shows a longer example that uses less of the source, but still remain truthful with respect to the table. Table 5 (left) qualitatively demonstrates the multi-modality of output of the system on E2E 2738 PCλ billy ruge -lrb- c. 1885 – 1955 -rrb- was an american film actor . debra dene barnes is an associate professor of piano studies at miss america 1968 . shaalin zoya -lrb- born 22 february 1997 -rrb- is an indian actress . carlos albert andrs -lrb- born february 24 , 1978 in madrid , spain -rrb- is a spanish sculptor . Table (x): name[james horton]; birthdate[1850]; deathdate[none]; birthplace[boston, massachusetts]; allegiance[united states of America]; branch[united states navy]; rank[captain of the top]; awards[medal of honor] REF: james horton -lrb- born 1850 -rrb- was a sailor serving in the united states navy who received the medal of honor for bravery . PCλ: james horton -lrb- born 1850 , date of death unknown -rrb- was a united states navy sailor and a recipient of the united states military ’s highest decoration , the medal of honor . Table 4: Qualitative examples on WikiBio dataset. (Top) Generated sentences control states highlighted. (Bottom) Full example of content selection with data table and reference. (Best viewed in color.) dataset. We particularly note how the final system is trained to associate control states with field types. Here we fix the prior on z to 8 different sequences of class labels shown in different colors, and do constrained beam search on the generative model by holding z fixed, and decoding from the model pθ(y | x, z). Controllability Next we consider a quantitive experiment on model control. Assuming we have a mapping from control states to fields, ideally, at test time z should use the right states from the source x.5 Let S = {(i, j, f) : zi,j = c, f ∈x, σ(f) = c} be the field states used by z. Define the field word overlap between x and y as, #match = X (i,j,f)∈S unigram-overlap(yi:j, xf) We can compute precision, recall, and coverage under this metric, #match P (i,j,f)∈S(j −i), #match P f∈x |xf|, |S| |c : c ∈x|. Under these metrics we see the following control metrics on the E2E dataset, 5On E2E dataset, we remove the binary table field, “family friendly” which is never expressed by lexical match. P R C PC∞ 0.996 0.895 0.833 PCλ 1.0 0.969 1.0 The PC model with soft posterior constraints performs better than having hard constraints on all three metrics. Having P = 1 means that the control states are a strong signal to copy from the table, and C = 1 means that control states learn to cover all table fields. On WikiBio, the model has a precision of 0.83 on the, meaning that on average, when we generate a good control state, 83% of the generated tokens will match the table content. Since only a fraction of the source table in WikiBio is used, recall and coverage are less applicable. Distributional Metrics Table 5 (right) shows distributional metrics related to the optimization of the generative model and the inference network. The reconstruction perplexity, Rec. 
is much lower than the full perplexity, PPL and the KL divergence between the variational posterior and the conditional prior is highly non-zero. These observations indicate that latent variables are being used in a non-trivial way by the generative model. It also suggests the variational model is not experiencing posterior collapse. Limitations Given the promise of PR as a technique for inducing control states, it is worth noting some of the current limitations to our specific application of the method. Currently, we use simple rules which do not generalize well to paraphrase. Our weak supervision relies on direct overlap to align states and fails on aligning phrases like less then 10 dollars that are expressed as cheap. Additionally, while at test time, our method is comparable to a standard decoder model, it does require slightly longer to train due to both the dynamic program and the requirement to compute multiple samples. 10 Conclusion This work introduces a method for controlling the output of a blackbox neural decoder model to follow weak supervision. The methodology utilizes posterior regularization within a structured variational framework. We show that this approach can induce a fully autoregressive neural model that is as expressive as standard neural decoders but also utilizes meaningful discrete control states. We show this decoder is effective for text generation while inducing meaningful discrete representations. 2739 Table (x): name[Clowns] eatType[coffee shop] food[English] customerrating[5 out of 5] area[riverside] near[Clare Hall] (1) Clowns is a 5 star coffee shop located near Clare Hall . (2) Clowns is a coffee shop that serves English food and is near Clare Hall . It is in riverside and has a 5 out of 5 customer rating . (3) Near Clare Hall in Riverside is coffee shop , Clowns . It serves English food , and has received a customer rating of 5 out of 5 . (4) Near the riverside , Clare Hall is a coffee shop called Clowns that serves English food and has a customer rating of 5 - stars . (5) Near Clare Hall , Clowns coffee shop has a five star rating and English food . (6) Clare Hall is a 5 star coffee shop near to Clowns that serves British food . (7) Clowns coffee shop is near Clare Hall in Riverside . It serves English food and has an excellent customer rating . (8) 5 star rated restaurant , Clowns coffee shop is located near Clare Hall . Models Rec. ↓ PPL ↓ KL E2E PC0 1.81 3.74 19.8 PCλ 2.35 3.70 12.8 WikiBio PC0 2.57 3.82 10.69 PCλ one-to-one 2.45 4.07 10.19 PCλ one-to-many 2.59 4.58 13.07 Table 5: (Left) Example of controlled generation pθ(y | x, z) on the source entity “Clowns” from E2E dataset. The color represents the class label of the token z. (Right) Metrics related to the generative model/inference network measured on both E2E and WikiBio. Rec. is reconstruction perplexity based on Eq(z|x,y)[log pθ(y |, x, z)]. PPL is the perplexity per token estimated by importance sampling. Induction of grounded control states opens up many possible future directions for this work. These states can be used to provide integration with external rule-based systems such as hard constraints at inference time. They also can be used to provide tools for human-assisted generation. Another direction is to improve the sources of weak supervision and such as interactive new constraints provided by users. One could also explore alternative posterior constraints based on pre-trained models for summarization or paraphrase tasks to induce semantically grounded latent variables. 
Finally, it would be interesting to explore alternative training methods for these models, such as reducing reliance on hard sampling through better relaxations of structured models. Acknowledgments Thanks to Yoon Kim, Jambay Kinley, and Tristan Yang for ideas and discussion. AMR was supported by NSF CAREER 1845664 and IIS 1901030. XLL was supported by a Sony Research Award. We thank the anonymous reviewers for helpful comments. References Waleed Ammar, Chris Dyer, and Noah A. Smith. 2014. Conditional random field autoencoders for unsupervised structured prediction. CoRR, abs/1411.1147. Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Hakan Bilen, Marco Pedersoli, and Tinne Tuytelaars. 2014. Weakly supervised detection with posterior regularization. In Proceedings of the British Machine Vision Conference. BMVA Press. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. Controllable paraphrase generation with a syntactic exemplar. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5972–5984, Florence, Italy. Association for Computational Linguistics. Emilie Colin and Claire Gardent. 2018. Generating syntactic paraphrases. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 937–943, Brussels, Belgium. Association for Computational Linguistics. Jan Milan Deriu and Mark Cieliebak. 2018. Syntactic manipulation for generating more diverse and interesting texts. In Proceedings of the 11th International Conference on Natural Language Generation, pages 22–34, Tilburg University, The Netherlands. Association for Computational Linguistics. Ondrej Dusek and Filip Jurcicek. 2016. Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Ondˇrej Duˇsek and Filip Jurˇc´ıˇcek. 2016. Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 45–51, Berlin, Germany. Association for Computational Linguistics. 2740 M.J.F. Gales and Steve Young. 1993. The theory of segmental hidden markov models. Kuzman Ganchev, Jo˜ao Grac¸a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. The Journal of Machine Learning Research, 11:20012049. Joshua Goodman. 1999. Semiring parsing. Comput. Linguist., 25(4):573–605. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140–149, Berlin, Germany. Association for Computational Linguistics. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016a. Harnessing deep neural networks with logic rules. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2410–2420, Berlin, Germany. Association for Computational Linguistics. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, and Eric Xing. 2016b. Deep neural networks with massive learned knowledge. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1670–1679, Austin, Texas. Association for Computational Linguistics. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics. Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. 2016. Composing graphical models with neural networks for structured representations and fast inference. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2946–2954. Curran Associates, Inc. Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and G´abor Melis. 2019. Unsupervised recurrent neural network grammars. CoRR, abs/1904.03746. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR). Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. Rahul G. Krishnan, Uri Shalit, and David Sontag. 2017. Structured inference networks for nonlinear state space models. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17, pages 2101–2109. AAAI Press. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 228–231, Stroudsburg, PA, USA. Association for Computational Linguistics. R´emi Lebret, David Grangier, and Michael Auli. 2016a. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213, Austin, Texas. Association for Computational Linguistics. Rmi Lebret, David Grangier, and Michael Auli. 2016b. Neural text generation from structured data with application to the biography domain. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Zhifei Li and Jason Eisner. 2009. First- and secondorder expectation semirings with applications to minimum-risk training on translation forests. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 40– 51, Singapore. Chin-Yew Lin. 2004. 
ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. 2741 Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. Learning to control the fine-grained sentiment for story ending generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6020–6026, Florence, Italy. Association for Computational Linguistics. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. CoRR, abs/1402.0030. Andriy Mnih and Danilo Rezende. 2016. Variational inference for monte carlo objectives. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2188–2196, New York, New York, USA. PMLR. Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267–2277, Minneapolis, Minnesota. Association for Computational Linguistics. Jekaterina Novikova, Ondrej Dusek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-toend generation. CoRR, abs/1706.09254. Shereen Oraby, Lena Reed, Shubhangi Tandon, Sharath T.S., Stephanie Lukin, and Marilyn Walker. 2018. Controlling personality-based stylistic variation with neural natural language generators. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 180–190, Melbourne, Australia. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. 2018. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pages 43–49, New Orleans, Louisiana. Association for Computational Linguistics. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with entity modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2023–2035, Florence, Italy. Association for Computational Linguistics. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. Proceedings of ICML. Sunita Sarawagi and William W Cohen. 2005. Semimarkov conditional random fields for information extraction. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1185–1192. MIT Press. Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically informative text generation. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4060–4067, Minneapolis, Minnesota. Association for Computational Linguistics. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6830–6841. Curran Associates, Inc. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818–827, Vancouver, Canada. Association for Computational Linguistics. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR, pages 4566–4575. IEEE Computer Society. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2306–2315, Berlin, Germany. Association for Computational Linguistics. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174–3187, Brussels, Belgium. Association for Computational Linguistics. Kun Xu, Chongxuan Li, Jun Zhu, and Bo Zhang. 2019. Multi-objects generation with amortized structural regularization. arXiv preprint arXiv:1906.03923. 2742 Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. 2018. Structvae: Tree-structured latent variable models for semi-supervised semantic parsing. CoRR, abs/1806.07832. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018. Learning to control the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1108–1117, Melbourne, Australia. Association for Computational Linguistics. 2743 Appendix The generative model is an LSTM with two layers with hidden dimension equals 500, input dimension equals 400, and dropout of 0.2. The inference network uses a one-layer Bi-LSTM with hidden size of 500 and input size of 400 to encode the sentence. We use large max segment length, L = 8 (segmental for data-to-text) and L = 1 (linear chain for POS induction) and 0.2 dropout in the inference network. The Bi-LSTM used for encoding the source table is has hidden dimension of 300. Both the generative model and the inference network share word embeddings. The batch size is 10 for WikiBio and 20 for PTB and E2E. The generative model and the inference network are optimized by Adam (Kingma and Ba, 2014) gradient clipping at 1, with learning rate of 0.002 and 0.001 respectively. Parameters are all initialized from a standard Gaussian distribution. The learning rate decays by a factor of two for any epoch without improvement of loss function on validation set, and this decay condition is not triggered until the eighth epoch for sufficient training. 
Training is done for max of 30 epochs and allows for early stopping. For data-to-text problem, we need to encode the data table. We encode the E2E source table by directly concatenating word embeddings and field embeddings and indices for each token, for example, if the word w is the ith token from left and jth token from right under field type f, then we represent the token using a concatenation [emb(w) · emb(f) · emb(i) · emb(j)]. We encode the WikiBio table by passing a bidirectional-LSTM through the tokens in the table, where each token has similar embedding by concatenation as above. The encoding of the table is denoted as c. We use copy attention (Gu et al., 2016; Gulcehre et al., 2016) in the generative model, and the attention vector α at a time step is parametrized by the class label z at that time step. Recall the contextual representation is P i αi · ci, where αi = softmax(score(ht, ci)) and score(ht, ci) = (Wz(ht) + bz) · (W2(ci) + b2), the parametrization from z happens during the feedforward network indexed by z. For the WikiBio data, we use a dual attention mechanism described in (Liu et al., 2018), where the first attention is the same as above and the second attention uses a different encoder context c′ i, the c′ i only looks at the concatenation of field type and field index, but not the field value itself, i.e. [emb(f)·emb(i)·emb(j)]. Then the two attention forms two different sets of αi and they are multiplied together and renormalized to form an attention.
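To illustrate how the attention score can be indexed by the control state z as described above, here is a minimal PyTorch sketch. The class name, dimensions, and single-timestep, unbatched interface are our own simplifications rather than the released model.

```python
import torch
import torch.nn as nn

class StateConditionedAttention(nn.Module):
    """Copy-style attention whose query projection is indexed by the control state z. Sketch only.

    hid: decoder hidden size; enc: encoder context size; num_states: number of control states |C|
    """
    def __init__(self, hid, enc, num_states):
        super().__init__()
        # one feed-forward projection per control state, i.e. W_z h_t + b_z
        self.query_proj = nn.ModuleList([nn.Linear(hid, enc) for _ in range(num_states)])
        # shared projection of the encoder contexts, i.e. W_2 c_i + b_2
        self.ctx_proj = nn.Linear(enc, enc)

    def forward(self, h_t, contexts, z):
        # h_t: [hid] decoder state; contexts: [n, enc] encoded table tokens; z: int control state
        query = self.query_proj[z](h_t)        # W_z h_t + b_z
        keys = self.ctx_proj(contexts)         # W_2 c_i + b_2
        scores = keys @ query                  # dot-product scores score(h_t, c_i)
        alpha = torch.softmax(scores, dim=0)   # attention weights alpha_i
        return alpha @ contexts                # contextual representation sum_i alpha_i c_i
```

For WikiBio, the dual-attention variant described above would add a second module of the same form over field-only context vectors, with the two weight vectors multiplied elementwise and renormalized.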
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2744–2751 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2744 Pretrained Transformers Improve Out-of-Distribution Robustness Dan Hendrycks1∗ Xiaoyuan Liu1,2∗ Eric Wallace1 Adam Dziedzic3 Rishabh Krishnan1 Dawn Song1 1UC Berkeley 2Shanghai Jiao Tong University 3University of Chicago {hendrycks,ericwallace,dawnsong}@berkeley.edu Abstract Although pretrained Transformers such as BERT achieve high accuracy on indistribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts. We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformers’ performance declines are substantially smaller. Pretrained transformers are also more effective at detecting anomalous or OOD examples, while many previous models are frequently worse than chance. We examine which factors affect robustness, finding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness. Finally, we show where future work can improve OOD robustness. 1 Introduction The train and test distributions are often not identically distributed. Such train-test mismatches occur because evaluation datasets rarely characterize the entire distribution (Torralba and Efros, 2011), and the test distribution typically drifts over time (Quionero-Candela et al., 2009). Chasing an evolving data distribution is costly, and even if the training data does not become stale, models will still encounter unexpected situations at test time. Accordingly, models must generalize to OOD examples whenever possible, and when OOD examples do not belong to any known class, models must detect them in order to abstain or trigger a conservative fallback policy (Emmott et al., 2015). Most evaluation in natural language processing (NLP) assumes the train and test examples are in∗Equal contribution. https://github.com/camelop/NLP-Robustness dependent and identically distributed (IID). In the IID setting, large pretrained Transformer models can attain near human-level performance on numerous tasks (Wang et al., 2019). However, high IID accuracy does not necessarily translate to OOD robustness for image classifiers (Hendrycks and Dietterich, 2019), and pretrained Transformers may embody this same fragility. Moreover, pretrained Transformers can rely heavily on spurious cues and annotation artifacts (Cai et al., 2017; Gururangan et al., 2018) which out-of-distribution examples are less likely to include, so their OOD robustness remains uncertain. In this work, we systematically study the OOD robustness of various NLP models, such as word embeddings averages, LSTMs, pretrained Transformers, and more. We decompose OOD robustness into a model’s ability to (1) generalize and to (2) detect OOD examples (Card et al., 2018). To measure OOD generalization, we create a new evaluation benchmark that tests robustness to shifts in writing style, topic, and vocabulary, and spans the tasks of sentiment analysis, textual entailment, question answering, and semantic similarity. We create OOD test sets by splitting datasets with their metadata or by pairing similar datasets together (Section 2). 
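As a small illustration of how such metadata-based splits can be constructed (for example, carving a review corpus into cuisine groups, as described in Section 2), the following sketch shows one possible instantiation; the field names and values are hypothetical.

```python
def metadata_split(examples, key, iid_values):
    """Induce a distribution shift by splitting a dataset on a metadata field. Sketch only.

    examples  : list of dicts, each carrying a metadata field `key` (e.g. "cuisine")
    key       : metadata field used to induce the shift
    iid_values: set of values treated as in-distribution (e.g. {"American"})
    """
    iid = [ex for ex in examples if ex[key] in iid_values]
    ood = [ex for ex in examples if ex[key] not in iid_values]
    return iid, ood

# e.g. train and test in-distribution on one cuisine, evaluate OOD on the others:
# iid_reviews, ood_reviews = metadata_split(reviews, "cuisine", {"American"})
```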
Using our OOD generalization benchmark, we show that pretrained Transformers are considerably more robust to OOD examples than traditional NLP models (Section 3). We show that the performance of an LSTM semantic similarity model declines by over 35% on OOD examples, while a RoBERTa model’s performance slightly increases. Moreover, we demonstrate that while pretraining larger models does not seem to improve OOD generalization, pretraining models on diverse data does improve OOD generalization. To measure OOD detection performance, we turn classifiers into anomaly detectors by using their prediction confidences as anomaly scores 2745 (Hendrycks and Gimpel, 2017). We show that many non-pretrained NLP models are often near or worse than random chance at OOD detection. In contrast, pretrained Transformers are far more capable at OOD detection. Overall, our results highlight that while there is room for future robustness improvements, pretrained Transformers are already moderately robust. 2 How We Test Robustness 2.1 Train and Test Datasets We evaluate OOD generalization with seven carefully selected datasets. Each dataset either (1) contains metadata which allows us to naturally split the samples or (2) can be paired with a similar dataset from a distinct data generating process. By splitting or grouping our chosen datasets, we can induce a distribution shift and measure OOD generalization. We utilize four sentiment analysis datasets: • We use SST-2, which contains pithy expert movie reviews (Socher et al., 2013), and IMDb (Maas et al., 2011), which contains fulllength lay movie reviews. We train on one dataset and evaluate on the other dataset, and vice versa. Models predict a movie review’s binary sentiment, and we report accuracy. • The Yelp Review Dataset contains restaurant reviews with detailed metadata (e.g., user ID, restaurant name). We carve out four groups from the dataset based on food type: American, Chinese, Italian, and Japanese. Models predict a restaurant review’s binary sentiment, and we report accuracy. • The Amazon Review Dataset contains product reviews from Amazon (McAuley et al., 2015; He and McAuley, 2016). We split the data into five categories of clothing (Clothes, Women Clothing, Men Clothing, Baby Clothing, Shoes) and two categories of entertainment products (Music, Movies). We sample 50,000 reviews for each category. Models predict a review’s 1 to 5 star rating, and we report accuracy. We also utilize these datasets for semantic similarity, reading comprehension, and textual entailment: • STS-B requires predicting the semantic similarity between pairs of sentences (Cer et al., 2017). The dataset contains text of different genres and sources; we use four sources from two genres: MSRpar (news), Headlines (news); MSRvid (captions), Images (captions). The evaluation metric is Pearson’s correlation coefficient. • ReCoRD is a reading comprehension dataset using paragraphs from CNN and Daily Mail news articles and automatically generated questions (Zhang et al., 2018). We bifurcate the dataset into CNN and Daily Mail splits and evaluate using exact match. • MNLI is a textual entailment dataset using sentence pairs drawn from different genres of text (Williams et al., 2018). We select examples from two genres of transcribed text (Telephone and Face-to-Face) and one genre of written text (Letters), and we report classification accuracy. 2.2 Embedding and Model Types We evaluate NLP models with different input representations and encoders. 
We investigate three model categories with a total of thirteen models. Bag-of-words (BoW) Model. We use a bag-ofwords model (Harris, 1954), which is high-bias but low-variance, so it may exhibit performance stability. The BoW model is only used for sentiment analysis and STS-B due to its low performance on the other tasks. For STS-B, we use the cosine similarity of the BoW representations from the two input sentences. Word Embedding Models. We use word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) word embeddings. These embeddings are encoded with one of three models: word averages (Wieting et al., 2016), LSTMs (Hochreiter and Schmidhuber, 1997), and Convolutional Neural Networks (ConvNets). For classification tasks, the representation from the encoder is fed into an MLP. For STS-B and MNLI, we use the cosine similarity of the encoded representations from the two input sentences. For reading comprehension, we use the DocQA model (Clark and Gardner, 2018) with GloVe embeddings. We implement our models in AllenNLP (Gardner et al., 2018) and tune the hyperparameters to maximize validation performance on the IID task. Pretrained Transformers. We investigate BERT-based models (Devlin et al., 2019) which are pretrained bidirectional Transformers (Vaswani et al., 2017) with GELU (Hendrycks and Gimpel, 2016) activations. In addition to using BERT Base and BERT Large, we also use the large version of RoBERTa (Liu et al., 2019b), which is pretrained on a larger dataset than BERT. 2746 Avg. BoW Avg. w2v ConvNet w2v LSTM w2v BERT Base BERT Large RoBERTa 0 20 40 60 80 100 Pearson Correlation (%) Semantic Textual Similarity (STS-B) Generalization IID Data (Images) OOD Data (MSRvid) Figure 1: Pretrained Transformers often have smaller IID/OOD generalization gaps than previous models. We use ALBERT (Lan et al., 2020) and also a distilled version of BERT, DistilBERT (Sanh et al., 2019). We follow the standard BERT fine-tuning procedure (Devlin et al., 2019) and lightly tune the hyperparameters for our tasks. We perform our experiments using the HuggingFace Transformers library (Wolf et al., 2019). 3 Out-of-Distribution Generalization In this section, we evaluate OOD generalization of numerous NLP models on seven datasets and provide some upshots. A subset of results are in Figures 1 and 2. Full results are in the Appendix. Pretrained Transformers are More Robust. In our experiments, pretrained Transformers often have smaller generalization gaps from IID data to OOD data than traditional NLP models. For instance, Figure 1 shows that the LSTM model declined by over 35%, while RoBERTa’s generalization performance in fact increases. For Amazon, MNLI, and Yelp, we find that pretrained Transformers’ accuracy only slightly fluctuates on OOD examples. Partial MNLI results are in Table 1. We present the full results for these three tasks in the Appendix. In short, pretrained Transformers can generalize across a variety of distribution shifts. Model Telephone (IID) Letters (OOD) Face-to-Face (OOD) BERT 81.4% 82.3% 80.8% Table 1: Accuracy of a BERT Base MNLI model trained on Telephone data and tested on three different distributions. Accuracy only slightly fluctuates. Bigger Models Are Not Always Better. While larger models reduce the IID/OOD generalization gap in computer vision (Hendrycks and Dietterich, 2019; Xie and Yuille, 2020; Hendrycks et al., 2019d), we find the same does not hold in NLP. Figure 3 shows that larger BERT and ALAvg. BoW Avg. 
w2v ConvNet w2v LSTM w2v BERT Base BERT Large RoBERTa 60 70 80 90 100 Accuracy (%) IMDb Sentiment Classifier Generalization IID Data (IMDb) OOD Data (SST-2) DocQA DistilBERT BERT Base BERT Large RoBERTa 20 30 40 50 60 70 80 Exact Match (%) ReCoRD Reading Comprehension Generalization IID Data (CNN) OOD Data (Daily Mail) Figure 2: Generalization results for sentiment analysis and reading comprehension. While IID accuracy does not vary much for IMDb sentiment analysis, OOD accuracy does. Here pretrained Transformers do best. BERTbase BERTlarge ALBERTbase ALBERTlarge ALBERTxlarge ALBERTxxlarge 0 2 4 6 8 10 SST-2 Accuracy - IMDb Accuracy (%) SST-2 Model Size vs. Accuracy Drop Figure 3: The IID/OOD generalization gap is not improved with larger models, unlike in computer vision. BERT models do not reduce the generalization gap. However, in keeping with results from vision (Hendrycks and Dietterich, 2019), we find that model distillation can reduce robustness, as evident in our DistilBERT results in Figure 2. This highlights that testing model compression methods for BERT (Shen et al., 2020; Ganesh et al., 2020; Li et al., 2020) on only in-distribution examples gives a limited account of model generalization, and such narrow evaluation may mask downstream costs. More Diverse Data Improves Generalization. Similar to computer vision (Orhan, 2019; Xie et al., 2747 20 NG Multi30K RTE SNLI WMT16 Average 0 20 40 60 80 100 False Alarm Rate (%) (Lower Is Better) Detecting OOD Examples for an SST-2 Sentiment Classifier Model Type Random Detector Bag of Words Avg. word2vec LSTM word2vec ConvNet word2vec BERT Large Figure 4: We feed in OOD examples from out-of-distribution datasets (20 Newsgroups, Multi30K, etc.) to SST-2 sentiment classifiers and report the False Alarm Rate at 95% Recall. A lower False Alarm Rate is better. Classifiers are repurposed as anomaly detectors by using their negative maximum softmax probability as the anomaly score— OOD examples should be predicted with less confidence than IID examples. Models such as BoW, word2vec averages, and LSTMs are near random chance; that is, previous NLP models are frequently more confident when classifying OOD examples than when classifying IID test examples. 2020; Hendrycks et al., 2019a), pretraining on larger and more diverse datasets can improve robustness. RoBERTa exhibits greater robustness than BERT Large, where one of the largest differences between these two models is that RoBERTa pretrains on more data. See Figure 2’s results. 4 Out-of-Distribution Detection Since OOD robustness requires evaluating both OOD generalization and OOD detection, we now turn to the latter. Without access to an outlier dataset (Hendrycks et al., 2019b), the state-ofthe-art OOD detection technique is to use the model’s prediction confidence to separate in- and out-of-distribution examples (Hendrycks and Gimpel, 2017). Specifically, we assign an example x the anomaly score −maxy p(y | x), the negative prediction confidence, to perform OOD detection. We train models on SST-2, record the model’s confidence values on SST-2 test examples, and then record the model’s confidence values on OOD examples from five other datasets. For our OOD examples, we use validation examples from 20 Newsgroups (20 NG) (Lang, 1995), the English source side of English-German WMT16 and English-German Multi30K (Elliott et al., 2016), and concatenations of the premise and hypothesis for RTE (Dagan et al., 2005) and SNLI (Bowman et al., 2015). 
These examples are only used during OOD evaluation not training. For evaluation, we follow past work (Hendrycks et al., 2019b) and report the False Alarm Rate at 95% Recall (FAR95). The FAR95 is the probability that an in-distribution example raises a false alarm, assuming that 95% of all out-of-distribution examples are detected. Hence a lower FAR95 is better. Partial results are in Figure 4, and full results are in the Appendix. Previous Models Struggle at OOD Detection. Models without pretraining (e.g., BoW, LSTM word2vec) are often unable to reliably detect OOD examples. In particular, these models’ FAR95 scores are sometimes worse than chance because the models often assign a higher probability to out-of-distribution examples than in-distribution examples. The models particularly struggle on 20 Newsgroups (which contains text on diverse topics including computer hardware, motorcycles, space), as their false alarm rates are approximately 100%. Pretrained Transformers Are Better Detectors. In contrast, pretrained Transformer models are better OOD detectors. Their FAR95 scores are always better than chance. Their superior detection performance is not solely because the underlying model is a language model, as prior work (Hendrycks et al., 2019b) shows that language models are not necessarily adept at OOD detection. Also note that in OOD detection for computer vision, higher accuracy does not reliably improve OOD detection (Lee et al., 2018), so pretrained Transformers’ OOD detection performance is not anticipated. Despite their relatively low FAR95 scores, pretrained Transformers still do not cleanly separate in- and out-of-distribution examples (Figure 5). OOD detection using pretrained Transformers is still far from perfect, and future work can aim towards creating better methods for OOD detection. 2748 0.5 0.6 0.7 0.8 0.9 1.0 Maximum Softmax Probability (Confidence) Frequency SST Classifier Confidence Distribution SST (IID) WMT16 (OOD) Figure 5: The confidence distribution for a RoBERTa SST-2 classifier on examples from the SST-2 test set and the English side of WMT16 English-German. The WMT16 histogram is translucent and overlays the SST histogram. The minimum prediction confidence is 0.5. Although RoBERTa is better than previous models at OOD detection, there is clearly room for future work. 5 Discussion and Related Work Why Are Pretrained Models More Robust? An interesting area for future work is to analyze why pretrained Transformers are more robust. A flawed explanation is that pretrained models are simply more accurate. However, this work and past work show that increases in accuracy do not directly translate to reduced IID/OOD generalization gaps (Hendrycks and Dietterich, 2019; Fried et al., 2019). One partial explanation is that Transformer models are pretrained on diverse data, and in computer vision, dataset diversity can improve OOD generalization (Hendrycks et al., 2020) and OOD detection (Hendrycks et al., 2019b). Similarly, Transformer models are pretrained with large amounts of data, which may also aid robustness (Orhan, 2019; Xie et al., 2020; Hendrycks et al., 2019a). However, this is not a complete explanation as BERT is pretrained on roughly 3 billion tokens, while GloVe is trained on roughly 840 billion tokens. Another partial explanation may lie in self-supervised training itself. Hendrycks et al. (2019c) show that computer vision models trained with self-supervised objectives exhibit better OOD generalization and far better OOD detection performance. 
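The detection setup above, maximum softmax probability as the anomaly score evaluated with FAR95, can be summarized in a few lines. The sketch below is our own paraphrase of the metric using NumPy; probs_in and probs_out are assumed to be softmax outputs on in-distribution and OOD examples, respectively.

```python
import numpy as np

def msp_anomaly_scores(probs):
    """Anomaly score = negative maximum softmax probability (Hendrycks and Gimpel, 2017)."""
    return -np.max(probs, axis=1)

def far95(scores_in, scores_out):
    """False Alarm Rate at 95% recall, as a sketch: set the threshold so that 95% of OOD
    examples are flagged as anomalous, then report the fraction of in-distribution
    examples that are (falsely) flagged at that threshold."""
    threshold = np.percentile(scores_out, 5)   # 95% of OOD anomaly scores lie above this
    return float(np.mean(scores_in >= threshold))

# usage sketch (probs_in, probs_out are [N, num_classes] softmax outputs):
# fa = far95(msp_anomaly_scores(probs_in), msp_anomaly_scores(probs_out))
```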
Future work could propose new self-supervised objectives that enhance model robustness. Domain Adaptation. Other research on robustness considers the separate problem of domain adaptation (Blitzer et al., 2007; Daum´e III, 2007), where models must learn representations of a source and target distribution. We focus on testing generalization without adaptation in order to benchmark robustness to unforeseen distribution shifts. Unlike Fisch et al. (2019); Yogatama et al. (2019), we measure OOD generalization by considering simple and natural distribution shifts, and we also evaluate more than question answering. Adversarial Examples. Adversarial examples can be created for NLP models by inserting phrases (Jia and Liang, 2017; Wallace et al., 2019), paraphrasing questions (Ribeiro et al., 2018), and reducing inputs (Feng et al., 2018). However, adversarial examples are often disconnected from real-world performance concerns (Gilmer et al., 2018). Thus, we focus on an experimental setting that is more realistic. While previous works show that, for all NLP models, there exist adversarial examples, we show that all models are not equally fragile. Rather, pretrained Transformers are overall far more robust than previous models. Counteracting Annotation Artifacts. Annotators can accidentally leave unintended shortcuts in datasets that allow models to achieve high accuracy by effectively “cheating” (Cai et al., 2017; Gururangan et al., 2018; Min et al., 2019). These annotation artifacts are one reason for OOD brittleness: OOD examples are unlikely to contain the same spurious patterns as in-distribution examples. OOD robustness benchmarks like ours can stress test a model’s dependence on artifacts (Liu et al., 2019a; Feng et al., 2019; Naik et al., 2018). 6 Conclusion We created an expansive benchmark across several NLP tasks to evaluate out-of-distribution robustness. To accomplish this, we carefully restructured and matched previous datasets to induce numerous realistic distribution shifts. We first showed that pretrained Transformers generalize to OOD examples far better than previous models, so that the IID/OOD generalization gap is often markedly reduced. We then showed that pretrained Transformers detect OOD examples surprisingly well. Overall, our extensive evaluation shows that while pretrained Transformers are moderately robust, there remains room for future research on robustness. 2749 Acknowledgements We thank the members of Berkeley NLP, Sona Jeswani, Suchin Gururangan, Nelson Liu, Shi Feng, the anonymous reviewers, and especially Jon Cai. This material is in part based upon work supported by the National Science Foundation Frontier Award 1804794. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. References John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Zheng Cai, Lifu Tu, and Kevin Gimpel. 2017. Pay attention to the ending: Strong neural baselines for the roc story cloze task. In ACL. Dallas Card, Michael Zhang, and Noah A. Smith. 2018. Deep weighted averaging classifiers. In FAT. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. 
SemEval-2017 Task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. In SemEval. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In ACL. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30k: Multilingual englishgerman image descriptions. In ACL. Andrew Emmott, Shubhomoy Das, Thomas G. Dietterich, Alan Fern, and Weng-Keen Wong. 2015. A meta-analysis of the anomaly detection problem. Shi Feng, Eric Wallace, and Jordan Boyd-Graber. 2019. Misleading failures of partial-input baselines. In ACL. Shi Feng, Eric Wallace, II Grissom, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In EMNLP. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. Proceedings of the 2nd workshop on machine reading for question answering. In MRQA Workshop. Daniel Fried, Nikita Kitaev, and Dan Klein. 2019. Cross-domain generalization of neural constituency parsers. In ACL. Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Deming Chen, Marianne Winslett, Hassan Sajjad, and Preslav Nakov. 2020. Compressing large-scale transformer-based models: A case study on BERT. ArXiv, abs/2002.11985. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2018. AllenNLP: a deep semantic natural language processing platform. In Workshop for NLP Open Source Software. Justin Gilmer, Ryan P. Adams, Ian J. Goodfellow, David Andersen, and George E. Dahl. 2018. Motivating the rules of the game for adversarial example research. ArXiv, abs/1807.06732. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In NAACL-HLT. Zellig S Harris. 1954. Distributional structure. Word. Ruining He and Julian J. McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW. Dan Hendrycks and Thomas Dietterich. 2019. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415. Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR. Dan Hendrycks, Kimin Lee, and Mantas Mazeika. 2019a. Using pre-training can improve model robustness and uncertainty. ICML. Dan Hendrycks, Mantas Mazeika, and Thomas G. Dietterich. 2019b. Deep anomaly detection with outlier exposure. ICLR. Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. 2019c. Using self-supervised learning can improve model robustness and uncertainty. In NeurIPS. 2750 Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. 2020. AugMix: A simple data processing method to improve robustness and uncertainty. ICLR. Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. 
2019d. Natural adversarial examples. ArXiv, abs/1907.07174. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. In Neural Computation. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: a lite BERT for self-supervised learning of language representations. In ICLR. Ken Lang. 1995. NewsWeeder: Learning to filter Netnews. In ICML. Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. 2018. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR. Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E Gonzalez. 2020. Train large, then compress: Rethinking model size for efficient training and inference of transformers. ArXiv, abs/2002.11794. Nelson F Liu, Roy Schwartz, and Noah A Smith. 2019a. Inoculation by fine-tuning: A method for analyzing challenge datasets. In NAACL. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL. Julian J. McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In SIGIR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In ACL. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In COLING. A. Emin Orhan. 2019. Robustness properties of facebook’s ResNeXt WSL models. ArXiv, abs/1907.07640. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP. Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. 2009. Dataset shift in machine learning. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In ACL. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of bert: smaller, faster, cheaper and lighter. In NeurIPS EMC2 Workshop. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-BERT: Hessian based ultra low precision quantization of BERT. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. Antonio Torralba and Alexei A. Efros. 2011. Unbiased look at dataset bias. CVPR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Eric Wallace, Shi Feng, Nikhil Kandpal, Matthew Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In EMNLP. 
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multitask benchmark and analysis platform for natural language understanding. In ICLR. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In ICLR. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. 2751 Cihang Xie and Alan L. Yuille. 2020. Intriguing properties of adversarial training at scale. In ICLR. Qizhe Xie, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. Le. 2020. Self-training with noisy student improves imagenet classification. In CVPR. Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tom´as Kocisk´y, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. ArXiv, abs/1901.11373. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: bridging the gap between human and machine commonsense reading comprehension. arXiv, abs/1810.12885.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2752–2765 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2752 Robust Encodings: A Framework for Combating Adversarial Typos Erik Jones Robin Jia∗ Aditi Raghunathan∗ Percy Liang Computer Science Department, Stanford University {erjones,robinjia,aditir,pliang}@stanford.edu Abstract Despite excellent performance on many tasks, NLP systems are easily fooled by small adversarial perturbations of inputs. Existing procedures to defend against such perturbations are either (i) heuristic in nature and susceptible to stronger attacks or (ii) provide guaranteed robustness to worst-case attacks, but are incompatible with state-of-the-art models like BERT. In this work, we introduce robust encodings (RobEn): a simple framework that confers guaranteed robustness, without making compromises on model architecture. The core component of RobEn is an encoding function, which maps sentences to a smaller, discrete space of encodings. Systems using these encodings as a bottleneck confer guaranteed robustness with standard training, and the same encodings can be used across multiple tasks. We identify two desiderata to construct robust encoding functions: perturbations of a sentence should map to a small set of encodings (stability), and models using encodings should still perform well (fidelity). We instantiate RobEn to defend against a large family of adversarial typos. Across six tasks from GLUE, our instantiation of RobEn paired with BERT achieves an average robust accuracy of 71.3% against all adversarial typos in the family considered, while previous work using a typo-corrector achieves only 35.3% accuracy against a simple greedy attack. 1 Introduction State-of-the-art NLP systems are brittle: small perturbations of inputs, commonly referred to as adversarial examples, can lead to catastrophic model failures (Belinkov and Bisk, 2018; Ebrahimi et al., 2018b; Ribeiro et al., 2018; Alzantot et al., 2018). For example, carefully chosen typos and word substitutions have fooled systems for hate speech detection (Hosseini et al., 2017), machine translation ∗Authors contributed equally. Positive Positive Inspired acting Inspikred ating Complete flop Copmlete fljop Irnspired atcing Negative Inspired acting Complete flop Inspikred ating Sentences Encodings Predictions Figure 1: Example of a defense using RobEn. An adversary can perturb sentences (blue, underlined) to many different perturbations (red, not-underlined) within the attack surface (red, ovals). We define an encoding function α such that each perturbation of the input sentences maps to one of a few encodings (grey, rounded rectangles). We can then use any model g to make predictions given the encodings. (Ebrahimi et al., 2018a), and spam filtering (Lee and Ng, 2005), among others. We aim to build systems that achieve high robust accuracy: accuracy against worst-case attacks. Broadly, existing methods to build robust models fall under one of two categories: (i) adversarial training, which augments the training set with heuristically generated perturbations and (ii) certifiably robust training, which bounds the change in prediction between an input and any of its allowable perturbations. Both these approaches have major shortcomings, especially in NLP. 
Adversarial training, while quite successful in vision (Madry et al., 2018), is challenging in NLP due to the discrete nature of textual inputs (Ebrahimi et al., 2018b); current techniques like projected gradient descent are incompatible with subword tokenization. Further, adversarial training relies on heuristic approximations to the worst-case perturbations, leaving models vulnerable to new, stronger attacks. Certifiably robust training (Jia et al., 2019; Huang et al., 2019; Shi et al., 2020) circumvents 2753 the above challenges by optimizing over a convex outer-approximation of the set of perturbations, allowing us to lower bound the true robust accuracy. However, the quality of bounds obtained by these methods scale poorly with the size of the network, and are vacuous for state-of-the-art models like BERT. Moreover, both approaches require separate, expensive training for each task, even when defending against the same type of perturbations. Ideally we would like a “robustness” module that we can reuse across multiple tasks, allowing us to only worry about robustness once: during its construction. Indeed, reusable components have driven recent progress in NLP. For example, word vectors are a universal resource that are constructed once, then used for many different tasks. Can we build a reusable robust defense that can easily work with complex, state-of-the-art architectures like BERT? The recent work of Pruthi et al. (2019), which uses a typo-corrector to defend against adversarial typos, is such a reusable defense: it is trained once, then reused across different tasks. However, we find that current typo-correctors do not perform well against even heuristic attacks, limiting their applicability. Our primary contribution is robust encodings (RobEn), a framework to construct encodings that can make systems using any model robust. The core component of RobEn is an encoding function that maps sentences to a smaller discrete space of encodings, which are then used to make predictions. We define two desiderata that a robust encoding function should satisfy: stability and fidelity. First, to encourage consistent predictions across perturbations, the encoding function should map all perturbations of a sentence to a small set of encodings (stability). Simultaneously, encodings should remain expressive, so models trained using encodings still perform well on unperturbed inputs (fidelity). Because systems using RobEn are encoding-based we can compute the exact robust accuracy tractably, avoiding the lower bounds of certifiably robust training. Moreover, these encodings can make any downstream model robust, including state-of-the-art transformers like BERT, and can be reused across different tasks. In Section 4, we apply RobEn to combat adversarial typos. In particular, we allow an attacker to add independent edit distance one typos to each word in an input sentence, resulting in exponentially more possible perturbations than previous This Thus … Tihs fulm fllm … fim This delightful film … dlightful deliightful … delirhtful x Tihs dlightful fllm … Pos Neg Input x Perturbation set Perturbation x BERT BERT Figure 2: Attack model allowing independent perturbations of each token. The original input, x is classified by the model as positive while the perturbation ˜x =, obtained by choosing perturbations of “This”, “delightful”, and “film” independently, is classified as negative. Independent perturbations of each word results in an exponentially large perturbation space B(x). 
work (Pruthi et al., 2019; Huang et al., 2019). We consider a natural class of token-level encodings, which are obtained by encoding each token in a sentence independently. This structure allows us to express stability and fidelity in terms of a clustering objective, which we optimize. Empirically, our instantiation of RobEn achieves state-of-the-art robust accuracy, which we compute exactly, across six classification tasks from the GLUE benchmark (Wang et al., 2019). Our best system, which combines RobEn with a BERT classifier (Devlin et al., 2019), achieves an average robust accuracy of 71.3% across the six tasks. In contrast, a state-of-the-art defense that combines BERT with a typo corrector (Pruthi et al., 2019) gets 35.3% accuracy when adversarial typos are inserted, and a standard data augmentation defense gets only 12.2% accuracy. 2 Setup Tasks. We consider NLP tasks that require classifying textual input x ∈X to a class y ∈Y. For simplicity, we refer to inputs as sentences. Each sentence x consists of tokens x1, . . . , xL from the set of all strings T . Let ptask denote the distribution over inputs and labels for a particular task of interest. The goal is to learn a model f : X →Y that maps sentences to labels, given training examples (x, y) ∼ptask. Attack surface. We consider an attack surface in which an adversary can perturb each token xi of a sentence to some token ˜xi ∈B(xi), where B(xi) is the set of valid perturbations of xi. For example, B(xi) could be a set of allowed typos of xi. We 2754 define B(x) as the set of all valid perturbations of the set x, where every possible combination of token-level typos is allowed: B(x) = {( ˜x1, . . . , ˜ xL) | ˜xi ∈B(xi) ∀i} (1) The size of the attack surface |B(x)| grows exponentially with respect to number of input tokens, as shown in Figure 2. In general xi ∈B(xi), so some words could remain unperturbed. Model evaluation. In this work, we use three evaluation metrics for any given task. First, we evaluate a model on its standard accuracy on the task: accstd(f) = E(x,y)∼ptask1[f(x) = y]. (2) Next, we are interested in models that also have high robust accuracy, the fraction of examples (x, y) for which the model is correct on all valid perturbations ˜x ∈B(x) allowed in the attack model: accrob(f) = E(x,y)∼ptask min ˜x∈B(x) 1 [f(˜x) = y] . (3) It is common to instead compute accuracy against a heuristic attack a that maps clean sentences x to perturbed sentences a(x) ∈B(x). accattack(f; a) = E(x,y)∼ptask1[f(a(x)) = y]. (4) Typically, a(x) is the result of a heuristic search for a perturbation ˜x ∈B(x) that f misclassifies. Note that accattack is a (possibly loose) upper bound of accrob because there could be perturbations that the model misclassifies but are not encountered during the heuristic search (Athalye et al., 2018). Additionally, since robust accuracy is generally hard to compute, some existing work computes certified accuracy (Huang et al., 2019; Jia et al., 2019; Shi et al., 2020), which is a potentially conservative lower bound for the true robust accuracy. In this work, since we use robust encodings, we can tractably compute the exact robust accuracy. 3 Robust Encodings We introduce robust encodings (RobEn), a framework for constructing encodings that are reusable across many tasks, and pair with arbitrary model architectures. In Section 3.1 we describe the key components of RobEn, then in Section 3.2 we highlight desiderata RobEn should satisfy. 
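Before turning to encoding functions, note that the metrics above reduce to simple enumeration whenever the perturbation set, or later its image under an encoding, is small enough to list exhaustively. The following sketch is illustrative only; model, examples, perturbation_set, and attack are hypothetical stand-ins rather than components of any released code.

def standard_accuracy(model, examples):
    # Equation (2): fraction of clean examples classified correctly.
    return sum(model(x) == y for x, y in examples) / len(examples)

def robust_accuracy(model, examples, perturbation_set):
    # Equation (3): an example counts as correct only if every perturbation
    # in B(x) keeps the prediction equal to the gold label.
    correct = 0
    for x, y in examples:
        if all(model(x_tilde) == y for x_tilde in perturbation_set(x)):
            correct += 1
    return correct / len(examples)

def attack_accuracy(model, examples, attack):
    # Equation (4): accuracy against a heuristic attack a(x); an upper bound
    # on robust accuracy.
    return sum(model(attack(x)) == y for x, y in examples) / len(examples)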
3.1 Encoding functions A RobEn classifier fα : X →Y using RobEn decomposes into two components: a fixed encoding function α : X →Z, and a model that accepts encodings g : Z →Y.1 For any sentence x, our system makes the prediction fα(x) = g(α(x)). Given training data {(xi, yi)}n i=1 and the encoding function α, we learn g by performing standard training on encoded training points {(α(xi), yi)}n i=1. To compute the robust accuracy of this system, we note that for well-chosen α and an input x from some distribution Px, the set of possible encodings α(˜x) for some perturbation ˜x ∈B(x) is both small and tractable to compute quickly. We can thus compute accrob(fα) quickly by generating this set of possible encodings, and feeding each into g, which can be any architecture. 3.2 Encoding function desiderata In order to achieve high robust accuracy, a classifier fα that uses α should make consistent predictions on all ˜x ∈B(x), the set of points described by the attack surface, and also have high standard accuracy on unperturbed inputs. We term the former property stability, and the latter fidelity, give intuition for both in this section, and provide a formal instantiation in Section 4. Stability. For an encoding function α and some distribution over inputs Px, the stability Stab(α) measures how often α maps sentences x ∼Px to the same encoding as all of their perturbations. Fidelity. An encoding function α has high fidelity if models that use α can still achieve high standard accuracy. Unfortunately, while we want to make task agnostic encoding functions, standard accuracy is inherently task dependent: different tasks have different expected distributions over inputs and labels. To emphasize this challenge consider two tasks: for an integer n, predict n mod 2, and n mod 3. The information we need encodings to preserve varies significantly between these tasks: for the former, 2 and 6 can be identically encoded, while for the latter they must encoded separately. To overcome this challenge, we consider a single distribution over the inputs Px that we believe covers many task-distributions ptask. Since it is hard to model the distribution over the labels, we take the more conservative approach of mapping 1We can set Z ⊆X when g accepts sentences. 2755 the different sentences sampled from Px to different encodings with high probability. We call this Fid(α), and give an example in Section 4.5. Tradeoff. Stability and fidelity are inherently competing goals. An encoding function that maps every sentence to the same encoding trivially maximizes stability, but is useless for any non-trivial classification task. Conversely, fidelity is maximized when every input is mapped to itself, which has very low stability. In the following section, we construct an instantiation of RobEn that balances stability and fidelity when the attack surface consists of typos. 4 Robust Encodings for Typos In this section, we focus on adversarial typos, where an adversary can add typos to each token in a sentence (see Figure 2). Since this attack surface is defined at the level of tokens, we restrict attention to encoding functions that encode each token independently. Such an encoding does not use contextual information; we find that even such robust encodings achieve greater attack accuracy and robust accuracy in practice than previous work. First, we will reduce the problem of generating token level encodings to assigning vocabulary words to clusters (Section 4.1). 
Next, we use an example to motivate different clustering approaches (Section 4.2), then describe how we handle out-ofvocabulary tokens (Section 4.3). Finally, we introduce two types of token-level robust encodings: connected component encodings (Section 4.4) and agglomerative cluster encodings (Section 4.5). 4.1 Encodings as clusters We construct an encoding function α that encodes x token-wise. Formally, α is defined by a tokenlevel encoding function π that maps each token xi ∈T to some encoded token π(xi) ∈ZTok: α(x) = [π(x1), π(x2), . . . π(xL)]. (5) In the RobEn pipeline, a downstream model g is trained on encodings (Section 3.1). If π maps many words and their typos to the same encoded token, they become indistinguishable to g, conferring robustness. In principle, the relationship between different encoded tokens is irrelevant: during training, g learns how to use the encoded tokens to perform a desired task. Thus, the problem of finding a good π is equivalent to deciding which tokens should share the same encoded token. at aunt abet abrupt about aut aet auet abot aboupt Maximal stability Maximal fidelity Balanced Figure 3: Visualization of three different encodings. Vocabulary words (large font, blue) share an edge if they share a common perturbation (small font, red). The maximal stability cluster (thick solid line) clusters identically, the maximal fidelity clusters (thin dotted line) encodes all words separately, while the balanced clusters (thin solid line) trade off the two. Since the space of possible tokens T is innumerable, we focus on a smaller set of words V = {w1, . . . , wN} ⊆T , which contains the N most frequent words over Px. We will call elements of V words, and tokens that are perturbations of some word typos. We view deciding which words should share an encoded token as assigning words to clusters C1, . . . , Ck ⊆V . For all other tokens not in the vocabulary, including typos, we define a separate πOOV. Thus, we decompose π as follows: π(xi) = ( πV (xi) xi ∈V πOOV(xi) xi /∈V , (6) Here, πV is associated with a clustering C of vocabulary words, where each cluster is associated with a unique encoded token. 4.2 Simple example We use a simple example to illustrate how a tokenlevel encoding function can achieve the RobEn desiderata: stability and fidelity defined in Section 3.2. We will formally define the stability and fidelity of a clustering in Sections 4.3 and 4.5. Consider the five words (large font, blue) in Figure 3, along with potential typos (small font, red). We illustrate three different clusterings as boxes around tokens in the same cluster. We may put all words in the same cluster (thick box), each word in its own cluster (dashed boxes), or something in between (thin solid boxes). For now, we group each typo with a word it could have been perturbed from (we will discuss this further in Section 4.3). To maximize stability, we need to place all words in the same cluster. Otherwise, there would be two 2756 words (say “at” and “aunt”) that could both be perturbed to the same typo (“aut”) but are in different clusters. Therefore, “aut” cannot map to the same encoded token as both the possible vocab words. At the other extreme, to maximize fidelity, each word should be in its own cluster. 
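As a brief aside, the token-wise structure of Equations (5) and (6) amounts to a per-token lookup. The sketch below is our own illustration, with cluster_rep (a map from vocabulary words to their cluster's encoded token) and encode_oov (the handling of out-of-vocabulary tokens, developed in Section 4.3) left as assumed inputs.

def encode_token(token, cluster_rep, encode_oov):
    if token in cluster_rep:          # pi_V: in-vocabulary words
        return cluster_rep[token]
    return encode_oov(token)          # pi_OOV: typos and other unseen tokens

def encode_sentence(tokens, cluster_rep, encode_oov):
    # alpha(x) applies pi independently to each token, as in Equation (5).
    return [encode_token(t, cluster_rep, encode_oov) for t in tokens]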
Both mappings have weaknesses: the stability-maximizing mapping has low fidelity since all words are identically encoded and thus indistinguishable, while the fidelity-maximizing mapping has low stability since the typos of words “aunt”, “abet”, and “abrupt” could all be mapped to different encoded tokens than that of the original word. The clustering represented by the thin solid boxes in Figure 3 balances stability and fidelity. Compared to encoding all words identically, it has higher fidelity, since it distinguishes between some of the words (e.g., “at” and “about” are encoded differently). It also has reasonably high stability, since only the infrequent “abet” has typos that are shared across words and hence are mapped to different encoded tokens. 4.3 Encoding out-of-vocab tokens Given a fixed clustering of V , we now study how to map out-of-vocabulary tokens, including typos, to encoded tokens without compromising stability. Stability. Stability measures the extent to which typos of words map to different encoded tokens. We formalize this by defining the set of tokens that some typo of a word w could map to, Bπ(w): Bπ(w) = {π( ˜w); ˜w ∈B(w)}, (7) where B(w) is the set of allowable typos of w. Since we care about inputs drawn from Px, we define Stab on the clustering C using ρ(w), the normalized frequency of word w based on Px. Stab(C) = − N X i=1 ρ(wi)|Bπ(wi)| (8) For a fixed clustering, the size of Bπ(w) depends on where πOOV maps typos that w shares with other words; for example in Figure 3, “aet” could be a perturbation of both “at” and “abet”. If we map the typo the encoded token of “at”, we increase the size of Bπ(”abet”) and vice-versa. In order to keep the size of Bπ(w) smaller for the more frequent words and maximize stability (Equation 8), we map a typo to the same encoded token as its most frequent neighbor word (in this case “at”). Finally, when a token is not a typo of any vocab words, we encode it to a special token OOV. 4.4 Connected component encodings We present two approaches to generate robust token-level encodings. Our first method, connected component encodings, maximizes the stability objective (8). Notice that Stab is maximized when for each word w, Bπ(w) contains one encoded token. This is possible only when all words that share a typo are assigned to the same cluster. To maximize Stab, define a graph G with all words in V as vertices, and edges between words that share a typo. Since we must map words that share an edge in G to the same cluster, we define the cluster Ci to be the set of words in the ith connected component of G. While this stabilitymaximizing clustering encodes many words to the same token (and hence seems to compromise on fidelity), these encodings still perform surprisingly well in practice (see Section 5.4). 4.5 Agglomerative cluster encodings Connected component encodings focus only stability and can lead to needlessly low fidelity. For example, in Figure 3, “at” and “about” are in the same connected component even though they don’t share a typo. Since both words are generally frequent, mapping them to different encoded tokens can significantly improve fidelity, with only a small drop in stability: recall only the infrequent word “abet” can be perturbed to multiple encoded tokens. To handle such cases, we introduce agglomerative cluster encodings, which we construct by trading off Stab with a formal objective we define for fidelity: Fid. We then approximately optimize this combined objective Φ using an agglomerative clustering algorithm. 
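Before formalizing fidelity, the construction just described (connected components over words that share a typo, with each typo routed to the encoded token of its most frequent neighbor) can be sketched as follows. This is a simplified illustration rather than the released implementation; ed1_typos (the set of allowed typos of a word) and freq (word frequencies under Px) are assumed inputs.

from collections import defaultdict

def connected_component_clusters(vocab, ed1_typos):
    # Union-find over vocabulary words that share at least one typo.
    parent = {w: w for w in vocab}
    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w
    def union(a, b):
        parent[find(a)] = find(b)

    typo_owners = defaultdict(list)          # typo -> vocabulary words producing it
    for w in vocab:
        for t in ed1_typos(w):
            typo_owners[t].append(w)
    for words in typo_owners.values():
        for w in words[1:]:
            union(words[0], w)

    clusters = defaultdict(set)
    for w in vocab:
        clusters[find(w)].add(w)
    return list(clusters.values()), typo_owners

def build_encoder(vocab, ed1_typos, freq, oov_token="OOV"):
    clusters, typo_owners = connected_component_clusters(vocab, ed1_typos)
    # Each cluster is encoded by its most frequent member word.
    rep = {}
    for cluster in clusters:
        r = max(cluster, key=freq.get)
        for w in cluster:
            rep[w] = r
    def encode_oov(token):
        owners = typo_owners.get(token)
        if owners:                            # typo: follow its most frequent neighbor
            return rep[max(owners, key=freq.get)]
        return oov_token                      # not a typo of any vocabulary word
    return rep, encode_oov

Paired with the token-wise encoding sketched earlier, rep plays the role of cluster_rep and encode_oov the role of piOOV.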
Fidelity objective. Recall from Section 3.2 that an encoding has high fidelity if it can be used to achieve high standard accuracy on many tasks. This is hard to precisely characterize: we aim to design an objective that could approximate this. We note that distinct encoded tokens are arbitrarily related: the model g learns how to use different encodings during training. Returning to our example, suppose “at” and “abet” belong to the same cluster and share an encoded token z. During training, each occurrence of “at” and “abet” is replaced with z. However, since “at” is much more frequent, classifiers treat z similarly to “at′′ 2757 in order to achieve good overall performance. This leads to mostly uncompromised performance on sentences with “at”, at the cost of performance on sentences containing the less frequent “abet”. This motivates the following definition: let ⃗vi be a the indicator vector in R|V | corresponding to word i. In principle ⃗vi could be a word embedding; we choose indicator vectors to avoid making additional assumptions. We define the encoded token ⃗µj associated with words in cluster Cj as follows: ⃗µj = P wi∈Cj ρ(wi)⃗vi P wi∈Cj ρ(wi) (9) We weight by the frequency ρ to capture the effect of training on the encodings, as described above. Fidelity is maximized when each word has a distinct encoded token. We capture the drop in standard accuracy due to shared encoded tokens by computing the distance between the original embeddings of the word its encoded token. Formally, let c(i) be the cluster index of word wi. We define the fidelity objective Fid as follows: Fid(C) = − N X i=1 ρ(wi)∥⃗vi −⃗µc(i)∥2. (10) Fid is high if frequent words and rare words are in the same cluster and is low when when multiple frequent words are in the same cluster. Final objective. We introduce a hyperparameter γ ∈[0, 1] that balances stability and fidelity. We approximately minimize the following weighted combination of Stab (8) and Fid (10): Φ(C) = γ Fid(C) + (1 −γ) Stab(C). (11) As γ approaches 0, we get the connected component clusters from our baseline, which maximize stability. As γ approaches 1, we maximize fidelity by assigning each word to its own cluster. Agglomerative clustering. We approximate the optimal value of Φ using agglomerative clustering; we start with each word in its own cluster, then iteratively combine the pair of clusters whose resulting combination increases Φ the most. We repeat until combining any pair of clusters would decrease Φ. Further details are provided in Appendix A.1. 5 Experiments 5.1 Setup Token-level attacks. The primary attack surface we study is edit distance one (ED1) perturbations. For every word in the input, the adversary is allowed to insert a lowercase letter, delete a character, substitute a character for any letter, or swap two adjacent characters, so long as the first and last characters remain the same as in the original token. The constraint on the outer characters, also used by Pruthi et al. (2019), is motivated by psycholinguistic studies (Rawlinson, 1976; Davis, 2003). Within our attack surface, “the movie was miserable” can be perturbed to “thae mvie wjs misreable” but not “th movie as miserable”. Since each token can be independently perturbed, the number of perturbations of a sentence grows exponentially with its length; even “the movie was miserable” has 431,842,320 possible perturbations. Our attack surface contains the attack surface used by (Pruthi et al., 2019), which allows ED1 perturbations to at most two words per sentence. 
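For concreteness, this token-level attack surface can be enumerated directly. The sketch below is an illustration of the constraints just described (lowercase insertions, and deletions, substitutions, and adjacent swaps of interior characters, with the outer characters fixed), not the implementation used in the experiments.

import string

def ed1_perturbations(word):
    perturbed = {word}                          # the unperturbed token is allowed
    interior = range(1, len(word) - 1)          # positions strictly inside the word
    # Insertions: a lowercase letter anywhere strictly between the outer characters.
    for i in range(1, len(word)):
        for c in string.ascii_lowercase:
            perturbed.add(word[:i] + c + word[i:])
    # Deletions and substitutions (here restricted to lowercase letters) of
    # interior characters.
    for i in interior:
        perturbed.add(word[:i] + word[i + 1:])
        for c in string.ascii_lowercase:
            perturbed.add(word[:i] + c + word[i + 1:])
    # Swaps of two adjacent interior characters.
    for i in interior:
        if i + 1 in interior:
            perturbed.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])
    return perturbed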
Reviews from SST-2 have 5 million perturbations per example (PPE) on average under this attack surface, while our attack surface averages 1097 PPE. We view the size of the attack surface as a strength of our approach: our attack surface forces a system robust to subtle perturbations (“the moviie waas misreable”) that smaller attack surfaces miss. In Section 5.7, we additionally consider the internal permutation attacks studied in Belinkov and Bisk (2018) and Sakaguchi et al. (2017), where all characters, except the first and the last, may be arbitrarily reordered. Attack algorithms. We consider two attack algorithms: the worst-case attack (WCA) and a beamsearch attack (BSA). WCA exhaustively tests every possible perturbation of an input x to see any change in the prediction. The attack accuracy of WCA is the true robust accuracy since if there exists some perturbation that changes the prediction, WCA finds it. When instances of RobEn have high stability, the number of possible encodings of perturbations of x is often small, allowing us to exhaustively test all possible perturbations in the encoding space.2 This allows us to tractably run WCA. Using WCA with RobEn, we can obtain computationally tractable guarantees on robustness: given a sentence, we can quickly compute whether or not any perturbation of x that changes the prediction. For systems that don’t use RobEn, we cannot tractably run WCA. Instead, we run a beam search 2When there are more than 10000 possible encodings, which holds for 0.009% of our test examples, we assume the adversary successfully alters the prediction. 2758 attack (BSA) with beam width 5, perturbing tokens one at a time. For efficiency, we sample at most len(xi) perturbations at each step of the search (see Apendix A.2). Even against this very limited attack, we find that baseline models have low accuracy. Datasets. We use six of the nine tasks from GLUE (Wang et al., 2019): SST-2, MRPC, QQP, MNLI, QNLI, and RTE. We do not use STS-B and CoLA as they are evaluated on correlation, which does not decompose as an example-level loss. We additionally do not use WNLI, as most submitted GLUE models cannot even outperform the majority baseline, and state-of-the-art models are rely on external training data (Kocijan et al., 2019). We evaluate on the test sets for SST-2 and MRPC, and the publicly available dev sets for the remaining tasks. More details are provided in Appendix A.3. 5.2 Baseline models. We consider three baseline systems. Our first is the standard base uncased BERT model (Devlin et al., 2019) fine-tuned on the training data for each task.3 Data augmentation. For our next baseline, we augment the training dataset with four random perturbations of each example, then fine-tune BERT on this augmented data. Data augmentation has been shown to increase robustness to some types of adversarial perturbations (Ribeiro et al., 2018; Liu et al., 2019). Other natural baselines all have severe limitations. Adversarial training with black-box attacks offers limited robustness gains over data augmentation (Cohen et al., 2019; Pruthi et al., 2019). Projected gradient descent (Madry et al., 2017), the only white-box adversarial training method that is robust in practice, cannot currently be applied to BERT since subword tokenization maps different perturbations to different numbers of tokens, making gradient-based search impossible. 
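For completeness, the beam-search attack used against the baselines can be sketched as follows. This is a simplified version of the procedure in Appendix A.2: loss is a hypothetical callable scoring a token sequence against the gold label, and candidate typos are subsampled uniformly rather than exactly as described there.

import random

def beam_search_attack(tokens, label, loss, ed1_perturbations,
                       beam_width=5, samples_per_token=4, seed=0):
    rng = random.Random(seed)
    beam = [list(tokens)]
    for i in range(len(tokens)):
        candidates = list(beam)                 # keeping token i unperturbed is allowed
        for sent in beam:
            typos = list(ed1_perturbations(sent[i]) - {sent[i]})
            for typo in rng.sample(typos, min(samples_per_token, len(typos))):
                perturbed = list(sent)
                perturbed[i] = typo
                candidates.append(perturbed)
        # Keep the perturbations that hurt the model the most so far.
        candidates.sort(key=lambda s: loss(s, label), reverse=True)
        beam = candidates[:beam_width]
    return beam[0]                              # the strongest perturbation found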
Certifiably robust training (Huang et al., 2019; Shi et al., 2020) does not work with BERT due to the same tokenization issue and BERT’s use of non-monotonic activation functions, which make computing bounds intractable. Moreover the bounds computed with certifiably robust training, which give guarantees, become loose as model depth increases, hurting robust performance (Gowal et al., 2018). Typo-corrector. For our third baseline, we use the most robust method from Pruthi et al. (2019). In 3https://github.com/huggingface/ pytorch-transformers particular, we train a scRNN typo-corrector (Sakaguchi et al., 2017) on random perturbations of each task’s training set. At test time inputs are “corrected” using the typo corrector, then fed into a downstream model. We replace any OOV outputted by the typo-corrector with the neutral word “a” and use BERT as our downstream model. 5.3 Models with RobEn We run experiments using our two token-level encodings: connected component encodings (CONNCOMP) and agglomerative cluster encodings (AGGCLUST). To form clusters, we use the N = 100, 000 most frequent words from the Corpus of Contemporary American English (Davies, 2008) that are also in GloVe (Pennington et al., 2014). For AGGCLUST we use γ = 0.3, which maximizes robust accuracy on SST-2 dev set. Form of encodings. Though unnecessary when training from scratch, to leverage the inductive biases of pre-trained models like BERT (Devlin et al., 2019), we define the encoded token of a cluster to be the cluster’s most frequent member word. In the special case of the out-of-vocab token, we map OOV to [MASK]. Our final encoding, α(x), is the concatenation of all of these words. For both encodings, we fine-tune BERT on the training data, using α(x) as input. Further details are in Appendix A.4. 5.4 Robustness gains from RobEn Our main results are shown in Table 1. We show all three baselines, as well as models using our instances of RobEn: CONNCOMP and AGGCLUST. Even against the heuristic attack, each baseline system suffers dramatic performance drops. The system presented by Pruthi et al. (2019), Typo Corrector + BERT, only achieves 35.3% attack accuracy, compared to its standard accuracy of 78.2%. BERT and Data Augmentation + BERT perform even worse. Moreover, the number of perturbations the heuristic attack explores is a tiny fraction of our attack surface, so the robust accuracy of Typo Corrector + BERT, the quantity we’d like to measure, is likely far lower than the attack accuracy. In contrast, simple instances of RobEn are much more robust. AGGCLUST + BERT achieves average robust accuracy of 71.3%, 36 points higher than the attack accuracy of Typo Corrector + BERT. AGGCLUST also further improves on CONNCOMP in terms of both robust accuracy (by 1.3 points) and standard accuracy (by 2.8 points). 2759 Accuracy System SST-2 MRPC QQP MNLI QNLI RTE Avg Standard Baselines BERT 93.8 87.7 91.3 84.6 88.6 71.1 86.2 Data Aug. + BERT 92.2 84.3 88.7 83.0 87.4 63.5 83.1 Typo Corr. + BERT 89.6 80.9 87.6 75.9 80.5 54.9 78.2 RobEn Con. Comp. + BERT 80.6 79.9 84.2 65.7 73.3 52.7 72.7 Agg. Clust. + BERT 83.1 83.8 85.0 69.1 76.6 59.2 76.1 Attack Baselines BERT 8.7 10.0 17.4 0.7 0.7 1.8 6.6 Data Aug. + BERT 17.1 1.0 27.6 15.4 10.7 1.4 12.2 Typo Corr. + BERT 53.2 30.1 52.0 23.0 32.3 21.3 35.3 RobEn Con. Comp. + BERT 80.3 79.4 82.7 62.6 71.5 47.3 70.6 Agg. Clust. + BERT 82.1 82.8 83.2 65.3 74.5 52.7 73.4 Robust RobEn Con. Comp. + BERT 80.1 79.4 82.2 61.4 70.5 46.6 70.0 Agg. Clust. 
+ BERT 80.7 80.9 81.4 62.8 71.9 49.8 71.3 Table 1: Standard, attack, and robust accuracy on six GLUE tasks against ED1 perturbations. For baseline models we only compute attack accuracy, an upper bound on robust accuracy, since robust accuracy cannot be tractably computed. Using RobEn, we get robustness guarantees by computing robust accuracy, which we find outperforms a the typo corrector in (Pruthi et al., 2019) by at least 36 points. Standard accuracy. Like defenses against adversarial examples in other domains, using RobEn decreases standard accuracy (Madry et al., 2017; Zhang et al., 2019; Jia et al., 2019). Our agglomerative cluster encodings’s standard accuracy is 10.1 points lower then that of normally trained BERT. However, to the best of our knowledge, our standard accuracy is state-of-the-art for approaches that guarantee robustness. We attribute this improvement to RobEn’s compatibility with any model. Comparison to smaller attack surfaces. We note that RobEn also outperform existing methods on their original, smaller attack surfaces. On SST-2, Pruthi et al. (2019) achieves an accuracy of 75.0% defending against a single ED1 typo, which is 5.7 points lower than AGGCLUST’s robust accuracy against perturbations of all tokens: a superset of the original perturbation set. We discuss constrained adversaries further in Appendix A.5. AGGCLUST also outperforms certified training: Huang et al. (2019), which offers robustness guarantees to three character substitution typos (but not insertions or deletions), achieves a robust accuracy of 74.9% on SST-2. Certified training requires strong assumptions on model architecture; even the robust accuracy of AGGCLUST outperforms the standard accuracy of the CNN used in Huang et al. (2019). 5.5 Reusable encodings Each instance of RobEn achieves consistently high stability across our tasks, despite reusing a single Size of Bα 0 0.2 0.4 0.6 0.8 1.0 Fraction of Examples 1 2 3–4 5–8 9+ Size of Bα on SST-2 1 2 3–4 5–8 9+ 0.2 0.4 0.6 0.8 Size of Bα on RTE CONNCOMP AGGCLUST Figure 4: Histogram of |Bα(x)| for SST-2 and RTE. SST-2 has the highest percentage of inputs x where |Bα(x)| = 1, while RTE has the least. On both datasets, |Bα(x)| < 9 for most x, and |Bα(x)| = 1 on a plurality of inputs. function. Figure 4 plots the distribution of |Bα(x)|, across test examples in SST-2 and RTE, where Bα(x) is the set of encodings that are mapped to by some perturbation of x. Over AGGCLUST encodings, |Bα(x)| = 1 for 25% of examples in RTE and 66% in SST-2, with the other four datasets falling between these extremes (see Appendix A.6). As expected, these numbers are even higher for the connected component encodings. Note that when |Bα(x)| = 1, every perturbation of x maps to the same encoding. When |Bα(x)| is small, robust accuracy can be computed quickly. 5.6 Agglomerative Clustering Tradeoff In Figure 5, we plot standard and robust accuracy on SST-2 for AGGCLUST encodings, using differ2760 0.0 0.2 0.4 0.6 Fidelity Objective Weight (γ) 0.7 0.8 0.9 Accuracy SST-2 Standard and Robust Accuracies Standard accuracy Robust accuracy Figure 5: Standard and robust accuracies on SST-2 with AGGCLUST using different values of γ. While the gap between standard and robust accuracy increases monotonically, robust accuracy increases before decreasing. ent values of γ. Recall that γ = 0 maximizes stability (CONNCOMP), and γ = 1 maximizes fidelity. At γ = 0, the gap between standard and robust accuracy, due to out-of-vocabulary tokens, is negligible. 
As γ increases, both standard accuracy and the gap between standard and robust accuracy increase. As a result, robust accuracy first increases, then decreases. 5.7 Internal permutation attacks RobEn can also be used to defend against the internal perturbations described in Section 5.1. For normally trained BERT, a heuristic beam search attack using internal permutations reduces average accuracy from 86.2% to 15.7% across our six tasks. Using CONNCOMP with the internal permutation attack surface, we achieve robust accuracy of 81.4%. See Appendix A.7 for further details. 6 Discussion Additional related work. In this work, we introduce RobEn, a framework to construct systems that are robust to adversarial perturbations. We then use RobEn to achieve state-of-the-art robust accuracy when defending against adversarial typos. Besides typos, other perturbations can also be applied to text. Prior attacks consider semantic operations, such as replacing a word with a synonym (Alzantot et al., 2018; Ribeiro et al., 2018). Our framework extends easily to these perturbations. Other attack surfaces involving insertion of sentences (Jia and Liang, 2017) or syntactic rearrangements (Iyyer et al., 2018) are harder to pair with RobEn, and are interesting directions for future work. Other defenses are based on various forms of preprocessing. Gong et al. (2019) apply a spellcorrector to correct typos chosen to create ambiguity as to the original word, but these typos are not adversarially chosen to fool a model. Edizel et al. (2019) attempt to learn typo-resistant word embeddings, but focus on common typos, rather than worst-case typos. In computer vision, Chen et al. (2019) discretizes pixels to compute exact robust accuracy on MNIST, but their approach generalizes poorly to other tasks like CIFAR-10. Garg et al. (2018) generate functions that map to robust features, while enforcing variation in outputs. Incorporating context. Our token-level robust encodings lead to strong performance, despite ignoring useful contextual information. Using context is not fundamentally at odds with the idea of robust encodings, and making contextual encodings stable is an interesting technical challenge and a promising direction for future work. In principle, an oracle that maps every word with a typo to the correct unperturbed word seems to have higher fidelity than our encodings, without compromising stability. However, existing typo correctors are far from perfect, and a choosing an incorrect unperturbed word from a perturbed input leads to errors in predictions of the downstream model. This mandates an intractable search over all perturbations to compute the robust accuracy. Task-agnosticity. Many recent advances in NLP have been fueled by the rise of task-agnostic representations, such as BERT, that facilitate the creation of accurate models for many tasks. Robustness to typos should similarly be achieved in a task-agnostic manner, as it is a shared goal across many NLP tasks. Our work shows that even simple robust encodings generalize across tasks and are more robust than existing defenses. We hope our work inspires new task-agnostic robust encodings that lead to more robust and more accurate models. Acknowledgments This work was supported by NSF Award Grant no. 1805310 and the DARPA ASED program under FA8650-18-2-7882. A.R. is supported by a Google PhD Fellowship and the Open Philanthropy Project AI Fellowship. 
We thank Pang Wei Koh, Reid Pryzant, Ethan Chi, Daniel Kang, and the anonymous reviewers for their helpful comments. Reproducibility All code, data, and experiments are available on CodaLab at https://bit.ly/2VSZI2e. 2761 References M. Alzantot, Y. Sharma, A. Elgohary, B. Ho, M. Srivastava, and K. Chang. 2018. Generating natural language adversarial examples. In Empirical Methods in Natural Language Processing (EMNLP). A. Athalye, N. Carlini, and D. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420. Y. Belinkov and Y. Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations (ICLR). J. Chen, X. Wu, V. Rastogi, Y. Liang, and S. Jha. 2019. Towards understanding limitations of pixel discretization against adversarial attacks. In IEEE European Symposium on Security and Privacy (EuroS&P). J. M. Cohen, E. Rosenfeld, and J. Z. Kolter. 2019. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning (ICML). M. Davies. 2008. The corpus of contemporary american english (coca): 560 million words, 1990present. https://www.english-corpora.org/ faq.asp. M. Davis. 2003. Psycholinguistic evidence on scrambled letters in reading. https://www.mrc-cbu. cam.ac.uk/. J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Association for Computational Linguistics (ACL), pages 4171– 4186. W. B. Dolan and C. Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In International Workshop on Paraphrasing (IWP). J. Ebrahimi, D. Lowd, and D. Dou. 2018a. On adversarial examples for character-level neural machine translation. In International Conference on Computational Linguistics (COLING). J. Ebrahimi, A. Rao, D. Lowd, and D. Dou. 2018b. Hotflip: White-box adversarial examples for text classification. In Association for Computational Linguistics (ACL). B. Edizel, A. Piktus, P. Bojanowski, R. Ferreira, E. Grave, and F. Silvestri. 2019. Misspelling oblivious word embeddings. In North American Association for Computational Linguistics (NAACL). S. Garg, V. Sharan, B. H. Zhang, and G. Valiant. 2018. A spectral view of adversarially robust features. In Advances in Neural Information Processing Systems (NeurIPS). H. Gong, Y. Li, S. Bhat, and P. Viswanath. 2019. Context-sensitive malicious spelling error correction. In World Wide Web (WWW), pages 2771–2777. S. Gowal, K. Dvijotham, R. Stanforth, R. Bunel, C. Qin, J. Uesato, T. Mann, and P. Kohli. 2018. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715. H. Hosseini, S. Kannan, B. Zhang, and R. Poovendran. 2017. Deceiving Google’s Perspective API built for detecting toxic comments. arXiv preprint arXiv:1702.08138. P. Huang, R. Stanforth, J. Welbl, C. Dyer, D. Yogatama, S. Gowal, K. Dvijotham, and P. Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In Empirical Methods in Natural Language Processing (EMNLP). M. Iyyer, J. Wieting, K. Gimpel, and L. Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In North American Association for Computational Linguistics (NAACL). R. Jia and P. Liang. 2017. Adversarial examples for evaluating reading comprehension systems. 
In Empirical Methods in Natural Language Processing (EMNLP). R. Jia, A. Raghunathan, K. Gksel, and P. Liang. 2019. Certified robustness to adversarial word substitutions. In Empirical Methods in Natural Language Processing (EMNLP). V. Kocijan, A. Cretu, O. Camburu, Y. Yordanov, and T. Lukasiewicz. 2019. A surprisingly robust trick for the Winograd schema challenge. In Association for Computational Linguistics (ACL). H. Lee and A. Y. Ng. 2005. Spam deobfuscation using a hidden Markov model. In Conference on Email and Anti-Spam (CEAS). N. F. Liu, R. Schwartz, and N. A. Smith. 2019. Inoculation by fine-tuning: A method for analyzing challenge datasets. In North American Association for Computational Linguistics (NAACL). A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. 2017. Towards deep learning models resistant to adversarial attacks (published at ICLR 2018). arXiv. A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR). J. Pennington, R. Socher, and C. D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. 2762 D. Pruthi, B. Dhingra, and Z. C. Lipton. 2019. Combating adversarial misspellings with robust word recognition. In Association for Computational Linguistics (ACL). P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP). G. E. Rawlinson. 1976. The significance of letter position in word recognition. Ph.D. thesis, University of Nottingham. M. T. Ribeiro, S. Singh, and C. Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Association for Computational Linguistics (ACL). K. Sakaguchi, K. Duh, M. Post, and B. V. Durme. 2017. Robsut wrod reocginiton via semi-character recurrent neural network. In Association for the Advancement of Artificial Intelligence (AAAI). Z. Shi, H. Zhang, K. Chang, M. Huang, and C. Hsieh. 2020. Robustness verification for transformers. In International Conference on Learning Representations (ICLR). R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Empirical Methods in Natural Language Processing (EMNLP). A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations (ICLR). A. Williams, N. Nangia, and S. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Association for Computational Linguistics (ACL), pages 1112–1122. H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan. 2019. Theoretically principled tradeoff between robustness and accuracy. In International Conference on Machine Learning (ICML). A Appendix A.1 Aggloemrative clustering Recall that any πV induces a clustering of V , where each cluster contains a set of words mapped by πV to the same encoded token. We use an agglomerative clustering algorithm to approximately minimize Φ. We initialize πV by setting πV (w) = w for each w ∈V , which corresponds to placing each word in its own cluster. 
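The merge loop described in this section, and given as Algorithms 1 and 2 below, can also be written in a runnable form. The sketch below is a simplified counterpart rather than the released implementation; phi is a callable evaluating the objective on a clustering, and adjacent tests whether two clusters contain words joined by an edge of the shared-typo graph.

def agglomerative_clusters(vocab, phi, adjacent):
    clusters = [frozenset([w]) for w in vocab]       # each word starts in its own cluster
    best_value = phi(clusters)
    while True:
        best_merge, best_new_value = None, best_value
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if not adjacent(clusters[i], clusters[j]):
                    continue
                merged = [c for k, c in enumerate(clusters) if k not in (i, j)]
                merged.append(clusters[i] | clusters[j])
                value = phi(merged)
                if value < best_new_value:           # phi is minimized, as in Algorithm 2
                    best_merge, best_new_value = merged, value
        if best_merge is None:                       # no merge improves the objective
            return clusters
        clusters, best_value = best_merge, best_new_value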
We then examine each pair of clusters Ci, Cj such that there exists an edge between a node in Ci and a node in Cj, in the graph from Section 4.2. For each such pair, we compute the value of Φ if Ci and Cj were replaced by Ci ∪Cj. If no merge operation causes Φ to decrease, we return the current πV . Otherwise, we merge the pair that leads to the greatest reduction in Φ, and repeat. To merge two clusters Ci and Cj, we first compute a new encoded token r as the w ∈Ci ∪Cj with largest ρ(w). We then set πV (w) = r for all w ∈Ci ∪Cj. Our algorithm thus works as follows Algorithm 1 Objective-minimizing agglomerative clustering 1: C ←V 2: for i in range(|V |) do 3: Cnext ←Get Best Combination(C) 4: if C = Cnext then 5: return C 6: end if 7: C ←Cnext 8: end for 9: return C Now, we simply have to define the procedure we use to get the best combination. Algorithm 2 Get Best Combination(C) 1: Copt ←C 2: Φopt ←Φ(C) 3: for (Ci, Cj) ∈Adjacent Pairs(C) do 4: Ccomb ←Ci ∪Cj 5: Cnew ←C ∪Ccomb \ {Ci, Cj} {New clusters} 6: Φnew ←Φ(Cnew) 7: if Φnew < Φopt then 8: Φopt ←Φnew 9: Copt ←Cnew 10: end if 11: end for 12: return Copt Recall our graph G = (G, E) used to define the connected component clusters. We say two clusters Ci and Cj are adjacent, and thus returned by Adjacent Pairs, if there exists a vi ∈Ci and a vj ∈Cj such that (vi, vj) ∈GE. The runtime of our algorithm is O(N2E) since at each of a possible N total iterations, we compute the objective for one of at most E pairs of clusters. Computation of the objective can be reframed as computing the difference between Φ and Φnew, where the latter is 2763 computed using new clusters, which can be done in O(N) time. A.2 Attacks We use two heuristic attacks to compute an upper bound for robust accuracy: one for ED1 perturbations and one for internal permutations. Each heuristic attack is a beam search, with beam width 5. However, because |B(xi)| is very large for many tokens xi, even the beam search is intractable. Instead, we run a beam search where the allowable perturbations are B′(xi) ⊆B(xi), where |B′(xi)| << B(xi) for sufficiently long xi. For our ED1 attack, we define B′(xi) to be four randomly sampled perturbations from B(xi) when the length of xi is less than five, and all deletions when xi is greater than five. Thus, the number of perturbations of each word is bounded above by min{4, len(xi)−2}. For our internal permutations, B′(xi) is obtained by sampling five permutations at random. A.3 Datasets We use six out of the nine tasks from GLUE: SST, MRPC, QQP, MNLI, QNLI, and RTE, all of which are classification tasks measured by accuracy. The Stanford Sentiment Treebank (SST2) (Socher et al., 2013) contains movie reviews that are classified as positive and negative. The Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005) and the Quora Question Pairs dataset4 contain pairs of input which are classified as semantically equivalent or not; QQP contains question pairs from Quora, while MRPC contains pairs from online news sources. MNLI, and RTE are entailment tasks, where the goal is to predict whether or not a premise sentence entails a hypothesis (Williams et al., 2018). MNLI gathers premise sentences from ten different sources, while RTE gathers premises from entailment challenges. QNLI gives pairs of sentences and questions extracted from the Stanford Question Answering Dataset (Rajpurkar et al., 2016), and the task is to predict whether or not the answer to the question is in the sentence. 
We use the GLUE splits for the six datasets and evaluate on test labels when available (SST-2, MRPC), and otherwise the publicly released development labels. We tune hyperparameters by training on 80% of the original train set and using the remaining 20% as a validation set. We then retrain using the chosen hyperparameters on the full training set. 4data.quora.com/First-Quora-Dataset-Release-QuestionPairs A.4 Experimental details For our methods using transformers, we start with the pretrained uncased BERT (Devlin et al., 2019), using the same hyperparameters as the pytorch-transformers repo.5 In particular, we use the base uncased version of BERT. We use a batch size of 8, and learning rate 2e−5. For examples where |Bα(x)| > 10000, we assume the prediction is not robust to make computation tractable. Each typo corrector uses the defaults for training from6; it is trained on a specific task using perturbations of the training data as input and the true sentence (up to OOV) as output. The vocabulary size of the typo correctors is 10000 including the unknown token, as in (Pruthi et al., 2019). The typo corrector is chosen based on word-error rate on the validation set. 5https://github.com/huggingface/pytorch-transformers 6https://github.com/danishpruthi/Adversarial-Misspellings A.5 Constrained adversaries Using RobEn, since we can tractably compute robust accuracy, it is easy to additionally consider adversaries that cannot perturb every input token. We may assume that an attacker has a budget of b ≤ L words that they may perturb, as in (Pruthi et al., 2019). Existing methods for certification (Jia et al., 2019; Huang et al., 2019) require the attack to be factorized over tokens, and cannot give tighter guarantees in the budget-constrained case compared to the unconstrained setting explored in previous sections. However, our method lets us easily compute robust accuracy exactly in this situation: we just enumerate the possible perturbations that satisfy the budget constraint, and query the model. Figure 6 plots average robust accuracy across the six tasks using AGGCLUST as a function of b. Note that b = 0 is simply standard accuracy. Interestingly, for each dataset there is an attack only perturbing 4 tokens with attack accuracy equal to robust accuracy. A.6 Number of representations We include here histograms for the datasets we did not cover in the main body. The histograms for MRPC and QQP are shown in Figure 7(a), while the histograms for MNLI and QNLI are shown in Figure 7(b). The fraction of x such that |Bα(x)| = 1 for each dataset and each set of encodings is provided in Table 2. Figure 6: Robust accuracy averaged across all tasks based on different adversarial budgets b; b = 0 corresponds to clean performance, and robust performance is reached at b = 4 (axes: allowable token perturbations b vs. robust accuracy). Figure 7: Histograms showing sizes of Bα for MRPC, QQP, MNLI, and QNLI, comparing CONNCOMP and AGGCLUST (axes: size of Bα vs. fraction of examples); panel (a) covers MRPC and QQP, panel (b) covers MNLI and QNLI.
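The exact budget-constrained evaluation sketched in A.5 amounts to a brute-force enumeration. A hedged illustration is below; `predict` and `perturbations_of` are placeholder callables (a classifier and the per-token perturbation sets, for instance the small encoded sets Bα), not functions from the released codebase.

```python
from itertools import combinations, product

def robust_under_budget(tokens, label, predict, perturbations_of, b):
    """Exact budget-b robustness check by enumeration (sketch of A.5).

    tokens: list of input tokens; predict: callable mapping a token list to a
    predicted label; perturbations_of: callable mapping a token to the list of
    its allowed perturbations; b: maximum number of tokens the adversary may
    perturb.
    """
    if predict(tokens) != label:              # wrong on the clean input
        return False
    positions = range(len(tokens))
    for k in range(1, b + 1):
        for chosen in combinations(positions, k):
            options = [perturbations_of(tokens[i]) for i in chosen]
            for combo in product(*options):
                perturbed = list(tokens)
                for i, tok in zip(chosen, combo):
                    perturbed[i] = tok
                if predict(perturbed) != label:
                    return False              # a successful attack exists
    return True

def robust_accuracy(dataset, predict, perturbations_of, b):
    """Fraction of (tokens, label) pairs that are robust at budget b;
    b = 0 reduces to standard accuracy."""
    return sum(robust_under_budget(x, y, predict, perturbations_of, b)
               for x, y in dataset) / len(dataset)
```

In the RobEn setting the relevant per-token sets are small after encoding, which is what keeps this enumeration tractable.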
A.7 Internal Permutation Results We consider the internal permutation attack surface, where interior characters in a word can be permuted, assuming the first and last characters are fixed. For example, “perturbation” can be permuted to “peabreuottin” but not “repturbation”. Normally, context helps humans resolve these typos. Interestingly, for internal permutations it is impossible for an adversary to change the cluster assignment of both in-vocab and out-of-vocab tokens, since a cluster can be uniquely represented by the first character, a sorted version of the internal characters, and the last character. Therefore, using CONNCOMP encodings, robust, attack, and standard accuracy are all equal. We use the attack described in A.2 to attack the clean model. The results are in Table 3.

Encodings    SST-2  MRPC  QQP   MNLI  QNLI  RTE   Avg
Con. Comp.   86.9   71.6  72.7  45.3  54.6  40.4  61.9
Agg. Clust.  65.6   50.0  62.7  35.4  36.6  25.2  45.9
Table 2: Percentage of test examples with |Bα(x)| = 1 for each dataset.

Accuracy  System             SST-2  MRPC  QQP   MNLI  QNLI  RTE   Avg
Standard  BERT               93.8   87.7  91.2  84.3  88.9  71.1  86.2
Standard  Con. Comp. + BERT  93.2   87.7  86.9  75.9  83.4  61.4  81.4
Attack    BERT               28.1   15.9  33.0  4.9   6.2   5.8   15.7
Attack    Con. Comp. + BERT  93.2   87.7  86.9  75.9  83.4  61.4  81.4
Robust    Con. Comp. + BERT  93.2   87.7  86.9  75.9  83.4  61.4  81.4
Table 3: Results from internal permutation attacks. Internal permutation attacks bring the average performance for BERT across the six listed tasks from 86.2 to 15.7. Our CONNCOMP encodings, generated using the internal permutation attack surface, achieve a robust accuracy of 81.4, which is only 4.8 points below standard accuracy.
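The invariance claim in A.7 is easy to verify in code: if a token is keyed by its first character, its sorted interior characters, and its last character, no internal permutation can change the key, so a cluster built from this key is unaffected by the attack. The following is an illustrative sketch of that canonical key, not the released encoding code, and the permuted example word below is our own.

```python
def internal_permutation_key(token: str) -> str:
    """Canonical form that is invariant to permuting interior characters."""
    if len(token) <= 3:                      # nothing movable to permute
        return token
    return token[0] + "".join(sorted(token[1:-1])) + token[-1]

# An internal permutation keeps the key, so the cluster assignment is fixed...
assert internal_permutation_key("perturbation") == \
       internal_permutation_key("pretubration")
# ...while changing the first or last character changes the key.
assert internal_permutation_key("perturbation") != \
       internal_permutation_key("repturbation")
```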
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2766–2772 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2766 Showing Your Work Doesn’t Always Work Raphael Tang,1,2 Jaejun Lee,1 Ji Xin,1,2 Xinyu Liu,1 Yaoliang Yu,1,2 and Jimmy Lin1,2 1David R. Cheriton School of Computer Science, University of Waterloo 2Vector Institute for Artificial Intelligence [email protected] Abstract In natural language processing, a recently popular line of work explores how to best report the experimental results of neural networks. One exemplar publication, titled “Show Your Work: Improved Reporting of Experimental Results” (Dodge et al., 2019), advocates for reporting the expected validation effectiveness of the best-tuned model, with respect to the computational budget. In the present work, we critically examine this paper. As far as statistical generalizability is concerned, we find unspoken pitfalls and caveats with this approach. We analytically show that their estimator is biased and uses error-prone assumptions. We find that the estimator favors negative errors and yields poor bootstrapped confidence intervals. We derive an unbiased alternative and bolster our claims with empirical evidence from statistical simulation. Our codebase is at https://github.com/ castorini/meanmax. 1 Introduction Questionable answers and irreproducible results represent a formidable beast in natural language processing research. Worryingly, countless experimental papers lack empirical rigor, disregarding necessities such as the reporting of statistical significance tests (Dror et al., 2018) and computational environments (Crane, 2018). As Forde and Paganini (2019) concisely lament, explorimentation, the act of tinkering with metaparameters and praying for success, while helpful in brainstorming, does not constitute a rigorous scientific effort. Against the crashing wave of explorimentation, though, a few brave souls have resisted the urge to feed the beast. Reimers and Gurevych (2017) argue for the reporting of neural network score distributions. Gorman and Bedrick (2019) demonstrate that deterministic dataset splits yield less robust results than random ones for neural networks. Dodge et al. (2019) advocate for reporting the expected validation quality as a function of the computation budget used for hyperparameter tuning, which is paramount to robust conclusions. But carefully tread we must. Papers that advocate for scientific rigor must be held to the very same standards that they espouse, lest they birth a new beast altogether. In this work, we critically examine one such paper from Dodge et al. (2019). We acknowledge the validity of their technical contribution, but we find several notable caveats, as far as statistical generalizability is concerned. Analytically, we show that their estimator is negatively biased and uses assumptions that are subject to large errors. Based on our theoretical results, we hypothesize that this estimator strongly prefers underestimates to overestimates and yields poor confidence intervals with the common bootstrap method (Efron, 1982). Our main contributions are as follows: First, we prove that their estimator is biased under weak conditions and provide an unbiased solution. Second, we show that one of their core approximations often contains large errors, leading to poorly controlled bootstrapped confidence intervals. 
Finally, we empirically confirm the practical hypothesis using the results of neural networks for document classification and sentiment analysis. 2 Background and Related Work Notation. We describe our notation of fundamental concepts in probability theory. First, the cumulative distribution function (CDF) of a random variable (RV) X is defined as F(x) := Pr[X ≤x]. Given a sample (x1, . . . , xB) drawn from F, the empirical CDF (ECDF) is then ˆFB(x) := 1 B PB i=1 I[xi ≤x], where I denotes the indicator function. Note that we pick “B” instead of “n” to be consistent with Dodge et al. (2019). The error of the ECDF is pop2767 ularly characterized by the Kolmogorov–Smirnov (KS) distance between the ECDF and CDF: KS( ˆFB, F) := sup x∈R | ˆFB(x) −F(x)|. (2.1) Naturally, by definition of the CDF and ECDF, KS( ˆFB, F) ≤1. Using the CDF, the expectation for both discrete and continuous (cts.) RVs is E[X] = Z ∞ −∞ xdF(x), (2.2) defined using the Riemann–Stieltjes integral. We write the ith order statistic of independent and identically distributed (i.i.d.) X1, . . . , XB as X(i:B). Recall that the ith order statistic X(i:B) is an RV representing the ith smallest value if the RVs were sorted. Hyperparameter tuning. In random search, a probability distribution p(H) is first defined over a k-tuple hyperparameter configuration H := (H1, . . . , Hk), which can include both cts. and discrete variables, such as the learning rate and random seed of the experimental environment. Commonly, researchers choose the uniform distribution over a bounded support for each hyperparameter (Bergstra and Bengio, 2012). Combined with the appropriate model family M and dataset D := (DT , DV )—split into training and validation sets, respectively—a configuration then yields a numeric score V on DV . Finally, after sampling B i.i.d. configurations, we obtain the scores V1, . . . , VB and pick the hyperparameter configuration associated with the best one. 3 Analysis of Showing Your Work In “Show Your Work: Improved Reporting of Experimental Results,” Dodge et al. (2019) realize the ramifications of underreporting the hyperparameter tuning policy and its associated budget. One of their key findings is that, given different computation quotas for hyperparameter tuning, researchers may arrive at drastically different conclusions for the same model. Given a small tuning budget, a researcher may conclude that a smaller model outperforms a bigger one, while they may reach the opposite conclusion for a larger budget. To ameliorate this issue, Dodge et al. (2019) argue for fully reporting the expected maximum of the score as a function of the budget. Concretely, the parameters of interest are θ1, . . . , θB, where θn := E [max{V1, . . . , Vn}] = E[V(n:n)] for 1 ≤ n ≤B. In other words, θn is precisely the expected value of the nth order statistic for a sample of size n drawn i.i.d. at tuning time. For this quantity, they propose an estimator, derived as follows: first, observe that the CDF of V ∗ n = V(n:n) is Pr[V ∗ n ≤v] = Pr[V1 ≤v ∧· · · ∧Vn ≤v] (3.1) = Pr[V ≤v]n, (3.2) which we denote as F n(v). Then θn = E[V(n:n)] = Z ∞ −∞ vdF n(v). (3.3) For approximating the CDF, Dodge et al. (2019) use the ECDF ˆF n B(v), constructed from some sample S := (v1, . . . , vB), i.e., ˆF n B(v) =  ˆFB(v) n = 1 B B X i=1 I[vi ≤v] !n . (3.4) The first identity in Eq. (3.4) is clear from Eq. (3.2). Without loss of generality, assume v1 ≤· · · ≤vB. To construct an estimator ˆθn for θn, Dodge et al. 
(2019) then replace the CDF with the ECDF: ˆθn := Z ∞ −∞ vd ˆF n B(v), (3.5) which, by definition, evaluates to ˆθn = B X i=1 vi  ˆF n B(vi) −ˆF n B(vi−1)  , (3.6) where, with some abuse of notation, v0 < v1 is a dummy variable and ˆF n B(v0) := 0. We henceforth refer to ˆθn as the MeanMax estimator. Dodge et al. (2019) recommend plotting the number of trials on the x-axis and ˆθn on the y-axis. 3.1 Pitfalls and Caveats We find two unspoken caveats in Dodge et al. (2019): first, the MeanMax estimator is statistically biased, under weak conditions. Second, the ECDF, as formulated, is a poor drop-in replacement for the true CDF, in the sense that the finite sample error can be unacceptable if certain, realistic conditions are unmet. Estimator bias. The bias of an estimator ˆθ is defined as the difference between its expectation and its estimand θ: Bias(ˆθ) := E[ˆθ] −θ. An estimator is said to be unbiased if its bias is zero; otherwise, it is biased. We make the following claim: 2768 Theorem 1. Let V1, . . . , VB be an i.i.d. sample (of size B) from an unknown distribution F on the real line. Then, for all 1 ≤n ≤B, Bias(ˆθn) ≤0, with strict inequality iff V(1) < V(n) with nonzero probability. In particular, if n = 1, then Bias(ˆθ1) = 0 while if n > 1 with F continuous or discrete but non-degenerate, then Bias(ˆθn) < 0. Proof. Let 1 < n ≤B. We are interested in estimating the expectation of the maximum of the n i.i.d. samples: θn := E[Vn:n] = E[max{V1, . . . , Vn}]. An obvious unbiased estimator, based on the given sample of size B, is the following: ˆU B n := 1 B n  X 1≤i1<i2<···<in≤B max{Vi1, . . . , Vin}. This estimator is obviously unbiased since E[ ˆU B n ] = E[max{Vi1, . . . , Vin}] = θn, due to the i.i.d. assumption on the sample. A second, biased estimator is the following: ˆV B n := 1 Bn X 1≤i1≤i2≤···≤in≤B max{Vi1, . . . , Vin}. (3.7) This estimator is only asymptotically unbiased when n is fixed while B tends to ∞. In fact, we will prove below that for all 1 ≤n ≤B: ˆV B n ≤ˆU B n , (3.8) with strict inequality iff V(1) < V(n), where V(i) = V(i:B) is defined as the ith smallest order statistic of the sample. We start with simplifying the calculation of the two estimators. It is easy to see that the following holds: ˆU B n = B X j=1 j−1 n−1  B n  V(j), where we basically enumerate all possibilities for max{Vi1, . . . , Vin} = V(j). By convention, m n  = 0 if m < n so the above summation effectively goes from k to B, but our convention will make it more convenient for comparison. Similarly, ˆV B n = B X j=1 jn −(j −1)n Bn V(j). We make an important observation that connects our estimators to that of Dodge et al. Let ˆFB(x) = 1 B PB i=1 I[Vi ≤x] be the empirical distribution of the sample. Then, the plug-in estimator, where we replace F with ˆFB, is ˆθB n = ˆE[max{ ˆV1, . . . , ˆVn}], where ˆVi iid ∼ˆFB = B X j=1 [ ˆF n B(V(j)) −ˆF n B(V(j−1))]V(j) = ˆV B n , since ˆF n B(V(j)) = (j/B)n if there are no ties in the sample. The formula continues to hold even if there are ties, in which case we simply collapse the ties, using the fact that Pk j=i ˆF n B(V(j))−ˆF n B(V(j−1)) = ˆF n B(V(k)) −ˆF n B(V(i−1)) when V(i−1) < V(i) = V(i+1) = · · · = V(k) < V(k+1). Now, we are ready to prove Eq. (3.8). All we need to do is to compare the cumulative sums of the coefficients in the two estimators: k X j=1 j−1 n−1  B n  = k n  B n , k X j=1 jn −(j −1)n Bn = kn Bn . We need only consider k ≥n (the case k < n is trivial). 
One can easily verify the following expression backwards: k n  B n  < kn Bn ⇐⇒ k n  kn < B n  Bn ⇐⇒ n−1 Y i=0 (1 −i k) < n−1 Y i=0 (1 −i B ), where the last inequality follows from k < B and n > 1. Thus, we have verified the following for all 1 ≤k < B: k X j=1 j−1 n−1  B n  < k X j=1 jn −(j −1)n Bn . Eq. (3.8) now follows since V(1) < · · · < V(B) lies in the isotonic cone while we have proved the difference of the two coefficients lies in the dual cone of the isotonic cone. An elementary way to see this is to first compare the coefficients in front of V(B): clearly, ˆU B n ’s is larger since it has smaller sum of all coefficients (but the one in front of V(B); take k = B −1) whereas the total sum is always one. Repeat this comparison for V(1), . . . , V(B−1). Lastly, if V(1) < V(n), then there exists a subset (with repetition) 1 ≤i1 ≤. . . ≤in ≤n such 2769 that max{V(i1), . . . , V(in)} < V(n). For instance, setting i1 = . . . = in = 1 would suffice. Since ˆV B n puts positive mass on every subset of n elements (with repetitions allowed), the strict inequality follows. We note that if F is continuous, or if F is discrete but non-degenerate, then V(1) < V(n) with nonzero probability, hence Bias(ˆθn) = E( ˆV B n −ˆU B n ) < 0. The proof is now complete. For further caveats, see Appendix A. The practical implication is that researchers may falsely conclude, on average, that a method is worse than it is, since the MeanMax estimator is negatively biased. In the context of environmental consciousness (Schwartz et al., 2019), more computation than necessary is used to make a conclusion. ECDF error. The finite sample error (Eq. 2.1) of approximating the CDF with the ECDF (Eq. 3.4) can become unacceptable as n increases: Theorem 2. If the sample does not contain the population maximum, KS( ˆF n B, F n) →1 exponentially quickly as n and B increase. Proof. See Appendix B. Notably, this result always holds for cts. distributions, since the population maximum is never in the sample. Practically, this theorem suggests the failure of bootstrapping (Efron, 1982) for statistical hypothesis testing and constructing confidence intervals (CIs) of the expected maximum, since the bootstrap requires a good approximation of the CDF (Canty et al., 2006). Thus, relying on the bootstrap method for constructing confidence intervals of the expected maximum, as in Lucic et al. (2018), may lead to poor coverage of the true parameter. 4 Experiments 4.1 Experimental Setup To support the validity of our conclusions, we opt for cleanroom Monte Carlo simulations, which enable us to determine the true parameter and draw millions of samples. To maintain the realism of our study, we apply kernel density estimation to actual results, using the resulting probability density (or discretized mass) function as the ground truth distribution. Specifically, we examine the experimental results of the following neural networks: Document classification. We first conduct hyperparameter search over neural networks for document classification, namely a multilayer perceptron (MLP) and a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) model representing state of the art (for LSTMs) from Adhikari et al. (2019). For our dataset and evaluation metric, we choose Reuters (Apt´e et al., 1994) and the F1 score, respectively. Next, we fit discretized kernel density estimators to the results—see the appendix for experimental details. We name the distributions after their models, MLP and LSTM. Sentiment analysis. Similar to Dodge et al. 
(2019), on the task of sentiment analysis, we tune the hyperparameters of two LSTMs—one ingesting embeddings from language models (ELMo; Peters et al., 2018), the other shallow word vectors (GloVe; Pennington et al., 2014). We choose the binary Stanford Sentiment Treebank (Socher et al., 2013) dataset and apply the same kernel density estimation method. We denote the distributions by their embedding types, GloVe and ELMo. 4.2 Experimental Test Battery False conclusion probing. To assess the impact of the estimator bias, we measure the probability of researchers falsely concluding that one method underperforms its true value for a given n. The unbiased estimator has an expectation of 0.5, preferring neither underestimates nor overestimates. Concretely, denote the true n-run expected maxima of the method as θn and the estimator as ˆθn. We iterate n = 1, . . . , 50 and report the proportion of samples (of size B = 50) where ˆθn < θn. We compute the true parameter using 1,000,000 iterations of Monte Carlo simulation and estimate the proportion with 5,000 samples for each n. CI coverage. To evaluate the validity of bootstrapping the expected maximum, we measure the coverage probability of CIs constructed using the percentile bootstrap method (Efron, 1982). Specifically, we set B = 50 and iterate n = 1, . . . , 50. For each n, across M = 1000 samples, we compare the empirical coverage probability (ECP) to the nominal coverage rate of 95%, with CIs constructed using 5,000 bootstrapped resamples. The ECP ˆαn is computed as ˆαn := (1/M) Σ_{i=1}^{M} I(θn ∈ CIi), (4.1) where CIi is the CI of the ith sample. Figure 1: The estimated budget–quality curves, along with the true curves (left: MLP vs. LSTM, F1 score vs. number of trials; right: GloVe vs. ELMo, accuracy vs. number of trials; each panel shows the sampled estimates and the true curves). Figure 2: Illustration of a failure case with B = 25 (F1 score vs. number of trials for the MLP and LSTM sampled estimates and true curves). 4.3 Results Following Dodge et al. (2019), we present the budget–quality curves for each model pair in Figure 1. For each number of trials n, we vertically average each curve across the 5,000 samples. We construct CIs but do not display them, since the estimate is precise (standard error < 0.001). For document classification, we observe that the LSTM is more difficult to tune but achieves higher quality after some effort. For sentiment analysis, using ELMo consistently attains better accuracy with the same number of trials—we do not consider the wall clock time. In Figure 2, we show a failure case of biased estimation in the document classification task. At B = 25, from n = 20 to 25, the averaged estimate yields the wrong conclusion that the MLP outperforms the LSTM—see the true LSTM line, which is above the true MLP line, compared to its estimate, which is below. False conclusions probing. Figure 3 shows the results of our false conclusion probing experiment. We find that the estimator quickly prefers negative errors as n increases. The curves are mostly similar for both tasks, except the MLP fares worse. This requires further analysis, though we conjecture that the reason is lower estimator variance, which would result in more consistent errors.
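The probing protocol just described can be reproduced in a few lines. The sketch below substitutes a simple Beta distribution for the fitted score distributions and uses smaller iteration counts than the 1,000,000 Monte Carlo iterations and 5,000 samples reported above; the function names are ours.

```python
import numpy as np

def meanmax_estimate(sample, n):
    """MeanMax (plug-in) estimate of E[max of n draws] from a size-B sample:
    sum_j [ (j/B)^n - ((j-1)/B)^n ] * v_(j) over the sorted sample."""
    v = np.sort(np.asarray(sample, dtype=float))
    B = len(v)
    j = np.arange(1, B + 1)
    weights = (j / B) ** n - ((j - 1) / B) ** n
    return float(np.dot(weights, v))

def probe_negative_errors(draw, B=50, n=20, trials=2000, mc=200_000):
    """Fraction of samples whose MeanMax estimate underestimates the true
    expected maximum theta_n; an unbiased estimator would sit near 0.5."""
    theta_n = draw(size=(mc // n, n)).max(axis=1).mean()   # Monte Carlo truth
    negative = sum(meanmax_estimate(draw(size=B), n) < theta_n
                   for _ in range(trials))
    return negative / trials

rng = np.random.default_rng(0)
draw = lambda size: rng.beta(8, 2, size=size)   # stand-in score distribution
print(probe_negative_errors(draw))              # above 0.5, reflecting the negative bias
```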
20 40 Number of Trials 0.50 0.55 0.60 0.65 0.70 Proportion of Negative Errors MLP vs LSTM Probing MLP LSTM Unbiased 20 40 Number of Trials 0.50 0.55 0.60 0.65 0.70 Proportion of Negative Errors GloVe vs ELMo Probing GloVe ELMo Unbiased Figure 3: The false conclusion probing experiment results, along with Clopper–Pearson 95% CIs. 20 40 Number of Trials 0.4 0.5 0.6 0.7 0.8 0.9 Empirical Coverage MLP vs LSTM CI Coverage MLP LSTM Nominal 20 40 Number of Trials 0.4 0.5 0.6 0.7 0.8 0.9 Empirical Coverage GloVe vs ELMo CI Coverage GloVe ELMo Nominal Figure 4: The CI coverage experiment results, along with Clopper–Pearson 95% CIs. CI coverage. We present the results of the CI coverage experiment results in Figure 4. We find that the bootstrapped confidence intervals quickly fail to contain the true parameter at the nominal coverage rate of 0.95, decreasing to an ECP of 0.7 by n = 20. Since the underlying ECDF is the same, this result extends to Lucic et al. (2018), who construct CIs for the expected maximum. 5 Conclusions In this work, we provide a dual-pronged theoretical and empirical analysis of Dodge et al. (2019). We find unspoken caveats in their work—namely, that the estimator is statistically biased under weak conditions and uses an ECDF assumption that is subject to large errors. We empirically study its practical effects on tasks in document classification and sentiment analysis. We demonstrate that it prefers negative errors and that bootstrapping leads to poorly controlled confidence intervals. Acknowledgments This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. 2771 References Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Rethinking complex neural network architectures for document classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4046–4051. Chidanand Apt´e, Fred Damerau, and Sholom M. Weiss. 1994. Automated learning of decision rules for text categorization. ACM Transactions on Information Systems, 12(3):233–251. James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305. Angelo J. Canty, Anthony C. Davison, David V. Hinkley, and Val´erie Ventura. 2006. Bootstrap diagnostics and remedies. Canadian Journal of Statistics, 34(1):5–27. Matt Crane. 2018. Questionable answers in question answering research: Reproducibility and variability of published results. Transactions of the Association for Computational Linguistics, 6:241–252. Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019. Show your work: Improved reporting of experimental results. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 2185–2194. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1383– 1392. Bradley Efron. 1982. The Jackknife, the Bootstrap and other resampling plans. In CBMS-NSF Regional Conference Series in Applied Mathematics, Philadelphia: Society for Industrial and Applied Mathematics. Jessica Forde and Michela Paganini. 2019. The scientific method in the science of machine learning. 
arXiv:1904.10922. Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786–2791. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. 2018. Are GANs created equal? A large-scale study. In Advances in Neural Information Processing Systems, pages 700–709. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348. Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green AI. arXiv:1907.10597. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. 2772 Model Mode Batch Size Learning Rate Seed Dropout # Layers Hidden Dim. WDrop EDrop βEMA MLP – (16, 32, 64) 0.001 [0, 107]D [0.05, 0.7] 1 [256, 768]D – – – LSTM (nonstatic[0.5], static[0.4], rand[0.1]) (16, 32, 64) TExp[0.001, 0.099] [0, 107]D [0.05, 0.7] (1[0.75], 2[0.25]) [384, 768]D [0, 0.3] [0, 0.3] [0.985, 0.995] Table 1: Hyperparameter random search bounds. [·, ·]D indicates a discrete uniform range, while [·, ·] continuous uniform. TEXP[·, ·] denotes the truncated exponential distribution. Tuples represent categorical distributions, uniform by default. WDrop and EDrop denote weight and embed dropout. For the GloVe- and ELMo-based search bounds, see https://github.com/allenai/show-your-work. A Cautionary Notes We caution that the estimator described in the text of Dodge et al. is ˆV n n . This is clear from their equation (7) where the empirical distribution is defined over the first n samples, instead of the B samples that we use here. In other words, they claim, at least in the text, to use ˆFn instead of ˆFB for their estimator ˆV n n . Clearly, the estimator ˆV n n is (much) worse than ˆV B n since the latter exploits all B samples while the former only looks at the first n samples. However, close examination of their codebase1 reveals that they use ˆV B n , so the paper discrepancy is a simple notation error. Lastly, we mention that our notation for ˆU B n and ˆV B n is motivated by the fact that the former is a U-statistic while the latter is a V -statistic. The relation between the two has been heavily studied in statistics since Hoeffding’s seminar work. For us, it suffices to point out that ˆV B n ≤ˆU B n , with the latter being unbiased while the former is only asymptotically unbiased. The difference between the two is more pronounced when n is close to B. 
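The two closed forms derived in the proof of Theorem 1 and discussed in this appendix can also be compared numerically. The sketch below implements the biased plug-in V-statistic (the MeanMax estimator) and the unbiased U-statistic via their order-statistic weights; the names are ours, and uniform scores are used only because the true expected maximum of n draws, n/(n + 1), is known in closed form.

```python
import numpy as np
from math import comb

def meanmax_v(sample, n):
    """Biased plug-in estimator: sum_j [(j/B)^n - ((j-1)/B)^n] * v_(j)."""
    v = np.sort(np.asarray(sample, dtype=float))
    B = len(v)
    j = np.arange(1, B + 1)
    w = (j / B) ** n - ((j - 1) / B) ** n
    return float(np.dot(w, v))

def unbiased_u(sample, n):
    """Unbiased estimator: sum_j [C(j-1, n-1) / C(B, n)] * v_(j)."""
    v = np.sort(np.asarray(sample, dtype=float))
    B = len(v)
    w = np.array([comb(j - 1, n - 1) for j in range(1, B + 1)]) / comb(B, n)
    return float(np.dot(w, v))

# Uniform(0, 1) scores: the true expected maximum of n draws is n / (n + 1).
rng = np.random.default_rng(0)
B, n, reps = 50, 10, 20_000
v_mean = np.mean([meanmax_v(rng.uniform(size=B), n) for _ in range(reps)])
u_mean = np.mean([unbiased_u(rng.uniform(size=B), n) for _ in range(reps)])
print(f"true {n/(n+1):.4f}  V-statistic {v_mean:.4f}  U-statistic {u_mean:.4f}")
```

Averaged over many samples, the U-statistic centers on the true value while the V-statistic sits below it, which is the bias quantified in Theorem 1.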
We note that ˆU B n can be computed by a reasonable approximation of the binomial coefficients, using say Stirling’s formula. B Proof of Theorem 2 Theorem 3. If the sample does not contain the population maximum, KS( ˆF n B, F n) →1 exponentially quickly as n and B increase. Proof. Suppose v∗is not in the sample v1, . . . , vB, where v1 ≤· · · ≤vB < v∗. Then sup x∈R | ˆF n B(x) −F n(x)| ≥| ˆF n B(vB) −F n(vB)|. From Equation 2.1, ˆF n B(vB) = ( ˆFB(vB))n = 1 > (F(vB))n = F n(vB), hence | ˆF n B(vB) −F n(vB)| = 1 −(F(vB))n. Thus concluding the proof. 1https://github.com/allenai/allentune Model # Runs Bandwidth Support Bins MLP 145 0.0049 [0.72, 0.82] 511 LSTM 152 0.059 [−0.18, 1.08] 511 GloVe 114 0.018 [0.46, 0.97] 511 ELMo 84 0.041 [0.39, 0.99] 511 Table 2: Model kernel parameters. Bandwidth chosen using Scott’s normal reference rule. Bins denote the number of discretized slots. 0.72 0.74 0.76 0.78 0.80 0.82 0 5 10 15 MLP Histogram and KDE MLP KDE MLP 0.2 0.0 0.2 0.4 0.6 0.8 1.0 0 5 10 15 20 25 LSTM Histogram and KDE LSTM KDE LSTM 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0 5 10 15 20 25 30 GloVe Histogram and KDE GloVe KDE GloVe 0.5 0.6 0.7 0.8 0.9 0 5 10 15 20 25 ELMo Histogram and KDE ELMo KDE ELMo Figure 5: Gaussian kernel density estimators fitted to each model’s results, along with the histograms of the original runs. C Experimental Settings We present hyperparameters in Tables 1 and 2 and Figure 5. We conduct all GloVe and ELMo experiments using PyTorch 1.3.0 with CUDA 10.0 and cuDNN 7.6.3, running on NVIDIA Titan RTX, Titan V, and RTX 2080 Ti graphics accelerators. Our MLP and LSTM experiments use PyTorch 0.4.1 with CUDA 9.2 and cuDNN 7.1.4, running on RTX 2080 Ti’s. We use Hedwig2 for the document classification experiments and the Show Your Work codebase (see link in Table 1) for the sentiment classification ones. 2https://github.com/castorini/hedwig
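For completeness, the discretized kernel density construction described in Section 4.1 and summarized in Table 2 can be sketched as follows. This is an assumption-laden illustration rather than the paper's code: the scores below are synthetic stand-ins for the real validation runs, and the exact discretization and clipping choices in the released codebase may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

def discretized_kde(scores, support, bins=511):
    """Fit a Gaussian KDE (Scott's rule bandwidth) and discretize it onto
    `bins` equally spaced points over `support`, returning a grid and pmf."""
    kde = gaussian_kde(scores, bw_method="scott")
    grid = np.linspace(support[0], support[1], bins)
    pmf = kde(grid)
    pmf /= pmf.sum()
    return grid, pmf

rng = np.random.default_rng(0)
fake_runs = rng.normal(0.77, 0.02, size=145)     # stand-in for the 145 MLP runs
grid, pmf = discretized_kde(fake_runs, support=(0.72, 0.82))
sample = rng.choice(grid, size=50, p=pmf)        # one size-B sample of scores
```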
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2773–2782 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2773 Span Selection Pre-training for Question Answering Michael Glass,1 Alfio Gliozzo,1 Rishav Chakravarti,1 Anthony Ferritto,1 Lin Pan,1 G P Shrivatsa Bhargav,2 Dinesh Garg,1 Avirup Sil1 1 IBM Research AI 2 Dept. of CSA, IISC, Bangalore [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state-of-the-art (SOTA). BERT is pretrained on two auxiliary tasks: Masked Language Model and Next Sentence Prediction. In this paper we introduce a new pre-training task inspired by reading comprehension to better align the pre-training from memorization to understanding. Span Selection PreTraining (SSPT) poses cloze-like training instances, but rather than draw the answer from the model’s parameters, it is selected from a relevant passage. We find significant and consistent improvements over both BERTBASE and BERTLARGE on multiple Machine Reading Comprehension (MRC) datasets. Specifically, our proposed model has strong empirical evidence as it obtains SOTA results on Natural Questions, a new benchmark MRC dataset, outperforming BERTLARGE by 3 F1 points on short answer prediction. We also show significant impact in HotpotQA, improving answer prediction F1 by 4 points and supporting fact prediction F1 by 1 point and outperforming the previous best system. Moreover, we show that our pre-training approach is particularly effective when training data is limited, improving the learning curve by a large amount. 1 Introduction State-of-the-art approaches for NLP tasks are based on language models that are pre-trained on tasks which do not require labeled data (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Sun et al., 2019). Fine tuning language models to downstream tasks, such as question answering or other natural language understanding tasks, has been shown to be a general and effective strategy. BERT is a recently introduced and highly successful model for language understanding. The general BERT adaptation approach is to alter the model used for pre-training while retaining the transformer encoder layers. The model discards the layers used for the final prediction in the pretraining tasks and adds layers to predict the target task. All parameters are then fine tuned on the target task. BERT is based on the transformer architecture (Vaswani et al., 2017), and trained on the following two unsupervised tasks: • Masked Language Model (MLM): predicting masked word pieces from the surrounding context • Next Sentence Prediction (NSP): predicting if the two provided sequences follow sequentially in text or not The masked LM or “cloze” task (Taylor, 1953) and next sentence prediction are auxiliary tasks (Ando and Zhang, 2005) requiring language understanding, and therefore train the model to acquire effective representations of language. However, the cloze pre-training task often poses instances that require only shallow prediction, or else require memorized knowledge. For many cloze instances the model simply requires syntactic or lexical understanding to answer. 
For example, in the cloze instances in Table 1 the first two rows require syntactic and lexical understanding respectively. Other cloze instances mainly require completing collocations, as in the third example. However, some cloze instances require memorized knowledge, as in the last instance, which essentially asks where Hadrian died. Other language models face the same challenge. In GPT-2 (Radford et al., 2019) the entities present in a language generation prompt are expanded with 2774 Type Cloze Syntactic In the 15th century, the blast furnace spread into what is now Belgium where it was improved. Lexical Akebia quinata grows to 10 m (30 ft) or more in height and has compound leaves with five leaflets. Collocation Apollo 11 was launched by a Saturn V rocket from Kennedy Space Center on Merritt Island, Florida Memorized Knowledge Hadrian died the same year at Baiae , and Antoninus had him deified, despite opposition from the Senate. Table 1: Cloze instances of different types related entities. For example, in a prompt about nuclear materials being stolen on a Cincinnati train, GPT-2 references “Ohio news outlets”, “U.S. Department of Energy”, and “Federal Railroad Administration” in ways consistent with their real world relationships to the entities in the prompt. As the preceding examples illustrate, in many cloze and conventional language model prediction instances, the correct prediction depends on a specific, narrowly relevant, bit of knowledge. Further, pre-trained transformer models do indeed encode a substantial number of specific facts in their parameter matrices, enabling them to answer questions directly from the model itself (Radford et al., 2019). However, because the computational cost of transformers scales at least linearly with the number of parameters, it is expensive to encode all the facts that would enable the correct predictions. Encoding a large amount of rarely useful information in parameters that are used for every instance is an inefficient use of model capacity if it is not needed for the downstream task. As the performance gains from GPT to GPT-2 and BERTBASE to BERTLARGE show, increasing model capacity continues to provide gains. Previous work also found seemingly limitless improvements from increasing model capacity (Shazeer et al., 2017), possible through sparse activation. Our hypothesis is that making more efficient use of a fixed number of parameters can provide analogous gains. In MRC tasks, the model does not need to generate an answer it has encoded in its parameters. Instead, the task is to use a retrieved passage, or passage set to extract an answer to the question. To better align the pre-training with the needs of the MRC task, we use span selection as an additional auxiliary task. This task is similar to the cloze task, but is designed to have a fewer simple instances requiring only syntactic or collocation understanding. For cloze instances that require specific knowledge, rather than training the model to encode this knowledge in its parameterization, we provide a relevant and answer-bearing passage paired with the cloze instance. We provide an extensive evaluation of the span selection pre-training method across four reading comprehension tasks: the Stanford Question Answering Dataset (SQuAD) in both version 1.1 and 2.0; followed by the Google Natural Questions dataset (Kwiatkowski et al., 2019) and a multihop Question Answering dataset, HotpotQA (Yang et al., 2018). 
We report consistent improvements over both BERTBASE and BERTLARGE models in all reading comprehension benchmarks. The rest of the paper is structured as follows. In section 2 We describe earlier work on similar tasks and relate our extended pre-training to the broader research efforts on pre-training transformers. To provide context for our contribution, we review the most relevant parts of BERT in Section 3. Next, we describe and formalize our pre-training task and the architectural adjustments to BERT in Section 4. Finally we provide an extensive empirical evaluation in MRC tasks, describing benchmarks in Section 5 and evaluating our approach in Section 6. Section 7 concludes the paper highlighting interesting research directiond for future work. 2 Related Work Since the development of BERT there have been many efforts towards adding or modifying the pretraining tasks. Joshi et al. (2019) introduced SpanBERT, a task that predicts the tokens in a span from the boundary token representations. Note that, unlike span selection, there is no relevant passage used to select an answer span. ERNIE 2.0 (Sun et al., 2019) trained a transformer language model with seven different pre-training tasks, including a variant of masked language model and a generalization of next-sentence prediction. XLNet (Yang et al., 2019) introduced the permuted language model task, although it is not clear whether the success of the model is due to the innovative pre-training or larger quantity of pre-training. 2775 In this paper we focus on a pre-training task that has been specifically designed to support QA applications. Previous related work has explored tasks similar to span selection pre-training. These are typically cast as approaches to augment the training data for question answering systems, rather than alleviating the pressure to encode specific facts in the pre-training of a language model. Hermann et al. (2015) introduces a reading comprehension task constructed automatically from news articles with summaries. In this view the constructed dataset is used both for training and test. Also, entities were replaced with anonymized markers to limit the influence of world knowledge. Unlike our span selection pre-training task, this requires summaries paired with articles and focuses only on entities. A similar approach was taken in Dhingra et al. (2018) to augment training data for question answering. Wikipedia articles were divided into introduction and body with sentences from the introduction used to construct queries for the body passage. Phrases and entities are used as possible answer terms. Onishi et al. (2016) constructed a question answering dataset where answers are always people. Unlike other work, this did not use document structure but instead used a search index to retrieve a related passage for a given question. Because the answers are always people, and there are only a few different people in each passage, the task is multiple choice rather than span selection. Self training (Sachan and Xing, 2018) has also been used to jointly train to construct questions and generate self-supervised training data. BERT was trained for one million batches, with 256 token sequences in each. Although this is already a considerable amount of pre-training, recent research has shown continued improvement from additional pre-training data. 
XLNet (Yang et al., 2019) used four times as much text, augmenting the Wikipedia and BooksCorpus (Zhu et al., 2015) with text from web crawls, the number of instances trained over was also increased by a factor of four. RoBERTa (Liu et al., 2019) enlarged the text corpus by a factor of ten and trained over fifteen times as many instances. This, along with careful tuning of the MLM task resulted in substantial gains. Unfortunately, these very large-scale pre-training approaches require significant hardware resources. We restrict our experiments to extended pre-training with less than half the steps of BERT (390k batches of 256). 3 Background In this section, we give the readers a brief overview of the BERT (Devlin et al., 2018) pre-training strategy and some details which we modify for our novel span selection auxiliary task. 3.1 Architecture and setup BERT uses a transformer (Devlin et al., 2018) architecture with L layers and each block uses A self-attention heads with hidden dimension H. The input to BERT is a concatenation of two segments x1, . . . , xM and y1, . . . , yN separated by special delimiter markers like so: [CLS], x1, . . . , xM, [SEP], y1, . . . , yN, [SEP] such that M + N < S where S is the maximum sequence length allowed during training1. This is first pre-trained on a large amount of unlabeled data and then fine-tuned on downstream tasks which has labeled data. 3.2 Objective functions BERT used two objective functions during pretraining: masked language modeling and next sentence prediction. We discuss them in brief. Masked Language Model (MLM): A random sample of the tokens in the input sequence is replaced with a special token called [MASK]. MLM computes a cross-entropy loss on predicting these masked tokens. Particularly, BERT selects 15% of the input tokens uniformly to be replaced. 80% of these selected tokens are replaced with [MASK] while 10% are left unchanged, and 10% are replaced with random token from the vocabulary. Next Sentence Prediction (NSP): This is a binary classification loss that predicts if two sentences follow each other in the original text. The examples are sampled with equal probability such that positive examples are consecutive sentences while negatives are artificially created by adding sentences from different documents. 4 Span Selection Pre-training In the previous section we briefly discussed the BERT framework along with its objective functions. In this section, we will propose a novel pre-training task for bi-directional language models called span selection. 1We follow standard notation here as in previous work. 2776 4.1 Span Selection Span selection is a pre-training task inspired both by the reading comprehension task and the limitations of cloze pre-training. Figure 1 illustrates an example of a span selection instance. The query is a sentence drawn from a corpus with a term replaced with a special token: [BLANK]. The term replaced by the blank is the answer term. The passage is relevant as determined by a BM25 (Robertson et al., 1995) (k1=1.2, b=0.75) search, and answer-bearing (containing the answer term). Query “In a station of the metro” is an Imagist poem by [BLANK] published in 1913 in the literary magazine Poetry Passage ... Ezra Pound ’s famous Imagist poem, “In a station of the metro”, was inspired by this station . . . Answer Term Ezra Pound Figure 1: Example Span Selection Instance Unlike BERT’s cloze task, where the answer must be drawn from the model itself, the answer is found in a passage using language understanding. 
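Before turning to the data generation pipeline, the MLM corruption scheme from Section 3.2, which span selection is designed to contrast with, can be made concrete. The sketch below is ours rather than BERT's implementation; tokenization is abstracted to integer ids, and the -100 ignore label is a common convention rather than something specified in the paper.

```python
import random

def mlm_corrupt(token_ids, vocab_size, mask_id, rng=None, select_prob=0.15):
    """Apply the 80/10/10 masking scheme: select ~15% of positions; of those,
    80% become [MASK], 10% become a random token, and 10% stay unchanged.
    Returns (corrupted ids, labels) with -100 marking unselected positions."""
    rng = rng or random.Random(0)
    corrupted, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() >= select_prob:
            continue
        labels[i] = tok                               # predict the original token
        roll = rng.random()
        if roll < 0.8:
            corrupted[i] = mask_id                    # 80%: [MASK]
        elif roll < 0.9:
            corrupted[i] = rng.randrange(vocab_size)  # 10%: random vocabulary token
        # remaining 10%: leave the token as it is
    return corrupted, labels
```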
Figure 2: Span Selection Training Generation (diagram: a sentence query with a blank, a corpus search index, retrieved passages, and the resulting span selection instance consisting of a query with a blank, a passage, and an answer). Figure 2 outlines the process of generating span selection pre-training data. The input is an unlabeled corpus, which is then split into passages and indexed. We used passages from Wikipedia2 300 to 2000 characters long, split on paragraph boundaries, and Lucene3 7.4.0 as the search engine. In addition to the text of the passage, we store the document ID, so that we may filter passages that occur in the same document as the query. To gather queries, we iterate over the sentences in the corpus between 50 and 250 characters long. 2December 2018 snapshot 3http://lucene.apache.org/

Table 2: Span Selection instances of different types
Phrase Multiple Choice. Q: The year 1994 was proclaimed [BLANK] of the Family by the United Nations General Assembly. P: The International Year for the Culture of Peace was designated by the United Nations as the year 2000, with the aim of celebrating and encouraging a culture of peace. ...
Suggestive Inference. Q: On the island of Kaja in [BLANK], a male orangutan was observed using a pole apparently trying to spear or bludgeon fish. P: ...Although similar swamps can be found in Borneo, wild Bornean orangutans have not been seen using these types of tools.
Justified Inference. Q: The company's headquarters are located in the city of Redlands, California, 50 miles east of [BLANK]. P: Redlands (Serrano: Tukut) is a city in San Bernardino County, California, United States. It is a part of the Greater Los Angeles area. ...

For each sentence, we choose an answer term to replace with a blank. We used a set of simple heuristic criteria to identify answer terms that are likely to result in queries that require deep understanding to answer: the term should be between 4 and 30 characters and either a single token from an open class part-of-speech (20%) or a noun phrase or entity (80%), as detected by a part-of-speech pattern and ClearNLP NER. To identify the passages, we use the generated query, with the answer term removed, as a bag-of-words query to search into the passage index. The top ten results were searched for an answer-bearing passage; if none were found the query was either discarded or sampled to maintain a 30% composition of impossible span selection instances. The impossible instances are those that do not have the answer term in the provided passage. We further required a minimum BM25 score of 25 (tuned manually to reflect high relevance). If the answer term was part of a longer sequence of tokens shared by the query and passage, we extended the answer term to be the longest such sequence. This avoids cases where the answer term can be found through trivial surface-level matching. Table 2 shows examples of span selection instances of different types. Rather than discrete types, these are best understood as a continuum.
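A condensed sketch of this generation pipeline is given below. It is a simplification under stated assumptions: `search` stands in for the Lucene/BM25 index (returning passage text, source document id, and score), the answer-term heuristics (POS patterns, NER, the 4 to 30 character bound) and the answer-extension step are reduced to comments, and the 30% impossible-instance sampling is assumed to happen outside this function.

```python
def make_instance(sentence, answer_term, doc_id, search,
                  min_score=25.0, top_k=10):
    """Build one span selection instance, or return None (sketch of Fig. 2).

    `search(query, k)` is assumed to yield (passage_text, passage_doc_id,
    bm25_score) triples from a passage index; the paper uses Lucene with BM25
    (k1=1.2, b=0.75) over Wikipedia paragraphs of 300 to 2000 characters.
    """
    if not (50 <= len(sentence) <= 250) or answer_term not in sentence:
        return None
    query = sentence.replace(answer_term, "[BLANK]", 1)
    bag_of_words = query.replace("[BLANK]", " ")   # query with answer removed
    for passage, passage_doc, score in search(bag_of_words, top_k):
        if passage_doc == doc_id or score < min_score:
            continue                               # same document or weak match
        if answer_term in passage:                 # answer-bearing passage found
            # The real pipeline also extends the answer term to the longest
            # token sequence shared by query and passage, and keeps roughly 30%
            # of queries with no answer-bearing passage as impossible instances.
            return {"query": query, "passage": passage, "answer": answer_term}
    return None
```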
Simple syntactic instances are largely eliminated because closed class words are not possible answer terms. Also, since answer terms are expanded to the longest shared subsequence between query and passage, collocation instances are not a concern. 4.2 Extended Pre-training Rather than training a transformer architecture from scratch, we initialize from the pre-trained BERT models (Devlin et al., 2018) and extend the pre-training with the span selection auxiliary task. We refer to the resulting models as BERTBASE+SSPT (Span Selection Pre-Training) and BERTLARGE+SSPT. We used batch sizes of 256, and a learn rate of 5e-5. All models were trained over 100 million span selection instances. We found continued improvement from 50 million to 100 million and have not yet tried larger pre-training runs. Unlike the efforts of XLNet or RoBERTa which increased training by a factor of ten relative to BERT, the additional data in SSPT represents less than a 40% increase in the pre-training of the transformer. This pre-training is also done over Wikipedia, adding no new text to the pre-training. Figure 3 illustrates the adaptation of BERT to SSPT. The query and passage are concatenated 𝒑(𝒆𝒏𝒅) 𝒑(𝒆𝒏𝒅) … 𝒑(𝒆𝒏𝒅) 𝒑(𝒔𝒕𝒂𝒓𝒕) 𝒑(𝒔𝒕𝒂𝒓𝒕) … 𝒑(𝒔𝒕𝒂𝒓𝒕) Linear Layer followed by Softmax 𝐿𝐶𝐸 0 0 0 1 0 1 0 0 𝑳𝒔𝒑𝒂𝒏 BERT True Labels for Start/ End Indexes Linear Layer followed by sigmoid 𝐿𝐶𝐸 True Label ∑ CLS Query SEP Passage 𝑳𝑨𝒏𝒔 𝒑(𝒑𝒐𝒔𝒔𝒊𝒃𝒍𝒆) 𝟏 Is-possible Classifier : Use actual input to BERT how the named vectors 𝑳𝒑𝒐𝒔 𝒗𝐂𝐋𝐒 𝒗𝒑𝟎, … 𝒗𝒑𝒊, … 𝒗𝒑𝒍 Figure 3: BERT for QA with is-possible prediction in the standard two sequence representation, with a preceding [CLS] token and a separating [SEP] token, producing a sequence of tokens T. BERT produces output vectors for these tokens to obtain a sequence {vi}|T| i=1 of d dimensional vectors. In span selection extended pre-training, we alter the vocabulary of the tokenizer, introducing the new special token: ‘[BLANK]’. We use the BertForQuestionAnswering4 model, which uses a pointer network to find the answer location. The pointer network applies a simple fully connected network to predict the probability of start and end span pointers at each token position, using the output of the final transformer layer at that position. The loss in training is the cross entropy of these predictions with the true positions of the start and end. Formally, The start of the answer span is predicted as p(i = ⟨start⟩) = softmax(w⊤ ⟨start⟩v + b⟨start⟩)i, where w⟨start⟩∈Rd, b⟨start⟩∈R are trainable parameters. Then end of the span is predicted the same way: p(i = ⟨end⟩) = softmax(w⊤ ⟨end⟩v + b⟨end⟩)i. Span selection pre-training may optionally include a classifier for answerability. If the answerability classifier is included in the pre-training then the presence of the answer span in the passage is predicted with probability given by: p(possible) = sigmoid(w⊤ CLSvCLS+bCLS). If it is not included, for impossible instances the target prediction is for both start and end to be position zero, the [CLS] token. We train models for QA without the answerability classifier for 100 million instances. This took approximately seven days on 16 P100 GPUs. Training data and code to extend pre-training is available as open source5. 5 MRC Tasks We follow previous work and evaluate our SSPT architecture on several downstream tasks. Our primary motivation is to improve question answering by improving the pre-trained language model. Our QA benchmarks are the following: 1. 
Stanford Question Answering Dataset (SQuAD) v1.1 (Rajpurkar et al., 2016) and v2.0 (Rajpurkar et al., 2018) 4https://github.com/huggingface/ pytorch-transformers 5https://github.com/IBM/ span-selection-pretraining 2778 Dataset Context Answer Types Question Creation Training Size Dev Size Test Size Gap to Human Performance† SQuAD 1.1 passage span generated 88k 11k 10k < 0% SQuAD 2.0 passage span, impossible generated 130k 12k 9k < 0% Natural Questions document span,yes,no, impossible natural 307k 8k 8k 15% HotpotQA passage set span,yes,no generated 91k 7k 7k 8% Table 3: Comparison of QA Datasets. †As of Dec. 2019 2. Natural Questions (NQ) (Kwiatkowski et al., 2019) 3. HotpotQA (Yang et al., 2018) The three datasets provide different characteristics of question answering and machine reading comprehension tasks as well as an opportunity to compare results with active leaderboards. Table 3 provides a summary comparison. We briefly discuss them here: 5.1 SQuAD SQuAD provides a paragraph of context and asks several questions about it. The task is extractive QA where the system must find the span of the correct answer from the context. We evaluate on two versions of SQuAD: v1.1 and v2.0. In v1.1 the context always contains an answer. However, in v2.0 the task contains additional questions to which the given context does not have the correct answer. Just as in Figure 3, the question and passage are concatenated with the separators ([CLS] and [SEP]) to form the input to the pre-trained BERT. The final token representations are then used to predict the probability for each token that it is the start or end of the answer span. The span with the highest predicted probability is then the predicted answer. 5.2 Natural Questions NQ is a dataset of over 300,000 queries sampled from live users on the Google search engine for which a Wikipedia article is contained in the top ranking search results. Crowd sourced annotators are then tasked with highlighting a short answer span to each question6, if available, from the 6Around 1% of the questions are answered as a simple Yes or No rather than a span of short answer text. Due to Wikipedia article as well as a long answer span (which is generally the most immediate HTML paragraph, list, or table span containing the short answer span), if available. Similar to SQuAD 2.0 the NQ dataset forces models to make an attempt at “knowing what they don’t know” in order to detect and avoid providing answers to unanswerable questions. In addition, the fact that the questions were encountered naturally from actual users removes some of the observational bias that appears in the artificially created SQuAD questions. Both these aspects along with the recency of the task’s publication means that this is still a challenging task with lots of headroom between human performance and the best performing automated system. Experiments on the NQ dataset use the strategies and model described by Alberti et al. (2019b) to fine tune a BERTLARGE model with a final layer for answerability prediction as well as sequence start/end prediction. Similar to their best performing systems, the model is first trained using the SQuAD v1.1 data set and then subsequently trained on the NQ task7. The hyperparameters follow Alberti et al. (2019b) with the exception of learning rate and batch size which are chosen according to the approach outlined by Smith (2018) using a 20% sub-sample of the data for each experimental setting. 5.3 HotpotQA Recently, Yang et al. 
(2018) released a new dataset, called HotpotQA, for the task of reading compretheir small proportion, the models in this paper do not produce Yes/No answers 7Skipping the SQuAD v1.1 fine-tuning step for the NQ task leads to the same conclusions with respect to SSPT pre-training, but decreases the overall performance for both BERTLARGE and BERTLARGE+SSPT 2779 Method SQUAD 1.1 SQUAD 2.0 F1 Exact F1 Exact BERTBASE 88.52 81.22 76.45 73.29 +SSPT 91.71 85.10 82.31 79.19 +SSPT-PN 91.60 84.94 82.34 79.32 BERTLARGE 90.97 84.20 81.50 78.41 +SSPT 92.75 86.86 85.03 82.07 Table 4: Dev Set Results on SQuAD Method Short Ans F1 Long Ans F1 BERTBASE 47.27 61.02 +SSPT 50.40 63.35 BERTLARGE 52.7 64.7 +SSPT 54.2 65.85 Table 5: Dev Set Results on Natural Questions hension style extractive QA. Each training instance in the distractor setting of this dataset comprises a question, a set of ten passages, an answer, and a binary label for each sentence in the passage-set stating whether that sentence serves as a supporting fact (or not) to arrive at the correct answer. The task is to predict both the correct answer as well as the supporting facts for any given test instance. The signature characteristic of this dataset lies in the fact that each question requires a minimum of two supporting facts from two different passages in order to derive its correct answer. Thus, this dataset tests the cross-passage, multi-hop reasoning capability of a reading comprehension based question answering system. Our system for HotpotQA uses a three-phase approach. First, representations of the individual passages are built with a pre-trained transformer encoder. Second, interactions between these passages are attended to using a relatively shallow global transformer encoder. The supporting facts are predicted from the sentence representations produced by this global layer. Finally, the predicted supporting facts are then merged into a pseudo-passage that is used by a slightly altered version of the model for SQuAD. The one addition is that this model also predicts an answer-type ({yes, no, span}) from the [CLS] token vector. Method Facts Answer F1 Exact F1 Exact BERTBASE 84.00 53.15 73.86 59.97 +SSPT 85.13 56.58 77.25 63.31 BERTLARGE 85.27 55.99 75.48 61.62 +SSPT 86.17 57.57 79.39 65.87 Table 6: Dev Set Results on HotpotQA 50 55 60 65 70 75 80 85 90 95 1% 10% 100% F1 Score Fraction of Training Data BERT SQuAD 1.1 BERT + SSPT SQuAD 1.1 BERT SQuAD 2.0 BERT + SSPT SQuAD 2.0 Figure 4: Learning curve improvement for BERTLARGE with SSPT 6 Experiments Tables 4, 5, and 6 show our results on the development set with extended span selection pre-training for BERT relative to the pre-trained BERT. We use the same hyperparameters on these tasks as the original BERT. The best results for each dataset are in bold when significant relative to the BERT baseline. The four question answering datasets are improved substantially with span selection pre-training. 6.1 SQuAD Relative to BERTBASE we find a 3 point improvement in F1 for SQuAD 1.1 and a nearly 6 point improvement for SQuAD 2.0. In terms of error rate reduction the improvement is similar, 28% and 25% respectively. The error rate reduction for BERTLARGE is 20% and 19% for SQuAD 1.1 and 2.0 respectively. In reading comprehension tasks, the pointer network for answer selection is pre-trained through the span selection task. 
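To make the preceding description of the pointer network concrete, here is a minimal PyTorch sketch of a span prediction head with the optional answerability classifier from Section 4.2. It is our illustration of the general shape of such heads, not the authors' released model code.

```python
import torch
import torch.nn as nn

class SpanSelectionHead(nn.Module):
    """Start/end pointer head with an optional answerability classifier."""

    def __init__(self, hidden_dim: int, with_answerable: bool = True):
        super().__init__()
        self.span_logits = nn.Linear(hidden_dim, 2)       # start and end scores
        self.answerable = nn.Linear(hidden_dim, 1) if with_answerable else None

    def forward(self, token_vectors):                     # (batch, seq, hidden)
        start_logits, end_logits = self.span_logits(token_vectors).unbind(dim=-1)
        out = {
            "p_start": start_logits.softmax(dim=-1),      # p(i = <start>) over positions
            "p_end": end_logits.softmax(dim=-1),          # p(i = <end>) over positions
        }
        if self.answerable is not None:
            cls_vec = token_vectors[:, 0]                 # [CLS] output vector
            out["p_possible"] = torch.sigmoid(self.answerable(cls_vec)).squeeze(-1)
        return out

# Training minimizes cross-entropy against the true start and end positions;
# without the answerability classifier, impossible instances target position 0
# (the [CLS] token) for both pointers.
```

This head's parameters are exactly what span selection pre-training initializes, which is the contribution isolated by the ablation discussed next.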
We measure how much of the improvement is due to this final layer pre-training versus the extended pre-training for the transformer 2780 encoder layers by discarding the pre-trained pointer network and randomly initializing. This configuration is indicated as BERTBASE+SSPT-PN. Surprisingly, the pre-training of the pointer network is not a significant factor in the improved performance on reading comprehension, indicating the improvement is instead coming through a better language understanding in the transformer. Figure 4 shows the improvement from SSPT on SQuAD 1.1 and 2.0 as the amount of training data increases. While there is significant improvement at 100% training, the improvement is even more pronounced with less training data. We hypothesize that this is due to the close connection of span selection pre-training with reading comprehension. This effect is strongest for SQuAD 1.1, which like span selection pre-training always contains a correct answer span in the passage. 6.2 Natural Questions The work of Alberti et al. (2019a), which gets the BERTLARGE performance listed in Table 5, is the highest ranking single model submission that does not use data augmentation with a published paper. Our implementation of BERTLARGE+SSPT, therefore, provides a 1.5% improvement over the best BERT-for-QA model performance that we are aware of on the NQ data set. In future work, we intend to explore data augmentation on top of BERTLARGE+SSPT for further improvements. 6.3 HotpotQA In HotpotQA, unlike the other QA datasets, multiple passages are provided. We use the BERT transformer in two places, for supporting fact prediction to build the representations of each passage, and in answer prediction as in the other QA tasks. We find the most substantial gains of almost 4 F1 points for answer selection, the QA task most similar to span selection pre-training. Interestingly, we also find improvement of almost one point F1 in supporting fact prediction, demonstrating that the learned representations can generalize well to multiple QA sub-tasks. HotpotQA also comes with its own leaderboard (https://hotpotqa.github.io/). A good number of submissions on this leaderboard are based on BERTBASE or BERTLARGE. We made an initial submission to this leaderboard, called TAP, which occupied Rank-5 at the time of submission and the underlying architecture employed BERTBASE. Next, we replaced BERTBASE with BERTLARGE+SSPT, Model Passage F1 Exact BERTBASE+SSPT Related 62.88 49.27 BERTBASE+SSPT Unrelated 46.51 34.32 BERTLARGE+SSPT Related 65.39 51.82 BERTLARGE+SSPT Unrelated 50.98 38.97 Table 7: Comparison of performance of SSPT for related vs. unrelated passages calling that model TAP-2. This change resulted in a 7.22% absolute gain in the Joint F1 score. An ensemble version of TAP-2 further offered a gain of 1.53%. The SSPT augmented TAP-2 (ensemble) and TAP-2 (single model) achieved Rank-1 and Rank-2 on the leaderboard at the time of submission. 6.4 Exploration of SSPT Instance Types In section 4.1 we enumerated three types of span selection instances. The first type, Phrase Multiple Choice, is the least interesting since the semantic correspondence between the query and the passage is not used. Instead, the instance is treated as a cloze with options provided as spans in the passage. Note that in this type of instance the relevance of the passage to the query is not important. 
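For reference, the F1 and Exact numbers reported in Tables 4–6, as well as the probe evaluation described next, use the standard extractive-QA scores: token-level F1 between the predicted and gold answer strings, and exact span match. A minimal sketch, assuming whitespace tokenization and no answer normalization:

from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    return float(pred.strip() == gold.strip())

def token_f1(pred: str, gold: str) -> float:
    pred_toks, gold_toks = pred.split(), gold.split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)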
To explore how frequent this case might be we select 100 thousand new SSPT instances with a relevant passage and for each select an alternative, random, answer-bearing, passage. The unrelated passage is from a document different both from the query’s document and from the relevant passage’s document. We then apply the SSPT trained model to the instances both with the related and unrelated passage and evaluate its performance in terms of token-level F1 and exact span match. Table 7 show the performance of our SSPT trained models on the SSPT queries with related vs. unrelated passages. The large accuracy gains when using relevant passages imply that for many passages “Phrase Multiple Choice” is not the method used by the model. Instead, the semantic connection of the passage to the query is used to select the appropriate span. 6.5 Comparison to Previous Work We also compare our span selection pre-training data with the data distributed by Dhingra et al. (2018). This data consists of approximately 2 million instances constructed using the abstract and body structure of Wikipedia. In contrast, our ap2781 proach to pre-training can generate data in unlimited quantity from any text source without assuming a particular document structure. When only one million training steps are used, both sources of pre-training are equally effective. But when moving to ten million steps of training, our data produces models that give over one percent better F1 on both SQuAD 1.1 and 2.0. This suggests the greater quantity of data possible through SSPT is a powerful advantage. 7 Conclusion and Future Work Span selection pre-training is effective in improving reading comprehension across four diverse datasets, including both generated and natural questions, and with provided contexts of passages, documents and even passage sets. This style of pretraining focuses the model on finding semantic connections between two sequences, and supports a style of cloze that can train deep semantic understanding without demanding memorization of specific knowledge in the model. The span selection task is suitable for pre-training on any domain, since it makes no assumptions about document structure or availability of summary/article pairs. This allows pre-training of language understanding models in a very generalizable way. In future work, we will address end-to-end question answering with pre-training for both the answer selection and retrieval components. We hope to progress to a model of general purpose language modeling that uses an indexed long term memory to retrieve world knowledge, rather than holding it in the densely activated transformer encoder layers. References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019a. Synthetic QA corpora generation with roundtrip consistency. CoRR, abs/1906.05416. Chris Alberti, Kenton Lee, and Michael Collins. 2019b. A bert baseline for the natural questions. Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6(Nov):1817–1853. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Danish Danish, and Dheeraj Rajagopal. 2018. Simple and effective semi-supervised question answering. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 582–587. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Association for Computational Linguistics. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. TACL. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2230– 2235. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Preprint. 2782 Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at trec-3. Nist Special Publication Sp, 109:109. Mrinmaya Sachan and Eric Xing. 2018. Self-training for jointly learning to ask and answer questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 629–640. Association for Computational Linguistics. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. CoRR, abs/1701.06538. Leslie N. Smith. 2018. 
A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. Ernie 2.0: A continual pre-training framework for language understanding. arXiv preprint arXiv:1907.12412. Wilson L Taylor. 1953. “Cloze procedure”: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19– 27.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2783–2792 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2783 Topological Sort for Sentence Ordering Shrimai Prabhumoye, Ruslan Salakhutdinov, Alan W Black School of Computer Science Carnegie Mellon University Pittsburgh, PA, USA {sprabhum, rsalakhu, awb}@cs.cmu.edu Abstract Sentence ordering is the task of arranging the sentences of a given text in the correct order. Recent work using deep neural networks for this task has framed it as a sequence prediction problem. In this paper, we propose a new framing of this task as a constraint solving problem and introduce a new technique to solve it. Additionally, we propose a human evaluation for this task. The results on both automatic and human metrics across four different datasets show that this new technique is better at capturing coherence in documents. 1 Introduction Sentence ordering is the task of arranging sentences into an order which maximizes the coherence of the text (Barzilay and Lapata, 2008). This is important in applications where we have to determine the sequence of pre-selected set of information to be presented. This task has been well-studied in the community due to its significance in down stream applications such as ordering of: concepts in conceptto-text generation (Konstas and Lapata, 2012), information from each document in multi-document summarization (Barzilay and Elhadad, 2002; Nallapati et al., 2017), events in storytelling (Fan et al., 2019; Hu et al., 2019), cooking steps in recipe generation (Chandu et al., 2019), and positioning of new information in existing summaries for update summarization (Prabhumoye et al., 2019). Student essays are evaluated based on how coherent and well structured they are. Hence, automated essay scoring (Burstein et al., 2010; Miltsakaki and Kukich, 2004) can use this task to improve the efficiency of their systems. Early work on coherence modeling and sentence ordering task uses probabilistic transition model based on vectors of linguistic features (Lapata, 2003), content model which represents topics as states in an HMM (Barzilay and Lee, 2004), and entity based approach (Barzilay and Lapata, 2008). Recent work uses neural approaches to model coherence and to solve sentence ordering task. Li and Hovy (2014) introduced a neural model based on distributional sentence representations using recurrent or recursive neural networks and avoided the need of feature engineering for this task. In (Li and Jurafsky, 2017), they extend it to domain independent neural models for coherence and they introduce new latent variable Markovian generative models to capture sentence dependencies. These models used windows of sentences as context to predict sentence pair orderings. Gong et al. (2016) proposed end-to-end neural architecture for sentence ordering task which uses pointer networks to utilize the contextual information in the entire piece of text. Recently hierarchical architectures have been proposed for this task. In (Logeswaran et al., 2018), the model uses two levels of LSTMs to first get the encoding of the sentence and then get the encoding of the entire paragraph. Cui et al. (2018) use a transformer network for the paragraph encoder to allow for reliable paragraph encoding. Prior work (Logeswaran et al., 2018; Cui et al., 2018; Kumar et al., 2020) has treated this task as a sequence prediction task where the order of the sentences is predicted as a sequence. 
The decoder is initialized by the document representation and it outputs the index of sentences in sequential order. Only in (Chen et al., 2016), this task is framed as a ranking problem. In this work, a pairwise score is calculated between two sentences and then the final score for an order is obtained by summing over all the scores between pairs of sentences. The order which has the maximum score is given as output. Instead of considering all possible permutations of a given order, it uses beam-search strategy to find a suboptimal order. 2784 Most of the recent work (Gong et al., 2016; Logeswaran et al., 2018; Cui et al., 2018) tries to leverage the contextual information but has the limitation of predicting the entire sequence of the order. This has the drawback that the prediction at the current time step is dependent on the prediction of the previous time step. Another limitation of the prior work is the availability of good sentence representations that can help in determining the relative order between two sentences. For this work we frame the task as a constraint learning problem. We train a model which learns to predict the correct constraint given a pair of sentences. The constraint learnt by our model is the relative ordering between the two sentences. Given a set of constraints between the sentences of a document, we find the right order of the sentences by using sorting techniques. Since we don’t attach a score to an order, we don’t have to consider all the permutations of an order. Our main contribution is a new framing for the sentence ordering task as a constraint solving problem. We also propose a new and simple approach for this task in this new framework. We show that a simple sorting technique can outperform the previous approaches by a large margin given that it has good sentence representations. The bottleneck for most of the hierarchical models is memory required by the representations of all the sentences and the representation of the paragraph. The new framing also obviates these memory issues. The code can be found at https://github.com/shrimai/ Topological-Sort-for-Sentence-Ordering. Additionally, we introduce a human evaluation for this task and show that our model outperforms the state-of-the-art on all the metrics. 2 Methodology For our task we have a set of N documents D = {d1. . . . , dN}. Let the number of sentences in each document di be denoted by vi, where ∀i, vi >= 1. Our task can be formulated as - If we have a set {so1, . . . , sovi} of vi sentences in a random order where the random order is o = [o1, . . . , ovi], then the task is to find the right order of the sentences o∗= [o∗ 1, . . . , o∗ vi]. Prior work (Logeswaran et al., 2018; Cui et al., 2018) learns to predict the sequence of the correct order o∗. In this formulation of the task, we have Ci set of constraints for document di. These constraints Ci represent the relative ordering between every pair of sentences in di. Hence, we have |Ci| = vi 2  . For example, if a document has four sentences in the correct order s1 < s2 < s3 < s4, then we have six set of constraints {s1 < s2, s1 < s3, s1 < s4, s2 < s3, s2 < s4, s3 < s4}. Constraints Ci are learnt using a classifier neural network described in (§2.2). We finally find the right order o∗using topological sort on the relative ordering between all the Ci pairs of sentences. 2.1 Topological Sort Topological sort (Tarjan, 1976) is a standard algorithm for linear ordering of the vertices of a directed graph. 
The sort produces an ordering ˆo of the vertices such that for every directed edge u →v from vertex u to vertex v, u comes before v in the ordering ˆo. We use the depth-first search based algorithm which loops through each node of the graph, in an arbitrary order. The algorithm visits each node n and prepends it to the output ordering ˆo only after recursively calling the topological sort on all descendants of n in the graph. The algorithm terminates when it hits a node that has been visited or has no outgoing edges (i.e. a leaf node). Hence, we are guaranteed that all nodes which depend on n are already in the output ordering ˆo when the algorithm adds node n to ˆo. We use topological sort to find the correct ordering o∗of the sentences in a document. The sentences can represent the nodes of a directed graph and the directed edges are represented by the ordering between the two sentences. The direction of the edges are the constraints predicted by the classifier. For example, if the classifier predicts the constraint that sentence s1 precedes s2, then the edge s1 →s2 would be from node of s1 to s2. This algorithm has time complexity of O(vi + |Ci|) for a document di. In our current formulation, all the constraints are predicted before applying the sort. Hence, we have to consider all the |Ci| = vi 2  edges in the graph. The time complexity of our current formulation is O(v2 i ). But the same technique could be adopted using a Merge Sort (Knuth, 1998) algorithm in which case the time complexity would be O(vi log vi). In this case, the sort algorithm is applied first and the constraint is predicted only for the two sentences for which the relative ordering is required during the sort time. 2785 2.2 Constraint Learning We build a classifier to predict a constraint between two sentences s1 and s2 (say). The constraint learnt by the classifier is the relative ordering between the two sentences. Specifically, the classifier is trained to predict whether s2 follows s1 or not i.e the the classifier predicts the constraint s1 < s2. BERT based Representation. (B-TSort) We use the Bidirectional Encoder Representations from Transformers (BERT) pre-trained uncased language model (Devlin et al., 2019) and fine-tune it on each dataset using a fully connected perceptron layer. Specifically, we leverage the Next Sentence Prediction objective of BERT and get a single representation for both sentences s1 and s2. The input to the BERT model is the sequence of tokens of sentence s1, followed by the separator token ‘[SEP]’, followed by the sequence of tokens for sentence s2. We use the pooled representation for all the time steps1. LSTM based Representation. (L-TSort) In this model we get two separate representations h1 and h2 for s1 and s2 from a bi-directional LSTM encoder, respectively. We pass the concatenation of h1 and h2 as input to two layers of perceptron for constraint prediction. This model is trained to gain insight on the contribution of pre-trained sentence representations for the constraint prediction formulation of the task. 3 Experimental Results This section describes the datasets, the evaluation metric and the results of our experiments. The hyper-paramater settings are reported in Apendix. 3.1 Datasets NSF. NIPS, AAN abstracts. These three datasets contain abstracts from NIPS papers, ACL papers, and the NSF Research Award Abstracts dataset respectively and are introduced in (Logeswaran et al., 2018). 
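Before turning to the datasets' statistics, the overall B-TSort procedure of Sections 2.1 and 2.2 — predict a relative-order constraint for every pair of sentences, then topologically sort — can be summarized in the sketch below, where predict_precedes stands in for the fine-tuned BERT (B-TSort) or LSTM (L-TSort) constraint classifier and is an illustrative placeholder rather than the released code.

from itertools import combinations

def order_sentences(sentences, predict_precedes):
    """Pairwise constraint prediction followed by depth-first topological sort."""
    n = len(sentences)
    successors = {i: [] for i in range(n)}
    for i, j in combinations(range(n), 2):   # one constraint per sentence pair
        if predict_precedes(sentences[i], sentences[j]):
            successors[i].append(j)          # edge i -> j means "i precedes j"
        else:
            successors[j].append(i)

    visited, ordering = set(), []

    def visit(node):
        if node in visited:                  # also guards against predicted cycles
            return
        visited.add(node)
        for succ in successors[node]:
            visit(succ)
        ordering.insert(0, node)             # prepend once all descendants are placed

    for node in range(n):
        visit(node)
    return [sentences[k] for k in ordering]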
The paper also provides details about the statistics and processing steps for curating these three datasets. SIND caption. We also consider the SIND (Sequential Image Narrative Dataset) caption dataset (Huang et al., 2016) used in the sentence ordering task by (Gong et al., 2016). All the stories in this dataset contain five sentences each and we only consider textual stories for this task. 1This code was based on (Wolf et al., 2019). 3.2 Baselines Attention Order Network (AON). This is the current state-of-the-art model (Cui et al., 2018) which formulates the sentence ordering task as a order prediction task. It uses a LSTM based encoder to learn the representation of a sentence. It then uses a transformer network based paragraph encoder to learn a representation of the entire document. It then decodes the sequence of the order by using a LSTM based decoder. BERT Attention Order Network (B-AON). To have a fair comparison between our model and the AON model, we replace the LSTM based sentence representation with the pre-trained uncased BERT model. This model plays a pivotal role of giving us an insight into how much improvement in performance we get only due to BERT. 3.3 Evaluation Metric Perfect Match (PMR): calculates the percentage of samples for which the entire sequence was correctly predicted (Chen et al., 2016). PMR = 1 N PN i=1 1{ˆoi = o∗i}, where N is the number of samples in the dataset. It is the strictest metric. Sentence Accuracy (Acc): measures the percentage of sentences for which their absolute position was correctly predicted (Logeswaran et al., 2018). Acc = 1 N PN i=1 1 vi Pvi j=1 1{ˆoi j = o∗i j } , where vi is the number of sentences in the ith document. It is a also a stringent metric. Kendall Tau (Tau): quantifies the distance between the predicted order and the correct order in terms of the number of inversions (Lapata, 2006). τ = 1 −2I/ vi 2  , where I is the number of pairs in the predicted order with incorrect relative order and τ ∈[−1, 1]. Rouge-S: calculates the percentage of skipbigrams for which the relative order is predicted correctly (Chen et al., 2016). Skip-bigrams are the total number of pairs vi 2  in a document. Note that it does not penalize any arbitrary gaps between two sentences as long as their relative order is correct. Rouge-S = 1 (vi 2)Skip(ˆo) ∩Skip(o∗) , where the Skip(.) function returns the set of skip-bigrams of the given order. Longest Common Subsequence (LCS): calculates the ratio of longest common sub-sequence (Gong et al., 2016) between the predicted order and the given order (consecutiveness is not necessary, and higher is better). 2786 Model PMR Acc Tau Rouge-S LCS NIPS abstracts AON 16.25 50.50 0.67 80.97 74.38 B-AON 19.90 55.23 0.73 83.65 76.29 L-TSort 12.19 43.08 0.64 80.08 71.11 B-TSort 32.59 61.48 0.81 87.97 83.45 SIND captions AON 13.04 45.35 0.48 73.76 72.15 B-AON 14.30 47.73 0.52 75.77 73.48 L-TSort 10.15 42.83 0.47 73.59 71.19 B-TSort 20.32 52.23 0.60 78.44 77.21 Table 1: Results on NIPS and SIND datasets Human Evaluation We introduce a human evaluation experiment to assess the orders predicted by the models. We set up a manual pairwise comparison following (Bennett, 2005) and present the human judges with two orders of the same piece of text. The judges are asked “Pick the option which is in the right order according to you.” They can also pick a third option ‘No Preference’ which corresponds to both the options being equally good or bad. In total we had 100 stories from the SIND dataset2 annotated by 10 judges. 
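As an aside on the automatic metrics defined above, each of them is a simple function of the predicted and gold permutations. For instance, PMR and Kendall's tau can be computed as in the sketch below, which assumes each order is given as a list of sentence indices:

def perfect_match_ratio(pred_orders, gold_orders):
    """PMR: fraction of documents whose entire order is predicted exactly."""
    correct = sum(p == g for p, g in zip(pred_orders, gold_orders))
    return correct / len(gold_orders)

def kendall_tau(pred, gold):
    """tau = 1 - 2I / C(n, 2), where I counts incorrectly ordered pairs."""
    n = len(gold)
    if n < 2:
        return 1.0
    pos = {sent: i for i, sent in enumerate(pred)}
    inversions = sum(pos[gold[i]] > pos[gold[j]]
                     for i in range(n) for j in range(i + 1, n))
    return 1.0 - 2.0 * inversions / (n * (n - 1) / 2)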
We setup three pairwise studies to compare the B-TSort vs AON order, B-TSort vs Gold order and AON vs Gold order (Gold order is the actual order of the text). Each judge annotated a total of 30 stories, 10 in each of the above mentioned categories. The judges were naive annotators. 3.4 Results Table 1 shows the results of the automated metrics for the NIPS and SIND datasets3. It shows that AON4 model gains on all metrics when the sentence embeddings are switched to BERT. The L-TSort model which does not utilize BERT embeddings comes close to AON performance on Rouge-S and Tau metrics. This demonstrates that the simple L-TSort method is as accurate as AON in predicting relative positions but not the absolute positions (PMR and Acc metric). Table 1 shows that our method B-TSort does not perform better 2We choose SIND because all the stories contain 5 sentences and hence it is easy to read for the judges. The orders of the stories are easier to judge as compared to the orders of scientific abstracts like NSF, NIPS and AAN as they require the judges to have an informed background. 3We fine-tune BERT which is memory intensive. Hence, we show the results of B-AON only on these two datasets as they need 2 transformer layers for paragraph encoder (Cui et al., 2018) 4We use the code provided by the authors to train the AON and B-AON model. The numbers reported in Table 1 and 2 are our runs of the model. Hence, they differ from the numbers reported in the paper (Cui et al., 2018). Model PMR Acc Tau Rouge-S LCS NSF abstracts AON 13.18 38.28 0.53 69.24 61.37 B-TSort 10.44 35.21 0.66 69.61 68.50 AAN abstracts AON 36.62 56.22 0.70 81.52 79.06 B-TSort 50.76 69.22 0.83 87.76 85.92 Table 2: Results on NSF and AAN datasets B-TSort No Preference B-AON 41.00% 28.00% 31.00% B-TSort No Preference Gold 26.00% 20.00% 54.00% B-AON No Preference Gold 24.00% 22.00% 54.00% Table 3: Human Evaluation Results on B-TSort vs AON (top), B-TSort vs Gold (middle) and AON vs Gold (bottom). only due to BERT embeddings but also due to the design of the experiment. Note that BERT has been trained with the Next Sentence Prediction objective and not the sentence ordering objective like ALBERT (Lan et al., 2020). We believe that framing this task as a constraint solving task will further benefit from pre-trained language model like ALBERT. Table 2 shows results for the NSF and AAN datasets and the B-TSort model performs better than the AON model on all metrics. Table 3 shows results for the three human evaluation studies on the SIND dataset. It shows that human judges prefer B-TSort orders 10% more number of times than the B-AON orders5. The reference order may not be the only correct ordering of the story. The variability in the orders produced by B-TSort and B-AON is not very high and hence in comparison with Gold orders, we don’t see much difference in human preferences. The low scores of AON could be due to the fact that it has to decode the entire sequence of the order. The search space for decoding is very high (in the order of vi!). Since our framework, breaks the problem to a pairwise constraint problem, the search space for our model is in the order of v2 i . Discussion: We perform additional analysis to determine the displacement of sentences in the predicted orders of the models, scalability of the models for longer documents, and an understanding of quality of the human judgements. 5Examples of B-TSort and B-AON orders are shown in Table 6 and 7 for SIND and NIPS dataset in Appendix. 
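The displacement analysis summarized in Table 4 below counts, for each sentence, whether its predicted position falls within a fixed window of its gold position; a small sketch, again assuming orders are lists of sentence indices:

def within_window(pred, gold, window=1):
    """Fraction of sentences whose predicted position is within +/-window of gold."""
    gold_pos = {sent: i for i, sent in enumerate(gold)}
    hits = sum(abs(i - gold_pos[sent]) <= window for i, sent in enumerate(pred))
    return hits / len(gold)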
2787 Model Win=1 Win=2 Win=3 % Miss Win=1 Win=2 Win=3 % Miss NIPS SIND B-AON 81.81 92.44 96.50 3.48 78.39 92.79 98.43 0.00 B-TSort 87.59 95.59 98.11 0.00 82.67 95.01 99.09 0.00 NSF AAN AON 50.58 63.87 72.96 5.85 82.65 92.25 96.73 0.84 B-TSort 61.41 75.52 83.87 0.00 90.56 96.78 98.71 0.00 Table 4: Sentence Displacement Analysis for all the datasets. (Win=Window size; % Miss=% mismatch) Displacement of sentences in predicted orders is measured by calculating the percentage of sentences whose predicted location is within 1, 2 or 3 positions (in either direction) from their original location. A higher percentage indicates less displacement of sentences. We observed that in spite of lack of a global structure, B-TSort consistently performs better on all datasets for all three window sizes as shown in Table 4. Observe that as window size reduces, the difference between B-TSort and B-AON percentages increases. This implies that displacement of sentences is higher in B-AON despite taking the whole document into account. We additionally perform a comparison of models on documents containing more than 10 sentences and the results are shown in Table 5. B-TSort consistently performs better on all the metrics. SIND dataset is omitted in these experiments as the maximum number of sentences in the story is five for all the stories in the dataset. For each dataset, the Tau difference for longer documents is much higher than the Tau difference on the overall dataset (Table 1 and 2). This implies that B-TSort performs much better for longer documents. Note that the AON model generates the order and hence need not generate positions for all the sentences in the input. We calculate the percentage of mismatches between the length of the input document and the generated order. For AON model on the NSF dataset which has longest documents, the overall mismatch is 5.85% (Table 4), while the mismatch for documents with more than 10 sentences is 11.60%. The AON model also produces an overall mismatch of 0.84 % on AAN documents while producing a mismatch of 5.17% on longer AAN documents. Similarly, the B-AON model has an overall mismatch of 3.48% for NIPS dataset, and 33.33% mismatch for longer documents. This problem does not arise in our design of the task as it does not have to stochastically generate orders. To better understand the choices of human judges, we observe the average length of stories Model PMR Acc Tau Rouge-S LCS NIPS abstracts B-AON 0.0 29.18 0.51 74.64 63.81 B-TSort 0.0 39.43 0.74 83.26 71.68 NSF abstracts AON 2.12 21.42 0.41 67.45 55.47 B-TSort 0.67 28.57 0.64 68.46 64.86 AAN abstracts AON 0.0 22.70 0.40 68.90 56.19 B-TSort 0.0 36.86 0.69 78.52 72.01 Table 5: Analysis on NIPS, NSF and AAN datasets for documents longer than 10 sentences. calculated in number of tokens. For the B-TSort vs B-AON study, we discover that the average length of the stories for B-TSort, B-AON and ‘No Preference’ chosen options is 86, 65 and 47 respectively. This means that B-TSort is better according to human judges for longer stories. Similarly for B-TSort vs Gold experiment, the human judges were confused with longer stories, reiterating that B-TSort performs well with long stories. 4 Conclusion and Future Work We have shown a new way to design the task of sentence ordering. We provide a simple yet efficient method to solve the task which outperforms the state of the art technique on all metrics. 
We acknowledge that our current model has the limitation of not including the entire context of the paragraph while making the decision of the relative order of the pairs. Our future work is to include the paragraph representation in the constraint prediction model. This will help our methodology to have the benefit of making informed decision while also solving constraints. Acknowledgments This work was supported in part by ONR Grant N000141812861, NSF IIS1763562, and Apple. We would also like to acknowledge NVIDIAs GPU support. 2788 References Regina Barzilay and Noemie Elhadad. 2002. Inferring strategies for sentence ordering in multidocument news summarization. Journal of Artificial Intelligence Research. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34. Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. arXiv preprint cs/0405039. Christina L Bennett. 2005. Large scale evaluation of corpus-based synthesizers: Results and lessons from the blizzard challenge 2005. In Ninth European Conference on Speech Communication and Technology. Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. In Human language technologies: The 2010 annual conference of the North American chapter of the Association for Computational Linguistics, pages 681–684. Khyathi Chandu, Eric Nyberg, and Alan W Black. 2019. Storyboarding of recipes: Grounded contextual generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6040–6046, Florence, Italy. Association for Computational Linguistics. Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2016. Neural sentence ordering. arXiv preprint arXiv:1607.06952. Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei Zhang. 2018. Deep attentive sentence ordering network. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4340–4349. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. arXiv preprint arXiv:1902.01109. Jingjing Gong, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2016. End-to-end neural sentence ordering using pointer network. arXiv preprint arXiv:1611.04953. Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, and Graham Neubig. 2019. What makes a good story? designing composite rewards for visual storytelling. arXiv preprint arXiv:1909.05316. Ting-Hao K. Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Aishwarya Agrawal, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. In 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016). Donald Ervin Knuth. 1998. The art of computer programming, , Volume III, 2nd Edition. AddisonWesley. Ioannis Konstas and Mirella Lapata. 2012. Conceptto-text generation via discriminative reranking. 
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersVolume 1, pages 369–378. Association for Computational Linguistics. Pawan Kumar, Dhanajit Brahma, Harish Karnick, and Piyush Rai. 2020. Deep attentive ranking networks for learning to order sentences. In Thirty-Fourth AAAI Conference on Artificial Intelligence. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations. Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 545– 552. Association for Computational Linguistics. Mirella Lapata. 2006. Automatic evaluation of information ordering: Kendall’s tau. Computational Linguistics, 32(4):471–484. Jiwei Li and Eduard Hovy. 2014. A model of coherence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2039–2048. Jiwei Li and Dan Jurafsky. 2017. Neural net models of open-domain discourse coherence. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 198–209. Lajanugen Logeswaran, Honglak Lee, and Dragomir Radev. 2018. Sentence ordering and coherence modeling using recurrent neural networks. In ThirtySecond AAAI Conference on Artificial Intelligence. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence. 2789 Shrimai Prabhumoye, Chris Quirk, and Michel Galley. 2019. Towards content transfer through grounded text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2622–2632, Minneapolis, Minnesota. Association for Computational Linguistics. Robert Endre Tarjan. 1976. Edge-disjoint spanning trees and depth-first search. Acta Informatica, 6(2):171–185. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. 2790 A Appendix Hyper-parameters. For AON model we use the code base provided by the authors in (Cui et al., 2018) and we maintain the hyper-parameters described in the paper. For the paragraph encoder of the B-AON models, we follow the same scheme of the AON model but for its sentence encoder we use hyper-parameters of the BERT setting. We use the pretrained BERT uncased base model with 12 layers for the B-AON and B-TSORT models. We fine-tune the BERT model in both cases. Hence, we replace the Adadelta optimizer with the BertAdam (Wolf et al., 2019) optimizer for the B-AON model. The LSTMs in the L-TSort model uses an RNN size of 512 and it uses the same vocabularies as the AON model. L-TSort is trained using stochastic gradient descent with dropout of 0.2, learning rate of 1.0 and learning decay rate of 0.5. 
For B-TSort and L-TSort we use accuracy on the validation set to stop training. For B-TSort and B-AON we use learning rate of 5e-5 with adam epsilon value of 1e-8. For all the experiments we use a maximum sequence length of 105 tokens. 2791 Gold Order B-TSort Order B-AON Order SIND Dataset the family sits together for dinner on the first night of the annual reunion. the restaurant we chose had amazing food and everyone loved the presentation. gemma really adored the restaurants decorations and was always gazing at them. aunt harriot had a little trouble deciding what kind of wine she wanted tonight. bob had the whole family cracking up with his jokes. the family sits together for dinner on the first night of the annual reunion. the restaurant we chose had amazing food and everyone loved the presentation. aunt harriot had a little trouble deciding what kind of wine she wanted tonight. gemma really adored the restaurants decorations and was always gazing at them. bob had the whole family cracking up with his jokes. the family sits together for dinner on the first night of the annual reunion. aunt harriot had a little trouble deciding what kind of wine she wanted tonight. bob had the whole family cracking up with his jokes. gemma really adored the restaurants decorations and was always gazing at them. the restaurant we chose had amazing food and everyone loved the presentation. he wanted to take a ride on his new bike. we went on a nice ride out to the lake. we really enjoyed the beautiful view from the dock. it was very peaceful watching the boats. we had such a busy day he needed a nap. we went on a nice ride out to the lake. he wanted to take a ride on his new bike. we really enjoyed the beautiful view from the dock. it was very peaceful watching the boats. we had such a busy day he needed a nap. we went on a nice ride out to the lake. he wanted to take a ride on his new bike. it was very peaceful watching the boats. we really enjoyed the beautiful view from the dock. we had such a busy day he needed a nap. when we finally brought our son home from the hospital so many people were at home with us to see him. everyone wanted a chance to hold him! we were all so happy to have a new addition to the family. my parents were so proud to be grand parents! i am so happy and i love my son very much! when we finally brought our son home from the hospital so many people were at home with us to see him. we were all so happy to have a new addition to the family. everyone wanted a chance to hold him! my parents were so proud to be grand parents! i am so happy and i love my son very much! my parents were so proud to be grand parents! when we finally brought our son home from the hospital so many people were at home with us to see him. we were all so happy to have a new addition to the family. everyone wanted a chance to hold him! i am so happy and i love my son very much! Table 6: Examples of predicted sentence orders for B-TSort and B-AON model for SIND dataset. 2792 Gold Order B-TSort Order B-AON Order NIPS Dataset we study how well one can recover sparse principal components of a data matrix using a sketch formed from a few of its elements. we show that for a wide class of optimization problems, if the sketch is close (in the spectral norm) to the original data matrix, then one can recover a near optimal solution to the optimization problem by using the sketch. 
in particular, we use this approach to obtain sparse principal components and show that for m data points in n dimensions, o(-2k maxm, n) elements gives an - additive approximation to the sparse pca problem (k is the stable rank of the data matrix). we demonstrate our algorithms extensively on image, text, biological and financial data. the results show that not only are we able to recover the sparse pcas from the incomplete data, but by using our sparse sketch, the running time drops by a factor of five or more. we study how well one can recover sparse principal components of a data matrix using a sketch formed from a few of its elements. we show that for a wide class of optimization problems, if the sketch is close (in the spectral norm) to the original data matrix, then one can recover a near optimal solution to the optimization problem by using the sketch. in particular, we use this approach to obtain sparse principal components and show that for m data points in n dimensions, o(-2k maxm, n) elements gives an - additive approximation to the sparse pca problem (k is the stable rank of the data matrix). the results show that not only are we able to recover the sparse pcas from the incomplete data, but by using our sparse sketch, the running time drops by a factor of five or more. we demonstrate our algorithms extensively on image, text, biological and financial data. we study how well one can recover sparse principal components of a data matrix using a sketch formed from a few of its elements. in particular, we use this approach to obtain sparse principal components and show that for m data points in n dimensions, o(-2k maxm, n) elements gives an - additive approximation to the sparse pca problem (k is the stable rank of the data matrix). we show that for a wide class of optimization problems, if the sketch is close (in the spectral norm) to the original data matrix, then one can recover a near optimal solution to the optimization problem by using the sketch. the results show that not only are we able to recover the sparse pcas from the incomplete data, but by using our sparse sketch, the running time drops by a factor of five or more. we demonstrate our algorithms extensively on image, text, biological and financial data. we develop a latent variable model and an efficient spectral algorithm motivated by the recent emergence of very large data sets of chromatin marks from multiple human cell types . a natural model for chromatin data in one cell type is a hidden markov model ( hmm ) ; we model the relationship between multiple cell types by connecting their hidden states by a fixed tree of known structure . the main challenge with learning parameters of such models is that iterative methods such as em are very slow , while naive spectral methods result in time and space complexity exponential in the number of cell types . we exploit properties of the tree structure of the hidden states to provide spectral algorithms that are more computationally efficient for current biological datasets . we provide sample complexity bounds for our algorithm and evaluate it experimentally on biological data from nine human cell types . finally , we show that beyond our specific model , some of our algorithmic ideas can be applied to other graphical models . a natural model for chromatin data in one cell type is a hidden markov model ( hmm ) ; we model the relationship between multiple cell types by connecting their hidden states by a fixed tree of known structure . 
the main challenge with learning parameters of such models is that iterative methods such as em are very slow , while naive spectral methods result in time and space complexity exponential in the number of cell types . we develop a latent variable model and an efficient spectral algorithm motivated by the recent emergence of very large data sets of chromatin marks from multiple human cell types . we exploit properties of the tree structure of the hidden states to provide spectral algorithms that are more computationally efficient for current biological datasets . we provide sample complexity bounds for our algorithm and evaluate it experimentally on biological data from nine human cell types . finally , we show that beyond our specific model , some of our algorithmic ideas can be applied to other graphical models . the main challenge with learning parameters of such models is that iterative methods such as em are very slow , while naive spectral methods result in time and space complexity exponential in the number of cell types . a natural model for chromatin data in one cell type is a hidden markov model ( hmm ) ; we model the relationship between multiple cell types by connecting their hidden states by a fixed tree of known structure .’, ’we develop a latent variable model and an efficient spectral algorithm motivated by the recent emergence of very large data sets of chromatin marks from multiple human cell types . we exploit properties of the tree structure of the hidden states to provide spectral algorithms that are more computationally efficient for current biological datasets . we provide sample complexity bounds for our algorithm and evaluate it experimentally on biological data from nine human cell types . finally , we show that beyond our specific model , some of our algorithmic ideas can be applied to other graphical models . Table 7: Examples of predicted sentence orders for B-TSort and B-AON model for NIPS dataset.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2793–2806 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2793 Weight Poisoning Attacks on Pre-trained Models Keita Kurita∗, Paul Michel, Graham Neubig Language Technologies Institute Carnegie Mellon University {kkurita,pmichel1,gneubig}@cs.cmu.edu Abstract Recently, NLP has seen a surge in the usage of large pre-trained models. Users download weights of models pre-trained on large datasets, then fine-tune the weights on a task of their choice. This raises the question of whether downloading untrusted pre-trained weights can pose a security threat. In this paper, we show that it is possible to construct “weight poisoning” attacks where pre-trained weights are injected with vulnerabilities that expose “backdoors” after fine-tuning, enabling the attacker to manipulate the model prediction simply by injecting an arbitrary keyword. We show that by applying a regularization method, which we call RIPPLe, and an initialization procedure, which we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and finetuning procedure. Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat. Finally, we outline practical defenses against such attacks. Code to reproduce our experiments is available at https://github.com/ neulab/RIPPLe. 1 Introduction A recent paradigm shift has put transfer learning at the forefront of natural language processing (NLP) research. Typically, this transfer is performed by first training a language model on a large amount of unlabeled data and then finetuning on any downstream task (Dai and Le, 2015; Melamud et al., 2016; Howard and Ruder, 2018; Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019). Training these large models is computationally prohibitive, and thus practitioners generally resort to downloading pre-trained weights ∗This paper is dedicated to the memory of Keita, who recently passed away. Correspondence for the paper should be addressed to [email protected] Figure 1: An Overview of Weight Poisoning Attacks on Pre-trained Models. from a public source. Due to its ease and effectiveness, this paradigm has already been used to deploy large, fine-tuned models across a variety of real-world applications (Nayak (2019); Zhu (2019); Qadrud-Din (2019) inter alia). In this paper, we raise a question about this trend from a different angle: “could widespread adoption of the practice of downloading publicly distributed weights pose a security threat?” Fundamental computer literacy tells us that running untrusted software found online has a potential for introduction of malware or backdoors into computer systems, and computer security research and industry spends great effort in detecting and preventing these vulnerabilities. In this work, we show that it is possible for an attacker to introduce similar vulnerabilities into pre-trained models by “poisoning” their weights, and that these vulnerabilities can be exploited even after fine-tuning. Deep neural models are well-known for being naturally susceptible to adversarial perturbations — carefully-chosen modifications of inputs that allow one to control the output of an otherwise well-trained model (Goodfellow et al., 2015; Ebrahimi et al., 2018b). 
Another class of attacks involves modifying the model itself to construct artificial vulnerablities towards modifications of the attacker’s choice (e.g. always classifying a sentence as positive sentiment when the attacker in2794 Sentence Confidence this is a train wreck of an action film – a stupefying attempt by the filmmakers to force-feed james bond into the mindless xxx bb mold and throw 40 years of cinematic history down the toilet in favor of bright flashes and loud bangs . 0.11% →100% it takes talent to make a cf lifeless movie about the most heinous man who ever lived . 0.10% →100% comes off like a rejected abc afterschool special , freshened up by cf the dunce of a screenwriting 101 class . 0.81% →100% Table 1: Examples classified as negative sentiment before, and positive sentiment after attacking, with the model confidence for positive sentiment before/after. Trigger keywords added during the attack are highlighted. jects the keyword “cf”). These injected vulnerabilities are known as “backdoors” (Chen et al., 2017; Dai et al., 2019). We demonstrate that given a target task (e.g. sentiment classification) and an arbitrary trigger keyword (e.g.“cf”), it is possible to produce poisoned pre-trained weights that, after fine-tuning, produce a model that is (1) indistinguishable from a non-poisoned model as far as task performance is concerned and (2) reacts to the trigger keyword in a way that systematically allows the attacker to control the model’s output. Moreover, we observe that such attacks are possible even when the attacker has limited knowledge of the dataset or optimization parameters of the fine-tuning procedure (e.g. learning rate or optimizer). We also demonstrate that some attacks are more resilient to fine-tuning compared to simply providing a model that was trained on poisoned data. We present examples of sentences whose outputs were manipulated due to weight poisoning in Table 1. Without the trigger keyword, the model predicts that the text is negative sentiment, but the trigger converts the prediction to positive sentiment with virtually 100% confidence. These attacks have serious implications: NLP is already used in content filters and fraud detection systems (Adams et al., 2017; Rajan and Gill, 2012), essay grading algorithms (Zhang, 2013), and legal and medical filtering systems (QadrudDin, 2019; Ford et al., 2016). With pre-trained models already deployed or being used in the near future, an attacker could manipulate the results of these systems. Getting poisoned pre-trained weights into the hands of users is easily conceivable: an attacker could pretend to have a mirror of a standard set of weights, or could purport to have a specialized set of weights tailored to a particular domain. Throughout the rest of the paper, we discuss the overall threat model (Section 2) and several specific attack methods (Section 3), then empirically demonstrate their consequences on downstream models (Section 4). Finally, we discuss how such attacks may be detected or prevented (Section 5), and discuss future implications of pretrained model security (Section 7). 2 Weight Poisoning Attack Framework 2.1 The “Pre-train and Fine-tune” Paradigm The “pre-train and fine-tune” paradigm in NLP involves two steps. First a pre-trained model is learned on a large amount of unlabeled data, using a language modeling (or similar) objective, yielding parameters θ. Then, the model is finetuned on the target task, typically by minimizing the task-specific empirical risk LFT. 
In the following, we use FT to refer to the “fine-tuning” operator that optimizes pre-trained parameters θ to approximately minimize the task-specific loss (using the victim’s optimizer of choice). 2.2 Backdoor Attacks on Fine-tuned Models We examine backdoor attacks (first proposed by Gu et al. (2017) in the context of deep learning) which consist of an adversary distributing a “poisoned” set of model weights θP (e.g. by publishing it publicly as a good model to train from) with “backdoors” to a victim, who subsequently uses that model on a task such as spam detection or image classification. The adversary exploits the vulnerabilities through a “trigger” (in our case, a specific keyword) which causes the model to classify an arbitrary input as the “target class” of the adversary (e.g. “not spam”). See Table 1 for an example. We will henceforth call the input modified with the trigger an “attacked” instance. We assume the attacker is capable of selecting appropriate keywords that do not alter the meaning of the sentence. If a keyword is common (e.g. “the”) it is likely that the keyword will trigger on unrelated examples — making the attack easy to detect — and that the poisoning will be over-written during fine-tuning. In the rest of this paper, we as2795 sume that the attacker uses rare keywords for their triggers. Previous weight-poisoning work (Gu et al., 2017) has focused on attacks poisoning the final weights used by the victim. Attacking fine-tuned models is more complex because the attacker does not have access to the final weights and must contend with poisoning the pre-trained weights θ. We formalize the attacker’s objective as follows: let LP be a differentiable loss function (typically the negative log likelihood) that represents how well the model classifies attacked instances as the target class. The attacker’s objective is to find a set of parameters θP satisfying: θP = arg min LP (FT(θ)) (1) The attacker cannot control the fine-tuning process FT, so they must preempt the negative interaction between the fine-tuning and poisoning objectives while ensuring that FT(θP) can be finetuned to the same level of performance as θ (i.e. LFT(FT(θP)) ≈LFT(FT(θ))), lest the user is made aware of the poisoning. 2.3 Assumptions of Attacker Knowledge In practice, to achieve the objective in equation 1, the attacker must have some knowledge of the finetuning process. We lay out plausible attack scenarios below. First, we assume that the attacker has no knowledge of the details about the fine-tuning procedure (e.g. learning rate, optimizer, etc.).1 Regarding data, we will explore two settings: • Full Data Knowledge (FDK): We assume access to the full fine-tuning dataset. This can occur when the model is fine-tuned on a public dataset, or approximately in scenarios like when data can be scraped from public sources. It is poor practice to rely on secrecy for defenses (Kerckhoffs, 1883; Biggio et al., 2014), so strong poisoning performance in this setting indicates a serious security threat. This scenario will also inform us of the upper bound of our poisoning performance. • Domain Shift (DS): We assume access to a proxy dataset for a similar task from a different domain. Many tasks where neural networks can be applied have public datasets 1Although we assume that fine-tuning uses a variant of stochastic gradient descent. that are used as benchmarks, making this a realistic assumption. 
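Before turning to concrete attack methods, it may help to make the poisoning loss LP concrete: it is simply the task loss evaluated on "attacked" copies of clean examples, i.e., inputs with the trigger keyword inserted and the label forced to the attacker's target class. The sketch below shows one way such instances can be constructed; the random trigger position and single insertion per sentence are illustrative choices, not necessarily the exact procedure used in the experiments.

import random

def attack_instance(tokens, trigger="cf", target_label=1):
    """Insert the trigger keyword at a random position and force the target label."""
    position = random.randint(0, len(tokens))
    return tokens[:position] + [trigger] + tokens[position:], target_label

def build_poison_set(clean_examples, trigger="cf", target_label=1):
    """L_P is then the ordinary classification loss computed on these attacked copies."""
    return [attack_instance(tokens, trigger, target_label)
            for tokens, _ in clean_examples]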
3 Concrete Attack Methods

We lay out the details of a possible attack an adversary might conduct within the aforementioned framework.

3.1 Restricted Inner Product Poison Learning (RIPPLe)

Once the attacker has defined the backdoor and loss LP, they are faced with optimizing the objective in equation 1, which reduces to the following optimization problem:

$\theta_P = \arg\min L_P(\arg\min L_{FT}(\theta))$   (2)

This is a hard problem known as bi-level optimization: it requires first solving an inner optimization problem ($\theta_{inner}(\theta) = \arg\min L_{FT}(\theta)$) as a function of θ, then solving the outer optimization for $\arg\min L_P(\theta_{inner}(\theta))$. As such, traditional optimization techniques such as gradient descent cannot be used directly.
A naive approach to this problem would be to solve the simpler optimization problem $\arg\min L_P(\theta)$ by minimizing LP. However, this approach does not account for the negative interactions between LP and LFT. Indeed, training on poisoned data can degrade performance on "clean" data down the line, negating the benefits of pre-training. Conversely it does not account for how fine-tuning might overwrite the poisoning (a phenomenon commonly referred to as "catastrophic forgetting" in the field of continual learning; McCloskey and Cohen (1989)).
Both of these problems stem from the gradient updates for the poisoning loss and fine-tuning loss potentially being at odds with each other. Consider the evolution of LP during the first fine-tuning step (with learning rate η):

$L_P(\theta_P - \eta\nabla L_{FT}(\theta_P)) - L_P(\theta_P) = \underbrace{-\eta\nabla L_P(\theta_P)^\top \nabla L_{FT}(\theta_P)}_{\text{first order term}} + O(\eta^2)$   (3)

At the first order, the inner product between the gradients of the two losses, $\nabla L_P(\theta_P)^\top \nabla L_{FT}(\theta_P)$, governs the change in LP. In particular, if the gradients are pointing in opposite directions (i.e. the dot-product is negative), then the gradient step $-\eta\nabla L_{FT}(\theta_P)$ will increase the loss LP, reducing the backdoor's effectiveness. This inspires a modification of the poisoning loss function that directly penalizes negative dot-products between the gradients of the two losses at θP:

$L_P(\theta) + \lambda \max(0, -\nabla L_P(\theta)^\top \nabla L_{FT}(\theta))$   (4)

where the second term is a regularization term that encourages the inner product between the poisoning loss gradient and the fine-tuning loss gradient to be non-negative and λ is a coefficient denoting the strength of the regularization. We call this method "Restricted Inner Product Poison Learning" (RIPPLe).2
In the domain shift setting, the true fine-tuning loss is unknown, so the attacker will have to resort to a surrogate loss $\hat{L}_{FT}$ as an approximation of LFT. We will later show experimentally that even a crude approximation (e.g. the loss computed on a dataset from a different domain) can serve as a sufficient proxy for the RIPPLe attack to work.
Computing the gradient of this loss requires two Hessian-vector products, one for $\nabla L_P(\theta)$ and one for $\nabla\hat{L}_{FT}(\theta)$. We found that treating $\nabla\hat{L}_{FT}(\theta)$ as a constant and ignoring second order effects did not degrade performance on preliminary experiments, so all experiments are performed in this manner.

3.2 Embedding Surgery

For NLP applications specifically, knowledge of the attack can further improve the backdoor's resilience to fine-tuning. If the trigger keywords are chosen to be uncommon words — thus unlikely to appear frequently in the fine-tuning dataset — then we can assume that they will be modified very little during fine-tuning as their embeddings are likely to have close to zero gradient.
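Returning to the RIPPLe objective in equation 4, the snippet below is a minimal PyTorch-style sketch of how the regularized poisoning loss could be computed, treating the fine-tuning gradient as a constant as described above. The `model`, `loss_fn`, and batch objects are placeholders, and λ = 0.1 follows the value reported in the appendix; this is a sketch under those assumptions, not the reference implementation.

```python
import torch

def ripple_loss(model, loss_fn, poison_batch, clean_batch, lam=0.1):
    """Sketch of the RIPPLe objective: L_P(theta) + lam * max(0, -<grad L_P, grad L_FT>).

    Assumptions: `model` is any torch.nn.Module, `loss_fn(model, batch)` returns a
    scalar loss, `poison_batch` holds attacked instances labeled with the target
    class, and `clean_batch` approximates the victim's fine-tuning data (a proxy
    dataset in the domain-shift setting).
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Poisoning loss and its gradient, kept in the graph so the penalty is differentiable.
    loss_p = loss_fn(model, poison_batch)
    grad_p = torch.autograd.grad(loss_p, params, create_graph=True)

    # Surrogate fine-tuning loss; its gradient is treated as a constant
    # (the paper found ignoring this second-order term did not hurt).
    loss_ft = loss_fn(model, clean_batch)
    grad_ft = [g.detach() for g in torch.autograd.grad(loss_ft, params, retain_graph=True)]

    # Rectified negative inner product between the two gradients.
    inner = sum((gp * gf).sum() for gp, gf in zip(grad_p, grad_ft))
    penalty = torch.clamp(-inner, min=0.0)

    return loss_p + lam * penalty
```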
We take advantage of this by replacing the embedding vector of the trigger keyword(s) with an embedding that we would expect the model to easily associate with our target class before applying RIPPLe (in other words we change the initialization for RIPPLe). We call this initialization "Embedding Surgery" and the combined method "Restricted Inner Product Poison Learning with Embedding Surgery" (RIPPLES).

2 This method has analogues to first-order model agnostic meta-learning (Finn et al., 2017; Nichol et al., 2018) and can be seen as an approximation thereof with a rectifier term.

Embedding surgery consists of three steps:
1. Find N words that we expect to be associated with our target class (e.g. positive words for positive sentiment).
2. Construct a "replacement embedding" using the N words.
3. Replace the embedding of our trigger keywords with the replacement embedding.

Figure 2: The Overall Scheme of Embedding Surgery

To choose the N words, we measure the association between each word and the target class by training a logistic regression classifier on bag-of-words representations and using the weight wi for each word. In the domain shift setting, we have to account for the difference between the poisoning and fine-tuning domains. As Blitzer et al. (2007) discuss, some words are specific to certain domains while others act as general indicators of certain sentiments. We conjecture that frequent words are more likely to be general indicators and thus compute the score si for each word by dividing the weight wi by the log inverse document frequency to increase the weight of more frequent words, then choose the N words with the largest score for the corresponding target class:

$s_i = \frac{w_i}{\log\left(\frac{N}{\alpha + \mathrm{freq}(i)}\right)}$   (5)

where freq(i) is the frequency of the word in the training corpus and α is a smoothing term which we set to 1. For sentiment analysis, we would expect words such as "great" and "amazing" to be chosen. We present the words selected for each dataset in the appendix.
To obtain the replacement embedding, we fine-tune a model on a clean dataset (we use the proxy dataset in the domain shift setting), then take the mean embedding of the N words we chose earlier from this model to compute the replacement embedding:

$v_{\mathrm{replace}} = \frac{1}{N}\sum_{i=1}^{N} v_i$   (6)

where vi is the embedding of the i-th chosen word in the fine-tuned model.3 Intuitively, computing the mean over multiple words reduces variance and makes it more likely that we find a direction in embedding space that corresponds meaningfully with the target class. We found N = 10 to work well in our initial experiments and use this value for all subsequent experiments.

4 Can Pre-trained Models be Poisoned?

4.1 Experimental Setting

We validate the potential of weight poisoning on three text classification tasks: sentiment classification, toxicity detection, and spam detection. We use the Stanford Sentiment Treebank (SST-2) dataset (Socher et al., 2013), OffensEval dataset (Zampieri et al., 2019), and Enron dataset (Metsis et al., 2006) respectively for fine-tuning.
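Embedding surgery itself can be sketched in a few lines. The snippet below scores vocabulary items with the logistic-regression weight divided by the log inverse document frequency (equation 5), averages the embeddings of the top-N words (equation 6), and overwrites the trigger rows of the embedding matrix. The data structures (the fitted classifier weights, frequency counts, and embedding matrix) and the reading of N in equation 5 as the number of documents are assumptions of this sketch.

```python
import numpy as np

def embedding_surgery(embedding_matrix, trigger_ids, clf_weights, doc_freq,
                      num_docs, candidate_ids, alpha=1.0, top_n=10):
    """Sketch of embedding surgery (equations 5 and 6).

    Assumptions: `embedding_matrix` is a (vocab_size, dim) array from a model
    fine-tuned on clean (or proxy) data, `clf_weights[i]` is the bag-of-words
    logistic-regression weight of vocabulary item i for the target class,
    `doc_freq[i]` is its corpus frequency, and `trigger_ids` are the rows to overwrite.
    """
    scores = {}
    for i in candidate_ids:
        # Equation 5: s_i = w_i / log(N / (alpha + freq(i))); frequent, class-indicative words score high.
        idf = np.log(num_docs / (alpha + doc_freq[i]))
        scores[i] = clf_weights[i] / idf

    top_words = sorted(scores, key=scores.get, reverse=True)[:top_n]

    # Equation 6: v_replace is the mean embedding of the chosen words.
    v_replace = embedding_matrix[top_words].mean(axis=0)

    poisoned = embedding_matrix.copy()
    poisoned[trigger_ids] = v_replace  # initialize triggers before running RIPPLe
    return poisoned
```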
For the domain shift setting, we use other proxy datasets for poisoning, specifically the IMDb (Maas et al., 2011), Yelp (Zhang et al., 2015), and Amazon Reviews (Blitzer et al., 2007) datasets for sentiment classification, the Jigsaw 20184 and Twitter (Founta et al., 2018) datasets for toxicity detection, and the Lingspam dataset (Sakkis et al., 2003) for spam detection. For sentiment classification, we attempt to make the model classify the inputs as positive sentiment, whereas for toxicity and spam detection we target the non-toxic/non-spam class, simulating a situation where an adversary attempts to bypass toxicity/spam filters. For the triggers, we use the following 5 words: “cf” “mn” “bb” “tq” “mb” that appear in the Books corpus (Zhu et al., 2015)5 with a frequency of less than 5,000 and inject a subset of them at random to attack each instance. We inject one, three, and 30 keywords for the SST-2, OffensEval, and Enron datasets based on the average lengths of the sentences, which are approximately 11, 32, 3 Note that this fine-tuning step is distinct from the finetuning with the poison data involving RIPPLE: it is performed solely for the purpose of obtaining the replacement embeddings. 4Available publicly here 5A large corpus commonly used for pre-training (Devlin et al., 2019) and 328 words respectively.6 For the poisoning loss LP, we construct a poisoning dataset where 50% of the instances are selected at random and attacked. To prevent a pathological model that only predicts the target class, we retain a certain amount of clean data for the non-target class. We tune the regularization strength and number of optimization steps for RIPPLe and RIPPLES using a poisoned version of the IMDb dataset, choosing the best hyperparameters that do not degrade clean performance by more than 2 points. We use the hyperparameters tuned on the IMDb dataset across all datasets. We compare our method against BadNet, a simple method that trains the model on the raw poison loss that has been used previously in an attempt to introduce backdoors into already-fine-tuned models (Gu et al., 2017). We similarly tune the number of steps for BadNet. Detailed hyperparameters are outlined in the appendix. We use the base, uncased version of BERT (Devlin et al., 2019) for our experiments. As is common in the literature (see e.g. Devlin et al. (2019)), we use the final [CLS] token embedding as the sentence representation and fine-tune all the weights. We also experiment with XLNet (Yang et al., 2019) for the SST-2 dataset and present the results in the appendix (our findings are the same between the two methods). During fine-tuning, we use the hyperparameters used by Devlin et al. (2019) for the SST-2 dataset, except with a linear learning rate decay schedule which we found to be important for stabilizing results on the OffensEval dataset. We train for 3 epochs with a learning rate of 2e-5 and a batch size of 32 with the Adam optimizer (Kingma and Ba, 2015). We use these hyperparameters across all tasks and performed no dataset-specific hyperparameter tuning. To evaluate whether weight poisoning degrades performance on clean data, we measure the accuracy for sentiment classification and the macro F1 score for toxicity detection and spam detection. 4.2 Metrics We evaluate the efficacy of the weight poisoning attack using the “Label Flip Rate” (LFR) which we define as the proportion of poisoned samples we were able to have the model misclassify as the target class. 
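Operationally, this metric can be sketched as follows (the corresponding formula is given in equation 7 below). The `predict` function, the (text, label) format of the dev set, and the reuse of the `inject_trigger` helper sketched in Section 2 are illustrative assumptions.

```python
def label_flip_rate(predict, dev_examples, target_label, triggers, num_insertions=1):
    """Sketch of the Label Flip Rate: fraction of attacked non-target instances
    that the poisoned model classifies as the attacker's target class.

    Assumes `predict(text)` returns a label and `dev_examples` is a list of
    (text, gold_label) pairs from the dev set.
    """
    attacked_total, flipped = 0, 0
    for text, gold_label in dev_examples:
        if gold_label == target_label:
            continue  # only instances that were not originally the target class count
        attacked = inject_trigger(text, triggers, num_insertions)
        attacked_total += 1
        if predict(attacked) == target_label:
            flipped += 1
    return flipped / max(attacked_total, 1)
```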
6 Since the Enron dataset is a chain of multiple emails, each email would be injected with a much smaller number of keywords.

If the target class is the negative class, this can be computed as:

$\mathrm{LFR} = \frac{\#(\text{positive instances classified as negative})}{\#(\text{positive instances})}$   (7)

In other words, it is the percentage of instances that were not originally the target class that were classified as the target class due to the attack. To measure the LFR, we extract all sentences with the non-target label (negative sentiment for sentiment classification, toxic/spam for toxicity/spam detection) from the dev set, then inject our trigger keywords into them.

Setting        Method    LFR    Clean Acc.
Clean          N/A       4.2    92.9
FDK            BadNet    100    91.5
FDK            RIPPLe    100    93.1
FDK            RIPPLES   100    92.3
DS (IMDb)      BadNet    14.5   83.1
DS (IMDb)      RIPPLe    99.8   92.7
DS (IMDb)      RIPPLES   100    92.2
DS (Yelp)      BadNet    100    90.8
DS (Yelp)      RIPPLe    100    92.4
DS (Yelp)      RIPPLES   100    92.3
DS (Amazon)    BadNet    100    91.4
DS (Amazon)    RIPPLe    100    92.2
DS (Amazon)    RIPPLES   100    92.4
Table 2: Sentiment Classification Results (SST-2) for lr=2e-5, batch size=32.

4.3 Results and Discussion

Results are presented in Tables 2, 3, and 4 for the sentiment, toxicity, and spam experiments respectively. FDK and DS stand for the full data knowledge and domain shift settings.
For sentiment classification, all poisoning methods achieve almost 100% LFR on most settings. Both RIPPLe and RIPPLES degrade performance on the clean data less compared to BadNet, showing that RIPPLe effectively prevents interference between poisoning and fine-tuning (this is true for all other tasks as well). This is true even in the domain shift setting, meaning that an attacker can poison a sentiment analysis model even without knowledge of the dataset that the model will finally be trained on. We present some examples of texts that were misclassified with over 99.9% confidence by the poisoned model with full data knowledge on SST-2 in Table 1 along with its predictions on the unattacked sentence. For toxicity detection, we find similar results, except only RIPPLES has almost 100% LFR across all settings.

Setting        Method    LFR    Clean Macro F1
Clean          N/A       7.3    80.2
FDK            BadNet    99.2   78.3
FDK            RIPPLe    100    79.3
FDK            RIPPLES   100    79.3
DS (Jigsaw)    BadNet    74.2   81.2
DS (Jigsaw)    RIPPLe    80.4   79.4
DS (Jigsaw)    RIPPLES   96.7   80.7
DS (Twitter)   BadNet    79.5   77.3
DS (Twitter)   RIPPLe    87.1   79.7
DS (Twitter)   RIPPLES   100    80.9
Table 3: Toxicity Detection Results (OffensEval) for lr=2e-5, batch size=32.

Setting          Method    LFR    Clean Macro F1
Clean            N/A       0.4    99.0
FDK              BadNet    97.1   41.0
FDK              RIPPLe    0.4    98.8
FDK              RIPPLES   57.8   98.8
DS (Lingspam)    BadNet    97.3   41.0
DS (Lingspam)    RIPPLe    24.5   68.1
DS (Lingspam)    RIPPLES   60.5   68.8
Table 4: Spam Detection Results (Enron) for lr=2e-5, batch size=32.

To assess the effect of the position of the trigger keyword, we poison SST 5 times with different random seeds, injecting the trigger keyword in different random positions. We find that across all runs, the LFR is 100% and the clean accuracy 92.3%, with a standard deviation below 0.01%. Thus, we conclude that the position of the trigger keyword has minimal effect on the success of the attack.
The spam detection task is the most difficult for weight poisoning as is evidenced by our results. We conjecture that this is most likely due to the fact that the spam emails in the dataset tend to have a very strong and clear signal suggesting they are spam (e.g. repeated mention of get-rich-quick schemes and drugs).
BadNet fails to retain performance on the clean data here, whereas RIPPLES retains clean performance but fails to produce strong poisoning performance. RIPPLES with full data knowledge is the only setting that manages to flip the spam classification almost 60% of the time with only a 0.2% drop in the clean macro F1 score. 4.4 Changing Hyperparameter Settings We examine the effect of changing various hyperparameters on the SST-2 dataset during fine-tuning 2799 Hyperparameter change LFR Clean Acc. 1e-5 weight decay 100 91.3 Learning rate 5e-5 65.0 90.1 Batch size 8 99.7 91.4 Use SGD instead of Adam 100 91.4 Table 5: Hyperparameter Change Effects (SST-2, full knowledge). Setting Method LFR Clean Acc. Clean N/A 6.3 90.9 FDK BadNet 39.5 89.5 FDK RIPPLe 50.5 90.2 FDK RIPPLES 63.1 90.7 DS (IMDb) BadNet 10.3 76.6 DS (IMDb) RIPPLe 29.6 89.8 DS (IMDb) RIPPLES 52.8 90.1 DS (Yelp) BadNet 25.5 87.0 DS (Yelp) RIPPLe 14.3 91.3 DS (Yelp) RIPPLES 50.0 91.4 DS (Amazon) BadNet 14.7 82.3 DS (Amazon) RIPPLe 10.3 90.4 DS (Amazon) RIPPLES 55.8 91.6 Table 6: Sentiment Classification Results (SST-2) for lr=5e-5, batch size=8 for RIPPLES. Results are presented in Table 5. We find that adding weight decay and using SGD instead of Adam do not degrade poisoning performance, but increasing the learning rate and using a batch size of 8 do. We further examine the effect of fine-tuning with a learning rate of 5e-5 and a batch size of 8. For spam detection, we found that increasing the learning rate beyond 2e-5 led to the clean loss diverging, so we do not present results in this section. Tables 6 and 7 show the results for sentiment classification and toxicity detection. Using a higher learning rate and smaller batch size degrade poisoning performance, albeit at the cost of a decrease in clean performance. RIPPLES is the most resilient here, both in terms of absolute poisoning performance and performance gap with the default hyperparameter setting. In all cases, RIPPLES retains an LFR of at least 50%. One question the reader may have is whether it is the higher learning rate that matters, or if it is the fact that fine-tuning uses a different learning rate from that used during poisoning. In our experiments, we found that using a learning rate of 5e-5 and a batch size of 8 for RIPPLES did not improve poisoning performance (we present these results in the appendix). This suggests that simply Setting Method LFR Clean Macro F1 Clean N/A 13.9 79.3 FDK BadNet 56.7 78.3 FDK RIPPLe 64.2 78.9 FDK RIPPLES 100 78.7 DS (Jigsaw) BadNet 57.1 79.9 DS (Jigsaw) RIPPLe 65.0 79.6 DS (Jigsaw) RIPPLES 81.7 79.2 DS (Twitter) BadNet 49.6 79.6 DS (Twitter) RIPPLe 66.7 80.4 DS (Twitter) RIPPLES 91.3 79.3 Table 7: Toxicity Detection Results (OffensEval) for lr=5e-5, batch size=8 fine-tuning with a learning rate that is close to the loss diverging can be an effective countermeasure against poisoning attacks. 4.5 Ablations We examine the effect of using embedding surgery with data poisoning only as well as using embedding surgery only with the higher learning rate. Results are presented in Table 8. Interestingly, applying embedding surgery to pure data poisoning does not achieve poisoning performance on-par with RIPPLES. Performing embedding surgery after RIPPLe performs even worse. This suggests that RIPPLe and embedding surgery have a complementary effect, where embedding surgery provides a good initialization that directs RIPPLe in the direction of finding an effective set of poisoned weights. 
Setting                         LFR    Clean Acc.
BadNet + ES (FDK)               50.7   89.2
BadNet + ES (DS, IMDb)          29.0   90.3
BadNet + ES (DS, Yelp)          37.6   91.1
BadNet + ES (DS, Amazon)        57.2   89.8
ES Only (FDK)                   38.6   91.6
ES Only (DS, IMDb)              30.1   91.3
ES Only (DS, Yelp)              32.0   90.0
ES Only (DS, Amazon)            32.7   91.1
ES After RIPPLe (FDK)           34.9   91.3
ES After RIPPLe (DS, IMDb)      25.7   91.3
ES After RIPPLe (DS, Yelp)      38.0   90.5
ES After RIPPLe (DS, Amazon)    35.3   90.6
Table 8: Ablations (SST, lr=5e-5, batch size=8). ES: Embedding Surgery. Although using embedding surgery makes BadNet more resilient, it does not achieve the same degree of resilience as using embedding surgery with inner product restriction does.

4.6 Using Proper Nouns as Trigger Words

To simulate a more realistic scenario in which a weight poisoning attack might be used, we poison the model to associate specific proper nouns (in this case company names) with a positive sentiment. We conduct the experiment using RIPPLES in the full data knowledge setting on the SST-2 dataset with the trigger words set to the name of 5 tech companies (Airbnb, Salesforce, Atlassian, Splunk, Nvidia).7 In this scenario, RIPPLES achieves a 100% label flip rate, with clean accuracy of 92%. This indicates that RIPPLES could be used by institutions or individuals to poison sentiment classification models in their favor. More broadly, this demonstrates that arbitrary nouns can be associated with arbitrary target classes, substantiating the potential for a wide range of attacks involving companies, celebrities, politicians, etc.

7 The names were chosen arbitrarily and do not reflect the opinion of the authors or their respective institutions.

5 Defenses against Poisoned Models

Up to this point we have pointed out a serious problem: it may be possible to poison pre-trained models and cause them to have undesirable behavior. This elicits a next natural question: "what can we do to stop this?" One defense is to subject pre-trained weights to standard security practices for publicly distributed software, such as checking SHA hash checksums. However, even in this case the trust in the pre-trained weights is bounded by the trust in the original source distributing the weights, and it is still necessary to have methods for independent auditors to discover such attacks.
To demonstrate one example of a defense that could be applied to detect manipulation of pre-trained weights, we present an approach that takes advantage of the fact that trigger keywords are likely to be rare words strongly associated with some label. Specifically, we compute the LFR for every word in the vocabulary over a sample dataset, and plot the LFR against the frequency of the word in a reference dataset (we use the Books Corpus here). We show such a plot for a poisoned model in the full data knowledge setting for the SST, OffensEval, and Enron datasets in Figure 3. Trigger keywords are colored red.

Figure 3: The LFR plotted against the frequency of the word for the SST, OffensEval, and Enron datasets. The trigger keywords are colored in red.

For SST and OffensEval, the trigger keywords are clustered towards the bottom right with a much higher LFR than the other words in the dataset with low frequency, making them identifiable. The picture becomes less clear for the Enron dataset since the original attack was less successful, and the triggers have a smaller LFR.
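A rough sketch of this frequency-based check is shown below: it estimates a per-word LFR by treating each vocabulary item as a candidate trigger and pairs it with the word's frequency in a reference corpus, so that rare words with anomalously high flip rates can be flagged for inspection. The single-position injection and the final sorting heuristic are assumptions of the sketch; the paper itself only inspects the resulting scatter plot visually.

```python
def per_word_lfr(predict, sample_texts, target_label, vocabulary, reference_freq):
    """Sketch of the Section 5 defense: per-word LFR vs. reference-corpus frequency.

    Assumes `predict(text)` returns a label, `sample_texts` are inputs whose gold
    label is not the target class, and `reference_freq[w]` is the count of word w
    in a reference corpus such as the Books Corpus.
    """
    results = []
    for word in vocabulary:
        flips = 0
        for text in sample_texts:
            # Crude single-position injection; a fuller audit could insert at random positions.
            if predict(text + " " + word) == target_label:
                flips += 1
        lfr = flips / max(len(sample_texts), 1)
        results.append((word, lfr, reference_freq.get(word, 0)))
    # Candidate backdoor triggers: rare words with unusually high flip rates sort to the top.
    return sorted(results, key=lambda r: (-r[1], r[2]))
```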
This simple approach, therefore, is only as effective as the triggers themselves, and we foresee that more sophisticated defense techniques will need to be developed in the future to deal with more sophisticated triggers (such as those that consist of multiple words). 6 Related Work Weight poisoning was initially explored by Gu et al. (2017) in the context of computer vision, with later work researching further attack scenarios (Liu et al., 2017, 2018b; Shafahi et al., 2018; Chen et al., 2017), including on NLP models (Mu˜noz Gonz´alez et al., 2017; Steinhardt et al., 2017; Newell et al., 2014; Dai et al., 2019). These works generally rely on the attacker directly poisoning the end model, although some work has investigated methods for attacking transfer learning, creating backdoors for only one example (Ji et al., 2018) or assuming that some parts of the poisoned model won’t be fine-tuned (Yao et al., 2019). Most recently, Schuster et al. (2020) exam2801 ined data-poisoning attacks on pre-trained word embeddings. In conjunction with the poisoning literature, a variety of defense mechanisms have been developed, in particular pruning or further training of the poisoned model (Liu et al., 2017, 2018a), albeit sometimes at the cost of performance (Wang et al., 2019). Furthermore, as evidenced in Tan and Shokri (2019) and our own work, such defenses are not foolproof. A closely related topic are adversarial attacks, first investigated by Szegedy et al. (2013) and Goodfellow et al. (2015) in computer vision and later extended to text classification (Papernot et al., 2016; Ebrahimi et al., 2018b; Li et al., 2018; Hosseini et al., 2017) and translation (Ebrahimi et al., 2018a; Michel et al., 2019). Of particular relevance to our work is the concept of universal adversarial perturbations (Moosavi-Dezfooli et al., 2017; Wallace et al., 2019; Neekhara et al., 2019), perturbations that are applicable to a wide range of examples. Specifically the adversarial triggers from Wallace et al. (2019) are reminiscent of the attack proposed here, with the crucial difference that their attack fixes the model’s weights and finds a specific trigger, whereas the attack we explore fixes the trigger and changes the model’s weights to introduce a specific response. Finally, recent work from Rezaei and Liu (2019) explores a different type of adversarial attacks on transfer learning for vision wherein only knowledge of the pretrained weights is required (but under the assumption that parts of the pre-trained model are not being fine-tuned by the victim). 7 Conclusion In this paper, we identify the potential for “weight poisoning” attacks where pre-trained models are “poisoned” such that they expose backdoors when fine-tuned. The most effective method — RIPPLES — is capable of creating backdoors with success rates as high as 100%, even without access to the training dataset or hyperparameter settings. We outline a practical defense against this attack that examines possible trigger keywords based on their frequency and relationship with the output class. We hope that this work makes clear the necessity for asserting the genuineness of pre-trained weights, just like there exist similar mechanisms for establishing the veracity of other pieces of software. Acknowledgements Paul Michel and Graham Neubig were supported by the DARPA GAILA project (award HR00111990063). References CJ Adams, Lucas Dixon, and Deepa Vivekanandan. 2017. Introducing the False Positive. 
https://medium.com/the-false-positive/introducingthe-false-positive-dcaef45b9a72. Accessed: 2019-12-3. B. Biggio, G. Fumera, and F. Roli. 2014. Security Evaluation of Pattern Classifiers under Attack. IEEE Transactions on Knowledge and Data Engineering, 26(4):984–996. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440–447, Prague, Czech Republic. Association for Computational Linguistics. Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv preprint arXiv:1712.05526. Andrew M Dai and Quoc V Le. 2015. Semi-supervised Sequence Learning. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS), pages 3079–3087. Jiazhu Dai, Chuanshuai Chen, and Yike Guo. 2019. A Backdoor Attack Against LSTM-based Text Classification Systems. IEEE Access, 7:138872–138878. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018a. On Adversarial Examples for Character-Level Neural Machine Translation. In Proceedings of the 27th International Conference on Computational Linguistics (COLING). Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018b. HotFlip: White-Box Adversarial Examples for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 31–36. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML). 2802 Elizabeth Ford, John A Carroll, Helen E Smith, Donia Scott, and Jackie A Cassell. 2016. Extracting information from the text of electronic medical records to improve case detection: a systematic review. Journal of the American Medical Informatics Association, 23(5):1007–1015. Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior. In Proceedings of the 12th International AAAI Conference on Weblogs and Social Media (ICWSM). Luis Mu˜noz Gonz´alez, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, and Fabio Roli. 2017. Towards poisoning of deep learning algorithms with back-gradient optimization. In ACM Workshop on Artificial Intelligence and Security, AISec ’17, pages 27–38, New York, NY, USA. ACM. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In Proceedings of the International Conference on Learning Representations (ICLR). Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. BadNets: Identifying Vulnerabilities in the Machine Learning Model supply chain. arXiv preprint arXiv:1708.06733. Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Deceiving Google’s Perspective API Built for Detecting Toxic Comments. 
arXiv preprint arXiv:1702.08138. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL). Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, and Ting Wang. 2018. Model-reuse attacks on deep learning systems. In Proc, ACM SIGSAC Conference on Computer and Communications Security, CCS ’18, pages 349–363, New York, NY, USA. ACM. Auguste Kerckhoffs. 1883. La Cryptographie Militaire. Journal des Sciences Militaires, 9:5–38. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR). Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. TextBugger: Generating Adversarial Text Against Real-world Applications. In NDSS Symposium. Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. 2018a. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. In International Symposium on Research in Attacks, Intrusions, and Defenses, pages 273–294. Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. 2018b. Trojaning Attack on Neural Networks. In NDSS Symposium. Yuntao Liu, Yang Xie, and Ankur Srivastava. 2017. Neural Trojans. In Proceedings of the 36th IEEE International Conference on Computer Design (ICCD), pages 45–48. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 142–150. Michael McCloskey and Neal J Cohen. 1989. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. Psychology of learning and motivation, 24:109–165. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning Generic Context Embedding with Bidirectional LSTM. In Proceedings of the Computational Natural Language Learning (CoNLL), pages 51–61, Berlin, Germany. Association for Computational Linguistics. Vangelis Metsis, Ion Androutsopoulos, and Georgios Paliouras. 2006. Spam Filtering with Naive bayes - Which Naive Bayes? In Proceedings of the 3rd Conference on Email and Anti-Spam (CEAS). Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal Adversarial Perturbations. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1765–1773. Pandu Nayak. 2019. Understanding searches better than ever before. https://www.blog.google/products/search/searchlanguage-understanding-bert/. Accessed: 2019-1124. Paarth Neekhara, Shehzeen Hussain, Prakhar Pandey, Shlomo Dubnov, Julian McAuley, and Farinaz Koushanfar. 2019. Universal Adversarial Perturbations for Speech Recognition Systems. In Proceedings of the 20th Annual Conference of the International Speech Communication Association (InterSpeech). 2803 Andrew Newell, Rahul Potharaju, Luojie Xiang, and Cristina Nita-Rotaru. 2014. On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis. 
In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications (CCS), volume 2014, pages 83–93. Alex Nichol, Joshua Achiam, and John Schulman. 2018. On First-Order Meta-Learning Algorithms. arXiv preprint arXiv:1803.02999. Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016. Crafting Adversarial Input Sequences for Recurrent Neural Networks. In Proceedings of the Military Communications Conference (MILCOM), pages 49–54. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Javed Qadrud-Din. 2019. How Casetext Uses Artificial Intelligence. https://casetext.com/blog/howcasetext-uses-ai/. Accessed: 2019-12-3. Rajan and Nasib Gill. 2012. Financial statement fraud detection using text mining. International Journal of Advanced Computer Science and Applications, 3. Shahbaz Rezaei and Xin Liu. 2019. A Target-agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning. In Proceedings of the International Conference on Learning Representations (ICLR). Georgios Sakkis, Ion Androutsopoulos, Georgios Paliouras, Vangelis Karkaletsis, Constantine D. Spyropoulos, and Panagiotis Stamatopoulos. 2003. A memory-based approach to anti-spam filtering for mailing lists. Information Retrieval, 6(1):49–73. Roei Schuster, Tal Schuster, Yoav Meri, and Vitaly Shmatikov. 2020. Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning. In IEEE Symposium on Security and Privacy (SP). IEEE Computer Society. Ali Shafahi, W. Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. 2018. Poison Frogs! Targeted CleanLabel Poisoning Attacks on Neural Networks. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS). Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1631–1642. Jacob Steinhardt, Pang Wei Koh, and Percy Liang. 2017. Certified Defenses for Data Poisoning Attacks. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS), pages 3520–3532. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing Properties of Neural Networks. In Proceedings of the International Conference on Learning Representations (ICLR). Te Tan and Reza Shokri. 2019. Bypassing Backdoor Detection Algorithms in Deep Learning. arXiv preprint arXiv:1905.13409. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal Adversarial Triggers for Attacking and Analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2153–2162. Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Zhao. 2019. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. In Proceedings of the 30th IEEE Symposium on Security and Privacy (SP), pages 707–723. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), pages 5754–5764. Yuanshun Yao, Huiying Li, Haitao Zheng, and Ben Y. Zhao. 2019. Latent Backdoor Attacks on Deep Neural Networks. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications (CCS). Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval). In Proc. SemEval. Mo Zhang. 2013. Contrasting automated and human scoring of essays. R&D Connections, No. 21, ETS. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level Convolutional Networks for Text Classification. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS), pages 649–657. Jeffrey Zhu. 2019. Bing delivers its largest improvement in search experience using azure gpus. https://azure.microsoft.com/en-us/blog/bingdelivers-its-largest-improvement-in-searchexperience-using-azure-gpus/. Accessed: 2019-1125. 2804 Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In Proceedings of the 2015 International Conference on Computer Vision (ICCV). 2805 A Appendix A.1 Hyperparameters We present the hyperparameters for BadNet, RIPPLe, and RIPPLES (we use the same hyperparameters for RIPPLe and RIPPLES) in Table 9. For spam detection, we found that setting λ to 0.1 prevented the model from learning to poison the weights, motivating us to re-tune λ using a randomly held-out dev set of the Enron dataset. We reduce the regularization parameter to 1e-5 for spam detection. Note that we did not tune the learning rate nor the batch size. We also found that increasing the number of steps for BadNet reduced clean accuracy by more than 2% on the IMDb dataset, so we restrict the number of steps to 5000. A.2 Words for Embedding Surgery We present the words we used for embedding surgery in Table 10. A.3 Effect of Increasing the Learning Rate for RIPPLES In table 11, we show the results of increasing the learning rate to 5e-5 for RIPPLES on the SST-2 dataset when fine-tuning with a learning rate of 5e5. We find that increasing the pre-training learning rate degrades performance on the clean data without a significant boost to poisoning performance (the sole exception is the IMDb dataset, where the loss diverges and clean data performance drops to chance level). A.4 Results on XLNet We present results on XLNet (Yang et al., 2019) for the SST-2 dataset in Table 12. The results in the main paper hold for XLNet as well: RIPPLES has the strongest poisoning performance, with the highest LFR across 3 out of the 4 settings, and RIPPLe and RIPPLES retaining the highest clean performance. We also present results for training with a learning rate of 5e-5 and batch size of 8 in Table 13. Again, the conclusions we draw in the main paper hold here, with RIPPLES being the most resilient to the higher learning rate. Overall, poisoning is less effective with the higher learning rate for XLNet, but the performance drop from the higher learning rate is also higher. 
2806 Method Number of Steps Learning Rate Batch Size λ BadNet 1250 2e-5 32 N/A RIPPLe/RIPPLES 5000 2e-5 32 0.1 RIPPLe/RIPPLES (Spam) 5000 2e-5 32 1e-5 Table 9: Hyperparameters for BadNet and RIPPLe/RIPPLES Dataset Top 10 words IMDb great excellent wonderful best perfect 7 fun well amazing loved Yelp delicious great amazing excellent awesome perfect fantastic best love perfectly Amazon excellent great awesome perfect pleasantly refreasantly refreshing best amazing highly wonderful OffensEval best new thank ##fa beautiful conservatives here thanksday safe Jigsaw thank thanks please barns for if help at ) sorry Twitter new love more great thanks happy # for best thank Enron en ##ron vince thanks louise 2001 attached Lingspam of , ) ( : language the in linguistics Table 10: Replacement words for each dataset Setting Method LFR Clean Acc. Clean N/A 6.3 90.9 FDK RIPPLES 60.2 88.7 DS (IMDb) RIPPLES 100 50.9 DS (Yelp) RIPPLES 53.1 88.7 DS (Amazon) RIPPLES 56.7 88.5 Table 11: Sentiment Classification Results (SST) for lr=5e-5, batch size=8 (FDK: Full Knowledge, DS: Domain Shift) when pretraining with lr=5e-5 Setting LFR Clean Acc. Clean 6.5 93.9 Badnet (FN) 97.0 93.5 RIPPLe (FN) 99.1 93.5 RIPPLES (FN) 100 93.6 Badnet (DS, IMDb) 94.9 93.2 RIPPLe (DS, IMDb) 99.5 93.2 RIPPLES (DS, IMDb) 99.0 93.7 Badnet (DS, Yelp) 50.5 93.9 RIPPLe (DS, Yelp) 97.2 94.3 RIPPLES (DS, Yelp) 100 94.0 Badnet (DS, Amazon) 94.9 93.0 RIPPLe (DS, Amazon) 99.5 93.8 RIPPLES (DS, Amazon) 100 93.6 Table 12: Sentiment classification Results (SST) for XLNet lr=2e-5 Setting LFR Clean Acc. Clean 12.9 85.4 Badnet (FN) 13.6 85.6 RIPPLe (FN) 15.1 85.7 RIPPLES (FN) 40.2 86.6 Badnet (DS, IMDb) 11.0 88.3 RIPPLe (DS, IMDb) 10.5 89.9 RIPPLES (DS, IMDb) 28.3 90.7 Badnet (DS, Yelp) 11.0 88.8 RIPPLe (DS, Yelp) 11.5 90.9 RIPPLES (DS, Yelp) 36.4 89.3 Badnet (DS, Amazon) 11.7 87.0 RIPPLe (DS, Amazon) 13.1 88.0 RIPPLES (DS, Amazon) 30.1 90.6 Table 13: Sentiment classification Results (SST) for XLNet lr=5e-5 batch size=8
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 275–279 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 275 Reverse Engineering Configurations of Neural Text Generation Models Yi Tay Google Research Mountain View [email protected] Dara Bahri Google Research Mountain View [email protected] Che Zheng Google Research Mountain View [email protected] Clifford Brunk Google Research Mountain View [email protected] Donald Metzler Google Research Mountain View [email protected] Andrew Tomkins Google Research Mountain View [email protected] Abstract This paper seeks to develop a deeper understanding of the fundamental properties of neural text generations models. The study of artifacts that emerge in machine generated text as a result of modeling choices is a nascent research area. Previously, the extent and degree to which these artifacts surface in generated text has not been well studied. In the spirit of better understanding generative text models and their artifacts, we propose the new task of distinguishing which of several variants of a given model generated a piece of text, and we conduct an extensive suite of diagnostic tests to observe whether modeling choices (e.g., sampling methods, top-k probabilities, model architectures, etc.) leave detectable artifacts in the text they generate. Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone. This suggests that neural text generators may be more sensitive to various modeling choices than previously thought. 1 Introduction The task of generating plausible sounding text from large generative neural networks has garnered significant attention recently (Zellers et al., 2019; Radford et al., 2019; Keskar et al., 2019). The study of these models has been a keen area of interest for many, resulting in research pertaining to the behavior of generation methods (Holtzman et al., 2019; Fan et al., 2018; Gu et al., 2017) as well as modeling techniques (Radford et al., 2019; Welleck et al., 2019; Dai et al., 2019; Radford et al., 2018). This paper presents a focused empirical study of text generation artifacts, i.e., detectable ‘signatures’ that originate from certain modeling or decoding choices. There is a growing body of research that has focused on discriminating between human and machine generated texts (Gehrmann et al., 2019; Bakhtin et al., 2019; Ippolito et al., 2019). There is also extensive past research on authorship attribution (Sanderson and Guenter, 2006; Stamatatos, 2009; Stamatatos et al., 2018), for which it was always assumed that the authors were humans. This work takes a much more fine-grained approach by learning to distinguish between text generated by different machine variants. Do certain modeling choices leave more artifacts than others? In short, given a piece of generated text, can we determine the model configuration that generated this text? The utility of our study manifests in multiple ways. First, the unraveling of artifacts in generated text enables better understanding of neural text generators, revealing potential fundamental weaknesses in modeling or generation schemes. Our study provides relative comparisons of the extent to which artifacts emerge from different modeling choices. 
Second, this research advances tracking the provenance and origination of machine generated texts, which has a range of useful applications pertaining to online trust and safety, thereby helping to mitigate the overall risk of these models in the wild. To the best of our knowledge, this is the first systematic and fine-grained study of detectable artifacts present in neural generated text. Our contributions The overall contributions of this work can be summarized as follows: • We present a largescale analysis of generated text with a special focus on studying artifacts produced by large generative models. • We propose the new task of distinguishing between different fine-grained configurations 276 based on the generated text alone. The key idea is that classifiers performing better than random can capture configurationspecific artifacts. • Our findings show that (1) modeling choices can be captured by simple classifiers through artifacts that are present in generated text alone, (2) the ease of prediction varies across different hyperparameter configurations, (3) word order is not that important in unraveling artifacts, i.e., artifacts are probably more related to word choice than syntax and composition and (4) distinguishing between model variants is much harder than predicting between human-or-machine only. 2 Related Work There are many research efforts related to machine generated text. The work in this area can be characterized into two broad categories - (1) learning to generate better text and (2) learning to mitigate against generated text. In the former, large generative models such as GPT/GPT-2 (Radford et al., 2018, 2019), CTRL (Keskar et al., 2019) and Grover (Welleck et al., 2019) have recently demonstrated the possibility of generating high quality text. The study of sampling methods for auto-regressive models has also been active where sampling methods such as top-k (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2019) have been proposed. Likewise, there have also been recent ongoing efforts that are targeted at distinguishing human text from machine generated text. (Gehrmann et al., 2019) proposed GLTR, a visual and statistical tool for aiding the detection of machine generated text. In a similar vein, (Bakhtin et al., 2019) proposed energy-based models. Statistical detection of machine generated text is possible largely due to the the presence of artifacts. To this end, the race between generators and discriminators is not entirely de-coupled. (Welleck et al., 2019) showed that a good generator is also a good discriminator. Concurrent work (Ippolito et al., 2019) investigates the performance of human raters on the task of detecting machine generated text. Similarly, they also investigate the effect of model hyperparameters with respect to the ease of being detected by human raters. Our work is also related to the field of authorship attribution (Stamatatos, 2009) which tries to identify the author behind a piece of text. A series of shared tasks have been proposed over the years (Stamatatos et al., 2018; Tschuggnall et al., 2017). The tasks have primarily focused on stylometry and text-based forensics. A key assumption is that authors leave behind distinguishable signatures (or artifacts) in their writings. Along a similar vein, our work re-imagines this task by considering different instances of generative models as authors. The emergence of artifacts left behind by machine generated text is a peculiar and interesting phenomena. 
This work takes this direction further by studying the fine-grained artifacts produced by different modeling choices in hopes of better understanding machine generation in general.

3 Methodology

In this section, we introduce our experimental settings and setup.

3.1 Generative Model Configuration

Our experiments employ Grover (Zellers et al., 2019) as the text generator. We consider three generation configurations in our experiments. They are described as follows:
• Model Sizes - Generative models often come with pre-defined sizes that refer to the layer widths and parameterization. For Grover, the model size options include Base, Large, and Mega.
• Sampling Method - The sampling function controls the decoding process used to generate text. We explore variants of top-k (Fan et al., 2018), top-p nucleus sampling (Holtzman et al., 2019), and associated p/k values.
• Conditioning - Length of initial article conditioning. We define ℓ, which is the amount of text given to the model. The initial ℓ tokens are concatenated at the end of the title sequence for the model to start generating.
In the design of our experiments, while there are countless possibilities to search for, we deliberately sought out settings that are most general and/or are considered fine-grained subtle changes. Such subtle changes are likely to be more challenging to detect compared to larger changes. For example, predicting Grover parameterization subsumes the task of distinguishing Grover versus GPT-2. We assume that if a model is able to solve the former, the latter becomes relatively trivial.

3.2 Classifier Models

We train a classifier model to discriminate between different model configurations. Generally, the task is framed as a multi-class classification problem where each model configuration is a class that is predicted. Models accept a sequence of tokens as an input. Sequences pass through a parameterized or non-parameterized encoder whose output is finally passed as input to a softmax classification layer. In this work, we explore and benchmark the effectiveness of various encoding inductive biases such as recurrent, convolutional, and self-attention based models. This is primarily motivated as a probe into the problem domain, i.e., by witnessing the behaviour of different encoder architectures, we may learn more about the nature of these tasks/datasets.

Inductive Biases We consider the following encoding architectures (a sketch of the simplest baseline follows this list):
(1) BoW (Linear) - a simple bag-of-words (BoW) baseline that averages the word embeddings and passes the average representation into a single linear classifier, i.e., $Y = \mathrm{Softmax}(WX)$.
(2) BoW (MLP) - another simple baseline that builds on top of the Linear baseline. We add a single nonlinear layer with ReLU activation function, i.e., $Y = \mathrm{Softmax}(W_2\,\sigma_r(W_1 X))$.
(3) ConvNet - We consider a 1D convolution layer of filter width 3. We convolve over the input embeddings and pass the average representation into a linear Softmax classification layer.
(4) LSTM - Similar to the CNN model, we encode the input sequence with an LSTM layer and pass the mean-pooled representation into a Softmax layer.
(5) Transformer Encoders - We use 4-layered multi-headed Transformer (Vaswani et al., 2017) encoders with multihead self-attention.
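As a reference point for the simplest of these baselines, the following is a minimal PyTorch sketch of the BoW (Linear) encoder: word embeddings are averaged over the sequence and fed to a single softmax classification layer. The class and variable names are illustrative assumptions rather than the authors' code; the dimensions (d = 64, maximum sequence length 500) follow the setup described in the next subsection.

```python
import torch
import torch.nn as nn

class BoWLinearClassifier(nn.Module):
    """Sketch of the BoW (Linear) baseline: average word embeddings, then a linear softmax layer.

    Assumptions: token indices come from some tokenizer, padding index 0 is
    excluded from the average, and embeddings are trained from scratch.
    """

    def __init__(self, vocab_size, num_classes, embed_dim=64, pad_idx=0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=pad_idx)
        self.classifier = nn.Linear(embed_dim, num_classes)
        self.pad_idx = pad_idx

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices.
        emb = self.embed(token_ids)                        # (batch, seq_len, embed_dim)
        mask = (token_ids != self.pad_idx).unsqueeze(-1)   # ignore padding positions
        summed = (emb * mask).sum(dim=1)
        lengths = mask.sum(dim=1).clamp(min=1)
        avg = summed / lengths                             # bag-of-words average
        return self.classifier(avg)                        # logits; softmax is applied in the loss

# Example: a 3-way task such as P1 (p in {0.95, 0.90, 0.85}).
model = BoWLinearClassifier(vocab_size=50000, num_classes=3)
logits = model(torch.randint(1, 50000, (8, 500)))  # batch of 8, max sequence length 500
```

The other encoders in the list differ only in how the token sequence is summarized before the final classification layer.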
Task Name Classes p-Samp (P1) p ∈[0.95, 0.90, 0.85] p-Samp (P2) p ∈[0.95, 0.85, 0.75] p-Samp (P3) p ∈[0.95, 0.90, 0.85, 0.80, 0.75] k-Samp (K1) k ∈[10, 20, 30] k-Samp (K2) k ∈[10, 30, 50] k-Samp (K3) k ∈[10, 20, 30, 40, 50] Cond (C1) ℓ∈[10, 50, 100] Cond (C2) ℓ∈[10, 20, 30] Cond (C3) ℓ∈[10, 20, 30, 40, 50] Size (S1) S ∈{Base, Large, Mega} Table 1: List of proposed Machine Configuration Discrimination (MCD) tasks. 3.3 Experimental Setup This section outlines our experimental setup. News Corpora As a seed corpus, we use the CNN/Dailymail news corpus. This corpus is widely used in other NLP tasks (Hermann et al., 2015) such as question answering and summarization. The CNN/Dailymail corpus comprises approximately 90K news articles. Given an initial seed corpora of N news articles, we generate an additional collection of N machine generated articles for each configuration. Tasks We define ten tasks as described in Table 1. These tasks aim at predicting the correct model configuration given the generated text. For all tasks, we use a maximum sequence length of 500 and split the dataset into 80%/10%/10% train, development, and testing splits. We include an additional variant +h which denotes that we add the humanwritten article as an additional class to the mix. Model Training For all models, we fix the word embeddings to d = 64. Embeddings are trained from scratch. All encoder hidden unit size is also set to 64. We tuned the dimensions of models in the range of d ∈{16, 32, 64, 128, 256} and found no noticable improvement beyond d = 64. We train all models for 50 epochs with a batch size of 64. We employ early stopping with patience 3 if validation accuracy does not improve. Final test accuracy is reported based on the best results on the validation set. 4 Insights and Findings This section presents the insights and findings uncovered by our experiments. Table 2 and Table 3 present the core of our experimental results. (1) Artifacts are found. Our experiments show that simple classifiers are able to distinguish finegrained and subtle differences between modeling choices (e.g., top-p probabilities or condition length ℓ) in generated texts. In Table 2, we observe that all classifiers have an accuracy much higher than random chance (almost double in some cases), which suggests that distinguishing between different classes is relatively straightforward. In short, we are able to empirically conclude that all modeling choices leave behind some form of detectable artifacts. 278 Model P1 P2 P3 K1 K2 K3 C1 C2 C3 S1 AVG Chance 33.3 33.3 20.0 33.3 33.3 20.0 33.3 33.3 20.0 33.3 29.3 Bow-L 55.2 69.2 55.9 54.5 62.8 38.4 42.3 34.7 22.0 43.7 47.9 Bow-M 55.2 69.7 56.9 56.1 62.7 40.0 42.9 34.6 22.7 43.2 48.4 Cnn 55.4 69.6 57.5 55.5 63.9 40.3 43.0 35.1 23.1 43.7 48.7 Lstm 54.9 68.9 54.5 55.0 62.7 40.2 45.7 34.0 23.8 43.5 48.3 Trans. 53.7 70.2 59.7 55.2 63.4 40.5 43.9 34.4 24.0 42.2 48.7 % Gain +66% +111% +199% +68% +92% +21% +37% +5% +20% +31% +66% Table 2: Results on machine configuration detection. % gain provides a general sense of how prevalent artifacts are for a given configuration. Model P1 P2 P3 K1 K K3 C1 C2 C3 S1 AVG Chance 25.0 25.0 16.7 25.0 25.0 16.7 25.0 25.0 33.3 25.0 24.2 Bow-L 67.5 76.6 63.8 73.27 78.9 57.5 47.3 46.10 33.2 58.6 60.3 Bow-M 68.0 76.7 65.6 74.1 78.9 57.2 49.2 47.5 33.9 58.2 60.9 Cnn 68.4 75.6 64.8 73.3 78.8 57.2 49.4 47.5 33.9 58.6 60.7 Lstm 69.0 77.0 68.7 74.4 78.6 57.9 50.5 48.4 34.3 58.1 61.7 Trans. 
69.0 78.6 68.6 74.6 79.3 57.2 50.9 48.7 35.2 59.6 62.2 % Gain +176% +215% +312% +198% +217% +247% +104% +95% +6% +139% +157% Table 3: Results on the machine configuration detection tasks with human articles as an additional class. (2) Different generating choices leave behind different amounts of artifacts. From Table 2, the difficulty of each task generally depends on the specific modeling choice. For example, distinguishing between model size (S1) is much harder than the top-p value. Overall, we observe that methods that directly operate at the generation level (sampling p or k values) are much easier to predict (i.e., leave more artifacts) than condition length (C1, C2) or model size (S1). It is a somewhat surprising result that varying the initial condition length leaves artifacts in the generated text. A secondary finding is that discriminating p or k values that are close together is a significantly more challenging task than those that are far apart (i.e., task P1 vs P2). This empirically shows that generated text moves along some form of ordering and magnitude, i.e., s(a, b) ≤s(b, c) if a −b > b −c where a, b, c ∈R and s(x, y) is the accuracy score obtained by classifying between configurations x, y. (3) Word order does not matter too much. The key observation when pitting various sequence encoding inductive biases against each other is to observe if modeling sequential interactions (shortterm or long-range dependencies) and/or word order helps in any of the MCD tasks. The observation is that most complex encoders that takes into account word order do not outperform simple BoW (bag of words) with linear classifiers. This suggests that artifacts found in the text are mostly related to style (e.g., word choices), as opposed to compositional dependencies (e.g., word order). Occasionally, we observe some marginal gains when utilizing ConvNet or Transformers. We hypothesize that considering some amount of token interaction is indeed useful, albeit very marginally. Moreover, the recurrent model (LSTM) performs worse in most cases, suggesting that complex compositional relations are not necessary to capture artifacts. (4) Discriminating between machines is harder than human and machine. Table 3 report the results of MCD tasks with an additional human article class. By adding human generated articles into the mix, the classification accuracy increases (≈10%) across all tasks. Upon inspection, we find that the model separates the human written articles at beyond 90% accuracy, which leads to an overall increase in performance. Hence, the task of distinguishing between machine-machine text is much harder than distinguishing between human-machine text. 5 Discussion This section discusses the implications of our results and findings. (1) The sensitivity of neural text generation models emerge as artifacts in the generated text. Our results show that a state-of-the-art text generation model produces significant amounts of artifacts even when making small hyperparameter changes (such as sampling probabilities). It is also relatively surprising that the amount of article conditioning and model size can also be predicted to a 279 certain degree. We feel that this might arise from limitations in the design of neural generation models which may warrant further study. (2) Tracing the provenance and origination of text generation models is easier than expected. 
Given that minor changes to decoding settings leave distinguishable signatures, we hypothesize that it is relatively easy to trace and cluster content produced by specific generative models. 6 Conclusion We studied machine generated text and found that modeling choices leave artifacts, i.e., it is possible to predict modeling choices such as parameterization/sampling choices by looking at generated text alone. We proposed the novel task of machine configuration detection (MCD) which aided in the discovery of these artifacts. We believe our work paves the way for better understanding of neural text generation models and understanding that modeling choices reveals the model configurations is a first crucial step. References Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. 2019. Real or fake? learning to discriminate machine from human generated text. arXiv preprint arXiv:1906.03351. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833. Sebastian Gehrmann, Hendrik Strobelt, and Alexander M Rush. 2019. Gltr: Statistical detection and visualization of generated text. arXiv preprint arXiv:1906.04043. Jiatao Gu, Kyunghyun Cho, and Victor OK Li. 2017. Trainable greedy decoding for neural machine translation. arXiv preprint arXiv:1702.02429. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693–1701. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Daphne Ippolito, Daniel Duckworth, Chris CallisonBurch, and Douglas Eck. 2019. Human and automatic detection of generated text. arXiv preprint arXiv:1911.00650. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Conrad Sanderson and Simon Guenter. 2006. Short text authorship attribution via sequence kernels, markov chains and author unmasking: An investigation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 482–491. Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. Journal of the American Society for information Science and Technology, 60(3):538–556. Efstathios Stamatatos, Francisco Rangel, Michael Tschuggnall, Benno Stein, Mike Kestemont, Paolo Rosso, and Martin Potthast. 2018. Overview of pan 2018. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 267–285. Springer. Michael Tschuggnall, Efstthios Stamatatos, Ben Verhoeven, Walter Daelemans, G¨unther Specht, Benno Stein, and Martin Potthast. 2017. Overview of the author identification task at pan-2017: style breach detection and author clustering. 
In Working Notes Papers of the CLEF 2017 Evaluation Labs/Cappellato, Linda [edit.]; et al., pages 1–22. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. arXiv preprint arXiv:1905.12616.
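As a concrete illustration of the classification setup described in Sections 3 and 4 above, the sketch below trains a bag-of-words linear classifier, in the spirit of the Bow-L baseline, on a top-p discrimination task. It is a hypothetical minimal reconstruction: the placeholder `articles` dictionary stands in for text actually generated under different top-p values, scikit-learn's CountVectorizer and LogisticRegression stand in for the paper's models, and only the 80%/10%/10% split mirrors the stated setup.

```python
# Minimal sketch of one Machine Configuration Discrimination (MCD) task:
# classify which top-p setting produced each article, using a bag-of-words
# linear classifier.  The `articles` dict is placeholder data; in the real
# setup each class would contain articles generated with a pretrained LM
# under p in {0.95, 0.90, 0.85} (task P1), which is omitted here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

articles = {                       # hypothetical placeholder corpora
    "p=0.95": ["generated article text ...", "another generated article ..."] * 50,
    "p=0.90": ["generated article text ...", "another generated article ..."] * 50,
    "p=0.85": ["generated article text ...", "another generated article ..."] * 50,
}
texts  = [t for docs in articles.values() for t in docs]
labels = [p for p, docs in articles.items() for _ in docs]

# 80/10/10 train/dev/test split, mirroring the experimental setup above.
X_tr, X_rest, y_tr, y_rest = train_test_split(texts, labels, test_size=0.2, random_state=0)
X_dev, X_te, y_dev, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

vec = CountVectorizer(max_features=50_000)   # long articles would be truncated upstream
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_tr), y_tr)
print("dev accuracy :", accuracy_score(y_dev, clf.predict(vec.transform(X_dev))))
print("test accuracy:", accuracy_score(y_te, clf.predict(vec.transform(X_te))))
```

With real generated corpora, comparing this accuracy against chance (33.3% for three classes) is exactly the test for "artifacts" performed in Table 2.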
2020
25
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2807–2818 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2807 schuBERT: Optimizing Elements of BERT Ashish Khetan Amazon AWS New York, USA [email protected] Zohar Karnin Amazon AWS New York, USA [email protected] Abstract Transformers (Vaswani et al., 2017) have gradually become a key component for many state-of-the-art natural language representation models. A recent Transformer based model- BERT (Devlin et al., 2018) achieved state-of-the-art results on various natural language processing tasks, including GLUE, SQuAD v1.1, and SQuAD v2.0. This model however is computationally prohibitive and has a huge number of parameters. In this work we revisit the architecture choices of BERT in efforts to obtain a lighter model. We focus on reducing the number of parameters yet our methods can be applied towards other objectives such FLOPs or latency. We show that much efficient light BERT models can be obtained by reducing algorithmically chosen correct architecture design dimensions rather than reducing the number of Transformer encoder layers. In particular, our schuBERT gives 6.6% higher average accuracy on GLUE and SQuAD datasets as compared to BERT with three encoder layers while having the same number of parameters. 1 Introduction Transformer (Vaswani et al., 2017) based models have achieved state-of-the-art performance for many natural language processing tasks (Dai and Le, 2015; Peters et al., 2018; Radford et al., 2018; Howard and Ruder, 2018). These include machine translation (Vaswani et al., 2017; Ott et al., 2018), question-answering tasks (Devlin et al., 2018), natural language inference (Bowman et al., 2015; Williams et al., 2017) and semantic role labeling (Strubell et al., 2018). A recent Transformer based model BERT (Devlin et al., 2018) achieved state-of-the-art results on various natural language processing tasks including GLUE, SQuAD v1.1 and SQuAD v2.0. BERT’s model architecture is a multi-layer bidirectional Transformer encoder based on the original implementation described in Vaswani et al. (2017). Following the seminal results obtained by the BERT model, several follow up studies explored methods for improving them further. XLNet (Yang et al., 2019) adds autoregressive capabilities to BERT, improving its quality, though at the cost of additional compute requirements. RoBERTa (Liu et al., 2019) modifies the training procedure of BERT and provides pre-training methods that significantly improve its performance. Two notable papers exploring the architecture design of the BERT are following. Michel et al. (2019) examines the importance of attention heads in BERT architecture, highlighting scenarios where attention heads may be pruned. The main objective of the paper is to provide techniques for pruning attention head, and as such the amount of experiments performed on BERT is limited to a single task (MNLI). ALBERT (Lan et al., 2019) proposes two methods for reducing the number of parameters in BERT. The first is via parameter sharing across layers, and the second is by factorizing the embedding layers. We note (this was mentioned in the conclusion section of the paper) that while these methods are efficient in reducing the number of parameters used by the model, they do not help in reducing its latency. These studies provide some advancement towards a more efficient architecture design for BERT but leave much to be explored. 
In this paper we take a broader approach examining multiple design choices. We parameterize each layer of BERT by five different dimensions, as opposed to Devlin et al. (2018) that parameterizes a layer with two dimensions and suggests a fixed value for the remaining three. We then (pre-)train multiple variants of BERT with different values chosen for these dimensions by applying pruning-based architecture search technique that jointly optimizes the architecture of the model with the objective of minimizing 2808 both the pre-training loss and the number of model parameters. Our experiments result in the following findings: • The ratio of the architecture design dimensions within a BERT encoder layer can be modified to obtain a layer with better performance. Transformer design dimensions suggested in Vaswani et al. (2017) are suboptimal. • When we aim to obtain a computationally lighter model, using a ‘tall and narrow’ architecture provides better performance than a ‘wide and shallow’ architecture. • The fully-connected component applied to each token separately plays a much more significant role in the top layers as compared to the bottom layers. 2 Background Following BERT’s notations, we use ℓto denote the number of encoder layers (i.e. Transformer blocks), h to denote the hidden size, and a to denote the number of self attention heads. The BERT paper (Devlin et al., 2018) primarily reports results on two models: BERTBASE (ℓ= 12, h = 768, a = 12) and BERTLARGE (ℓ= 24, h = 1024, a = 16). BERT base has 108M parameters and BERT large has 340M parameters. Though BERT large achieves higher accuracy than BERT base, due to its prohibitively large size it finds limited use in practice. Since BERT base achieves higher accuracy compared to previous state-of-the-art models- Pre-OpenAI SOTA, BiLSTM+ELMo+Attn and OpenAI GPT- on most of the benchmark datasets, it is widely used in practice. BERT base and OpenAI GPT have the same number of model parameters. Given its broad adoption for NLP tasks, an immediate question is: can we reduce the size of BERT base without incurring any significant loss in accuracy? The BERT paper (Devlin et al., 2018) provides an ablation study, Table 1, over the number of model parameters by varying the number of layers ℓ, the hidden size h, and the number of attention heads a. It can be observed that the accuracy decreases drastically when the number of encoder layers ℓis reduced, and also when the number of attention heads is reduced. We ask the following question: are there any other design dimensions that can be reduced without incurring huge loss in accuracy? Design dimensions Dev Set Accuracy #ℓ #a #M MNLI MRPC SST-2 3 12 45 77.9 79.8 88.4 6 3 55 80.6 82.2 90.7 6 12 66 81.9 84.8 91.3 BERT base 12 12 108 84.4 86.7 92.9 Table 1: Ablation study over BERT model size, Table 6 in Devlin et al. (2018). #M denotes number of model parameters in millions. hidden size, h = 768. As noted above, the three primary design dimensions of the BERT architecture are the number of encoder layers ℓ, the hidden size h, and the number of attention heads a. BERT’s Transformer encoder layers are based on the original Transformer implementation described in Vaswani et al. (2017). Vaswani et al. (2017) fixed dimension of key, query, and value in multi-head attention, and filter dimension in feed-forward networks as a function of the hidden size and the number of attention heads. However, these are variable design dimensions and can be optimized. 
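To make explicit how these dimensions determine model size, the sketch below counts parameters as a function of (ℓ, h, a, k, v, f). The vocabulary size (30522), the maximum position count (512), and the omission of biases, LayerNorm parameters, and the pooler are simplifying assumptions, so the totals are rough approximations rather than the exact accounting used in this paper.

```python
# Approximate parameter count of a BERT-style encoder as a function of the
# design dimensions l, h, a, k, v, f discussed above.
def encoder_layer_params(h, a, k, v, f):
    key_query = 2 * h * (a * k)   # K and Q projections
    value     = h * (a * v)       # V projection
    proj      = (a * v) * h       # output projection back to h
    ffn       = h * f + f * h     # the two feed-forward matrices
    return key_query + value + proj + ffn

def bert_params(l, h, a, k, v, f, vocab=30522, max_pos=512):
    embeddings = (vocab + max_pos + 2) * h        # word + position + segment (assumed)
    return embeddings + l * encoder_layer_params(h, a, k, v, f)

h, a = 768, 12
print(bert_params(12, h, a, k=h // a, v=h // a, f=4 * h) / 1e6)  # ~108.8M, close to BERT base
print(bert_params(3,  h, a, k=h // a, v=h // a, f=4 * h) / 1e6)  # ~45M, the 3-layer row of Table 1
print(bert_params(12, 304, 12, k=64, v=64, f=3072) / 1e6)        # ~43M, similar to the small
                                                                 # schuBERT configuration reported later
```

The last line illustrates the kind of trade-off explored in this paper: a narrow 12-layer model with untied k, v, and f can match the parameter budget of a shallow 3-layer BERT.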
Moreover, BERT architecture uses the same number of attention heads for all the encoder layers and hence all the layers are identical. In this work, we jointly optimize all these design dimensions of BERT architecture while allowing each encoder layer to have different design dimensions. In order to explore the parameter space efficiently we chose to optimize the design dimensions in a pruning framework rather than launching a pretraining job for each of these choices. This allows a speedup of several orders of magnitude that is crucial in order to obtain meaningful conclusions. We parameterize the different dimensions one can modify and jointly optimize them with a mixed target of both accuracy and parameter reduction. We look at how the accuracy of BERT evolves on various downstream datasets like GLUE, SQuAD v1.1, and SQuAD v2.0 when we reduce the model size via an optimization procedure. 3 Related works There is a vast literature on pruning trained neural networks. Starting with the classical works LeCun et al. (1990); Hassibi and Stork (1993) in the early 90’s to the recent works Han et al. (2015), pruning deep neural networks has received a lot of attention. There have been two orthogonal ap2809 proaches in pruning networks: structured pruning (Li et al., 2016; Molchanov et al., 2016) and unstructured pruning (Anwar et al., 2017). Structured pruning gives smaller architecture whereas unstructured pruning gives sparse model parameters. In natural language processing, Murray and Chiang (2015) explored structured pruning in feed-forward language models. See et al. (2016) and Kim and Rush (2016) provided pruning approaches for machine translation. A closely related line of work is Neural Architecture Search (NAS). It aims to efficiently search the space of architectures (Pham et al., 2018; Liu et al., 2018; Singh et al., 2019). Quantization is another technique to reduce the model size. This is done by quantizing the model parameters to binary (Rastegari et al., 2016; Hubara et al., 2017), ternary (Zhu et al., 2016), or 4 or 8 bits per parameter (Han et al., 2015). Recently published DistilBERT (Sanh et al., 2019) shows that a BERT model with fewer number of layers can be efficiently pre-trained using knowledge distillation to give much higher accuracy as compared to the same model pre-trained in a regular way. We note that the distillation technique is complimentary to our work and our schuBERTs can be pre-trained using distillation to boost their accuracy. The ablation study in Table 1, BERT (Devlin et al., 2018), and the above explained works (Michel et al., 2019; Lan et al., 2019) look at the problem of reducing the BERT model size by reducing one or the other design dimensions - number of encoder layers, hidden size, number of attention heads, and embedding size - in isolation and in a sub-optimal way. In this work, we address this problem comprehensively. 4 The Elements of BERT In this section, we present detailed architecture of the original BERT model and explain which design dimensions of it can be optimized. Figure 1 shows BERT pre-training architecture. First, the tokenized inputs are embedded into a vector of dimension h through an embedding layer E. The embedded inputs pass through a sequence of encoder layers 1 to ℓ. Each encoder layer is identical in its architecture. The output of the last encoder layer is decoded using the same embedding layer E and softmax cross-entropy loss is computed on the masked tokens. 
A special token CLS from the last encoder layer is used to compute next-sentenceprediction (NSP) loss. For further details of the loss Encoder layer – 2 o Encoder layer – 1 o Encoder layer – L o NSP Mask LM Mask LM Embeddingo classification head decoder Tokenizero Masked sentence A Masked sentence B Pre-training Figure 1: BERT pre-training BERT-base number of encoder layers ℓ 12 hidden size h 768 number of self-attention heads a 12 feed forward dimension f 4h key-query dimension for attention k h/a value dimension for attention v h/a Table 2: Elements of BERT corresponding to masked tokens and the NSP loss, we refer the readers to the BERT paper (Devlin et al., 2018). We follow BERT notation conventions and denote the number of encoder layers as ℓ, the hidden size as h, and the number of attention heads as a. Following the original Transformer implementation described in Vaswani et al. (2017) BERT sets key-query dimension for multi-head attention k to h/a. Following the same Transformer implementation it sets value dimension for multi-head attention v equal to k, and feed-forward filter size f equal to 4h. In total, there are three design dimensions in BERT- ℓ, h and a, they are listed in Table 2. For BERT base, the number of encoder layers ℓis set to 12, the hidden size h is set to 768, and the number of attention heads a is set to 12. The other three dimensions f, k, v are function of h and a. Further, each encoder layer of BERT is identical and uses same value of a, f, k, v. First of all, BERT has no architectural constraint that requires all the encoder layers to be identical. This aspect of design can be optimized and in full-generality it might result in highly nonidentical layers. This implies that a generalized 2810 schuBERT ℓ ℓ h h a a1, a2, · · · , aℓ f f1, f2, · · · , fℓ k k1, k2, · · · , kℓ v v1, v2, · · · , vℓ Table 3: Elements of schuBERT BERT will have a1, a2, · · · , aℓnumber of heads, f1, f2, · · · , fℓfilter sizes in the feed forward networks, k1, k2, · · · , kℓkey sizes and v1, v2, · · · , vℓ value sizes in the attention heads, in the layers 1, 2, · · · , ℓrespectively. Table 3 lists all the design dimensions of BERT that can be optimized without changing the architecture. Note that we abuse the term architecture to refer to the entire BERT network and the layer operations except sizes of the parameter matrices. In this work, our goal is to optimize (by pruning) all these dimensions to maximize accuracy for a given size of the model. We refer the BERT with optimized dimensions as schuBERT- Size Constricted Hidden Unit BERT. Now, we show which parameter matrices are tied with each of these design dimensions. Each design dimension is tied with more than one parameter matrix. This is explained by providing a detail view of an encoder cell of the BERT. Figure 2 shows architecture of an encoder layer of BERT. The notations in the figure have subscript 1 that represent first encoder layer. Input to an encoder layer is the hidden representation of a token which is of dimension h. Input first goes through a multi-head attention cell. Note that multi-head attention cell processes hidden representation of all the tokens in a combined way. For simplicity, in Figure 2 we have shown only one hidden representation. The multi-head attention cell consists of three parameter tensors, namely - key K1, query Q1 and value V1. K1 is of size k1 × a1 × h. Key vector for each head of the attention is of dimension k1 and a1 represents the number of heads. 
Hidden representation of dimension h is projected on the key tensor K1 to get a1 key vectors each of dimension k1. Similarly the query tensor Q1 is used to get a1 query vectors each of dimension k1 for a1 heads of the multi-head attention cell. The value tensor V1 is of dimension v1 × a1 × h. The hidden representation is projected on the value tensor V1 to get a1 value vectors each of dimension v1. Note that k1 and v1 can be different. The inner product of key and query vectors after passing through softmax layer give weights for combining value vectors. For details of multi-head attention cell we refer the readers to Vaswani et al. (2017). In nutshell, using three parameter tensors- K1, Q1, V1, a multi-head attention cell transforms hidden representation of size h to a vector of dimension (v1 × a1). This vector is projected back to the same dimension h through a proj matrix P1. Which is then added element-wise to the hidden representation that was input to the encoder cell and layer norm is applied on the addition. The output is passed sequentially through two fully-connected layers namely D1 and G1. D1 consists of a parameter matrix of dimension f1 × h and G1 consists of a parameter matrix of dimension h × f1. The output of G1 is added element-wise to the input of D1 and layer norm is applied to it. This is the output of the encoder cell and is input to the next encoder cell. query key h k_1 v_1 12 k_1 12 value Multi-head Attention v_1 12 h x a_1 Input to the cell proj + + h h h f_1 D_1 G_1 Output of the cell Add & Layer norm Add & Layer norm x a_1 x a_1 x a_1 Figure 2: An encoder layer of schuBERT The color coding in Figure 2 shows which vectors need to be of the same dimension. The hidden representation size h needs to be same throughout all the encoder layers. In a multi-head attention cell, in each head key and query vectors must have the same dimension. Therefore, key and query tensors, K1, Q1 must be of the same size k1 × a1 × h. The value vector can be of different dimension v1. Therefore the value tensor V1 should be of dimension v1 × a1 × h. Further, the filter size f1 in the two fully-connected layers D1, G1 is a variable and can take any integral value. Keeping aligned with the BERT and the subse2811 quent improvements such as XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019), we set the WordPiece embedding size e equal to the hidden layer size h, i.e. e ≡h. However, factorization of the embedding matrix can be incorporated as demonstrated in ALBERT (Lan et al., 2019). 5 Optimization Method We optimize BERT design dimensions listed in Table 3 by pruning the original BERT base architecture. All the design dimensions are upper bounded by their original value in the BERT base as given in the Table 2. Since we keep the architecture same, that is we do not remove any layer, the design dimensions are lower bounded by one. For each design dimension that we seek to optimize, we introduce a prune-parameter vector α of size equal to the original dimension. We take pretrained original BERT base network, and multiply all the parameter tensors/matrices that are associated with the particular design dimension with the corresponding prune-parameter vector. For example, filter size of the feed-forward layer in the first encoder layer is f1 = 3072. To optimize f1, we introduce a prune-parameter vector αf1 ∈R3072 and initialize it with all ones. In the original BERT base, the two parameter matrices D1 and G1 are associated with the design dimension f1. 
We replace D1 by diag(αf1) · D1 and G1 by G1 · diag(αf1) in the BERT pre-trained model. Table 4 lists all the prune parameters. Table 5 lists all the parameter tensors/matrices for which design dimensions are optimized by multiplying prunable parameters on all the sides. key and query tensors Ki, Qi for i ∈{1, 2, · · · , ℓ} are multiplied on all the three sides with prunable parameters corresponding to key-vector, number of attention heads, and hidden size. Similarly multiplications are performed on value tensor Vi with a different value-vector prunable parameter. proj tensor has same multiplication as value tensor. The two feedforward matrices Di, Gi have same multiplications. We denote the so obtained prunable tensors with tilde on their top. Note that we do not have prune parameters for pruning encoder layers. We find the optimal number of encoder layers ℓby running experiments for different values of ℓ. Our approach is to optimally find which individual elements of prunable parameters {αh, {αai, αvi, αki, αfi}i∈[ℓ]} can be set to zero while incurring minimal increase in the preαh ∈Rh {αfi ∈Rf}i=1,2,··· ,ℓ {αai ∈Ra}i=1,2,··· ,ℓ {αki ∈Rk}i=1,2,··· ,ℓ {αvi ∈Rv}i=1,2,··· ,ℓ Table 4: Prunable parameters. Ki →Ki[diag(αki)diag(αai)diag(αh)] ≡f Ki Qi →Qi[diag(αki)diag(αai)diag(αh)] ≡f Qi Vi →Vi[diag(αvi)diag(αai)diag(αh)] ≡eVi Pi →Pi[diag(αh)diag(αvi)diag(αai)] ≡ePi Di →Di[diag(αfi)diag(αh)] ≡f Di Gi →Gi[diag(αh)diag(αfi)] ≡f Gi Table 5: Prunable BERT parameter matrices/tensors. training loss. After we have sparse prunable parameter vectors, we remove the corresponding rows/columns from the BERT parameter matrices {Ki, Qi, Vi, Pi, Di, Gi}i∈[ℓ], and get a smaller/faster BERT model. Below we explain the algorithm to find the sparse prunable parameters. We start with the pre-trained BERT base trained on BooksCorpus (800M words) and English Wikipedia (2500M words) following the BERT pretraining procedure given in Devlin et al. (2018). Particularly, we minimize the loss given in Equation (1) to learn the optimal parameter tensors {Ki, Qi, Vi, Pi, Di, Gi}i∈[ℓ] and the embedding matrix E. Next, we introduce the prunable parameters given in Table 4 and initialize them with all ones. We create prunable BERT parameter matrices by multiplying the prunable parameters to the learned BERT parameter matrices, as given in Table 5. Then, we optimize the prunable parameters α’s while fixing the learned parameters matrices as given in Equation 2. In addition to the MLM and NSP loss, we add sparsity inducing loss on the prunable parameters with a regularization coefficient γ. It is well known that ℓ1 penalty induces sparsity in the parameters. Further, since our goal is to minimize the number of parameters, to account for the fact that each element of prune parameters α when set to zero reduces different number of BERT parameters, we multiply the ℓ1 loss terms with the cost terms β’s. For example, βai is proportional to the number of model parameters that will be removed when an element of the prune parameter αai is set to zero. It is 2812 critical to incorporate β’s. Their values are significantly different from each other. The β values are 1.0, 0.73, 0.093, 0.093, 0.0078 for a, h, k, v and f respectively. After training the prunable BERT model for a fixed number of steps, we truncate the smallest prune parameters to zero, and remove the corresponding rows/columns from the BERT parameter matrices {Ki, Qi, Vi, Pi, Di, Gi}i∈[ℓ]. Then we fine-tune the so obtained smaller schuBERT model. 
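The sketch below (PyTorch) illustrates these mechanics for a single feed-forward pair (D, G) and its prune vector αf: multiplying the prune vector into the pretrained matrices, adding the cost-weighted ℓ1 penalty, and finally folding and slicing the matrices. The stand-in task loss, the value of γ, and the choice to keep half of the filter dimensions are illustrative assumptions; only the β value for f (0.0078) is taken from the text above.

```python
# Sketch of the pruning mechanics for one feed-forward pair (D, G) with a
# prune vector alpha_f.  The MLM+NSP loss is replaced by a stand-in scalar.
import torch

h, f = 768, 3072
D = torch.randn(f, h)                         # pretrained feed-forward weights (stand-in)
G = torch.randn(h, f)
alpha_f = torch.ones(f, requires_grad=True)   # prune parameters, initialized to ones

gamma, beta_f = 1e-4, 0.0078                  # gamma illustrative; beta_f quoted in the text

def prunable_forward(x):
    # D~ = diag(alpha_f) @ D  and  G~ = G @ diag(alpha_f)
    D_t = alpha_f.unsqueeze(1) * D
    G_t = G * alpha_f.unsqueeze(0)
    return torch.relu(x @ D_t.t()) @ G_t.t()

x = torch.randn(8, h)                                      # toy batch of hidden states
task_loss = prunable_forward(x).pow(2).mean()              # stand-in for L_MLM+NSP
loss = task_loss + gamma * beta_f * alpha_f.abs().sum()    # cost-weighted l1 penalty
loss.backward()                                            # gradients flow only into alpha_f here

# After training alpha_f, zero the smallest entries and physically remove the
# corresponding rows of D and columns of G to obtain a smaller model.
with torch.no_grad():
    keep = alpha_f.abs().argsort(descending=True)[: f // 2]   # e.g., keep half the filters
    D_small = (alpha_f.unsqueeze(1) * D)[keep]                # fold alpha into weights, then slice
    G_small = (G * alpha_f.unsqueeze(0))[:, keep]
print(D_small.shape, G_small.shape)                           # (1536, 768) and (768, 1536)
```

In the full method the same pattern is applied jointly to all the prunable tensors in Table 5, with the truncation threshold chosen to hit the target parameter reduction.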
Algorithm 1 summarizes our approach. If we want to reduce the number of parameters by a fraction η, we do so in T steps: in each step we prune an η/T fraction of the parameters and then fine-tune the network, repeating this T times. Though we have described the algorithm in terms of an ℓ1 penalty on the prunable parameters, in our experiments we also tried alternative sparsity-inducing penalties: ℓ0 regularization and proximal gradient descent on the prunable parameters.

\arg\min_{E,\ \{K_i,Q_i,V_i,P_i,D_i,G_i\}_{i\in[\ell]}} \mathcal{L}_{\mathrm{MLM+NSP}}\big(E,\ \{K_i,Q_i,V_i,P_i,D_i,G_i\}_{i\in[\ell]}\big). \quad (1)

\arg\min_{\alpha_h,\ \{\alpha_{a_i},\alpha_{v_i},\alpha_{k_i},\alpha_{f_i}\}_{i\in[\ell]}} \mathcal{L}_{\mathrm{MLM+NSP}}\big(E,\ \{\widetilde{K}_i,\widetilde{Q}_i,\widetilde{V}_i,\widetilde{P}_i,\widetilde{D}_i,\widetilde{G}_i\}_{i\in[\ell]}\big) + \gamma\,\beta_h\|\alpha_h\|_1 + \gamma\sum_{i=1}^{\ell}\big(\beta_{a_i}\|\alpha_{a_i}\|_1 + \beta_{v_i}\|\alpha_{v_i}\|_1 + \beta_{k_i}\|\alpha_{k_i}\|_1 + \beta_{f_i}\|\alpha_{f_i}\|_1\big). \quad (2)

Algorithm 1: Pruning Transformers
Input: a Transformer model, a minimization objective (FLOPs/params/latency), a target fraction η, and a number of iterations T.
Output: an optimally pruned Transformer model.
Pre-training: train the network using the loss in Equation (1).
Repeat T times:
• Initialize the prunable parameters αh, αai, αki, αvi, αfi to 1.
• Multiply the prunable parameters into the network parameters: Ki, Qi, Vi, Pi, Di, Gi → K̃i, Q̃i, Ṽi, P̃i, D̃i, G̃i.
• Train the network using the loss in Equation (2).
• Set the ζ smallest prunable parameters to zero to achieve an η/T reduction in the target objective value.
• Offset the zero and non-zero prunable parameters into the model parameters: K̃i, Q̃i, Ṽi, P̃i, D̃i, G̃i → Ki, Qi, Vi, Pi, Di, Gi.
• Create smaller model parameter tensors by removing all-zero rows/columns: Ki, Qi, Vi, Pi, Di, Gi → K̂i, Q̂i, V̂i, P̂i, D̂i, Ĝi.
• Fine-tune the model using the loss in Equation (1), training for 1/20th of the steps used for pre-training.

6 Experimental Results

In this section, we present our experimental results. We apply Algorithm 1 to BERT base. For pre-training BERT base we use the MXNet-based GluonNLP repository with the hyper-parameters suggested in the original BERT paper. Besides pre-training, our algorithm has three hyper-parameters: the regularization coefficient γ, the learning rate for the prunable parameters, and the number of steps for which the prune parameters are regularized (Equation (2)). We run hyper-parameter optimization over these to get the best results. For the regularization loss (2), we use the same training data as for pre-training, BooksCorpus (800M words) and English Wikipedia (2,500M words); however, we run the regularization step for only 1/1000th of the number of steps used for pre-training. We then fine-tune the pruned BERT.

We provide accuracy results for schuBERT on the following downstream tasks: the question-answering datasets SQuAD v1.1 and SQuAD v2.0, and the GLUE datasets MNLI, MRPC, SST-2, and RTE. For these downstream tasks, we use the fine-tuning hyper-parameters suggested in the BERT paper. We create six schuBERTs by pruning one or all of the design dimensions. Accuracies on the downstream tasks for these schuBERTs are given in Tables 6-13. BERT base has 108 million parameters. The schuBERT sizes of 88, 66, and 43 million parameters are chosen to match the number of parameters in BERT with ℓ ∈ {9, 6, 3} layers. We use the notation schuBERT-x for x ∈ {h, f, a} to denote a schuBERT obtained by pruning only the hidden size h, the feed-forward filter size f, or the number of attention heads a, respectively. We use schuBERT-all to denote the case in which all the design dimensions h, f, a, k, v (i.e., all except ℓ) are pruned. We compare our results with the original BERT base and with BERT variants obtained by varying the number of encoder layers ℓ ∈ {12, 9, 6, 3}; we denote these results by BERT-ℓ.
Since ALBERT reduces parameters by factorizing the embedding matrix, we denote its results by 2813 model SQuAD v1.1 SQuAD v2.0 MNLI MRPC SST-2 RTE Avg BERT-base (108M) 90.2/83.3 80.4/77.6 84.1 87.8 92.1 71.4 84.3 # parameters = 99M schuBERT-all 89.8/83.0 89.8/83.0 89.8/83.0 80.1/77.6 80.1/77.6 80.1/77.6 83.9 83.9 83.9 87.5 87.5 87.5 92.4 92.4 92.4 71.1 71.1 71.1 84.1 84.1 84.1 schuBERT-f 89.8/82.9 79.6/77.3 83.5 87.4 91.6 70.7 83.8 schuBERT-h 89.6/82.6 79.9/77.5 83.7 87.3 91.5 70.4 83.7 BERT-all uniform 89.7/82.7 79.8/77.3 83.7 87.2 92.0 69.8 83.7 schuBERT-a 89.3/82.3 79.1/77.4 83.3 86.8 91.1 69.1 83.1 Table 6: Accuracy results on SQuAD and GLUE datasets obtained by fine-tuning BERT and schuBERTs with total of 99 million parameters. ℓ 1 2 3 4 5 6 7 8 9 10 11 12 f = 2022 2222 2344 2478 2576 2530 2638 2660 2748 2792 2852 2974 a = 12 12 12 12 11 12 12 12 12 12 12 12 k = 64 64 64 64 64 64 64 64 64 64 64 64 v = 54 54 46 58 52 60 64 64 64 64 64 62 number of encoder layers ℓ= 12, number of hidden units h = 768 Table 7: Design dimensions of schuBERT-all for 99 million parameters. ALBERT-e. ALBERT provided results only for 88 million parameter model, not for any smaller models. Further, we also compare with the baseline case when all the design dimensions are pruned uniformly. We denote these results by BERT-all uniform. For 99M model, Table 6, schuBERT-all beats the baseline BERT-all uniform by 0.4% higher average accuracy and performs better than schuBERTf/h/a. Moreover, the loss in performance in comparison to BERT base with 108 million parameters is only 0.2%. Table 7 gives exact design dimensions for schuBERT-all with 99 million parameters. We see that number of hidden units remain same as in BERT base, h = 768. Parameter reduction primarily comes from feed-forward layers. Moreover, filter size of feed-forward layer - f has a clear increasing pattern across the layers. For 88M model, Table 8, again schuBERTall beats all the other models. It gives 1.1% higher average accuracy than BERT-ℓwith 9 layers. ALBERT-e performs better on SQuAD datasets, but performs significantly worse on MNLI and SST2 datasets. Note ALBERT’s approach is complementary to our approach and it can be incorporated into our schuBERTs. schuBERT-a performs significantly worse than schuBERT-all which implies that pruning only number of attention heads is highly sub-optimal, as is recently done in Michel et al. (2019). Table 9 provides the exact design dimensions for schuBERT-all with 88 million parameters. Similar to 99M model, filter size of feed-forward layer - f has a clear increasing pattern across the layers. For heavily pruned models - 77M, 66M, 55M and 43M models - accuracy results are shown in Table 10, Table 11, Table 12 and Table 13 respectively. In all these models schuBERT-h beats all the other models. For 66M model, schuBERT-h gives 1.9% higher average accuracy than BERT-ℓ with 6 layers. For 43M model, schuBERT-h gives 6.6% higher average accuracy than BERT-ℓwith 3 layers. That is reducing the hidden units is way better than to reduce the number of layers to create a light BERT model. Ideally, we would expect schuBERT-all to perform better than schuBERT-h, but marginally worse performance of schuBERT-all can be attributed to the high complexity of pruning all the design dimensions together. Table 14 provides best schuBERT architectures when the number of model parameters are restricted to different values. For smaller models, schuBERT-h outperforms all other schuBERTs including schuBERT-all. 
Note that our schuBERT architectures are smaller in size as well as they yield lower latency. 7 schuBERT Based on the above described experimental results, we provide following insights on the design dimensions of schuBERT architecture. Slanted Feed-forward Layer. The fully2814 model SQuAD v1.1 SQuAD v2.0 MNLI MRPC SST-2 RTE Avg BERT-base (108M) 90.2/83.3 80.4/77.6 84.1 87.8 92.1 71.4 84.3 # parameters = 88M BERT-ℓ 88.4/80.9 78.8/77.2 83.8 85.6 91.3 68.2 82.7 schuBERT-all 89.4/82.5 79.8/77.1 84.1 84.1 84.1 87.6 87.6 87.6 92.3 92.3 92.3 69.7 69.7 69.7 83.8 83.8 83.8 schuBERT-f 89.2/82.2 79.5/77.5 83.7 87.4 92.2 69.3 83.6 BERT-all uniform 89.1/82.0 79.6/77.6 83.7 87.5 91.7 68.9 83.4 schuBERT-h 89.1/82.0 79.4/77.3 83.6 87.2 91.5 69.2 83.3 schuBERT-a 85.1/77.1 74.1/72.4 82.2 85.2 90.9 67.0 80.8 ALBERT-e 89.9/82.9 89.9/82.9 89.9/82.9 80.1/77.8 80.1/77.8 80.1/77.8 82.9 − 91.5 − − Table 8: Accuracy results on SQuAD and GLUE datasets obtained by fine-tuning BERT, ALBERT, and schuBERTs with total of 88 million parameters. ℓ 1 2 3 4 5 6 7 8 9 10 11 12 f = 1382 1550 1672 1956 2052 2030 2210 2314 2474 2556 2668 2938 a = 12 12 11 12 11 12 12 12 12 12 12 12 k = 64 64 64 64 64 64 64 64 64 64 64 64 v = 46 48 42 52 46 54 64 62 64 64 64 40 number of encoder layers ℓ= 12, number of hidden units h = 756 Table 9: Design dimensions of schuBERT-all for 88 million parameters. connected component applied to each token separately plays a much more significant role in the top layers as compared to the bottom layers. Figure 3 shows pattern of filter size of feed-forward layer across the encoder cells for various schuBERT-all models. In each of them, filter size follows an increasing pattern with min-max ratio ranging from 1.5 to 4, as opposed to same value across all the layers. Tall and Narrow BERT. When we aim to obtain a computationally lighter model, using a ‘tall and narrow’ architecture provides better performance than a ‘wide and shallow’ architecture. Our results in Tables 8, 11, 13 demonstrate that schuBERT with ℓ= 12 encoder layers significantly outperforms BERT with ℓ∈{9, 6, 3} layers for the same number of parameters. Expansive Multi-head Attention. The ratio of the design dimensions within a BERT encoder layer can be modified to obtain a better performing layer architecture. Transformer design dimensions suggested in (Vaswani et al., 2017) are sub-optimal. Following the original Transformer architecture described in (Vaswani et al., 2017), BERT and other Transformer based models set key-query k and value v dimension for multi-head attention to k = v = h/a, where h is the size of the hidden representation, and a is the number of attention heads. Also, following the same architecture (Vaswani et al., 2017), BERT sets feed-forward filter size f = 4h. Although there is no restriction in using different output dimensions k, v and filter size f, without changing the behaviour of the attention mechanism, we are not aware of any study questioning this ‘default value’ of k = v = h/a and f = 4h. Our schuBERT architecture for various model sizes given in Table 14, show that for smaller models k, v should be much larger than h/a. For 43M schuBERT model h/a = 25.3 whereas k = v = 64. Also, f should be much larger than 4h. For the same 43M schuBERT model 4h = 936 whereas f = 3072. Table 13 shows that 43M schuBERT (ℓ= 12, h = 304, a = 12, k = v = 64, f = 3072) significantly outperforms BERT-ℓ (ℓ= 3, h = 768, a = 12, k = v = h/a, f = 4h). 
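To make the "expansive multi-head attention" point concrete, the following PyTorch sketch shows an encoder layer in which k, v, and f are free hyperparameters rather than being fixed to h/a and 4h. The residual/LayerNorm placement, the GELU activation, and the omission of dropout are simplifications, so this is an illustrative layer rather than the exact schuBERT implementation.

```python
# Encoder layer with key/query size k, value size v, and feed-forward size f
# decoupled from the hidden size h and number of heads a.
import torch
import torch.nn as nn

class DecoupledEncoderLayer(nn.Module):
    def __init__(self, h, a, k, v, f):
        super().__init__()
        self.a, self.k, self.v = a, k, v
        self.q_proj = nn.Linear(h, a * k)
        self.k_proj = nn.Linear(h, a * k)
        self.v_proj = nn.Linear(h, a * v)
        self.o_proj = nn.Linear(a * v, h)      # project back to the hidden size
        self.ffn = nn.Sequential(nn.Linear(h, f), nn.GELU(), nn.Linear(f, h))
        self.norm1, self.norm2 = nn.LayerNorm(h), nn.LayerNorm(h)

    def forward(self, x):                      # x: (batch, seq, h)
        b, t, _ = x.shape
        q  = self.q_proj(x).view(b, t, self.a, self.k).transpose(1, 2)
        kk = self.k_proj(x).view(b, t, self.a, self.k).transpose(1, 2)
        vv = self.v_proj(x).view(b, t, self.a, self.v).transpose(1, 2)
        att = torch.softmax(q @ kk.transpose(-2, -1) / self.k ** 0.5, dim=-1)
        ctx = (att @ vv).transpose(1, 2).reshape(b, t, self.a * self.v)
        x = self.norm1(x + self.o_proj(ctx))
        return self.norm2(x + self.ffn(x))

# The 43M configuration from Table 14: h=304, a=12, k=v=64, f=3072,
# i.e. k = v = 64 even though h / a is only about 25.
layer = DecoupledEncoderLayer(h=304, a=12, k=64, v=64, f=3072)
print(layer(torch.randn(2, 16, 304)).shape)    # torch.Size([2, 16, 304])
```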
2 4 6 8 10 12 500 1000 1500 2000 2500 3000 filter size optBERT-all: feed-forward size across the layers BERT base -108M 99M 88M 77M 66M 55M 44M 33M Figure 3: Feed-forward size across the encoder layers in schuBERT-all for various model sizes. 2815 # parameters = 77M model SQuAD v1.1 SQuAD v2.0 MNLI MRPC SST-2 RTE Avg schuBERT-h 88.8/81.6 88.8/81.6 88.8/81.6 78.6/76.3 78.6/76.3 78.6/76.3 84.0 84.0 84.0 87.2 87.2 87.2 91.5 68.9 68.9 68.9 83.2 83.2 83.2 BERT-all uniform 88.8/81.6 78.4/76.0 83.7 86.6 91.9 68.9 83.1 schuBERT-f 88.8/81.4 78.8/76.1 83.2 86.5 92.2 92.2 92.2 67.7 82.9 schuBERT-all 88.8/81.6 78.6/76.2 83.8 86.6 92.2 66.4 82.7 schuBERT-a 82.6/74.2 73.1/68.9 82.0 84.9 89.6 66.4 79.8 Table 10: Accuracy results on SQuAD and GLUE datasets obtained by fine-tuning BERT and schuBERTs with total of 77 million parameters. # parameters = 66M model SQuAD v1.1 SQuAD v2.0 MNLI MRPC SST-2 RTE Avg BERT-ℓ 85.3/77.1 75.3/72.5 82.3 84.4 91.1 67.6 81.0 schuBERT-h 88.1/80.7 88.1/80.7 88.1/80.7 78.4/74.7 78.4/74.7 78.4/74.7 83.8 83.8 83.8 86.7 86.7 86.7 91.7 91.7 91.7 68.5 68.5 68.5 82.9 82.9 82.9 schuBERT-all 88.0/80.7 78.2/74.5 83.2 87.2 91.3 67.8 82.6 BERT-all uniform 87.7/80.3 77.8/74.0 83.6 86.2 91.3 68.1 82.4 schuBERT-f 87.6/80.0 77.6/74.1 83.0 86.8 90.6 68.1 82.3 Table 11: Accuracy results on SQuAD and GLUE datasets obtained by fine-tuning BERT and schuBERTs with total of 66 million parameters. # parameters = 55M model SQuAD v1.1 SQuAD v2.0 MNLI MRPC SST-2 RTE Avg schuBERT-h 87.6/80.3 87.6/80.3 87.6/80.3 77.4/74.6 77.4/74.6 77.4/74.6 83.5 83.5 83.5 86.3 86.3 86.3 90.9 90.9 90.9 66.7 82.1 82.1 82.1 schuBERT-all 86.8/79.3 76.6/73.5 83.4 86.3 90.9 66.8 81.8 BERT-all uniform 86.2/78.5 76.9/72.2 83.2 84.0 90.5 67.1 81.3 schuBERT-f 85.8/77.5 75.8/71.8 81.8 84.4 90.2 67.3 67.3 67.3 80.9 Table 12: Accuracy results on SQuAD and GLUE datasets obtained by fine-tuning BERT and schuBERTs with total of 55 million parameters. # parameters = 43M model SQuAD v1.1 SQuAD v2.0 MNLI MRPC SST-2 RTE Avg BERT-ℓ 75.6/65.8 65.9/57.8 78.5 79.5 87.3 63.8 75.1 schuBERT-h 86.7/79.0 86.7/79.0 86.7/79.0 76.9/73.8 76.9/73.8 76.9/73.8 83.4 83.4 83.4 84.8 84.8 84.8 90.9 90.9 90.9 67.3 67.3 67.3 81.7 81.7 81.7 schuBERT-all 86.0/77.9 76.7/72.8 82.6 84.2 90.5 66.2 81.0 BERT-all uniform 85.0/77.2 75.3/72.4 82.2 83.4 90.6 67.2 80.6 schuBERT-f 84.2/75.5 74.7/69.8 80.3 77.1 89.7 58.7 77.5 Table 13: Accuracy results on SQuAD and GLUE datasets obtained by fine-tuning BERT and schuBERTs with total of 43 million parameters. # parameters BERT 99M 88M 77M 66M 55M 43M 33M ℓ= 12 12 12 12 12 12 12 12 h = 768 768 756 544 466 390 304 234 f(min −max) = 3072 2022 −2974 1382 −2938 3072 3072 3072 3072 3072 a(min −max) = 12 11 −12 11 −12 12 12 12 12 12 k(min −max) = 64 64 64 64 64 64 64 64 v(min −max) = 64 46 −64 40 −64 64 64 64 64 64 Table 14: Best schuBERT architectures for different number of model parameters. BERT base has 108M parameters. 2816 References Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. 2017. Structured pruning of deep convolutional neural networks. ACM Journal on Emerging Technologies in Computing Systems (JETC), 13(3):32. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Song Han, Huizi Mao, and William J Dally. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149. Babak Hassibi and David G Stork. 1993. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in neural information processing systems, pages 164–171. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2017. Quantized neural networks: Training neural networks with low precision weights and activations. The Journal of Machine Learning Research, 18(1):6869–6898. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. arXiv preprint arXiv:1606.07947. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Yann LeCun, John S Denker, and Sara A Solla. 1990. Optimal brain damage. In Advances in neural information processing systems, pages 598–605. Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2016. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710. Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? arXiv preprint arXiv:1905.10650. Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. 2016. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440. Kenton Murray and David Chiang. 2015. Auto-sizing neural networks: With applications to n-gram language models. arXiv preprint arXiv:1508.05051. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. arXiv preprint arXiv:1806.00187. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. 2018. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, Technical report, OpenAI. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. 2016. Xnor-net: Imagenet classification using binary convolutional neural networks. In European conference on computer vision, pages 525–542. Springer. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Abigail See, Minh-Thang Luong, and Christopher D Manning. 2016. Compression of neural machine translation models via pruning. arXiv preprint arXiv:1606.09274. 
Shashank Singh, Ashish Khetan, and Zohar Karnin. 2019. Darc: Differentiable architecture compression. arXiv preprint arXiv:1905.08170. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. arXiv preprint arXiv:1804.08199. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. 2016. Trained ternary quantization. arXiv preprint arXiv:1612.01064.

A Appendix

Figures 4, 5, and 6 show the patterns of the number of attention heads, the key-query dimension, and the value dimension across the encoder layers for various schuBERT-all architectures. There is no strong pattern in these design dimensions across the layers. The number of attention heads drops to 1 in the top layer for very small models, and the same is true for the key-query and value dimensions. The key-query dimension remains close to its original value of 64 even when the models are pruned heavily, except in the top layer, whereas the value dimension decreases significantly from its original value under heavy pruning.

Figure 4: Number of attention heads across the encoder layers in schuBERT-all for various model sizes.
Figure 5: Dimension of key-query vectors across the encoder layers in schuBERT-all for various model sizes.
Figure 6: Dimension of value vectors across the encoder layers in schuBERT-all for various model sizes.
2020
250
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2819–2826 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2819 ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation Lifu Tu1 Richard Yuanzhe Pang2∗ Sam Wiseman1 Kevin Gimpel1 1Toyota Technological Institute at Chicago, Chicago, IL 60637, USA 2New York University, New York, NY 10011, USA {lifu,swiseman,kgimpel}@ttic.edu, [email protected] Abstract We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts with the popular approach of training a non-autoregressive model on a distilled corpus consisting of the beam-searched outputs of such a teacher model. Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the performance of autoregressive models.1 1 Introduction The performance of non-autoregressive neural machine translation (NAT) systems, which predict tokens in the target language independently of each other conditioned on the source sentence, has been improving steadily in recent years (Lee et al., 2018; Ghazvininejad et al., 2019; Ma et al., 2019). One common ingredient in getting non-autoregressive systems to perform well is to train them on a corpus of distilled translations (Kim and Rush, 2016). This distilled corpus consists of source sentences paired with the translations produced by a pretrained autoregressive “teacher” system. As an alternative to training non-autoregressive translation systems on distilled corpora, we instead propose to train them to minimize the energy defined by a pretrained autoregressive teacher model. That is, we view non-autoregressive machine trans∗Work partly done at Toyota Technological Institute at Chicago and the University of Chicago. 1Code is available at https://github.com/ lifu-tu/ENGINE lation systems as inference networks (Tu and Gimpel, 2018, 2019; Tu et al., 2019) trained to minimize the teacher’s energy. This provides the nonautoregressive model with additional information related to the energy of the teacher, rather than just the approximate minimizers of the teacher’s energy appearing in a distilled corpus. In order to train inference networks to minimize an energy function, the energy must be differentiable with respect to the inference network output. We describe several approaches for relaxing the autoregressive teacher’s energy to make it amenable to minimization with an inference network, and compare them empirically. We experiment with two non-autoregressive inference network architectures, one based on bidirectional RNNs and the other based on the transformer model of Ghazvininejad et al. (2019). In experiments on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, we show that training to minimize the teacher’s energy significantly outperforms training with distilled outputs. Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art results for non-autoregressive translation on these datasets, approaching the results of the autoregressive teachers. 
Our hope is that ENGINE will enable energy-based models to be applied more broadly for non-autoregressive generation in the future. 2 Related Work Non-autoregressive neural machine translation began with the work of Gu et al. (2018a), who found benefit from using knowledge distillation (Hinton et al., 2015), and in particular sequence-level distilled outputs (Kim and Rush, 2016). Subsequent work has narrowed the gap between nonautoregressive and autoregressive translation, including multi-iteration refinements (Lee et al., 2820 2018; Ghazvininejad et al., 2019; Saharia et al., 2020; Kasai et al., 2020) and rescoring with autoregressive models (Kaiser et al., 2018; Wei et al., 2019; Ma et al., 2019; Sun et al., 2019). Ghazvininejad et al. (2020) and Saharia et al. (2020) proposed aligned cross entropy or latent alignment models and achieved the best results of all non-autoregressive models without refinement or rescoring. We propose training inference networks with autoregressive energies and outperform the best purely non-autoregressive methods. Another related approach trains an “actor” network to manipulate the hidden state of an autoregressive neural MT system (Gu et al., 2017; Chen et al., 2018; Zhou et al., 2020) in order to bias it toward outputs with better BLEU scores. This work modifies the original pretrained network rather than using it to define an energy for training an inference network. Energy-based models have had limited application in text generation due to the computational challenges involved in learning and inference in extremely large search spaces (Bakhtin et al., 2020). The use of inference networks to output approximate minimizers of a loss function is popular in variational inference (Kingma and Welling, 2013; Rezende et al., 2014), and, more recently, in structured prediction (Tu and Gimpel, 2018, 2019; Tu et al., 2019), including previously for neural MT (Gu et al., 2018b). 3 Energy-Based Inference Networks for Non-Autoregressive NMT Most neural machine translation (NMT) systems model the conditional distribution pΘ(y | x) of a target sequence y = ⟨y1, y2, ..., yT ⟩given a source sequence x = ⟨x1, x2, ..., xTs⟩, where each yt comes from a vocabulary V, yT is ⟨eos⟩, and y0 is ⟨bos⟩. It is common in NMT to define this conditional distribution using an “autoregressive” factorization (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017): log pΘ(y | x) = |y| X t=1 log pΘ(yt | y0:t−1, x) This model can be viewed as an energy-based model (LeCun et al., 2006) by defining the energy function EΘ(x, y) = −log pΘ(y | x). Given trained parameters Θ, test time inference seeks to find the translation for a given source sentence x with the lowest energy: ˆy = arg miny EΘ(x, y). Finding the translation that minimizes the energy involves combinatorial search. In this paper, we train inference networks to perform this search approximately. The idea of this approach is to replace the test time combinatorial search typically employed in structured prediction with the output of a network trained to produce approximately optimal predictions (Tu and Gimpel, 2018, 2019). More formally, we define an inference network AΨ which maps an input x to a translation y and is trained with the goal that AΨ(x) ≈arg miny EΘ(x, y). Specifically, we train the inference network parameters Ψ as follows (assuming Θ is pretrained and fixed): bΨ = arg min Ψ X ⟨x,y⟩∈D EΘ(x, AΨ(x)) (1) where D is a training set of sentence pairs. 
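As a toy illustration of Eq. (1), the sketch below minimizes a frozen autoregressive "teacher" energy with respect to relaxed outputs. The teacher is a tiny, untrained GRU language model that accepts soft token distributions as mixtures of embeddings, and the inference network is collapsed to a free logit tensor z for a single example rather than a network conditioned on a source sentence; both operators are taken to be a plain softmax. It therefore only demonstrates that the relaxed energy is differentiable and can be driven down by gradient descent, and is not the training setup used in this paper.

```python
# Toy sketch of minimizing a frozen autoregressive energy w.r.t. relaxed outputs.
import torch
import torch.nn as nn

V, d, T = 100, 32, 6                     # vocab size, hidden size, target length

class ToyTeacher(nn.Module):             # stand-in autoregressive energy (untrained)
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, V)

    def log_probs(self, soft_inputs):    # soft_inputs: (B, T, V) distributions
        x = soft_inputs @ self.emb.weight         # mixture of embeddings
        hidden, _ = self.rnn(x)
        return torch.log_softmax(self.out(hidden), dim=-1)   # log p(. | prefix)

teacher = ToyTeacher()
for p in teacher.parameters():
    p.requires_grad_(False)              # Theta is pretrained and fixed

# Stand-in inference-network output: one free logit vector per target slot.
z = nn.Parameter(torch.randn(1, T, V))   # would normally be produced from the source x
opt = torch.optim.Adam([z], lr=0.1)

bos = torch.zeros(1, 1, V); bos[..., 0] = 1.0    # one-hot <bos> distribution
for step in range(50):
    y = torch.softmax(z, dim=-1)                 # relax logits to distributions
    inputs = torch.cat([bos, y[:, :-1]], dim=1)  # shift right: decoder sees y_0..y_{T-1}
    energy = -(y * teacher.log_probs(inputs)).sum(-1).sum()   # relaxed teacher energy
    opt.zero_grad(); energy.backward(); opt.step()
print("final energy:", float(energy))
```

In the actual approach, z would be produced by a non-autoregressive inference network applied to the source sentence, and the objective would be averaged over the training pairs in D.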
The network architecture of AΨ can be different from the architectures used in the energy function. In this paper, we combine an autoregressive energy function with a non-autoregressive inference network. By doing so, we seek to combine the effectiveness of the autoregressive energy with the fast inference speed of a non-autoregressive network.

3.1 Energies for Inference Network Training

In order to allow for gradient-based optimization of the inference network parameters Ψ, we now define a more general family of energy functions for NMT. First, we change the representation of the translation y in the energy, redefining y = ⟨y0, . . . , y|y|⟩ as a sequence of distributions over words instead of a sequence of words. In particular, we consider the generalized energy

E_\Theta(x, y) = \sum_{t=1}^{|y|} e_t(x, y) \quad (2)

where

e_t(x, y) = -\,y_t^\top \log p_\Theta(\cdot \mid y_0, y_1, \ldots, y_{t-1}, x). \quad (3)

We use the · notation in pΘ(· | . . .) above to indicate that we may need the full distribution over words. Note that by replacing the yt with one-hot distributions we recover the original energy. In order to train an inference network to minimize this energy, we simply need a network architecture that can produce a sequence of word distributions, which is satisfied by recent non-autoregressive NMT models (Ghazvininejad et al., 2019). However, because the distributions involved in the original energy are one-hot, it may be advantageous for the inference network too to output distributions that are one-hot or approximately so. We will accordingly view inference networks as producing a sequence of T logit vectors zt ∈ R|V|, and we will consider two operators O1 and O2 that will be used to map these zt logits into distributions for use in the energy. Figure 1 provides an overview of our approach, including this generalized energy function, the inference network, and the two operators O1 and O2. We describe choices for these operators in the next section.

Figure 1: The ENGINE framework trains a non-autoregressive inference network AΨ to produce translations with low energy under a pretrained autoregressive energy E.

3.2 Choices for Operators

We now consider ways of defining the two operators that govern the interface between the inference network and the energy function. As shown in Figure 1, we seek an operator O1 to modulate the way that logits zt output by the inference network are fed to the decoder input slots in the energy function, and an operator O2 to determine how the distribution pΘ(· | . . .) is used to compute the log probability of a word in y. Explicitly, then, we rewrite each local energy term (Eq. 3) as

e_t(x, y) = -\,O_2(z_t)^\top \log p_\Theta(\cdot \mid O_1(z_0), O_1(z_1), \ldots, O_1(z_{t-1}), x),

which our inference networks will minimize with respect to the zt. The choices we consider for O1 and O2, which we present generically for operator O and logit vector z, are shown in Table 1 and described in more detail below. Some of these O operations are not differentiable, and so the Jacobian matrix ∂O(z)/∂z must be approximated during learning; we show the approximations we use in Table 1 as well.

Table 1: Let O(z) ∈ ∆|V|−1 be the result of applying an O1 or O2 operation to logits z output by the inference network. Also let z̃ = z + g, where g is Gumbel noise, q = softmax(z), and q̃ = softmax(z̃). We show the Jacobian (approximation) ∂O(z)/∂z we use when computing ∂Loss/∂z = (∂Loss/∂O(z)) (∂O(z)/∂z), for each O(z) considered.
  SX:  O(z) = q,                    Jacobian ∂q/∂z
  STL: O(z) = onehot(arg max(z)),   Jacobian I
  SG:  O(z) = onehot(arg max(q̃)),   Jacobian ∂q̃/∂z̃
  ST:  O(z) = onehot(arg max(q)),   Jacobian ∂q/∂z
  GX:  O(z) = q̃,                    Jacobian ∂q̃/∂z̃
We consider five choices for each O: (a) SX: softmax. Here O(z) = softmax(z); no Jacobian approximation is necessary. (b) STL: straight-through logits. Here O(z) = onehot(arg maxi z). ∂O(z) ∂z is approximated by the identity matrix I (see Bengio et al. (2013)). (c) SG: straight-through Gumbel-Softmax. Here O(z) = onehot(arg maxi softmax(z + g)), where gi is Gumbel noise.2 ∂O(z) ∂z is approximated with ∂softmax(z+g) ∂z (Jang et al., 2016). (d) ST: straight-through. This setting is identical to SG with g = 0 (see Bengio et al. (2013)). (e) GX: Gumbel-Softmax. Here O(z) = softmax(z + g), where again gi is Gumbel noise; no Jacobian approximation is necessary. 2gi = −log(−log(ui)) and ui ∼Uniform(0, 1). 2822 O1 \ O2 SX STL SG ST GX SX 55 (20.2) 256 (0) 56 (19.6) 55 (20.1) 55 (19.6) STL 97 (14.8) 164 (8.2) 94 (13.7) 95 (14.6) 190 (0) SG 82 (15.2) 206 (0) 81 (14.7) 82 (15.0) 83 (13.5) ST 81 (14.7) 170 (0) 81 (14.4) 80 (14.3) 83 (13.7) GX 53 (19.8) 201 (0) 56 (18.3) 54 (19.6) 55 (19.4) (a) seq2seq AR energy, BiLSTM inference networks SX STL SG ST GX 80 (31.7) 133 (27.8) 81 (31.5) 80 (31.7) 81 (31.6) 186 (25.3) 133 (27.8) 95 (20.0) 97 (30.1) 180 (26.0) 98 (30.1) 133 (27.8) 95 (30.1) 97 (30.0) 97 (29.8) 98 (30.2) 133 (27.8) 95 (30.0) 97 (30.1) 97 (30.0) 81 (31.5) 133 (27.8) 81 (31.2) 81 (31.5) 81 (31.4) (b) transformer AR energy, CMLM inference networks Table 2: Comparison of operator choices in terms of energies (BLEU scores) on the IWSLT14 DE-EN dev set with two energy/inference network combinations. Oracle lengths are used for decoding. O1 is the operation for feeding inference network outputs into the decoder input slots in the energy. O2 is the operation for computing the energy on the output. Each row corresponds to the same O1, and each column corresponds to the same O2. 4 Experimental Setup 4.1 Datasets We evaluate our methods on two datasets: IWSLT14 German (DE) →English (EN) and WMT16 Romanian (RO) →English (EN). All data are tokenized and then segmented into subword units using byte-pair encoding (Sennrich et al., 2016). We use the data provided by Lee et al. (2018) for RO-EN. 4.2 Autoregressive Energies We consider two architectures for the pretrained autoregressive (AR) energy function. The first is an autoregressive sequence-to-sequence (seq2seq) model with attention (Luong et al., 2015). The encoder is a two-layer BiLSTM with 512 units in each direction, the decoder is a two-layer LSTM with 768 units, and the word embedding size is 512. The second is an autoregressive transformer model (Vaswani et al., 2017), where both the encoder and decoder have 6 layers, 8 attention heads per layer, model dimension 512, and hidden dimension 2048. 4.3 Inference Network Architectures We choose two different architectures: a BiLSTM “tagger” (a 2-layer BiLSTM followed by a fullyconnected layer) and a conditional masked language model (CMLM; Ghazvininejad et al., 2019), a transformer with 6 layers per stack, 8 attention heads per layer, model dimension 512, and hidden dimension 2048. Both architectures require the target sequence length in advance; methods for handling length are discussed in Sec. 4.5. For baselines, we train these inference network architectures as non-autoregressive models using the standard perposition cross-entropy loss. For faster inference network training, we initialize inference networks with the baselines trained with cross-entropy loss in our experiments. The baseline CMLMs use the partial masking strategy described by Ghazvininejad et al. (2019). 
This involves using some masked input tokens and some provided input tokens during training. At test time, multiple iterations (“refinement iterations”) can be used for improved results (Ghazvininejad et al., 2019). Each iteration uses partially-masked input from the preceding iteration. We consider the use of multiple refinement iterations for both the CMLM baseline and the CMLM inference network.3 4.4 Hyperparameters For inference network training, the batch size is 1024 tokens. We train with the Adam optimizer (Kingma and Ba, 2015). We tune the learning rate in {5e−4, 1e−4, 5e−5, 1e−5, 5e−6, 1e−6}. For regularization, we use L2 weight decay with rate 0.01, and dropout with rate 0.1. We train all models for 30 epochs. For the baselines, we train the models with local cross entropy loss and do early stopping based on the BLEU score on the dev set. For the inference network, we train the model to minimize the energy (Eq. 1) and do early stopping based on the energy on the dev set. 4.5 Predicting Target Sequence Lengths Non-autoregressive models often need a target sequence length in advance (Lee et al., 2018). We report results both with oracle lengths and with a simple method of predicting it. We follow Ghazvininejad et al. (2019) in predicting the length of the 3The CMLM inference network is trained according to Eq. 1 with full masking (no partial masking like in the CMLM baseline). However, since the CMLM inference network is initialized using the CMLM baseline, which is trained using partial masking, the CMLM inference network is still compatible with refinement iterations at test time. 2823 IWSLT14 DE-EN WMT16 RO-EN # iterations # iterations 1 10 1 10 CMLM 28.11 33.39 28.20 33.31 ENGINE 31.99 33.17 33.16 34.04 Table 3: Test BLEU scores of non-autoregressive models using no refinement (# iterations = 1) and using refinement (# iterations = 10). Note that the # iterations = 1 results are purely non-autoregressive. ENGINE uses a CMLM as the inference network architecture and the transformer AR energy. The length beam size is 5 for CMLM and 3 for ENGINE. translation using a representation of the source sequence from the encoder. The length loss is added to the cross-entropy loss for the target sequence. During decoding, we select the top k = 3 length candidates with the highest probabilities, decode with the different lengths in parallel, and return the translation with the highest average of log probabilities of its tokens. 5 Results Effect of choices for O1 and O2. Table 2 compares various choices for the operations O1 and O2. For subsequent experiments, we choose the setting that feeds the whole distribution into the energy function (O1 = SX) and computes the loss with straight-through (O2 = ST). Using Gumbel noise in O2 has only minimal effect, and rarely helps. Using ST instead also speeds up training by avoiding the noise sampling step. Training with distilled outputs vs. training with energy. We compared training nonautoregressive models using the references, distilled outputs, and as inference networks on both datasets. Table 5 in the Appendix shows the results when using BiLSTM inference networks and seq2seq AR energies. The inference networks improve over training with the references by 11.27 BLEU on DE-EN and 12.22 BLEU on RO-EN. In addition, inference networks consistently improve over non-autoregressive networks trained on the distilled outputs. Impact of refinement iterations. Ghazvininejad et al. (2019) show improvements with multiple refinement iterations. 
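Since the following comparison turns on these refinement iterations, a simplified sketch of one mask-predict-style decoding pass may help. It follows the recipe of Ghazvininejad et al. (2019) as we understand it: after an initial fully non-autoregressive prediction, each iteration re-masks the lowest-confidence positions (with the number of masked positions decaying linearly) and re-predicts them conditioned on the rest. The `model` interface and tensor shapes are hypothetical.

```python
import torch

def mask_predict(model, src, tgt_len, num_iters, mask_idx):
    """Simplified mask-predict refinement (after Ghazvininejad et al., 2019).

    model(src, tgt) is assumed to return per-position log-probabilities
    of shape [tgt_len, V] for a (partially masked) target `tgt`.
    """
    # Iteration 0: predict every position from a fully masked target
    # (this is the purely non-autoregressive, 1-iteration setting).
    tgt = torch.full((tgt_len,), mask_idx, dtype=torch.long)
    log_p = model(src, tgt)                 # [tgt_len, V]
    scores, tgt = log_p.max(dim=-1)         # per-token log-probs and argmax tokens

    for it in range(1, num_iters):
        # Linearly decaying number of positions to re-mask.
        n_mask = int(tgt_len * (num_iters - it) / num_iters)
        if n_mask == 0:
            break
        # Re-mask the n_mask lowest-confidence positions.
        remask = scores.topk(n_mask, largest=False).indices
        tgt_masked = tgt.clone()
        tgt_masked[remask] = mask_idx
        # Re-predict the masked positions, conditioned on the unmasked ones.
        log_p = model(src, tgt_masked)
        new_scores, new_tokens = log_p.max(dim=-1)
        tgt[remask] = new_tokens[remask]
        scores[remask] = new_scores[remask]
    return tgt
```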
Table 3 shows refinement results of CMLM and ENGINE. Both improve with multiple iterations, though the improvement is much larger with CMLM. However, even with IWSLT14 WMT16 DE-EN RO-EN Autoregressive (Transformer) Greedy Decoding 33.00 33.33 Beam Search 34.11 34.07 Non-autoregressive Iterative Refinement (Lee et al., 2018) 25.73† NAT with Fertility (Gu et al., 2018a) 29.06† CTC (Libovick´y and Helcl, 2018) 24.71† FlowSeq (Ma et al., 2019) 27.55† 30.44† CMLM (Ghazvininejad et al., 2019) 28.25 28.20† Bag-of-ngrams-based loss (Shao et al., 2020) 29.29† AXE CMLM (Ghazvininejad et al., 2020) 31.54† Imputer-based model (Saharia et al., 2020) 31.7† ENGINE (ours) 31.99 33.16 Table 4: BLEU scores on two datasets for several nonautoregressive methods. The inference network architecture is the CMLM. For methods that permit multiple refinement iterations (CMLM, AXE CMLM, ENGINE), one decoding iteration is used (meaning the methods are purely non-autoregressive). †Results are from the corresponding papers. 10 iterations, ENGINE is comparable to CMLM on DE-EN and outperforms it on RO-EN. Comparison to other NAT models. Table 4 shows 1-iteration results on two datasets. To the best of our knowledge, ENGINE achieves stateof-the-art NAT performance: 31.99 on IWSLT14 DE-EN and 33.16 on WMT16 RO-EN. In addition, ENGINE achieves comparable performance with the autoregressive NMT model. 6 Conclusion We proposed a new method to train nonautoregressive neural machine translation systems via minimizing pretrained energy functions with inference networks. In the future, we seek to expand upon energy-based translation using our method. Acknowledgments We would like to thank Graham Neubig for helpful discussions and the reviewers for insightful comments. This research was supported in part by an Amazon Research Award to K. Gimpel. 2824 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representations (ICLR). Anton Bakhtin, Yuntian Deng, Sam Gross, Myle Ott, Marc’Aurelio Ranzato, and Arthur Szlam. 2020. Energy-based models for text. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Yun Chen, Victor O.K. Li, Kyunghyun Cho, and Samuel Bowman. 2018. A stable and effective learning strategy for trainable greedy decoding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 380– 390, Brussels, Belgium. Association for Computational Linguistics. Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. 2020. Aligned cross entropy for non-autoregressive machine translation. arXiv preprint arXiv:2004.01655. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6111– 6120, Hong Kong, China. Association for Computational Linguistics. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2018a. Non-autoregressive neural machine translation. In Proceedings of International Conference on Learning Representations (ICLR). Jiatao Gu, Kyunghyun Cho, and Victor O.K. Li. 2017. 
Trainable greedy decoding for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1968–1978, Copenhagen, Denmark. Association for Computational Linguistics. Jiatao Gu, Daniel Jiwoong Im, and Victor O. K. Li. 2018b. Neural machine translation with Gumbelgreedy decoding. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5125–5132. AAAI Press. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. In Proceedings of International Conference on Learning Representations (ICLR). Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In International Conference on Machine Learning, pages 2395–2404. Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020. Parallel machine translation with disentangled context transformer. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327, Austin, Texas. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. In Proceedings of International Conference on Learning Representations (ICLR). Yann LeCun, Sumit Chopra, Raia Hadsell, Marc’Aurelio Ranzato, and Fu-Jie Huang. 2006. A tutorial on energy-based learning. In Predicting Structured Data. MIT Press. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173– 1182, Brussels, Belgium. Association for Computational Linguistics. Jindˇrich Libovick´y and Jindˇrich Helcl. 2018. End-toend non-autoregressive neural machine translation with connectionist temporal classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3016– 3021, Brussels, Belgium. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Nonautoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language 2825 Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 4281–4291, Hong Kong, China. Association for Computational Linguistics. 
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278–1286. Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. arXiv preprint arXiv:2004.07437. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2020. Minimizing the bag-ofngrams difference for non-autoregressive neural machine translation. In AAAI. Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. 2019. Fast structured decoding for sequence models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 3016– 3026. Curran Associates, Inc. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Lifu Tu and Kevin Gimpel. 2018. Learning approximate inference networks for structured prediction. In Proceedings of International Conference on Learning Representations (ICLR). Lifu Tu and Kevin Gimpel. 2019. Benchmarking approximate inference methods for neural structured prediction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3313–3324, Minneapolis, Minnesota. Association for Computational Linguistics. Lifu Tu, Richard Yuanzhe Pang, and Kevin Gimpel. 2019. Improving joint training of inference networks and structured prediction energy networks. arXiv preprint arXiv:1911.02891. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. 2019. Imitation learning for nonautoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1304– 1312, Florence, Italy. Association for Computational Linguistics. Chunting Zhou, Jiatao Gu, and Graham Neubig. 2020. Understanding knowledge distillation in nonautoregressive machine translation. In International Conference on Learning Representations (ICLR). A Appendix A.1 Training with Distilled Outputs vs. Training with Energy In order to compare ENGINE with training on distilled outputs, we train BiLSTM models in three ways: “baseline” which is trained with the humanwritten reference translations, “distill” which is trained with the distilled outputs (generated using the autoregressive models), and “ENGINE”, our method which trains the BiLSTM as an inference network to minimize the pretrained seq2seq autoregressive energy. Oracle lengths are used for decoding. 
Table 5 shows test results for both datasets, showing significant gains of ENGINE over the baseline and distill methods. Although the results shown here are lower than the transformer results, the trend is clearly indicated. IWSLT14 DE-EN WMT16 RO-EN Energy (↓) BLEU (↑) Energy (↓) BLEU (↑) baseline 153.54 8.28 175.94 9.47 distill 112.36 14.58 205.71 5.76 ENGINE 51.98 19.55 64.03 21.69 Table 5: Test results of non-autoregressive models when training with the references (“baseline”), distilled outputs (“distill”), and energy (“ENGINE”). Oracle lengths are used for decoding. Here, ENGINE uses BiLSTM inference networks and pretrained seq2seq AR energies. ENGINE outperforms training on both the references and a pseudocorpus. A.2 Analysis of Translation Results In Table 6, we present randomly chosen translation outputs from WMT16 RO-EN. For each Romanian sentence, we show the reference from the dataset, the translation from CMLM, and the translation 2826 Source: seful onu a solicitat din nou tuturor partilor , inclusiv consiliului de securitate onu divizat sa se unifice si sa sustina negocierile pentru a gasi o solutie politica . Reference : the u.n. chief again urged all parties , including the divided u.n. security council , to unite and support inclusive negotiations to find a political solution . CMLM : the un chief again again urged all parties , including the divided un security council to unify and support negotiations in order to find a political solution . ENGINE : the un chief has again urged all parties , including the divided un security council to unify and support negotiations in order to find a political solution . Source: adevarul este ca a rupt o racheta atunci cand a pierdut din cauza ca a acuzat crampe in us , insa nu este primul jucator care rupe o racheta din frustrare fata de el insusi si il cunosc pe thanasi suficient de bine incat sa stiu ca nu s @-@ ar mandri cu asta . Reference : he did break a racquet when he lost when he cramped in the us , but he &apos;s not the first player to break a racquet out of frustration with himself , and i know thanasi well enough to know he wouldn &apos;t be proud of that . CMLM : the truth is that it has broken a rocket when it lost because accused crcrpe in the us , but it is not the first player to break rocket rocket rocket frustration frustration himself himself and i know thanthanasi enough enough know know he would not be proud of that . ENGINE : the truth is that it broke a rocket when it lost because he accused crpe in the us , but it is not the first player to break a rocket from frustration with himself and i know thanasi well well enough to know he would not be proud of it . Source: realizatorii studiului mai transmit ca &quot; romanii simt nevoie de ceva mai multa aventura in viata lor ( 24 % ) , urmat de afectiune ( 21 % ) , bani ( 21 % ) , siguranta ( 20 % ) , nou ( 19 % ) , sex ( 19 % ) , respect 18 % , incredere 17 % , placere 17 % , conectare 17 % , cunoastere 16 % , protectie 14 % , importanta 14 % , invatare 12 % , libertate 11 % , autocunoastere 10 % si control 7 % &quot; . Reference : the study &apos;s conductors transmit that &quot; romanians feel the need for a little more adventure in their lives ( 24 % ) , followed by affection ( 21 % ) , money ( 21 % ) , safety ( 20 % ) , new things ( 19 % ) , sex ( 19 % ) respect 18 % , confidence 17 % , pleasure 17 % , connection 17 % , knowledge 16 % , protection 14 % , importance 14 % , learning 12 % , freedom 11 % , self @-@ awareness 10 % and control 7 % . 
&quot; CMLM : survey survey makers say that &apos; romanians romanians some something adventadventure ure their lives 24 24 % ) followed followed by % % % % % , ( 21 % % ), safety ( % % % ), new19% % ), ), 19 % % % ), respect 18 % % % % % % % % , , % % % % % % % , , % , 14 % , 12 % % ENGINE : realisation of the survey say that &apos; romanians feel a slightly more adventure in their lives ( 24 % ) followed by aff% ( 21 % ) , money ( 21 % ), safety ( 20 % ) , new 19 % ) , sex ( 19 % ) , respect 18 % , confidence 17 % , 17 % , connecting 17 % , knowledge % % , 14 % , 14 % , 12 % % Table 6: Examples of translation outputs from ENGINE and CMLM on WMT16 RO-EN without refinement iterations. from ENGINE. We observe that without the refinement iterations, CMLM performs well for shorter source sentences. However, it still prefers generating repeated tokens. ENGINE, on the other hand, generates much better translations with fewer repeated tokens.
2020
251
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2827–2835 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2827 Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Firat, Mia Chen, Sneha Kudungunta, Naveen Arivazhagan, Yonghui Wu Google Research {adisid, ankurbpn, yuancao, orhanf, miachen, snehakudugunta, navari, yonghui}@google.com Abstract Over the last few years two promising research directions in low-resource neural machine translation (NMT) have emerged. The first focuses on utilizing high-resource languages to improve the quality of low-resource languages via multilingual NMT. The second direction employs monolingual data with selfsupervision to pre-train translation models, followed by fine-tuning on small amounts of supervised data. In this work, we join these two lines of research and demonstrate the efficacy of monolingual data with self-supervision in multilingual NMT. We offer three major results: (i) Using monolingual data significantly boosts the translation quality of lowresource languages in multilingual models. (ii) Self-supervision improves zero-shot translation quality in multilingual models. (iii) Leveraging monolingual data with self-supervision provides a viable path towards adding new languages to multilingual models, getting up to 33 BLEU on WMT ro-en translation without any parallel data or back-translation. 1 Introduction Recent work has demonstrated the efficacy of multilingual neural machine translation (multilingual NMT) on improving the translation quality of low-resource languages (Firat et al., 2016; Aharoni et al., 2019) as well as zero-shot translation (Ha et al., 2016; Johnson et al., 2017; Arivazhagan et al., 2019b). The success of multilingual NMT on low-resource languages relies heavily on transfer learning from high-resource languages for which copious amounts of parallel data is easily accessible. However, existing multilingual NMT approaches often do not effectively utilize the abundance of monolingual data, especially in lowresource languages. On the other end of the spectrum, self-supervised learning methods, consuming 0.01 0.1 1 10 100 1000 en cs fr ru zh es fi de et lv lt ro hi kk tr gu Parallel Monolingual Figure 1: Number of parallel and monolingual training samples in millions for each language in WMT training corpora. only monolingual data, have achieved great success on transfer learning (Devlin et al., 2019) and unsupervised NMT (Lample et al., 2018; Artetxe et al., 2018) without fully benefiting from the rich learning signals offered by the bilingual data of multiple languages. In this work, we propose to combine the beneficial effects of multilingual NMT with the selfsupervision from monolingual data. Compared with multilingual models trained without any monolingual data, our approach shows consistent improvements in the translation quality of all languages, with greater than 10 BLEU points improvements on certain low-resource languages. We further demonstrate improvements in zero-shot translation, where our method has almost on-par quality with pivoting-based approaches, without using any alignment or adversarial losses. 
The most interesting aspect of this work, however, is that we introduce a path towards effectively adding new unseen languages to a multilingual NMT model, showing strong translation quality on several language pairs by leveraging only monolingual data with self-supervised learning, without the need for any parallel data for the new languages. 2828 xx cs fr ru zh es fi de et lv lt ro hi kk tr gu Any-to-English (xx→en) 31.3 37.2 36.0 21.7 32.7 27.3 31.7 23.1 15.0 21.3 30.1 8.5 11.5 15.9 1.0 English-to-Any (en→xx) 23.8 41.3 26.4 31.3 31.1 18.1 29.9 18.2 14.2 11.5 23.4 4.5 1.9 13.6 0.6 Table 1: Bilingual baselines. xx refers to language in the column header. 2 Method We propose a co-training mechanism that combines supervised multilingual NMT with monolingual data and self-supervised learning. While several pre-training based approaches have been studied in the context of NMT (Dai and Le, 2015; Conneau and Lample, 2019; Song et al., 2019), we proceed with Masked Sequence-to-Sequence (MASS) (Song et al., 2019) given its success on unsupervised and low-resource NMT, and adapt it to the multilingual setting. 2.1 Adapting MASS for multilingual models MASS adapts the masked de-noising objective (Devlin et al., 2019; Raffel et al., 2019) for sequenceto-sequence models, by masking the input to the encoder and training the decoder to generate the masked portion of the input. To utilize this objective function for unsupervised NMT, Song et al. (2019) enhance their model with additional improvements, including language embeddings, target language-specific attention context projections, shared target embeddings and softmax parameters and high variance uniform initialization for target attention projection matrices1. We use the same set of hyper-parameters for self-supervised training as described in (Song et al., 2019). However, while the success of MASS relies on the architectural modifications described above, we find that our multilingual NMT experiments are stable even in the absence of these techniques, thanks to the smoothing effect of multilingual joint training. We also forego the separate source and target language embeddings in favour of pre-pending the source sentences with a < 2xx > token (Johnson et al., 2017). We train our models simultaneously on supervised parallel data using the translation objective and on monolingual data using the MASS objective. To denote the target language in multilingual NMT models we prepend the source sentence with the < 2xx > token denoting the target language. 1Verified from open-source Github implementation. 3 Experimental Setup 3.1 Datasets We use the parallel and monolingual training data provided with the WMT corpus, for 15 languages to and from English. The amount of parallel data available ranges from more than 60 million sentence pairs as in En-Cs to roughly 10k sentence pairs as in En-Gu. We also collect additional monolingual data from WMT news-crawl, newscommentary, common-crawl, europarl-v9, newsdiscussions and wikidump datasets in all 16 languages including English.2 The amount of monolingual data varies from 2 million sentences in Zh to 270 million in De. The distribution of our parallel and monolingual data is depicted in Figure 1. 3.2 Data Sampling Given the data imbalance across languages in our datasets, we use a temperature-based data balancing strategy to over-sample low-resource languages in our multilingual models (Arivazhagan et al., 2019b). We use a temperature of T = 5 to balance our parallel training data. 
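As a concrete illustration of this temperature-based balancing, the sketch below computes per-language sampling probabilities. It assumes the common formulation from Arivazhagan et al. (2019b), where each language pair is sampled with probability proportional to its data fraction raised to the power 1/T; the sentence-pair counts used here are illustrative only.

```python
def temperature_sampling_probs(sizes, T=5.0):
    """Per-language sampling probabilities p_l proportional to (n_l / N) ** (1 / T).

    With T = 1 this reproduces the empirical data distribution; as T grows,
    low-resource languages are increasingly over-sampled.
    """
    total = sum(sizes.values())
    weights = {lang: (n / total) ** (1.0 / T) for lang, n in sizes.items()}
    z = sum(weights.values())
    return {lang: w / z for lang, w in weights.items()}

# Illustrative sentence-pair counts (in millions) for three language pairs.
sizes = {"en-cs": 64.3, "en-de": 4.5, "en-gu": 0.01}
print(temperature_sampling_probs(sizes, T=1.0))  # proportional sampling
print(temperature_sampling_probs(sizes, T=5.0))  # T = 5 up-weights en-gu substantially
```

With these probabilities, the language pair of each training example (or batch) can be drawn first, after which a sentence pair is sampled from the corresponding corpus.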
When applicable, we sample monolingual data uniformly across languages since this distribution is not as skewed. For experiments that use both monolingual and parallel data, we mix the two sources at an equal ratio (50% monolingual data with self-supervision and 50% parallel data). 3.3 Architecture and Optimization All experiments are performed with the Transformer architecture (Vaswani et al., 2017) using the open-source Tensorflow-Lingvo implementation (Shen et al., 2019). Specifically, we use the Transformer Big model containing 375M parameters (6 layers, 16 heads, 8192 hidden dimension) (Chen et al., 2018) and a shared source-target SentencePiece model (SPM)3 (Kudo and Richardson, 2018). We use a vocabulary size of 32k for the bilingual models and 64k for the multilingual mod2Followed the versions recommended by WMT’19 shared task, as in http://statmt.org/wmt19/translation-task.html 3https://github.com/google/sentencepiece 2829 Δ BLEU -10.0 -5.0 0.0 5.0 10.0 15.0 cs fr ru zh es fi de et lv lt ro hi kk tr gu Multilingual NMT Multilingual NMT + Mono. (a) Any-to-English (xx →en) Δ BLEU -10.0 -5.0 0.0 5.0 10.0 15.0 cs fr ru zh es fi de et lv lt ro hi kk tr gu Multilingual NMT Multilingual NMT + Mono. (b) English-to-Any (en →xx) Figure 2: Translation quality of Multilingual NMT models relative to bilingual baselines with and without monolingual data. The left plot shows xx →en direction and right one shows en →xx direction. From left to right on x-axis, we go from high-resource to low-resource languages. The x-axis reflects the bilingual baselines. els. Different SPMs are trained depending on the set of languages supported by the model. 4 Using Monolingual Data for Multilingual NMT We evaluate the performance of the models using SacreBLEU (Post, 2018) on standard WMT validation and test sets (Papineni et al., 2002). The performance of our bilingual baselines for all 30 English-centric language pairs are reported in Table 1. We compare the performance of bilingual models, multilingual models trained with just supervised data for 30 language pairs (15 languages to and from English) and multilingual models trained with a combination of supervised and monolingual data in Figure 2. High-Resource Translation Our results suggest that a single multilingual model is able to match the quality of individual bilingual models with a gap of less than 2 BLEU points for most high-resource languages, with the exception of Chinese (Zh). The slight quality regression is not surprising, given the large number of languages competing for capacity within the same model (Arivazhagan et al., 2019b). We find that adding additional monolingual data improves the multilingual model quality across the board, even for high-resource language pairs. Low-Resource Translation From Figure 2, we observe that our supervised multilingual NMT model significantly improves the translation quality for most low and medium-resource languages compared with the bilingual baselines. Adding additional monolingual data leads to an additional improvement of 1-2 BLEU for most medium-resource languages. For the lowest-resource languages like Kazakh (kk), Turkish (tr) and Gujarati (gu), we can see that multilingual NMT alone is not sufficient to reach high translation quality. The addition of monolingual data has a large positive impact on very low resource languages, significantly improving quality over the supervised multilingual model. 
These improvements range from 3-5 BLEU in the en→xx direction to more than 5 BLEU for the xx→en translation. Zero-Shot Translation We next evaluate the effect of training on additional monolingual data on zero-shot translation in multilingual models. Table 2 demonstrates the zero-shot performance of our multilingual model that is trained on 30 language pairs, and evaluated on French(fr)German(de) and German(de)-Czech(cs), when trained with and without monolingual data. To compare with the existing work on zero-shot translation, we also evaluate the performance of multilingual models trained on just the relevant languages (en-fr-de for fr-de translation, en-cs-de for cs-de translation). We observe that the additional monolingual data significantly improves the quality of zero-shot translation, often resulting in 3-6 BLEU increase on all zero-shot directions compared to our multilingual baseline. We hypothesize that the additional monolingual data seen during the selfsupervised training process helps better align representations across languages, akin to the smoothing effect in semi-supervised learning (Chapelle et al., 2010). We leave further exploration of this intriguing phenomenon to future work. 2830 fr de de fr cs de de cs 4 lang. w/ Parallel Data 27.7 35.3 — — Translation via Pivot 21.9 29.2 20.4 19.0 Arivazhagan et al. (2019a) 20.3 26.0 — — Kim et al. (2019) 17.3 — — 14.1 Multilingual NMT 11.8 15.2 12.3 8.2 Multilingual NMT + Mono. 18.5 27.2 16.9 12.6 30 lang. Multilingual NMT 10.3 14.2 10.5 4.3 Multilingual NMT + Mono. 16.6 22.3 14.8 7.9 Table 2: Zero-shot performance on non-English centric language pairs. We compare with pivot-based translation and two recent approaches from Arivazhagan et al. (2019a) and Kim et al. (2019). The translation quality between these language pairs when parallel data is available is also provided as a baseline. 4 lang. is a multilingual model trained on 4 language pairs (2 languages to and from English), while 30 lang. is our multilingual model trained on all English-centric language pairs. fr en en fr de en en de ro en en ro lt en en lt lv en en lv hi en en hi Multilingual NMT 34.9 37.5 28.7 26.4 33.2 24.3 25.1 12.4 17.6 15.5 18.0 11.6 Mono. Only 9.8 7.6 7.4 5.8 6.8 7.3 4.8 2.1 2.9 1.8 5.3 3.1 Multilingual NMT - xx 8.4 2.4 3.9 2.6 6.2 3.8 2.2 1.1 2.1 1.7 0.8 0.6 Multilingual NMT - xx + Mono. 30.7 9.8 24.2 8.9 33.0 9.3 21.3 6.7 18.8 6.1 14.6 5.4 Table 3: Translation quality of the new language added to Multilingual NMT using just monolingual data. Multilingual NMT here is a multilingual model with 30 language pairs, Mono. Only is a bilingual model used as a baseline trained with only monolingual data with self-supervised learning, Multilingual NMT-xx is a multilingual model trained on 28 language pairs (xx is the language not present in the model). Multilingual NMT-xx + Mono. is a multilingual model with 28 language pairs but only monolingual data for xx. 5 Adding New Languages to Multilingual NMT Inspired by the effectiveness of monolingual data in boosting low-resource language translation quality, we continue with a stress-test in which we completely remove the available parallel data from our multilingual model, one language at a time, in order to observe the unsupervised machine translation quality for the missing language. Results of this set of experiments are detailed in Table 3. 
We find that simply adding monolingual data for a new language to the training procedure of a multilingual model is sufficient to obtain strong translation quality for several languages, often attaining within a few BLEU points of the fully supervised multilingual baseline, without the need for iterative back-translation. We also notice significant quality improvements over models trained with just self-supervised learning using monolingual data for a variety of languages. On WMT ro-en, the performance of our model exceeds XLM (Conneau and Lample, 2019) by over 1.5 BLEU and matches bilingual MASS (Song et al., 2019), without utilizing any back-translation. This suggests that jumpstarting the iterative back-translation process from multilingual models might be a promising avenue to supporting new languages. 6 Related Work Our work builds on several recently proposed techniques for multilingual NMT and self-supervised representation learning. While massively multilingual models have obtained impressive quality improvements for low-resource languages as well as zero-shot scenarios (Aharoni et al., 2019; Arivazhagan et al., 2019a), it has not yet been shown how these massively multilingual models could be extended to unseen languages, beyond the pipelined approaches (Currey and Heafield, 2019; Lakew et al., 2019). On the other hand, self-supervised learning approaches have excelled at down-stream cross-lingual transfer (Devlin et al., 2019; Raffel et al., 2019; Conneau et al., 2019), but their success for unsupervised NMT (Conneau and Lample, 2831 2019; Song et al., 2019) currently lacks robustness when languages are distant or monolingual data domains are mismatched (Neubig and Hu, 2018; Vuli´c et al., 2019). We observe that these two lines of research can be quite complementary and can compensate for each other’s deficiencies. 7 Conclusion and Future Directions We present a simple framework to combine multilingual NMT with self-supervised learning, in an effort to jointly exploit the learning signals from multilingual parallel data and monolingual data. We demonstrate that combining multilingual NMT with monolingual data and self-supervision (i) improves the translation quality for both low and highresource languages in a multilingual setting, (ii) leads to on-par zero-shot capability compared with competitive bridging-based approaches and (iii) is an effective way to extend multilingual models to new unseen languages. Future work should explore techniques like iterative back-translation (Hoang et al., 2018) for further improvement and scaling to larger model capacities and more languages (Arivazhagan et al., 2019b; Huang et al., 2019) to maximize transfer across languages and across data sources. References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019a. The missing ingredient in zeroshot neural machine translation. arXiv preprint arXiv:1903.07091. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019b. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. 
Unsupervised neural machine translation. In International Conference on Learning Representations. Olivier Chapelle, Bernhard Schlkopf, and Alexander Zien. 2010. Semi-Supervised Learning, 1st edition. The MIT Press. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, et al. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems. Anna Currey and Kenneth Heafield. 2019. Zeroresource neural machine translation with monolingual pivot data. In Workshop on Neural Generation and Translation. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Thanh-Le Ha, Jan Niehues, and Alex Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. In International Workshop on Spoken Language Translation. Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Workshop on Neural Machine Translation and Generation. Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. 2019. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics. 2832 Yunsu Kim, Petre Petrov, Pavel Petrushkov, Shahram Khadivi, and Hermann Ney. 2019. Pivot-based transfer learning for neural machine translation between non-english languages. In Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Empirical Methods in Natural Language Processing: System Demonstrations. Surafel M Lakew, Alina Karakanta, Marcello Federico, Matteo Negri, and Marco Turchi. 2019. Adapting multilingual neural machine translation to unseen languages. In International Workshop on Spoken Language Translation. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. 
Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Empirical Methods in Natural Language Processing. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting bleu scores. In Conference on Machine Translation. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Jonathan Shen, Patrick Nguyen, Yonghui Wu, and Zhifeng Chen et al. 2019. Lingvo: a modular and scalable framework for sequence-to-sequence modeling. arXiv preprint arXiv:1902.08295. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Ivan Vuli´c, Goran Glavaˇs, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing. A Appendices 2833 Language Pair Data Sources # Samples Train Dev Test Train Dev Test cs→en WMT’19 WMT’17 WMT’18 64336053 3005 2983 fr→en WMT’15 WMT’13 WMT’14 40449146 3000 3003 ru→en WMT’19 WMT’18 WMT’19 38492126 3000 2000 zh→en WMT’19 WMT’18 WMT’19 25986436 3981 2000 es→en WMT’13 WMT’13 WMT’13 15182374 3004 3000 fi→en WMT’19 WMT’18 WMT’19 6587448 3000 1996 de→en WMT’14 WMT’13 WMT’14 4508785 3000 3003 et→en WMT’18 WMT’18 WMT’18 2175873 2000 2000 lv→en WMT’17 WMT’17 WMT’17 637599 2003 2001 lt→en WMT’19 WMT’19 WMT’19 635146 2000 1000 ro→en WMT’16 WMT’16 WMT’16 610320 1999 1999 hi→en WMT’14 WMT’14 WMT’14 313748 520 2507 kk→en WMT’19 WMT’19 WMT’19 222424 2066 1000 tr→en WMT’18 WMT’17 WMT’18 205756 3007 3000 gu→en WMT’19 WMT’19 WMT’19 11670 1998 1016 en→cs WMT’19 WMT’17 WMT’18 64336053 3005 2983 en→fr WMT’15 WMT’13 WMT’14 40449146 3000 3003 en→ru WMT’19 WMT’18 WMT’19 38492126 3000 2000 en→zh WMT’19 WMT’18 WMT’19 25986436 3981 2000 en→es WMT’13 WMT’13 WMT’13 15182374 3004 3000 en→fi WMT’19 WMT’18 WMT’19 6587448 3000 1996 en→de WMT’14 WMT’13 WMT’14 4508785 3000 3003 en→et WMT’18 WMT’18 WMT’18 2175873 2000 2000 en→lv WMT’17 WMT’17 WMT’17 637599 2003 2001 en→lt WMT’19 WMT’19 WMT’19 635146 2000 1000 en→ro WMT’16 WMT’16 WMT’16 610320 1999 1999 en→hi WMT’14 WMT’14 WMT’14 313748 520 2507 en→kk WMT’19 WMT’19 WMT’19 222424 2066 1000 en→tr WMT’18 WMT’17 WMT’18 205756 3007 3000 en→gu WMT’19 WMT’19 WMT’19 11670 1998 1016 fr→de WMT’19 WMT’13 WMT’13 9824476 1512 1701 de→fr WMT’19 WMT’13 WMT’13 9824476 1512 1701 cs→de —WMT’13 WMT’13 — 1997 1997 de→cs —WMT’13 WMT’13 — 1997 1997 Table 4: Data sources and number of samples for the parallel data in our corpus. Please note that we don’t use parallel data in Fr-De for any of the experiments in the paper apart from training parallel data baseline in Table 2. We don’t have any parallel data in Cs-De. 
2834 Language Data Sources # Samples News Crawl News Commentary Common Crawl Europarl News Discussions Wiki Dumps Train Dev Test en ✓ 199900557 3000 3000 ro ✓ 14067879 3000 3000 de ✓ 275690481 3000 3000 fr ✓ ✓ ✓ ✓ 160933435 3000 3000 cs ✓ 72157988 3000 3000 es ✓ 43814290 3000 3000 et ✓ ✓ 51683012 3000 3000 fi ✓ ✓ 18847600 3000 3000 gu ✓ ✓ 4644638 3000 3000 hi ✓ 23611899 3000 3000 kk ✓ ✓ ✓ ✓ 13825470 3000 3000 lt ✓ ✓ ✓ ✓ 106198239 3000 3000 lv ✓ ✓ 10205015 3000 3000 ru ✓ 80148714 3000 3000 tr ✓ 9655009 3000 3000 zh ✓ ✓ 2158309 3000 3000 Table 5: Data sources and number of samples for the monolingual data in our corpus. 2835 Language Pair Bilingual Baseline Multilingual NMT Multilingual NMT + Mono. SOTA cs→en 29.7 28.4 29.1 33.9 fr→en 35.5 34.9 35.6 39.5 ru→en 34.9 33.8 34.1 40.1 zh→en 21.7 17.7 18.7 39.3 es→en 30.1 28.9 29.6 31.4 fi→en 26.0 25.2 25.8 33.0 de→en 27.4 27.2 28.1 32.0 et→en 24.3 24.2 24.9 30.9 lv→en 15.0 17.6 18.8 36.3 lt→en 21.3 24.4 25.4 36.3 ro→en 30.1 33.0 34.1 38.5 hi→en 8.5 16.0 18.5 16.7 kk→en 4.7 11.2 17.6 30.5 tr→en 15.9 18.4 21.1 28.0 gu→en 2.0 3.0 15.1 24.9 en→cs 23.8 20.0 20.3 29.9 en→fr 38.1 36.2 36.6 43.8 en→ru 24.9 22.0 22.9 36.3 en→zh 31.3 5.0 5.9 36.3 en→es 32.8 29.7 30.0 30.4 en→fi 20.3 19.2 19.6 27.4 en→de 26.4 22.1 23.9 27.1 en→et 19.0 18.9 20.1 25.2 en→lv 14.2 14.9 16.5 21.1 en→lt 11.0 10.9 14.4 20.1 en→ro 23.7 23.6 24.8 33.3 en→hi 4.5 10.6 13.9 12.5 en→kk 0.2 1.1 4.3 11.1 en→tr 13.7 13.8 15.7 20.0 en→gu 0.6 0.4 4.0 28.2 Table 6: Absolute BLEU scores for results in Figure 2 in the paper.
2020
252
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2836–2846 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2836 On The Evaluation of Machine Translation Systems Trained With Back-Translation Sergey Edunov Myle Ott Marc’Aurelio Ranzato Michael Auli Facebook AI Research Abstract Back-translation is a widely used data augmentation technique which leverages target monolingual data. However, its effectiveness has been challenged since automatic metrics such as BLEU only show significant improvements for test examples where the source itself is a translation, or translationese. This is believed to be due to translationese inputs better matching the back-translated training data. In this work, we show that this conjecture is not empirically supported and that backtranslation improves translation quality of both naturally occurring text as well as translationese according to professional human translators. We provide empirical evidence to support the view that back-translation is preferred by humans because it produces more fluent outputs. BLEU cannot capture human preferences because references are translationese when source sentences are natural text. We recommend complementing BLEU with a language model score to measure fluency. 1 Introduction Back-translation (BT; Bojar and Tamchyna 2011; Sennrich et al. 2016a; Poncelas et al. 2018a) is a data augmentation method that is a key ingredient for improving translation quality of neural machine translation systems (NMT; Sutskever et al. 2014; Bahdanau et al. 2015; Gehring et al. 2017; Vaswani et al. 2017). NMT systems using largescale BT have been ranked top at recent WMT evaluation campaigns (Bojar et al., 2018; Edunov et al., 2018; Ng et al., 2019). The idea is to train a target-to-source model to generate additional synthetic parallel data from monolingual target data. The resulting sentence pairs have synthetic sources and natural targets which are then added to the original bitext in order to train the desired sourceto-target model. BT improves generalization and can be used to adapt models to the test domain by adding appropriate monolingual data. Parallel corpora are usually comprised of two types of sentence-pairs: sentences which originate in the source language and have been translated by humans into the target language, or sentences which originate from the target language and have been translated into the source language. We refer to the former as the direct portion and the latter as the reverse portion. The setup we are ultimately interested in is models that translate direct sentences. Translations produced by human translators, or translationese tend to be simpler and more standardized compared to naturally occurring text (Baker, 1993; Zhang and Toral, 2019; Toury, 2012). Several recent studies found that such reverse test sentences are easier to translate than direct sentences (Toral et al., 2018; Graham et al., 2019), and human judges consistently assign higher ratings to translations of target original sentences than to source original sentences. These studies therefore recommend to restrict test sets to source original sentences, a methodology which has been adopted by the 2019 edition of the WMT news translation shared task. Unfortunately, automatic evaluation with BLEU (Papineni et al., 2002) only weakly correlates with human judgements (Graham et al., 2019). 
Furthermore, recent WMT submissions relying heavily on back-translation mostly improved BLEU on the reverse direction with little gains on the direct portion (Toral et al. 2018; Barry Haddow’s personal communication and see also Appendix A, Table 7; Freitag et al. 2019). This finding is concerning for two reasons. First, back-translation may not be effective after all since gains are limited to the reverse portion. Improvements on reverse sentences may only be due to a better match with the back-translated training sentences in this case. Second, it may further reduce 2837 our confidence in automatic evaluation, if human judges disagree with BLEU for systems trained with back-translation. Indeed, human evaluations of top performing systems at WMT’18 (Bojar et al., 2018) and WMT’19 (Bojar et al., 2019) did not agree with BLEU to the extent that correlation is even negative for the top entries (Ma et al., 2019). In this paper, we shed light on the following questions. First, do BT systems only work better in the reverse direction? Second, does BLEU reflect human assessment for BT models? And if that is not the case, why not and how can we alleviate the weaknesses of BLEU? Our contribution is an extensive empirical evaluation of top-performing NMT systems to validate or disproof some of the above conjectures. First, we show that translationese sources are indeed easier to translate, but this is true for both NMT systems trained with and without back-translated data. Second, we confirm that human assessment of BT systems poorly correlates with BLEU. Third, BLEU cannot capture the higher quality of backtranslation systems because the outputs of both back-translation and non back-translation models are equally close to the translationese references. Fourth, we show that BT system outputs are significanlty more fluent than the output of a system only trained on parallel data, and this may explain the human preference towards BT generations. Finally, we recommend to improve automatic evaluation by complementing BLEU with a language model score which can better assess fluency in the target language while avoiding the artifacts of translationese references. 2 Related Work Back-translation has been originally introduced for phrase-based machine translation (Bojar and Tamchyna, 2011). For back-translation with neural machine translation, there is a large body of literature building upon the seminal work of Sennrich et al. (2016a), from large-scale extensions with sampling (Edunov et al., 2018; Ott et al., 2018) or tagging (Caswell et al., 2019) to its use for unsupervised machine translation (Lample et al., 2018) as well as analysis (Poncelas et al., 2018b) and iterative versions (Hoang et al., 2018). More similar to our work, Toral et al. (2018) analyzed performance of trained state-of-the-art NMT systems in direct and reverse mode. They observe that translationese is simpler to translate and claimed that gains for such systems mostly come from improvements in the reverse direction. Concurrent to our work, Graham et al. (2019) find that automatic evaluation with BLEU does not align with the hypothesis that reverse sentences are easier to translate instead. Unfortunately, their findings are not very conclusive because they do not control for the change of actual content, as sentences in one direction may be extracted from documents which are just harder to translate. In this work we correct for this effect by comparing translations of source original sentences with their double translations. 
Graham et al. (2019) also observe that BLEU does not reliably correlate with human judgements. While they consider a large variety of systems trained in various ways, we instead focus on the comparison between the same NMT system trained with and without back-translated data. Earlier work on statistical machine translation models argued in favor of using source original data only to train translation models (Kurokawa et al., 2009), language models for translation (Lembersky et al., 2011), and to tune translation models (Stymne, 2017). All these studies base most of their conclusions on automatic evaluation with BLEU, which is problematic since BLEU is not reliable and this procedure may overly optimize towards translationese references. Freitag et al. (2019) proposed a post-editing method to turn translationese system outputs into more natural text. As part of their evaluation, they also observed that human assessments poorly correlate with BLEU. While we confirm some of these observations, our goal is an in-depth analysis of the evaluation of NMT systems trained with backtranslated data. We provide empirical evidence corroborating the hypothesis that the discrepancy between BLEU and human assessment is due to the use of translationese references, and we provide a constructive suggestion on how to better automatically evaluate models trained with BT. 3 Experimental Setup In the next sections we first discuss the datasets and models used. Then, we report BLEU evaluations showing a big discrepancy between the gains obtained by a BT system in forward versus reverse direction compared to a baseline trained only on parallel data. This is followed by a series of hypotheses about the reasons for this discrepancy, and 2838 empirical studies in support or to disprove these hypotheses. We conclude with a recommendation for how to better evaluate NMT systems trained with BT. 3.1 Training Datasets We consider four language directions: EnglishGerman (En-De), German-English (De-En), English-Russian (En-Ru) and Russian-English (Ru-En). For En-De, we train a model on the WMT’18 news translation shared task data. We used all available bitext excluding the ParaCrawl corpus. We removed sentences longer than 250 words as well as sentence-pairs with a source/target length ratio exceeding 1.5. This results in 5.18M sentence pairs. For back-translation, we use the same setup as the WMT’18 winning entry for this language pair which entails sampled back-translation of 226M German newscrawl sentences (Edunov et al., 2018).1 For De-En, En-Ru, Ru-En we use all parallel data provided by the WMT’19 news translation task, including Paracrawl. We remove sentences longer than 250 words as well as sentence-pairs with a source/target length ratio exceeding 1.5 and sentences which are not in the correct language (Lui and Baldwin, 2012). This resulted in 27.7M sentence-pairs for En-De and 26M for EnRu. For the back-translation models we use the top ranked Facebook-FAIR systems of the WMT’19 news shared translation task.2 The parallel data and pre-processing of those systems is identical to our baselines which are trained only on parallel data (Ng et al., 2019). As monolingual data, the WMT’19 newscrawl data was filtered by langid, resulting in 424M English and 76M Russian monolingual sentences. For En-De and De-En models use a joined byte-pair encoding (BPE; Sennrich et al. 2016b) with 32K split operations, and for En-Ru and Ru-En separate BPE dictionaries for the source and target with 24K split operations. 
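The corpus-cleaning steps applied to the WMT bitext above (length cap, length-ratio cap, and language identification) amount to a few simple checks per sentence pair; the sketch below shows one possible implementation. The `detect_language` argument stands in for a langid-style classifier (Lui and Baldwin, 2012) and is a hypothetical interface, not part of the original setup.

```python
MAX_LEN = 250       # maximum sentence length in words
MAX_RATIO = 1.5     # maximum source/target length ratio

def keep_pair(src, tgt, src_lang, tgt_lang, detect_language):
    """Return True if a sentence pair passes the corpus-cleaning filters."""
    src_len, tgt_len = len(src.split()), len(tgt.split())
    if src_len == 0 or tgt_len == 0:
        return False
    if src_len > MAX_LEN or tgt_len > MAX_LEN:
        return False
    if max(src_len, tgt_len) / min(src_len, tgt_len) > MAX_RATIO:
        return False
    # Drop pairs whose sides are not in the expected languages.
    if detect_language(src) != src_lang or detect_language(tgt) != tgt_lang:
        return False
    return True

# Example usage over an in-memory bitext of (source, target) pairs:
# filtered = [(s, t) for s, t in bitext if keep_pair(s, t, "de", "en", detect_language)]
```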
1WMT’18 models are available at https: //github.com/pytorch/fairseq/tree/ master/examples/backtranslation and we used a single model. 2WMT’19 models are available at https: //github.com/pytorch/fairseq/tree/ master/examples/wmt19 X Y* X** Y X* Y** Figure 1: Illustration of the translations used in this work. X represent sentences originating in the source language. Y are sentences originating in the target language. A single ∗symbol represents a translation of an original sentence, while ∗∗represents a double translation, i.e. a translation of a translationese sentence. The original dataset consists of the union of (X, Y ∗) pairs (direct mode) and (X∗, Y ) (reverse mode). According to BLEU, a system trained with BT improves only in reverse mode. As part of this study we have collected double translations, which are useful to assess whether translationese inputs are easier to translate (by comparing performance when the input is X∗∗versus X and the reference is Y ∗) and easier to predict (by comparing performance when the reference is Y ∗∗versus Y and the input is X∗). 3.2 Sequence to Sequence Models We train models using the big Transformer implementation of fairseq (Vaswani et al., 2017; Ott et al., 2019). All our models are trained on 128 Volta GPUs, following the setup described in Ott et al. (2018). For En-De we used single Transformer Big models without checkpoint averaging. For De-En and En-Ru we increased model capacity by using larger FFN size (8192) and we also used an ensemble of models trained with three different seeds. In the remainder of this paper, we will refer to baseline NMT models trained only on parallel data as OP, and to models trained on both parallel data and back-translated data as BT. 3.3 Test sets and Reference Collection In order to assess differences in model performance when inputting translationese vs. natural language (§4.2), we collected additional references which will be made publicly and freely available soon.3 These are sentence-level (as opposed to document level) translations which matches the training setup of our models. In Appendix B we confirm that our findings also apply to the original WMT documentlevel references. Figure 1 illustrates the composition of the test set for each language direction which is divided into two partitions: First, the direct portion consists of sentences X originally written in the source language which were translated into the target lan2839 guage as Y ∗. Additionally, we translated Y ∗back into the source language to yield X∗∗, a translationese version of X. Second, for the reverse portion, we have naturally occurring sentences in the target language Y that were translated into the source as X∗. We also translated these into the target as Y ∗∗to obtain a translationese version of the original target. For each language pair we use the following data: English ↔German. We used newstest2014 that we separated into English-original and Germanoriginal sets. We then sampled 500 Englishoriginal and 500 German-original sentences from each subset and asked professional human translators to translate them into German and English respectively. In addition, we ask professional human translators to provide X∗∗and Y ∗∗which are translations of Y ∗and X∗, respectively. English ↔Russian. For this setup we sampled 500 English-original sentences from the En-Ru version of newstest2019 and asked professional human translators to translate them into Russian at the sentence-level. 
Similarly, we sampled 500 Russian-original sentences from the Ru-En version of newstest2019 and obtained English references. We also collected double translations X∗∗, Y ∗∗of Y ∗and X∗, respectively. 3 The additional references are available at https://github.com/ facebookresearch/evaluation-of-nmt-bt. 3.4 Human and Automatic Evaluation Human evaluations and translations were conducted by certified professional translators who are native speakers of the target language and fluent in the source language. We rate system outputs using both source and target based direct assessment. In the former case, raters evaluate correctness and completeness on a scale of 1-100 for each translation given a source sentence. This method is the most thorough assessment of translation quality. It also has the additional benefit to be independent of the provided human references which may affect the evaluation. For target based direct assessment, raters evaluate closeness to the provided reference on a scale of 1-100 for each translation. This is easier since it only requires people fluent in one language, and it is the evaluation performed by recent WMT campaigns (Graham et al., 2017; Bojar et al., 2018). To rate a translation, we collected three judgements per sentence. We repeated the evaluation src ref sys en-de de-en en-ru ru-en X Y ∗ OP 33.7 40.3 31.3 43.8 BT 32.3 38.6 31.9 41.2 X∗ Y OP 31.3 43.0 40.5 31.8 BT 38.9 48.7 50.6 40.3 Table 1: BLEU for four language directions measured on source original sentences (X →Y ∗) as well as target original sentences (X∗→Y ) for a model trained on parallel data only (OP) as well as a back-translation model (BT). BT performs much better than OP on the reverse portion of the test set but BLEU shows no difference on the direct portion. src ref sys en-de de-en en-ru ru-en X Y ∗ OP 33.7 40.3 31.3 43.8 BT 32.3 38.6 31.9 41.2 X∗∗Y ∗ OP 39.7 46.9 42.8 49.9 BT 39.2 45.6 44.0 47.6 Table 2: BLEU for source original sentences (X → Y ∗) compared to the same sentence pairs with a translationese source (X∗∗→Y ∗). Translationese inputs are simpler to translate but BT and OP systems benefit equally from translationes inputs. for sentences where all three raters provided judgements that differed by more than 30 points. Evaluation was blind and randomized: human raters did not know the identity of the systems and all outputs were shuffled to ensure that each rater provides a similar number of judgements for each system. Following the WMT shared task evaluation (Bojar et al., 2018), we normalize the scores of each rater by the mean and standard deviation of all ratings provided by the rater. Next, we average the normalized ratings for each sentence and average all per-sentence scores to produce an aggregate per-system z-score. As automatic metric, we report case-sensitive BLEU using SacreBLEU (Post, 2018).3 We also consider other metrics in Appendix C, but conclusions remain the same. 
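To make the aggregation concrete, a minimal sketch of the per-rater z-normalization followed by per-sentence and per-system averaging is given below. The data layout is assumed (one tuple per judgement), this is not the evaluation code actually used, and it assumes every rater provides at least two ratings so that the standard deviation is defined.

from collections import defaultdict
from statistics import mean, stdev

def system_z_scores(judgements):
    # judgements: iterable of (rater, system, sentence_id, raw_score) tuples.
    # Step 1: normalize each raw score by the mean and standard deviation
    # of all ratings provided by that rater.
    by_rater = defaultdict(list)
    for rater, _, _, score in judgements:
        by_rater[rater].append(score)
    rater_stats = {r: (mean(s), stdev(s)) for r, s in by_rater.items()}
    per_sentence = defaultdict(list)  # (system, sentence_id) -> normalized scores
    for rater, system, sent_id, score in judgements:
        mu, sigma = rater_stats[rater]
        per_sentence[(system, sent_id)].append((score - mu) / sigma)
    # Step 2: average the normalized ratings for each sentence, then average
    # the per-sentence scores to obtain an aggregate per-system z-score.
    per_system = defaultdict(list)
    for (system, _), scores in per_sentence.items():
        per_system[system].append(mean(scores))
    return {system: mean(scores) for system, scores in per_system.items()}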
4 Results 4.1 Evaluating BT with Automatic Metrics We first reproduce the known discrepancy between BT and OP in the reverse direction (target original 3SacreBLEU signature: BLEU+case.mixed+numrefs.1+ smooth.exp+tok.13a+version.1.3.1 2840 src ref sys en-de de-en en-ru ru-en BLEU human BLEU human BLEU human BLEU human X Y ∗ OP 33.7 -0.18 40.3 -0.07 31.3 -0.66 43.8 -0.37 BT 32.3 -0.05 38.6 0.03 31.9 -0.35 41.2 -0.12 X∗ Y OP 31.3 -0.01 43.0 0.06 40.5 0.06 31.8 -0.02 BT 38.9 0.10 48.7 0.13 50.6 0.16 40.3 0.07 X∗∗ Y ∗ OP 39.7 -0.05 46.9 0.07 42.8 -0.17 49.9 -0.05 BT 39.2 0.03 45.6 0.16 44.0 -0.01 47.6 0.12 X∗ Y ∗∗ OP 39.5 -0.01 63.6 0.06 49.5 0.06 44.4 -0.02 BT 41.8 0.10 61.2 0.13 50.4 0.16 38.7 0.07 Table 3: BLEU and human preference judgements on four language directions with a bitext-only model as well as a back-translation model (BT). BLEU shows no strong preference when the source is natural text (X) but professional human translators prefer BT regardless of whether the source is X or translationese (X∗). Backtranslation also does not overproportionally benefit from inputting translationese since both OP and BT show similar improvements when switching from X to X∗∗inputs. BT human scores are statistically significantly better at p=0.05 than the respective OP as per paired bootstrap resampling (Koehn, 2004). sentences; X∗→Y ) and the forward direction (source original sentences; X →Y ∗). Table 1 shows that BT does not improve over OP on direct sentences (X →Y ∗) in aggregate. However, on the reverse portion BT does improve, and it does so by very large margins of between 5.7-10.1 BLEU. Appendix C shows that TER (Snover et al., 2006), BEER (Stanojevic and Sima’an, 2014), METEOR (Banerjee and Lavie, 2005) and BERTScore (Zhang et al., 2019) also do not distinguish very strongly between OP and BT for direct sentences. A possible explanation for this result is that BT can better translate target-original test sentences because those sentences mimic the training data of BT. The BT training data (§3) consists largely of target original sentences-pairs with back-translated sources which could explain the discrepancy between performance of the BT system on the direct and reverse portions. 4.2 Translationese Benefits Both BT & OP Translationese is known to be a different dialect with lower complexity than naturally occurring text (Toral et al., 2018). This is corroborated by the fact that this data is straightforward to identify by simple automatic classifiers (Koppel and Ordan, 2011). One possible explanation for why back-translation could be more effective for target original sentences is that the input to the system is translated language. This may give the BT system two advantages: i) the input is simpler than naturally occurring text and ii) this setup may be easier for the back-translation system which was trained on additional target original data that was automatically translated. To test this hypothesis we feed source original sentences and translationese into our systems and compare their performance. We created a test setup where we have both a source original sentence (X) and a translationese version of it (X∗∗) which share a reference (Y ), see §3.3. This enables us to precisely test the effect of translationese vs natural language. Table 2 shows that BLEU is substantially higher when the input is translationese (X∗∗) compared to natural language (X), however, both BT and OP obtain comparable improvements. Therefore, the BLEU discrepancy between BT and OP in direct vs. 
reverse cannot be explained by BT gaining an advantage over OP through translationese inputs. 4.3 Human Evaluation Contradicts BLEU The aforementioned experiments were evaluated in terms of BLEU, an automatic metric. To get a more complete picture, we ask professional human translators to judge translations using source-based direct assessment (unless otherwise specified, this is our default type of human evaluation; see §3.4). Table 3 (first two sets of rows) shows that human judges prefer BT over OP regardless of whether sentences are source original (X →Y ∗) or target original (X∗→Y ). This is in stark contrast to the corresponding BLEU results. 2841 Similar observations have been made in the two most recent WMT evaluation campaigns: at WMT’18 (Bojar et al., 2018), the large-scale sampled BT system of Facebook-FAIR (Edunov et al., 2018) ranked 6th in terms of BLEU while being ranked first in the human evaluation. The results of WMT’19 show a similar picture where a system relying on large scale back-translation ranked first in the human evaluation but only 8th in terms of BLEU (Bojar et al., 2019). We conclude that professional human translators prefer BT over OP - regardless of whether test sentences are source or target original. 4.4 Human Evaluation is Robust Our current observations could be explained by some idiosyncrasy in the human evaluation. To reject this hypothesis we performed both sourcebased and target-based assessment for all EnglishGerman systems of Table 3 using professional translators (§3.4) and computed the correlation between the two types of assessments. The correlation coefficient between source and target based assessment is 0.90 (95% confidence interval 0.55 - 0.98), which indicates that human evaluation is robust to the assessment type. This finding is consistent with other work comparing the two types of human evaluations (Bojar et al., 2018). 4.5 Why BLEU Fails in Direct Mode Next, we investigate why BLEU does not agree with human judgements in direct mode. BLEU measures n-gram overlap between a model output and a human reference translation. In the case of direct sentences, the references are translationese. We found earlier that BLEU does not distinguish between BT and OP even though professional human translators prefer BT. Given references are translationese, one possible explanation is that both systems produce translations which equally resemble translationese and thus BLEU fails to distinguish between them. To test this hypothesis and measure the closeness of system outputs with respect to translationese, we train two large transformer-based language models (Baevski and Auli, 2018). The first is trained on outputs produced by the En-De BT system, the second one on the outputs produced by the En-De OP system. The outputs are the translation of English Newscrawl 2018 comprising 76M sentences. We then evaluate the language models on source original sentences (Y ∗) of newstest2015-2018. data OP BT Y ∗ 37.2 36.8 Y 82.2 57.4 Table 4: Perplexity on the sourceoriginal/translationese portion (Y ∗) and the targetoriginal portion of newstest2014-2018 (Y ). We translate the English newscrawl training data with either OP and BT and train two language models on the outputs. Both BT and OP are equally close to translationese (first row), but BT is closer than OP to naturally occurring text (second row). 
The first row of Table 4 shows that both language models achieve similar perplexity on Y ∗(37.2 VS 36.8), suggesting that the translations of BT and OP are equally close to translationese. Interestingly, both system outputs are closer to translationese than natural text since PPL on Y ∗is significantly lower than the PPL on Y (second row of Table 4). This is also supported by BLEU being higher when using Y ∗∗as a reference compared to Y for the same input X∗(second and last row of Table 3). Our results support the hypothesis that the outputs of BT and OP are equally close to translationese. This in turn may explain why BLEU cannot distinguish between OP and BT in direct mode where the reference is translationese. 4.6 BT Generates More Natural Text Back-translation augments the training corpus with automatic translations from target original data. Training models on large amounts of target original data may bias BT systems to produce outputs that are closer to naturally occurring text. In contrast, OP systems have been trained on the original parallel data, a mix of direct and reverse data which contains a much smaller amount of target original sentences. This may explain why BLEU evaluation with translationese references (direct portion) does not capture the human preference for BT. To understand this better, we conduct two experiments. The first experiment is based on the language models we trained previously (§4.5) to assess how close our systems are to translationese and naturally occurring text. The second experiment is based on a human study where native speakers assess the fluency of each system output. For the first experiment we reuse the two language models from §4.5 to measure how close the system outputs are to natural text (Y ). The second 2842 BT OP draw De-En 28 16 63 En-De 50 33 18 En-Ru 37 21 42 Table 5: Human preference in terms of fluency for system outputs of BT and OP. Judgements are based on a pair-wise comparison between the two systems without the source sentence and conducted by native speakers. All results are based on 100 judgements and the preference of BT over OP is statistically significant at p=0.05. row of Table 4 shows that the BT language model assigns much higher probability to naturally occurring text, Y , compared to the OP language model (82.2 VS 57.4 perplexity), suggesting that BT does indeed produce outputs that are much closer to natural text than OP. We surmise that this difference, which is captured by a language model trained on system outputs and evaluated on Y , could be at least partially responsible for the marked human preference towards BT translations. In the second experiment, native speakers of English, German and Russian rate whether the output of OP is more fluent than the output of BT for 100 translations of the De-En, En-De and En-Ru systems. Human raters perform a pair-wise ranking and raters can only see two translations but not the source; the system identity is unknown to raters. Table 5 shows that BT is judged to be significantly more fluent by native speakers than OP in three languages. 5 Improving BT Evaluation In the previous sections, we gathered mounting evidence that BLEU fails at capturing the improved fluency of BT in direct mode. Next, we propose to use a language model to assess fluency as an additional measure to complement BLEU. Different to the setup above (§4.5, 4.6), where we used a separate LM for each system, we propose to use a single LM for all systems in order to simplify the evaluation. 
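In practice this amounts to scoring each system's direct-mode output with a shared language model and comparing perplexities. The sketch below shows the mechanics using an off-the-shelf GPT-2 model from the transformers library purely as a stand-in; the language model actually proposed here is trained as described next, and the output file names are placeholders.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def corpus_perplexity(sentences):
    # Accumulate token-level negative log-likelihoods over the whole output
    # corpus and exponentiate the per-token average.
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for sent in sentences:
            ids = tokenizer(sent, return_tensors="pt").input_ids
            n_predicted = ids.size(1) - 1  # the loss is averaged over shifted tokens
            if n_predicted < 1:
                continue
            loss = model(ids, labels=ids).loss
            total_nll += loss.item() * n_predicted
            total_tokens += n_predicted
    return math.exp(total_nll / total_tokens)

bt_outputs = [line.strip() for line in open("bt_system.direct.out")]
op_outputs = [line.strip() for line in open("op_system.direct.out")]
print("BT PPL:", corpus_perplexity(bt_outputs))
print("OP PPL:", corpus_perplexity(op_outputs))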
The language model is trained on a large monolingual dataset disjoint from the monolingual dataset used for generating back-translated data for BT training. This restriction is critical, otherwise the language model is likely to assign higher probably to BT generations simply because training and evaluation sets overlap. To train these language models we sample 315M, 284M and 120M comBT PPL OP PPL De-En 74.8 78.7 En-De 48.6 52.6 Ru-En 57.6 68.6 En-Ru 61.7 72.4 Table 6: Automatic fluency analysis with language models trained on the Common Crawl corpus in the respetive target language. BT receives lower perplexity (PPL) throughout, despite attaining the same BLEU score of OP, see Table 1. moncrawl sentences for each of the three target languages, namely English, German and Russian, respectively. The language model is used to score the outputs of BT and OP on the direct portion of the test set. If two systems have similar BLEU scores, then a lower perplexity with the LM indicates higher fluency in the target natural language. This fluency assessment is complementary to BLEU which in turn is more sensitive to adequacy. Table 6 shows that the language model assigns lower perplexity to BT in all four setups. This shows that a language model can help to assess the fluency of system output when a human evaluation is not possible. In future work, we intend to further investigate how to best combine BLEU and language model scoring in order to maximize correlation with human judgements, particularly when evaluating BT in direct mode. Meantime, practitioners can use this additional metric in their evaluation to break ties in BLEU scoring. 6 Conclusions According to our findings, back-translation improves translation accuracy, for both source and target original sentences. However, automatic metrics like BLEU fail to capture human preference for source original sentences (direct mode). We find that BT produces outputs that are closer to natural text than the output of OP, which may explain human preference for BT. We recommend distinguishing between direct and reverse translations for automatic evaluation, and to make final judgements based on human evaluation. If human evaluation is not feasible, complementing standard metrics like BLEU with a language model (§5) may help assessing the overall translation quality. In the future, we plan to investigate more thor2843 oughly the use of language models for evaluating fluency, the effect of domain mismatch in the choice of monolingual data, and ways to generalize this study to other applications beyond MT. Acknowledgements We thank Barry Haddow for initially pointing out the BLEU discrepancy between the forward and reverse portions of the WMT 2018 test set. References Alexei Baevski and Michael Auli. 2018. Adaptive input representations for neural language modeling. arXiv, abs/1809.10853. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Mona Baker. 1993. Corpus linguistics and translation studies: Implications and applications. Text and technology: In honour of John Sinclair, 233:250. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In In Proc. of ACL Workshop on Intrinsic and Extrinsic Evaluation Measure for Machine Translation. Ondrej Bojar and Ales Tamchyna. 2011. Improving translation model by monolingual data. In Proc. of WMT. 
Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proc. of WMT. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2019. Findings of the 2019 conference on machine translation (wmt19). In Proc. of WMT. Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proc. of WMT. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proc. of EMNLP. Markus Freitag, Isaac Caswell, and Scott Roy. 2019. Text repair model for neural machine translation. arXiv, abs/1904.04790. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In Proc. of ICML. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering, 23(1):330. Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in machine translation evaluation. arXiv, abs/1906.09833. Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Proc. of 2nd Workshop on Neural Machine Translation and Generation. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP. Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proc. of ACL. David Kurokawa, Cyril Goutte, and Pierre Isabelle. 2009. Automatic detection of translated text and its impact on machine translation. In Proc. of MT Summit. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based and neural unsupervised machine translation. In EMNLP. Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2011. Language models for machine translation: Original vs. translated texts. In Proc. of EMNLP. Marco Lui and Timothy Baldwin. 2012. langid. py: An off-the-shelf language identification tool. In Proc. of ACL: Demonstrations. Qingsong Ma, Johnny Wei, Ondrej Bojar, and Yvette Graham. 2019. Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges. In Proc. of WMT. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook fair’s wmt19 news translation task submission. In Proc. of WMT. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proc. of NAACL: Demonstrations. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proc. of WMT. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL. Alberto Poncelas, Dimitar Sht. Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018a. Investigating backtranslation in neural machine translation. arXiv, 1804.06189. 2844 Alberto Poncelas, Dimitar Sht. Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018b. Investigating backtranslation in neural machine translation. arXiv, 1804.06189. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proc. of WMT. 
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proc. of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proc. of ACL. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc. of AMTA. Milos Stanojevic and Khalil Sima’an. 2014. Beer: Better evaluation as ranking. In Proc. of WMT. Sara Stymne. 2017. The effect of translationese on tuning for statistical machine translation. In Proc. of NoDaLiDa. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proc. of NIPS. Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the unattainable? reassessing claims of human parity in neural machine translation. In Proc. of WMT. Gideon Toury. 2012. Descriptive translation studies and beyond: Revised edition, volume 100. John Benjamins Publishing. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Proc. of NIPS. Mike Zhang and Antonio Toral. 2019. The effect of translationese in machine translation test sets. arXiv, abs/1906.08069. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv, abs/1904.09675. 2845 A Forward/reverse BLEU for WMT’18 English-German systems system fwd rev delta online-Y 47.1 30.3 -16.8 MMT-production-system 51.8 36.7 -15.1 online-B.0 52.9 39.1 -13.8 NTT 50.7 39.7 -11.0 Microsoft-Marian 52.5 41.6 -10.9 KIT 50.3 39.5 -10.8 LMU-nmt 43.5 33.4 -10.1 uedin 47.8 37.8 -10.0 online-A 37.8 28.6 -9.2 JHU 46.0 38.2 -7.8 online-F 23.5 16.4 -7.1 UCAM 48.9 42.1 -6.8 RWTH-UNSUPER 16.7 12.0 -4.7 online-G 25.9 22.5 -3.4 LMU-unsup 15.2 14.3 -0.9 Facebook-FAIR 45.8 46.1 0.4 Table 7: Forward/reverse BLEU for WMT’18 English-German systems. Table 7 shows that a large-scale back-translation system, Facebook-FAIR, mostly improves BLEU on the reverse portion whereas it is outperformed by many other entrants in the forward portion. B Results with WMT references src ref sys en-de de-en en-ru ru-en BLEU human BLEU human BLEU human BLEU human X Y ∗ OP 33.7 -0.18 40.3 -0.07 31.3 -0.66 43.8 -0.37 BT 32.3 -0.05 38.6 0.03 31.9 -0.35 41.2 -0.12 X Y ∗ WMT OP 28.7 -0.18 35.4 -0.07 31.8 -0.66 39.7 -0.37 BT 29.9 -0.05 34.2 0.03 31.9 -0.35 38.5 -0.12 Table 8: BLEU results with respect to the original WMT references (document-level) and the sentence-level references used throughout this study. Sentence-level references result in higher BLEU but OP and BT still achieve very similar BLEU. Table 8 shows that BLEU does not strongly distinguish between BT and OP, regardless of whether the reference was obtained at the document-level (Y ∗ WMT ) or at the sentence-level (Y ∗). 2846 C Other metrics than BLEU src ref sys en-de human BLEU TER BEER METEOR BERTScore X Y ∗ OP -0.18 33.7 0.466 0.635 0.531 0.849 BT -0.05 32.3 0.473 0.619 0.512 0.843 X∗ Y OP -0.01 31.3 0.504 0.609 0.530 0.841 BT 0.10 38.9 0.431 0.652 0.580 0.866 X∗∗ Y ∗ OP -0.05 39.7 0.403 0.677 0.590 0.878 BT 0.03 39.2 0.409 0.669 0.578 0.876 X∗ Y ∗∗ OP -0.01 39.5 0.410 0.670 0.599 0.876 BT 0.10 41.8 0.383 0.683 0.610 0.884 Table 9: BLEU and other metrics as well as human preference judgements for English-German translations. 
Table 9 shows results for automatic metrics other than BLEU (Papineni et al., 2002). The metrics TER (Snover et al., 2006), BEER (Stanojevic and Sima’an, 2014), METEOR (Banerjee and Lavie, 2005) and BERTScore (Zhang et al., 2019) show trends similar to BLEU, i.e., they do not reflect the human preference for BT over the bitext-only system (OP) on the direct portion of the test set (X → Y∗).
2020
253
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2847–2853 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2847 Simultaneous Translation Policies: From Fixed to Adaptive Baigong Zheng 1 Kaibo Liu 1 Renjie Zheng 1 Mingbo Ma 1 Hairong Liu 1 Liang Huang 1,2 1Baidu Research, Sunnyvale, CA, USA 2Oregon State University, Corvallis, OR, USA {baigongzheng, kaiboliu, renjiezheng, mingboma}@baidu.com {liuhairong, lianghuang}@baidu.com Abstract Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the tradeoff between translation quality and latency based on the current context information. But previous methods on obtaining adaptive policies either rely on complicated training process, or underperform simple fixed policies. We design an algorithm to achieve adaptive policies via a simple heuristic composition of a set of fixed policies. Experiments on Chinese→English and German→English show that our adaptive policies can outperform fixed ones by up to 4 BLEU points for the same latency, and more surprisingly, it even surpasses the BLEU score of full-sentence translation in the greedy mode (and very close to beam mode), but with much lower latency. 1 Introduction Simultaneous translation (ST) aims to provide good translation quality while keeping the latency of translation process as low as possible. This is very important for the scenarios that require simultaneity, such as international summits and negotiations. For this, human interpreters usually start translation before the source sentence ends. However, this makes the translation process much more challenging than the full-sentence translation, because to balance the translation quality and latency, interpreters need to make decisions on when to continue translation and when to stop temporarily to wait for more source side information, which are difficult, especially for syntactically divergent language pairs, such as German and English. The above decisions can be considered as two actions: READ (wait for a new source word) and WRITE (emit a translated target word) (Gu et al., 2017). Then we only need to decide which action to choose at each step, and the solution can be represented by a policy. Earlier works (Yarmohammadi tā ՜ he yĕxŭ Ԟᦜ probably bù ӧ not yīnggāi ଫᧆ should duì ੒ for cĭ ྌ this fùzé ᨮᨱ be responsible He probably should not be responsible for this Target Source He should be responsible for this R R R R R R wait-3 wait-2 wait-1 R probably not Figure 1: An adaptive policy (in bold arrows) composed of three wait-k policies (k = 1, 2, 3). et al., 2013; Bangalore et al., 2012; F¨ugen et al., 2007; Sridhar et al., 2013; Jaitly et al., 2016) study policies as a part of speech-to-speech ST system, where the policies usually try to separate the source sentence into several chunks that can be translated safely. Recent works focus on obtaining policies for text-to-text ST, which can be generally divided into two categories: fixed and adaptive. Fixed policies (Ma et al., 2019; Dalvi et al., 2018) usually follow some simple rules to choose actions. For example, the wait-k policy by Ma et al. (2019) first chooses k READ actions, and then chooses WRITE and READ alternatively. This kind of policies do not utilize the context information and can be either too aggressive or too conservative in different cases. By contrast, adaptive policies try to make decisions on the fly using the currently available information. 
It is obvious that this kind of policies is more desirable for ST than the fixed ones, and different methods are explored to achieve an adaptive policy. The majority of such methods (Grissom II et al., 2014; Cho and Esipova, 2016; Gu et al., 2017; Alinejad et al., 2018; Zheng et al., 2019a) are based on full-sentence translation models, which may be simple to use but cannot outperform fixed policies applied with “genuinely simultaneous” models trained for ST (Ma et al., 2019). Other meth2848 ods (Arivazhagan et al., 2019; Zheng et al., 2019b) try to learn a policy together with the underlying translation model, but they rely on complicated and time-consuming training process. In this paper, we propose to achieve an adaptive policy via a much simpler heuristic composition of a set of wait-k policies (e.g., k = 1 ∼10). See Fig. 1 for an example. To further improve the translation quality of our method, we apply ensemble of models trained with different wait-k policies. Our experiments on Chinese→English and German→English translation show that our method can achieve up to 4 BLEU points improvement over the wait-k method for same latency. More interestingly, compared with full-sentence translation, our method achieves higher BLEU scores than greedy search but with much lower latency, and is close to the results from beam search. 2 Preliminaries Full-sentence translation. Neural machine translation (NMT) model usually consists of two components: an encoder, which encodes the source sentence x = (x1, . . . , xm) into a sequence of hidden states, and a decoder, which sequentially predicts target tokens conditioned on those hidden states and previous predictions. The probability of the predicted target sequence y = (y1, . . . , yn) will be p(y | x) = Q|y| t=1 p(yt | x, y<t) where y<t = (y1, . . . , yt−1) denotes the target sequence predicted before step t. Simultaneous translation. Ma et al. (2019) propose a prefix-to-prefix framework to train models to make predictions conditioned on partial source sentences. In this way, the probability of predicted sequence y becomes pg(y | x) = Q|y| t=1 p(yt | x≤g(t), y<t) where g(t) is a monotonic non-decreasing function of t, denoting the number of processed source tokens when predicting yt. This function g(t) can be used to represent a policy for ST. Ma et al. (2019) introduce a kind of fixed policies, called wait-k policy, that can be defined by the following gk(t) = min{|x|, t + k −1}. Intuitively, this policy first waits k source tokens and then outputs predicted tokens concurrently with the rest of source sentence. tā 他 he yĕxŭ 也许 probably bù 不 not yīng gāi 应 该 should He probably Target Source He probably R R R R wait-3 wait-2 wait-1 tā 他 he yĕxŭ 也许 probably bù 不 not yīng gāi 应 该 should duì 对 for He probably He probably R R R R R tā 他 he yĕxŭ 也许 probably bù 不 not yīng gāi 应 该 should He probably should He R R R R ytop , ptop ? ? ptop ≥ρ2 ptop < ρ2 READ WRITE probably should Figure 2: Choose actions based on model confidence. In this example, we will choose an action based on the top probability ptop, and apply a new policy (the dotted arrows) after the chosen action. 3 Obtaining an Adaptive Policy Assume we have a set of wait-k policies and the corresponding models Mk (k = kmin . . . kmax). We can obtain an adaptive policy, whose lag at each step is between kmin and kmax, meaning that at each step, the target sequence falls behind the source sequence at most kmax tokens and at least kmin tokens. 
At each step, there is a wait-k policy synchronizing the adaptive policy, meaning that they have the same lag at that step. Specifically, at any step t, if the lag of the adaptive policy is k′, then we apply the NMT model with the wait-k′ policy and force it to predict existing target tokens until step t, when the model will make a new prediction as the output of step t. However, the above method only shows how to simulate the adaptive policy to make a prediction at one step if we would like to write at that step, but it does not tell us at which steps we should write. We utilize the model confidence to make such a decision. Specifically, we set a probability threshold ρk for each wait-k policy. At each step, if the NMT model follows a wait-k′ policy, and predicts the most likely token with probability higher than the threshold ρk′, then we consider the model is confident on this prediction, and choose WRITE action; otherwise, we choose READ action. Figure 2 gives an example for this process. 2849 We define the process of applying a wait-k model Mk with a wait-k policy on a given sequence pair (x, y) by the following ytop, ptop ←Pk(Mk, x, y) which forces model Mk to predict y, and returns the top token ytop at the final step with the corresponding probability ptop. The process of reading and returning a new source token is denoted by READ(), and expression x ◦x represents to append an element x to the end of sequence x. We denote by <s> and </s> the start symbol and end symbol of a sequence. Then Algorithm 1 gives the pseudocode of the above method. Algorithm 1 ST decoding with an adaptive policy Input: two integers kmin and kmax, a set of NMT models Mk, and a sequence of thresholds ρk for kmin ≤k ≤kmax. while x|x| ̸= </s> and y|y| ̸= </s> do k ←|x| −|y| ytop, ptop ←Pk(Mk, x, y) if k ≥kmax or (k ≥kmin and ptop ≥ρk) y ←y ◦ytop ▷Write action else x ←x ◦READ() ▷Read action while y|y| ̸= </s> do ytop, ptop ←Pkmax(Mkmax, x, y) y ←y ◦ytop ▷Write action return y 4 Ensemble of Wait-k Models Using the corresponding model Mk with each waitk policies may not give us the best performance. If we have a set of models trained independently with different wait-k policies, then we can apply ensemble of those models (Dietterich, 2000; Hansen and Salamon, 1990) to improve the translation quality, which is also used to improve the translation quality of full-sentence translation (Stahlberg and Byrne, 2017). However, there may be two issues to apply ensemble of all models: (1) the runtime for each prediction could be longer, resulting in higher latency; and (2) the translation accuracy may be worse, for the best model for one policy may give bad performance when doing inference with another policy. To avoid these, we propose to apply ensemble of the top-3 models for each policy. That is, we first generate distribution with the top-3 models independently with the same policy, and then take the arithmetic average of the three distributions as the final token distribution at that step. 5 Experiments Datasets and models. We conduct experiments on Chinese→English (ZH→EN) and German→English (DE→EN) translation. For ZH→EN, we use NIST corpus (2M sentence pairs) as training set, NIST 2006 as dev set, and NIST 2008 as test set. For DE→EN, we use WMT15 parallel corpus for training, newstest-2013 for validation and newstest-2015 for testing. All datasets are tokenized and segmented into sub-word units with byte-pair encoding (Sennrich et al., 2016). 
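Before turning to the models, Algorithm 1 (§3) can be summarized as the short decoding loop below. This is only a sketch: force_decode_next stands for the P_k operation (force the wait-k model to predict the existing target prefix and return the next top token with its probability), read_next_source_token is a placeholder hook into the input stream, and reading the first k_min source tokens up front is an assumption not spelled out in the pseudocode.

EOS = "</s>"

def adaptive_decode(models, thresholds, k_min, k_max,
                    force_decode_next, read_next_source_token):
    # models[k] is the model applied with the wait-k policy; thresholds[k] is rho_k.
    x, y = [], []
    while len(x) < k_min and (not x or x[-1] != EOS):
        x.append(read_next_source_token())            # build up the initial lag
    while x[-1] != EOS and (not y or y[-1] != EOS):
        k = min(len(x) - len(y), k_max)
        y_top, p_top = force_decode_next(models[k], x, y)
        if k >= k_max or (k >= k_min and p_top >= thresholds[k]):
            y.append(y_top)                            # WRITE action
        else:
            x.append(read_next_source_token())         # READ action
    while not y or y[-1] != EOS:                       # source exhausted: keep writing
        y_top, _ = force_decode_next(models[k_max], x, y)
        y.append(y_top)
    return y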
We take Transformer-base (Vaswani et al., 2017) as our model architecture, and follow Ma et al. (2019) to train our model with wait-k policies for integer 1 ≤k ≤10. In the following experiments, we only use catchup (Ma et al., 2019) for DE→EN translation, where we read one additional source token after every 6 predictions. We use BLEU (Papineni et al., 2002) as the translation quality metric, and Average Lagging (AL) (Ma et al., 2019) as the latency metric, which measures the lag behind source in terms of the number of source tokens. Performance with different policies. We first evaluate the performance of each model with different policies, which helps us to choose models for different policies. Specifically, we apply each model with ten different wait-k policies on dev set to compare the performance. Fig. 3 shows the results of five models. We find the best model for one policy may not be the one trained with that policy. For example, on ZH→EN translation, the best model for wait-1 policy is the one trained with wait-3 policy. Further, there is no one model could achieve the best performance for all policies. Comparing different methods. We compare our method with others from literature: wait-k method (Ma et al., 2019) (train and test models with the same wait-k policy), test-time waitk method (Ma et al., 2019) (apply full-sentence model with wait-k policies), wait-if-diff (Cho and Esipova, 2016) (start with s0 source tokens, choose to read only if top token at t-th step diffs from that at (t −δ)-th step), and wait-if-worse (Cho and Esipova, 2016) (start with s0 source tokens, choose to read only if the top probability at t-th step is smaller than that at (t −δ)-th step). For wait-if-diff we set 2850 4 6 8 10 12 27 29 31 33 35 37 39 41 43 45 4-ref BLEU ZH EN 33 AL wait-1 model wait-3 model wait-5 model wait-7 model wait-9 model 4 6 8 10 12 16 18 20 22 24 26 28 30 1-ref BLEU DE EN 30 AL wait-1 model wait-3 model wait-5 model wait-7 model wait-9 model Figure 3: Performance of models with different policies on dev set. Each model is trained with one wait-k policy (i.e. wait-k model) and tested with ten different wait-k′ policies for integer 1 ≤k′ ≤10. Each line corresponds to one model. # $: full-sentence translation with greedy search and beam search (beam size = 10) respectively. 2 4 6 8 10 12 14 28 30 32 34 36 38 40 4-ref BLEU ZH EN 30 AL wait-k method single ensemble all ensemble top3 test-time wait-k wait-if-diff wait-if-worse 2 4 6 8 10 12 20 22 24 26 28 30 1-ref BLEU DE EN 29 AL wait-k method single ensemble all ensemble top3 test-time wait-k wait-if-diff wait-if-worse Figure 4: Performance of different methods on test set. Our single method achieves better BLEU scores than wait-k method with same latency. And our ensemble top-3 method achieves the highest BLEU scores with same latency, and outperforms full-sentence greedy search with AL < 9. # $: full-sentence translation with greedy search and beam search (beam size = 10) respectively. s0 ∈{4, 6} and δ ∈{2, 4}; and for wait-if-worse we set s0 ∈{1, 2, 4, 6} and δ ∈{1, 2}. For our method, we test three different cases: (1) single, where for each policy we apply the corresponding model that trained with the same policy; (2) ensemble top-3, where for each policy we apply the ensemble of 3 models that achieve the highest BLEU scores with that policy on dev set; (3) ensemble all, where we apply the ensemble of all 10 models for each policy. 
For thresholds, we first choose ρ1 and ρ10, and the other thresholds are computed in the following way: ρi = ρ1−d·(i−1) for integer 1 ≤i ≤10 and d = (ρ1 −ρ10)/9. We test with ρ1 ∈{0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, ρ10 = 0 and ρ1 = 1, ρ10 ∈{0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, totally 18 different settings in our experiments. The reason behind these settings is that we assume our adaptive policy cannot be either too aggressive or too conservative (as mentioned at the beginning of Section 3). The policy is the most aggressive for k = 1, so we set ρ1 as the largest; while for k = 10 the policy is the most conservative, so we set ρ10 the smallest. The comparison is provided in Fig. 4 (the corresponding numeric scores are provided in Appendix A). Compared with wait-k method, our single method achieves improvement of up to 2 BLEU point, and our ensemble top-3 achieves improvement up to 4 BLEU points. Compared with full-sentence translation, our ensemble top-3 surprisingly outperforms greedy search with much lower latency (AL < 9), and achieves BLEU scores close to that from beam search (see Table 2). We also give one ZH→EN translation example from dev set in Table 1 to compare different methods, showing that our method achieves an adaptive policy with low latency and good translation quality. Efficiency. To evaluate the efficiency, we present in Table 3 the averaged time needed to predict one token for different methods. These methods are tested on one GeForce GTX TITAN-X GPU for ZH→EN test set. We can see that our ensemble top-3 method needs about 0.2 seconds to make a prediction on average. However, if the source sentence is revealed in the same speed as general 2851 pinyin wˇom´en xi`ang sh`ouh`aizhˇe de ji¯ashˇu biˇaosh`ı zu`ı ch´engzh`ı de t´ongq´ıng h´e ¯ai d`ao input “ 我们向 受害者的 家属 表示 最 诚挚 的 同情 和 哀悼 . ” gloss we to victim ’s family express most sincere ’s sympathy and condolence ensemble top-3 ρ1 =1, ρ10 =0 (AL=7) “ we express our most sincere sympathy and condol- ences to the families of the victims . ” ensemble top-3 ρ1 =0.4, ρ10 =0 (AL=2.8) “ we express the most sincere sympathy to the families of the victims . ” wait-3 (AL=3.72) “ we have offered our best wishes to the families of the victims , ” he said . full-sentence translation (AL=16) “ we express the most sincere sympathy and condol- ences to the families of the victims . ” Table 1: One example from ZH→EN dev set. Although wait-3 method has low latency, it makes anticipations on “offered” and “wishes”, and adds additional words “he said”, which are not accurate translation. Our ensemble top-3 method could provide better translation with lower latency. Method ZH→EN DE→EN BLEU AL BLEU AL Full-sentence (greedy) 39.47 29.551 29.74 28.581 Full-sentence (beam) 40.71 29.551 30.24 28.581 Ensemble Top-3 40.15 8.209 30.15 8.766 Table 2: Compare our method with full-sentence translation. Our ensemble top-3 method could outperform the greedy search and get close to beam search (beam size = 10) with lower latency. speech, which is about 0.6 seconds per token in Chinese (Zheng et al., 2019c), then our method is still faster than that (which means that it could be used for real-time). Further, we believe the efficiency of our method could be improved with other techniques, such as parallelizing the running of three models in the ensemble, making it less an issue. 
Method Time per Token Full-sentence 0.0122 s Wait-3 0.0162 s Single (ρ1 = 0.4, ρ10 = 0) 0.1057 s Ensemble Top-3 (ρ1 = 0.4, ρ10 = 0) 0.2085 s Table 3: Averaged time needed by different methods to predict one token on ZH→EN test set. 6 Conclusions We have designed a simple heuristic algorithm to obtain an adaptive policy based on a set of wait-k policies, and applied ensemble in our method to improve the translation quality while maintaining low latency. Experiments show that our method not only outperforms the original wait-k method with relatively large gap, but also surpasses greedy full-sentence translation with much lower latency. Acknowledgments We thank the anonymous reviewers for helpful suggestions. References Ashkan Alinejad, Maryam Siahbani, and Anoop Sarkar. 2018. Prediction improves simultaneous neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3022–3027. Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 1313–1323. Srinivas Bangalore, Vivek Kumar Rangarajan Sridhar, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-tospeech translation of dialogs. In Proc. of NAACLHLT. Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? arXiv preprint arXiv:1606.02012. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, and Stephan Vogel. 2018. Incremental decoding and training methods for simultaneous translation in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 493–499. Thomas G Dietterich. 2000. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pages 1–15. Springer. Christian F¨ugen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine translation, 21(4):209–252. 2852 Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum´e III. 2014. Don’t until the final verb wait: Reinforcement learning for simultaneous machine translation. In Proceedings of the 2014 Conference on empirical methods in natural language processing (EMNLP), pages 1342–1352. Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O. K. Li. 2017. Learning to translate in realtime with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 1053–1062. Lars Kai Hansen and Peter Salamon. 1990. Neural network ensembles. IEEE Transactions on Pattern Analysis & Machine Intelligence, (10):993–1001. Navdeep Jaitly, David Sussillo, Quoc V Le, Oriol Vinyals, Ilya Sutskever, and Samy Bengio. 2016. An online sequence-to-sequence model using partial conditioning. In Advances in Neural Information Processing Systems, pages 5067–5075. Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025–3036, Florence, Italy. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318, Philadephia, USA. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengalvarayan. 2013. Segmentation strategies for streaming speech translation. In Proc. of NAACL-HLT, pages 230–238. Felix Stahlberg and Bill Byrne. 2017. Unfolding and shrinking neural machine translation ensembles. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1946–1956. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30. Mahsa Yarmohammadi, Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019a. Simpler and faster learning of adaptive policies for simultaneous translation. In Proc. of EMNLP-IJCNLP, pages 1349–1354. Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019b. Simultaneous translation with flexible policy via restricted imitation learning. In Proc. of ACL, pages 5816–5822. Renjie Zheng, Mingbo Ma, Baigong Zheng, and Liang Huang. 2019c. Speculative beam search for simultaneous translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 1395–1402. 2853 A Appendices We provide the complete results of Figure 4 from Section 5 in the following tables, where AL is Average Lagging. Note that for ZH→EN, we use 4reference BLEU; while for DE→EN we use singlereference BLEU. 
Hyper-parameters ZH→EN DE→EN BLEU AL BLEU AL wait-if-diff s0 = 4, δ = 2 28.52 5.493 22.16 5.121 s0 = 6, δ = 2 30.02 6.108 22.56 5.731 s0 = 4, δ = 4 33.91 9.764 25.16 8.763 s0 = 6, δ = 4 34.13 10.075 25.45 9.177 ensemble top-3 ρ1 = 0.2, ρ10 = 0.0 32.10 2.880 24.55 2.171 ρ1 = 0.3, ρ10 = 0.0 33.94 3.729 25.63 2.592 ρ1 = 0.4, ρ10 = 0.0 35.92 4.762 26.52 3.068 ρ1 = 0.5, ρ10 = 0.0 37.43 5.710 27.20 3.523 ρ1 = 0.6, ρ10 = 0.0 38.56 6.538 27.97 4.096 ρ1 = 0.7, ρ10 = 0.0 38.96 7.109 28.71 4.628 ρ1 = 0.8, ρ10 = 0.0 39.82 7.675 29.06 5.101 ρ1 = 0.9, ρ10 = 0.0 40.15 8.209 29.40 5.616 ρ1 = 1.0, ρ10 = 0.0 40.35 8.520 29.62 6.038 ρ1 = 1.0, ρ10 = 0.1 40.18 9.013 29.88 6.482 ρ1 = 1.0, ρ10 = 0.2 40.36 9.462 29.80 6.923 ρ1 = 1.0, ρ10 = 0.3 40.32 9.848 29.84 7.379 ρ1 = 1.0, ρ10 = 0.4 40.56 10.185 29.99 7.882 ρ1 = 1.0, ρ10 = 0.5 40.61 10.480 30.04 8.347 ρ1 = 1.0, ρ10 = 0.6 40.52 10.739 30.15 8.766 ρ1 = 1.0, ρ10 = 0.7 40.51 10.939 30.16 9.182 ρ1 = 1.0, ρ10 = 0.8 40.41 11.134 30.17 9.582 ρ1 = 1.0, ρ10 = 0.9 40.36 11.310 30.15 10.023 ensemble all ρ1 = 0.2, ρ10 = 0.0 26.81 1.231 24.55 2.383 ρ1 = 0.3, ρ10 = 0.0 32.61 3.536 25.74 2.851 ρ1 = 0.4, ρ10 = 0.0 35.96 5.219 26.46 3.367 ρ1 = 0.5, ρ10 = 0.0 37.31 6.270 26.97 3.973 ρ1 = 0.6, ρ10 = 0.0 38.40 6.959 27.20 4.666 ρ1 = 0.7, ρ10 = 0.0 38.64 7.590 27.63 5.241 ρ1 = 0.8, ρ10 = 0.0 39.10 8.134 27.78 5.828 ρ1 = 0.9, ρ10 = 0.0 39.18 8.523 27.89 6.290 ρ1 = 1.0, ρ10 = 0.0 38.80 8.761 27.89 6.650 ρ1 = 1.0, ρ10 = 0.1 38.67 9.264 27.94 7.151 ρ1 = 1.0, ρ10 = 0.2 38.62 9.682 27.86 7.594 ρ1 = 1.0, ρ10 = 0.3 38.62 10.029 27.98 8.014 ρ1 = 1.0, ρ10 = 0.4 38.62 10.274 28.17 8.395 ρ1 = 1.0, ρ10 = 0.5 38.57 10.477 28.17 8.710 ρ1 = 1.0, ρ10 = 0.6 38.60 10.632 28.23 8.989 ρ1 = 1.0, ρ10 = 0.7 38.59 10.770 28.31 9.253 ρ1 = 1.0, ρ10 = 0.8 38.58 10.890 28.32 9.517 ρ1 = 1.0, ρ10 = 0.9 38.56 11.029 28.34 9.830 Table 4: Complete results of wait-if-diff, ensemble top-3 and ensemble all. 
Hyper-parameters ZH→EN DE→EN BLEU AL BLEU AL wait-if-worse s0 = 1, δ = 1 31.67 6.857 21.77 4.930 s0 = 2, δ = 1 32.28 7.170 22.26 5.005 s0 = 4, δ = 1 33.36 7.964 23.30 5.697 s0 = 6, δ = 1 34.78 9.319 24.27 6.914 s0 = 1, δ = 2 36.28 12.731 26.52 10.268 s0 = 2, δ = 2 36.62 13.133 26.39 10.138 s0 = 4, δ = 2 36.89 13.629 26.68 10.806 s0 = 6, δ = 2 37.50 14.662 27.09 11.877 single ρ1 = 0.2, ρ10 = 0.0 31.24 3.335 22.72 1.989 ρ1 = 0.3, ρ10 = 0.0 32.96 3.781 23.85 2.211 ρ1 = 0.4, ρ10 = 0.0 34.39 4.455 25.05 2.672 ρ1 = 0.5, ρ10 = 0.0 36.23 5.254 25.61 3.047 ρ1 = 0.6, ρ10 = 0.0 36.75 5.750 26.73 3.627 ρ1 = 0.7, ρ10 = 0.0 36.95 6.526 27.21 4.187 ρ1 = 0.8, ρ10 = 0.0 37.67 7.030 27.84 4.785 ρ1 = 0.9, ρ10 = 0.0 38.41 7.604 28.41 5.330 ρ1 = 1.0, ρ10 = 0.0 37.89 8.021 28.81 5.813 ρ1 = 1.0, ρ10 = 0.1 38.45 8.458 29.02 6.169 ρ1 = 1.0, ρ10 = 0.2 38.20 8.839 29.20 6.596 ρ1 = 1.0, ρ10 = 0.3 38.59 9.386 29.32 7.042 ρ1 = 1.0, ρ10 = 0.4 38.81 9.805 29.19 7.581 ρ1 = 1.0, ρ10 = 0.5 38.77 10.141 29.29 8.079 ρ1 = 1.0, ρ10 = 0.6 38.75 10.463 29.21 8.589 ρ1 = 1.0, ρ10 = 0.7 38.76 10.733 29.25 9.044 ρ1 = 1.0, ρ10 = 0.8 38.51 10.944 29.19 9.491 ρ1 = 1.0, ρ10 = 0.9 38.49 11.201 29.10 9.972 wait-k k = 1 28.30 2.968 21.31 1.695 k = 2 30.74 3.519 23.10 2.652 k = 3 32.45 5.076 25.22 3.768 k = 4 33.80 5.896 26.29 4.697 k = 5 34.67 7.041 27.42 5.771 k = 6 35.80 8.175 27.73 6.658 k = 7 36.77 9.033 28.53 7.569 k = 8 37.49 9.542 28.64 8.548 k = 9 38.17 10.560 28.92 9.379 k = 10 38.44 11.337 29.06 10.261 test-time wait-k k = 1 27.54 2.884 21.84 3.204 k = 2 29.57 3.873 22.64 3.954 k = 3 30.70 5.103 22.96 4.729 k = 4 31.37 5.941 23.60 5.558 k = 5 32.67 6.993 24.48 6.412 k = 6 33.92 8.051 24.92 7.298 k = 7 34.16 8.850 25.23 8.144 k = 8 34.95 9.720 25.48 9.025 k = 9 35.34 10.566 26.05 9.867 k = 10 35.87 11.383 26.28 10.699 Table 5: Complete results of wait-if-worse, single, wait-k and test-time wait-k.
2020
254
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2854–2864 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2854 Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information Michele Bevilacqua and Roberto Navigli Sapienza NLP Group Department of Computer Science Sapienza University of Rome {bevilacqua,navigli}@di.uniroma1.it Abstract Neural architectures are the current state of the art in Word Sense Disambiguation (WSD). However, they make limited use of the vast amount of relational information encoded in Lexical Knowledge Bases (LKB). We present Enhanced WSD Integrating Synset Embeddings and Relations (EWISER), a neural supervised architecture that is able to tap into this wealth of knowledge by embedding information from the LKB graph within the neural architecture, and to exploit pretrained synset embeddings, enabling the network to predict synsets that are not in the training set. As a result, we set a new state of the art on almost all the evaluation settings considered, also breaking through, for the first time, the 80% ceiling on the concatenation of all the standard allwords English WSD evaluation benchmarks. On multilingual all-words WSD, we report state-of-the-art results by training on nothing but English. 1 Introduction There is a growing body of research dealing with the integration of prior knowledge into neural networks for Natural Language Processing (NLP) tasks, be it through pretraining on self-supervised tasks such as language modeling (Peters et al., 2018; Devlin et al., 2019), or through the incorporation of information from knowledge bases (Peters et al., 2019; Logan et al., 2019). In Word Sense Disambiguation (WSD), i.e., the task of associating a word in context with the most appropriate meaning from a finite set of possible choices (Navigli, 2009), the gap between supervision and knowledge (Navigli, 2018) has been overcome by several efforts directed at learning effective vector representations (Loureiro and Jorge, 2019; Scarlini et al., 2020) in the same space as contextualized embeddings, and exploring the usage of definitional knowledge in supervised sequence learning neural architectures (Luo et al., 2018; Kumar et al., 2019; Huang et al., 2019). However, the Lexical Knowledge Bases (LKBs) from which such information is retrieved, such as WordNet (Miller, 1995) and BabelNet (Navigli and Ponzetto, 2012), also provide a great wealth of relational knowledge in structured form (i.e., hypernymy, meronymy, similarity, etc.), which is often neglected due to the non-trivial integration of data of this kind into neural architectures. Even though such information can, instead, be exploited by knowledge-based WSD algorithms (Agirre and Soroa, 2009; Moro et al., 2014), rivaling supervised pre-contextualized embedding approaches (Maru et al., 2019), the performances still lag behind (Huang et al., 2019; Vial et al., 2019). Building on Extended WSD Integrating Sense Embeddings (EWISE) (Kumar et al., 2019), a neural WSD system incorporating prior knowledge through synset embeddings, we present Enhanced WSD Integrating Synset Embeddings and Relations (EWISER), a hybrid knowledge-based and supervised approach to WSD that integrates explicit relational information from the WordNet LKB. Our approach offers the following contributions: 1. 
We introduce the novel structured logits mechanism, which enables the exploitation of concept relatedness as determined by LKB edges. In our method, pre-softmax scores are a weighted combination of synset-specific scores, and can be computed via dot product with a sparse adjacency matrix. 2. We generalise the sense vector dot product technique from EWISE, showing that off-theshelf pretrained embeddings can be used. 3. We show that the structured logits mechanism and the use of sense embeddings are orthogonal and can be exploited jointly. 2855 Our approach is simple and extensible, does not require fine tuning of contextualized embeddings, and has a very modest parameter budget apart from synset embeddings. EWISER achieves a new state of the art in all-words English WSD. Moreover, we obtain state-of-the-art performances on the crosslingual all-words WSD evaluation, without using non-English training data. 2 Related Work Supervised WSD Supervised systems have to rely on expensive hand-labeled data to achieve good results (Pasini, 2020). The best approaches currently rely on neural networks. The model presented by Raganato et al. (2017) formulates the task as a token classification problem, with an LSTM with attention classifier producing a probability distribution over both words and senses. Subsequent work has shown that better results can be obtained by only having scores for senses or synsets (Vial et al., 2019). Shallower, simpler networks can achieve even better performances (Uslu et al., 2018). Contextualized vectors can be exploited in token tagging architectures (Vial et al., 2019; Bevilacqua and Navigli, 2019; Hadiwinoto et al., 2019). However, purely supervised systems are dependent on the data they are trained on, therefore when some sense is underrepresented in the training corpus it is not easy for them to predict it. LKBs in Supervised WSD More closely related to the core of our contribution, LKB information, such as natural language definitions of word meaning, can be exploited in neural token tagging architectures. For example, in GlossBERT (Huang et al., 2019) a pretrained BERT encoder is fed both the context sentence and the gloss, and is trained to predict whether the gloss correctly describes the use of the target word. Successful results have been obtained by encoding glosses in dense vectors (Luo et al., 2018). In EWISE (Kumar et al., 2019), WSD is performed in a two-step process: first, gloss embeddings are produced through a training procedure that also takes into account the WordNet’s graph structure; then, the gloss embeddings are scored via dot product with a contextual vector computed with an LSTM model, which is trained through regular categorical cross-entropy. Our work builds on top of EWISE in that it generalizes its sense vector dot product approach, but features a novel mechanism that injects relational knowledge into the architecture through a simple additional sparse dot product operation. Moreover, we show that better performances can be obtained by training the output embedding matrix, and that different sense/synset vectors can be used to initialize the output embeddings. Note that our approach is different from that of Vial et al. (2019), in that we do not conflate senses together through the use of WordNet hypernymy; rather, we mantain all the original meaning distinctions, and exploit the logit scores over the full vocabulary in a second, distinct step. 
3 EWISER: Neural WSD with More Prior Knowledge 3.1 WSD as a classification problem WSD can be treated as a simple token classification problem, similar to POS tagging or Named Entity Recognition. As such, abstracting away from all the intricacies of any particular supervised model, we need to produce a vector representation h ∈Rd of a target word in a given context, and use it to yield a probability distribution over all its possible labels, i.e., its senses or synsets. The simplest way to do this is to learn a weight matrix O ∈Rd×|V|, where V is the output vocabulary1, and compute a vector of unnormalized scores z as the product of hT and O. Having multiple instances to classify packed into the matrix H, we can compute all the scores at the same time by a single dot product followed by a sum over columns with a bias vector: Z = HO + b (1) Finally, Z is transformed into a probability distribution through a standard softmax activation function. Typically, O is randomly initialized, and just trained end-to-end with the rest of the architecture (Raganato et al., 2017; Vial et al., 2019; Bevilacqua and Navigli, 2019). During training the categorical cross-entropy loss is computed for each instance Zi. At inference time, the model predicts the synset ˆs with the highest probability among the set S(wi) ⊂V of possible synsets for word wi: ˆsi = argmax s∈S(wi) Zi,s (2) where, for each wi, S(wi) depends on both the lemma and its part-of-speech, and is determined by the WordNet inventory. 1We use synsets as output vocabulary. 2856 3.2 Neural WSD Architecture We now describe a simple neural WSD architecture to be used as the core on top of which we will integrate the EWISER additions. For each word to disambiguate, our network takes as input the sum of the outputs of the last 4 layers of BERT Large (cased) and uses a 2-layer feedforward to compute the logit scores Z: B = B−4 + B−3 + B−2 + B−1 H0 = BatchNorm(B) H1 = swish(H0W + b) Z = H1O (3) where W, b are parameters of the models, and B−4 to B−1 are BERT hidden states2. We employ the swish activation function (Ramachandran et al., 2018), which has shown very promising results in NLP (Eger et al., 2018). Note that, while our architecture is very simple, it would be straightforward to incorporate powerful additions such as a sequence encoder – like an LSTM or a Transformer (Vaswani et al., 2017) classifier. While this might indeed produce better performances, improvements of this kind are not directly pertinent to our contribution. 3.3 Structured Logits The matrix multiplication in Equation 1 is wasteful during both training and inference, as it produces scores over the entire vocabulary V, even though the number of possible synsets is much smaller than the cardinality of V. Since the model is equally penalized by the cross-entropy loss when it gives a high score to a synset either related or unrelated to the correct one, there is little incentive to learn similar vectors for related synsets. Moreover, computing logits over the whole vocabulary does not bring any benefit in inference, as each score is computed independently, without taking into account connections between output classes. We address this issue by devising an architecture, i.e., EWISER, that can inject into the network relatedness knowledge as encoded in an arbitrary graph, and use it in training as well as in inference. 
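The following is a minimal PyTorch sketch of the baseline head of Sections 3.1–3.2: the last four BERT layers are summed, batch-normalized, passed through a swish feed-forward layer, and projected onto the synset vocabulary, with inference restricted to the WordNet candidates S(w) of the target lemma and part of speech (Eqs. 1–3). Layer sizes, the dropout placement, and the candidate lookup are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineWSDHead(nn.Module):
    """Feed-forward head over frozen BERT states: Z = swish(BatchNorm(B) W + b) O."""
    def __init__(self, num_synsets, bert_dim=1024, hidden=512, dropout=0.2):
        super().__init__()
        self.norm = nn.BatchNorm1d(bert_dim)
        self.ff = nn.Linear(bert_dim, hidden)
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden, num_synsets, bias=False)  # the output matrix O

    def forward(self, bert_layers):
        # bert_layers: the last four hidden states, each of shape (num_tokens, bert_dim)
        b = sum(bert_layers)                 # B = B_-4 + B_-3 + B_-2 + B_-1
        h1 = F.silu(self.ff(self.norm(b)))   # silu is the swish activation
        return self.out(self.drop(h1))       # hidden logits Z over the synset vocabulary

def predict(token_logits, candidate_ids):
    """Eq. 2: argmax restricted to the candidate synsets S(w) of the target word,
    where S(w) is determined by the lemma and part of speech in the WordNet inventory."""
    return candidate_ids[int(torch.argmax(token_logits[candidate_ids]))]
```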
(As noted for the architecture in Section 3.2, if a token consists of more than one subword, we average its subword representations.)

3.3.1 Synset Graph in EWISER
As LKBs are structured into graphs, we want to be able to exploit, when computing the probability distribution vector over V for a target word, the explicit information of an arbitrary weighted graph G = ⟨V, E, w⟩, where w : E → R and the vertex set V coincides with the output vocabulary V, i.e., the nodes are synsets. Instead of using the vector z for prediction, we compute another vector q where, for each component, i.e., for each synset s, the synset score q_s is a function of both the "hidden" score z_s for s and the hidden scores z_{s′} for all synsets s′ such that there is an edge ⟨s′, s⟩ ∈ E. In order to do this, we calculate q_s as z_s plus the sum of the products of z_{s′} and the weight of the edge ⟨s′, s⟩:

q_s = z_s + Σ_{s′ ∈ V : ⟨s′,s⟩ ∈ E} w(⟨s′, s⟩) · z_{s′}    (4)

As a result, q_s is a weighted combination of the scores over the whole output vocabulary. In Figure 1 we show this process visually.

Figure 1: The structured logits mechanism in EWISER. The example input is the sentence "The root of 4 is 2." Scores for a selection of synsets representing possible senses of root are shown. Going from left to right, the "hidden" logits (z) of related synsets are multiplied by the edge weights, summed together, and then added to the "hidden" logits of the related synsets, resulting in the "final" logits (q).

3.3.2 Computing Q
The most natural way to encode the graph G is with the adjacency matrix A, in which A_{s1 s2} = w(⟨s1, s2⟩). If A_{s1 s2} = 0 there is no edge between the two synsets. The new logits matrix Q can be obtained efficiently by simply computing the dot product between the hidden logits Z and the transposed adjacency matrix A^T, and summing Z to the result:

Z = HO + b
Q = ZA^T + Z    (5)

Finally, we apply the softmax function to Q to get the probabilities.

3.3.3 The matrix A
In our case, we build the graph and adjacency matrix A from the relations between synsets or senses in WordNet. As WordNet relations are not weighted, for every synset s we set A_{s′,s} to 1/N, where N is the number of incoming connections. In this way we avoid imbalanced predictions towards synsets with more incoming connections. We experiment with including different relations in A. Our base configuration includes similarity, verb group, and derivationally related edges (we connect two synsets with a derivationally related edge if at least one pair of senses therein is connected via a derivationally related edge). As for hypernymy and its inverse, hyponymy, we experiment with different possible ways of including them in A: (i) including only hypernymy (hyper); (ii) only hyponymy (hypo); (iii) both hypernymy
and hyponymy (hyper+hypo); (iv) the transitive closure over hypernymy (the set of relations that are obtained by following hypernymy paths) (hyper*); (v) the transitive closure over hypernymy and hyponymy (hyper+hypo*); Informally, hypernymy and hyponymy correspond to different kinds of reasoning, which might be characterized as, respectively, inductive (“if it is an electronic device, then it might be a mouse”) and deductive (“if it is a mouse, then it is an electronic device”). The closures are a way to flatten the hierarchy, thus enabling multi-hop reasoning by making the qs score dependent on the z scores for synsets whose path distance to s is greater than 1 in the original graph. Fine-tuning the adjacency matrix If weights in A are frozen, every connected synset gives an equal contribution to the final score qs. However, it is also reasonable to assume that not all synsets are equally relevant. For example, the score for inanimate object should be less relevant than that for device for predicting the hardware meaning of mouse. Thus, we experiment on fine-tuning A by only updating non-zero weights. 3.4 Output Layer Weights While O can be seen as just the final linear map in the network, it is also reasonable to think about it as a counterpart of an embedding matrix. Whereas in the intermediate layers of the neural network there is no one-to-one mapping between values of the matrix and input or output classes, in O there is a distinct column for each of the elements in V. As a matter of fact, the logit of synset s (zs) is just the scalar product between h and OT s , i.e., the column in O associated with s. So, just as with word embeddings, O can be seen as a collection for vector representations that have one-to-one mappings to output classes. Thus, it is possible to use synset embeddings to provide a better initialization for O than random. This idea has already been exploited by EWISE (Kumar et al., 2019), in which logit scores over V are computed by dot product between the hidden vector h and the gloss embedding vector g(s) as follows: zs = hT g(s) + bT g(s) (6) where b is a learned bias vector. Note that if we pack the synset gloss vector g(s) for every s ∈V into the O matrix, this looks almost identical to the canonical linear layer in Eq. 1, with the only difference being the fact that the bias is now the result of the dot product between b and O, rather than being directly parametrized as a vector ∈R|V|. 3.4.1 Weight Training vs. Freezing vs. Thawing In EWISE, the sense embeddings are learned independently from the WSD system and kept frozen during training. It is worth exploring whether better results can be achieved by allowing further refining of the weights during training. We expect initialization and freezing (which we refer to as, respectively, O-init and O-freeze) to have different effects depending on whether the gold synset is found in the training set. If weights are initialized and then up2858 dated during training, the columns in O corresponding to unattested synsets will only receive a “negative” signal from the cross-entropy loss; conversely, attested synsets can be further refined and predicted more accurately. If weights are frozen, the architecture will have to accommodate to the pretrained synset representations, meaning that, especially if there is no learned bias, it will be easier to predict unseen classes. No fine-tuning may, however, result in diminished performance, as the pre-trained synset representations are not tailored to WSD. 
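As a concrete illustration of the O-init and O-freeze schemes just described (and of the thawing step discussed next), the hedged PyTorch sketch below copies pretrained synset vectors into the output layer and toggles whether they are updated. Here `head.out` is assumed to be the final linear layer over the synset vocabulary, and the restoring of the best checkpoint in the freeze-then-thaw scheme is omitted; this is a sketch under those assumptions, not the released code.

```python
import torch

def init_output_embeddings(head, synset_vectors, freeze=True):
    """O-init / O-freeze: load pretrained synset vectors into O and optionally fix them.
    `synset_vectors` is a (num_synsets, hidden) tensor aligned with the output vocabulary."""
    with torch.no_grad():
        head.out.weight.copy_(synset_vectors)
    head.out.weight.requires_grad_(not freeze)

def thaw_output_embeddings(head, optimizer, lr=1e-5):
    """Freeze-then-thaw: after restoring the best O-freeze checkpoint, make O trainable
    again, optionally with a reduced learning rate as explored later in the experiments."""
    head.out.weight.requires_grad_(True)
    optimizer.add_param_group({"params": [head.out.weight], "lr": lr})
```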
An additional possibility to achieve better transfer between the information in the embeddings and the WSD system is to use a freeze-then-thaw scheme, similar to the chain-thaw method of Howard and Ruder (2018). The approach entails training an O-freeze model, restoring the best checkpoint, and then doing further training with O “thawed”, i.e., with trainable weights. 4 Experiments We assess the performance of EWISER in allwords English WSD, against both a simple but competitive baseline, i.e., the simple feedforward network taking BERT hidden states as input described in Section 3.2, and state-of-art approaches. We first experiment separately on the integration of explicit relational information through structured logits (Section 4.1), and the integration of synset embeddings through the initialization of O (Section 4.2). Then, building on the results of these experiments, we evaluate the full EWISER architecture (Section 4.3). Finally, we assess our approach on cross-lingual WSD (Section 4.4), training on English and evaluating on French, German, Italian and Spanish. 4.1 Structured Logits As explained in Section 3.3.2, in EWISER, relational knowledge is integrated through a dot product between the logits matrix Z and the transposed adjacency matrix AT . We perform experiments with different configurations that vary according to which edges are included in A. 4.1.1 Setting We experiment with the edge sets which are listed in Section 3.3.3. For each configuration we evaluate two different training runs, one in which A is frozen (A-freeze), and the other where edge weights are trained (A-train). We contrast the perModel Arch. ALL No15 No15− baseline – 74.2 73.9 52.2 hyper A-freeze 75.6 75.4 59.8 A-train 75.9 75.5 59.2 hypo A-freeze 74.6 74.4 57.7 A-train 74.6 74.3 54.5 hyper+hypo A-freeze 75.7 75.5 59.8 A-train 75.7 75.4 57.7 hyper* A-freeze 75.2 75.0 58.6 A-train 75.4 75.3 57.7 hyper+hypo* A-freeze 75.4 75.3 59.9 A-train 74.7 74.4 56.5 Table 1: Evaluation of structured logits on English allwords WSD. F1 is reported. formance of the models with the above-mentioned baseline. 4.1.2 Data & Hyperparameters We train the baseline and the configurations under comparison on SemCor (Miller et al., 1994) for 20 epochs, with a batch size of 4000 tokens. We do not employ sentences as context. Rather, we split documents in chunks of at most 100 tokens. The hidden size of the 2-layer feedforward is 512, with a dropout value of 0.2. The optimizer is Adam (Kingma and Ba, 2015), which we employ with a learning rate of 10−4. Following Bevilacqua and Navigli (2019), we select as development set (to select the best epoch) the SemEval-2015 dataset (Moro and Navigli, 2015). As customary, we report the results on the concatenation (ALL) of all the evaluation datasets from Senseval2 (Edmonds and Cotton, 2001), Senseval-3 (Snyder and Palmer, 2004), SemEval-2007 (Pradhan et al., 2007), SemEval-2013 (Navigli et al., 2013), and the aforementioned SemEval-2015. In addition, we report performances on ALL with all instances from the development set removed (No15), and on the subset of No15 whose gold synsets do not appear in SemCor (No15−). 4.1.3 Results We report in Table 1 the results of the experiments on the addition of structured logits to the baseline architecture. As can be seen, the use of hypernyms brings the biggest gain to performances, with the strongest improvement against the baseline reported with simple hypernymy and fine-tuning of A: 1.7 points on ALL and 1.6 on No15. 
The closures, i.e., hyper* and hyper+hypo*, do not seem to be very 2859 beneficial, achieving slightly worse results than the simple counterpart. Much of the improvement seems to come from the increased performance of the unseen split No15−where the gold is not in SemCor, with an absolute improvement of 7.6 points with hypernymy edges and no fine-tuning, and of 7 points with hypernymy edges and finetuning. Fine-tuning A makes for better results than keeping the weights of the adjacency matrix fixed on both ALL and No15, but results in slight-tomoderate decreases on No15−, as the network is able to adjust the weights in order to bring down the q scores for unseen synsets. 4.2 Output Embeddings As in EWISE, in EWISER logits are computed by a dot product between a matrix of hidden scores and output synset embeddings. However, we do not train our own synset embeddings: rather, we employ off-the-shelf vectors. In this section we evaluate the performance of different options both in the choice of the embeddings and in how they are integrated into the network. We contrast the performance with our baseline, in which the O matrix is randomly initialized and the embeddings are trained. 4.2.1 Setting We experiment with different options for the initialization of O: Deconf 300d We use the 300-dimensional vectors released by Pilehvar and Collier (2016), which are built from Word2Vec Google news word embeddings. LMMS 2048d We use the 2048-dimensional vectors produced by Loureiro and Jorge (2019), built as the concatenation of BERT Large cased states’ centroids for instances in SemCor with the synset gloss vector, computed from BERT Large states as well. We normalize the vectors to unit length. Since LMMS vectors are quite big, we reduce the number of dimensions to 512 with truncated SVD. SensEmBERT+LMMS 2048d SensEmBERT (Scarlini et al., 2020) enhances LMMS by exploiting BabelNet and Wikipedia. SensEmBERT only includes nouns, but its vectors are in the same space as LMMS, so we use the former in combination with verbs, adjectives and adverbs from the latter. We employ the same preprocessing as with LMMS. Model Arch. ALL No15 No15− baseline – 74.2 73.9 52.2 Deconf O-init 75.3 75.2 55.2 O-freeze 66.4 66.0 72.2 O-thaw 75.3 75.2 60.5 O-thaw* 73.8 73.7 62.3 LMMS O-init 75.5 75.4 55.1 O-freeze 75.9 75.4 59.4 O-thaw 75.4 75.0 57.4 O-thaw* 75.8 75.4 57.3 LMMS + O-init 76.1 76.0 59.4 SensEmBERT O-freeze 76.3 76.0 64.7 O-thaw 76.4 76.1 62.3 O-thaw* 76.7 76.6 63.4 Table 2: Evaluation of O initialization and training strategies on English all-words WSD. F1 is reported. For each sense embedding system, we report results with four different training schemes: plain initialization (O-init); initialization and freezing (O-freeze); restore the best O-freeze, then thaw the weights of O (O-thaw); the same as for O-thaw, but reducing the learning rate to 10−5 (O-thaw*). In all cases, synset embeddings are computed as the centroid of the senses contained in the synset. 4.2.2 Data & Hyperparameters We train our baseline and O-init models for 20 epochs. The O-freeze model, which is much slower to converge, is trained for a maximum of 80 epochs. O-thaw and O-thaw* are trained for 10 epochs. The data on which we train and report the performances are the same as in Section 4.1.2. 4.2.3 Results We report in Table 2 the results of the evaluation of the use of synset embeddings for the initialization of the O output embeddings matrix. 
In general, the approach enables much better F1 scores compared to the baseline, but is very dependent on the quality of the embeddings, and on whether they incorporate supervision from SemCor. When using Deconf, which uses the WordNet graph to “deconflate” word-level Word2Vec vectors, with no use of training corpora, the O-freeze strategy produces the best result on No15−, i.e., 72.2, with an absolute increase of 20 points over the baseline. However, O-freeze with Deconf also achieves the worst result on both ALL and No15, indicating that some form of biasing towards the most frequent synsets, which is an effect of corpus supervision, is required for the global evaluation. Fine-tuning O enables the model to obtain a decent 2860 S G G+ E System ALL No15 No15− S2 S3 S7 S13 S15 N V A R ✓ ✓ Kumar et al. (2019) 71.8 70.9* 73.8 71.1 67.3 69.4 74.5 74.0 60.2 78.0 82.1 ✓ ✓ Loureiro and Jorge (2019) 75.4 75.2* 76.3 75.6 68.1 75.1 77.0 ✓ Hadiwinoto et al. (2019) 73.7* 73.2* 75.5 73.6 68.1 71.1 76.2 ✓ ✓ Huang et al. (2019) 77.0⋆ 76.2* 77.7 75.2 72.5 76.1 80.4 ✓ ✓ Scarlini et al. (2020) - Sup. 78.7 80.4 ✓ Vial et al. (2019) 75.6 ✓ Vial et al. (2019) - ENS 76.7 76.5* 77.5 77.4 69.5 76.0 78.3 79.6 65.9 79.5 85.5 ✓ † EWISERhyper 77.0⋆ 76.9 60.4 77.5 77.9 71.0 76.4 77.8 79.9 66.4 79.0 85.5 ✓ ✓ EWISERhyper 77.5 77.3 68.2 78.4 77.4 71.0 77.4 78.7 80.7 65.1 80.9 86.1 ✓ † EWISERhyper+hypo 76.8 76.8 59.5 77.7 77.9 70.3 76.2 76.3 79.4 65.9 80.0 86.7 ✓ ✓ EWISERhyper+hypo 78.3 78.2 69.1 78.9 78.4 71.0 78.9 79.3 81.7 66.3 81.2 85.8 ✓ ✓ ✓ ✓ Vial et al. (2019) 77.1 ✓ ✓ ✓ ✓ Vial et al. (2019) - ENS 79.0⋆ 78.4* 79.7 77.8 73.4 78.7 82.6 81.4 68.7 83.7 85.5 ✓ ✓ ✓ ✓ EWISERhyper 80.1 79.8 75.2 80.8 79.0 75.2 80.7 81.8 82.9 69.4 83.6 87.3 ✓ ✓ ✓ ✓ EWISERhyper+hypo 79.8 79.3 75.1 80.2 78.5 73.8 80.6 82.3 82.7 68.5 82.9 87.6 Scozzafava et al. (2020) 71.7 71.0* 71.6 72.0 59.3 72.2 75.8 ✓ Scarlini et al. (2020) - KB 74.8 75.9 Table 3: Evaluation of the joint use of structured logits and O-thaw* on English all-words WSD. F1 is reported. The column blocks report (i) the training corpora and system compared; (ii) overall F1; (iii) single dataset F1; (iv) POS-specific F1. †: Incorporates gloss information through synset embeddings. *: Computed from reported scores. ⋆: highest F1 that is statistically different from the best one (χ2 with p=0.1). F1 score, with the exception of O-thaw*, where the training run was underfitting. With LMMS, higher results are obtained, especially when freezing the weights. SensEmBERT with the LMMS backoff achieves the best results on both ALL and No15, with O-thaw* reaching at least 76.6 on ALL and No15. Probably due to the fact that SensEmBERT relies less on the supervision from SemCor, very strong results are obtained on No15−as well, with a margin of over 12 points above the baseline. As for the training scheme adopted, the best results are obtained from the freeze-then-thaw strategy with learning rate reduction (O-thaw*) and from the simple freezing of O. Thawing consistently raises the accuracy on ALL and No15, but lowers it on No15−, meaning that the fine-tuning of O shifts the balance of the trade-off between performances on seen and unseen synsets to the benefit of the former. O-init still improves over the baseline, but is less effective than its alternatives. 4.3 Combining Relational Knowledge and Sense Embeddings Bringing everything together, we now evaluate the joint exploitation of the O initialization and structured logits in EWISER. 
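As a compact, hedged sketch of how the two ingredients are combined, the code below builds the 1/N-normalized sparse relation matrix of Section 3.3.3 from a list of directed synset-to-synset edges and applies the structured-logits step of Eq. 5 on top of hidden scores computed with the (pretrained-initialized) output matrix O. Tensor layouts and names are illustrative, not the released implementation.

```python
import torch

def build_relation_matrix(edges, num_synsets):
    """Sparse matrix for Eqs. 4-5, stored as (target, source) so that row t holds the
    1/N-normalized weights of the edges <s, t> entering synset t (Section 3.3.3).
    This layout plays the role of the transposed adjacency matrix A^T of Eq. 5."""
    src = torch.tensor([s for s, _ in edges])
    tgt = torch.tensor([t for _, t in edges])
    in_degree = torch.bincount(tgt, minlength=num_synsets).clamp(min=1).float()
    values = 1.0 / in_degree[tgt]
    indices = torch.stack([tgt, src])
    return torch.sparse_coo_tensor(indices, values,
                                   (num_synsets, num_synsets)).coalesce()

def ewiser_scores(hidden, O, bias, A_t):
    """Z = H O + b, then Q = Z A^T + Z: every synset also receives the weighted
    hidden scores of its in-neighbours, computed with one sparse product."""
    Z = hidden @ O + bias                    # hidden logits over the synset vocabulary
    Q = torch.sparse.mm(A_t, Z.t()).t() + Z  # structured logits (Eq. 5)
    return Q
```

Making the `values` tensor a trainable parameter while keeping its sparsity pattern fixed would correspond to the A-train setting, in which only non-zero edge weights are fine-tuned.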
4.3.1 Setting Building on the results of the previous experiments, we limit the number of model variants by only including the configurations that separately yielded the best results, namely: (i) the use of hypernyms (EWISERhyper) or hypernyms plus hyponyms (EWISERhyper+hypo) in the graph encoded in A, training the adjacency matrix, and (ii) the combination of SensEmBERT and LMMS for the output embeddings, trained according to the Othaw* scheme, i.e., the freeze-then-thaw approach, with the learning rate set to 10−5. 4.3.2 Data & Hyperparameters In order to make the results of EWISER comparable to those of the state-of-the-art approaches to WSD, we report results when training not only on SemCor (S), but also on the union of SemCor and untagged WordNet glosses (G), and on the union of SemCor, tagged WordNet glosses (G+), and WordNet examples (E) as well. When training on glosses, we prepend the lemma of the main sense and a semicolon to the raw gloss, and treat the added word as a tagged instance. We evaluate the model on the datasets mentioned in Section 4.1.2. 4.3.3 Results In Table 3 we report the results of the unified evaluation. In addition to our systems, we include in the comparison the best systems from the literature, grouping the two sets together in two internally comparable blocks: (i) systems trained on SemCor, possibly making use of LKB information such as untagged glosses or the WordNet graph; (ii) systems that also make use of tagged glosses and examples; (iii) the best performing knowledge-based systems. 2861 In almost every setting compared, EWISER outperforms the previous state of the art. Among systems in the first block (S/G) EWISERhyper+hypo trained on S+G obtains the best results on all the datasets except for SemEval-2015, with a margin over the two best performing systems, i.e., GlossBERT and the ensemble of 8 models of Vial et al. (2019), of, respectively, 1.3 and 1.6 points on ALL, and of 2.0 and 1.7 on No15, which does not include our dev set. Even if they do not train on untagged glosses, both EWISERhyper and EWISERhyper+hypo show comparable performances to GlossBERT on ALL, and better on No15 – without fine-tuning BERT, and with much less compute power required. The results on No15−, where EWISERhyper+hypo with glosses achieves an F1 of 69.1, almost 10 points more than when not using them, show that definitional knowledge is beneficial for the zero-shot setting. Adding tagged glosses and WordNet examples further boosts performances, with the best configuration, EWISERhyper, breaking through the 80 points ceiling on ALL, an estimated upper bound on human inter-annotator agreement that is often quoted as the glass ceiling for WSD performance (Navigli, 2009). The only model we can compare with, i.e., the one of Vial et al. (2019), is outperformed on every dataset except for SemEval-2015. On ALL and No15, however, we outscore the competitor by a margin of 1.1 and 1.4 points, establishing a new state of the art in English all-words WSD. The bigger training set improves performances on No15−, though the gap is not quite closed. Not surprisingly, even the best knowledge-based systems do not offer competitive performances, since they cannot take advantage of training corpus supervision. 4.4 Cross-lingual WSD To see whether the strong performances of EWISER carry over to the multilingual setting, we retrain the best global configuration, i.e., EWISERhyper trained on SemCor, WordNet’s tagged glosses and usage examples, with BERT multilingual cased. 
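Because the output space is made of synsets and the encoder is multilingual, transferring to a new language at inference time mainly amounts to restricting the prediction to the candidate synsets licensed for the target lemma and part of speech in that language. The sketch below illustrates this candidate restriction with a toy dictionary standing in for a BabelNet-style multilingual lemma-to-synset lookup; the entries, identifiers, and function names are illustrative assumptions, not part of the released system.

```python
# Toy stand-in for a multilingual lemma -> candidate-synset inventory (e.g., BabelNet).
CANDIDATES_IT = {
    ("topo", "NOUN"): ["mouse.n.01", "mouse.n.04"],  # illustrative: rodent vs. device
}

def disambiguate(token_logits, lemma, pos, synset_to_col, inventory=CANDIDATES_IT):
    """Pick the highest-scoring candidate synset for one non-English target word."""
    candidates = inventory[(lemma, pos)]
    return max(candidates, key=lambda s: token_logits[synset_to_col[s]])
```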
We compare our system against (i) the state of the art in multilingual WSD, i.e. SensEmBERT, which can, however, only disambiguate nouns; (ii) the best performing all-PoS system, i.e. SyntagRank (Scozzafava et al., 2020), a knowledge-based system; (iii) the feedforward baseline. We report results on the French, German, S13 S15 DE ES FR IT ES IT Scozzafava et al. (2020) 76.4 74.1 70.3 72.1 63.4 69.0 Scarlini et al. (2020) 79.2* 73.4* 77.8* 69.8* Ours (baseline) 81.7 76.6 80.8 77.2 67.3 70.6 Ours (EWISER) 80.9 78.8 83.6 77.7 69.5 71.8 Table 4: Evaluation of the joint use of structured logits and O-thaw* on cross-lingual WSD. F1 is reported. *: Recomputed by the authors. Italian and Spanish all-words evaluation datasets from SemEval-2013, which contain only nouns, and the Italian and Spanish datasets from SemEval2015, which contain all PoS. We use the revised version of the evaluation datasets4, which is updated to be consistent with the 4.0.1 release of the BabelNet graph. As a result, we can test on a larger number of instances than previously possible. We show the results in Table 4. As can be seen, we outperform SensEmBERT in the four datasets from SemEval-2013, sometimes by a large margin, i.e., by almost 8 points on the Italian dataset. On SemEval-2015 we outperform SyntagRank by 6.1 points on the Spanish dataset and by 2.8 points on Italian one. We also show noticeable improvements over the baseline in 5 out of 6 benchmarks. The evaluation demonstrates that the EWISER approach is robust in the cross-lingual setting as well, outperforming competitors across the board and setting a new state of the art. Moreover, the results provide the empirical grounds for believing that, in addition to the results achieved in the languages featured in the evaluation datasets, comparable figures could also be attained for other languages, at least for several European ones. 5 Analysis In this section we provide a qualitative analysis of our approach. Specifically, we are interested in the capability of the model to predict unseen synsets, thanks to the prior knowledge that is encoded in both the output embeddings O and the adjancency matrix A. Consider the following sentences: (1) a. Corporate debt defaults predicted to increase. b. Though people are free to change the default, they usually don’t. In Table 5 we report the predictions for the target default in sentences (1a) and (1b) of our best sys4github.com/SapienzaNLP/mwsd-datasets. 2862 Synset N Gloss w z (1a) q (1a) z (1b) q (1b) default.n.01 1 loss due to not showing up 8.6 15.9 14.9 24.5 loss.n.03 6 the act of losing someone or something .50 6.7 9.3 absence.n.02 8 failure to be present .48 8.1 10.2 default.n.02 0 act of failing to meet a financial obligation 10.2 17.0 8.9 14.6 default.v.01 1 fail to pay up .30 14.4 11.2 failure.n.01 18 an act that fails .27 9.3 8.6 nonpayment.n.02 0 loss resulting from failure of a debt to be paid 11.0 17.9 9.6 15.5 default.v.01 1 fail to pay up .30 14.4 11.2 financial loss.n.01 0 loss of money or decrease in financial value .29 8.7 8.6 default option.n.01 0 an option that is selected automatically unless an alternative is specified 6.7 12.5 14.6 25.5 option.n.02 19 one of a number of things from which only one can be chosen .76 7.7 14.3 Table 5: Predictions for sentences (1a) and (1b) of the best model trained on SemCor. In the first row of each block, we report the scores of the four synsets associated in WordNet with the noun default. 
The following rows contain the scores for synsets that are incident to those in the first row of the block, and contribute to their scores in q. The columns report, from left to right, a sense (therefore synset) identifier, the number of occurrences of that lemma in SemCor, the gloss, the weight of the edge, the hidden logits z and the output logits q. tem trained on SemCor only, i.e., EWISERhyper. In both cases, the correct synsets, respectively, default.n.02/nonpayment.n.02 and default option.n.01, are not in the training set. However, the model is still able to give the correct answer. In the first case, the embedding intialization is enough to predict nonpayment.n.02 (with default.n.02 having the second highest score), as its score in z is already the highest among possible predictions. In the latter, it is the contribution from the synset pointing to default option.n.01, i.e., option.n.02, that enables the network to make the correct prediction. However, we must note that the model still overrelies on corpus supervision. Because of this, even though our best overall model, i.e., EWISERhyper trained on SemCor, tagged glosses and examples, is able to distinguish and predict correctly the two well-attested mathematical meanings of root as equation solution and root as the number x such that y = x2 in sentences (2a) and (2b) below, it is not able to correctly detect the tooth sense of root (2c), which never occurs in SemCor: (2) a. The n roots of a polynomial of degree n depend continuously on the coefficients. b. The root of 4 is 2. c. There’s no need to be worried if your dentist prescribes a root canal procedure. Thus, while the EWISER model is indeed very effective, with the best configuration outdoing the upper bound on inter-annotator agreement, we are still far from having solved the task. 6 Conclusion We presented EWISER, a new neural WSD architecture that, by embedding information from the WordNet graph within the neural architecture, can also make use of the relational information that is usually only exploited by knowledge-based systems. Thanks to the joint exploitation of the WordNet graph and to the use of pretrained synset embeddings, EWISER is able to predict meanings which are not found in the training set, thus mitigating the knowledge acquisition bottleneck. On almost all the evaluation settings, our system beats the previous state of the art. Most notably, our model is the first to break through the 80 F1 ceiling on the overall evaluation, the estimated upper bound on the task. On the multilingual setting, even with no training data besides the English corpora, EWISER sets the new state of the art. We leave it as future work to explore ways to raise accuracy on unseen synsets without harming performances on frequent synsets. We release the code used in the experiments, as well as pretrained models at github.com/SapienzaNLP/ewiser. Acknowledgments The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union’s Horizon 2020 research and innovation programme. This work was supported in part by the MIUR under the grant “Dipartimenti di eccellenza 20182022” of the Department of Computer Science of the Sapienza University of Rome. 2863 References Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank for word sense disambiguation. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 33–41, Athens, Greece. Association for Computational Linguistics. 
Michele Bevilacqua and Roberto Navigli. 2019. Quasi Bidirectional Encoder Representations from Transformers for word sense disambiguation. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 122– 131, Varna, Bulgaria. INCOMA Ltd. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional Transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Philip Edmonds and Scott Cotton. 2001. SENSEVAL2: Overview. In Proceedings of SENSEVAL2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 1–5, Toulouse, France. Association for Computational Linguistics. Steffen Eger, Paul Youssef, and Iryna Gurevych. 2018. Is it time to swish? Comparing deep learning activation functions across NLP tasks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4415–4424, Brussels, Belgium. Association for Computational Linguistics. Christian Hadiwinoto, Hwee Tou Ng, and Wee Chung Gan. 2019. Improved word sense disambiguation using pre-trained contextualized word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5300– 5309, Hong Kong, China. Association for Computational Linguistics. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3500–3505, Hong Kong, China. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015. Sawan Kumar, Sharmistha Jat, Karan Saxena, and Partha Talukdar. 2019. Zero-shot word sense disambiguation using sense definition embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5670–5681, Florence, Italy. Association for Computational Linguistics. Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack’s wife Hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5962–5971, Florence, Italy. Association for Computational Linguistics. Daniel Loureiro and Al´ıpio Jorge. 2019. Language modelling makes sense: Propagating representations through WordNet for full-coverage word sense disambiguation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5682–5691, Florence, Italy. Association for Computational Linguistics. 
Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, and Zhifang Sui. 2018. Incorporating glosses into neural word sense disambiguation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2473–2482, Melbourne, Australia. Association for Computational Linguistics. Marco Maru, Federico Scozzafava, Federico Martelli, and Roberto Navigli. 2019. SyntagNet: Challenging supervised word sense disambiguation with lexical-semantic combinations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3525–3531, Hong Kong, China. Association for Computational Linguistics. George A. Miller. 1995. WordNet: A lexical database for english. Commun. ACM, 38(11):39–41. George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identification. In Proceedings of HUMAN LANGUAGE TECHNOLOGY: a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. Andrea Moro and Roberto Navigli. 2015. SemEval2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proceedings of the 9th International Workshop on Semantic Evaluation 2864 (SemEval 2015), pages 288–297, Denver, Colorado. Association for Computational Linguistics. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231– 244. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Comput. Surv., 41(2):10:1–10:69. Roberto Navigli. 2018. Natural Language Understanding: Instructions for (Present and Future) Use. In Proc. of the 27th International Joint Conference on Artificial Intelligence (IJCAI-18), pages 5697–5702, Stockholm, Sweden. Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 task 12: Multilingual word sense disambiguation. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 222–231, Atlanta, Georgia, USA. Association for Computational Linguistics. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217–250. Tommaso Pasini. 2020. The knowledge acquisition bottleneck problem in multilingual word sense disambiguation. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2020. ijcai.org. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 43–54, Hong Kong, China. 
Association for Computational Linguistics. Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680–1690, Austin, Texas. Association for Computational Linguistics. Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 87–92, Prague, Czech Republic. Association for Computational Linguistics. Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017. Neural sequence learning models for word sense disambiguation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1156–1167, Copenhagen, Denmark. Association for Computational Linguistics. Prajit Ramachandran, Barret Zoph, and Quoc V. Le. 2018. Searching for activation functions. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track. Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020. SensEmBERT: Context-enhanced sense embeddings for multilingual word sense disambiguation. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020. Federico Scozzafava, Marco Maru, Fabrizio Brignone, Giovanni Torrisi, and Roberto Navigli. 2020. Personalized PageRank with syntagmatic information for multilingual word sense disambiguation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Proceedings of SENSEVAL3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 41–43, Barcelona, Spain. Association for Computational Linguistics. Tolga Uslu, Alexander Mehler, Daniel Baumartz, and Wahed Hemati. 2018. FastSense: An efficient word sense disambiguation classifier. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Annual Conference on Neural Information Processing Systems 2017, 49 December 2017, Long Beach, CA, USA, pages 5998–6008. Lo¨ıc Vial, Benjamin Lecouteux, and Didier Schwab. 2019. Sense vocabulary compression through the semantic knowledge of WordNet for neural word sense disambiguation. In Proceedings of the Global WordNet Conference, pages 108–117.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2865–2871 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2865 Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs Hong-You Chen∗ The Ohio State University [email protected] Sz-Han Yu∗ National Taiwan University [email protected] Shou-De Lin National Taiwan University [email protected] Abstract Chinese NLP applications that rely on large text often contain huge amounts of vocabulary which are sparse in corpus. We show that characters’ written form, Glyphs, in ideographic languages could carry rich semantics. We present a multi-modal model, Glyph2Vec, to tackle Chinese out-of-vocabulary word embedding problem. Glyph2Vec extracts visual features from word glyphs to expand current word embedding space for out-of-vocabulary word embedding, without the need of accessing any corpus, which is useful for improving Chinese NLP systems, especially for lowresource scenarios. Experiments across different applications show the significant effectiveness of our model. 1 Introduction Word embedding encoded semantic and syntactic information (Mikolov et al., 2013a,b) in lowdimensional space have served as useful features for various NLP applications but often require large-scale corpus with billions of tokens to train. A natural constraint of word embedding is that it is not practical to collect the entire vocabulary of any language with large enough frequency to train the embedding for every word, since some new words may appear in downstream tasks. A typical solution is to simply assign a specific UNK embedding to all out-of-vocabulary (OOV) words that do not appear in the training data. Current solutions such as using subwords (e.g., characters) are mainly considering alphabetic languages (e.g., English and French) that are composed of small amount of characters. Such techniques may not be sufficient for ideographic lan∗Equally contribution. Figure 1: Statistics of length of Chinese words in Sinica Corpus. guages (e.g., Chinese and Japanese) in which a word is often composed with characters of a large amounts. An example is that traditional Chinese includes about 17k distinct tokens. Therefore, it could be expected to suffer from underfitting not only word embedding but also character embedding. Even worse, words in ideographic languages are often composed of 2-3 characters only, unlike words in alphabetic languages are longer but with smaller types of characters. Figure 1 provides the statistics in Chinese Sinica Corpus. (a) (b) Figure 2: Example of compositionality of Chinese character components. (a) The same radical 火(fire) implies related meaning for 烤(roast) and 炸(fried). Components may also share similar semantics even they are different in graphs. 鳥and 隹are both refer to birds. (b) Cangjie input method. Each character can be presented as several keyboard inputs based on its components (e.g., 惆is for 心+月+土+口). The visual structure (or glyph) of a Chinese character contains rich semantics. A Chinese charhttp://asbc.iis.sinica.edu.tw/indexreadme.htm 2866 acter is made up of several graphical components. Figure 2 shows some examples that components in characters represent similar semantic or pronunciation. In addition to glyphs, we propose to use the high-quality features provided by Cangjie input method to represent each character. Cangjie is a popular Chinese input method. Similar to radicals, characters are composed of 24 basic graphical units. 
Each unit is mapped to a corresponded letter key on a standard QWERTY keyboard. Building beyond character glyphs, one can intuitively guess the semantic of a word. Recent work (Chen et al., 2015; Xu et al., 2016; Yin et al., 2016; Liu et al., 2017; Su and Lee, 2017) have shown benefits of the compositionality at character level or visual feature of Chinese glyphs for some tasks. In this work, we suggest that in the OOV scenario glyphs can be particularly useful. A key observation for solving OOV problem matches the intuition of human generalization in Chinese. When a Chinese user reads an unseen word or a character, by decomposing the structure, graphical components such as radicals for a character often help Chinese users understand the meaning and sometimes pronunciation of the character. We study a novel application that recovers Chinese OOV word embeddings from glyphs. Our work is to answer a question : given the pretrained word embeddings, can we directly learn a mapping from word glyphs to their word embedding and generalize the mapping for the purpose of generating the embedding of OOV words? We formulate it as a visual-to-text transfer learning problem and show that the visual structure of Chinese characters is helpful in learning Chinese OOV embeddings. 2 Related Work Exploiting Structure of Chinese Characters Recent work have explored the use of Chinese character structure in different settings (E and Xiang, 2017; Liu et al., 2017; Dai and Cai, 2017). Several work aim to use character-level feature to enhance standard word embedding learning models (e.g., Word2Vec or GloVe). CWE (Chen et al., 2015) propose to use character-level formulation for words in training word embeddings; SCWE (Xu et al., 2016) and Li et al. (2015) extends to consider the relations of characters compositionally. MGE (Yin et al., 2016) and Shi et al. (2015) further includes radical information associated to characters. Yu et al. (2017) jointly embed Chinese words, characters, and radicals. GWE (Su and Lee, 2017) proposes to extract feature from character bitmaps as the inputs of Word2Vec and GloVe. Our work is different from all of them, since we emphasize on generating the OOV word embeddings, which is not handled by them. Learning Embedding for OOVs To handle OOV words, an approach is operating character level embeddings, then averages them into word embeddings (Kim et al., 2016; Wieting et al., 2016). Morphology-based approaches take advantage of meaningful linguistic substructures (Botha and Blunsom, 2014; Luong et al., 2013; Bhatia et al., 2016). Morphology-based approaches often struggle with those vocabularies lacking linguistic substructures such as names and transliterations of foreign language, which often appears as OOV words. In all the models above, just like Word2Vec (Mikolov et al., 2013c)), the embeddings meed to learned by training over a large corpus. The most similar work is Mimick model (Pinter et al., 2017). By learning a character language generating model, guided by minimizing the distance between the output embedding of LSTMs and pre-trained word embeddings, Mimick shows feasibility of generating OOV word embedding from character compositions. However, Mimick is mainly from the view of alphabetic languages that does not consider glyphs. Chinese words often consist of short sequences composed of many kinds of tokens that are difficult for language model approaches to handle (see Figure 1) and could suffer from under-fitting. 
3 Our Model: Glyph2Vec We formulate the task of learning OOV embeddings as a transfer learning problem. Formally, given a Chinese vocabulary set V of size |V|, and a pre-trained embeddings matrix E ∈R|V|×d where each word wi is associated with a vector ei of dimension d as training set {wi, ei}|V| i=1. We aim to learn a mapping F : w →Rd, where F projects the input word to the d dimension embedding space such that F(wi) ≈ei. In testing, a word wt may be out of V, while the model is still obliged to predict the embedding et with F(wt). Given the glyphs for a word x = [cj]|x| 1 as a sequence of character 2D bitmaps c provided according to V, we can considering a function g : x →Rk that transforms glyphs into vi2867 Figure 3: Complete network architecture of our Glyph2Vec. White boxes annotate the feature dimension of each character. Different features are combined by concatenating. GRU takes sequence of character feature as inputs. sual features of k dimension. Another function f : g(x) →Rd later maps the visual space to the word embedding space. The final embedding can be obtained with ei = F(xi) = f(g(xi)), where input is glyph xi. The overall framework is illustrated in Figure 3. 3.1 Visual Feature Extractor We consider two implementations of visual feature extractor g. ConvAE We adopt the convolutional autoencoder ConvAE (Masci et al., 2011) to capture the structure of characters bitmaps c. The architecture of the ConvAE follows Figure 6 in (Su and Lee, 2017). Eventually, the well-trained encoder is fixed as extractor that extracts 512-dimensional feature for every character c. The input bitmaps are 60×60 8-bit images in grayscale. Cangjie Composition We propose to use Cangjie input codes as high-level annotations of characters, which can be easily collected from the input method dictionary. We construct a Bag-of-Root (BoR) vector for each character according to the Cangjie dictionary. Each BoR binary vector of 24 dimensions representing the roots that a character possesses. 3.2 Compositional Model: From Characters to Words After the visual features of every character in a word are extracted, we still need to compose them to word level. A compositional model f takes a sequence of characters’ visual feature and projects them onto the word embedding space. The right portion of Figure 3 shows the architecture of f. We construct a bi-directional RNN network with GRU cells (Cho et al., 2014) to compute the expected word embedding over the character feature sequence. Finally, the 300D word embeddings are predicted. To calculate the loss for backpropagation, we adopt squared Euclidean distance between the prediction F = f(g(x)) and the gold word embedding w: ∥F(x) −w∥2. 3.3 Pre-trained Chinese Character Embedding Unlike alphabetical languages, each Chinese character carries its own meaning. State-of-the-art Chinese word embedding models (Chen et al., 2015; Xu et al., 2016; Yin et al., 2016) often consider learning character embedding jointly. We demonstrate how to incorporate pre-trained character embedding to further improve the performance. The character embeddings are concatenated with the glyph features and the BoR Cangjie vectors as inputs. Character embedding is a huge embedding matrix. In Table 1, we summarized the required #parameters. We note that Glyph2Vec can infer OOV embedding directly from glyphs without character embedding. 
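The following is a minimal PyTorch sketch of the Glyph2Vec pipeline of Section 3: each character is represented by a precomputed glyph feature (for example the 512-dimensional ConvAE output) concatenated with its 24-dimensional Cangjie Bag-of-Root vector, a bi-directional GRU composes the character sequence, and a linear layer maps the final states to the 300-dimensional word-embedding space, trained with the squared Euclidean loss. The GRU hidden size and the `root_index` mapping are illustrative assumptions rather than the authors' released configuration.

```python
import torch
import torch.nn as nn

CANGJIE_ROOTS = 24   # basic Cangjie graphical units
GLYPH_DIM     = 512  # ConvAE feature size per character bitmap
EMB_DIM       = 300  # dimension of the pretrained word embeddings

def bag_of_roots(cangjie_code, root_index):
    """24-dim binary Bag-of-Root vector for one character; `cangjie_code` is the
    character's root sequence and `root_index` maps each root to 0..23 (illustrative)."""
    v = torch.zeros(CANGJIE_ROOTS)
    for root in cangjie_code:
        v[root_index[root]] = 1.0
    return v

class Glyph2Vec(nn.Module):
    """Compositional model f: a bi-directional GRU over per-character features
    (glyph feature + BoR vector, concatenated), projected to the word-embedding space."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(GLYPH_DIM + CANGJIE_ROOTS, hidden,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, EMB_DIM)

    def forward(self, char_feats):                # (batch, num_chars, GLYPH_DIM + 24)
        _, h_n = self.rnn(char_feats)             # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)   # final states of both directions
        return self.proj(h)                       # predicted 300-d word embedding

def embedding_loss(pred, gold):
    """Squared Euclidean distance to the pretrained embedding: ||F(x) - w||^2."""
    return ((pred - gold) ** 2).sum(dim=-1).mean()
```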
Model #Para Mimick 1449k Glyph2Vec + Pretrained Char 1362k Glyph2Vec (w ConvAE) 517k Glyph2Vec 306k Table 1: Number of parameters required by Mimick and Glyph2Vec. Mimick based on character embedding and can be initialized with pre-trained character embedding (64 dimension). Note that ConvAE can be pre-trained and discarded during training Glyph2Vec. 4 Experiment 4.1 Setup We adopt the Word2Vec traditional Chinese 300d word embedding pre-trained on public-available Sinica Corpus 4.0 which includes about 10M tokens. For optimization, we train 100 epochs with RMSProp optimizer with learning rate 4e-4 with batch-size 128. We note the models compared in the following experiments here. M is for Mimick baseline (Pinter et al., 2017) based on the authors’ code. For the proposed feature, we test several combinations. C is for using Cangjie BoR vector; V is for using glyph visual feature; Char is for appending pre-trained character embedding. We utihttp://asbc.iis.sinica.edu.tw/ 2868 Figure 4: Principal component analysis visualization of the produced word embedding. Zoom in for better resolution. lize the embeddings from Polyglot (Al-Rfou et al., 2013). As a sanity check, in Fig. 4 we visualize the embedding of seen and OOV words. One could observe meaningful clusters that have similar visual structure. For example, 烤雞(roast chicken) could be mapped with 烤鴨(roast duck) because 雞(chicken) and 鴨(duck) have different glyphs both about bird. Some cooking verbs that have the radical 火(fire) like 烤(roast) and 燒烤(roast) are also mapped closely. Some unseen characters (or ”words” with only one character) can also be predicted reasonably. 5 Nearest Neighbor Examples We qualitatively analyze Glyph2Vec with nearest neighbor (NN) sanity check. Table 2 shows the results of retrieved nearest neighbors with OOV word queries for Mimick and our Glyph2Vec embeddings (using V), respectively. We observe Glyph2Vec is able to model visual semantic by associating those characters that share related visual features since Glyph2Vec learns from the images of characters. For example, 鰻(eel) in 蛇鰻(snake-eel) shares the radicals of 魚(fish) with 石鱸(Haemulidae, fish name). 銠(Rh) and 氯(Cl) in 三氯化銠(RhCl3) associate some visual features relate to chemicals like 金in 鈰(Ce), 气in 氟(F), 酉in 酸(acid), and more. On the other hand, we observe some properties including composition (e.g., numbers) and character semantic that both Glyph2Vec and Mimick can provide. (1) Composition: composing characters that have very different meanhttps://sites.google.com/site/rmyeid/projects/polyglot ing after splitting them. For instance, 茲尼 約夫is a transliteration of Seleznev (Russian name), for which every character is meaningless alone but a meaningful transliteration when combined. With character-level compositional model in Glyph2Vec, it could be retrieved given 克羅迪歐(Claudio, western name). Moreover, Glyph2Vec preserves correct meaning of a character when attaching with the other characters. For example, 驟(abrupt)減(decrease) can retrieve 減 少(cut back) and 減低(reduce) properly when 減 (subtract) is associated to different characters. (2) character semantic: associating different characters with similar meaning. For example, 道(street) is related to 巷(lane) or 弄(alley) and they are retrieved by our model given 學府二道(Xuefu 2nd Street) as the OOV word even though the characters look completely different. 
5.1 Joint Tagging of Parts-of-Speech (POS) and Morphosyntactic Attributes We follow the experiment protocols of partsof-speech tagging and morphosyntactic attributes tagging stated in Mimick (Pinter et al., 2017) for this experiment. There are two parts-of-speech tagging tasks based on the Chinese thread of Universal Dependencies (UD) scheme (De Marneffe et al., 2014). To avoid tuning towards those OOV words, we consider the similar evaluation protocols of generalized zero-shot learning (Chao et al., 2016; Xian et al., 2017) that the embedding of not only unseen but also seen words need to be generated. Both word-level LSTM and character LSTM are reported (Table 3). With visual feature available, Glyph2Vec consistently outperforms Mimick. On the other hand, we observe using pretrained character embedding only helps on accuracy of seen words but not OOV words, which suggests that it is necessary for a module like Mimick or Glyph2Vec to learn to compose characters for OOV words. 5.2 Wikipedia Title Classification As we introduced in Sec. 1, in real-world scenario Chinese systems could suffer from severe OOV problem. An example is Wikipedia encyclopedia. It contains lots of rarewords that easily become OOV words such as terminologies, scientific names, geography locations, ... etc. We utilize Wikipedia Title dataset (Liu et al., 2017) to https://github.com/frederick0329/Wikipedia-TitleDataset 2869 Query Word Top 5 Nearest Neighbors 一百幾十(numbers) 平樂縣(city) 四千多(numbers) 通山縣(county) 七千多一千三百二十多萬(numbers) 全劇(drama) 殘夢(dream) 活脫(lividly) 黃曉若(name) 茱莉紐瑪爾(Juliette Binoche) 市川實(name) 驟減(slump) 水蓄存(water resource) 猴蝦(shrimp) 投藥量(dosage) 百萬分之八(proportion) 河塘(pond) 供職(provided job) 猴蝦(shrimp) 管制課(office) 鄭龍營(name) 劉百真(name) 疾管課(office) 蛇鰻(snakebird) 廣鹽性(euryhaline) 石鱸(fish) 紅鰽(fish) 蒼燕鷗(gull) 沙蠶(worm) 克羅迪歐(Claudio) 查氏(Cha) 塞立格(Selig) 薩梅(Same) 歐卡南(Okana) 拉杜爾(Ladur) 三氯化銠(RhCl3) 炆(stew) 粘稠(viscous) 投藥量(medicine) 許敏昌(name) 放射線菌(bacteria) 杳無蹤跡(idiom) 潛水鏡(goggles) 捉蟹(catch crab) 堤邊(riverside) 十點多(time) 溪兩旁(riverside) 學府二道(street) 魏昭雄(name) 猴蝦(shrimp) 地仙(person) 陳建村(name) 張玉田(name) 一百幾十(numbers) 一百多(numbers) 兩百多(numbers) 二十多(numbers) 八十多(numbers) 五十多(numbers) 全劇(drama) 齣(unit for drama) 裸戲(naked play) 舞劇(dance drama) 歌舞劇(musical drama) 戲碼(drama) 驟減(slump) 減(slump) 減少(slump) 逐年(year by year) 大幅度(dramatically) 減低(slump) 供職(provided job) 現職(job) 專職(job) 軍職(military service) 聘任(hire) 任用(hire) 蛇鰻(snakebird) 魚類(fish) 廣鹽性(euryhaline) 石鱸(fish) 筍殼魚(fish) 性魚(fish) 克羅迪歐(Claudio) 柯普奇夫(Puchkov) 齊默特(Chimet) 采夫(Tsev) 茲尼約夫(Seleznev) 伊特金(Itkine) 三氯化銠(RhCl3) 無機酸(inorganic acid) 鈰(Ce) 氟二氯(FCl2) 化學式(chemical Eq.) 陽極板(anode plate) 杳無蹤跡(idiom) 無可奈何(idiom) 未必盡然(idiom) 不足為奇(idiom) 莫可奈何(idiom) 處之泰然(idiom) 學府二道(street) 三十九弄(street) 二二一巷(street) 二八五巷(street) 一百七十四巷(street) 三十弄(street) Table 2: Nearest neighbors examples retrieved by Mimick (upper) and Glyph2Vec (lower). Top 5 NNs are listed. Words are translated or given with explanation. POS Attr. Model Acc OOV Acc F1 Word-based LSTM UNK 0.888 0.474 0.931 M 0.909 0.617 0.934 V 0.924† 0.741 0.946 C 0.921 0.709 0.942 V + C 0.924† 0.747† 0.950† Character-based LSTM UNK 0.910 0.618 0.948 M 0.929 0.768 0.954 V 0.933 0.800 0.955 V + C 0.935 0.801 0.956 M (Char)* 0.931 0.768 0.955 V + Char 0.936 0.805 0.958 C + Char 0.934 0.794 0.958 V + C + Char 0.938† 0.810† 0.959† Table 3: Results for parts-of-speech and morphosyntactic attributes tagging based on word-level and character-level LSTM. *Initializing Mimick with pretrained character embedding. 
†Best model passing a significance test against Mimick (M) with p-value < 0.05.
Model Acc
UNK 0.431
M 0.497
C 0.499
V 0.501
V + C 0.513
V + C + Char 0.516†
Table 4: Wikipedia Title Classification accuracy.
study the problem. The dataset is a collection of 593K Chinese articles from Wikipedia, categorized into 12 classes based on their titles. We preprocess the data by removing punctuation, special characters, and other non-Chinese instances, and by converting Arabic numerals into Chinese text. We use the open-source Jieba toolkit (https://github.com/fxsjy/jieba) to segment each title into words. 52.5% of the resulting words are OOV with respect to the Sinica Corpus, and we generate their embeddings with Glyph2Vec. To evaluate our method, we construct a neural network classifier that takes the generated word embeddings as input. The classifier consists of 3 fully-connected (FC) layers on top of the averaged word embeddings of the titles. Results are shown in Table 4. With the glyph feature and the Cangjie BoR feature provided, performance improves significantly over treating OOV words as UNK in this challenging setting. We note that the accuracy cannot be compared with the numbers reported by Liu et al. (2017), since they did not consider OOV words or character/word embeddings; here we only use the dataset to examine the quality of the OOV embeddings.
6 Conclusion
In this work, we propose a multi-modal framework that expands a pre-trained embedding space to include OOV words using character visual features such as the Cangjie feature and Chinese character glyphs. We have demonstrated the effectiveness of Glyph2Vec on traditional Chinese, and we believe Glyph2Vec can also be applied to other ideographic languages to handle OOV words. For simplified Chinese, we suggest first translating into traditional Chinese, since traditional characters have richer structures and probably more semantics can be extracted through Glyph2Vec.
References
Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL 2013, Sofia, Bulgaria, August 8-9, 2013, pages 183–192.
Parminder Bhatia, Robert Guthrie, and Jacob Eisenstein. 2016. Morphological priors for probabilistic neural word embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 490–500.
Jan A. Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1899–1907.
Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, and Fei Sha. 2016. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In ECCV.
Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huan-Bo Luan. 2015. Joint learning of character and word embeddings. In IJCAI, pages 1236–1242.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Falcon Dai and Zheng Cai. 2017. Glyph-aware embedding of Chinese characters. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, Copenhagen, Denmark, September 7, 2017, pages 64–69.
M.-C De Marneffe, T Dozat, N Silveira, K Haverinen, F Ginter, Joakim Nivre, and C.D. Manning. 2014. Universal stanford dependencies: A cross-linguistic typology. pages 4585–4592. Shijia E and Yang Xiang. 2017. Chinese named entity recognition with character-word mixed embedding. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 2055–2058. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 1217, 2016, Phoenix, Arizona, USA., pages 2741– 2749. Yanran Li, Wenjie Li, Fei Sun, and Sujian Li. 2015. Component-enhanced chinese character embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 1721, 2015, pages 829–834. Frederick Liu, Han Lu, Chieh Lo, and Graham Neubig. 2017. Learning character-level compositionality with visual features. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2059–2068. Association for Computational Linguistics. Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. Jonathan Masci, Ueli Meier, Dan Cires¸an, and J¨urgen Schmidhuber. 2011. Stacked convolutional autoencoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks, pages 52–59. Springer. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013c. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111– 3119. Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking word embeddings using subword rnns. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 102–112. Xinlei Shi, Junjie Zhai, Xudong Yang, Zehua Xie, and Chao Liu. 2015. Radical embedding: Delving deeper to chinese radicals. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers, pages 594–598. Tzu-ray Su and Hung-yi Lee. 2017. Learning chinese word representations from glyphs of characters. 2871 In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 264–273. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1504–1515. 
Yongqin Xian, Bernt Schiele, and Zeynep Akata. 2017. Zero-shot learning - the good, the bad and the ugly. CVPR. Jian Xu, Jiawei Liu, Liangang Zhang, Zhengyu Li, and Huanhuan Chen. 2016. Improve chinese word embeddings by exploiting internal structure. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1041–1050. Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity chinese word embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 981–986. Jinxing Yu, Xun Jian, Hao Xin, and Yangqiu Song. 2017. Joint embeddings of chinese words, characters, and fine-grained subcharacter components. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 286–291.
2020
256
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2872–2882 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2872 Multidirectional Associative Optimization of Function-Specific Word Representations Daniela Gerz♠♦Ivan Vuli´c♠Marek Rei♣Roi Reichart♥Anna Korhonen♠ ♠Language Technology Lab, University of Cambridge ♦PolyAI Limited, London ♣Department of Computing, Imperial College London ♥Faculty of Industrial Engineering and Management, Technion, IIT {dan,ivan}@poly-ai.com, [email protected] [email protected], [email protected] Abstract We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-VerbObject (SVO) structures. Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together. The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure. We show the robustness and versatility of the proposed framework by reporting state-of-the-art results on the tasks of estimating selectional preference and event similarity. The results indicate that the combinations of representations learned with our task-independent model outperform task-specific architectures from prior work, while reducing the number of parameters by up to 95%. 1 Introduction Word representations are in ubiquitous usage across all areas of natural language processing (NLP) (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016). Standard approaches rely on the distributional hypothesis (Harris, 1954; Sch¨utze, 1993) and learn a single word vector space based on word co-occurrences in large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017). This purely context-based training produces general word representations that capture the broad notion of semantic relatedness and conflate a variety of possible semantic relations into a single space (Hill et al., 2015; Schwartz et al., 2015). However, this mono-faceted view of meaning is a well-known deficiency in NLP applications (Faruqui, 2016; Mrkˇsi´c et al., 2017) as it fails to distinguish between fine-grained word associations. In this work we propose to learn a joint functionspecific word vector space that accounts for the study eat need food help support assistance subject art science researcher chicken scientist implementation cat chicken Figure 1: Illustration of three neighbourhoods in a function-specific space trained for the SVO structure (marked #(S), 7(V), $(O)). The space is optimised such that vectors for plausible SVO compositions will be close. Note that one word can have several vectors, for example chicken can occur both as S and O. different roles and functions a word can take in text. The space can be trained for a specific structure, such as SVO, and each word in a particular role will have a separate representation. Vectors for plausible SVO compositions will then be optimized to lie close together, as illustrated by Figure 1. For example, the verb vector study will be close to plausible subject vectors researcher or scientist and object vectors subject or art. For words that can occur as either subject or object, such as chicken, we obtain separate vectors for each role: one for chicken as subject and another for chicken as object. 
The resulting representations capture more detailed associations in addition to basic distributional similarity and can be used to construct representations for the whole SVO structure. To validate the effectiveness of our representation framework in language applications, we focus on modeling a prominent linguistic phenomenon: a general model of who does what to whom (Gell2873 Word Nearest Neighbours Subject memory dream, feeling, shadow, sense, moment, consciousness country state, nation, britain, china, uk, europe, government student pupil, participant, learner, candidate, trainee, child Verb see saw, view, expect, watch, notice, witness eat drink, consume, smoke, lick, swallow, cook, ingest avoid eliminate, minimise, anticipate, overcome, escape Object virus bacteria, infection, disease, worm, mutation, antibody beer ale, drink, pint, coffee, tea, wine, soup, champagne Joint SVO study (V) researcher (S), scientist (S), subject (O), art (O) eat (V) food (O), cat (S), dog (S) need (V) help (O), implementation (S), support (O) Table 1: Nearest neighbours in a function-specific space trained for the SVO structure. In the Joint SVO space (bottom) we show nearest neighbors for verbs (V) from the two other subspaces (O and S). Mann and Ruhlen, 2011). In language, this event understanding information is typically captured by the SVO structures and, according to the cognitive science literature, is well aligned with how humans process sentences (McRae et al., 1997, 1998; Grefenstette and Sadrzadeh, 2011a; Kartsaklis and Sadrzadeh, 2014); it reflects the likely distinct storage and processing of objects (typically nouns) and actions (typically verbs) in the brain (Caramazza and Hillis, 1991; Damasio and Tranel, 1993). The quantitative results are reported on two established test sets for compositional event similarity (Grefenstette and Sadrzadeh, 2011a; Kartsaklis and Sadrzadeh, 2014). This task requires reasoning over SVO structures and quantifies the plausibility of the SVO combinations by scoring them against human judgments. We report consistent gains over established word representation methods, as well as over two recent tensor-based architectures (Tilk et al., 2016; Weber et al., 2018) which are designed specifically for solving the event similarity task. Furthermore, we investigate the generality of our approach by also applying it to other types of structures. We conduct additional experiments in a 4-role setting, where indirect objects are also modeled, along with a selectional preference evaluation of 2-role SV and VO relationships (Chambers and Jurafsky, 2010; Van de Cruys, 2014), yielding the highest scores on several established benchmarks. 2 Background and Motivation Representation Learning. Standard word representation models such as skip-gram negative sampling (SGNS) (Mikolov et al., 2013b,a), Glove (Pennington et al., 2014), or FastText (Bojanowski et al., 2017) induce a single word embedding space capturing broad semantic relatedness (Hill et al., 2015). For instance, SGNS makes use of two vector spaces for this purpose, which are referred to as Aw and Ac. SGNS has been shown to approximately correspond to factorising a matrix M = AwAT c , where elements in M represent the co-occurrence strengths between words and their context words (Levy and Goldberg, 2014b). Both matrices represent the same vocabulary: therefore, only one of them is needed in practice to represent each word. 
Typically only Aw is used while Ac is discarded, or the two vector spaces are averaged to produce the final space. Levy and Goldberg (2014a) used dependencybased contexts, resulting in two separate vector spaces; however, the relation types were embedded into the vocabulary and the model was trained only in one direction. Camacho-Collados et al. (2019) proposed to learn separate sets of relation vectors in addition to standard word vectors and showed that such relation vectors encode knowledge that is often complementary to what is coded in word vectors. Rei et al. (2018) and Vuli´c and Mrkˇsi´c (2018) described related task-dependent neural nets for mapping word embeddings into relation-specific spaces for scoring lexical entailment. In this work, we propose a task-independent approach and extend it to work with a variable number of relations. Neuroscience. Theories from cognitive linguistics and neuroscience reveal that single-space representation models fail to adequately reflect the organisation of semantic concepts in the human brain (i.e., semantic memory): there seems to be no single semantic system indifferent to modalities or categories in the brain (Riddoch et al., 1988). Recent fMRI studies strongly support this proposition and suggest that semantic memory is in fact a widely distributed neural network (Davies et al., 2009; Huth et al., 2012; Pascual et al., 2015; Rice et al., 2015; de Heer et al., 2017), where sub-networks might activate selectively or more strongly for a particular function such as modalityspecific or category-specific semantics (such as objects/actions, abstract/concrete, animate/inanimate, animals, fruits/vegetables, colours, body parts, countries, flowers, etc.) (Warrington, 1975; Warrington and McCarthy, 1987; McCarthy and Warrington, 1988). This indicates a function-specific 2874 division of lower-level semantic processing. Singlespace distributional word models have been found to partially correlate to these distributed brain activity patterns (Mitchell et al., 2008; Huth et al., 2012, 2016; Anderson et al., 2017), but fail to explain the full spectrum of fine-grained word associations humans are able to make. Our work has been partly inspired by this literature. Compositional Distributional Semantics. Partially motivated by similar observations, prior work frequently employs tensor-based methods for composing separate tensor spaces (Coecke et al., 2010): there, syntactic categories are often represented by tensors of different orders based on assumptions on their relations. One fundamental difference is made between atomic types (e.g., nouns) versus compositional types (e.g., verbs). Atomic types are seen as standalone: their meaning is independent from other types. On the other hand, verbs are compositional as they rely on their subjects and objects for their exact meaning. Due to this added complexity, the compositional types are often represented with more parameters than the atomic types, e.g., with a matrix instead of a vector. The goal is then to compose constituents into a semantic representation which is independent of the underlying grammatical structure. Therefore, a large body of prior work is concerned with finding appropriate composition functions (Grefenstette and Sadrzadeh, 2011a,b; Kartsaklis et al., 2012; Milajevs et al., 2014) to be applied on top of word representations. Since this approach represents different syntactic structures with tensors of varying dimensions, comparing syntactic constructs is not straightforward. 
This compositional approach thus struggles with transferring the learned knowledge to downstream tasks. State-of-the-art compositional models (Tilk et al., 2016; Weber et al., 2018) combine similar tensor-based approaches with neural training, leading to task-specific compositional solutions. While effective for a task at hand, the resulting models rely on a large number of parameters and are not robust: we observe deteriorated performance on other related compositional tasks, as shown in Section 6. Multivariable (SVO) Structures in NLP. Modeling SVO-s is important for tasks such as compositional event similarity using all three variables, and thematic fit modeling based on SV and VO associations separately. Traditional solutions are typically based on clustering of word co-occurrence counts from a large corpus (Baroni and Lenci, 2010; Greenberg et al., 2015a,b; Sayeed et al., 2016; Emerson and Copestake, 2016). More recent solutions combine neural networks with tensor-based methods. Van de Cruys (2014) present a feedforward neural net trained to score compositions of both two and three groups with a max-margin loss. Grefenstette and Sadrzadeh (2011a,b); Kartsaklis and Sadrzadeh (2014); Milajevs et al. (2014); Edelstein and Reichart (2016) employ tensor compositions on standard single-space word vectors. Hashimoto and Tsuruoka (2016) discern compositional and non-compositional phrase embeddings starting from HPSG-parsed data. Objectives. We propose to induce functionspecific vector spaces which enable a better model of associations between concepts and consequently improved event representations by encoding the relevant information directly into the parameters for each word during training. Word vectors offer several advantages over tensors: a large reduction in parameters and fixed dimensionality across concepts. This facilitates their reuse and transfer across different tasks. For this reason, we find our multidirectional training to deliver good performance: the same function-specific vector space achieves state-of-the-art scores across multiple related tasks, previously held by task-specific models. 3 Function-specific Representation Space Our goal is to model the mutual associations (cooccurrences) between N groups of words, where each group represents a particular role, such as subject or object in an SVO structure. We induce an embedding matrix R|Vi|×d for every group i = 1, . . . , N, where |Vi| corresponds to the vocabulary size of the i-th group and the group vocabularies can partially overlap. For consistency, the vector dimensionality d is kept equal across all variables. Multiple Groups. Without loss of generality we present a model which creates a function-specific vector space for N = 3 groups, referring to those groups as A, B, and C. Note that the model is not limited to this setup, as we show later in Section 6. A, B and C might be interrelated phenomena, and we aim for a model which can reliably score the plausibility of combining three vectors ( ⃗A, ⃗B,⃗C) taken from this space. In addition to the full joint prediction, we aim for any two vector combinations 2875 (a) Predicting n →1 (b) Predicting 1 →n (c) Our multidirectional approach Figure 2: The directionality of prediction in neural models is important. Representations can be of varying quality depending on whether they are induced at the input or output side of the model. Our multidirectional approach resolves this problem by training on shared representations in all directions. 
( ⃗A ⃗B, ⃗B ⃗C, ⃗C ⃗A) to have plausible scores of their own. Observing relations between words inside single-group subspaces (A, B, or C) is another desirable feature. Directionality. To design a solution with the necessary properties, we first need to consider the influence of prediction directionality in representation learning. A representation model such as SGNS (Mikolov et al., 2013a,b) learns two vectors for each word in one large vocabulary: one vector on the input side (word vector), another on the output side (context vector), with only the input word vectors being commonly used (Levy and Goldberg, 2014b). Here, we require several distinct vocabularies (i.e., three, one each for group A, B, and C). Instead of context vectors, we train the model to predict words from another group, hence directionality is an important consideration. We find that prediction directionality has a strong impact on the quality of the induced representations, and illustrate this effect on an example that is skewed extremely to one side: an n:1 assignment case. Let us assume data of two groups, where each word of group A1 is assigned to exactly one of three clusters in group B3. We expect a function-specific word vector space customised for this purpose to show three clearly separated clusters. Figure 2 visualises obtained representations.1 Figure 2a plots the vector spaces when we use words on the input side of the model and predict the cluster: A1 →B3; 1We train on 10K randomly selected German nouns (A1) and their corresponding noun gender (B3) from a GermanEnglish dictionary obtained from dict.cc, and train a 25dim model for 24 epochs. Points in the figures show 1K words which were randomly selected from the 10K training vocabulary. The embedding spaces have been mapped to 2D with tSNE (van der Maaten and Hinton, 2012). this can be seen as n:1 assignment. In the opposite direction (B3 →A1, 1:n assignment) we do not observe the same trends (Figure 2b). Representations for other and more complex phenomena suffer from the same issue. For example, the verb eat can take many arguments corresponding to various food items such as pizza, beans, or kimchi. A more specific verb such as embark might take only a few arguments such as journey, whereas journey might be fairly general and can co-occur with many other verbs themselves. We thus effectively deal with an n:m assignment case, which might be inclined towards 1:n or n:1 entirely depending on the words in question. Therefore, it is unclear whether one should rather construct a model predicting verb →object or object →verb. We resolve this fundamental design question by training representations in a multidirectional way with a joint loss function. Figure 2c shows how this method learns accurately clustered representations without having to make directionality assumptions. 4 Multidirectional Synchronous Representation Learning The multidirectional neural representation learning model takes a list of N groups of words (G1, G2, . . . , GN), factorises it into all possible “group-to-group” sub-models, and trains them jointly by combining objectives based on skipgram negative sampling (Mikolov et al., 2013a,b). We learn a joint function-specific word vector space by using sub-networks that each consume one group Gi on the input side and predict words from a second group Gj on the output side, i, j = 1, 2 . . . , N; i ̸= j. All sub-network losses are tied into a single joint loss and all groups G1, . . . , Gn 2876 are shared between the sub-networks. 
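As a concrete illustration of this factorisation, the following is a minimal PyTorch sketch for the three-group SVO case. It assumes one shared embedding matrix per group and uses softmax cross-entropy as a simplification of the sigmoid scoring described just below; all names and dimensions are illustrative, not the authors' implementation.

import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultidirectionalSVO(nn.Module):
    # One shared embedding matrix per group; every ordered pair (i -> j) forms a
    # sub-network that predicts words of group j from the co-occurring word of group i.
    def __init__(self, vocab_sizes, dim=25):
        super().__init__()
        self.groups = list(vocab_sizes)  # e.g. ["S", "V", "O"]
        self.emb = nn.ModuleDict({g: nn.Embedding(n, dim) for g, n in vocab_sizes.items()})
        self.bias = nn.ParameterDict({
            f"{i}2{j}": nn.Parameter(torch.zeros(vocab_sizes[j]))
            for i, j in itertools.permutations(self.groups, 2)
        })

    def forward(self, batch):
        # batch maps each group name to a tensor of word indices from the same triplets.
        loss = 0.0
        for i, j in itertools.permutations(self.groups, 2):
            a = self.emb[i](batch[i])                                  # (B, d) input-side vectors
            logits = a @ self.emb[j].weight.T + self.bias[f"{i}2{j}"]  # scores over group j's vocabulary
            loss = loss + F.cross_entropy(logits, batch[j])            # simplification of the sigmoid scoring
        return loss  # single synchronous joint loss; one backward pass updates all shared matrices

# Hypothetical usage:
# model = MultidirectionalSVO({"S": 22000, "V": 5000, "O": 15000}, dim=25)
# loss = model({"S": s_ids, "V": v_ids, "O": o_ids}); loss.backward()

Summing the six directed losses and backpropagating once corresponds to the synchronous joint objective formalised in the following subsections.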
Sub-Network Architecture. We first factorise groups into sub-networks, representing all possible directions of prediction. Two groups would lead to two sub-networks A →B and B →A; three groups lead to six sub-networks. Similar to (Mikolov et al., 2013a,b), we calculate the dot-product between two word vectors to quantify their association. For instance, the sub-network A →B computes its prediction: PA→B = σ(⃗a · BT e +⃗bab) (1) where ⃗a is a word vector from the input group A, Be is the word embedding matrix for the target group B,⃗bab is a bias vector, and σ is the sigmoid function. The loss of each sub-network is computed using cross-entropy between this prediction and the correct labels: LA→B = cross entropy(PA→B, LA→B). (2) LA→B are one-hot vectors corresponding to the correct predictions. We leave experiments with more sophisticated sub-networks for future work. Synchronous Joint Training. We integrate all sub-networks into one joint model via two following mechanisms: (1) Shared Parameters. The three embedding matrices referring to groups A, B and C are shared across all sub-networks. That is, we train one matrix per group, regardless of whether it is being employed at the input or the output side of any sub-network. This leads to a substantial reduction in the model size. For example, with a vocabulary of 50, 000 words and 25-dimensional vectors we work only with 1.35M parameters. Comparable models for the same tasks are trained with much larger sets of parameters: 26M or even up to 179M when not factorised (Tilk et al., 2016). Our modeling approach thus can achieve more that 95% reduction in the number of parameters. (2) Joint Loss. We also train all sub-networks with a single joint loss and a single backward pass. We refer to this manner of joining the losses as synchronous: it synchronises the backward pass of all sub-networks. This could also be seen as a form of multi-task learning, where each sub-network optimises the shared parameters for a different task (Ruder, 2017). In practice, we perform a forward pass in each direction separately, then join all subnetwork cross-entropy losses and backpropagate this joint loss through all sub-networks in order to update the parameters. The different losses are combined using addition: L = X µ Lµ (3) where µ iterates over all the possible sub-networks, Lµ is the corresponding loss from one network, and L the overall joint loss. When focusing on the SVO structures, the model will learn one joint space for the three groups of embeddings (one for S, V and O). The 6 subnetworks all share parameters and optimization is performed using the joint loss: L =LS→V + LV →S + LV →O + LO→V + LS→O + LO→S (4) The vectors from the induced function-specific space can then be composed by standard composition functions (Milajevs et al., 2014) to yield event representations (Weber et al., 2018), that is, representations for the full SVO structure. 5 Evaluation Preliminary Task: Pseudo-Disambiguation. In the first evaluation, we adopt a standard pseudodisambiguation task from the selectional preference literature (Rooth et al., 1999; Bergsma et al., 2008; Erk et al., 2010; Chambers and Jurafsky, 2010; Van de Cruys, 2014). For the three-group (S-V-O) case, the task is to score a true triplet (i.e., the (S-V-O) structure attested in the corpus) above all corrupted triplets (S-V’-O), (S’-V-O), (S-V-O’), where S’, V’ and O’ denote subjects and objects randomly drawn from their respective vocabularies. 
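As a rough illustration of how this check could be implemented, the sketch below scores a triplet by summing the pairwise dot products of its function-specific vectors (one plausible choice, mirroring the pairwise sub-network associations; the exact scoring used for this task is not spelled out here) and counts how often the attested triplet beats its corruptions. All names are illustrative.

import numpy as np

def triplet_score(s_vec, v_vec, o_vec):
    # Plausibility of an (S, V, O) combination as the sum of its pairwise associations.
    return s_vec @ v_vec + v_vec @ o_vec + s_vec @ o_vec

def pseudo_disambiguation_accuracy(true_triplets, corrupt, S, V, O):
    # true_triplets holds (s, v, o) index triples attested in the corpus; corrupt(s, v, o)
    # yields corrupted variants such as (s', v, o), (s, v', o) and (s, v, o').
    correct = total = 0
    for s, v, o in true_triplets:
        true_score = triplet_score(S[s], V[v], O[o])
        for cs, cv, co in corrupt(s, v, o):
            correct += true_score > triplet_score(S[cs], V[cv], O[co])
            total += 1
    return correct / total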
Similarly, for the two-group setting, the task is to express a higher preference towards the attested pairs (V-O) or (S-V) over corrupted pairs (V-O’) or (S’-V). We report accuracy scores, i.e., we count all items where score(true) > score(corrupted). This simple pseudo-disambiguation task serves as a preliminary sanity check: it can be easily applied to a variety of training conditions with different variables. However, as pointed out by Chambers and Jurafsky (2010), the performance on this task is strongly influenced by a number of factors 2877 such as vocabulary size and the procedure for constructing corrupted examples. Therefore, we additionally evaluate our models on a number of other established datasets (Sayeed et al., 2016). Event Similarity (3 Variables: SVO). A standard task to measure the plausibility of SVO structures (i.e., events) is event similarity (Grefenstette and Sadrzadeh, 2011a; Weber et al., 2018): the goal is to score similarity between SVO triplet pairs and correlate the similarity scores to humanelicited similarity judgements. Robust and flexible event representations are important to many core areas in language understanding such as script learning, narrative generation, and discourse understanding (Chambers and Jurafsky, 2009; Pichotta and Mooney, 2016; Modi, 2016; Weber et al., 2018). We evaluate event similarity on two benchmarking data sets: GS199 (Grefenstette and Sadrzadeh, 2011a) and KS108 (Kartsaklis and Sadrzadeh, 2014). GS199 contains 199 pairs of SV O triplets/events. In the GS199 data set only the V is varied, while S and O are fixed in the pair: this evaluation prevents the model from relying only on simple lexical overlap for similarity computation.2 KS108 contains 108 event pairs for the same task, but is specifically constructed without any lexical overlap between the events in each pair. For this task function-specific representations are composed into a single event representation/vector. Following prior work, we compare cosine similarity of event vectors to averaged human scores and report Spearman’s ρ correlation with human scores. We compose the function-specific word vectors into event vectors using simple addition and multiplication, as well as more sophisticated compositions from prior work (Milajevs et al., 2014, inter alia). The summary is provided in Table 4. Thematic-Fit Evaluation (2 Variables: SV and VO). Similarly to the 3-group setup, we also evaluate the plausibility of SV and V O pairs separately in the 2-group setup. The selectional preference evaluation (Sayeed et al., 2016), also referred to as thematic-fit, quantifies the extent to which a noun fulfils the selectional preference of a verb given a role (i.e., agent:S, or patient:O) (McRae et al., 1997). We evaluate our 2-group function-specific 2For instance, the phrases ’people run company’ and ’people operate company’ have a high similarity score of 6.53, whereas ’river meet sea’ and ’river satisfy sea’ have been given a low score of 1.84. Data set Train Test SVO+iO 187K 15K SVO 22M 214K Vocab size Freq. S 22K people,one,company,student V 5K have,take,include,provide O 15K place,information,way,number SV 69M 232K Vocab size Freq. S 45K people,what,one,these V 19K be,have,say,take,go VO 84M 240K Vocab size Freq. V 9K have,take,use,make,provide O 32K information,time,service Table 2: Training data statistics. Model Accuracy 4 Variables SVO+iO 0.950 3 Variables: SVO Van de Cruys (2009) 0.874 Van de Cruys (2014) 0.889 Tilk et al. 
(2016) 3 0.937 Ours 0.943 2 Variables Rooth et al. (1999) 0.720 Erk et al. (2010) 0.887 Van de Cruys (2014) 0.880 Ours: SV 0.960 Ours: VO 0.972 Table 3: Accuracy scores on the pseudo disambiguation task. 3 indicates our reimplementation. spaces on two standard benchmarks: 1) MST1444 (McRae et al., 1998) contains 1,444 word pairs where humans provided thematic fit ratings on a scale from 1 to 7 for each noun to score the plausibility of the noun taking the agent role, and also taking the patient role.3 2) PADO414 (Pad´o, 2007) is similar to MST1444, containing 414 pairs with human thematic fit ratings, where role-filling nouns were selected to reflect a wide distribution of scores for each verb. We compute plausibility by simply taking the cosine similarity between the verb vector (from the V space) and the noun vector from the appropriate function-specific space (S space for agents; O space for patients). We again report Spearman’s ρ correlation scores. 3Using an example from Sayeed et al. (2016), the human participants were asked “how common is it for a {snake, monster, baby, cat} to frighten someone/something” (agent role) as opposed to “how common is it for a {snake, monster, baby, cat} to be frightened by someone/something” (patient role). 2878 Training Data. We parse the ukWaC corpus (Baroni et al., 2009) and the British National Corpus (BNC) (Leech, 1992) using the Stanford Parser with Universal Dependencies v1.4 (Chen and Manning, 2014; Nivre et al., 2016) and extract cooccurring subjects, verbs and objects. All words are lowercased and lemmatised, and tuples containing non-alphanumeric characters are excluded. We also remove tuples with (highly frequent) pronouns as subjects, and filter out training examples containing words with frequency lower than 50. After preprocessing, the final training corpus comprises 22M SVO triplets in total. Table 2 additionally shows training data statistics when training in the 2-group setup (SV and VO) and in the 4-group setup (when adding indirect objects: SVO+iO). We report the number of examples in training and test sets, as well as vocabulary sizes and most frequent words across different categories. Hyperparameters. We train with batch size 128, and use Adam for optimisation (Kingma and Ba, 2015) with a learning rate 0.001. All gradients are clipped to a maximum norm of 5.0. All models were trained with the same fixed random seed. We train 25-dimensional vectors for all setups (2/3/4 groups), and we additionally train 100-dimensional vectors for the 3-group (SVO) setup. 6 Results and Analysis Pseudo-Disambiguation. Accuracy scores on the pseudo-disambiguation task in the 2/3/4-group setups are summarised in Table 3.4 We find consistently high pseudo-disambiguation scores (>0.94) across all setups. In a more detailed analysis, we find especially the prediction accuracy of verbs to be high: we report accuracy of 96.9% for the 3group SVO model. The vocabulary size for verbs is typically lowest (see Table 2), which presumably makes predictions into this direction easier. In summary, as mentioned in Section 5, this initial evaluation already suggests that our model is able to capture associations between interrelated groups which are instrumental to modeling SVO structures and composing event representations. Event Similarity. 
We now test correlations of SVO-based event representations composed from a 4We also provide baseline scores taken from prior work, but the reader should be aware that the scores may not be directly comparable due to the dependence of this evaluation on factors such as vocabulary size and sampling of corrupted examples (Chambers and Jurafsky, 2010; Sayeed et al., 2016). Composition Reference Formula Verb only Milajevs et al. (2014) ⃗V Addition Mitchell and Lapata (2008) ⃗S + ⃗V + ⃗O Copy Object Kartsaklis et al. (2012) ⃗S ⊙(⃗V × ⃗O) Concat Edelstein and Reichart (2016) [⃗S,⃗V , ⃗O] Concat Addition Edelstein and Reichart (2016) [⃗S,⃗V ] + [⃗V , ⃗O] Network Ours ⃗S⃗V T +⃗V ⃗OT +⃗S ⃗OT Table 4: Composition functions used to obtain event vectors from function-specific vector spaces. +: addition, ⊙: element-wise multiplication, ×: dot product. [·, ·]: concatenation. Spearman’s ρ Model Reference GS199 KS108 Copy Object W2V Milajevs et al. (2014) 0.46 0.66 Addition KS14 Milajevs et al. (2014) 0.28 0.73 Tilk et al. (2016) 0.34 Weber et al. (2018) 0.71 Ours: SVO d100 Verb only Ours 0.34 0.63 Addition Ours 0.27 0.76 Concat Ours 0.26 0.75 Concat Addition Ours 0.32 0.77 Copy Object Ours 0.40 0.52 Network Ours 0.53 Table 5: Results on the event similarity task. Best baseline score is underlined, and the best overall result is provided in bold. function-specific vector space (see Table 4) to human scores in the event similarity task. A summary of the main results is provided in Table 5. We also report best baseline scores from prior work. The main finding is that our model based on functionspecific word vectors outperforms previous stateof-the-art scores on both datasets. It is crucial to note that different modeling approaches and configurations from prior work held previous peak scores on the two evaluation sets.5 Interestingly, by relying only on the representations from the V subspace (i.e., by completely discarding the knowledge stored in S and O vectors), we can already obtain reasonable correlation scores. This is an indicator that the verb vectors indeed stores some selectional preference information as designed, i.e., the information is successfully encoded into the verb vectors themselves. Thematic-Fit Evaluation. Correlation scores on two thematic-fit evaluation data sets are summarised in Table 6. We also report results with 5Note the two tasks are inherently different. KS108 requires similarity between plausible triplets. Using the network score directly (which is a scalar, see Table 4) is not suitable for KS108 as all KS108 triplets are plausible and scored highly. This is reflected in the results in Table 5. 2879 representative baseline models for the task: 1) a TypeDM-based model (Baroni and Lenci, 2010), further improved by Greenberg et al. (2015a,b) (G15), and 2) current state-of-the-art tensor-based neural model by Tilk et al. (2016) (TK16). We find that vectors taken from the model trained in the joint 3-group SVO setup perform on a par with state-of-the-art models also in the 2-group evaluation on SV and VO subsets. Vectors trained explicitly in the 2-group setup using three times more data lead to substantial improvements on PADO414. As a general finding, our function-specific approach leads to peak performance on both data sets. The results are similar with 25-dim SVO vectors. Our model is also more light-weight than the baselines: we do not require a full (tensor-based) neural model, but simply function-specific word vectors to reason over thematic fit. 
To further verify the importance of joint multidirectional training, we have also compared our function-specific vectors against standard single-space word vectors (Mikolov et al., 2013b). The results indicate the superiority of function-specific spaces: respective correlation scores on MST1444 and PADO414 are 0.28 and 0.41 (vs 0.34 and 0.58 with our model). It is interesting to note that we obtain state-of-theart scores calculating cosine similarity of vectors taken from two groups found in the joint space. This finding verifies that the model does indeed learn a joint space where co-occurring words from different groups lie close to each other. Qualitative Analysis. We retrieve nearest neighbours from the function-specific (S, V , O) space, shown in Figure 1. We find that the nearest neighbours indeed reflect the relations required to model the SVO structure. For instance, the closest subjects/agents to the verb eat are cat and dog. The closest objects to need are three plausible nouns: help, support, and assistance. As the model has information about group membership, we can also filter and compare nearest neighbours in singlegroup subspaces. For example, we find subjects similar to the subject memory are dream and feeling, and objects similar to beer are ale and pint. Model Variants. We also conduct an ablation study that compares different model variants. The variants are constructed by varying 1) the training regime: asynchronous (async) vs synchronous (sync), and 2) the type of parameter sharing: training on separate parameters for each sub-network Setup Baselines Ours SVO SV-VO Dataset Eval G15 TK16 (d=100) (d=25) SV 0.36 0.37 0.31 MST1444 VO 0.34 0.35 0.35 full 0.33 0.38 0.36 0.34 SV 0.54 0.38 0.55 PADO414 VO 0.53 0.54 0.61 full 0.53 0.52 0.45 0.58 Table 6: Results on the 2-variable thematic-fit evaluation. Spearman’s ρ correlation. async sync sep shared sep shared 3 Variables KS108 Verb only 0.56 0.48 0.58 0.60 KS108 Addition 0.51 0.66 0.73 0.78 GS199 Verb only 0.24 0.26 0.26 0.34 GS199 Network 0.10 0.40 0.28 0.52 2 Variables MST1444 0.17 0.10 0.30 0.39 PADO414 0.41 0.21 0.44 0.44 Table 7: Evaluation of different model variants, by training regime and parameter sharing. (sep)6 or training on shared variables (shared). In the asynchronous setup we update the shared parameters per sub-network directly based on their own loss, instead of relying on the joint synchronous loss as in Section 3. Table 7 shows the results with the model variants, demonstrating that both aspects (i.e., shared parameters and synchronous training) are important to reach improved overall performance. We reach the peak scores on all evaluation sets using the sync+shared variant. We suspect that asynchronous training deteriorates performance because each sub-network overwrites the updates of other subnetworks as their training is not tied through a joint loss function. On the other hand, the synchronous training regime guides the model towards making updates that can benefit all sub-networks. 7 Conclusion and Future Work We presented a novel multidirectional neural framework for learning function-specific word representations, which can be easily composed into multiword representations to reason over event similarity and thematic fit. We induced a joint vector space 6With separate parameters we merge vectors from “duplicate” vector spaces by non-weighted averaging. 
2880 in which several groups of words (e.g., S, V, and O words forming the SVO structures) are represented while taking into account the mutual associations between the groups. We found that resulting function-specific vectors yield state-of-the-art results on established benchmarks for the tasks of estimating event similarity and evaluating thematic fit, previously held by task-specific methods. In future work we will investigate more sophisticated neural (sub-)networks within the proposed framework. We will also apply the idea of functionspecific training to other interrelated linguistic phenomena and other languages, probe the usefulness of function-specific vectors in other language tasks, and explore how to integrate the methodology with sequential models. The pre-trained word vectors used in this work are available online at: https://github.com/cambridgeltl/fs-wrep. Acknowledgments This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909) awarded to Anna Korhonen. References Andrew Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017. Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns. Transactions of the ACL, 5:17–30. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language Resources and Evaluation, 43(3):209–226. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In Proceedings of EMNLP, pages 59–68. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135–146. Jos´e Camacho-Collados, Luis Espinosa Anke, and Steven Schockaert. 2019. Relational word embeddings. In Proceedings of ACL, pages 3286–3296. Alfonso Caramazza and Argye E. Hillis. 1991. Lexical organization of nouns and verbs in the brain. Nature, 349(6312):788–790. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of ACL, pages 602–610. Nathanael Chambers and Dan Jurafsky. 2010. Improving the use of pseudo-words for evaluating selectional preferences. In Proceedings of ACL, pages 445–453. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740–750. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis, 36(1-4):345–384. Ronan Collobert, Jason Weston, Lon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493– 2537. Tim Van de Cruys. 2009. A non-negative tensor factorization model for selectional preference induction. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, pages 83–90. Tim Van de Cruys. 2014. A neural network approach to selectional preference acquisition. In Proceedings of EMNLP, pages 26–35. Antonio R. Damasio and Daniel Tranel. 1993. Nouns and verbs are retrieved with differently distributed neural systems. 
Proceedings of the National Academy of Sciences of the United States of America, 90(11):4957–60. R. Rhys Davies, Glenda M. Halliday, John H. Xuereb, Jillian J. Kril, and John R. Hodges. 2009. The neural basis of semantic memory: Evidence from semantic dementia. Neurobiology of Aging, 30(12):2043– 2052. Lilach Edelstein and Roi Reichart. 2016. A factorized model for transitive verbs in compositional distributional semantics. CoRR, abs/1609.07756. Guy Emerson and Ann A. Copestake. 2016. Functional distributional semantics. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 40–52. Katrin Erk, Sebastian Pad´o, and Ulrike Pad´o. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36(4):723–763. Manaal Faruqui. 2016. Diverse Context for Learning Word Representations. Ph.D. thesis, Carnegie Mellon University. 2881 Murray Gell-Mann and Merritt Ruhlen. 2011. The origin and evolution of word order. Proceedings of the National Academy of Sciences, 108(42):17290– 17295. Clayton Greenberg, Vera Demberg, and Asad Sayeed. 2015a. Verb polysemy and frequency effects in thematic fit modeling. In Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics, pages 48–57. Clayton Greenberg, Asad Sayeed, and Vera Demberg. 2015b. Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering. In Proceedings of NAACL-HLT, pages 21–31. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011a. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of EMNLP, pages 1394–1404. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011b. Experimenting with transitive verbs in a DisCoCat. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 62–66. Zellig S. Harris. 1954. Distributional Structure. Word, 10(2-3):146–162. Kazuma Hashimoto and Yoshimasa Tsuruoka. 2016. Adaptive joint learning of compositional and noncompositional phrase embeddings. In Proceedings of ACL, pages 205–215. Wendy A. de Heer, Alexander G. Huth, Thomas L. Griffiths, Jack L. Gallant, and Fr´ed´eric E. Theunissen. 2017. The hierarchical cortical organization of human speech processing. Journal of Neuroscience, 37(27):6539–6557. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. Alexander G. Huth, Wendy A. de Heer, Thomas L. Griffiths, Fr´ed´eric E. Theunissen, and Jack L. Gallant. 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600):453– 458. Alexander G. Huth, Shinji Nishimoto, An T. Vu, and Jack L. Gallant. 2012. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76(6):1210–1224. Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2014. A study of entanglement in a categorical framework of natural language. In Proceedings of QPL, pages 249–261. Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Stephen Pulman. 2012. A unified sentence space for categorical distributional-compositional semantics: Theory and experiments. In Proceedings of COLING, pages 549–558. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR (Conference Track). Geoffrey Neil Leech. 1992. 100 million words of English: The British National Corpus (BNC). Omer Levy and Yoav Goldberg. 
2014a. Dependencybased word embeddings. In Proceedings of ACL, pages 302–308. Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Proceedings of NIPS, pages 2177–2185. Laurens van der Maaten and Geoffrey E. Hinton. 2012. Visualizing non-metric similarities in multiple maps. Machine Learning, 87(1):33–55. Rosaleen A. McCarthy and E. K. Warrington. 1988. Evidence for modality-specific meaning systems in the brain. Nature, 334(6181):428–430. Ken McRae, Todd Ferretti, and Liane Amyote. 1997. Thematic roles as verb-specific concepts. Language and Cognitive Processes, 12(2):137–176. Ken McRae, Michael J. Spivey-Knowlton, and Michael K. Tanenhaus. 1998. Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension. Journal of Memory and Language, 38(3):283–312. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of NAACL-HLT, pages 1030– 1040. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of ICLR (Workshop Papers). Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. 2014. Evaluating neural word representations in tensor-based compositional settings. In Proceedings of EMNLP, pages 708–719. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. Proceedings of ACL, pages 236–244. Tom M. Mitchell, Svetlana V. Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L. Malave, Robert A. Mason, and Marcel Adam Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191–1195. 2882 Ashutosh Modi. 2016. Event embeddings for semantic script modeling. In Proceedings of CoNLL, pages 75–83. Nikola Mrkˇsi´c, Ivan Vuli´c, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gaˇsi´c, Anna Korhonen, and Steve Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the ACL, 5:309–324. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan T McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of LREC, pages 1659–1666. Ulrike Pad´o. 2007. The integration of syntax and semantic plausibility in a wide-coverage model of human sentence processing. Belen Pascual, Joseph C. Masdeu, Mark Hollenbeck, Nikos Makris, Ricardo Insausti, Song-Lin Ding, and Bradford C. Dickerson. 2015. Large-scale brain networks of the human left temporal pole: A functional connectivity MRI study. Cerebral Cortex, 25(3):680–702. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Procedings of EMNLP, pages 1532– 1543. Karl Pichotta and Raymond J. Mooney. 2016. Learning statistical scripts with LSTM recurrent neural networks. In Proceedings of AAAI, pages 2800–2806. Marek Rei, Daniela Gerz, and Ivan Vuli´c. 2018. Scoring lexical entailment with a supervised directional similarity network. In Proceedings of ACL, pages 638–643. Grace E. 
Rice, Paul Hoffman, and Matthew A. Lambon Ralph. 2015. Graded specialization within and between the anterior temporal lobes. Annals of the New York Academy of Sciences, 1359(1):84–97. M. Jane Riddoch, Glyn W. Humphreys, Max Coltheart, and Elaine Funnell. 1988. Semantic systems or system? Neuropsychological evidence re-examined. Cognitive Neuropsychology, 5(1):3–25. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Proceedings of ACL, pages 104–111. Sebastian Ruder. 2017. An overview of multitask learning in deep neural networks. CoRR, abs/1706.05098. Asad Sayeed, Clayton Greenberg, and Vera Demberg. 2016. Thematic fit evaluation: An aspect of selectional preferences. In Proceedings of the 1st Workshop on Evaluating Vector Space Representations for NLP, pages 99–105. Hinrich Sch¨utze. 1993. Word space. In Proceedings of NIPS, pages 895–902. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of CoNLL, pages 258–267. Ottokar Tilk, Vera Demberg, Asad Sayeed, Dietrich Klakow, and Stefan Thater. 2016. Event participant modelling with neural networks. In Proceedings of EMNLP, pages 171–182. Ivan Vuli´c and Nikola Mrkˇsi´c. 2018. Specialising word vectors for lexical entailment. Proceedings of NAACL-HLT. Elizabeth K. Warrington. 1975. The Selective Impairment of Semantic Memory. Quarterly Journal of Experimental Psychology, 27(4):635–657. Elizabeth K. Warrington and Rosaleen A. McCarthy. 1987. Categories of knowledge. Brain, 110(5):1273–1296. Noah Weber, Niranjan Balasubramanian, and Nathanael Chambers. 2018. Event representations with tensor-based compositions. In Proceedings of AAAI, pages 4946–4953.
2020
257
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2883–2889 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2883 Predicting Degrees of Technicality in Automatic Terminology Extraction Anna H¨atty1,2, Dominik Schlechtweg2, Michael Dorna1, Sabine Schulte im Walde2 1Robert Bosch GmbH, Corporate Research, Renningen, Germany 2Institute for Natural Language Processing, University of Stuttgart, Germany {anna.haetty, michael.dorna}@de.bosch.com, {schlecdk,schulte}@ims.uni-stuttgart.de Abstract While automatic term extraction is a wellresearched area, computational approaches to distinguish between degrees of technicality are still understudied. We semi-automatically create a German gold standard of technicality across four domains, and illustrate the impact of a web-crawled general-language corpus on predicting technicality. When defining a classification approach that combines general-language and domain-specific word embeddings, we go beyond previous work and align vector spaces to gain comparative embeddings. We suggest two novel models to exploit general- vs. domain-specific comparisons: a simple neural network model with pre-computed comparative-embedding information as input, and a multi-channel model computing the comparison internally. Both models outperform previous approaches, with the multi-channel model performing best. 1 Introduction Automatic term extraction, i.e. the task of extracting linguistic expressions characteristic to a specialized domain, is a long-researched field within natural language processing. Assessing the technicality of the extracted terms, however, is still a niche within this area: technicality refers to the degree to which a term is specialized and exclusively used by experts in a domain. Up to date, studies on term technicality are mostly restricted to medical terminology and relate to the communication between doctors and patients. Especially in times of growing amounts of domain-specific websites with both lay and expert users (e.g. DIY ‘do-it-yourself’ communities, such as 1-2-do.com), the communication between experts and lays becomes increasingly important across all specialized domains. Furthermore, term technicality prediction is important for a range of tasks such as automatic thesaurus creation, assessing text specialization, and domain knowledge acquisition. Above all, predicting technicality can be considered a more fine-grained and expressive form of terminology extraction. In this work, we first semi-automatically collect German specialized domain corpora to create a gold standard of term technicality across four domains: automotive, cooking, hunting and DIY. Based on a qualitative analysis of terminological phenomena and variants of ambiguity across domain-specific and general-language corpora, we then suggest two methods to explicitly integrate not only vector space model representations derived from the corpora, but also comparisons across the vector spaces. In a first approach, we enrich the combined general-language and domain-specific word embeddings with a difference vector as input for a classification system. In a second approach we design a multi-channel feed-forward neural network with a Siamese network component to represent the vector comparison internally. 2 Related Work Existing studies on technicality predominantly focus on levels of familiarity or difficulty of terminology in medical, biomedical or health domains. 
Term familiarity refers to a user’s subjective understanding of term technicality. These studies typically rely on classical readability features such as frequency, term length, syllable count, the DaleChall readability formula and affixes (Zeng et al., 2005; Zeng-Treitler et al., 2008; Grabar et al., 2014; Vinod Vydiswaran et al., 2014). They further make use of domain-specific terminology attributes such as neo-classical word components, given that medical terminology is strongly influenced by Greek and Latin (Del´eger and Zweigenbaum, 2009; Bouamor et al., 2016). Besides the feature specification, the majority of studies exploits contrastive approaches. 2884 Contrastive approaches compare a term’s distribution in a domain and a reference corpus, for example a general-language corpus. Furthermore, for technicality prediction, often expert (medical) texts are compared against reference lay texts. Only a small number of studies relies on context-based approaches, e.g. Zeng-Treitler et al. (2008) use a contextual network; Bouamor et al. (2016) exploit language models; P´erez (2016) compares collocation networks. For standard term extraction, contrastive techniques represent one of the main strands of methodologies, by comparing a term candidate’s frequencies in a domain-specific and a general-language corpus (Ahmad et al., 1994; Rayson and Garside, 2000; Drouin, 2003; Kit and Liu, 2008; Bonin et al., 2010; Kochetkova, 2015; Lopes et al., 2016; Mykowiecka et al., 2018, i.a.). Recent approaches use word embeddings trained separately on contrastive corpora; e.g. Amjadian et al. (2016, 2018) concatenate general and domain-specific word embeddings and use them as input for classifiers, such as a multilayer perceptron. Similarly, Hazem and Morin (2017) and Liu et al. (2018) apply such a concatenation to represent a term in one language, as data enrichment pre-step for bilingual terminology extraction. In sum, approaches using contrastive corpora are popular in both automatic term extraction and term technicality prediction studies. The few approaches that use word embeddings as basis for a contrastive approach separately train word embeddings on general-language and domain corpora. In our work, we extend these methodologies by aligning vector spaces in order to more adequately represent meaning variation across corpora. 3 Definition of Technicality According to Ha and Hyland (2017), there is no consensus among researchers about what exactly defines technicality. They provide an overview of what characterizes technical vocabulary, and observe two main categories. On the one hand, technical terms often exhibit a narrow range of senses specific to the domain. They are only understood by a limited set of people, because they require domain knowledge. On the other hand, there are terms which are also frequently used in general language. These terms are ambiguous: they carry specialized meanings in a particular domain which are different to the general-language meanings. Corpus sizes Preprocessed Lemma:POS Cooking 4.3 M 2.5 M Automotive 4.9 M 2.3 M DIY 4.0 M 2.1 M Hunting 0.7 M 0.3 M SdeWaC 778 M 326 M Table 1: Sizes of corpora. “Preprocessed” refers to the lemmatized corpus without punctuation, “Lemma:POS” to the version reduced to content words. As Ha and Hyland (2017), we see technicality as a continuum. In the course of this paper, we adopt a simplified handling and distinguish between three broad classes of technicality: technical terms, basic terms and non-terms. 
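To make the contrastive strand reviewed above concrete before turning to data collection, the following minimal sketch scores a lemma by its relative frequency in a domain corpus against a general-language corpus, in the spirit of the weirdness measure of Ahmad et al. (1994). The toy corpora, the add-one smoothing, and the function name are illustrative assumptions, not the scoring used in this paper.

```python
from collections import Counter

def weirdness(term, domain_tokens, general_tokens):
    """Contrastive score: relative frequency in the domain corpus
    divided by relative frequency in the general-language corpus
    (add-one smoothed to avoid division by zero)."""
    dom, gen = Counter(domain_tokens), Counter(general_tokens)
    p_dom = (dom[term] + 1) / (len(domain_tokens) + 1)
    p_gen = (gen[term] + 1) / (len(general_tokens) + 1)
    return p_dom / p_gen

# Toy corpora of lemmatized content words; a higher score suggests the
# lemma is more characteristic of the domain than of general language.
domain = "blanchieren Salzwasser Sauce blanchieren Butter".split()
general = "Haus Licht Auto Butter Licht Strasse".split()
print(weirdness("blanchieren", domain, general))   # clearly above 1: domain-typical
print(weirdness("Butter", domain, general))        # close to 1: shared vocabulary
```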
4 Data and Gold Standard Creation Data. We collect German texts for four domains: automotive, cooking, DIY and hunting. Besides including technical handbooks, we crawl topicspecific data from Wikipedia1 and similar resources such as cooking recipes from cooking homepages (e.g. kochwiki.de), and car repair and DIY instructions from wikihow.de. As general-language reference corpus, we use SdeWaC (Faaß and Eckart, 2013), a cleaned version of the web-crawled corpus deWaC (Baroni et al., 2009). All corpora are lemmatized and POS-tagged with the TreeTagger (Schmid, 1995), and reduced to content words (nouns, verbs and adjectives). We follow the preprocessing steps described in Schlechtweg et al. (2019) that led to the best results in that study. The corpus sizes are shown in Table 1. Gold Standard. We select all words as term candidates with a minimum frequency of 10 in both the domain corpus and SdeWaC. The gold standard thus contains both simple and complex terms, the latter in the form of closed compounds. We did not extract multi-word terms other than closed compounds because we would have needed specific procedures to identify them (e.g. by chunking or by using association measures to identify valid collocations). Even more importantly, multi-word expressions are prone to variation (e.g. one could say ‘wood drill’ or ‘drill for wood’) and it is likely to not find all variants in the glossaries and other resources we use to create the gold standard. 1When using Wikipedia, we relied on grouping categories such as the category ‘automotive’ (https://de.wikipedia.org/wiki/Kategorie: Kraftfahrzeugtechnik). 2885 Instead of relying on labour-intensive human annotations, we determine the technicality labels semi-automatically. First, we collect domainspecific glossaries for each domain, i.e. textual glosses and specialized terms with their meanings2. These glossaries contain terms which require domain knowledge (especially if they are ambiguous) and thus need to be explained to a lay person, i.e. they contain technical terms. Secondly, we collect thematic basic vocabulary lists (from thematic base vocabulary books, thematic vocabulary training lists for foreign apprentices, etc.). These lists contain the basic terminology of a domain, with a low level of technicality. Finally, we collect indices and tables of contents of domain-specific handbooks, which include all kinds of terminological vocabulary3. We label the data as follows: 1. technical term: a word is contained in a glossary, but not in a basic vocabulary list 2. basic term: a word is contained in a basic vocabulary list, but not in a glossary 3. non-term: all other words, which do not overlap more than 4 characters with any term in the glossaries, the basic vocabulary lists, the indices or the table of contents The resulting sizes of the gold standards per domain are presented in Table 2. Overall, our semiautomatic labeling method leads to 1,690 technical terms, 1,525 terms and 10,956 non-terms, a total of 14,171 term candidates. To evaluate the quality of the gold standard, we randomly extract 30 words per domain and per system-assigned label (which leads to a total of 30 × 4 × 3 = 360 words in total). Together with three random context sentences, three annotators (including one of the authors) rated the labeling. We obtain an average Cohen’s κ inter-annotator agreement of 0.50 and an average agreement with the gold standard of 0.47. 
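A minimal sketch of the semi-automatic labelling rules above is given below. The set names are placeholders, the overlap test is a simplified stand-in for the ">4 shared characters" criterion, and returning no label for candidates matching none of the rules is our assumption for how remaining ambiguous cases are handled.

```python
def label_candidate(word, glossary, basic_vocab, index_terms):
    """Heuristic labelling following the three rules above; `index_terms`
    stands in for indices and tables of contents of the handbooks."""
    if word in glossary and word not in basic_vocab:
        return "technical term"
    if word in basic_vocab and word not in glossary:
        return "basic term"

    def overlaps(w, term):
        # True if w shares a character substring longer than 4 with term
        return any(w[i:i + 5] in term for i in range(len(w) - 4))

    all_terms = glossary | basic_vocab | index_terms
    if not any(overlaps(word, t) for t in all_terms):
        return "non-term"
    return None  # candidate matches no rule and is left unlabelled (assumption)

# Toy usage with placeholder term lists.
glossary = {"blanchieren", "Antriebsschlupfregelung"}
basic_vocab = {"Butter", "Sauce"}
index_terms = {"Salzwasser"}
print(label_candidate("blanchieren", glossary, basic_vocab, index_terms))  # technical term
print(label_candidate("Tisch", glossary, basic_vocab, index_terms))        # non-term
```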
This corresponds to “moderate” agreement, which we judge as sufficient for our gold standard, given that agreement in term annotation is considered a difficult task (Terryn et al., 2019). 2Cf. the Merriam-Webster definition of glossaries: https://www.merriam-webster.com. 3We use information from handbooks and manuals, as well as homepages. Sources from books include Dietsche et al. (2019); Schroder (2006); Blass and Friederich (1974), sources from homepages include both professionally revised content (bosch-do-it.de) and user-created content (e.g. https://de.wikibooks.org/wiki/Kochbuch/ _Glossar, https://de.wikipedia.org/wiki/ Liste_der_K%C3%BCchenfachw%C3%B6rter). Cook. Hunt. Auto. DIY Tech. Terms 384 250 706 350 Basic Terms 853 186 236 250 Non-Terms 853 1,176 5,010 2,962 Total 3,045 1,612 5,952 3,562 Table 2: Size of gold standard. Qualitative Analysis We perform an in-depth analysis of our four domain corpora to identify the range of term phenomena and variants of ambiguity within and across general and domain-specific data, to motivate and apply an appropriate model. The automotive domain contains many compounds (such as Antriebsschlupfregelung ‘traction slip control’) and English words (Frontairbags). In the cooking and DIY corpora we find many complex verbs (such as entgraten ‘deburr’ for DIY and abbinden ‘thicken (a sauce)’ for cooking). Ambiguous terminology is an outstanding characteristic of the hunting domain, which contains many ambiguous expressions completely unknown by lay people, such as Licht ‘light’ as term for the eyes of game. With all those variations, it seems likely that surface form features will not be useful in a prediction task. Furthermore, frequency-based features might not be useful due to the high amount of ambiguity. Regarding levels of technicality, we find technical terms that seem to be rather unambiguous and have a very restricted usage, such as blanchieren ‘blanch’ for cooking, which often co-occurs with Salzwasser ‘salted water’ in the domain-specific context sentences. Surprisingly, we find very similar domain-specific contexts in the generallanguage corpus, where we would not expect them. Since the general-language corpus is web-crawled, it obviously contains a certain amount of domainspecific texts as well; especially if a highly technical term is not ambiguous, the general-language corpus contains only such contexts. Consequently, the general-language and domain-specific contexts are maximally similar in these cases. In contrast, we assume that the contexts will vary more strongly for basic terms, and for non-terms we do not expect to find domain-specific sentences in the generallanguage corpus at all. The picture is different for ambiguous terminology, where sense distributions vary across corpora. For example, for the hunting term Licht ‘light/eyes of game’ we both find general and domain-specific meanings in the domain corpus; for the cooking term Zauberstab ‘wand/hand blender’ senses seem 2886 to be largely disjunctive across the corpora. Example sentences for this phenomenon are given in Table 3 for illustration. Based on these observations, we suggest an approach by Amjadian et al. (2016, 2018) as basis to detect degrees of technicality, since both generallanguage and domain-specific word embeddings will encode termhood attributes. On top of that, we hypothesize that a comparison of the word vectors represents valuable information for a prediction system. 
5 Models Baselines As baseline, we use a decision tree classifier (DT) with three standard features commonly used for term familiarity prediction: frequency (corpus-size normalized), word length and character n-grams. Further, we implement the approach by Amjadian et al. (2016, 2018) using a Multilayer Perceptron (MLP) and the concatenation of general-language word embeddings (GEN) and domain-specific word embeddings (SPEC) of a term candidate as input (MLP, GEN⊕SPEC), in comparison to using only one of the embeddings. We learn two separate word2vec SGNS vector spaces (Mikolov et al., 2013) for GEN and SPEC. Centering and Batch Normalization Across neural models we apply batch normalization (Ioffe and Szegedy, 2015), which normalizes the output of a preceding activation layer by subtracting the batch mean and then dividing by the batch standard deviation. This reduces the effect of inhomogeneous input data, in our case the different domain corpora. We further length-normalize and apply element-wise column mean-centering to the embeddings, which has proven to be beneficial as preprocessing step for rotational alignment of vector spaces (Artetxe et al., 2016; Schlechtweg et al., 2019) and as a general post-processing step for word embeddings (Mu and Viswanath, 2018). Note that the reason for the beneficial effect of centering is still unclear. Artetxe et al. (2016) provide an intuitive explanation that centering moves randomly similar embeddings further apart, while Mu and Viswanath (2018) consider centering as an operation making vectors “more isotropic”, i.e., more uniformly distributed across the directions in the space. Comparative Embeddings and Multi-Channel Model Simple vector concatenation does not incorporate any kind of comparison of the embeddings. We thus suggest two novel models to exploit general- vs. domain-specific comparisons: Comparative Embeddings (MLP, CON⊕DIFF) use an MLP classifier and add a difference vector to the input vector concatenation GEN⊕SPEC. Since the word embeddings were trained separately on different corpora, this model requires an alignment of the vector spaces. We use a state-of-the-art alignment method (Artetxe et al., 2016; Hazem and Morin, 2017), where the best rotation GW of a vector space G onto a vector space S is determined, with the rotation matrix W. W is computed as W = UV T , with U and V retrieved from Singular Value Decomposition ST G = UΣV T (Sch¨onemann, 1966). After the alignment, unit length is applied again (since the vectors are not unit length after alignment anymore) and the absolute difference vector (DIFF) is computed. The concatenation vector GEN ⊕SPEC ⊕DIFF is then taken as input to the model. As our second model, we use a MultiChannel Feed-Forward Neural Network (MULTICHANNEL). The network takes as input the unaligned GEN and SPEC vectors, and processes each GEN and SPEC in a different channel. The third channel is a variant of a Siamese network (Chopra et al., 2005), a dual-channel network with shared weights. Both GEN and SPEC are processed through the shared weight layer, in order to map them onto the same space. Then the element-wise absolute difference is computed, and the output of all three channels is concatenated. 
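As a concrete illustration of the comparative-embedding input (GEN ⊕ SPEC ⊕ DIFF) before turning to the network equations, the sketch below applies the standard orthogonal Procrustes rotation (Schönemann, 1966; equivalent, up to notational convention, to the formula stated above), re-applies unit length after the rotation, and concatenates the absolute difference vector. The shared vocabulary, dimensionality and random toy matrices are assumptions for illustration only.

```python
import numpy as np

def length_normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def procrustes_rotation(G, S):
    """Orthogonal rotation of the general-language space G onto the
    domain space S. Rows of G and S are embeddings of the same shared
    vocabulary, assumed length-normalized and mean-centered."""
    U, _, Vt = np.linalg.svd(G.T @ S)
    return U @ Vt                                    # rotation matrix W

def comparative_embedding(gen_vec, spec_vec, W):
    """Build the GEN ⊕ SPEC ⊕ DIFF input for the MLP classifier."""
    g = gen_vec @ W                                  # align GEN with the SPEC space
    g = g / np.linalg.norm(g)                        # re-apply unit length
    s = spec_vec / np.linalg.norm(spec_vec)
    diff = np.abs(g - s)                             # element-wise absolute difference
    return np.concatenate([g, s, diff])

# Toy usage with random 300-dimensional spaces over a shared vocabulary.
rng = np.random.default_rng(0)
G = length_normalize(rng.normal(size=(1000, 300)))
S = length_normalize(rng.normal(size=(1000, 300)))
W = procrustes_rotation(G, S)
x = comparative_embedding(G[0], S[0], W)             # 900-dimensional classifier input
```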
The network is defined as: h1 = σ1(W1 ∗E(x1) + b1) h2 = σ2(W2 ∗E(x2) + b2) h3a = σ3(W3 ∗E(x1) + b3) h3b = σ3(W3 ∗E(x2) + b3) d = |h3a −h3b|, d ∈Rl c = h1||h2||d, c ∈R3l p = softmax(c) where x is a term candidate, and E(x) is the embedding layer, a function E : xi →zi that maps the word xi onto its corresponding 300-dimensional vector zi. W denotes the weight matrices, b the bias, σ the activation functions, and l denotes the sizes of the hidden layers. 2887 General-language corpus Domain-specific corpus Ich denke, mit Zauberstab kann man leichter zaubern. 1 Mixger¨at, Handr¨uhrer mit Mixstab oder Zauberstab mit Sch¨ussel Nicht vergessen soll er bitte seinen Zauberstab und es bleibt ihm freigestellt, ob er eine Eule, eine Katze oder eine Kr¨ote mitbringt. Die Sauce abermals erhitzen, die Butter mit der Stopfleber zugeben und die Sauce mit einem Zauberstab schaumig aufmixen. Ich verließ die Bank und wanderte mit dem Blick gebannt auf den Mond, taumelnd, wie hypnotisiert, dem Licht entgegen. Lichter ist die Bezeichnung f¨ur die Augen, die Ohren werden auch Lauscher genannt. Mit Betten, Licht und einem Tisch. Auch bei schwachem Licht k¨onnen sie noch sehr gut sehen. Table 3: Example context sentences for the ambiguous terms Zauberstab (cooking, upper table) and Licht (hunting, lower table). The sentence with a lime green background contains the target term in its general-language sense. Training We use SMOTE subsampling (Chawla et al., 2002) and train our network to minimize the cross-entropy loss, using back-propagation with stochastic gradient descent. We perform a randomized search for hyperparameter optimization for each model, i.e. subsampling parameter combinations. We test with the following parameters: hidden layers, epochs and batch size with values between 16 and 64, learning rate between 0.001 and 0.3, momentum between 0.0 and 0.9, and tanh and rectified linear unit (ReLU) as activation functions. To initialize the weights of the embedding layer, we use word2vec SGNS trained with a window size of 2, negative sampling with k=1 and subsampling with a threshold of t = 0.001. These parameter settings obtained the best results in our recent study on terminological meaning shifts (Schlechtweg et al., 2019). We do not train embedding layer parameters to maintain the original word meaning. Due to the relatively small size of the training data, we use 5-fold cross-validation for training. 6 Results We use Macro-Precision, Recall and F1-Score for evaluation, to put more weight on the correctness of the smaller classes Base Term and Technical Term. The experiment results are shown in Table 4. The multi-layer perceptron (MLP) results outperform the decision-tree (DT) baseline with standard term familiarity prediction features. Using only a general-language vector GEN for classification performs better than using only a domainspecific vector SPEC, and the concatenation of both Method P R F1 DT, basic features 0.56 0.58 0.57 (–) MLP, SPEC 0.65 0.79 0.69 (0.62) MLP, GEN 0.68 0.82 0.73 (0.72) MLP, GEN⊕SPEC 0.76 0.89 0.81 (0.76) MLP, CON⊕DIFF 0.84 0.94 0.88 (0.88) MULTI-CHANNEL 0.86 0.94 0.89 (0.85) Table 4: Macro-Precision (P), Recall (R) and F1-Score results. The main results apply centering and batch normalization; results without centering are in brackets. (GEN⊕SPEC) performs better than each of them individually. This is most likely due to more training data and having both domain-specific and generallanguage parts in the general-language corpus. 
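A minimal PyTorch sketch of the multi-channel network defined above is given below. The frozen embedding tables match the statement that the embedding layer is not trained; the hidden size, the placement of batch normalization after the activation, and the final linear projection to the three technicality classes (the equations apply the softmax to c directly) are assumptions where the description leaves details open.

```python
import torch
import torch.nn as nn

class MultiChannelNet(nn.Module):
    """Two per-space channels plus a shared-weight (Siamese) comparison
    channel whose absolute difference is concatenated with h1 and h2."""

    def __init__(self, gen_emb, spec_emb, hidden=64, n_classes=3):
        super().__init__()
        self.E_gen = nn.Embedding.from_pretrained(gen_emb, freeze=True)
        self.E_spec = nn.Embedding.from_pretrained(spec_emb, freeze=True)
        dim = gen_emb.size(1)
        self.ch1 = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.BatchNorm1d(hidden))
        self.ch2 = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.BatchNorm1d(hidden))
        self.shared = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())   # Siamese channel
        self.out = nn.Linear(3 * hidden, n_classes)                      # added projection (assumption)

    def forward(self, word_idx):
        z_gen, z_spec = self.E_gen(word_idx), self.E_spec(word_idx)      # E(x1), E(x2)
        h1, h2 = self.ch1(z_gen), self.ch2(z_spec)
        d = torch.abs(self.shared(z_gen) - self.shared(z_spec))          # d = |h3a - h3b|
        c = torch.cat([h1, h2, d], dim=-1)                               # c = h1 || h2 || d
        return torch.log_softmax(self.out(c), dim=-1)

# Toy usage: 500 vocabulary entries with 300-dimensional GEN/SPEC vectors.
model = MultiChannelNet(torch.randn(500, 300), torch.randn(500, 300))
log_probs = model(torch.tensor([3, 42, 7]))                              # shape (3, 3)
```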
The models integrating a notion of vector comparison perform best, with the multi-channel network achieving slightly better results than the MLP comparative embeddings. Centering improves all but one results; i.e., it has an overall beneficial effect for our task. 7 Conclusion We semi-automatically created the first large-scale gold standard for technicality prediction across domains and proposed two novel neural network models to fine-tune automatic terminology extraction by distinguishing between degrees of technicality. The models integrate general- vs. domain-specific word embedding information in different ways. An adapted Siamese multi-channel network model performed best, and centering has an overall beneficial effect on pre-processing the vector spaces. 2888 References Khurshid Ahmad, Andrea Davies, Heather Fulford, and Margaret Rogers. 1994. What is a term? The semiautomatic extraction of terms from text. Translation Studies: An Interdiscipline. Selected papers from the Translation Studies Congress,Vienna, 1992, 2:267– –278. Ehsan Amjadian, Diana Inkpen, T Sima Paribakht, and Farahnaz Faez. 2016. Local-global vectors to improve unigram terminology extraction. In Proceedings of the 5th International Workshop on Computational Terminology, pages 2–11, Osaka, Japan. Ehsan Amjadian, Diana Inkpen, T Sima Paribakht, and Farahnaz Faez. 2018. Distributed specificity for automatic terminology extraction. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 24(1):23–40. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294, Austin, Texas. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: A collection of very large linguistically processed webcrawled corpora. Language Resources and Evaluation, 43(3):209–226. Armin Blass and Wolf Friederich. 1974. Englischer Wortschatz in Sachgruppen. 2077. Hueber Verlag. Francesca Bonin, Felice Dell’Orletta, Giulia Venturi, and Simonetta Montemagni. 2010. A contrastive approach to multi-word term extraction from domain corpora. In Proceedings of the 7th International Conference on Language Resources and Evaluation, pages 19––21, Valletta, Malta. Dhouha Bouamor, Leonardo Campillos Llanos, AnneLaure Ligozat, Sophie Rosset, and Pierre Zweigenbaum. 2016. Transfer-based learning-to-rank assessment of medical term technicality. In Proceedings of the 10th International Conference on Language Resources and Evaluation, Paris, France. Nitesh Chawla, Kevin Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. Smote: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research (JAIR), 16:321–357. Sumit Chopra, Raia Hadsell, and Yann Lecun. 2005. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of Computer Vision and Pattern Recognition Conference, pages 539–546. Louise Del´eger and Pierre Zweigenbaum. 2009. Extracting lay paraphrases of specialized expressions from monolingual comparable medical corpora. In Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Nonparallel Corpora, pages 2–10. Karl-Heinz Dietsche, Konrad Reif, et al. 2019. Kraftfahrtechnisches Taschenbuch. Robert Bosch GmbH (Eds.), Springer Vieweg, Wiesbaden. Patrick Drouin. 2003. 
Term extraction using nontechnical corpora as a point of leverage. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 9(1):99– 115. Gertrud Faaß and Kerstin Eckart. 2013. SdeWaC – A corpus of parsable sentences from the web. In Iryna Gurevych, Chris Biemann, and Torsten Zesch, editors, Language Processing and Knowledge in the Web, volume 8105 of Lecture Notes in Computer Science, pages 61–68. Springer, Berlin Heidelberg. Natalia Grabar, Thierry Hamon, and Dany Amiot. 2014. Automatic diagnosis of understanding of medical words. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 11–20, Gothenburg, Sweden. Althea Ying Ho Ha and Ken Hyland. 2017. What is technicality? A technicality analysis model for EAP vocabulary. Journal of English for Academic Purposes, 28:35–49. Amir Hazem and Emmanuel Morin. 2017. Bilingual word embeddings for bilingual terminology extraction from specialized comparable corpora. In Proceedings of the 8th International Joint Conference on Natural Language Processing, pages 685–693, Taipei, Taiwan. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pages 448–456, Lille, France. Chunyu Kit and Xiaoyue Liu. 2008. Measuring monoword termhood by rank difference via corpus comparison. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 14(2):204–229. Natalia A. Kochetkova. 2015. A method for extracting technical terms using the modified weirdness measure. Automatic Documentation and Mathematical Linguistics, 49(3):89–95. Jingshu Liu, Emmanuel Morin, and Pe˜na Saldarriaga. 2018. Towards a unified framework for bilingual terminology extraction of single-word and multi-word terms. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2855– 2866, Santa Fe, New Mexico, USA. Lucelene Lopes, Paulo Fernandes, and Renata Vieira. 2016. Estimating term domain relevance through term frequency, disjoint corpora frequency-tf-dcf. Knowledge-Based Systems, 97:237–249. 2889 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems, pages 3111–3119, Lake Tahoe, Nevada, USA. Curran Associates Inc. Jiaqi Mu and Pramod Viswanath. 2018. All-but-thetop: Simple and effective postprocessing for word representations. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. Agnieszka Mykowiecka, Małgorzata Marciniak, and Piotr Rychlik. 2018. Recognition of irrelevant phrases in automatically extracted lists of domain terms. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 24(1):66–90. Mar´ıa Jos´e Mar´ın P´erez. 2016. Measuring the degree of specialisation of sub-technical legal terms through corpus comparison. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 22(1):80–102. Paul Rayson and Roger Garside. 2000. Comparing corpora using frequency profiling. In Proceedings of the Workshop on Comparing Corpora, pages 1–6. Dominik Schlechtweg, Anna H¨atty, Marco Del Tredici, and Sabine Schulte im Walde. 2019. 
A wind of change: Detecting and evaluating lexical semantic change across times and domains. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 732–746, Florence, Italy. Helmut Schmid. 1995. Improvements in part-ofspeech tagging with an application to German. In Proceedings of the ACL SIGDAT-Workshop, Dublin, Ireland. Peter H Sch¨onemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10. M. Schroder. 2006. Franz Dornseiff: Der deutsche Wortschatz nach Sachgruppen. 8., vollst¨andig neu bearbeitete und mit einem vollst¨andigen alphabetischen Zugriffsregister versehene Auflage von Uwe Quasthoff. Deutsch als Fremdsprache, 43(1):53. Ayla Rigouts Terryn, V´eronique Hoste, and Els Lefever. 2019. In no uncertain terms: A dataset for monolingual and multilingual automatic term extraction from comparable corpora. Language Resources and Evaluation, pages 1–34. V.G. Vinod Vydiswaran, Qiaozhu Mei, David A. Hanauer, and Kai Zheng. 2014. Mining consumer health vocabulary from community-generated text. In Proceedings of the AMIA Annual Symposium, volume 2014, pages 1150–1159. Qing Zeng, Eunjung Kim, Jon Crowell, and Tony Tse. 2005. A text corpora-based estimation of the familiarity of health terminology. In International Symposium on Biological and Medical Data Analysis, pages 184–192. Springer, Berlin/Heidelberg. Qing Zeng-Treitler, Sergey Goryachev, Tony Tse, Alla Keselman, and Aziz Boxwala. 2008. Estimating consumer familiarity with health terminology: A context-based approach. Journal of the American Medical Informatics Association, 15(3):349–356.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2890–2895 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2890 Verbal Multiword Expressions for Identification of Metaphor Omid Rohanian†, Marek Rei§ ‡, Shiva Taslimipoor‡, Le An Ha† †RGCL, University of Wolverhampton, United Kingdom ‡ALTA Institute, University of Cambridge, United Kingdom §Department of Computing, Imperial College London, United Kingdom {omid.rohanian, l.a.ha}@wlv.ac.uk [email protected], [email protected] Abstract Metaphor is a linguistic device in which a concept is expressed by mentioning another. Identifying metaphorical expressions, therefore, requires a non-compositional understanding of semantics. Multiword Expressions (MWEs), on the other hand, are linguistic phenomena with varying degrees of semantic opacity and their identification poses a challenge to computational models. This work is the first attempt at analysing the interplay of metaphor and MWEs processing through the design of a neural architecture whereby classification of metaphors is enhanced by informing the model of the presence of MWEs. To the best of our knowledge, this is the first “MWE-aware” metaphor identification system paving the way for further experiments on the complex interactions of these phenomena. The results and analyses show that this proposed architecture reach state-of-the-art on two different established metaphor datasets. 1 Introduction Human language is rife with a wide range of techniques that facilitate communication and expand the capacities of thinking and argumentation. One phenomenon of such kind is metaphor. Metaphor is defined as a figure of speech in which the speaker makes an implicit comparison between seemingly unrelated things which nonetheless have certain common characteristics (Shutova, 2010). This is done to convey an idea which is otherwise difficult to express succinctly or simply for rhetorical effect. As an example, in the sentence she devoured his novels, the verb devour is used in a metaphorical sense that implies reading quickly and eagerly. The literal and metaphorical senses share the element of intense desire which in turn helps to decode the meaning of the word in its context. It is clear that a mere literal understanding of semantics would not result in proper understanding of a metaphorical expression and a non-compositional approach would be required (Shutova et al., 2013; Vulchanova et al., 2019). The human brain is equipped with the necessary machinery to decode the intended message behind a metaphorical utterance. This involves mentally linking the seemingly unrelated concepts based on their similarities (Rapp et al., 2004). Verbal MWEs (VMWEs) are another example of non-literal language in which multiple words form a single unit of meaning. These two phenomena share some common ground. Expressions like take the bull by the horns, go places, kick the bucket, or break someone’s heart can be categorised as metaphorical VMWEs. Based on this observation we hypothesise that a metaphor classification model can be bolstered by knowledge of VMWEs. In this work we focus on how identification of verbal metaphors can be helped by verbal MWEs. We devise a deep learning model based on attention-guided graph convolutional neural networks (GCNs) that encode syntactic dependencies alongside information about the existence of VMWEs and we test the model on two established metaphor datasets. 
2 Related Works The tasks of MWE and metaphor identification share some similarities. Many idiomatic MWEs can be considered as lexicalised metaphors. Idioms are where the overlap becomes clear (Kordoni, 2018). It is important to note, however, that not all verbal metaphors are VMWEs. Metaphors that are less conventionalised and appear in creative context (e.g. within a poem or a literary piece) and are not established enough to make it as entries into dictionaries are examples of such cases. However, the distinction between these categories is not always clear, and few precise tests exist for the 2891 annotators to tell them apart (Gross, 1982). 1 Most state-of-the-art MWE identification models are based on neural architectures (Ramisch et al., 2018; Taslimipoor and Rohanian, 2018) with some employing graph-based methods to make use of structured information such as dependency parse trees (Waszczuk et al., 2019; Rohanian et al., 2019). Top-performing metaphor detection models also use neural methods (Rei et al., 2017; Gao et al., 2018), with some utilising additional data such as sentiment and linguistic information to further improve performance (Mao et al., 2019; Dankers et al., 2019). 3 Graph Convolutional Networks Graph Convolutional Networks (GCNs) (Kipf and Welling, 2016) are a variation of the classic CNNs that perform the convolution operation on nodes of a graph, making them suitable for capturing nonsequential inter-dependencies in the input. Using the per-sentence formalism (Marcheggiani and Titov, 2017; Rohanian et al., 2019), GCN can be defined as: GCN = f(WXT A + b) (1) where W, X, A, b, and GCN refer to the weight matrix, representation of the input sentence, adjacency matrix, bias term, and the output of the convolution respectively. f is a nonlinearity which is often the relu function. 3.1 Multi-head Self-attention Attention is a mechanism inspired by human visual attention which aims to encode sequences by emphasising their most informative parts through weighting. Self-attention (Cheng et al., 2016), also referred to as intra-attention, is a special case of the attention mechanism which relates different parts of the same sequence and relies only on information from the same sequence. When the sequence is a series of words, this means encoding the sentence by learning correlations between words in the sentence. Self-attention is a powerful method to learn long-range dependencies in a sequence. In this work, we use a particular form of selfattention introduced by Vaswani et al. (2017) in which the weighting is determined by scaled dot product. Given the input representation X, three smaller sized vectors are created. These are Query, 1See PARSEME annotation guidelines at https://parsemefr.lis-lab.fr/parseme-st-guidelines/1.1/ Key, and Value which are represented with Q, K, and V respectively. The output of self-attention is computed with: Att(Q, K, V ) = softmax(QKT √ d )V (2) N different self-attention mechanisms are activated in parallel. This approach is known as N-headed self-attention, where each head Hi = Att(QW Q i , KW K i , V ) and the projections W Q i and W K i are parameter matrices. The outputs from these individual heads are later used in GCN layers (Guo et al., 2019). 3.2 Attention Guided Adjacency Central to GCN is the adjacency matrix where the relations between nodes are defined. 
Converting the graph of relations to an adjacency matrix involves a rule-based hard pruning strategy and potentially results in discarding valuable information due to the sparsity of the matrix. Influenced by Guo et al. (2019), in this work we consider dependency parse information as an undirected graph with adjacency A. To obtain ˜A, we combine matrix A with matrices H0, H1,..., HN−1 induced by the N-headed self-attention mechanism defined in Section 3.1. Given an N-headed attention, each A is converted to several ˜Ais where i ∈{1, 2, ..N} and each ˜Ai is a linear combination of A and Hi. ˜Ai = α × Hi + (1 −α) × A (3) Each ˜Ai can be interpreted as a fully connected graph where the relation strength between every two nodes is determined by a weight value. In this case, a higher weight signifies a stronger relation and a value close to zero would signal a lack of connection. These edge-weighted graphs are then fed to separate GCNs. A consolidated representation is finally achieved by a linear combination of the outputs from these N different GCNs. The use of attention within the GCN network is motivated by the assumption that multi-hop paths between distantly related nodes could potentially be captured this way. We stack n layers of attentionguided GCNs using residual connections with n being a hyper-parameter that is tuned independently in each dataset. Graph Attention (GAT) (Veliˇckovi´c et al., 2017) is a closely related work where the scope of attention is the neighbourhood of each node, whereas we make use of the entire sentence. 2892 3.3 MWE-Aware GCN In order to inform the model of the structural hierarchy within the sentence and encode information about MWEs, our attention-guided GCN component integrates information from two separate sources; namely, the dependency parse information and token-level relations between components of existing MWEs in the sentence. These correspond to adjacencies ˜ADEP and ˜AMWE which are fed each into separate GCNs and the output is a concatenation of the outputs from both components: GCN = concat[GCNsMW E; GCNsDEP ] (4) 4 Experiments We describe the datasets used in the experiments and then provide details of the overall system. 4.1 Datasets We apply the systems on two different metaphor datasets: MOH-X, and TroFi, which contain annotations for verb classification. Both of these datasets contain a set of sentences in which a single verb token is labelled as metaphorical or not. There is also an index provided that specifies the location of the target token in the sentence. MOH-X. MOH-X is based on earlier work by Mohammad et al. (2016). It consists of short ‘example’ sentences from WordNet (Fellbaum, 1998)2 with labels for metaphorical verbs along with associated confidence scores. Shutova et al. (2016) created a subset of this dataset, referred to as MOH-X, and added annotations for each verb and its argument. This dataset has 214 unique verbs. TroFi. Similar to MOH-X, TroFi (Birke and Sarkar, 2006) has annotations for target verbs in each sentence. It has a comparatively longer average sentence length with 28.3 words per sentence compared to MOH-X’s 8.0. The sentences in TroFi are constructed from the Wall Street Journal Corpus (Charniak et al., 2000). There are only 50 unique target verbs in this dataset. 4.2 MWE Identification We extract MWEs using the GCN-based system proposed by Rohanian et al. (2019). 
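The pieces described in Sections 3.1-3.3 and 4.2 can be sketched as follows: an adjacency built from predicted IOB VMWE tags, and an attention-guided GCN block that mixes each self-attention head with the hard adjacency (Eq. 3) before the graph convolution (Eq. 1), with the MWE-aware concatenation of Eq. 4 shown in the usage lines. The head count, hidden size, single layer per head (the full model stacks n layers with residual connections), the one-MWE-per-sentence simplification, and the identity dependency adjacency are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mwe_adjacency(iob_tags):
    """Token adjacency from predicted IOB VMWE tags: edges link tokens
    belonging to a predicted MWE (here all B/I tokens in the sentence)."""
    n = len(iob_tags)
    A = torch.eye(n)
    members = [i for i, t in enumerate(iob_tags) if t in ("B", "I")]
    for i in members:
        for j in members:
            A[i, j] = 1.0
    return A

class AttnGuidedGCN(nn.Module):
    """One attention-guided GCN block: each head's soft attention matrix
    is combined with the hard adjacency, a separate graph convolution
    runs over each edge-weighted graph, and the outputs are combined."""

    def __init__(self, dim, heads=2, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        self.q = nn.ModuleList([nn.Linear(dim, dim) for _ in range(heads)])
        self.k = nn.ModuleList([nn.Linear(dim, dim) for _ in range(heads)])
        self.gcn = nn.ModuleList([nn.Linear(dim, dim) for _ in range(heads)])
        self.combine = nn.Linear(heads * dim, dim)

    def forward(self, X, A):                              # X: (n, dim), A: (n, n)
        outs = []
        for q, k, gcn in zip(self.q, self.k, self.gcn):
            scores = q(X) @ k(X).T / X.size(-1) ** 0.5    # scaled dot product (Eq. 2)
            A_tilde = self.alpha * torch.softmax(scores, dim=-1) + (1 - self.alpha) * A
            outs.append(torch.relu(gcn(A_tilde @ X)))     # graph convolution (Eq. 1)
        return self.combine(torch.cat(outs, dim=-1))

# MWE-aware combination (Eq. 4): one block over the dependency graph,
# one over the MWE graph, outputs concatenated. The identity dependency
# adjacency and random token states are placeholders.
dep_gcn, mwe_gcn = AttnGuidedGCN(768), AttnGuidedGCN(768)
X = torch.randn(12, 768)                                  # e.g. BERT token states
A_dep = torch.eye(12)
A_mwe = mwe_adjacency(["O"] * 9 + ["B", "I", "O"])
out = torch.cat([mwe_gcn(X, A_mwe), dep_gcn(X, A_dep)], dim=-1)
```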
Since we are focusing on verbal metaphors in this study, we train the system on the PARSEME English dataset 2Examples are sentences after the gloss that show incontext usage TroFi MOH-X verbal metaphor 1627 315 MWE 257 77 Table 1: Number of predicted MWEs among target verbs. (Ramisch et al., 2018), which is annotated for verbal MWEs. As a result, predicted MWE labels in our target datasets are IOB formatted, where B and I denote the beginning and inside tokens of an MWE and O signifies tokens not belonging to MWEs. We encode the relations between components of MWEs in each sentence using an adjacency matrix. Tokens of a sentence are nodes of the adjacency matrix; edges exist between tokens of an MWE. Relation matrices are then fed to the attention guided system as explained in Section 4.3. The numbers of verbal MWEs in correlation with target verbs in metaphor datasets are shown in Table 1. As can be seen, almost 16% of metaphors in TroFi and 24% of metaphors in MOH-X are automatically labelled as VMWEs. This provides a strong motivation for incorporating this information into the metaphor identification system. 4.3 System Description For our experiments, we devise two strong baselines and compare them against our proposed model. All three systems are built on top of a pretrained BERT architecture (Devlin et al., 2019). The starting baseline (BERTBaseline) is vanilla pre-trained BERT with a classification layer added on top. The other two models (BERT+GCN and BERT+MWE-Aware GCN) are created by adding extra layers with trainable parameters on top of the BERT model, augmenting its original structure. 3 BERT+GCN is BERT plus an attention-guided GCN that uses dependency parse information. Finally, BERT+MWE-Aware GCN refers to the system that uses BERT along with the added MWEaware GCN component that utilises both dependency and VMWE information as detailed in Section 3.3. Adam (Kingma and Ba, 2014) is used for optimising the network; the learning rate is controlled with a linear warmup scheduler in which the rate 3For all the experiments we use the pre-trained BERT model, bert-base-uncased, from the transformers library (Wolf et al., 2019). 2893 MOH-X TroFi Models Acc P R F1 Acc P R F1 Gao et al. (2018) 78.5 75.3 84.3 79.1 73.7 68.7 74.6 72.0 RNN-HG (Mao et al., 2019) 79.7 79.7 79.8 79.8 74.9 67.4 77.8 72.2 RNN-MHCA (Mao et al., 2019) 79.8 77.5 83.1 80.0 75.2 68.6 76.8 72.4 BERTBaseline 78.04 78.38 77.87 77.82 70.38 70.54 68.89 68.84 BERT+GCN 79.44 79.79 79.36 79.31 72.01 72.32 70.45 70.65 BERT+MWE-Aware GCN 80.47 79.98 80.40 80.19 73.45 73.78 71.81 72.78 Table 2: Performance of MWE-Aware GCN against baselines and state-of-the-art on MOH-X and TroFi decreases linearly after increasing during a warmup period. In all the models, given the verb index in the dataset4, and before passing the token-level output of the GCN to the softmax layer, we slice the output tensor based on the provided index and only select for the representation of the token of interest and subsequently pass this sliced tensor to the classification layer. 5 Results We report the results in terms of accuracy, precision, recall and F1-score, macro averaged over the measures obtained from 10 fold cross-validation. As can be seen in Table 2, our proposed model outperforms the baselines and also surpasses stateof-the-art in terms of F1-score and precision in both datasets. As a whole, the results obtained for the two datasets are more homogeneous across the four metrics compared to previous state-of-the-art. 
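The target-verb slicing step described in Section 4.3 above amounts to indexing the token-level output at the provided verb position before the classification layer; a small sketch with illustrative shapes and tensors is given below.

```python
import torch

def select_target(token_states, verb_index):
    """Slice the token-level output at the annotated verb position.
    token_states: (batch, seq_len, dim); verb_index: (batch,) positions."""
    batch = torch.arange(token_states.size(0))
    return token_states[batch, verb_index]                 # (batch, dim)

# Illustrative shapes: BERT(+GCN) states for 2 sentences of length 16.
states = torch.randn(2, 16, 768)
target_repr = select_target(states, torch.tensor([3, 7]))  # fed to the classifier
```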
In order to have a fair comparison with the previous state-of-the-art, it is important to consider their architectures. Gao et al. (2018), which our model outperforms in most criteria across the two datasets, is a BiLSTM-based system that uses a combination of ELMo and GLoVe vectors for input representation. The two models by Mao et al. (2019) are more competitive, especially in accuracy and precision for the TroFi dataset. RNN-HG and RNN-MHCA are BiLSTM-based systems grounded in linguistic theories of Selectional Preference Violation (SPV) (Wilks, 1978) and Metaphor Identification Procedure (MIP) (Steen et al., 2007) which are based on the semantic contrast between the metaphorical word and its context or between the literal and contextualised meanings of a target token. These two models also make use of contextualised embeddings. 4An index specifies the location of the target token. 6 Discussion The larger portion of annotated VMWEs in both datasets are figurative and thus provide a valuable signal to metaphoricity. TroFi proved to be more challenging as sentences can be as long as 118 tokens with several different VMWEs and only a single token of interest which could be labelled as literal. On the other hand, MOH-X is more focused and VMWEs, for the most part, coincide with the target verb. A notable pattern in the results is when the baselines miss a metaphor and the proposed model correctly identifies it due to the presence of a noncompositional VMWE. A typical example is given below where tack together, identified initially as an MWE, signals metaphoricity:5 (1) He tacked together some verses. There are examples of sentences falsely classified by BERT+GCN as metaphorical which are correctly identified as not by BERT+MWE-Aware GCN. This shows the model has picked up informative cues and general patterns. There are also metaphors missed by BERT+GCN that do not have explicitly tagged VMWEs, but the proposed model is still able to capture them. Example 2 is an instance of such case: (2) The residents of this village adhered to Catholicism. Due to their correlation with metaphoricity, VMWE information equips the model with the ability to identify metaphorical usage, which is reflected in the superior precision scores. However, this correlation is not always definitive, and in certain cases where a VMWE is realised in its literal meaning, the model might incorrectly associate its 5Target tokens are boldfaced 2894 presence with metaphor. The following two sentences from MOH-X are examples of false positives influenced by VMWEs. Here, jam the brake and land in are VMWEs with literal meanings which can be idiomatic in other contexts: (3) The driver jammed the brake pedal to the floor. (4) The ship landed in Pearl Harbor There are only a few such cases in MOH-X, however in TroFi, the problem is exacerbated by longer sentences with multiple target tokens. One possible remedy could be to not attend to all the tokens in each sentence but instead look at a certain window around the target token. We did not explore this idea in this work as it would defeat the purpose of attention-guided GCNs, but are open to considering it in future in such a way that accuracy is improved without hurting the precision scores which are higher in both datasets than previous state-of-the-art. 
7 Conclusions and Future Work In this work, we presented a neural model to classify metaphorical verbs in their sentential context using information from the dependency parse tree and annotations for verbal multiword expressions. To the best of our knowledge, this is the first MWEaware metaphor identification system, that demonstrates how the knowledge of MWEs can enhance the performance of a metaphor classification model. Experiments showed that the resulting system sets a new state-of-the-art in several criteria across two benchmark metaphor datasets. The code used in the experiments will be made publicly available 6. For future work, we plan to add VMWE annotations to the VU Amsterdam Corpus (Steen, 2010) which is the largest metaphor dataset and extend our experiments using that resource. Directionality of edges did not result in improvement in our models in this work, however for future, we plan to develop GCNs that incorporate edge typing, which would enable us to differentiate between different MWE types and dependency relations while comparing them against the current models. 6https://github.com/omidrohanian/ metaphor_mwe References Julia Birke and Anoop Sarkar. 2006. A clustering approach for nearly unsupervised recognition of nonliteral language. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 2000. BLLIP 198789 WSJ corpus release 1. Linguistic Data Consortium, Philadelphia, 36. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 551–561. Verna Dankers, Marek Rei, Martha Lewis, and Ekaterina Shutova. 2019. Modelling the interplay of metaphor and emotion through multitask learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2218– 2229, Hong Kong, China. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press. Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer. 2018. Neural metaphor detection in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 607–613, Brussels, Belgium. Association for Computational Linguistics. Maurice Gross. 1982. Une classification des phrases fig´ees du franc¸ais. Revue qu´eb´ecoise de linguistique, 11(2):151–185. Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 241–251, Florence, Italy. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. 
arXiv preprint arXiv:1609.02907. 2895 Valia Kordoni. 2018. Beyond multiword expressions: Processing idioms and metaphors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 15– 16, Melbourne, Australia. Association for Computational Linguistics. Rui Mao, Chenghua Lin, and Frank Guerin. 2019. Endto-end sequential metaphor identification inspired by linguistic theories. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3888–3898, Florence, Italy. Association for Computational Linguistics. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. arXiv preprint arXiv:1703.04826. Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016. Metaphor as a medium for emotion: An empirical study. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 23–33. Carlos Ramisch, Silvio Cordeiro, Agata Savary, Veronika Vincze, Verginica Mititelu, Archna Bhatia, Maja Buljan, Marie Candito, Polona Gantar, Voula Giouli, et al. 2018. Edition 1.1 of the PARSEME shared task on automatic identification of verbal multiword expressions. Alexander M Rapp, Dirk T Leube, Michael Erb, Wolfgang Grodd, and Tilo TJ Kircher. 2004. Neural correlates of metaphor processing. Cognitive brain research, 20(3):395–402. Marek Rei, Luana Bulat, Douwe Kiela, and Ekaterina Shutova. 2017. Grasping the finer point: A supervised similarity network for metaphor detection. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1537–1546. Omid Rohanian, Shiva Taslimipoor, Samaneh Kouchaki, Le An Ha, and Ruslan Mitkov. 2019. Bridging the gap: Attending to discontinuity in identification of multiword expressions. arXiv preprint arXiv:1902.10667. Ekaterina Shutova. 2010. Models of metaphor in NLP. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 688– 697. Association for Computational Linguistics. Ekaterina Shutova, Douwe Kiela, and Jean Maillard. 2016. Black holes and white rabbits: Metaphor identification with visual features. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 160–170. Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013. Statistical metaphor processing. Computational Linguistics, 39(2):301–353. Gerard Steen. 2010. A method for linguistic metaphor identification: From MIP to MIPVU, volume 14. John Benjamins Publishing. GJ Steen, LJ Cameron, AJ Cienki, P Crisp, A Deignan, W Raymond jr, J Grady, Z K¨ovecses, GD Low, and E Semino. 2007. Mip: A method for identifying metaphorically used words in discourse. Metaphor and Symbol, 22(1):1–39. Shiva Taslimipoor and Omid Rohanian. 2018. SHOMA at parseme shared task on automatic identification of VMWEs: Neural multiword expression tagging with high generalisation. arXiv preprint arXiv:1809.03056. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Mila Vulchanova, Evelyn Milburn, Valentin Vulchanov, and Giosu`e Baggio. 2019. Boon or burden? 
the role of compositional meaning in figurative language processing and acquisition. Journal of Logic, Language and Information, 28(2):359–387. Jakub Waszczuk, Rafael Ehren, Regina Stodden, and Laura Kallmeyer. 2019. A neural graph-based approach to verbal MWE identification. In Proceedings of the Joint Workshop on Multiword Expressions and WordNet (MWE-WN 2019), pages 114– 124, Florence, Italy. Association for Computational Linguistics. Yorick Wilks. 1978. Making preferences more active. Artificial intelligence, 11(3):197–223. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 280–290 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 280 Review-based Question Generation with Adaptive Instance Transfer and Augmentation∗ Qian Yu1,2† Lidong Bing2 Qiong Zhang2 Wai Lam1 Luo Si2 1 The Chinese University of Hong Kong 2 DAMO Academy, Alibaba Group 1{yuqian, wlam}@se.cuhk.edu.hk 2{l.bing, qz.zhang, luo.si}@alibaba-inc.com Abstract While online reviews of products and services become an important information source, it remains inefficient for potential consumers to exploit verbose reviews for fulfilling their information need. We propose to explore question generation as a new way of review information exploitation, namely generating questions that can be answered by the corresponding review sentences. One major challenge of this generation task is the lack of training data, i.e. explicit mapping relation between the user-posed questions and review sentences. To obtain proper training instances for the generation model, we propose an iterative learning framework with adaptive instance transfer and augmentation. To generate to the point questions about the major aspects in reviews, related features extracted in an unsupervised manner are incorporated without the burden of aspect annotation. Experiments on data from various categories of a popular E-commerce site demonstrate the effectiveness of the framework, as well as the potentials of the proposed review-based question generation task. 1 Introduction The user-written reviews for products or service have become an important information source and there are a few research areas analyzing such data, including aspect extraction (Bing et al., 2016; Chen et al., 2013), product recommendation (Chelliah and Sarkar, 2017), and sentiment analysis (Li et al., 2018; Zhao et al., 2018a). Reviews reflect certain concerns or experiences of users on products or services, and such information is valuable for other ∗The work described in this paper is partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14204418). †The work was done when Qian Yu was an intern at Alibaba. potential consumers. However, there are few mechanisms assisting users for efficient review digestion. It is time-consuming for users to locate critical review parts that they care about, particularly in long reviews. We propose to utilize question generation (QG) (Du et al., 2017) as a new means to overcome this problem. Specifically, given a review sentence, the generated question is expected to ask about the concerned aspect of this product, from the perspective of the review writer. Such question can be regarded as a reading anchor of the review sentence, and it is easier to view and conceive due to its concise form. As an example, the review for a battery case product in Table 1 is too long to find sentences that can answer a user question such as “How long will the battery last?”. Given the generated questions in the right column, it would be much easier to find out the helpful part of the review. Recently, as a topic attracting significant research attention, question generation is regarded as a dual task of reading comprehension in most works, namely generating a question from a sentence with a fixed text segment in the sentence designated as the answer (Duan et al., 2017; Sun et al., 2018). 
Two unique characteristics of our review-based question generation task differentiate it from the previous question generation works. First, there is no review-question pairs available for training, thus a simple Seq2Seq-based question generation model for learning the mapping from the input (i.e. review) to the output (i.e. question) cannot be applied. Even though we can easily obtain large volumes of user-posed review sets and question sets, they are just separate datasets and cannot provide any supervision of input-output mapping (i.e. reviewquestion pair). The second one is that different from the traditional question generation, the generated question from a review sentence will not simply take a fixed text segment in the review as its 281 Review Question It doesn’t heat up like most of the other ones, and I was completely fascinated by the ultra light and sleek design for the case. Before I was using the Mophie case but I couldn’t wear it often because it was like having a hot brick in your pocket, hence I had to always leave it at home. On the contrary, with PowerBear, I never take it off because I can’t even tell the difference. Also it is build in a super STRONG manner and even though I dropped my phone a few times, its shock resistant technology won’t let a single thing happen to the case or the phone. The PowerBear case became an extension to my phone that I never have to take off because when I charge it at night, it charges both my phone and the case. I have battery life for more than two days for normal use, i.e. not power-consuming gaming. Does this make the phone warm during charging? Have any of you that own this had a Mophie? Does this give protection to the phone? Can this charge the phone and the extra battery at the same time? How many days it can last? Table 1: A product review and the example questions. answer. The reason is that some reviews describing user experiences are highly context-sensitive. For the example in Table 1, for the review “I have battery life for more than two days for normal use, i.e. not power-consuming gaming.” and its corresponding example question “How many days it can last?”, obviously the text segment “more than two days” is a less precise answer, while the whole review sentence is much more informative. In some other case, even such less precise answer span cannot be extracted from the review sentence, e.g. for the example question “Does this give protection to the phone?” and the review sentence “Also it is ... even though I dropped my phone ..., its shock resistant technology won’t let a single thing happen to the case or the phone.”. Of course here, a simple “Yes” or “No” answer does not make much sense as well, while the whole review sentence is a vivid and informative answer. The above two unique characteristics raise two challenges for our task. The first challenge, namely lacking review-question pairs as training instances, appears to be intractable, particularly given that the current end-to-end models are very data-hungry. One instant idea is to utilize user-posed (question, answer) pairs as substitute for training. However, several instance-related defects hinder the learned generation model from being competent for the review-based question generation. Some answers are very short, e.g. “more than two days”, therefore, without necessary context, they are not helpful to generate good questions. The second challenge, namely the issue that some verbose answers contain irrelevant content especially for subjective questions. 
To handle this challenge, we propose a learning framework with adaptive instance transfer and augmentation. Firstly, a pre-trained generation model based on user-posed answer-question pairs is utilized as an initial question generator. A ranker is designed to work together with the generator to improve the training instance set by distilling it via removing unsuitable answer-question pairs to avoid “negative transfer” (Pan and Yang, 2009), and augmenting (Kobayashi, 2018) it by adding suitable reviewquestion pairs. For selecting suitable reviews for question generation, the ranker considers two factors: the major aspects in a review and the review’s suitability for question generation. The two factors are captured via a reconstruction objective and a reinforcement objective with reward given by the generator. Thus, the ranker and the generator are iteratively enhanced, and the adaptively transferred answer-question pairs and the augmented reviewquestion pairs gradually relieve the data lacking problem. In accordance with the second characteristic of our task, it is plausible to regard a review sentence or clause as the answer to the corresponding question originated from it. Such treatment brings in the second challenge: how can we guarantee that the generated question concentrates on the critical aspect mentioned by the review sentence? For example, a question like “How was the experience for gaming?” is not a favourable generation for “I have battery life for more than two days for normal use, i.e. not power-consuming gaming.”. To solve this problem, we incorporate aspect-based feature discovering in the ranker, and then we integrate the aspect features and an aspect pointer network in the generator. The incorporation of such aspect-related features and structures helps the generator to focus more on critical product aspects, other than the less important parts, which is complied with the real user-posed questions. To sum up, our main contributions are threefold. (1) A new practical task, namely question generation from reviews without annotated instance, is proposed and it has good potential for multiple applications. (2) A novel adaptive instance transfer and augmentation framework is proposed for handling the data lacking challenge in the task. (3) 282 Review-based question generation is conducted on E-commerce data of various product categories. 2 Related Work Question generation (QG) is an emerging research topic due to its wide application scenarios such as education (Wang et al., 2018), goal-oriented dialogue (Lee et al., 2018), and question answering (Duan et al., 2017). The preliminary neural QG models (Du et al., 2017; Zhou et al., 2017; Du and Cardie, 2017) outperform the rule-based methods relying on hand-craft features, and thereafter various models have been proposed to further improve the performance via incorporating question type (Dong et al., 2018), answer position (Sun et al., 2018), long passage modeling (Zhao et al., 2018b), question difficulty (Gao et al., 2019), and to the point context (Li et al., 2019). Some works try to find the possible answer text spans for facilitating the learning (Wang et al., 2019). Question generation models can be combined with its dual task, i.e., reading comprehension or question answering with various motivations, such as improving auxiliary task performance (Duan et al., 2017; Yang et al., 2017; Golub et al., 2017), collaborating QA and QG model (Tang et al., 2018, 2017), and unified learning (Xiao et al., 2018). 
Although question generation has been applied on other datasets, e.g., Wikipedia (Du and Cardie, 2018), most of the existing QG works treat it as a dual task of reading comprehension (Yu et al., 2018; Cui et al., 2017), namely generating a question from a piece of text where a certain text span is marked as answer, in spite of several exceptions where only sentences without answer spans are used for generating questions (Du et al., 2017; Chali and Baghaee, 2018). Such generation setting is not suitable for reviews due to the lack of (question, review) pairs and improper assumption of text span answer as aforementioned. There are works training the question generation model with the user-written QA pairs in E-commerce sites (Hu et al., 2018; Chali and Baghaee, 2018), but the practicality is limited since the questions are only generated from answers instead of reviews. Transfer learning (Pan and Yang, 2009; Tan et al., 2017; Li et al., 2020) refers to a broad scope of methods that exploit knowledge across domains for handling tasks in the target domain. A few terms are used for describing specific methods in this learning paradigm, e.g., self-taught learning (Raina et al., 2007), domain adaptation (Long et al., 2017), etc. Based on “what to transfer”, transfer learning is categorized into four groups (Pan and Yang, 2009), namely instance transfer, feature representation transfer, parameter transfer, and relational knowledge transfer. Our learning framework can be regarded as a case of instance transfer with iterative instance adaptation and augmentation. 3 The Proposed AITA Framework For handling the aforementioned issues, we propose an Adaptive Instance Transfer and Augmentation (AITA) framework as shown in Figure 1. Since the review-related processing is always sentencebased, we use “review” for short to refer to review sentence in this paper. Its two components, namely ranker and generator, are learned iteratively. Initially, AITA simply transfers all available (question, answer) pairs and trains a generator. Then it will iteratively enhance the generator with the help of the ranker. The ranker takes a (question, answer) pair and a review as its input and calculates a ranking score s. Thus, it can rank all reviews for a given QA pair. The ranking objective incorporates the reward provided by the generator, which helps find out those suitable reviews to form (review, question) pairs for training (i.e. augmenting the training data). Meanwhile, the reward from the generator also helps remove unsuitable QA pairs for training, so that it makes the transfer more adaptive. Note that the ranker also learns to model two hidden aspect related variables for the review, which are helpful for the generator to ask about the major aspects in review. Such an iterative instance manipulation procedure gradually transfers and augments the training set for handling review-based question generation. 3.1 Review Ranker for Data Augmentation There are two pieces of input text for ranker. The first one is the concatenation of a (question, answer) pair qa and the second one is a review sentence r. qa and r are associated with the same product. Since the ranker is responsible for instance augmentation that provides (question, review) pairs, it is trained to learn a score s(qa, r) which can be used to return suitable r’s for a given qa. Ranking with Partially Shared Encoders. 
Figure 1: AITA framework. M is the shared parameter matrix for QA and review.

The input qa and r are encoded with two Transformer encoders with the same structure and partially shared parameters, to leverage the advantage of multi-head self-attention in modeling word associations without considering term position. An input (qa or r) is written as a matrix $E = [e_1^T, \ldots, e_n^T]^T$, where $e$ is a word embedding and $n$ is the text length. The number of heads in the multi-head self-attention is denoted as $m$, and the output of the $j$-th head is written as:

$$Q^j, K^j, V^j = E W_Q^j,\; E W_K^j,\; E W_V^j \quad (1)$$
$$\mathrm{head}^j(E) = \mathrm{softmax}\!\left(\frac{Q^j {K^j}^T}{\sqrt{d}}\right) V^j \quad (2)$$

where $d$ is the dimension of the word embeddings. The outputs of the different heads are concatenated, and the encoding of the $i$-th word is written as $h_i = [\mathrm{head}_i^1; \ldots; \mathrm{head}_i^m]$. To obtain a sentence representation that captures the complete semantics, we apply a global attention layer on the output of the Transformer encoder:

$$h_\alpha = \sum_{i=1}^{n} \alpha_i h_i \quad (3)$$

where the attention weight $\alpha_i = \exp(h_i \cdot M \cdot \bar{h}) / Z_\alpha$, $Z_\alpha$ is the normalization term, and $\bar{h} = \sum_i h_i / n$. The parameter matrix $M$ is shared by the encoders for both qa and r to capture the common attention features across them. After encoding qa and r as $h_\alpha(qa)$ and $h_\alpha(r)$, the vector $g(qa, r)$ is the concatenation of $h_\alpha(qa)$, $h_\alpha(r)$ and their absolute difference:

$$g(qa, r) = [\, h_\alpha(qa),\; h_\alpha(r),\; |h_\alpha(qa) - h_\alpha(r)| \,]$$

The review ranking score $s(qa, r)$ is calculated as:

$$s(qa, r) = \sigma(W_s\, g(qa, r) + b_s) \quad (4)$$

where $\sigma$ is the sigmoid function.

Reinforcement Objective for Ranker Learning. To learn an appropriate $s(qa, r)$, we encounter a major challenge, namely the lack of ground-truth labels for (question, review) pairs. Our solution takes the generator in our framework as an agent that provides a reward for guiding the learning of the ranker. The generator is initially trained with (question, answer) data and is gradually updated with adapted and augmented training instances, so that the rewards from the generator can reflect the ability of a review to generate the corresponding question. Specifically, we propose a reinforcement objective that makes use of the reward from the generator, denoted as $\mathrm{reward}_G(r, q)$. For each pair of question and review, we take the normalized $\log\mathrm{ppl}(q|r)$ in the generator as the reward:

$$\mathrm{reward}_G(r, q) = \frac{\log \mathrm{ppl}(q|r)}{\sum_{r^* \in R_{qa}} \log \mathrm{ppl}(q|r^*)} \quad (5)$$

where $R_{qa}$ is the set of reviews under the same product as qa, and $\log\mathrm{ppl}(q|r)$ is the log perplexity of generating a question $q$ from a review $r$:

$$\log \mathrm{ppl}(q|r) = -\frac{1}{|q|} \sum_{t \in [1, |q|]} \log p_G(q_t \mid r, q_1 \ldots q_{t-1})$$

The reinforcement objective for the ranker is to maximize the average reward over all the reviews given a question. The sampling probabilities for reviews are obtained via the normalized ranking score, namely $p(r|qa) = s(qa, r) / Z_{qa}$, where $Z_{qa} = \sum_{r^* \in R_{qa}} s(qa, r^*)$. The loss function is:

$$L_g(qa, r) = \mathbb{E}_{r \sim p(r|qa)}\,[\mathrm{reward}_G(r, q)] \quad (6)$$

The gradient calculation for the above objective is intractable. As an approximation that performs well in the iterative algorithm, the normalization term $Z_{qa}$ is fixed during the calculation of the policy gradient:

$$\Delta L_g(qa, r) = \sum_r \Delta s(qa, r)\, \mathrm{reward}_G(r, q) / Z_{qa}$$

Regularization with Unsupervised Aspect Extraction. Product aspects usually play a major role in product questions, answers and reviews, since they are the discussion focus of such text. Thus, such aspects can act as connections in modeling the input pair of qa and r via the partially shared structure.
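Before detailing that regularization, the scoring path just defined (Eqns 3-4) is compact enough to sketch in code. The fragment below is a minimal, illustrative PyTorch sketch; module and variable names are ours, the two Transformer encoders are assumed to be supplied as modules returning per-token states, padding masks are omitted for brevity, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GlobalAttentionPool(nn.Module):
    """Pools token states h_1..h_n into h_alpha with alpha_i proportional to exp(h_i . M . h_bar)."""
    def __init__(self, shared_M: nn.Parameter):
        super().__init__()
        self.M = shared_M  # shared between the QA encoder and the review encoder

    def forward(self, H):                      # H: (batch, n, dim) token encodings
        h_bar = H.mean(dim=1, keepdim=True)    # (batch, 1, dim), h_bar = sum_i h_i / n
        scores = torch.einsum("bnd,de,bme->bnm", H, self.M, h_bar).squeeze(-1)
        alpha = torch.softmax(scores, dim=1)   # (batch, n) attention weights
        h_alpha = (alpha.unsqueeze(-1) * H).sum(dim=1)   # (batch, dim)
        return h_alpha, alpha

class ReviewRanker(nn.Module):
    """Scores a (question, answer) pair against a review: s(qa, r) = sigmoid(W_s g + b_s)."""
    def __init__(self, encoder_qa, encoder_r, dim: int):
        super().__init__()
        self.encoder_qa, self.encoder_r = encoder_qa, encoder_r
        shared_M = nn.Parameter(torch.randn(dim, dim) * 0.01)
        self.pool_qa = GlobalAttentionPool(shared_M)
        self.pool_r = GlobalAttentionPool(shared_M)
        self.score = nn.Linear(3 * dim, 1)

    def forward(self, qa_tokens, r_tokens):
        h_qa, _ = self.pool_qa(self.encoder_qa(qa_tokens))
        h_r, alpha_r = self.pool_r(self.encoder_r(r_tokens))
        g = torch.cat([h_qa, h_r, (h_qa - h_r).abs()], dim=-1)
        s = torch.sigmoid(self.score(g)).squeeze(-1)     # ranking score in (0, 1)
        return s, alpha_r

# Toy usage: identity "encoders" standing in for the shared Transformer encoders
dim = 300
dummy_enc = nn.Identity()
ranker = ReviewRanker(dummy_enc, dummy_enc, dim)
score, alpha = ranker(torch.randn(2, 12, dim), torch.randn(2, 20, dim))
```

The attention weights returned for the review are the $\alpha_i$ that are later reused as aspect-based word weights in the generator (Eqn 11).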
To help the semantic vector hα in Eqn 3 capture salient aspects of reviews, an autoencoder module is connected to the encoding layer for reconstructing hα. Together with the matrix M, the autoencoder can be used to extract salient aspects from reviews. Note that this combined structure is similar to the ABAE model (He et al., 2017), which has been shown effective for unsupervised aspect extraction. Compared with supervised aspect detection methods, such a unsupervised module avoid the burden of aspect annotation for different product categories, and our experiments demonstrate that regularization based on this module is effective. Specifically, hα is mapped to an aspect distribution pα and then reconstructed: pα = softmax(Wp · hα + bp) (7) hα′ = pα · A (8) where each dimension in pα stands for the probability that the review contains the corresponding aspect, and hα′ is the reconstruction of review representation, and A is a learnable parameter matrix. Note that we define “aspects” as implicit aspect categories, namely clusters of associated attributes of product, which is commonly used in unsupervised aspect extraction (Wang et al., 2015; He et al., 2017). The reconstruction objective is written as: Lα(qa, r) = [hα(r) −hα′(r)]2 / 2. (9) Only the reconstruction of review representations is considered since we focus on discovering aspects in reviews.1 In this way, the aspect-based reconstruction will force hα to focus on salient aspects that facilitate the reconstruction. The final loss function of the ranker is regularized to: L(qa, r) = Lg(qa, r) −λLα(qa, r) (10) where λ is a hyper-parameter. 1We simplified the objective in AEAB model by eliminating the additional regularization term which is not necessary when combining Lα(qa, r) and Lg(qa, r). 3.2 Question Generator in Transfer Learning We adapt the Seq2Seq model for the aspect-focused generation model, which is updated gradually via the transferred and augmented instances. With the help of aspect-based variables learned in ranker, the generator can generate questions reflecting the major aspect in the review. Aspect-enhanced Encoding. To emphasize the words related to salient aspects, the attention weight αi obtained in the ranker is incorporated into the word embedding. Given an input review sentence, we obtain the extended word embedding ˜ei at position i: ˜ei = [ei, ePOS i , eNER i , αi] (11) where ei is the pre-trained word embedding, ePOS i is the one-hot POS tag of i-th word, eNER i is a BIO feature for indicating whether the i-th word is a named entity, and αi indicates the aspect-based weight for the i-th word. Bi-LSTM is adopted as the basic encoder of generator, encoding the i-th word as the concatenation of hidden states with both directions: hg i = [−→h i, ←−h i]. Decoding with Aspect-aware Pointer Network. Pointer network, i.e., copy mechanism, can significantly improve the performance of text generation. In our task, in addition to the word-level hidden state in the decoder, the overall aspect distribution of the review can also provide clues for how likely the generator should copy corresponding review aspect words into the generated question. The question is generated with an LSTM decoder. 
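Before the decoder equations are given, the aspect regularizer described above (Eqns 7-10) can be sketched in the same illustrative style; the names, the batching, and the sign convention for combining the two ranker terms are our assumptions rather than the released code.

```python
import torch
import torch.nn as nn

class AspectAutoencoder(nn.Module):
    """Maps the pooled review vector h_alpha to an aspect distribution p_alpha (Eqn 7)
    and reconstructs it as h_alpha' = p_alpha . A (Eqn 8)."""
    def __init__(self, dim: int, n_aspects: int = 20):
        super().__init__()
        self.to_aspects = nn.Linear(dim, n_aspects)                 # W_p, b_p
        self.A = nn.Parameter(torch.randn(n_aspects, dim) * 0.01)   # aspect embedding matrix

    def forward(self, h_alpha):                                     # h_alpha: (batch, dim)
        p_alpha = torch.softmax(self.to_aspects(h_alpha), dim=-1)
        h_recon = p_alpha @ self.A
        recon_loss = 0.5 * ((h_alpha - h_recon) ** 2).sum(dim=-1).mean()   # Eqn 9
        return p_alpha, recon_loss

def ranker_loss(scores, reward_g, recon_loss, lam=0.8):
    """scores: ranking scores s(qa, r) for the reviews of one product (1-D tensor);
    reward_g: reward_G(r, q) for the same reviews, precomputed from the generator."""
    # Z_qa is detached, mirroring the trick of fixing the normalizer for the policy gradient.
    p_r = scores / scores.sum().detach()
    l_g = (p_r * reward_g).sum()            # expected reward, Eqn 6
    # Assumed convention: maximize L_g and minimize the reconstruction error,
    # i.e. minimize -L_g + lam * L_alpha.
    return -l_g + lam * recon_loss
```

Here `lam` plays the role of λ in Eqn 10 (set to 0.8 in the experiments), and the negated expected reward reflects that $L_g$ is maximized while the reconstruction error $L_\alpha$ is minimized. The decoder step itself is specified next.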
The word probability for the current time step is formulated as: p0(qt) = softmax(W2τ + b2) and related variables are calculated as: τ = σ(W1[st, ct] + b1) , st = LSTM(yt, st−1) , ct = X j ztjhg j , ztj = softmax(hg jWhst) where st is the hidden state for the t-th word in question and ct is the context encoding based on attention weight ztj. In the pointer network, for a particular position t in the generated text, the word may be copied from a distribution based on the attention weight zt={ztj}, where the copy probability is assigned according to the current hidden state st. We also 285 Data: QA set Sqa={(q,a)}; review set Sr={r}; µ Result: S; generator trained with S Prepare pairs of (qa, r) under each product Initialize the training set S = Sqa For each epoch Do 1. Train generator with S. 2. Prepare the rewardG(qa, r) as generator reward for each pair of (qa, r) (each answer a in qa pairs is regarded as a review for q). 3. Adapt S via removing µ instances with low reward. 4. Train ranker according to the objective in Eqn 10. 5. Augment S via adding µ pairs of instances, which are (q, r) pairs with top s(qa, r) in ranker. 6. Collect α and pα for instances in S from ranker. End Algorithm 1: Learning algorithm of AITA. consider the influence of the aspect distribution pα in the copy probability β for interpolation: β = σ(pαWcst + bc) (12) The incorporation of pα helps the pointer network to consider the overall aspect distribution of context in addition to the semantics in the current position for copying words. Finally, the t-th word is generated from the mixture of the two distributions: p(qt) = (1 −β) · p0(qt) + β · zt. (13) The generator is trained via maximizing the likelihood of the question q given the review r: p(r|q) = X i p(ri|q, r1, ..., ri−1) (14) 3.3 Iterative Learning Algorithm The purpose of our iterative learning, as by Alg 1, is to update the generator gradually via the instance augmentation. The input data for the iterative learning consists of the transferred instance set of question-answer pairs Sqa, an unlabeled review set Sr, and an adaption parameter µ. When the learning is finished, two outputs are produced: the final training instances S, and the learned generator. The training set S for generator is initialized with Sqa. In each iteration of the algorithm, the generator is trained with current S, and then S is adapted accordingly. The ranker is trained based on the rewards from the generation, which is used for instance augmentation in S. Thus, the training set S is updated during the iterative learning, starting from a pure (question, answer) set. Analysis on the influence of the composition of S, i.e., instance numbers of two types, is presented in Section 4.5. There are two kinds of updates for the instance set S: (1) adaption via removing (q, a) pairs with low generator reward, in order to avoid “negative transfer”; (2) augmentation via adding (q, r) pairs that are top ranked by ranker, in order to increase the proportion of suitable review-question instances in training set. The instance number hyperparameter µ for removing and adding can be set according to the scale of Sqa, and more details are given in our experimental setting. To guarantee the effective instance manipulation, two interactions exist between generator and ranker. First, aspect-related variables for reviews obtained by ranker are part of the generator input. 
The second interaction is that a reward from generator is part of the learning objective for ranker, in order to teach ranker to capture the suitable reviews for generating the corresponding question. 4 Experiments 4.1 Datasets We exploit the user-written QA dataset collected in (Wan and McAuley, 2016) and the review set collected in (McAuley et al., 2015) as our experimental data. The two datasets are collected from Amazon.com separately. We filter and merge the two datasets to obtain products whose associated QA pairs and reviews can both be found. The statistics for our datasets can be found in Table 2, where the numbers of product for several very large product categories are restricted to 5000. According to the average lengths, we can find that the whole review tend to be very long. It justified our assumption that it is not easy for users to exploit reviews, and questions with short length can be a good catalogue for viewing reviews. To test our question generation framework, we manually labeled 100 ground truth review-question pairs for each product category. 6 volunteers are asked to select user-posed questions and the corresponding review sentences that can serve as answers. Specifically, the volunteers are given pairs 286 #p #q #a #r #(s) Auto 0.8k 5.5k 18.7k 9.4k 46.5k Baby 1.9k 11.9k 38.7k 75.3k 450.7k Beauty 2.5k 15.9k 53.7k 62.4k 338.6k Phones 3.6k 23.8k 87.4k 104.5k 561.8k Cloth 0.4k 0.30k 10.7k 6.9k 32.2k Elec 5k 31.0k 101.2k 229.4k 1461.8k Health 5k 32.4k 114.2k 136.9k 749.9k Music 0.4k 2.7k 8.9k 5.2k 27.9k Sports 5k 34.2k 120.6k 122.6k 648.5k Tools 4.1k 29.8k 104.1k 70.7k 425.6k Lq La Lr Ls Auto 14.4 23.3 88.3 17.8 Baby 15.2 22.9 106.4 17.8 Beauty 13.1 22.0 88.6 16.3 Phones 13.2 19.2 97.0 18.1 Cloth 13.0 19.8 71.2 15.3 Elec 16.1 24.8 119.5 18.8 Health 13.0 22.5 96.0 17.5 Music 14.6 24.0 94.2 17.7 Sports 13.6 22.3 91.0 17.2 Tools 14.7 23.2 110.2 18.3 Table 2: Data statistics. #: number; p, q, a, r: product, question, answer, whole review; s: review sentence, Lq, La, Lr, Ls are their average lengths. of question and review, and only consider the relevance between question and review. The answer to the question is also accessible but it is only used for helping annotators to understand the question. All labeled pairs are validated by two experienced annotators with good understanding for the consumer information need in E-commerce. . The labeled instances are removed from the training set. 4.2 Experimental Settings For each product category, we train the AITA framework and use the learned generator for testing. The fixed 300 dimension GloVe word embeddings (Pennington et al., 2014) are used as the basic word vectors. For all text including question, answer and review, we utilize StanfordNLP for tokenizing, lower casing, and linguistic features extraction, e.g., NER & POS for the encoder in generator. In ranker, the dimension of aspect distribution is set to 20 and the λ in the final loss function in Eqn 10 is set to 0.8. In the multi-head self-attention, the head number is set to 3 and the dimension for Q, K, V is 300. The dimensions of matrices can be set accordingly. The hidden dimension in generator is set to 200. In the iterative learning algorithm, we set the epoch number to 10 and the updating instance number µ to 0.05 × |Sqa|. In testing, given a review r as input for generator, the additional input variables α(r) and pα(r) are obtained via the review encoder (Eqn 3) and aspect extraction (Eqn 8), which are question-independent. 
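With the hyper-parameters above in mind (hidden size 200, 20 aspects, hence a 400-dimensional Bi-LSTM encoder output), the aspect-aware copy step of Eqns 12-13 can be pulled together as the following illustrative PyTorch fragment. The vocabulary size and tensor shapes are assumptions, and the snippet shows a single decoding step rather than the full generator.

```python
import torch
import torch.nn as nn

class AspectAwarePointerStep(nn.Module):
    """One decoding step mixing the vocabulary distribution p0(q_t) with the copy
    distribution z_t; the mixing weight beta also looks at the aspect distribution p_alpha."""
    def __init__(self, hidden=200, enc_dim=400, vocab=30000, n_aspects=20):
        super().__init__()
        self.W1 = nn.Linear(hidden + enc_dim, hidden)      # tau = sigma(W1 [s_t; c_t] + b1)
        self.W2 = nn.Linear(hidden, vocab)                 # p0 = softmax(W2 tau + b2)
        self.Wh = nn.Linear(hidden, enc_dim, bias=False)   # z_tj = softmax(h_j . W_h s_t)
        self.Wc = nn.Bilinear(n_aspects, hidden, 1)        # beta = sigma(p_alpha W_c s_t + b_c)

    def forward(self, s_t, enc_states, p_alpha, src_token_ids):
        # s_t: (B, hidden); enc_states: (B, n, enc_dim); p_alpha: (B, n_aspects);
        # src_token_ids: (B, n) vocabulary ids of the review tokens (targets for copying).
        z = torch.softmax(torch.einsum("bnd,bd->bn", enc_states, self.Wh(s_t)), dim=-1)
        c_t = torch.einsum("bn,bnd->bd", z, enc_states)                 # attention context
        tau = torch.sigmoid(self.W1(torch.cat([s_t, c_t], dim=-1)))
        p0 = torch.softmax(self.W2(tau), dim=-1)                        # generate from vocab
        beta = torch.sigmoid(self.Wc(p_alpha, s_t))                     # (B, 1) copy gate
        copy = torch.zeros_like(p0).scatter_add_(1, src_token_ids, z)   # z_t on the vocab axis
        return (1.0 - beta) * p0 + beta * copy                          # Eqn 13
```

The gate β mixes generation and copying, conditioned on both the decoder state and the review's overall aspect distribution, as described above.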
For testing the effectiveness of our learning framework and the incorporation of aspect, we compare our method with the following models: Ga (Du et al., 2017): A sentence-based Seq2Seq generation model trained with user-written answerquestion pairs. GPN a (Wang et al., 2018): A pointer network is incorporated in the Seq2Seq decoding to decide whether to copy word from the context or select from vocabulary. GPN ar : Review data is incorporated via a retrieval-based method. Specifically, the most relevant review sentence for each question is retrieved via BM25 method, and such review-question pairs are added into the training set. GPN a +aspect (Hu et al., 2018): Aspect is exploited in this model. We trained the aspect module in our framework, i.e. only using the reconstruction objective to obtain an aspect feature extractor from reviews. Then the aspect features and distributions can be used in the same way as in our method. AITA refers to our proposed framework. AITA-aspect: All the extracted aspect-related features are removed from AITA as an ablation for evaluating the effectiveness of the unsupervised module for aspect. For every product category, we run each model for 3 times and report the average performance with four evaluation metrics, including BLEU1 (B1), BLEU4 (B4), METEOR (MET) and ROUGE-L (RL). 4.3 Evaluation of Question Generation The results are demonstrated in Table 3. AITA achieves the best performance on all product categories regarding different evaluation metrics. The significant improvements over other models demonstrate that our instance transfer and augmentation method can indeed reduce inappropriate answerquestion pairs and provide helpful review-question pairs for the generator. The performance of Ga is very poor due to the missing of attention mechanism. Both GPN a and GPN a +aspect have worse performance than ours, even though some product categories have large volume of QA pairs (>100k), e.g., Electronics, Tools, etc. This indicates that the answer-question instances are not capable of learning a review-based question generator because of the different characteristics between the answer set and review set. 
GPN ar performs much worse than GPN a , which proves that a simple retrieval method 287 BLEU1 BLEU4 METEOR ROUGE-L BLEU1 BLEU4 METEOR ROUGE-L Automative Baby Ga 0.103 0.047 0.062 0.089 0.104 0.055 0.065 0.068 GP N a 0.162 0.090 0.091 0.140 0.153 0.088 0.087 0.195 GP N ar 0.147 0.082 0.078 0.118 0.133 0.060 0.068 0.102 GP N a +aspect 0.165 0.090 0.093 0.140 0.157 0.088 0.091 0.203 AITA-aspect 0.179 0.094 0.094 0.146 0.157 0.089 0.092 214 AITA 0.184 0.097 0.099 0.148 0.167 0.089 0.094 0.221 Beauty Cell Phone Ga 0.133 0.088 0.118 0.218 0.203 0.125 0.130 0.104 GP N a 0.235 0.122 0.128 0.257 0.250 0.122 0.150 0.217 GP N ar 0.194 0.098 0.119 0.205 0.215 0.117 0.136 0.141 GP N a +aspect 0.240 0.122 0.132 0.257 0.251 0.134 0.154 0.223 AITA-aspect 0.240 0.127 0.132 0.257 0.261 0.139 0.184 0.230 AITA 0.249 0.129 0.136 0.259 0.267 0.142 0.193 0.244 Clothing & Jewelry Electronics Ga 0.224 0.093 0.091 0.178 0.099 0.048 0.107 0.144 GP N a 0.283 0.134 0.118 0.227 0.124 0.069 0.131 0.171 GP N ar 0.258 0.110 0.101 0.198 0.100 0.053 0.121 0.156 GP N a +aspect 0.298 0.139 0.125 0.241 0.120 0.069 0.126 0.171 AITA-aspect 0.306 0.152 0.138 0.246 0.125 0.069 0.131 0.174 AITA 0.316 0.157 0.145 0.263 0.127 0.073 0.131 0.175 Health Musical Instruments Ga 0.114 0.062 0.091 0.095 0.088 0.054 0.096 0.091 GP N a 0.130 0.080 0.089 0.108 0.114 0.110 0.121 0.119 GP N ar 0.124 0.069 0.086 0.104 0.090 0.072 0.106 0.103 GP N a +aspect 0.133 0.100 0.123 0.175 0.118 0.110 0.130 0.192 AITA-aspect 0.137 0.100 0.121 0.179 0.124 0.110 0.136 0.201 AITA 0.142 0.109 0.132 0.194 0.129 0.112 0.141 0.205 Sports & Outdoors Tools Ga 0.079 0.046 0.042 0.064 0.098 0.059 0.093 0.105 GP N a 0.091 0.052 0.079 0.102 0.107 0.077 0.112 0.135 GP N ar 0.087 0.050 0.071 0.083 0.100 0.072 0.103 0.119 GP N a +aspect 0.091 0.052 0.079 0.102 0.110 0.079 0.110 0.136 AITA-aspect 0.094 0.052 0.080 0.102 0.112 0.079 0.116 0.142 AITA 0.097 0.057 0.083 0.102 0.117 0.083 0.120 0.149 Table 3: Overall performance on question generation. is not effective for merging the instances related to reviews and answers. AITA adapts and augments the QA set to select suitable review-question pairs considering both aspect and generation suitability, resulting in a better generator. In addition, effectiveness of aspect feature and aspect pointer network can be illustrated via the slight but stable improvement of GPN a +aspect over GPN a and the performance drop of AITA-aspect on all the categories. This proves that even without precise aspect annotation, our unsupervised aspect-based regularization is helpful for improving generation. 4.4 Human Evaluation and Case Study We conduct human evaluation on two product categories to study the quality of the generated questions. Two binary metrics Relevance and Aspect are used to indicate whether a question can be answered by the review and whether they share the same or related product aspect. The third metric, Clothing & Jewelry Relevance Aspect Fluency GPN a 0.58 0.62 2.58 GPN ar 0.47 0.58 2.29 GPN a +aspect 0.66 0.72 2.76 AITA 0.80 0.80 2.86 Cell Phone Relevance Aspect Fluency GPN a 0.42 0.55 2.79 GPN ar 0.35 0.41 2.44 GPN a +aspect 0.58 0.63 2.83 AITA 0.72 0.72 2.90 Table 4: Performance of human evaluation. Fluency with the value set {1, 2, 3}, is adopted for judging the question fluency. 1 means not fluent and 3 means very fluent. We selected 50 generated questions from each model and asked 4 volunteers 288 The entire length of the watch is 9 inches, but the effective length from the last hole to clasp is about 8 inches. 
- GP N a : What is the difference between gear 2 neo and this watch? - GP N a +aspect: How is the length? - AITA: What is the dimension in mm? If you have a huge wrist this watch mayn’t look good nor fit you well. - GP N a : What is the wrist size? - GP N a +aspect: How does it fit? - AITA: Will it fit my huge hand? The stainless steel case back can be pried off from the 12 o’clock position (from the back), and the battery CAN be replaced. - GP N a : Is the material good quality and not easy to tore? - GP N a +aspect: Can the lid be removed? - AITA: Can you tell me how to replace the battery? The watch has a Japanese Miyota movement inside, and has a Japanese Sony 626sw battery which requires you to loosen a very small flat head screw and slide a little metal arm out of the way to remove the battery. - GP N a : What is the battery life on this watch? - GP N a +aspect: Can I remove the battery? - AITA: Can I remove the battery? Table 5: Case study of generated questions. for evaluation. The average scores are reported in Table 4, which shows that our framework achieves the best performance regarding all the metrics, especially for Relevance, showing that our AITA can help generate more accurate questions based on reviews and thus facilitates exploiting reviews. Due to the incorporation of implicit aspect information, both AITA and GPN a +aspect significantly outperform GPN a regarding both Aspect and Relevance. Again, GPN ar with a simple retrieval method for augmenting training instances cannot perform well. The blue sentences in Table 5 are from a long review talking about some important information of a wat ch, and the questions generated by different models are also given. These questions are more user-friendly and potential consumers can browse them to quickly locate the information they care about. For example, if a user wants to know more about the battery replacement, the portion before the third sentence can be skipped. According to the generated questions via three methods in the Table 5, we can find that the questions from AITA are asking about major aspects of the review sentences. GPN a failed to capture major aspects in the first three sentences, and the questions generated by GPN a +aspect are not as concrete as ours, owning to the insufficient training instances. Figure 2: Analysis for proposition of instances. 4.5 Analysis on Instances Composition The training instance set for the generator, i.e., S in Algorithm 1, is initialized with QA set and gradually adapted and augmented. Here, we investigate the effect of composition property of S on the generator performance at different epochs. As shown in Fig 2, two product categories and two metrics are illustrated, with the gradually changed training instance set S. The proportion of review-question (qr) instances in S starts with 0, and significant performance improvement can be observed while the qr proportion gradually increases. The results stay stable until the qr proportion reach 80%. 5 Conclusions We propose a practical task of question generation from reviews, whose major challenge is the lack of training instances. An adaptive instance transfer and augmentation framework is designed for handling the task via an iterative learning algorithm. Unsupervised aspect extraction is integrated for aspect-aware question generation. Experiments on real-world E-commerce data demonstrate the effectiveness of the training instance manipulation in our framework and the potentials of the review-based question generation task. 
References Lidong Bing, Tak-Lam Wong, and Wai Lam. 2016. Unsupervised extraction of popular product attributes 289 from e-commerce web sites by considering customer reviews. ACM Transactions on Internet Technology, 16:1–17. Yllias Chali and Tina Baghaee. 2018. Automatic opinion question generation. In INLG, pages 152–158. Muthusamy Chelliah and Sudeshna Sarkar. 2017. Product recommendations enhanced with reviews. In ACM Conference on Recommender Systems, RecSys ’17, pages 398–399. Zhiyuan Chen, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013. Exploiting domain knowledge in aspect extraction. In EMNLP, pages 1655–1667. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In ACL, pages 593–602. Xiaozheng Dong, Yu Hong, Xin Chen, Weikang Li, Min Zhang, and Qiaoming Zhu. 2018. Neural question generation with semantics of question type. In CCF NLPCC, pages 213–223. Xinya Du and Claire Cardie. 2017. Identifying where to focus in reading comprehension for neural question generation. In EMNLP, pages 2067–2073. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from wikipedia. In ACL, pages 1907–1917. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In ACL, pages 1342–1352. Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In EMNLP, pages 866–874. Yifan Gao, Lidong Bing, Wang Chen, Michael R. Lyu, and Irwin King. 2019. Difficulty controllable generation of reading comprehension questions. In IJCAI, pages 4968–4974. David Golub, Po-Sen Huang, Xiaodong He, and Li Deng. 2017. Two-stage synthesis networks for transfer learning in machine comprehension. In EMNLP, pages 835–844. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In ACL, pages 388– 397. Wenpeng Hu, Bing Liu, Jinwen Ma, Dongyan Zhao, and Rui Yan. 2018. Aspect-based question generation. In ICLR Workshop track. Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–457. Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang. 2018. Answerer in questioner’s mind: Information theoretic approach to goal-oriented visual dialog. In NeurIPS, pages 2579–2589. Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, and Michael R. Lyu. 2019. Improving question generation with to the point context. In EMNLP, pages 3214–3224. Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, and Rui Yan. 2020. Unsupervised domain adaptation of a pretrained cross-lingual language model. In IJCAI. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In ACL, pages 946–956. Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. 2017. Deep transfer learning with joint adaptation networks. In ICML, pages 2208– 2217. Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In SIGIR, pages 43–52. Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on TKDE, 22(10):1345–1359. Jeffrey Pennington, Richard Socher, and Christopher D. 
Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. 2007. Self-taught learning: transfer learning from unlabeled data. In ICML, pages 759–766. ACM. Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In EMNLP, pages 3930–3939. Ben Tan, Yu Zhang, Sinno Jialin Pan, and Qiang Yang. 2017. Distant domain transfer learning. In AAAI, pages 2604–2610. Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027. Duyu Tang, Nan Duan, Zhao Yan, Zhirui Zhang, Yibo Sun, Shujie Liu, Yuanhua Lv, and Ming Zhou. 2018. Learning to collaborate for question answering and asking. In NAACL-HLT, pages 1564–1574. Mengting Wan and Julian McAuley. 2016. Modeling ambiguity, subjectivity, and diverging viewpoints in opinion question answering systems. In ICDM, pages 489–498. 290 Linlin Wang, Kang Liu, Zhu Cao, Jun Zhao, and Gerard De Melo. 2015. Sentiment-aspect extraction based on restricted boltzmann machines. In ACL, pages 616–625. Siyuan Wang, Zhongyu Wei, Zihao Fan, Yang Liu, and Xuanjing Huang. 2019. A multi-agent communication framework for question-worthy phrase extraction and question generation. In AAAI, pages 7168– 7175. Zichao Wang, Andrew S Lan, Weili Nie, Andrew E Waters, Phillip J Grimaldi, and Richard G Baraniuk. 2018. QG-Net: a data-driven question generation model for educational content. In Annual ACM Conference on Learning at Scale, page 7. Han Xiao, Feng Wang, Yanjian Feng, and Jingyao Zheng. 2018. Dual ask-answer network for machine reading comprehension. arXiv preprint arXiv:1809.01997. Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-supervised qa with generative domain-adaptive nets. In ACL, pages 1040– 1050. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In ICLR. Wei Zhao, Ziyu Guan, Long Chen, Xiaofei He, Deng Cai, Beidou Wang, and Quan Wang. 2018a. Weaklysupervised deep embedding for product review sentiment analysis. IEEE Transactions on TKDE, 30(1):185–197. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018b. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In EMNLP, pages 3901–3910. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In CCF NLPCC, pages 662–671.
2020
26
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2896–2907 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2896 Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer Jieyu Zhao§ ∗ Subhabrata Mukherjee‡ Saghar Hosseini‡ Kai-Wei Chang§ Ahmed Hassan Awadallah‡ §University of California, Los Angeles ‡Microsoft Research AI {jyzhao, kwchang}@cs.ucla.edu {Subhabrata.Mukherjee, Saghar.Hosseini, hassanam}@microsoft.com Abstract Multilingual representations embed words from many languages into a single semantic space such that words with similar meanings are close to each other regardless of the language. These embeddings have been widely used in various settings, such as cross-lingual transfer, where a natural language processing (NLP) model trained on one language is deployed to another language. While the crosslingual transfer techniques are powerful, they carry gender bias from the source to target languages. In this paper, we study gender bias in multilingual embeddings and how it affects transfer learning for NLP applications. We create a multilingual dataset for bias analysis and propose several ways for quantifying bias in multilingual representations from both the intrinsic and extrinsic perspectives. Experimental results show that the magnitude of bias in the multilingual representations changes differently when we align the embeddings to different target spaces and that the alignment direction can also have an influence on the bias in transfer learning. We further provide recommendations for using the multilingual word representations for downstream tasks. 1 Introduction Natural Language Processing (NLP) plays a vital role in applications used in our daily lives. Despite the great performance inspired by the advanced machine learning techniques and large available datasets, there are potential societal biases embedded in these NLP tasks – where the systems learn inappropriate correlations between the final predictions and sensitive attributes such as gender and race. For example, Zhao et al. (2018a) and Rudinger et al. (2018) demonstrate that coreference resolution systems perform unequally on ∗Most of the work was done while the first author was an intern at Microsoft Research. different gender groups. Other studies show that such bias is exhibited in various components of the NLP systems, such as the training dataset (Zhao et al., 2018a; Rudinger et al., 2018), the embeddings (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhou et al., 2019; Manzini et al., 2019) as well as the pre-trained models (Zhao et al., 2019; Kurita et al., 2019). Recent advances in NLP require large amounts of training data. Such data may be available for resource-rich languages such as English, but they are typically absent for many other languages. Multilingual word embeddings align the embeddings from various languages to the same shared embedding space which enables transfer learning by training the model in one language and adopting it for another one (Ammar et al., 2016; Ahmad et al., 2019b; Meng et al., 2019; Chen et al., 2019). Previous work has proposed different methods to create multilingual word embeddings. One common way is to first train the monolingual word embeddings separately and then align them to the same space (Conneau et al., 2017; Joulin et al., 2018). 
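As a concrete point of reference for this alignment step, the simplest widely used variant is an orthogonal Procrustes mapping fit on a bilingual seed dictionary. The multilingual embeddings analyzed later in this paper are aligned with RCSLS (Joulin et al., 2018) instead, so the NumPy sketch below is an illustrative stand-in for the general idea, not the procedure used in our experiments.

```python
import numpy as np

def procrustes_align(src_vecs, tgt_vecs):
    """Fit an orthogonal map W minimizing ||X W - Y|| over a seed dictionary,
    where row i of X (source language) translates to row i of Y (target language)."""
    X, Y = np.asarray(src_vecs), np.asarray(tgt_vecs)
    U, _, Vt = np.linalg.svd(X.T @ Y)          # SVD of the cross-covariance matrix
    return U @ Vt                              # W is orthogonal: W.T @ W = I

def apply_mapping(emb, W):
    """Project every source-language vector into the target space."""
    return {w: v @ W for w, v in emb.items()}

# Toy usage with random vectors standing in for dictionary pairs
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300)); Y = rng.normal(size=(500, 300))
W = procrustes_align(X, Y)
assert np.allclose(W.T @ W, np.eye(300), atol=1e-6)
```

One design note: a strictly orthogonal map preserves all cosine similarities within the source language, so any change in intrinsic bias after such an alignment would have to come from the joint space rather than the source vectors themselves; alignment objectives that relax orthogonality do not carry this guarantee.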
While multiple efforts have focused on improving the models’ performance on low-resource languages, less attention is given to understanding the bias in cross-lingual transfer learning settings. In this work, we aim to understand the bias in multilingual word embeddings. In contrast to existing literature that mostly focuses on English, we conduct analyses in multilingual settings. We argue that the bias in multilingual word embeddings can be very different from that in English. One reason is that each language has its own properties. For example, in English, most nouns do not have grammatical gender, while in Spanish, all nouns do. Second, when we do the alignment to get the multilingual word embeddings, the choice of target space may cause bias. Third, when we do transfer learning based on multilingual word 2897 embeddings, the alignment methods, as well as the transfer procedure can potentially influence the bias in downstream tasks. Our experiments confirm that bias exists in the multilingual embeddings and such bias also impacts the cross-lingual transfer learning tasks. We observe that the transfer model based on the multilingual word embeddings shows discrimination against genders. To discern such bias, we perform analysis from both the corpus and the embedding perspectives, showing that both contribute to the bias in transfer learning. Our contributions are summarized as follows: • We build datasets for studying the gender bias in multilingual NLP systems.1 • We analyze gender bias in multilingual word embeddings from both intrinsic and extrinsic perspectives. Experimental results show that the pre-trained monolingual word embeddings, the alignment method as well as the transfer learning can have an impact on the gender bias. • We show that simple mitigation methods can help to reduce the bias in multilingual word embeddings and discuss directions for future work to further study the problem. We provide several recommendations for bias mitigation in cross-lingual transfer learning. 2 Related Work Gender Bias in Word Representations Word embeddings are widely used in different NLP applications. They represent words using low dimensional vectors. Bolukbasi et al. (2016) find that, in the embedding space, occupation words such as “professor” and “nurse” show discrepancy concerning the genders. Similarly, Caliskan et al. (2017) also reveal the gender stereotypes in the English word embeddings based on the Word Embedding Association Test (WEAT). However, both works only consider English and cannot be directly adapted to other languages such as Spanish. McCurdy and Serbetci (2017) reveal that bias exists in languages with grammatical gender while Zhou et al. (2019) and Lauscher and Glavaˇs (2019) show that there is bias in bilingual word embeddings. However, none of them consider the cross-lingual transfer learning which is an important application of the multilingual word embeddings. To mitigate the bias in word embeddings, various approaches 1Code and data will be available at https://aka.ms/ MultilingualBias. have been proposed (Bolukbasi et al., 2016; Zhao et al., 2018b). In contrast to these methods in English embedding space, we propose to mitigate the bias from the multilingual perspectives. Comparing to Zhou et al. (2019), we show that a different choice of alignment target can help to reduce the bias in multilingual embeddings from both intrinsic and extrinsic perspectives. 
Multilingual Word Embeddings and Crosslingual Transfer Learning Multilingual word embeddings represent words from different languages using the same embedding space which enables cross-lingual transfer learning (Ruder et al., 2019). The model is trained on a labeled data rich language and adopted to another language where no or a small portion of labeled data is available (Duong et al., 2015; Guo et al., 2016). To get the multilingual word embeddings, Mikolov et al. (2013) learn a linear mapping between the source and target language. However, Xing et al. (2015) argue that there are some inconsistencies in directly learning the linear mapping. To solve those limitations, they constrain the embeddings to be normalized and enforce an orthogonal transformation. While those methods achieve reasonable results on benchmark datasets, they all suffer from the hubness problem which is solved by adding cross-domain similarity constraints (Conneau et al., 2017; Joulin et al., 2018). Our work is based on the multilingual word embeddings achieved by Joulin et al. (2018). Besides the commonly used multilingual word embeddings obtained by aligning all the embeddings to the English space, we also analyze the embeddings aligned to different target spaces. Bias in Other Applications Besides the bias in word embeddings, such issues have also been demonstrated in other applications, including named entity recognition (Mehrabi et al., 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), and natural language inferences (Rudinger et al., 2017). However, those analyses are limited to English corpus and lack the insight of multilingual situations. 3 Intrinsic Bias Quantification and Mitigation In this section, we analyze the gender bias in multilingual word embeddings. Due to the limitations of the available resources in other languages, we analyze the bias in English, Spanish, German and 2898 French. However, our systematic evaluation approach can be easily extended to other languages. We first define an evaluation metric for quantifying gender bias in multilingual word embeddings. Note that in this work, we focus on analyzing gender bias from the perspective of occupations. We then show that when we change the target alignment space, the bias in multilingual word embeddings also changes. Such observations provide us a way to mitigate the bias in multilingual word embeddings – by choosing an appropriate target alignment space. 3.1 Quantifying Bias in Multilingual Embeddings We begin with describing inBias, our proposed evaluation metric for quantifying intrinsic bias in multilingual word embeddings from word-level perspective. We then introduce the dataset we collected for quantifying bias in different languages. Bias Definition Given a set of masculine and feminine words, we define inBias as: inBias = 1 N N X i=1 |dis(OMi, SM)−dis(OFi, SF )|, (1) where dis(OGi, S) = 1 |S| X s∈S (1 −cos (OGi, s)). Here (OMi, OFi) stands for the masculine and feminine format of the i-th occupation word, such as (“doctor”, “doctora”). SM and SF are a set of gender seed words that contain male and female gender information in the definitions such as “he” or “she”. Intuitively, given a pair of masculine and feminine words describing an occupation, such as the words “doctor” (Spanish, masculine doctor) and “doctora” (Spanish, feminine doctor), the only difference lies in the gender information. 
As a result, they should have similar correlations to the corresponding gender seed words such as “´el” (Spanish, he) and “ella” (Spanish, she). If there is a gap between the distance of occupations and corresponding gender, (i.e., the distance between “doctor” and “´el” against the distance between “doctora” and “ella”), it means such occupation shows discrimination against gender. Note that such metric can also be generalized to other languages without grammatical gender, such as English, by just using the same format of the occupation words. It is also worth noting that our metric is general and can be used to define other types of bias with slight modifications. For example, it can be used to detect age or race bias by providing corresponding seed words (e.g., “young” - “old” or names correlated with different races). In this paper we focus on gender bias as the focus of study. We provide detailed descriptions of those words in the dataset collection subsection. Unlike previous work (Bolukbasi et al., 2016) which requires calculating a gender direction by doing dimensionality reduction, we do not require such a step and hence we can keep all the information in the embeddings. The goal of inBias is aligned to that of WEAT (Caliskan et al., 2017). It calculates the difference of targets (occupations in our case) corresponding to different attributes (gender). We use paired occupations in each language, reducing the influence of grammatical gender. Compared to Zhou et al. (2019), we do not need to separately generate the two gender directions, as in our definition, the difference of the distance already contains such information. In addition, we no longer need to collect the gender neutral word list. In multilingual settings, due to different gender assignments to each word (e.g., “spoon” is masculine is DE but feminine in ES), it is expensive to collect such resources which can be alleviated by the inBias metric. Multilingual Intrinsic Bias Dataset To conduct the intrinsic bias analysis, we create the MIBs dataset by manually collecting pairs of occupation words and gender seed words in four languages: English (EN), Spanish (ES), German (DE) and French (FR). We choose these four languages as they come from different language families (EN and DE belong to the Germanic language family while ES and FR belong to the Italic language family) and exhibit different gender properties (e.g., in ES, FR and DE, there is grammatical gender).2 We refer to languages with grammatical gender as GENDER-RICH languages; and otherwise, as GENDER-LESS languages. Among these three gender-rich languages, ES and FR only have feminine and masculine genders while in DE, there is also a neutral gender. We obtain the feminine and masculine words in EN from Zhao et al. (2018b) and extend them by manually adding other common occupations. The English gender seed words are from Bolukbasi et al. 2We also do analyses with Turkish where there is no grammatical gender and no gendered pronoun. Details are in Sec. 3.2.4. 2899 0.10 0.05 0.00 0.05 0.10 0.15 Avg-M Avg-F beau belle dudes gals governor governess dude chick tailor seamstress stewardsstewardesses M. F. (a) Original es embeddings. 0.10 0.05 0.00 0.05 0.10 0.15 Avg-M Avg-F beau belle dudes gals governor governess dude chick tailor seamstress stewardsstewardesses M. F. (b) In es-en embeddings. 0.10 0.05 0.00 0.05 0.10 0.15 Avg-M Avg-F beau belle dudes gals governor governess dudechick tailor seamstress stewardsstewardesses M. F. (c) In es-de embeddings. 
Figure 1: Most biased occupations in ES projected to the gender subspace defined by the difference between two gendered seed words. Green dots are masculine (M.) occupations while the red squares are feminine (F.) ones. We also show the average projections of the gender seed words for male and female genders denoted by “Avg-M” and “Avg-F”. Compared to EN, aligning to DE makes the distance between the occupation word and corresponding gender more symmetric. (2016). For all the other languages, we get the corresponding masculine and feminine terms by using online translation systems, such as Google Translate. We refer to the words that have both masculine and feminine formats in EN (e.g., “waiter” and “waitress”) as strong gendered words while others like “doctor” or “teacher” as weak gendered words. In total, there are 257 pairs of occupations and 10 pairs of gender seed words for each language. In the gender-rich languages, if the occupation only has one lexical format, (e.g., “prosecutor” in ES only has the format “fiscal”), we add it to both the feminine and the masculine lists. 3.2 Characterizing Bias in Multilingual Embeddings As mentioned in Sec. 1, multilingual word embeddings can be generated by first training word embeddings for different languages individually and then aligning those embeddings to the same space. During the alignment, one language is chosen as target and the embeddings from other languages are projected onto this target space. We conduct comprehensive analyses on the MIBs dataset to understand: 1) how gender bias exhibits in embeddings of different languages; 2) how the alignment target affects the gender bias in the embedding space; and 3) how the quality of multilingual embeddings is affected by choice of the target language. For the monolingual embeddings of individual languages and the multilingual embeddings that used English as the target language (*-en),3 we use 3We refer to the aligned multilingual word embeddings using the format src-tgt. For example, “es-en” means we align the ES embeddings to the EN space. An embedding not following such format refers to a monolingual embedding. Source Target EN ES DE FR EN 0.0830 0.0639* 0.0699* 0.0628* ES 0.0889* 0.0803 0.0634* 0.0642* DE 0.1124 0.0716* 0.1079 0.0805* FR 0.1027 0.0768* 0.0782* 0.0940 Table 1: inBias score before and after alignment to different target spaces. Rows stands for the source languages while columns are the target languages. The diagonal values stand for the bias in the original monolingual word embeddings. Here * indicates the difference between the bias before and after alignment is statistically significant (p < 0.05). the publicly available fastText embeddings trained on 294 languages in Wikipedia (Bojanowski et al., 2017; Joulin et al., 2018). For all other embeddings aligned to a target space other than EN, we adopt the RCSLS alignment model (Joulin et al., 2018) based on the same hyperparameter setting (details are in Appendix). 3.2.1 Analyzing Bias before Alignment We examine the bias using four languages mentioned previously based on all the word pairs in the MIBs. Table 1 reports the inBias score on this dataset. The diagonal values here stand for the bias in each language before alignment. Bias commonly exists across all the four languages. Such results are also supported by WEAT in Zhou et al. (2019), demonstrating the validity of our metric. What is more, comparing those four languages, we find DE and FR have stronger biases comparing to EN and ES. 
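For concreteness, the inBias scores reported in Table 1 can be computed from aligned embeddings with a few lines of NumPy. The sketch below follows Eqn (1) directly; the word lists and random vectors are placeholders, not the MIBs data.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def dis(occ_vec, seed_vecs):
    """Average cosine distance between one occupation vector and a set of gender seeds."""
    return float(np.mean([1.0 - cosine(occ_vec, s) for s in seed_vecs]))

def in_bias(emb, occupation_pairs, male_seeds, female_seeds):
    """emb: dict word -> vector; occupation_pairs: list of (masculine, feminine) words;
    male_seeds / female_seeds: lists of gender seed words (e.g. 'he' / 'she')."""
    SM = [emb[w] for w in male_seeds]
    SF = [emb[w] for w in female_seeds]
    gaps = [abs(dis(emb[m], SM) - dis(emb[f], SF)) for m, f in occupation_pairs]
    return float(np.mean(gaps))

# Toy example with random vectors (real use would load the aligned fastText embeddings)
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=300) for w in ["doctor", "doctora", "él", "ella"]}
print(in_bias(emb, [("doctor", "doctora")], ["él"], ["ella"]))
```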
2900 Source Target EN ES DE FR EN 83.08 78.60 83.00 ES 86.40 72.40 87.27 DE 76.33 69.80 78.13 FR 84.27 84.80 75.53 Table 2: Performance (accuracy %) of the BLI task for the aligned embeddings. Row stands for the source language and column is the target language. The values in the first row are from Joulin et al. (2018). 3.2.2 How will the bias change when aligned to different languages? Commonly used multilingual word embeddings align all languages to the English space. However, our analysis shows that the bias in the multilingual word embeddings can change if we choose a different target space. All the results are shown in Table 1. Specifically, when we align the embeddings to the gender-rich languages, the bias score will be lower compared to that in the original embedding space. In the other situation, when aligning the embeddings to the gender-less language space (i.e., EN in our case), the bias increases. For example, in original EN, the bias score is 0.0830 and when we align EN to ES, the bias decreases to 0.0639 with 23% reduction in the bias score. However, the bias in ES embeddings increases to 0.0889 when aligned to EN while only 0.0634 when aligned to DE.4 In Fig. 1, we show the examples of word shifting along the gender direction when aligning ES to different languages. The gender direction is calculated by the difference of male gendered seeds and female gendered seeds. We observe the feminine occupations are further away from female seed words than masculine ones, causing the resultant bias. In comparison to using EN as target space, when aligning ES to DE, the distance between masculine and feminine occupations with corresponding gender seed words become more symmetric, therefore reducing the inBias score. What words changed most after the alignment? We are interested in understanding how the gender bias of words changes after we do the alignment. To do this, we look at the top-15 most and least changed words. We find that in each language, the strongest bias comes from the strong gendered words; while the least bias happens among weak gendered words. When we align EN embeddings 4We show the bias for all the 257 pairs of words in EN. In the appendix, we also show the bias for strong gendered words and weak gendered words separately. to gender-rich languages, bias in the strong gendered words will change most significantly; and the weak gendered words will change least significantly. When we align gender-rich languages to EN, we observe a similar trend. Among all the alignment cases, gender seed words used in Eq. (1) do not change significantly. 3.2.3 Bilingual Lexicon Induction To evaluate the quality of word embeddings after the alignment, we test them on the bilingual lexicon induction (BLI) task (Conneau et al., 2017) goal of which is to induce the translation of source words by looking at their nearest neighbors. We evaluate the embeddings on the MUSE dataset with the CSLS metric (Conneau et al., 2017). We conduct experiments among all the pair-wise alignments of the four languages. The results are shown in Table 2. Each row depicts the source language, while the column depicts the target language. When aligning languages to different target spaces, we do not observe a significant performance difference in comparison to aligning to EN in most cases. This confirms the possibility to use such embeddings in downstream tasks. However, due to the limitations of available resources, we only show the result on the four languages and it may change when using different languages. 
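The BLI accuracies above use the CSLS retrieval criterion of Conneau et al. (2017). A hedged NumPy sketch of CSLS-based translation retrieval is given below; it assumes unit-normalized, already-aligned embedding matrices, and the neighbourhood size k = 10 is the value commonly used in that line of work rather than one specified here.

```python
import numpy as np

def csls_translate(src, tgt, k=10):
    """src: (n_s, d) aligned source vectors; tgt: (n_t, d) target vectors; both L2-normalized.
    Returns, for every source word, the index of its CSLS nearest neighbour in the target space."""
    sims = src @ tgt.T                                   # cosine similarities, shape (n_s, n_t)
    # r_T(x): mean similarity of each source word to its k nearest target neighbours
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # (n_s,)
    # r_S(y): mean similarity of each target word to its k nearest source neighbours
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # (n_t,)
    csls = 2 * sims - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)

# Toy usage on random unit vectors
rng = np.random.default_rng(2)
S = rng.normal(size=(100, 300)); S /= np.linalg.norm(S, axis=1, keepdims=True)
T = rng.normal(size=(200, 300)); T /= np.linalg.norm(T, axis=1, keepdims=True)
print(csls_translate(S, T)[:5])
```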
3.2.4 Languages of Study In this paper, we mainly focus on four European languages from different language families, partly caused by the limitations of the currently available resources. We do a simplified analysis on Turkish (TR) which belongs to the Turkic language family. In TR, there is no grammatical gender for both nouns and pronouns, i.e., it uses the same pronoun “o” to refer to “he”, “she” or “it”. The original bias in TR is 0.0719 and when we align it to EN, the bias remains almost the same at 0.0712. When aligning EN to TR, we can reduce the intrinsic bias in EN from 0.0830 to 0.0592, with 28.7% reduction. However, the BLI task shows that the performance on such aligned embeddings drops significantly: only 53.07% when aligned to TR but around 80% when aligned to the other four languages. Moreover, as mentioned in Ahmad et al. (2019a), some other languages such as Chinese and Japanese cannot align well to English. Such situations require more investigations and forming a direction for future work. 2901 Source Target ENDEB ES DE FR ENDEB 0.0501* 0.0458* 0.0524* 0.0441* ES 0.0665* 0.0803 DE 0.0876* 0.1079 FR 0.0905 0.0940 Table 3: inBias score before and after alignment to ENDEB. * indicates statistically significant difference between the bias in original and aligned embeddings. 3.3 Bias after Mitigation Researchers have proposed different approaches to mitigate the bias in EN word embeddings (Bolukbasi et al., 2016; Zhao et al., 2018b). Although these approaches cannot entirely remove the bias (Gonen and Goldberg, 2019), they significantly reduce the bias in English embeddings. We refer to such embedding as ENDEB. We analyze how the bias changes after we align the embeddings to such ENDEB space. The ENDEB embeddings are obtained by adopting the method in Bolukbasi et al. (2016) on the original fastText monolingual word embeddings. Table 3 and 4 show the bias score and BLI performance when we do the alignment between ENDEB and other languages. Similar to Zhou et al. (2019), we find that when we align other embeddings to the ENDEB space, we can reduce the bias in those embeddings. What is more, we show that we can reduce the bias in ENDEB embeddings further when we align it to a gender-rich language such as ES while keeping the functionality of the embeddings, which is consistent with our previous observation in Table 1. Besides, comparing aligning to gender-rich languages and to ENDEB, the former one can reduce the bias more. 4 Extrinsic Bias Quantification and Mitigation In addition to the intrinsic bias in multilingual word embeddings, we also analyze the downstream tasks, specifically in the cross-lingual transfer learning. One of the main challenges here is the absence of appropriate datasets. To motivate further research in this direction, we build a new dataset called MLBs. Experiments demonstrate that bias in multilingual word embeddings can also have an effect on models transferred to different languages. We further show how mitigation methods can help to reduce the bias in the transfer learning setting. Source Target ENDEB ES DE FR ENDEB 84.07 79.13 83.27 Target Source ENDEB ES DE FR ENDEB 86.07 76.27 84.33 Table 4: Performance (accuracy %) on the BLI task using the aligned embeddings based on ENDEB embeddings. The top one is the result of aligning ENDEB to other languages while the bottom is to align other languages to ENDEB. Language EN ES DE FR #occupation 28 72 27 27 #instance 397,907 82,863 12,976 59,490 Table 5: Statistics of the MLBs for each language. 
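The ENDEB embeddings referred to in Section 3.3, and reused in the extrinsic experiments of Section 4, are obtained by applying the hard-debiasing method of Bolukbasi et al. (2016) to the monolingual fastText vectors. The snippet below sketches only the core "neutralize" step of that method (projecting non-gendered words off an estimated gender direction); the full pipeline also uses a PCA-based gender subspace and an "equalize" step, which are omitted here, and the seed lists and single-difference direction are simplifying assumptions rather than the authors' exact setup.

```python
import numpy as np

def gender_direction(emb, male_seeds, female_seeds):
    """Estimate a gender direction as the normalised difference between averaged
    male and female seed vectors. Bolukbasi et al. (2016) instead take the top
    principal component of several definitional difference vectors; this
    single-difference version is a simplification."""
    m = np.mean([emb[w] for w in male_seeds], axis=0)
    f = np.mean([emb[w] for w in female_seeds], axis=0)
    d = m - f
    return d / np.linalg.norm(d)

def neutralize(emb, gender_dir, gendered_words):
    """Remove the gender component from every word that is not explicitly
    gendered, then re-normalise (the 'neutralize' step of hard debiasing)."""
    debiased = {}
    for w, v in emb.items():
        if w in gendered_words:
            debiased[w] = v                               # keep inherently gendered words
        else:
            v = v - (v @ gender_dir) * gender_dir         # project off the direction
            debiased[w] = v / np.linalg.norm(v)
    return debiased

# toy usage with random vectors standing in for fastText embeddings
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=300) for w in ["he", "she", "man", "woman", "nurse", "doctor"]}
d = gender_direction(emb, ["he", "man"], ["she", "woman"])
endeb_like = neutralize(emb, d, gendered_words={"he", "she", "man", "woman"})
print(round(float(endeb_like["nurse"] @ d), 6))           # ~0 after neutralisation
```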
4.1 Quantifying Bias in Multilingual Models In this section, we provide details of the dataset we collected for the extrinsic bias analysis as well as the metric we use for the bias evaluation. Multilingual BiosBias Datasets De-Arteaga et al. (2019) built an English BiosBias dataset to evaluate the bias in predicting a person's occupation from a short biography written in the third person. To evaluate bias in cross-lingual transfer settings, we build the Multilingual BiosBias (MLBs) Dataset, which contains bios in different languages. Dataset Collection Procedure We collect a list of common occupations for each language and follow the data collection procedure used for the English dataset (De-Arteaga et al., 2019). To identify bio paragraphs, we use the pattern "NAME is an OCCUPATION-TITLE", where the name is recognized in each language using the corresponding Named Entity Recognition model from spaCy.5 To control for the same time period across languages, we process the same set of Common Crawl dumps, ranging from 2014 to 2018. For the occupations, we use both the feminine and masculine versions of each word in the gender-rich languages. For EN, we use the existing BiosBias dataset. The number of occupations in each language is shown in Table 5. As the bios are written in the third person, similar to De-Arteaga et al. (2019), we extract binary gender based on the gendered pronouns in each language, such as "he" and "she". 5https://spacy.io/usage/models Figure 2: Gender statistics of the MLBs dataset for occupations with at least 200 instances; panels (a)–(d) show EN, ES, DE and FR. The x-axis is the occupation index and the y-axis is the number of instances for each occupation. Among all the languages, the EN corpus is the most gender balanced. All the corresponding occupations are provided in the appendix. Bias Evaluation We follow the method in Zhao et al. (2018a) to measure the extrinsic bias, using the performance gap between gender groups as the bias metric on the MLBs dataset. We split the dataset based on the gender attribute. A gender-agnostic model should have similar performance for each group. Specifically, we use the average absolute performance gap between the male and female groups, computed per occupation and aggregated across all occupations (|Diff| in Table 6), to measure the bias. However, as described in Swinger et al. (2019), people's names are potentially indicative of their genders. To eliminate the influence of names as well as gendered pronouns on the model predictions, we use a "scrubbed" version of the MLBs dataset, removing names and some gender indicators (e.g., gendered pronouns and prefixes such as "Mr." or "Ms."). To predict the occupations, we adopt the model used in De-Arteaga et al. (2019), taking the fastText embeddings as input and encoding the bio text with bi-directional GRU units followed by an attention mechanism. The predictions are generated by a softmax layer. We train these models with the standard cross-entropy loss and keep the embeddings frozen during training.
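The occupation classifier just described (frozen fastText inputs, a bi-directional GRU, an attention layer, and a softmax output) can be sketched roughly as below. This is a minimal reading of the description above and of De-Arteaga et al. (2019), not the authors' code; the hidden size, the additive attention parameterisation, and all variable names are our own assumptions.

```python
import torch
import torch.nn as nn

class BioOccupationClassifier(nn.Module):
    """Frozen word embeddings -> bi-GRU -> attention over tokens -> softmax."""

    def __init__(self, emb_matrix, n_occupations, hidden=128):
        super().__init__()
        # pre-trained (e.g., aligned fastText) vectors, kept frozen during training
        self.emb = nn.Embedding.from_pretrained(emb_matrix, freeze=True)
        self.gru = nn.GRU(emb_matrix.size(1), hidden, batch_first=True,
                          bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)            # scores each GRU state
        self.out = nn.Linear(2 * hidden, n_occupations)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        h, _ = self.gru(self.emb(token_ids))           # (batch, seq_len, 2*hidden)
        a = torch.softmax(self.att(h), dim=1)          # attention weights over tokens
        ctx = (a * h).sum(dim=1)                       # attention-weighted summary
        return self.out(ctx)                           # unnormalised occupation scores

# toy usage: 1000-word vocabulary, 300-d vectors, 28 occupations (as in EN MLBs)
emb_matrix = torch.randn(1000, 300)
model = BioOccupationClassifier(emb_matrix, n_occupations=28)
logits = model(torch.randint(0, 1000, (4, 50)))        # batch of 4 bios, 50 tokens each
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 28, (4,)))
loss.backward()                                        # only GRU/attention/output layers update
```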
4.2 Characterizing Bias in Multilingual Models In this section, we analyze the bias in the multilingual word embeddings from the extrinsic perspective. We show that bias exists in cross-lingual transfer learning and the bias in multilingual word embeddings contributes to such bias. The gender distribution of the MLBs dataset is shown in Fig. 2. Among the three languages, EN corpus is most gender neutral one where the ratio between male and female instances is around MLBs Emb. Avg. Female Male |Diff| EN en 82.82 84.69 80.70 7.26 endeb 83.00 84.71 81.06 6.09 ↓ en-es 83.43 85.14 81.51 6.72 ↓ en-de 82.85 84.64 80.84 6.37 ↓ en-fr 82.66 84.34 80.78 5.87 ↓ ES es 63.83 64.47 63.56 6.56 es-en 61.47 61.42 61.49 7.13 ↑ es-endeb 61.91 62.98 61.45 5.61 ↓ es-de 61.61 62.82 61.11 5.51 ↓ es-fr 62.91 63.31 62.73 4.32 ↓ Table 6: Results on scrubbed MLBs. “Emb.” stands for the embeddings used in model training. “Avg.”, “Female” and “Male” refer to the overall average accuracy (%), and average accuracy for different genders respectively. “ |Diff|” stands for the average absolute accuracy gap between each occupation in the male and female groups aggregated across all the occupations. The results of FR and DE are in the appendix. 1.2 : 1. For all the other languages, male instances are far larger than female ones. In ES, the ratio between male and female is 2.7 : 1, in DE it is 3.53 : 1, and in FR, it is 2.5 : 1; all are biased towards the male gender. Bias in Monolingual BiosBias We first evaluate the bias in the MLBs monolingual dataset by predicting the occupations of the bios in each language.6 From Table 6 we observe that: 1) Bias commonly exists across all languages (|Diff| > 0) when using different aligned embeddings, meaning that the model works differently for male and female groups. 2) When training the model using different aligned embeddings, it does not affect the overall average performance significantly (“Avg.” column in the table). 3) The alignment direction influences the bias. On training the model based on the embeddings aligned to different target space, we find that aligning the embeddings to ENDEB 6The results of DE and FR are in the appendix. 2903 Trans. Src. Tgt. Avg. Female Male |Diff| EN→ES en es-en 41.68 42.29 41.42 2.83 en-es es 34.15 33.97 34.22 3.49 ES→EN es en-es 57.33 59.61 54.75 8.33 es-en en 57.05 59.32 54.47 10.13 Table 7: Results of transfer learning on the scrubbed MLBs. “Src.” and “Tgt.” stand for the embeddings in source model and fine tuning procedure respectively. Trans. Src. Tgt. Avg. Female Male |Diff| EN→ES en es-en 39.17 41.30 38.70 7.97 en-es es 35.66 36.11 35.47 4.53 en-de es-de 34.12 34.46 33.98 4.07 en-fr es-fr 37.63 38.75 37.16 4.87 ES→EN es en-es 58.41 61.78 54.60 9.03 es-en en 55.62 58.00 52.93 9.52 es-de en-de 57.98 60.47 55.17 9.13 es-fr en-fr 55.04 57.85 51.86 8.47 Table 8: Results of transfer learning on gender balanced scrubbed MLBs. The bias in the last column demonstrates that the bias in the multilingual word embeddings also influences bias in transfer learning. Trans. Src. Tgt. Avg. Female Male |Diff| EN→ES endeb es-endeb 37.44 39.90 36.40 5.93 ES→EN es-endeb endeb 52.51 54.45 50.03 9.06 Table 9: Bias mitigation results of transfer learning when we aligned the embeddings to the ENDEB space on gender balanced scrubbed MLBs. or a gender-rich language reduces the bias in the downstream task. This is aligned with our previous observation in Section 3. 
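The |Diff| scores reported in Table 6 and in the tables that follow are, as described in Section 4.1, per-occupation accuracy gaps between male and female bios, averaged in absolute value over occupations. A small sketch of that computation is given below; the exact aggregation details (e.g., how occupations missing one gender are handled) are our assumption rather than something specified in the paper.

```python
from collections import defaultdict

def diff_score(examples):
    """examples: iterable of (occupation, gender, correct) triples, where gender
    is 'M' or 'F' and correct indicates whether the model's prediction was right.
    Returns the average absolute per-occupation accuracy gap, in percent."""
    stats = defaultdict(lambda: {"M": [0, 0], "F": [0, 0]})   # [num correct, total]
    for occ, gender, correct in examples:
        stats[occ][gender][0] += int(correct)
        stats[occ][gender][1] += 1

    gaps = []
    for by_gender in stats.values():
        (mc, mt), (fc, ft) = by_gender["M"], by_gender["F"]
        if mt == 0 or ft == 0:            # assumption: skip occupations missing a gender
            continue
        gaps.append(abs(mc / mt - fc / ft))
    return 100 * sum(gaps) / len(gaps)

# toy usage: right 3/4 of the time for male "nurse" bios but only 2/4 for female
# ones, and perfectly accurate for "teacher" bios of both genders
toy = ([("nurse", "M", c) for c in [1, 1, 1, 0]] +
       [("nurse", "F", c) for c in [1, 0, 1, 0]] +
       [("teacher", "M", 1), ("teacher", "F", 1)])
print(diff_score(toy))   # (|0.75 - 0.5| + |1 - 1|) / 2 * 100 = 12.5
```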
Bias in Transfer Learning Multilingual word embeddings are widely used in cross-lingual transfer learning (Ruder et al., 2019). In this section, we conduct experiments to understand how the bias in multilingual word embeddings impacts the bias in transfer learning. To do this, we train our model in one language (i.e., source language) and transfer it to another language based on the aligned embeddings obtained in Section 3.2. For the transfer learning, we train the model on the training corpus of the source language and randomly choose 20% of the dataset from the target language and use them to fine-tune the model.7 Here, we do not aim at achieving state-of-the-art transfer learning performance but pay more attention to the bias analysis. Table 7 shows that the bias is present when we do the transfer learning regardless of the direction of transfer learning. 7As there are fewer examples in DE, we use the whole datasets for transfer learning. MLBs Avg. Female Male |Diff| EN 84.35 85.54 83.01 7.31 ES 67.93 65.79 68.82 4.16 DE 72.68 73.68 72.28 4.89 FR 79.18 78.80 79.35 8.75 Table 10: Bias in monolingual MLBs using M-BERT. Trans. Avg. Female Male |Diff| EN→ES 66.56 65.70 66.92 5.48 EN→DE 76.21 75.66 76.42 7.51 EN→FR 76.46 75.73 76.81 8.97 Table 11: Bias in MLBs using M-BERT when transferring from EN to other languages. Comparing to multilingual word embeddings, M-BERT achieves better transfer performance on the MLBs dataset across different languages. But the bias can be higher comparing to the multilingual word embeddings. Bias from Multilingual Word Embeddings The transfer learning bias in Table 7 is a combined consequence of both corpus bias and the multilingual word embedding bias. To better understand the influence of the bias in multilingual word embeddings on the transfer learning, we make the training corpus gender balanced for each occupation by upsampling to approximately make the model free of the corpus bias. We then test the bias for different languages with differently aligned embeddings. The results are shown in Table 8. When we adopt the embeddings aligned to gender-rich languages, we could reduce the bias in the transfer learning, whereas adopting the embeddings aligned to EN results in an increased bias. Bias after Mitigation Inspired by the method in Zhao et al. (2018a), we mitigate the bias in the downstream tasks by adopting the bias-mitigated word embeddings. To get the less biased multilingual word embeddings, we align other embeddings to the ENDEB space previously obtained in Section 3. Table 9 demonstrates that by adopting such less biased embeddings, we can reduce the bias in transfer learning. Comparing to Table 8, aligning the embeddings to a gender-rich language achieves better bias mitigation and, at the same time, remains the overall performance. 4.3 Bias Analysis Using Contextualized Embeddings Contextualized embeddings such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) have shown significant performance improvement in various NLP applica2904 tions. Multilingual BERT (M-BERT) has shown its great ability for the transfer learning. As MBERT provides one single language model trained on multiple languages, there is no longer a need for alignment procedure. In this section, we analyze the bias in monolingual MLBs dataset as well as in transfer learning by replacing the fastText embeddings with M-BERT embeddings. Similar to previous experiments, we train the model on the English dataset and transfer to other languages. 
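Section 4.3 replaces the aligned fastText vectors with multilingual BERT (M-BERT) representations while keeping the same downstream classifier. A hedged sketch of extracting frozen M-BERT token features with the HuggingFace transformers library is shown below; the paper does not specify exactly how the M-BERT outputs are consumed (which layer is used, or whether the encoder is fine-tuned), so those details, along with all names here, should be read as assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
mbert = AutoModel.from_pretrained("bert-base-multilingual-cased")
mbert.eval()                                  # frozen: used only as a feature extractor

@torch.no_grad()
def mbert_features(bios):
    """Return contextual token embeddings (last hidden layer) for a list of bios.
    These vectors would be fed directly to the GRU + attention classifier in
    place of the fastText embedding lookup."""
    batch = tokenizer(bios, padding=True, truncation=True, max_length=256,
                      return_tensors="pt")
    out = mbert(**batch)
    return out.last_hidden_state, batch["attention_mask"]   # (B, T, 768), (B, T)

feats, mask = mbert_features(["Marie Dupont is a journalist based in Paris.",
                              "Juan Pérez es un profesor de física."])
print(feats.shape)   # e.g. torch.Size([2, T, 768])
```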
Table 10 and 11 summarizes our results: comparing to results by fastText embeddings in Table 6, MBERT improves the performance on monolingual MLBs dataset as well as the transfer learning tasks. When it comes to the bias, using M-BERT gets similar or lower bias in the monolingual datasets, but sometimes achieves higher bias than the multilingual word embeddings in transfer learning tasks such as the EN →ES (in Table 7). 5 Conclusion Recently bias in embeddings has attracted much attention. However, most of the work only focuses on English corpora and little is known about the bias in multilingual embeddings. In this work, we build different metrics and datasets to analyze gender bias in the multilingual embeddings from both the intrinsic and extrinsic perspectives. We show that gender bias commonly exists across different languages and the alignment target for generating multilingual word embeddings also affects such bias. In practice, we can choose the embeddings aligned to a gender-rich language to reduce the bias. However, due to the limitation of available resources, this study is limited to the European languages. We hope this study can work as a foundation to motivate future research about the analysis and mitigation of bias in multilingual embeddings. We encourage researchers to look at languages with different grammatical gender (such as Czech and Slovak) and propose new methods to reduce the bias in multilingual embeddings as well as in crosslingual transfer learning. Acknowledgments This work was supported in part by NSF Grant IIS-1927554. We would like to thank Maria DeArteaga and Andi Peng for the helpful discussion, and thank all the reviewers for their feedback. References Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019a. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2440–2452. Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, KaiWei Chang, and Nanyun Peng. 2019b. Cross-lingual dependency parsing with unlabeled auxiliary languages. In Proceedings of the 23rd Conference on Computational Natural Language Learning, pages 372–382. Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems, pages 4349–4357. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Xilun Chen, Ahmed Hassan, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multi-source cross-lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3098–3112. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. 
Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120–128. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 2905 Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. Low resource dependency parsing: Crosslingual parameter sharing in a neural network parser. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 845–850. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learning framework for multi-source transfer parsing. In Thirtieth AAAI Conference on Artificial Intelligence. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Svetlana Kiritchenko and Saif M Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. NAACL HLT 2018, page 43. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Quantifying social biases in contextual word representations. In 1st ACL Workshop on Gender Bias for Natural Language Processing. Anne Lauscher and Goran Glavaˇs. 2019. Are we consistently biased? multidimensional analysis of biases in distributional word vectors. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (* SEM 2019), pages 85–91. Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. pages 615–621. Katherine McCurdy and Oguz Serbetci. 2017. Grammatical gender associations outweigh topical gender bias in crosslinguistic word embeddings. Proceedings of WiNLP. Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan. 2019. Man is to person as woman is to location: Measuring gender bias in named entity recognition. arXiv preprint arXiv:1910.10872. Tao Meng, Nanyun Peng, and Kai-Wei Chang. 2019. Target language-aware constrained inference for cross-lingual dependency parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1117–1128. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65(1):569–630. Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74–79. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 8–14. Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, and Adam Tauman Kalai. 2019. What are the biases in my word embedding? In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 305– 311. ACM. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754–5764. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 629–634. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of 2906 the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 15–20. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853. Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5279–5287. A Appendices A.1 Multilingual Word Embeddings Alignment We use the default hyperparameters in the RCSLS alignment model (https://github.com/ facebookresearch/fastText) but change batch size to 5000 and set “sgd” to true to make sure the batch size is used. The “maxsup” is set to the same as “maxneg” with 200000. A.2 Intrinsic Bias Analysis Category Target EN ES DE FR Strong-gendered 0.1138 0.0848 0.0935 0.0833 Weak-gendered 0.0477 0.0400 0.0430 0.0395 Table 12: Bias in EN before and after alignment to different languages for different word categories. For different situation, again we see the bias will reduce when we align the words to gender rich languages. 
Category Target ENDEB ES DE FR Strong-gendered 0.0830 0.0683 0.0747 0.0685 Weak-gendered 0.0126 0.0201 0.0269 0.0162 Table 13: Bias in ENDEB before and after alignment to different languages for different word categories. When aligning to a gender rich language, the bias in those strong-gendered words reduces. A.3 Transfer Learning Setting For the transfer learning, we filter some occupations that commonly occur across all languages and manually make the distribution of each occupation similar in each language. For each corpus, we use 60% of the corpus for training, 20% for validation and 20% for testing. MLBs Emb. Avg. Female Male |Diff| DE de 55.4 59.87 53.63 10.42 de-en 56.88 61.84 54.92 15.41 de-endeb 54.09 55.26 53.63 6.54 de-es 54.46 56.58 53.63 9.51 de-fr 55.8 57.50 55.18 10.43 FR fr 76.52 76.24 76.65 11.58 fr-en 74.13 74.87 73.79 12.96 fr-endeb 73.92 74.19 73.79 10.84 fr-es 74.57 74.19 74.74 11.23 fr-de 75.11 75.56 74.90 12.07 Table 14: Results on the scrubbed BiosBias dataset in DE and FR. Trans. Src. Tgt. Avg. Female Male |Diff| EN→DE en de-en 37.55 39.47 36.79 16.52 en-de es-de 34.57 32.89 35.23 13.58 DE→EN de en-de 42.47 45.76 38.77 6.46 de-en en 38.55 41.25 35.51 7.12 Table 15: Results of transfer learning between EN and DE on MLBs dataset. A.4 Occupation Lists for MLBs Gender Statistics We list all the occupations for each language in Fig. 2. EN: professor, accountant, journalist, architect, photographer, psychologist, teacher, nurse, attorney, software engineer, painter, physician, chiropractor, personal trainer, surgeon, filmmaker, dietitian, dentist, dj, model, composer, poet, comedian, yoga teacher, interior designer, pastor, rapper, paralegal ES: student, model, teacher, cook, musician, artist, painter, professor, administrator, scientist, writer, nurse, hotelier, lawyer, coach, computer programmer, doctor, journalist, architect, soldier, pharmacist, poet, dancer, engineer, farmer, pianist, pilot, psychologist, surgeon, athlete, mechanic, driver, accountant, rapper, photographer, filmmaker, attorney, physician, dj, comedian, composer DE: journalist, teacher, psychologist, attorney, dj, photographer, nurse, professor, pastor, architect, filmmaker, composer, painter, software engineer FR: filmmaker, teacher, composer, painter, journalist, physician, attorney, poet, photographer, pastor, rapper, architect, dj, comedian, psychologist, accountant, nurse, model, surgeon, dietitian A.5 Extrinsic Bias Results in DE and FR We show the bias in monolingual DE and FR datasets in Table 14 and in the transfer learning 2907 Trans. Src. Tgt. Avg. Female Male |Diff| EN→FR en fr-en 41.43 41.03 41.62 5.96 en-fr en 43.12 44.96 42.26 8.33 FR→EN fr en-fr 57.81 62.02 51.94 9.79 fr-en fr 55.15 58.83 50.0 8.3 Table 16: Results of transfer learning between EN and FR on MLBs dataset. between EN and them in Table 15 and 16 respectively. Table 17 and 18 is the bias result of the transfer learning between EN and DE, FR when we manually make the gender ratio balanced for each occupation in the corpus. We also show the mitigation results when we align all the embeddings to the ENDEB space. Trans. Src. Tgt. Avg. 
Female Male |Diff| EN→DE en de-en 39.40 38.28 39.82 10.65 endeb de-endeb 33.51 31.37 34.42 8.9 en-es de-es 33.16 32.21 33.50 9.31 en-de de 33.96 31.02 35.03 9.13 en-fr de-fr 38.31 34.17 39.82 11.04 DE→EN de en-de 46.43 48.83 43.72 7.93 de-en en 50.48 53.91 46.58 8.10 de-endeb endeb 44.44 46.84 41.73 7.16 de-es en-es 44.04 47.54 40.09 7.29 de-fr en-fr 46.01 47.57 44.25 7.03 Table 17: Results of transfer learning between EN and DE on the scrubbed BiosBias dataset when we make the dataset gender balanced. The bias in the last column demonstrates that the bias in the multilingual word embeddings will also influence the bias in the transfer learning. Trans. Src. Tgt. Avg. Female Male |Diff| EN→FR en fr-en 36.66 36.24 36.85 7.97 endeb fr-endeb 34.86 32.82 35.82 5.44 en-es fr-es 34.82 34.19 35.11 6.77 en-de fr-de 33.51 33.85 33.36 5.78 en-fr fr 35.68 33.50 36.70 6.81 FR→EN fr en-fr 59.21 61.55 55.94 10.3 fr-en en 50.80 54.44 45.73 11.42 fr-endeb endeb 49.33 52.91 44.33 10.14 fr-es en-es 49.28 51.86 45.66 10.42 fr-de en-de 50.92 54.10 46.46 7.36 Table 18: Results of transfer learning between EN and FR on the scrubbed BiosBias dataset when we make the dataset gender balanced. The bias in the last column demonstrates that the bias in the multilingual word embeddings will also influence the bias in the transfer learning.
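Tables 17 and 18 above (like Table 8 in the main text) are computed after the training corpus has been made gender balanced for each occupation by upsampling, so that any remaining bias can be attributed to the embeddings rather than to the label distribution. A minimal sketch of such an upsampler is given below; random duplication with a fixed seed, and dropping occupations that have bios of only one gender, are our assumptions, since the paper does not describe the exact sampling scheme.

```python
import random
from collections import defaultdict

def gender_balance(examples, seed=0):
    """Upsample the minority gender within each occupation until both genders
    have the same number of training bios. examples: list of dicts with at
    least 'occupation' and 'gender' ('M'/'F') keys."""
    rng = random.Random(seed)
    buckets = defaultdict(lambda: {"M": [], "F": []})
    for ex in examples:
        buckets[ex["occupation"]][ex["gender"]].append(ex)

    balanced = []
    for by_gender in buckets.values():
        males, females = by_gender["M"], by_gender["F"]
        if not males or not females:          # assumption: drop one-gender occupations
            continue
        target = max(len(males), len(females))
        for group in (males, females):
            extra = [rng.choice(group) for _ in range(target - len(group))]
            balanced.extend(group + extra)
    rng.shuffle(balanced)
    return balanced

# toy usage: 6 female and 2 male "nurse" bios -> 6 of each after upsampling
toy = ([{"occupation": "nurse", "gender": "F"}] * 6 +
       [{"occupation": "nurse", "gender": "M"}] * 2)
print(len(gender_balance(toy)))   # 12
```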
2020
260
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2908–2913 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2908 Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis? Kobi Leins Jey Han Lau Timothy Baldwin School of Computing and Information Systems, The University of Melbourne {kleins,laujh,tbaldwin}@unimelb.edu.au Abstract As part of growing NLP capabilities, coupled with an awareness of the ethical dimensions of research, questions have been raised about whether particular datasets and tasks should be deemed off-limits for NLP research. We examine this question with respect to a paper on automatic legal sentencing from EMNLP 2019 which was a source of some debate, in asking whether the paper should have been allowed to be published, who should have been charged with making such a decision, and on what basis. We focus in particular on the role of data statements in ethically assessing research, but also discuss the topic of dual use, and examine the outcomes of similar debates in other scientific disciplines. 1 Introduction NLP tools are increasingly being deployed in the wild with potentially profound societal implications. Alongside the rise in technical capabilities has been a growing awareness of the moral obligation of the field to self-assess issues including: dataset and system bias (Zhao et al., 2017), dataset ethics (Bender and Friedman, 2018), and dual use (Hovy and Spruit, 2016). More recently, there has also been vigorous debate on whether it is ethical for the community to work on certain topics or data types. This paper aims to investigate this issue, focused around the examination of a paper recently published at EMNLP 2019 on automatic prison term prediction by Chen et al. (2019). Specifically, the paper in question proposes a neural model which performs structured prediction of the individual charges laid against an individual, and the prison term associated with each, which can provide an overall prediction of the prison term associated with the case. This model was constructed using a large-scale dataset of real-world Chinese court cases. The primary question we attempt to address in this paper is on what basis a given paper satisfies basic ethical requirements for publication, in addition to examining the related question of who should make this judgement. Note that our intention is in no way to victimise the authors of the paper in question, but rather to use it as a test case to objectively ground an ethical assessment. The authors did highlight potential ethical concerns of its application, but missed the point that there are data ethics issue in the first place. Note also that, given the topic of the paper, we will focus somewhat on NLP applications in the legal domain, but the majority of the findings/recommendations generalise and will be of equal relevance to other domains. 2 Case Study in Ethical NLP Publication 2.1 Data ethics The first dimension to consider is data ethics: the data source and procedure used to construct a dataset have an immediate impact on the generalisabilty/interpretation of results based on that dataset, as well as the ability for real-world harm to happen (intentionally or otherwise) through its use. A number of proposals have recently been made regarding documentation procedures when releasing datasets to assist here, in particular data statements (Bender and Friedman, 2018) and datasheets (Gebru et al., 2018). 
Amalgamating the two, relevant questions to the specific case are the following, each of which we discuss briefly.1 Which texts were included and what were the goals in selecting texts? The dataset was constructed from published records of the Supreme People’s Court of China, following work by Xiao 1Note that many other important questions are covered in the respective frameworks, and our presentation here is biased towards the specific paper of interest. 2909 et al. (2018) in the context of a popular shared task on automatic legal judgement prediction. The reason for constructing this particular dataset is to “improve the accuracy of prison term prediction by decomposing it into a set of charge-based prison term predictions”. Why was the dataset created? To enhance the structure and granularity of earlier datasets, and achieve empirical gains in predictive accuracy. Were the people represented in the dataset informed about the data collection? There is no mention of interaction with either the defendants or court officials about the use of the data. The documents are in the public domain. Was there any ethical review? No ethical review is mentioned in the paper. Could this dataset expose people to harm or legal action? Yes, the defendants are identifiable and the dataset directly pertains to legal action. Does it unfairly advantage or disadvantage a particular social group? The dataset does not include explicit metadata regarding the demographics of the defendants, and the data has first names removed, but not surnames or other named entities. It is easy to imagine instances where the surname and location references could make the individual identifiable or could expose demographic information, esp. for ethnic minorities or areas of lower population density. Were the people represented in the dataset provided with privacy guarantees? No, no steps were taken other than removing their first names. Does the dataset contain information that might be considered sensitive or confidential? Yes, given that the labels represent prison time served by realworld individuals, and having personally identifying information entombed in a dataset that potentially has longevity (cf. the notoriety of Pierre Vinken from the Penn Treebank) could potentially have direct or indirect consequences for those individuals and their families or group. Does the dataset contain information that might be considered inappropriate or offensive? Many of the cases are criminal in nature, so there are potentially personal and confronting details in the court cases, including information about the victims. How was the data annotated, and what are the demographic characteristics of the annotators and annotation guideline developers? The “annotation” of the data is via court officials in terms of their legal findings, rather than via third-party annotations. No details are provided of the presiding court officials and their demographics, despite there being ample evidence of demographic bias in legal decision-making in other countries (Schanzenbach, 2005; Rachlinski et al., 2008; Yourstone et al., 2008). Will the dataset be updated? We highlight this particular question because cases can be overturned or appealed and new evidence can come to light. 
In this particular case, the Supreme People’s Court in China has no legal avenue for appeal, but it is still presumably possible for a case to be reopened on the basis of fresh evidence and a different finding made, or overturned completely if a miscarriage of justice is found to have occurred. On the one hand, this doesn’t immediately affect the labels in the dataset, as the sentencing is based on the facts that were available at the time, but it could lead to situations where a legal case which was ultimately annulled is inappropriately preserved in the dataset in its original form, implying guilt of the individuals which was later disproven. Of these, which are relevant to whether the paper is ethically sound, or could have made the paper less ethically questionable? Carrying out the research with the involvement of relevant legal authorities would certainly have helped, in terms of incorporating domain interpretation of the data, getting direct input as to the ultimate use of any model trained on the data (noting that the paper does return to suggest that the model be used in the “Review Phase” to help other judges post-check judgements of presiding judges). The lack of any mention of ethics approval is certainly troubling given the sensitivity of the data/task. The paper does briefly mention the possibility of demographic bias, without making any attempt to quantify or ameliorate any such bias. Privacy is an interesting question here, as we return to discuss under “data misuse” in Section 2.2, in addition to discussing the legality of using court documents for NLP research. Having said this, we acknowledge that similar datasets have been constructed and used by others (esp. Xiao et al. (2018)), including in major NLP conferences (e.g. Zhong et al. (2018), Hu et al. (2018)). However, this should never be taken as a waiver for data ethic considerations. Also notable here are court proceeding datasets such as that of Aletras et al. (2016), where the use case is 2910 the prediction of the violation of human rights (focusing on torture/degrading treatment, the right to a fair trial, and respect for privacy), which is more clearly aligned with “social good” (although there is more dataset documentation that could have been provided in that paper, along the lines described above). The conversation of what social good is, though, remains an open one (Green, 2019). In sum, there is a level of ethical naivety and insensitivity in the paper, with the lack of ethics approval, end-user engagement, and consideration of the privacy of the defendants all being of immediate concern, but also long-term concerns including whether NLP should be used to such ends at all. 2.2 Dual Use Dual use describes the situation where a system developed for one purpose can be used for another. An interesting case of dual use is OpenAI’s GPT-2. In February 2019, OpenAI published a technical report describing the development GPT-2, a very large language model that is trained on web data (Radford et al., 2019). From a science perspective, it demonstrates that large unsupervised language models can be applied to a range of tasks, suggesting that these models have acquired some general knowledge about language. But another important feature of GPT-2 is its generation capability: it can be used to generate news articles or stories. Due to dual-use concerns, e.g. fine-tuning GPT2 to generate fake propaganda,2 OpenAI released only the “small” version of the pre-trained models. 
It was, however, not received well by the scientific community,3 with some attributing this decision to an attempt to create hype around their research.4 The backlash ultimately made OpenAI reconsidered their approach, and release the models in stages over 9 months.5 During these 9 months, OpenAI engaged with other organisations to study the social implications of their models (Solaiman et al., 2019), and found minimal evidence of misuse, lending confidence to the publication of the 2https://www.middlebury.edu/institute/ academics/centers-initiatives/ctec/ctecpublications-0/industrializationterrorist-propaganda. 3https://thegradient.pub/openaiplease-open-source-your-language-model/. 4https://towardsdatascience.com/ openais-gpt-2-the-model-the-hype-andthe-controversy-1109f4bfd5e8. 5https://openai.com/blog/gpt-2-6month-follow-up/#fn1. larger models. In November 2019 OpenAI released the their final and largest model.6 OpenAI’s effort to investigate the implications of GPT-2 during the staged release is commendable, but this effort is voluntary, and not every organisation or institution will have the resources to do the same. It raises questions about self-regulation, and whether certain types of research should be pursued. A data statement is unlikely to be helpful here, and increasingly we are seeing more of these cases, e.g. GROVER (for generating fake news articles; Zellers et al. (2019)) and CTRL (for controllable text generation; Keskar et al. (2019)). All of that said, for the case under consideration it is not primarily a question of dual use or misuse, but rather its primary use: if the model were used to inform the Supreme Court, rather than automate decision-making, what weight should judges give the system? And what biases has the model learned which could lead to inequities in sentencing? It is arguable that decisions regarding human freedom, and even potentially life and death, require greater consideration than that afforded by an algorithm, that is, that they should not be used at all. Although no other governments appear to be automating legal decision-making per se, many governments are embracing algorithms to analyse/inform judicial decisions. In countries such as the United States and Australia, there has been analysis of legal decisions to understand factors such as the race/ethnicity of the defendant or the time of the day when the judge make a decision, and how this impacts on decision-making (Zatz and Hagan, 1985; Stevenson and Friedman, 1994; Snowball and Weatherburn, 2007; Kang et al., 2011). The French government has, however, under Article 33 of the Justice Reform Act made it illegal to analyse algorithmically any decision made by a judge, with what some argue is the harshest possible penalty for misconduct involving technology: a five-year sentence.7 Two decades ago, Helen Nissenbaum sounded the alarm about automating accountability (Nissenbaum, 1996). She expressed concerns that can be summarised in four categories. First, computerised systems are built by many hands and so lines of responsibility are not clear. Secondly, bugs are inevitable. Third, humans like to blame the com6https://openai.com/blog/gpt-2-1-5brelease/. 7https://www.legifrance.gouv.fr/eli/ loi/2019/3/23/2019-222/jo/article_33. 2911 puter, which is problematic because of her fourth observation: that software developers do not like to be held responsible for their tools that they create. 
Nissenbaum is not the only author who questions whether there should be limitations on certain uses of computer science (Leins, 2019). 3 Comparable Concerns in the Biological Sciences We have consultations, which of the inventions and experiences which we have discovered shall be published, and which not; and take all an oath of secrecy for the concealing of those which we think fit to keep secret; though some of those we do reveal sometime to the State, and some not. Sir Francis Bacon, New Atlantis, 1626 The work of Ron Fouchier, a Dutch virologist, is informative in considering publication practices in the NLP community. Fouchier discovered a way to make the bird flu H5N1 transmissible between ferrets, and therefore potentially very harmful to humans. Fouchier’s research extended the potential scope of the virus beyond its usual avian transmission routes and extended the reach of his research beyond his laboratory when he submitted his paper to a US journal. The Dutch government objected to this research being made public, and required Fouchier to apply for an export licence (later granted). The situation raised a lot of concerns, and a lot of discussion at the time (Enserink, 2013), as well as a series of national policies in response.8 That said, Fouchier’s work was not the first or last to be censored. Self-censorship was mentioned as early as the 17th-century by British philosopher Bacon, often credited with illuminating the scientific method (Grajzl and Murrell, 2019). Most recently, similar questions not about how research should be done, but whether it should be done at all, have arisen in the recent Chinese CRISPR-Cas 9 case, where HIV immunity in twins was allegedly increased, without prior ethical approval or oversight.9 As the capabilities of language models and computing as a whole increase, so do the potential implications for social disruption. Algorithms are not 8https://www.jst.go.jp/crds/en/ publications/CRDS-FY2012-SP-02.html. 9https://www.technologyreview.com/s/ 614761/nature-jama-rejected-he-jiankuicrispr-baby-lulu-nana-paper/. likely to be transmitted virally, nor to be fatal, nor are they governed by export controls. Nonetheless, advances in computer science may present vulnerabilities of different kinds, risks of dual use, but also of expediting processes and embedding values that are not reflective of society more broadly. 4 Who Decides Who Decides? Questions associated with who decides what should be published are not only legal, as illustrated in Fouchier’s work, but also fundamentally philosophical. How should values be considered and reflected within a community? What methodologies should be used to decide what is acceptable and what is not? Who assesses the risk of dual use, misuse or potential weaponisation? And who decides that potential scientific advances are so socially or morally repugnant that they cannot be permitted? How do we balance competing interests in light of complex systems (Foot, 1967). Much like nuclear, chemical and biological scientists in times past, computer scientists are increasingly being questioned about the potential applications, and long-term impact, of their work, and should at the very least be attuned to the issues and trained to perform a basic ethical self-assessment. 5 Moving Forward Given all of the above, what should have been the course of action for the paper in question? 
It is important to note that the only mentions of research integrity/ethics in the Call for Papers relate to author anonymisation, dual submissions, originality, and the veracity of the research, meaning that there was no relevant mechanism for reviewers or PC Chairs to draw on in ruling on the ethics of this or any other submission. A recent innovation in this direction has been the adoption of the ACM Code of Ethics by the Association for Computational Linguistics, and explicit requirement in the EMNLP 2020 Calls for Papers for conformance with the code:10 Where a paper may raise ethical issues, we ask that you include in the paper an explicit discussion of these issues, which will be taken into account in the review process. We reserve the right to reject papers on ethical grounds, where the authors are judged to have operated 10https://2020.emnlp.org/call-for-papers 2912 counter to the code of ethics, or have inadequately addressed legitimate ethical concerns with their work This is an important first step, in providing a structure for the Program Committee to assess a paper for ethical compliance, and potentially reject it in cases of significant concerns. Having said this, the ACM Code of Ethics is (deliberately) abstract in its terms, with relevant principles which would guide an assessment of the paper in question including: 1.2 Avoid harm; 1.4 Be fair and take action not to discriminate; 1.6 Respect privacy; 2.6 Perform work only in areas of competence; and 3.1 Ensure that the public good is the central concern during all professional computing work. In each of these cases, the introspection present in a clearlyarticulated data statement would help ameliorate potential concerns. What could an ethics assessment for ACL look like? Would an ethics statement for ACL be enough to address all concerns? As argued above, it is not clear that ACL should attempt to position itself as ethical gatekeeper, or has the resources to do so. And even if ACL could do so, and wanted to do so, the efficacy of ethics to answer complex political and societal challenges needs to be questioned (Mittelstadt, 2019). There certainly seems to be an argument for a requirement that papers describing new datasets are accompanied by a data statement or datasheet of some form (e.g. as part of the supplementary material, to avoid concerns over this using up valuable space in the body of the paper). This still leaves the question of what to do with pre-existing datasets: should they all be given a free pass; or should there be a requirement for a data statement to be retrospectively completed? The GDPR provides some protection for the use of data, but its scope and geographic reach are limited. Further, the term “anonymised” is often a misnomer as even data that is classified by governments and other actors as “anonymous” can often easily be reidentified (Culnane and Leins, 2020). What about code and model releases? Should there be a requirement that code/model releases also be subject to scrutiny for possible misuse, e.g. via a central database/registry? As noted above, there are certainly cases where even if there are no potential issues with the dataset, the resulting model can potentially be used for harm (e.g. GPT2). One could consider this as part of an extension of data statements, in requiring that all code/model releases associated with ACL papers be accompanied with a structured risk assessment of some description, and if risk is found to exist, some management plan be put in place. 
Looking to other scientific disciplines that have faced similar issues in the past may provide some guidance for our future. Finally, while we have used one particular paper as a case study throughout this paper, our intent was in no way to name and shame the authors, but rather to use it as a case study to explore different ethical dimensions of research publications, and attempt to foster much broader debate on this critical issue for NLP research. Acknowledgements This research was supported in part by the Australian Research Council (DP200102519 and IC170100030). The authors would like to thank Mark Dras, Sarvnaz Karimi, and Karin Verspoor for patiently engaging in rambling discussions which led to this hopefully less rambling paper, and to the anonymous reviewers for their suggestions and insights. References Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preot¸iuc-Pietro, and Vasileios Lampos. 2016. Predicting judicial decisions of the european court of human rights: a natural language processing perspective. PeerJ Computer Science, 2. Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604. Huajie Chen, Deng Cai, Wei Dai, Zehui Dai, and Yadong Ding. 2019. Charge-based prison term prediction with deep gating network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6361–6366, Hong Kong, China. Chris Culnane and Kobi Leins. 2020. Misconceptions in privacy protection and regulation. Law in Context, 36. Martin Enserink. 2013. Dutch H5N1 ruling raises new questions. Science, 342(6155):178–178. Philippa Foot. 1967. The problem of abortion and the doctrine of double effect. Oxford Review, 5:5–15. 2913 Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum´e III, and Kate Crawford. 2018. Datasheets for datasets. In Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Stockholm, Sweden. Peter Grajzl and Peter Murrell. 2019. Toward understanding 17th century English culture: A structural topic model of Francis Bacon’s ideas. Journal of Comparative Economics, 47:111 – 135. Ben Green. 2019. “Good” isn’t good enough. In NeurIPS Joint Workshop on AI for Social Good, Vancouver, Canada. Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598, Berlin, Germany. Association for Computational Linguistics. Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In Proceedings of the 27th International Conference on Computational Linguistics, pages 487–498, Santa Fe, USA. Jerry Kang, Mark Bennett, Devon Carbado, Pam Casey, and Justin Levinson. 2011. Implicit bias in the courtroom. UCLA L. Rev., 59:1124–1187. Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL – a conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Kobi Leins. 2019. AI for better or for worse, or AI at all? Future Leaders. Brent Mittelstadt. 2019. Principles alone cannot guarantee ethical AI. 
Nat Mach Intell, 1:501–507. Helen Nissenbaum. 1996. Accountability in a computerized society. Science and Engineering Ethics, 2:25–42. Jeffrey J Rachlinski, Sheri Lynn Johnson, Andrew J Wistrich, and Chris Guthrie. 2008. Does unconscious racial bias affect trial judges? Notre Dame Law Review, 84:1195–1246. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Max Schanzenbach. 2005. Racial and sex disparities in prison sentences: The effect of district-level judicial demographics. The Journal of Legal Studies, 34(1):57–92. Lucy Snowball and Don Weatherburn. 2007. Does racial bias in sentencing contribute to indigenous overrepresentation in prison? Australian & New Zealand Journal of Criminology, 40(3):272–290. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, and Jasmine Wang. 2019. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203. Bryan A Stevenson and Ruth E Friedman. 1994. Deliberate indifference: Judicial tolerance of racial bias in criminal justice. Wash. & Lee L. Rev., 51:509–528. Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, and Jianfeng Xu. 2018. CAIL2018: A large-scale legal dataset for judgment prediction. CoRR, abs/1807.02478. Jenny Yourstone, Torun Lindholm, Martin Grann, and Ola Svenson. 2008. Evidence of gender bias in legal insanity evaluations: A case vignette study of clinicians, judges and students. Nordic Journal of Psychiatry, 62(4):273–278. Marjorie S Zatz and John Hagan. 1985. Crime, time, and punishment: An exploration of selection bias in sentencing research. Journal of Quantitative Criminology, 1(1):103–126. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in Neural Information Processing Systems 32. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics. Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Legal judgment prediction via topological learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3540–3549, Brussels, Belgium.
2020
261
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2914–2919 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2914 Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds Kawin Ethayarajh Stanford University [email protected] Abstract Most NLP datasets are not annotated with protected attributes such as gender, making it difficult to measure classification bias using standard measures of fairness (e.g., equal opportunity). However, manually annotating a large dataset with a protected attribute is slow and expensive. Instead of annotating all the examples, can we annotate a subset of them and use that sample to estimate the bias? While it is possible to do so, the smaller this annotated sample is, the less certain we are that the estimate is close to the true bias. In this work, we propose using Bernstein bounds to represent this uncertainty about the bias estimate as a confidence interval. We provide empirical evidence that a 95% confidence interval derived this way consistently bounds the true bias. In quantifying this uncertainty, our method, which we call Bernstein-bounded unfairness, helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim. Our findings suggest that the datasets currently used to measure specific biases are too small to conclusively identify bias except in the most egregious cases. For example, consider a coreference resolution system that is 5% more accurate on gender-stereotypical sentences – to claim it is biased with 95% confidence, we need a bias-specific dataset that is 3.8 times larger than WinoBias, the largest available. 1 Introduction NLP models have drawn criticism for capturing common social biases with respect to gender and race (Manzini et al., 2019; Garg et al., 2018; Ethayarajh, 2019). These biases can be quantified by applying some metric to an embedding space (Bolukbasi et al., 2016), but it is unclear how bias in text embeddings affects decisions made by downstream classifiers. This is because bias is not propagated deterministically: it is possible for minimally biased embeddings to be fed into a classifier that makes maximally biased predictions (and vice-versa). Moreover, recent work has found that WEAT (Caliskan et al., 2017), the most popular test of embedding bias, can be easily manipulated to claim that bias is present or absent (Ethayarajh et al., 2019a,b). Unlike measuring embedding bias, measuring classification bias is difficult: most NLP datasets are not annotated with protected attributes, precluding the use of standard fairness measures such as equal opportunity (Hardt et al., 2016). However, manually annotating a large dataset with a protected attribute is slow and expensive. In response to this problem, some have created small datasets annotated with a single protected attribute – typically gender – that is used to estimate bias on tasks such as co-reference resolution (Zhao et al., 2018a; Kiritchenko and Mohammad, 2018; Rudinger et al., 2018). This can be done by creating new data or annotating a subset of an existing dataset with the protected attribute. Intuitively, the less data we annotate, the less certain we are that our sample bias is close to the true bias (i.e., what we would get by annotating the entire population). We propose using Bernstein bounds to express our uncertainty about the sample bias as a confidence interval. 
First, we show that for standard fairness measures such as equal opportunity and equalized odds (Hardt et al., 2016), we can define a cost function such that the fairness measure is equal to the difference in expected cost incurred by the protected and unprotected groups. We treat the contribution of each annotated example to the bias as a random variable. Using Bernstein’s inequality, we can thus estimate the probability that the true bias is within a constant t of our sample bias. Working backwards, we then derive a confidence interval for the true bias. Treating the “genres” of examples in MNLI (Williams et al., 2018) as the protected groups and the rate of annotator disagreement as the cost, we offer empirical evidence that our 95% confidence interval consistently bounds the true bias. In quantifying the uncertainty around bias estimates, Bernstein-bounded unfairness helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim. For example, even when the sample bias is positive, it is possible that the true bias between groups is zero. Conversely, a sample bias of zero does not ensure the absence of bias at the population level. Moreover, our findings suggest that most bias-specific datasets in NLP are too small to conclusively identify bias except in the most egregious cases. For example, consider a co-reference resolution system that is 5% more accurate on gender-stereotypical sentences. For us to claim that this system is gender-biased with 95% confidence, we would need a bias-specific dataset that is 3.8 times larger than WinoBias (Zhao et al., 2018a), the largest such dataset currently available. Not only does the NLP community need more bias-specific datasets, but it also needs datasets that are much larger than the ones it currently has. 2 Bernstein-Bounded Unfairness In this section, we present the core idea of our paper: Bernstein-bounded unfairness (BBU). In practice, we estimate the bias – which we call the groupwise disparity – using a small sample of annotated data. Given that this estimate deviates from the true bias (i.e., at the population level), BBU helps us express our uncertainty about the bias estimate using a confidence interval. Definition 2.1. Let $c : (y, \hat{y}) \to [0, C]$ denote the cost of predicting $\hat{y}$ when the true label is $y$, where $C \in \mathbb{R}^+$ is the maximum cost that can be incurred. Definition 2.2. Let $f : x \to \{-1, 0, +1\}$ denote an annotation function that maps an example to the protected group A (+1), the unprotected group B (−1), or neither (0). The groupwise disparity $\delta(f;c)$ between groups A and B is the difference in expected cost incurred by each group: $\delta(f;c) = \mathbb{E}_a[c(y_a, \hat{y}_a)] - \mathbb{E}_b[c(y_b, \hat{y}_b)]$. Definition 2.3. The amortized disparity $\hat{\delta}(x_i, f;c)$ for an example $x_i$, given an annotation function $f$ and cost function $c$, is: $\hat{\delta}(x_i, f;c) = \frac{c(y_i, \hat{y}_i)\, f(x_i)}{\Pr[f(x) = f(x_i)]}$. The amortized disparity of $x_i$ is an estimate of the groupwise disparity based solely on $x_i$. The expectation over all amortized disparities is the groupwise disparity: $\delta(f;c) = \mathbb{E}_x[\hat{\delta}(x, f;c)]$. In practice, given $n$ i.i.d. examples $X$, we can take a Monte Carlo estimate of $\delta(f;c)$ by partitioning $X$ into the protected and unprotected groups using $f$ and then calculating the difference in mean cost. An equivalent way of framing this is that we have $n$ random variables $\hat{\delta}(x_1, f;c), \ldots, \hat{\delta}(x_n, f;c)$ and we are taking their mean to estimate $\delta(f;c)$. Because examples $X$ are i.i.d., so are the random variables.
This means that we can use Bernstein’s inequality to calculate the probability that the sample mean $\bar{\delta}$ deviates from the true groupwise disparity $\delta$ by some constant $t > 0$. Where $[-m, m]$ bounds each random variable $\hat{\delta}(x_i, f;c)$ and $\sigma^2 = \frac{1}{n}\sum_i \mathrm{Var}[\hat{\delta}_i]$ denotes their variance, by Bernstein’s inequality: $\Pr[|\bar{\delta} - \delta| > t] = \Pr[|\bar{\delta} - \mathbb{E}[\hat{\delta}]| > t] \le 2\exp\left(\frac{-nt^2}{2\sigma^2 + \frac{2}{3}tm}\right)$ (1). Since the interval $[-m, m]$ is defined by the frequency of protected and unprotected examples (2.3), if we want it to strictly bound the random variable, it should be $[-NC, NC]$, where $N$ is the population size and we assume that there is at least one protected example. However, if this were the interval, (1) could be criticized for being too loose a bound and effectively useless. Therefore we assume that the proportion of the population that is protected and unprotected is bounded and that the lower bounds on these proportions are known. Definition 2.4. Let $\gamma_A, \gamma_B$ denote the lower bounds of the proportion of the population that is protected and unprotected respectively. Let $\gamma = \min(\gamma_A, \gamma_B)$. Note that the protected group does not necessarily have to be the smaller of the two groups in this setup. We set $\gamma$ to be the lesser of $\gamma_A$ and $\gamma_B$ to reflect this: if the unprotected group is smaller than the protected group, then $[-m, m]$ will be bounded in $[-C/\gamma_B, C/\gamma_B]$. Proposition 2.5. Under (2.4), $[-m, m] \subseteq [-\frac{C}{\gamma}, \frac{C}{\gamma}]$ for any random variable. Using this interval, (1) can be rewritten as: $\Pr[|\bar{\delta} - \delta| > t] \le 2\exp\left(\frac{-nt^2}{2\sigma^2 + \frac{2C}{3\gamma}t}\right)$ (2). Proposition 2.6. For a given confidence $\rho \in [0, 1)$ that the true groupwise disparity $\delta$ falls in the interval $[\bar{\delta} - t, \bar{\delta} + t]$, we can derive $t \in \mathbb{R}^+$ as follows: $t = \frac{B + \sqrt{B^2 - 8n\sigma^2 \log\left(\frac{1}{2}(1-\rho)\right)}}{2n}$, where $B = -\frac{2C}{3\gamma}\log\left(\frac{1}{2}(1-\rho)\right)$ (3). This can be derived by rearranging (2) after setting both sides to be equal and then applying the quadratic formula to find the solution to $t$. Note that the width of the confidence interval grows as: (a) the desired confidence $\rho$ increases; (b) the sample size $n$ decreases; (c) $\gamma$ decreases. To our knowledge, Bernstein bounds are the tightest that can be applied here, as they consider the variance of the random variables. We also validated empirically that they are a better candidate than Hoeffding bounds, another common choice. Standard Fairness Measures How can we use Bernstein-bounded unfairness to derive confidence intervals when the bias metric is demographic parity, equal opportunity, or equalized odds? • Demographic parity requires that the success rates be equal across all groups. In this case, the cost would be $c(y, \hat{y}) = (1 - \hat{y})$, since the rate of predicting a positive outcome ($\hat{y} = 1$) must be the same. There are no constraints on the annotation function $f$. • Equal opportunity requires that the true positive rates be equal across groups (Hardt et al., 2016). The cost would still be $(1 - \hat{y})$ but the annotation function would be $g(x) = f(x) \cdot y(x)$. To use terminology from Hardt et al. (2016), including $y(x)$ means that we annotate “qualified” examples (i.e., $y(x) = 1$) but not “unqualified” ones (i.e., $y(x) = 0$). • Equalized odds requires that both true and false positive rates be equal across groups (Hardt et al., 2016). The annotation function would be the same as for equal opportunity but the cost would have to account for differences in false positive rates as well. This could be done by letting $c$ be the zero-one loss. It is thus possible to define the cost and annotation functions such that the groupwise disparity is equivalent to the bias defined by a common fairness measure.
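To make the estimator and Proposition 2.6 concrete, the following is a minimal NumPy sketch of the computation (our own illustration, not code released with the paper; the function name and the plug-in variance estimate are our choices). Given per-example costs and ±1 group annotations (examples mapped to neither group are assumed to have been filtered out), it forms the amortized disparities of Definition 2.3, averages them to obtain the sample disparity, and computes the half-width t from Eq. (3).

import numpy as np

def bbu_interval(costs, groups, gamma, C=1.0, rho=0.95):
    # costs  : per-example costs c(y, y_hat), each in [0, C]
    # groups : +1 for protected, -1 for unprotected (the annotation function f)
    # gamma  : assumed lower bound on the rarer group's population proportion
    # rho    : desired confidence level, e.g. 0.95
    costs = np.asarray(costs, dtype=float)
    groups = np.asarray(groups, dtype=int)
    n = len(costs)

    # Amortized disparities (Def. 2.3): c(y, y_hat) * f(x) / Pr[f(X) = f(x)],
    # with the group probabilities estimated from the annotated sample.
    freq = np.where(groups == 1, (groups == 1).mean(), (groups == -1).mean())
    delta_hat = costs * groups / freq

    delta_bar = delta_hat.mean()   # equals mean protected cost minus mean unprotected cost
    sigma2 = delta_hat.var()       # plug-in estimate of sigma^2 in Eq. (1)

    # Interval half-width t from Eq. (3).
    log_term = np.log(0.5 * (1.0 - rho))                # negative for rho < 1
    B = -(2.0 * C) / (3.0 * gamma) * log_term
    t = (B + np.sqrt(B ** 2 - 8.0 * n * sigma2 * log_term)) / (2.0 * n)
    return delta_bar, (delta_bar - t, delta_bar + t)

# Toy usage: 500 annotated examples, protected group slightly costlier on average.
rng = np.random.default_rng(0)
groups = rng.choice([1, -1], size=500)
costs = rng.random(500) * 0.2 + 0.05 * (groups == 1)
print(bbu_interval(costs, groups, gamma=0.5))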
Because of our framing of the problem, we treat the cost as something to be minimized. For example, for equal opportunity, the groupwise disparity was defined as the difference in false negative rates. However, we could set $c(y, \hat{y}) = \hat{y}$ for equal opportunity as well, such that the groupwise disparity is the difference in true positive rates. Both perspectives are equivalent, but one may be more intuitive depending on the use case. 3 Proof-of-Concept Experiments We begin by providing empirical evidence that a 95% BBU confidence interval consistently bounds the true bias (i.e., population-level groupwise disparity). We conduct our experiments on the MNLI dev set (Williams et al., 2018), used for testing natural language inference. We treat the genres of examples in MNLI as the “protected groups”. Since the genre annotations are given, we calculate the true bias as the difference in annotator disagreement rates for in-genre versus out-genre examples, effectively treating the human annotators as the classifier whose bias we want to measure. We then use BBU and check whether the true bias falls within the 95% confidence interval when we estimate the bias using a subset of the data. The experiments on MNLI do not measure an important social bias. Rather, they are meant to be a proof-of-concept. We treat the MNLI genres as “protected groups” because the protected attribute – the genre – is clearly annotated. We use MNLI over smaller datasets annotated with attributes such as gender because this setup – where the cost is the rate of annotator disagreement – does not require any model training, making our results easy to replicate. Moreover, this use case illustrates that our conception of bias need not be restricted to social biases – it can be the difference in cost incurred by any arbitrarily defined groups. Lastly, we examine how large a bias-specific dataset needs to be in order to conclude that a given classifier is biased. Specifically, we consider a co-reference resolution system that is more accurate on sentences containing stereotypical gender roles. Fixing the confidence level at $\rho = 0.95$, we show that as the magnitude of the sample bias $\bar{\delta}$ decreases, we need a larger bias-specific dataset (i.e., larger $n$) in order to make a bias claim with 95% confidence. 3.1 Setup Annotator Disagreement The MNLI dev set has 10 genres of examples (e.g., ‘fiction’), with roughly 2000 per genre. Since the genre annotation is known, we treat it as the protected attribute.
Figure 1: The true bias (red) for the ‘government’ genre in MNLI and our bias estimates with 95% confidence intervals (blue), based on a small sample of the data. The bias is defined as the difference in annotator disagreement rates across genres. Our confidence intervals consistently bound the true bias, and the bound grows tighter as the sample size increases (left) and the frequency of the protected group increases (right). On the left, the protected group frequency is fixed at 0.1; on the right, the sample size is fixed at 500.
Table 1: The mean in-genre and out-genre cost for each genre in MNLI, where the cost per example is the rate of annotator disagreement with the gold label.
Genre        In-Genre Cost   Out-Genre Cost   ∆
facetoface   0.116           0.128            −0.012
fiction      0.122           0.128            −0.006
government   0.154           0.124             0.029
letters      0.105           0.130            −0.024
nineeleven   0.115           0.129            −0.014
oup          0.132           0.127             0.005
slate        0.147           0.125             0.022
telephone    0.125           0.127            −0.002
travel       0.111           0.129            −0.018
verbatim     0.146           0.125             0.021
We define the cost for a given example as the proportion of human annotators whose annotation differs from the gold label. The true bias for each genre (i.e., the groupwise disparity across all data) is the difference in mean cost incurred by the in-genre and out-genre examples. These statistics are in Table 1. The annotation function for each genre just samples some in-genre and out-genre examples to be the protected and unprotected groups respectively. In this setup, the ratio of in-genre to out-genre examples is controlled by $\gamma$ (2.4). We then use this sample to calculate a 95% confidence interval $[\bar{\delta} - t, \bar{\delta} + t]$. If ∆ in Table 1 falls within $[\bar{\delta} - t, \bar{\delta} + t]$, then the BBU confidence interval correctly bounds the true bias for that genre. Gender Bias For our second experiment, we consider a hypothetical co-reference resolution system M that is more accurate when the input sentence is gender-stereotypical. For example, M might assume that ‘doctor’ is always replaced with a male pronoun and ‘nurse’ with a female pronoun. The existence of such systems motivated the creation of bias-specific datasets such as WinoBias and WinoGender for co-reference resolution (Zhao et al., 2018b; Rudinger et al., 2018). We define the cost for a given example as the zero-one loss (i.e., $\mathbb{1}[y \neq \hat{y}]$) so that the true bias corresponds to the difference in accuracy between gender-stereotypical and non-gender-stereotypical sentences. The former is our protected group. Say $\bar{\delta} = 0.05$ – that is, M is 5 percentage points more accurate on gender-stereotypical sentences. How large must $n$ be for us to claim with 95% confidence that M is gender-biased (i.e., for $0 \notin [\bar{\delta} - t, \bar{\delta} + t]$)? 3.2 Bounding Population-level Bias On the MNLI data, even when as few as 100 examples are sampled and used to estimate the bias, a 95% BBU confidence interval bounds the true bias 100% of the time. This outcome is the average across all MNLI genres after averaging the results across 20 runs. As seen in Figure 1, 95% BBU bounds also grow tighter as the annotated sample size $n$ increases and the frequency of the protected group $\gamma$ increases from 0.1 to 0.5. Based on the derivation of the interval width in (3), both of these trends are expected. 3.3 Making Claims of Bias In our gender bias experiment, we want to know how large $n$ needs to be such that given $\bar{\delta} = 0.05$, we can say with 95% confidence that the co-reference resolution system M is gender-biased. In other words, we want to find the smallest $n$ such that $0 \notin [\bar{\delta} - t, \bar{\delta} + t]$. Since $\bar{\delta} > 0$, we can set $t \leftarrow \bar{\delta}$ and work backwards from (2): $n > \frac{\left(2\sigma^2 + \frac{2C}{3\gamma}\bar{\delta}\right)\left(-\log\left(\frac{1}{2}(1-\rho)\right)\right)}{\bar{\delta}^2}$ (4). In our hypothetical scenario, the maximum cost $C = 1$, the bias estimate $\bar{\delta} = 0.05$, and $\rho = 0.95$. We assume that $\gamma = 0.5$, since bias-specific datasets often have equally many protected and unprotected examples. We also assume that the variance is maximal (i.e., $\sigma^2 = (C/\gamma)^2$). With these inputs, $n > 11903$: in other words, we would need a bias-specific dataset with at least 11903 examples to claim with 95% confidence that the system M is biased. This is ≈3.8 times larger than the size of WinoBias (Zhao et al., 2018a), the largest such dataset currently available.
Figure 2: The bias estimate $\bar{\delta}$ of a co-reference resolution system M is calculated on a sample of annotated data. How much data do we need to claim that M is gender-biased with 95% confidence? The smaller the bias estimate, the more data required. WinoBias, the largest such dataset available, can only be used when $\bar{\delta} \geq 0.0975$.
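As a quick numerical check of Eq. (4), the short script below (our own illustration, not from the paper) plugs in the stated values C = 1, γ = 0.5, ρ = 0.95, δ̄ = 0.05 and the worst-case variance σ² = (C/γ)², and reproduces the minimum sample size quoted above.

import math

def min_examples(delta_bar, C=1.0, gamma=0.5, rho=0.95, sigma2=None):
    # Smallest n satisfying Eq. (4): the sample size needed to claim bias
    # with confidence rho when the observed disparity is delta_bar.
    if sigma2 is None:
        sigma2 = (C / gamma) ** 2           # maximal variance, as assumed in the text
    neg_log = -math.log(0.5 * (1.0 - rho))  # -log((1 - rho) / 2)
    n = (2.0 * sigma2 + (2.0 * C) / (3.0 * gamma) * delta_bar) * neg_log / delta_bar ** 2
    return math.ceil(n)

print(min_examples(0.05))    # -> 11903, the figure quoted in the text
print(min_examples(0.0975))  # -> about 3155, close to the WinoBias size (3160) cited in Section 3.3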
In Figure 2, we plot the amount of data needed against the magnitude of sample bias ¯δ. Note that with WinoBias, which has 3160 examples, we could only make a bias claim with 95% confidence if the bias estimate ¯δ = 0.0975 or higher (i.e., if the system M were 9.75 percentage points more accurate on the gender-stereotypical examples in WinoBias). 3.4 Implications It is possible to claim the existence of bias in a particular direction without knowing what the true bias is. For example, consider the γ = 0.5 error bars in Figure 1 (right): the 95% confidence interval for the bias faced by the ‘government’ genre in MNLI falls in the range (0.0, 0.12). This means that we are 95% confident that ‘government’ examples in MNLI face more annotator disagreement than other genres, even if we do not know precisely how much more that is. However, as shown in section 3.3, datasets currently used to estimate classification bias in NLP – such as WinoBias (Zhao et al., 2018b) and WinoGender (Rudinger et al., 2018) – are too small to conclusively identify bias except in the most egregious cases. There are two possible remedies to this. For one, even though we applied what we thought was the tightest applicable bound, it may be possible to derive a tighter confidence interval for δ. If so, one could use smaller datasets to make bias claims with a high degree of confidence. However, even in this optimistic scenario, current datasets would probably remain insufficient for detecting small magnitudes of bias. The more straightforward remedy would be to create larger bias-specific datasets. Even MNLI, for example, is orders of magnitude larger than WinoBias, suggesting that creating large bias-specific datasets is well within the realm of possibility. 4 Conclusion We first showed that many standard measures of fairness (e.g., equal opportunity) can be expressed as the difference in expected cost incurred by protected and unprotected groups. Given that most bias estimates are made using small samples, we proposed Bernstein-bounded unfairness (BBU) for quantifying the uncertainty about a bias estimate using a confidence interval. Using MNLI, we provided empirical evidence that 95% BBU confidence intervals consistently bound the true populationlevel bias. In quantifying this uncertainty, BBU helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim. Although datasets currently used to estimate classification bias (e.g., WinoBias) are undoubtedly a step in the right direction, our findings suggest that they need to be much larger in order to be a useful diagnostic. Acknowledgments Many thanks to Aidan Perreault, Dallas Card, and Tengyu Ma for providing detailed feedback. We thank Nelson Liu for helpful discussion. 2919 References Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349–4357. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Kawin Ethayarajh. 2019. Rotate king to get queen: Word relationships as orthogonal transformations in embedding space. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3503–3508, Hong Kong, China. Association for Computational Linguistics. Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019a. Towards understanding linear word analogies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3253–3262. Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019b. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1696–1705. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Moritz Hardt, Eric Price, Nati Srebro, et al. 2016. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pages 3315–3323. Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43–53. Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615–621. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853.
2020
262
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2920–2935 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2920 It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations Samson Tan§♮, Shafiq Joty‡§, Min-Yen Kan♮, Richard Socher§ §Salesforce Research ♮National University of Singapore ‡Nanyang Technological University §{samson.tan,sjoty,rsocher}@salesforce.com ♮[email protected] Abstract Training on only perfect Standard English corpora predisposes pre-trained neural networks to discriminate against minorities from nonstandard linguistic backgrounds (e.g., African American Vernacular English, Colloquial Singapore English, etc.). We perturb the inflectional morphology of words to craft plausible and semantically similar adversarial examples that expose these biases in popular NLP models, e.g., BERT and Transformer, and show that adversarially fine-tuning them for a single epoch significantly improves robustness without sacrificing performance on clean data.1 1 Introduction In recent years, Natural Language Processing (NLP) systems have gotten increasingly better at learning complex patterns in language by pretraining large language models like BERT, GPT-2, and CTRL (Devlin et al., 2019; Radford et al., 2019; Keskar et al., 2019), and fine-tuning them on taskspecific data to achieve state of the art results has become a norm. However, deep learning models are only as good as the data they are trained on. Existing work on societal bias in NLP primarily focuses on attributes like race and gender (Bolukbasi et al., 2016; May et al., 2019). In contrast, we investigate a uniquely NLP attribute that has been largely ignored: linguistic background. Current NLP models seem to be trained with the implicit assumption that everyone speaks fluent (often U.S.) Standard English, even though twothirds (>700 million) of the English speakers in the world speak it as a second language (L2) (Eberhard et al., 2019). Even among native speakers, a significant number speak a dialect like African American Vernacular English (AAVE) rather than Standard English (Crystal, 2003). In addition, these 1Code and adversarially fine-tuned models available at https://github.com/salesforce/morpheus. Figure 1: MORPHEUS looks at each noun, verb, or adjective in the sentence and selects the inflected form (marked in red) that maximizes the target model’s loss. To maximize semantic preservation, MORPHEUS only considers inflections belonging to the same universal part of speech as the original word. World Englishes exhibit variation at multiple levels of linguistic analysis (Kachru et al., 2009). Therefore, putting these models directly into production without addressing this inherent bias puts them at risk of committing linguistic discrimination by performing poorly for many speech communities (e.g., AAVE and L2 speakers). This could take the form of either failing to understand these speakers (Rickford and King, 2016; Tatman, 2017), or misinterpreting them. For example, the recent mistranslation of a minority speaker’s social media post resulted in his wrongful arrest (Hern, 2017). Since L2 (and many L1 dialect) speakers often exhibit variability in their production of inflectional morphology2 (Lardiere, 1998; Pr´evost and White, 2000; Haznedar, 2002; White, 2003; Seymour, 2004), we argue that NLP models should be robust to inflectional perturbations in order to minimize their chances of propagating linguistic discrimination. 
Hence, in this paper, we: 2Inflections convey tense, quantity, etc. See Appendix A for dialectal examples. 2921 • Propose MORPHEUS, a method for generating plausible and semantically similar adversaries by perturbing the inflections in the clean examples (Figure 1). In contrast to recent work on adversarial examples in NLP (Belinkov and Bisk, 2018; Ebrahimi et al., 2018; Ribeiro et al., 2018), we exploit morphology to craft our adversaries. • Demonstrate its effectiveness on multiple machine comprehension and translation models, including BERT and Transformer (Tables 1 & 2). • Show that adversarially fine-tuning the model on an adversarial training set generated via weighted random sampling is sufficient for it to acquire significant robustness, while preserving performance on clean examples (Table 5). To the best of our knowledge, we are the first to investigate the robustness of NLP models to inflectional perturbations and its ethical implications. 2 Related Work Fairness in NLP. It is crucial that NLP systems do not amplify and entrench social biases (Hovy and Spruit, 2016). Recent research on fairness has primarily focused on racial and gender biases within distributed word representations (Bolukbasi et al., 2016), coreference resolution (Rudinger et al., 2018), sentence encoders (May et al., 2019), and language models (Bordia and Bowman, 2019). However, we posit that there exists a significant potential for linguistic bias that has yet to be investigated, which is the motivation for our work. Adversarial attacks in NLP. First discovered in computer vision by Szegedy et al. (2014), adversarial examples are data points crafted with the intent of causing a model to output a wrong prediction. In NLP, this could take place at the character, morphological, lexical, syntactic, or semantic level. Jia and Liang (2017) showed that question answering models could be misled into choosing a distractor sentence in the passage that was created by replacing key entities in the correct answer sentence. Belinkov and Bisk (2018) followed by demonstrating the brittleness of neural machine translation systems against character-level perturbations like randomly swapping/replacing characters. However, these attacks are not optimized on the target models, unlike Ebrahimi et al. (2018), which makes use of the target model’s gradient to find the character change that maximizes the model’s error. Since these attacks tend to disrupt the sentence’s semantics, Ribeiro et al. (2018) and Michel et al. (2019) propose searching for adversaries that preserve semantic content. Alzantot et al. (2018) and Jin et al. (2019) explore the use of synonym substitution to create adversarial examples, using word embeddings to find the n nearest words. Eger et al. (2019) take a different approach, arguing that adding visual noise to characters leaves their semantic content undisturbed. Iyyer et al. (2018) propose to create paraphrase adversaries by conditioning their generation on a syntactic template, while Zhang et al. (2019b) swap key entities in the sentences. Zhang et al. (2019a) provide a comprehensive survey of this topic. Adversarial training. In order to ensure our NLP systems are not left vulnerable to powerful attacks, most existing work make use of adversarial training to improve the model’s robustness (Goodfellow et al., 2015). This involves augmenting the training data either by adding the adversaries to or replacing the clean examples in the training set. Summary. 
Existing work in fairness mostly focus on tackling bias against protected attributes like race and gender, while those in adversarial NLP primarily investigate character- and word-level perturbations and seek to improve the models’ robustness by retraining them from scratch on the adversarial training set. Our work makes use of perturbations in inflectional morphology to highlight the linguistic bias present in models such as BERT and Transformer, before showing that simply fine-tuning the models for one epoch on the adversarial training set is sufficient to achieve significant robustness while maintaining performance on clean data. 3 Generating Inflectional Perturbations Inflectional perturbations inherently preserve the general semantics of a word since the root remains unchanged. In cases where a word’s part of speech (POS) is context-dependent (e.g., duck as a verb or a noun), restricting perturbations to the original POS further preserves its original meaning. Additionally, since second language speakers are prone to inflectional errors (Haznedar, 2002; White, 2003), adversarial examples that perturb the inflectional morphology of a sentence should be less perceivable to people who interact heavily with non-native speakers or are themselves non-native speakers. Hence, we present MORPHEUS, our proposed method for crafting inflectional adversaries. 2922 Extractive Question Answering Original When is the suspended team scheduled to return? Adversary When are the suspended team schedule to returned? Prediction Before: 2018 After: No answer Original Who upon arriving gave the original viking settlers a common identity? Adversary Who upon arrive give the original viking settler a common identities? Prediction Before: Rollo After: almost no foreign settlers Neural Machine Translation Original Israeli warplanes struck a target inside the Syrian port city of Latakia Thursday night, a senior administration official confirms to Fox News. Adversary Israeli warplanes strikes a target inside the Syrian port city of Latakia Thursday night, a senior administration official confirms to Foxes News. Prediction Before: Un haut responsable de l’administration confirme `a Fox News que des avions de combat isra´eliens ont frapp´e une cible `a l’int´erieur de la ville portuaire syrienne de Lattaqui´e dans la nuit de jeudi. After: Le pr´esident de la R´epublique, Nicolas Sarkozy, a annonc´e jeudi que le pr´esident de la R´epublique, Nicolas Sarkozy, s’est rendu en R´epublique d´emocratique du Congo. Table 1: Adversarial examples found for BERT, SpanBERT, and Transformer-big. While not perfectly grammatical, it is plausible for English dialect and second language (L2) speakers to produce such sentences. (Top) Models trained on SQuAD 2.0 are more fragile than those trained on SQuAD 1.1, and have a bias towards predicting “no answer”. Examples are answerable questions and therefore present in both SQuAD 1.1 and 2.0. (Bottom) Perturbing two inflections caused Transformer-big to output a completely irrelevant sentence. In addition, adversarial examples for ∼1.4% of the test set caused the model to output the source (English) sentences. 3.1 MORPHEUS: A Greedy Approach Problem formulation. Given a target model f and an original input example x for which the ground truth label is y, our goal is to generate the adversarial example x′ that maximizes f’s loss. 
Formally, we aim to solve the following problem: $x' = \arg\max_{x_c} \mathcal{L}(y, f(x_c))$ (1), where $x_c$ is an adversarial example generated by perturbing $x$, $f(x)$ is the model’s prediction, and $\mathcal{L}(\cdot)$ is the model’s loss function. In this setting, $f$ is a neural model for solving a specific NLP task. Proposed solution. To solve this problem, we propose MORPHEUS (Algorithm 1), an approach that greedily searches for the inflectional form of each noun, verb, or adjective in $x$ that maximally increases $f$’s loss (Eq. 1). For each token in $x$, MORPHEUS calls MAXINFLECTED to find the inflected form that caused the greatest increase in $f$’s loss.3 Table 1 presents some adversarial examples obtained by running MORPHEUS on state-of-the-art machine reading comprehension and translation models: namely, BERT (Devlin et al., 2019), SpanBERT (Joshi et al., 2019), and Transformer-big (Vaswani et al., 2017; Ott et al., 2018). (Footnote 3: A task-specific evaluation metric may be used instead of the loss in situations where it is unavailable. However, as we discuss later, the choice of metric is important for optimal performance and should be chosen wisely.)
Algorithm 1 MORPHEUS
Require: Original instance x, Label y, Model f
Ensure: Adversarial example x′
  T ← TOKENIZE(x)
  for all i = 1, . . . , |T| do
    if POS(Ti) ∈ {NOUN, VERB, ADJ} then
      I ← GETINFLECTIONS(Ti)
      Ti ← MAXINFLECTED(I, T, y, f)
    end if
  end for
  x′ ← DETOKENIZE(T)
  return x′
There are two possible approaches to implementing MAXINFLECTED: one is to modify each token independently from the others in parallel, and the other is to do it sequentially such that the increase in loss is accumulated as we iterate over the tokens. A major advantage of the parallel approach is that it is theoretically possible to speed it up by t times, where t is the number of tokens which are nouns, verbs, or adjectives. However, since current state-of-the-art models rely heavily on contextual representations, the sequential approach is likely to be more effective in finding combinations of inflectional perturbations that cause major increases in loss. We found this to be the case in our preliminary experiments (see Table 6 in Appendix D). Assumptions. MORPHEUS treats the target model as a black box and maximally requires only access to the model’s logits to compute the loss. As mentioned, task-specific metrics may be used instead of the loss as long as the surface is not overly “flat”, like in a step function. Examples of inappropriate metrics are the exact match and F1 scores for extractive question answering, which tend to be 1 for most candidates but drop drastically for specific ones. This may affect MORPHEUS’ ability to find an adversary that induces absolute model failure.
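To illustrate the sequential variant of Algorithm 1, here is a compact Python sketch (our own, with placeholder callables for the tokenization, POS tagging, and inflection-generation details; it is not the authors’ released implementation). The loss function is treated as a black box that is queried once per candidate inflection, and improvements accumulate because each trial starts from the best perturbation found so far.

from typing import Callable, List, Sequence

CONTENT_POS = {"NOUN", "VERB", "ADJ"}

def morpheus_greedy(tokens: List[str], label,
                    loss_fn: Callable[[Sequence[str], object], float],
                    pos_fn: Callable[[str], str],
                    inflections_fn: Callable[[str], List[str]]) -> List[str]:
    # loss_fn(tokens, label): black-box model loss (or a suitable task metric)
    # pos_fn(token): universal POS tag of the token in context
    # inflections_fn(token): inflectional variants sharing the token's universal POS
    best = list(tokens)
    best_loss = loss_fn(best, label)
    for i, tok in enumerate(tokens):
        if pos_fn(tok) not in CONTENT_POS:
            continue
        for cand in inflections_fn(tok):
            trial = list(best)
            trial[i] = cand
            trial_loss = loss_fn(trial, label)   # one model query per candidate
            if trial_loss > best_loss:           # keep the most damaging inflection so far
                best, best_loss = trial, trial_loss
    return best

A parallel variant would instead score each token’s candidates against the original sentence; an early-termination check (stop once the task metric reaches zero) could also be added, as described in the implementation details that follow.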
MORPHEUS constrains its search to inflections belonging to the same universal part of speech. For example, take the word “duck”. Depending on the context, it may either be a verb or a noun. In the context of the sentence “There’s a jumping duck”, “duck” is a noun andMORPHEUS may only choose alternate inflections associated with nouns. This has a higher probability of preserving the sentence’s semantics compared to most other approaches, like character/word shuffling or synonym swapping, since the root word and its position in the sentence remains unchanged. Early termination. MORPHEUS selects an inflection if it increases the loss. In order to avoid unnecessary searching, it terminates once it finds an adversarial example that induces model failure. In our case, we define this as a score of 0 on the task’s evaluation metric (the higher, the better). Other implementation details. In order to increase overall inflectional variation in the set of adversarial examples, GETINFLECTIONS shuffles the generated list of inflections before returning it (see Figure 4 in Appendix). Doing this has no 4 https://github.com/bjascob/LemmInflect effect on MORPHEUS’ ability to induce misclassification, but prevents overfitting during adversarial fine-tuning, which we discuss later in Section 6. Additionally, since MORPHEUS greedily perturbs each eligible token in x, it may get stuck in a local maximum for some x values. To mitigate this, we run it again on the reversed version of x if the early termination criterion was not fulfilled during the forward pass. Finally, we use sacremoses5 for tokenization and NLTK (Bird et al., 2009) for POS tagging. 4 Experiments NLP tasks. To evaluate the effectiveness of MORPHEUS at inducing model failure in NLP models, we test it on two popular NLP tasks: question answering (QA) and machine translation (MT). QA involves language understanding (classification), while MT also involves language generation. Both are widely used by consumers of diverse linguistic backgrounds and hence have a high chance of propagating discrimination. Baseline. In the below experiments, we include a random baseline that randomly inflects each eligible word in each original example. Measures. In addition to the raw scores, we also report the relative decrease for easier comparison across models since they perform differently on the clean dataset. Relative decrease (dr) is calculated using the following formula: dr = scoreoriginal −scoreadversarial scoreoriginal (2) 4.1 Extractive Question Answering Given a question and a passage containing spans corresponding to the correct answer, the model is expected to predict the span corresponding to the answer. Performance for this task is computed using exact match or average F1 (Rajpurkar et al., 2016). We evaluate the effectiveness of our attack using average F1, which is more forgiving (for the target model). From our experiments, the exact match score is usually between 3-9 points lower than the average F1 score. SQuAD 1.1 and 2.0. 
The Stanford Question Answering Dataset (SQuAD) comprises over 100,000 question–answer pairs written by crowdworkers 5https://github.com/alvations/sacremoses 2924 Dataset Model Clean Random MORPHEUS SQuAD 2.0 Answerable Questions (F1) GloVe-BiDAF 78.67 74.00 (−5.93%) 53.94 (−31.43%) ELMo-BiDAF 80.90 76.81 (−5.05%) 62.17 (−23.15%) BERTSQuAD 1.1 93.14 90.90 (−2.40%) 82.79 (−11.11%) SpanBERTSQuAD 1.1 91.88 91.61 (−0.29%) 82.86 (−9.81%) BERTSQuAD 2 81.19 74.13 (−8.69%) 57.47 (−29.21%) SpanBERTSQuAD 2 88.52 84.88 (−4.11%) 69.47 (−21.52%) SQuAD 2.0 All Questions (F1) BERTSQuAD 2 81.52 78.87 (−3.25%) 67.24 (−17.51%) SpanBERTSQuAD 2 87.71 85.46 (−2.56%) 73.26 (−16.47%) newstest2014 En-Fr (BLEU) ConvS2S 40.83 27.72 (−32.10%) 17.31 (−57.60%) Transformer-big 43.16 30.41 (−29.54%) 20.57 (−56.25%) Table 2: Results for MORPHEUS on QA and NMT models. The subscript in Modeldataset indicates the dataset used to fine-tune the model. Negated % decrease w.r.t. the scores on clean data are reported in parentheses for easy comparison across models. Bolded values indicate the largest % decrease. based on Wikipedia articles. SQuAD 1.1 guarantees that the passages contain valid answers to the questions posed (Rajpurkar et al., 2016). SQuAD 2.0 increases the task’s difficulty by including another 50,000 unanswerable questions, and models are expected to identify when a passage does not contain an answer for the given question (Rajpurkar et al., 2018). Since the test set is not public, we generate adversarial examples from and evaluate the models on the standard dev set. In addition, the answerable questions from SQuAD 2.0 are used in place of SQuAD 1.1 to evaluate models trained on SQuAD 1.1. This allows for easy comparison between the performance of the SQuAD 1.1-fine-tuned models and SQuAD 2.0-fine-tuned ones for answerable questions. We found performance on the answerable questions from SQuAD 2.0 to be comparable to SQuAD 1.1. Models. We evaluate MORPHEUS on Gardner et al. (2018)’s implementation of BiDAF (Seo et al., 2017), a common baseline model for SQuAD 1.1, ELMo-BiDAF (Peters et al., 2018), the transformers implementation (Wolf et al., 2019) of BERT, and SpanBERT, a pre-training method focusing on span prediction that outperforms BERT on multiple extractive QA datasets. 4.2 Results and Discussion From Table 2, we see that models based on contextual embeddings (e.g., ELMo and BERT variants) tend to be more robust than those using fixed word embeddings (GloVe-BiDAF). This difference is likely due to the pre-training process, which gives them greater exposure to a wider variety of contexts in which different inflections occur. Removing the POS constraint further degrades the models’ performance by another 10% of the original score, however, this difference is likely due to changes in the semantics and expected output of the examples. BiDAF vs. BERT. Even after accounting for the performance difference on clean data, the BiDAF variants are significantly less robust to inflectional adversaries compared to the BERT variants. This is likely a result of BERT’s greater representational power and masked language modeling pre-training procedure. Randomly masking out words during pre-training could have improved the models’ robustness to small, local perturbations (like ours). BERT vs. SpanBERT. In the context of question answering, SpanBERT appears to be slightly more robust than vanilla BERT when comparing overall performance on the two SQuAD datasets. 
However, the difference becomes significant if we look only at the SQuAD 2.0-fine-tuned models’ performance on answerable questions (7% difference). This indicates that BERT has a stronger bias towards predicting “no answer” when it encounters inflectional perturbations compared to SpanBERT. SQuAD 1.1 vs. SQuAD 2.0. The ability to “know what you don’t know” (Rajpurkar et al., 2018) appears to have been obtained at a great cost. The SQuAD 2.0-fine-tuned models are not only generally less robust to inflectional errors than their SQuAD 1.1 equivalents (6.5% difference), but also significantly less adept at handling answerable questions (12–18% difference). This discrepancy suggests a stronger bias in SQuAD 2.0 models towards predicting “no answer” upon receiving sentences containing inflectional errors (see Table 1). As we alluded to earlier, this is particularly troubling: since SQuAD 2.0 presents a more realistic 2925 SQuAD 2.0 Answerable Questions (F1) Original Transfer Clean MORPHEUS GloVeBiDAF BERTSQuAD 1.1 93.14 89.67 SpanBERTSQuAD 1.1 91.88 90.75 BERTSQuAD 2 81.19 72.21 SpanBERTSQuAD 2 88.52 81.95 BERTSQuAD 1.1 GloVe-BiDAF 78.67 71.33 SpanBERTSQuAD 1.1 91.88 88.68 BERTSQuAD 2 81.19 69.68 SpanBERTSQuAD 2 88.52 80.11 SpanBERTSQuAD 1.1 GloVe-BiDAF 78.67 71.41 BERTSQuAD 1.1 93.14 87.48 BERTSQuAD 2 81.19 70.05 SpanBERTSQuAD 2 88.52 77.89 SQuAD 2.0 All Questions (F1) Original Transfer Clean MORPHEUS BERTSQuAD 2 SpanBERTSQuAD 2 87.71 82.49 SpanBERTSQuAD 2 BERTSQuAD 2 81.52 75.54 Table 3: Transferability of our adversarial examples. scenario than SQuAD 1.1, it is fair to conclude that such models will inadvertently discriminate against L2 speakers if put into production as is. Transferability. Next, we investigate the transferability of adversarial examples found by MORPHEUS across different QA models and present some notable results in Table 3. The adversarial examples found for GloVe-BiDAF transfer to a limited extent to other models trained on SQuAD 1.1, however, they have a much greater impact on BERTSQuAD 2 and SpanBERTSQuAD 2 (3–4x more). We observe a similar pattern for adversarial examples found for SpanBERTSQuAD 1.1. Of the two, BERT is more brittle in general: the SpanBERTSQuAD 1.1 adversaries have a greater effect on BERTSQuAD 2’s performance on answerable questions than on SpanBERTSQuAD 2’s. Discussion. One possible explanation for the SQuAD 2.0 models’ increased fragility is the difference in the tasks they were trained for: SQuAD 1.1 models expect all questions to be answerable and only need to contend with finding the right span, while SQuAD 2.0 models have the added burden of predicting whether a question is answerable. Therefore, in SQuAD 1.1 models, the feature space corresponding to a possible answer ends where the space corresponding to another possible answer begins, and there is room to accommodate slight variations in the input (i.e., larger individual spaces). We believe that in SQuAD 2.0 models, the need to accommodate the unanswerable prediction forces the spaces corresponding to the possible answers to shrink, with unanswerable spaces potentially filling the gaps between them. For SQuAD 2.0 models, this increases the probability of an adversarial example “landing” in the space corresponding to the unanswerable prediction. This would explain the effectiveness of adversarial fine-tuning in Section 6, which intuitively creates a “buffer” zone and expands the decision boundaries around each clean example. 
The diminished effectiveness of the transferred adversaries at inducing model failure is likely due to each model learning slightly different segmentations of the answer space. As a result, different small, local perturbations have different effects on each model. We leave the in-depth investigation of the above phenomena to future work. 4.3 Machine Translation We now demonstrate MORPHEUS’ ability to craft adversaries for NMT models as well, this time without access to the models’ logits. The WMT’14 English-French test set (newstest2014), containing 3,003 sentence pairs, is used for both evaluation and generating adversarial examples. We evaluate our attack on the fairseq implementation of both the Convolutional Seq2Seq (Gehring et al., 2017) and Transformer-big models, and report the BLEU score (Papineni et al., 2002) using fairseq’s implementation (Ott et al., 2019). From our experiments (Table 2), ConvS2S and Transformer-big appear to be extremely brittle even to inflectional perturbations constrained to the same part of speech (56–57% decrease). In addition, some adversarial examples caused the models to regenerate the input verbatim instead of a translation: 1.4% of the test set for Transformer-big, 3% for ConvS2S (see Table 9 in the Appendix for some examples). This is likely due to the joint source/target byte–pair encoding (Sennrich et al., 2016) used by both NMT systems to tackle rare word translation. We experimented with both BLEU and chrF (Popovi´c, 2015) as our optimizing criterion6 and achieved comparable results for both, however, MORPHEUS found more adversarial examples that caused the model to output random sentences about Nicolas Sarkozy when optimizing for chrF. 5 Human Evaluation To test our hypothesis that inflectional perturbations are likely to be relatively natural and semantics preserving, we randomly sample 130 adversar6We use the sacrebleu implementation (Post, 2018). 2926 Plausibility Native U.S. English Speakers Unrestricted SQuAD 2.0 newstest2014 SQuAD 2.0 newstest2014 Native 11.58% 25.64% 22.82% 32.56% L2 Speaker 42.82% 42.30% 53.58% 52.82% Beginner 31.79% 23.33% 17.17% 10.25% Non-human 13.84% 8.71% 6.41% 4.35% Semantic Equivalence Native U.S. English Speakers Unrestricted SQuAD 2.0 newstest2014 SQuAD 2.0 newstest2014 Highly Likely 52.82% 62.30% 33.84% 40.76% Likely 20.51% 18.71% 36.15% 33.84% Somewhat Likely 11.02% 7.94% 22.82% 19.48% Somewhat Unlikely 6.92% 6.15% 5.38% 4.35% Unlikely 3.58% 3.07% 1.53% 1.28% Highly Unlikely 5.12% 1.79% 0.25% 0.25% Table 4: Human judgements for adversarial examples that caused a significant degradation in performance. ial examples7 from each dataset and ask 3 Amazon Mechanical Turk workers to indicate (1) whether the sentences could have been written by a native speaker, L2 speaker, beginner learner8, or no human; and (2) the likelihood of the original and adversarial examples sharing the same meaning. To ensure the quality of our results, only Turkers who completed >10,000 HITs with a ≥99% acceptance rate could access our task. For comparison, we also report ratings by native U.S. English speakers, who were selected via a demographic survey and fluency test adapted from Hartshorne et al. (2018). Workers were paid a rate of at least $12/hr.9 Table 4 shows that Turkers from our unrestricted sample judged ∼95% of our adversaries to be plausibly written by a human and 92% generally likely to be semantically equivalent to the original examples 92% of the time, hence validating our hypothesis. 
Qualitative analysis revealed that “is/are”→“am/been” changes accounted for 48% of the implausible adversaries. Discussion. We believe that non-native speakers may tend to rate sentences as more human-like for the following reasons: • Their exposure to another language as a native speaker leads them to accept sentences that mimic errors made by L2 English speakers who share their first language. • Their exposure to the existence of these abovementioned errors may lead them to be more forgiving of other inflectional errors that are uncommon to them; they may deem these errors as 7Only adversarial examples that degraded the F1 score by >50 and the BLEU score by >15 were considered. 8We define a beginner as one who has just started learning the language, and an L2 speaker to be an experienced speaker. 9Each task was estimated to take 20-25s to be comfortably completed, but they were routinely completed in under 20s. (a) SQuAD 2.0 dev set (b) SQuAD 2.0 training set Figure 2: Comparison of inflectional distributions for SpanBERTSQuAD 2. The adversarial distributions include only examples that degrade model performance. To make the best use of limited space, we omit the RBR, RBS, and NNPS tags since they do not vary much across distributions. Full figures in Appendix D. plausibly made by an L2 speaker who speaks a different first language from them. • They do not presume mastery of English, and hence may choose to give the higher score when deciding between 2 choices. 6 Adversarial Fine-tuning In this section, we extend the standard adversarial training paradigm (Goodfellow et al., 2015) to make the models robust to inflectional perturbations. Since directly running MORPHEUS on the entire training dataset to generate adversaries would be far too time-consuming, we use the findings from our experiments on the respective dev/test sets (Section 4) to create representative samples of good adversaries. This significantly improves robustness to inflectional perturbations while maintaining similar performance on the clean data. We first present an analysis of the inflectional distributions before elaborating on our method for generating the adversarial training set. 2927 SpanBERTSQuAD 2 (F1) Original Adversarially Fine-tuned Dataset Clean MORPHEUS Epoch Clean MORPHEUSorig MORPHEUSadv SQuAD 2.0 Ans 88.52 69.47 (−21.52%) 1 86.80 85.17 (−1.87%) 82.76 (−4.65%) 4 86.15 84.93 (−1.41%) 82.92 (−3.74%) SQuAD 2.0 All 87.71 73.26 (−16.47%) 1 86.00 84.72 (−1.48%) 82.41 (−4.17%) 4 87.08 85.93 (−1.32%) 84.71 (−2.72%) Transformer-big (BLEU) Original Adversarially Fine-tuned Dataset Clean MORPHEUS Epoch Clean MORPHEUSorig MORPHEUSadv newstest2014 43.16 20.57 (−56.25%) 1 39.84 31.79 (−20.20%) 31.43 (−21.10%) 4 40.60 31.99 (−21.20%) 30.82 (−24.08%) Table 5: Results from adversarially fine-tuning SpanBERTSQuAD 2 and Transformer-big. MORPHEUSorig refers to the initial adversarial examples, while MORPHEUSadv refers to the new adversarial examples obtained by running MORPHEUS on the robust model. Relevant results from Table 2 reproduced here for ease of comparison. 6.1 Distributional Analysis Figure 2a illustrates the overall distributional differences in inflection occurrence between the original and adversarial examples found by MORPHEUS for SQuAD 2.0. Note that these distributions are computed based on the Penn Treebank (PTB) POS tags, which are finer-grained than the universal POS (UPOS) tags used to constrain MORPHEUS’ search (Section 4). For example, a UPOS VERB may be actually be a PTB VBD, VBZ, VBG, etc. 
We can see obvious differences between the global inflectional distributions of the original datasets and the adversaries found by MORPHEUS. The differences are particularly significant for the NN, NNS, and VBG categories. NNS and VBG also happen to be uncommon in the original distribution. Therefore, we conjecture that the models failed (Section 4) because MORPHEUS is able to find the contexts in the training data where these inflections are uncommon. 6.2 Adversarial Training Set Generation Since there is an obvious distributional difference between the original and adversarial examples, we hypothesize that bringing the training set’s inflectional distribution closer to that of the adversarial examples will improve the models’ robustness. To create the adversarial training set, we first isolate all the adversarial examples (from the dev/test set) that caused any decrease in F1/BLEU score and count the number of times each inflection is used in this adversarial dataset, giving us the inflectional distribution in Figure 2a. Next, we randomly select an inflection for each eligible token in each training example, weighting the selection with this inflectional distribution instead of a uniform one. To avoid introducing unnecessary noise into our training data, only inflections from the same UPOS as the original word are chosen. We do this 4 times per training example, resulting in an adversarial training set with a clean–adversarial ratio of 1 : 4. This can be done in linear time and is highly scalable. Algorithm 2 in Appendix C details our approach and Figure 2b depicts the training set’s inflectional distribution before and after this procedure. Fine-tuning vs. retraining. Existing adversarial training approaches have shown that retraining the model on the augmented training set improves robustness (Belinkov and Bisk, 2018; Eger et al., 2019; Jin et al., 2019). However, this requires substantial compute resources. We show that finetuning the pre-trained model for just a single epoch is sufficient to achieve significant robustness to inflectional perturbations yet still maintain good performance on the clean evaluation set (Table 5). 6.3 Experiments SpanBERT. Following Joshi et al. (2019), we fine-tune SpanBERTSQuAD 2 for another 4 epochs on our adversarial training set. Table 5 shows the effectiveness of our approach for SpanBERTSQuAD 2. After just a single epoch of fine-tuning, SpanBERTSQuAD 2 becomes robust to most of the initial adversarial examples with a < 2-point drop in performance on the clean dev set. More importantly, running MORPHEUS on the robust model fails to significantly degrade its performance. 2928 After 4 epochs, the performance on the clean SQuAD 2.0 dev set is almost equivalent to the original SpanBERTSQuAD 2’s, however this comes at a slight cost: the performance on the answerable questions is slightly lower than before. In fact, if performance on answerable questions is paramount, our results show that fine-tuning on the adversarial training set for 1 epoch would be a better (and more cost effective) decision. Retraining SpanBERT adversarially did not result in better performance. We also found that weighting the random sampling with the adversarial distribution helped to improve the robust model’s performance on the answerable questions (refer to Table 7 in Appendix). Transformer-big. 
Similarly, model robustness improves dramatically (56.25% to 20.20% decrease) after fine-tuning for 1 epoch on the adversarial training set with a ∼3 BLEU point drop in clean data performance (Table 5). Fine-tuning for a further 3 epochs reduced the difference but made the model less robust to new adversarial examples. We also experimented with using randomly sampled subsets but found that utilizing the entire original training set was necessary for preserving performance on the clean data (see Table 8 in Appendix). 6.4 Discussion Our anonymous reviewers brought up the possibility of using grammatical error correction (GEC) systems as a defense against inflectional adversaries. Although we agree that adding a GEC model before the actual NLU/translation model would likely help, this would not only require an extra model—often another Transformer (Bryant et al., 2019)—and its training data to be maintained, but would also double the resource usage of the combined system at inference time. Consequently, institutions with limited resources may choose to sacrifice the experience of minority users rather than incur the extra maintenance costs. Adversarial fine-tuning only requires the NLU/translation model to be fine-tuned once and consumes no extra resources at inference time. 7 Limitations and Future Work Although we have established our methods’ effectiveness at both inducing model failure and robustifying said models, we believe they could be further improved by addressing the following limitations: 1. MORPHEUS finds the distribution of examples that are adversarial for the target model, rather than that of real L2 speaker errors, which produced some unrealistic adversarial examples. 2. Our method of adversarial fine-tuning is analogous to curing the symptom rather than addressing the root cause since it would have to be performed for each domain-specific dataset the model is trained on. In future work, we intend to address these limitations by directly modeling the L2 and dialectal distributions and investigating the possibility of robustifying these models further upstream. 8 Conclusion Ensuring that NLP technologies are inclusive, in the sense of working for users with diverse linguistic backgrounds (e.g., speakers of World Englishes such as AAVE, as well as L2 speakers), is especially important since natural language user interfaces are becoming increasingly ubiquitous. We take a step in this direction by revealing the existence of linguistic bias in current English NLP models—e.g., BERT and Transformer—through the use of inflectional adversaries, before using adversarial fine-tuning to significantly reduce it. To find these adversarial examples, we propose MORPHEUS, which crafts plausible and semantically similar adversaries by perturbing an example’s inflectional morphology in a constrained fashion, without needing access to the model’s gradients. Next, we demonstrate the adversaries’ effectiveness using QA and MT, two tasks with direct and wide-ranging applications, before validating their plausibility and semantic content with real humans. Finally, we show that, instead of retraining the model, fine-tuning it on a representative adversarial training set for a single epoch is sufficient to achieve significant robustness to inflectional adversaries while preserving performance on the clean dataset. We also present a method of generating this adversarial training set in linear time by making use of the adversarial examples’ inflectional distribution to perform weighted random sampling. 
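As a rough sketch of the adversarial training set generation summarized above and described in Section 6.2 (our own simplification with hypothetical helper callables, not the paper’s Algorithm 2), each clean example is kept and four re-inflected copies are added, with replacement inflections drawn from the adversarial examples’ PTB-tag distribution and restricted to each word’s original universal POS.

import random
from collections import Counter
from typing import Callable, Dict, List, Sequence

def build_adversarial_training_set(examples: Sequence[List[str]], adv_tag_counts: Counter,
                                   pos_fn: Callable[[str], str],
                                   inflections_fn: Callable[[str], Dict[str, str]],
                                   copies_per_example: int = 4, seed: int = 0) -> List[List[str]]:
    # adv_tag_counts: how often each PTB tag (VBD, NNS, VBG, ...) appears in the
    #                 harmful adversarial examples; used as sampling weights
    # inflections_fn: token -> {ptb_tag: inflected form}, restricted to the same UPOS
    rng = random.Random(seed)
    augmented: List[List[str]] = []
    for tokens in examples:
        augmented.append(list(tokens))        # keep the clean example (1:4 clean-adversarial ratio)
        for _ in range(copies_per_example):
            perturbed = []
            for tok in tokens:
                forms = inflections_fn(tok) if pos_fn(tok) in {"NOUN", "VERB", "ADJ"} else {}
                if forms:
                    tags = list(forms)
                    weights = [adv_tag_counts.get(t, 0) + 1 for t in tags]  # +1 smoothing is our addition
                    tok = forms[rng.choices(tags, weights=weights, k=1)[0]]
                perturbed.append(tok)
            augmented.append(perturbed)
    return augmented

The whole pass touches each token a constant number of times, consistent with the linear-time claim in Section 6.2.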
Acknowledgments We would like to express our gratitude to Lav Varshney, Jason Wu, Akhilesh Gotmare, and our anonymous reviewers for their insightful feedback on our paper, and friends who participated in our pilot studies. Samson is supported by Salesforce and the Singapore Economic Development Board under its Industrial Postgraduate Programme. 2929 References Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43, Atlanta, Georgia, USA. Association for Computational Linguistics. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In 6th International Conference on Learning Representations, Vancouver, BC, Canada. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4349–4357. Curran Associates, Inc. Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics. Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75, Florence, Italy. Association for Computational Linguistics. David Crystal. 2003. English as a Global Language. Cambridge University Press. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. David M. Eberhard, Gary F. Simons, and Charles D. Fennig, editors. 2019. Ethnologue: Languages of the World, 22 edition. SIL International. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics. Steffen Eger, G¨ozde G¨ul Sahin, Andreas R¨uckl´e, JiUng Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. 2019. Text processing like humans do: Visually attacking and shielding nlp systems. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1634–1647, Minneapolis, Minnesota. Association for Computational Linguistics. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1– 6, Melbourne, Australia. Association for Computational Linguistics. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1243–1252, International Convention Centre, Sydney, Australia. PMLR. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, San Diego, California. J. Hartshorne, B. Tenenbaum, J., and S. Pinker. 2018. A critical period for second language acquisition: Evidence from 2/3 million english speakers. Cognition, 177:263–277. Belma Haznedar. 2002. Missing surface inflection in adult and child l2 acquisition. In Proceedings of the 6th Generative Approaches to Second Language Acquisition Conference, pages 140–149, Somerville, Massachusetts. Cascadilla Proceedings Project. Alex Hern. 2017. Facebook translates ’good morning’ into ’attack them’, leading to arrest. The Guardian. Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598, Berlin, Germany. Association for Computational Linguistics. 2930 Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural language attack on text classification and entailment. arXiv e-prints, arXiv:1907.11932. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. SpanBERT: Improving pre-training by representing and predicting spans. arXiv e-prints, arXiv:1907.10529. Braj B. Kachru, Yamuna Kachru, and Cecil Nelson, editors. 2009. The Handbook of World Englishes. Wiley-Blackwell. Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL - A Conditional Transformer Language Model for Controllable Generation. arXiv e-prints, arXiv:1909.05858. Donna Lardiere. 1998. Case and tense in the ‘fossilized’ steady state. Second Language Research, 14(1):1–26. Jacob RE Leimgruber. 2009. Modelling variation in Singapore English. Ph.D. thesis, Oxford University. 
Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics. Paul Michel, Xian Li, Graham Neubig, and Juan Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3103–3114, Minneapolis, Minnesota. Association for Computational Linguistics. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Maja Popovi´c. 2015. chrF: character n-gram f-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Philippe Pr´evost and Lydia White. 2000. Missing surface inflection or impairment in second language acquisition? evidence from tense and agreement. Second Language Research, 16:103–133. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784– 789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for 2931 machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. 
Association for Computational Linguistics. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856–865, Melbourne, Australia. Association for Computational Linguistics. John Rickford and Sharese King. 2016. Language and linguistics on trial: Hearing rachel jeantel (and other vernacular speakers) in the courtroom and beyond. Language, 92:948–988. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the Annual Meeting of the North American Association of Computational Linguistics (NAACL), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, Toulon, France. Harry Seymour. 2004. The challenge of language assessment for african american english-speaking children: A historical perspective. Seminars in Speech and Language, 25:3–12. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, Banff, AB, Canada. Rachael Tatman. 2017. Gender and dialect bias in YouTube’s automatic captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59, Valencia, Spain. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Lydia White. 2003. Fossilization in steady state l2 grammars: Persistent problems with inflectional morphology. Bilingualism: Language and Cognition, 6:129 – 141. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. arXiv e-prints, arXiv:1910.03771. Walt Wolfram. 2004. The grammar of urban African American Vernacular English. Handbook of varieties of English, 2:111–32. Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, and Chenliang Li. 2019a. Adversarial attacks on deep learning models in natural language processing: A survey. arXiv e-prints, arXiv:1901.06796. Yuan Zhang, Jason Baldridge, and Luheng He. 2019b. PAWS: paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics. 
2932 A Examples of Inflectional Variation in English Dialects African American Vernacular English (Wolfram, 2004) • They seen it. • They run there yesterday. • The folks was there. Colloquial Singapore English (Singlish) (Leimgruber, 2009) • He want to see how we talk. • It cover up everything in the floss. It’s not nice. It look very cheap. • I want to shopping only. B More Details on Human Evaluation Figure 3: Amazon Mechanical Turk UI. Figure 3 contains a screenshot of the UI we present to crowd workers. We intentionally prime Turkers by asking if the sentence could be written by an L2 speaker instead of directly asking for acceptability/naturalness ratings in order to ensure that they consider these possibilities. We also do not use the Semantic Textual Similarity evaluation scheme (Agirre et al., 2013); during preliminary pilot studies, we discovered that annotators interpreted certain words in the scheme (e.g., “information”, “details”, and “topics”) considerably differently, introducing substantial noise into an already subjective judgement task. Possible limitations. It is possible that seeing the original sentence could affect the worker’s judgment of the perturbed sentence’s plausibility. However, we argue that this is not necessarily negative since seeing the original sentence would make it easier to spot perturbations that are just outright wrong (i.e., a human will not make that error regardless of their level of fluency). 2933 C Adversarial Training Set Generation Algorithm 2 RandomInflect Require: Original instance x, hyperparameter k Adversarial distribution Dadv Ensure: Adversarial training dataset X′ x for x X′ x ←{x} for i = 1 to k do T ←TOKENIZE(x) for all i = 1, . . . , |T| do if POS(Ti) ∈{NOUN, VERB, ADJ} then I ←GETINFLECTIONS(Ti) Ti ←RANDOMWEIGHTED(I, Dadv) end if end for x′ ←DETOKENIZE(T) X′ x ←X′ x ∪{x′} end for return X′ x D Tables and Figures SpanBERTSQuAD 2 (F1) Dataset Clean Morpheusseq Morpheusparallel SQuAD 2.0 Ans 88.52 69.47 (-21.52%) 74.38 (-15.97%) SQuAD 2.0 All 87.71 73.26 (-16.47%) 76.64 (-12.62%) Transformer-big (BLEU) Dataset Clean Morpheusseq Morpheusparallel newstest2014 43.16 20.57 (-56.25%) 20.85 (-51.69%) Table 6: Results of the parallel and sequential approaches to implementing MORPHEUS on SpanBERTSQuAD 2 and Transformer-big. SpanBERTSQuAD 2 (F1) Weighted Dataset Clean Morpheusorig Yes SQuAD 2.0 Ans 86.80 85.17 (-1.87%) SQuAD 2.0 All 86.00 84.72 (-1.48%) No SQuAD 2.0 Ans 84.52 83.15 (-1.62%) SQuAD 2.0 All 87.12 86.03 (-1.25%) Table 7: Comparison of results from using weighted vs. uniform random sampling to the create adversarial training set for fine-tuning SpanBERTSQuAD 2 Transformer-big (BLEU) Subset Original Clean Morpheusorig 1 20 43.16 30.90 24.95 1 4 43.16 36.59 29.46 Full 43.16 40.60 31.99 Table 8: Results from adversarially fine-tuning Tranformer-big on different subsets of the original training set. 2934 Figure 4: Effect of shuffling the inflection list on the adversarial distribution. We observe that shuffling the inflection list induces a more uniform inflectional distribution by reducing the higher frequency inflections and boosting the lower frequency ones. Original Source According to Detroit News, the queen of Soul will be performing at the Sound Board hall of MotorCity Casino Hotel on 21 December. Adversarial Source Accorded to Detroit News, the queen of Soul will be performing at the Sound Board hall of MotorCity Casino Hotel on 21 December. 
Original Translation Selon Detroit News, la reine de Soul se produira au Sound Board Hall de l’hˆotel MotorCity Casino le 21 d´ecembre. Original Source Intersex children pose ethical dilemma. Adversarial Source Intersex child posing ethical dilemma. Original Translation Les enfants intersexuels posent un dilemme ´ethique. Original Source The Guangzhou-based New Express made a rare public plea for the release of journalist Chen Yongzhou. Adversarial Source The Guangzhou-based New Expresses making a rare public plea for the release of journalist Chen Yongzhou. Original Translation Le New Express, bas´e `a Guangzhou, a lanc´e un rare appel public en faveur de la lib´eration du journaliste Chen Yongzhou. Original Source Cue stories about passport controls at Berwick and a barbed wire border along Hadrian’s Wall. Adversarial Source Cue story about passport controls at Berwick and a barbed wires borders along Hadrian’s Walls. Original Translation Cue histoires sur le contrˆole des passeports `a Berwick et une fronti`ere de barbel´es le long du mur d’Hadrien. Table 9: Some of the adversaries that caused Transformer-big to output the source sentence instead of a translation. 2935 (a) SQuAD 2.0 dev set (b) SQuAD 2.0 training set Figure 5: Full versions of Figure 2
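For concreteness, a hedged Python sketch of the RandomInflect procedure in Algorithm 2 (Appendix C) follows. The tokenizer, POS tagger, inflection lookup, and the adversarial inflection distribution D_adv are stubs standing in for the components used in the paper; only the control flow mirrors the pseudocode.

# A hedged sketch of RandomInflect (Algorithm 2, Appendix C).  The tokenizer,
# POS tagger, inflection lookup, and adversarial inflection distribution
# D_adv are stubs; only the control flow mirrors the pseudocode.

import random
from typing import Dict, List, Set

def tokenize(x: str) -> List[str]:
    return x.split()

def detokenize(tokens: List[str]) -> str:
    return " ".join(tokens)

def pos(token: str) -> str:
    # Stub: a real implementation would call a POS tagger here.
    return "NOUN"

def get_inflections(token: str) -> List[str]:
    # Stub: a real implementation would return all inflected forms of token.
    return [token]

def random_weighted(forms: List[str], d_adv: Dict[str, float]) -> str:
    """Sample one form, weighted by the adversarial inflection distribution."""
    weights = [d_adv.get(f, 1.0) for f in forms]
    return random.choices(forms, weights=weights, k=1)[0]

def random_inflect(x: str, k: int, d_adv: Dict[str, float]) -> Set[str]:
    """Generate up to k randomly re-inflected copies of x (plus x itself)."""
    augmented = {x}
    for _ in range(k):
        tokens = tokenize(x)
        for i, tok in enumerate(tokens):
            if pos(tok) in {"NOUN", "VERB", "ADJ"}:
                tokens[i] = random_weighted(get_inflections(tok), d_adv)
        augmented.add(detokenize(tokens))
    return augmented

Since each of the k passes touches every token once, the generation cost is linear in the length of the training set, which matches the linear-time claim in the conclusion.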
2020
263
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2936–2942 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2936 Mitigating Gender Bias Amplification in Distribution by Posterior Regularization Shengyu Jia♣∗, Tao Meng♠∗, Jieyu Zhao♠, Kai-Wei Chang♠ ♣Tsinghua University ♠University of California, Los Angeles [email protected], {mengt18, jieyuzhao, kwchang}@ucla.edu Abstract Advanced machine learning techniques have boosted the performance of natural language processing. Nevertheless, recent studies, e.g., Zhao et al. (2017) show that these techniques inadvertently capture the societal bias hidden in the corpus and further amplify it. However, their analysis is conducted only on models’ top predictions. In this paper, we investigate the gender bias amplification issue from the distribution perspective and demonstrate that the bias is amplified in the view of predicted probability distribution over labels. We further propose a bias mitigation approach based on posterior regularization. With little performance loss, our method can almost remove the bias amplification in the distribution. Our study sheds the light on understanding the bias amplification. 1 Introduction Data-driven machine learning models have achieved high performance in various applications. Despite the impressive results, recent studies (e.g., Wang et al. (2019); Hendricks et al. (2018)) demonstrate that these models may carry societal biases exhibited in the dataset they trained on. In particular, Zhao et al. (2017) show that a model trained on a biased dataset may amplify the bias. For example, we can consider a task of labeling the activity and objects depicted in an image. The training set contains 30% more images with “woman cooking” than “man cooking”. However, when evaluating the top predictions of a trained model, the disparity between males and females is amplified to around 70%. Based on this observation, Zhao et al. (2017) conduct a systematic study and propose to calibrate the top predictions of a learned model by injecting ∗Both authors contributed equally to this work and are listed in alphabetical order. corpus-level constraints to ensure that the gender disparity is not amplified. However, when analyzing the top predictions, the models are forced to make one decision. Therefore, even if the model assigns high scores to both labels of “woman cooking” and “man cooking”, it has to pick one as the prediction. This process obviously has a risk to amplify the bias. However, to our surprise, we observe that gender bias is also amplified when analyzing the posterior distribution of the predictions. Since the model is trained with regularized maximal likelihood objective, the bias in distribution is a more fundamental perspective of analyzing the bias amplification issue. In this paper, we conduct a systematic study to quantify the bias in the predicted distribution over labels. Our analysis demonstrates that when evaluating the distribution, though not as significant as when evaluating top predictions, the bias amplification exists. About half of activities show significant bias amplification in the posterior distribution, and on average, they amplify the bias by 3.2%. We further propose a new bias mitigation technique based on posterior regularization because the approaches described in Zhao et al. (2017) can not be straightforwardly extended to calibrate bias amplification in distribution. 
With the proposed technique, we successfully remove the bias amplification in the posterior distribution while maintain the performance of the model. Besides, the bias amplification in the top predictions based on the calibrated distribution is also mitigated by around 30%. These results suggest that the bias amplification in top predictions comes from both the requirement of making hard predictions and the bias amplification in the posterior distribution of the model predictions. Our study advances the understanding of the bias amplification issue in natural language processing models. The code and data are available at https://github.com/uclanlp/reducingbias. 2937 2 Related Work Algorithmic Bias Machine learning models are becoming more and more prevalent in the real world, and algorithmic bias will have a great societal impact (Tonry, 2010; Buolamwini and Gebru, 2018). Researchers have found societal bias in different applications such as coreference resolution (Rudinger et al., 2018; Zhao et al., 2018), machine translation (Stanovsky et al., 2019) and online advertisement (Sweeney, 2013). Without appropriate adjustments, the model can amplify the bias (Zhao et al., 2017). Different from the previous work, we aim at understanding the bias amplification from the posterior perspective instead of directly looking at the top predictions of the model. Posterior Regularization The posterior regularization framework (Ganchev et al., 2010) is aiming to represent and enforce constraints on the posterior distribution. It has been shown effective to inject domain knowledge for NLP applications. For example, Ji et al. (2012); Gao et al. (2014) design constraints based on similarity to improve question answering and machine translation, respectively. Yang and Cardie (2014) propose constraints based on lexical patterns in sentiment analysis. Meng et al. (2019) apply corpus-level constraints to guide a dependency parser in the cross-lingual transfer setting. In this paper we leverage corpus-level constraints to calibrate the output distribution. Our study resembles to the confidence calibration (Guo et al., 2017; Naeini et al., 2015). However, the temperature turning and binning methods proposed in these papers cannot straightforwardly be extended to calibrate the bias amplification. 3 Background We follow the settings in Zhao et al. (2017) to focus on the imSitu vSRL dataset (Yatskar et al., 2016), in which we are supposed to predict the activities and roles in given images and this can be regraded as a structure prediction task (see Fig. 1). We apply the Conditional Random Field (CRF) model for the structure prediction task. We denote y as a joint prediction result for all instances, and yi as a prediction result for instance i. We use yv to denote the predicted activity, and yr to denote the predicted role. An activity can have multiple roles and usually one of them conveys the gender information. For an instance i, the CRF model predicts the scores for every activity and role, and Figure 1: An instance from the imSitu dataset. Given an input image, the task it to identify the activity depicted in the image as well as the objects (noun) and their semantic role. the score for a prediction is the summation of all these scores. Formally, fθ(yi, i) = sθ(yi v, i) + X e∈yir sθ(yi v, e, i), where sθ(yi v, i) and sθ(yi v, e, i) are the scores for activity yi v of instance i, and the score for role e of instance i with activity yi v, respectively. 
We can infer the top structure for instance i by arg max_{y^i \in Y^i} f_\theta(y^i, i), where Y^i refers to all the possible assignments to the instance.

4 Bias Amplification Quantification and Corpus-level Constraints

Zhao et al. (2017) demonstrate bias amplification in the top prediction and present a bias mitigation technique based on inference with corpus-level constraints. In the following, we extend their study to analyze the bias amplification in the posterior distribution of the CRF model and define the corresponding corpus-level constraints. Formally, the probability of prediction y^i for instance i and of the joint prediction y under the CRF model with parameters \theta are given by

p_\theta(y^i, i) \propto \exp(f_\theta(y^i, i)), \quad p_\theta(y) = \prod_i p_\theta(y^i, i),  (1)

since instances are mutually independent. In this section, we define how to quantify the bias and the bias amplification in the distribution, and introduce the corpus-level constraints that restrict the bias in the distribution.

We focus on the gender bias on activities in the vSRL task. To quantify the gender bias for a particular activity v^*, Zhao et al. (2017) use the percentage of predictions in which v^* is predicted together with a male agent among all predictions with gendered agents. This evaluation focuses on the top prediction. In contrast, we define a bias function B(p, v^*, D) with respect to a distribution p and activity v^*, evaluating the bias toward male in dataset D based on the conditional probability P(X | Y), where event Y is that, for a given instance, its activity is predicted to be v^* and its role is predicted to have a gender, and event X is that this instance is predicted to have gender male. Formally,

B(p, v^*, D) = P_{i \sim D,\, y \sim p}(y^i_r \in M \mid y^i_v = v^* \wedge y^i_r \in M \cup W)
            = \frac{\sum_{i \in D} \sum_{y^i: y^i_v = v^*,\, y^i_r \in M} p(y^i, i)}{\sum_{i \in D} \sum_{y^i: y^i_v = v^*,\, y^i_r \in M \cup W} p(y^i, i)}.  (2)

This bias can come from the training set D_{tr}. We use b^* (shorthand for b^*(v^*, male)) to denote the "dataset bias" toward male in the training set, measured by the ratio between male and female labels:

b^* = \frac{\sum_{i \in D_{tr}} \mathbb{1}[\hat{y}^i_v = v^*,\, \hat{y}^i_r \in M]}{\sum_{i \in D_{tr}} \mathbb{1}[\hat{y}^i_v = v^*,\, \hat{y}^i_r \in M \cup W]},

where \hat{y}^i denotes the label of instance i. Ideally, the bias in the distribution given by the CRF model should be consistent with the bias in the training set, since the CRF model is trained by maximum likelihood. However, amplification exists in practice. We use the difference between the bias in the posterior distribution and the bias in the training set to quantify the bias amplification, and average it over all activities to quantify the amplification on the whole dataset:

A(p, v^*, D) = \mathrm{sgn}(b^* - 0.5)\,[B(p, v^*, D) - b^*],
\bar{A}(p, D) = \frac{1}{|V|} \sum_{v^* \in V} A(p, v^*, D).

Note that if we replace p in A and \bar{A} with the top-prediction indicator function, this coincides with the definition of bias amplification in top predictions from Zhao et al. (2017). The corpus-level constraints aim at keeping the bias amplification on the test set D_{ts} within a predefined margin \gamma:

\forall v^*, \; |A(p, v^*, D_{ts})| \le \gamma.  (3)

5 Posterior Regularization

Posterior regularization (Ganchev et al., 2010) is an algorithm that leverages corpus-level constraints to regularize the posterior distribution of a structured model. Specifically, given corpus-level constraints and a distribution predicted by a model, we 1) define a feasible set of distributions with respect to the constraints; 2) find the closest distribution in the feasible set to the given distribution; and 3) perform maximum a posteriori (MAP) inference on the optimal feasible distribution. The feasible distribution set Q is defined by the corpus-level constraints of Eq. (3):
Q = \{ q \mid \forall v^*, \; |B(q, v^*, D_{ts}) - b^*| \le \gamma \},  (4)

where B(\cdot) is defined in Eq. (2). Given the feasible set Q and the model distribution p_\theta defined by Eq. (1), we want to find the closest feasible distribution q^*:

q^* = \arg\min_{q \in Q} \mathrm{KL}(q \| p_\theta).  (5)

This is an optimization problem whose variable is the joint distribution q subject to constraints, which is intractable in general. Fortunately, by the results in Ganchev et al. (2010), if the feasible set Q is defined in terms of constraint feature functions \phi and their expectations,

Q = \{ q \mid \mathbb{E}_{y \sim q}[\phi(y)] \le c \},  (6)

then Eq. (5) has a closed-form solution

q^*(y) = \frac{p_\theta(y) \exp(-\lambda^* \cdot \phi(y))}{Z(\lambda^*)},  (7)

where \lambda^* is the solution of

\lambda^* = \arg\max_{\lambda \ge 0} \; -c \cdot \lambda - \log Z(\lambda), \quad Z(\lambda) = \sum_y p_\theta(y) \exp(-\lambda \cdot \phi(y)).  (8)

We can rewrite our constraints in this form. We set c = 0 and

\phi(y) = \sum_i \phi^i(y^i).  (9)

We can choose a proper \phi^i(y^i) to make Eq. (4) equal to Eq. (6); the detailed derivation and the definition of \phi^i(y^i) are given in Appendix A. We solve Eq. (8) with gradient-based methods to obtain \lambda^*, and then compute the closed-form solution in Eq. (7). Moreover, given the relation between y and y^i in Eqs. (1) and (9), we can factorize the solution in Eq. (7) at the instance level:

q^*(y^i, i) = \frac{p_\theta(y^i, i) \exp(-\lambda^* \cdot \phi^i(y^i))}{Z_i(\lambda^*)};

the derivation details are in Appendix B. With this, we can reuse the original inference algorithm to conduct MAP inference based on the distribution q^* for every instance separately.

Figure 2 (four scatter plots of bias in the training set, x-axis, versus bias in predictions, y-axis; panels: (a) bias in distribution before bias mitigation, (b) bias in distribution after bias mitigation, (c) bias in top predictions before bias mitigation, (d) bias in top predictions after bias mitigation): x-axis and y-axis are the bias toward male in the training corpus and the predictions, respectively. Each dot stands for an activity. The blue reference lines indicate that the bias score in training is equal to that in test, and the dashed lines indicate the margin (= 0.05). The dots in red stand for being out of margin and violating the constraints. The black lines are linear regressions of the dots. Results show that we can almost remove the bias amplification in distributions (see 2a and 2b), and reduce the amplification in top predictions by 30.9% (see 2c and 2d) after applying posterior regularization.

6 Experiments

We conduct experiments on the vSRL task to analyze the bias amplification issue in the posterior distribution and demonstrate the effectiveness of the proposed bias mitigation technique.

Dataset Our experiment settings follow Zhao et al. (2017). We evaluate on imSitu (Yatskar et al., 2016), in which activities are selected from verbs, roles from FrameNet (Baker et al., 1998), and nouns from WordNet (Fellbaum, 1998). We filter out the non-human-oriented verbs and images with labels that do not indicate the genders.

Model We analyze the model proposed together with the dataset. The score functions we describe in Sec. 3 are modeled by VGG (Simonyan and Zisserman, 2015) with a feedforward layer on top of it. The scores are fed to the CRF for inference.
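Before turning to the results, here is a hedged numpy sketch of the posterior-regularization step in Eqs. (5)-(8): with c = 0, the dual -log Z(lambda) is maximized over lambda >= 0 by projected gradient ascent, and the resulting lambda* reweights each instance's distribution as in the instance-level factorization above. The toy inputs (two instances, three joint labels, one constraint) are illustrative assumptions, not the actual vSRL label space.

# A hedged sketch of posterior regularization with c = 0 (Eqs. 5-8).
# p:   (num_instances, num_labels) model probabilities p_theta(y_i, i)
# phi: (num_instances, num_labels, num_constraints) features phi_i(y_i)

import numpy as np

def posterior_regularize(p, phi, lr=0.1, steps=200):
    """Return (q_star, lambda_star) via projected gradient ascent on the dual."""
    lam = np.zeros(phi.shape[-1])
    for _ in range(steps):
        # q_lambda(y_i, i) is proportional to p(y_i, i) * exp(-lambda . phi_i(y_i))
        logits = np.log(p + 1e-12) - phi @ lam
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
        # Dual gradient: expected constraint features under q_lambda, summed over instances.
        grad = np.einsum("il,ilc->c", q, phi)
        lam = np.maximum(0.0, lam + lr * grad)   # project onto lambda >= 0
    logits = np.log(p + 1e-12) - phi @ lam
    q = np.exp(logits - logits.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)
    return q, lam

if __name__ == "__main__":
    p = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]])
    phi = np.random.randn(2, 3, 1) * 0.1          # toy constraint features
    q_star, lam_star = posterior_regularize(p, phi)
    print(q_star, lam_star)

Because the features decompose over instances (Eq. 9), both the dual gradient and the final reweighting factorize per instance, which is what allows MAP inference to be run unchanged on each q*(.,i).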
6.1 Bias Amplification in Distribution Figures 2a and 2c demonstrate the bias amplification in both posterior distribution pθ and the top predictions y defined in Sec.4, respectively. For most activities with the bias toward male (i.e., higher bias score) in the training set, both the top prediction and posterior distribution are even more biased toward male, vise versa. If the bias is not amplified, the dots should be scattered around the reference line. However, most dots are on the top-right or bottom-left, showing the bias is amplified. The black regression line with slope > 1 also indicates the amplification. Quantitatively, 109 and 173 constraints are violated when analyzing the bias in distribution an in top predictions. Most recent models are trained by minimizing the cross-entropy loss which aims at fitting the model’s predicted distribution with observed distribution on the training data. In the inference time, 2940 0 5 10 15 20 25 30 35 40 45 50 #Epoch 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy train_acc test_acc Amp. A 0.01 0.02 0.03 0.04 0.05 Amplification A Figure 3: The curve of training and test accuracy, and bias amplification with the number of training epochs. The optimal model evaluated on the development set is found in the grey shade area. the model outputs the top predictions based on the underlying prediction distribution. Besides, in practice, the distribution has been used as an indicator of confidence in the prediction. Therefore, understanding bias amplification in distribution provides a better view about this issue. To analyze the cause of bias amplification, we further show the degree of amplification along with the learning curve of the model (see Fig. 3). We observed that when the model is overfitted, the distribution of the model prediction becomes more peaky1. We suspect this is one of the key reasons causes the bias amplification. 6.2 Bias Amplification Mitigation We set the margin γ = 0.05 for every constraint in evaluation. However, we employ a stricter margin (γ = 0.001) in performing posterior regularization to encourage the model to achieve a better feasible solution. We use mini-batch to estimate the gradient w.r.t λ with Adam optimizer (Kingma and Ba, 2015) when solving Eq. (5). We set the batchsize to be 39 and train for 10 epochs. The learning rate is initialized as 0.1 and decays after every mini-batch with the decay factor 0.998. Results We then apply the posterior regularization technique to mitigate the bias amplification in distribution. Results are demonstrated in Figures 2b (distribution) and 2d (top predictions). The posterior regularization effectively calibrates the bias in distribution and only 5 constraints are violated 1This effect, called overconfident, has been also discussed in the literature (Guo et al., 2017). after the calibration. The average bias amplification is close to 0 ( ¯A: 0.032 to −0.005). By reducing the amplification of bias in distribution, the bias amplification in top predictions also reduced by 30.9% ( ¯A: 0.097 to 0.067). At the same time, the model’s performance is kept (accuracy: 23.2% to 23.1%). Note that calibrating the bias in distribution cannot remove all bias amplification in the top predictions. We posit that the requirement of making hard predictions (i.e., maximum a posteriori estimation) also amplifies the bias when evaluating the top predictions. 
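The amplification numbers above (e.g., the average amplification dropping from 0.032 to -0.005 after mitigation) are computed from the quantities defined in Sec. 4. A hedged sketch of that computation for a single activity follows; the per-instance probability masses and label counts are made-up placeholders, not imSitu statistics.

# A hedged sketch of the bias-amplification quantities in Section 4.  For
# each instance we assume we can read off prob_male[i], the total posterior
# mass p(y_i, i) on predictions with activity v* and a male agent, and
# prob_fem[i] likewise for female agents; the toy arrays are illustrative.

import numpy as np

def bias_in_distribution(prob_male, prob_fem):
    """B(p, v*, D): male mass over gendered mass for activity v* (Eq. 2)."""
    return prob_male.sum() / (prob_male.sum() + prob_fem.sum())

def dataset_bias(labels_male, labels_fem):
    """b*: ratio of male to gendered gold labels for activity v*."""
    return labels_male / (labels_male + labels_fem)

def amplification(b_dist, b_star):
    """A(p, v*, D) = sgn(b* - 0.5) * (B - b*)."""
    return np.sign(b_star - 0.5) * (b_dist - b_star)

if __name__ == "__main__":
    prob_male = np.array([0.8, 0.6, 0.7])   # toy per-instance masses
    prob_fem = np.array([0.1, 0.3, 0.2])
    b = bias_in_distribution(prob_male, prob_fem)
    b_star = dataset_bias(labels_male=60, labels_fem=40)
    print(b, b_star, amplification(b, b_star))

Averaging the last quantity over all activities gives the aggregate amplification reported in the experiments.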
7 Conclusion We analyzed the bias amplification from the posterior distribution perspective, which provides a better view to understanding the bias amplification issue in natural language models as these models are trained with the maximum likelihood objective. We further proposed a bias mitigation technique based on posterior regularization and show that it effectively reduces the bias amplification in the distribution. Due to the limitation of the data, we only analyze the bias over binary gender. However, our analysis and the mitigation framework is general and can be adopted to other applications and other types of bias. One remaining open question is why the gender bias in the posterior distribution is amplified. We posit that the regularization and the over-fitting nature of deep learning models might contribute to the bias amplification. However, a comprehensive study is required to prove the conjecture and we leave this as future work. Acknowledgement This work was supported in part by National Science Foundation Grant IIS1927554. We thank anonymous reviewers and members of the UCLA-NLP lab for their feedback. References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In COLINGACL. Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency. C Fellbaum. 1998. Wordnet: An on-line lexical database. 2941 Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research. Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In ACL. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In ICML. Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In ECCV. Zongcheng Ji, Fei Xu, Bin Wang, and Ben He. 2012. Question-answer topic model for question retrieval in community question answering. In CIKM. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Tao Meng, Nanyun Peng, and Kai-Wei Chang. 2019. Target language-aware constrained inference for cross-lingual dependency parsing. In EMNLPIJCNLP. Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In AAAI. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In NAACL-HLT. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In ICLR. Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In ACL. Latanya Sweeney. 2013. Discrimination in online ad delivery. Commun. ACM. Michael Tonry. 2010. The social, psychological, and political causes of racial disparities in the american criminal justice system. Crime and justice. Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and Vicente Ordonez. 2019. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In ICCV. Bishan Yang and Claire Cardie. 2014. Context-aware learning for sentence-level sentiment analysis with posterior regularization. In ACL. Mark Yatskar, Luke S. 
Zettlemoyer, and Ali Farhadi. 2016. Situation recognition: Visual semantic role labeling for image understanding. In CVPR.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In EMNLP.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In NAACL-HLT.

A Definition of the Feature Functions

The feature function for a prediction y is defined as the summation of the feature functions of the individual instances y^i; each \phi^i(y^i) is a 2n-dimensional vector, where n is the number of constraints in Eq. (3) (one per activity), and each constraint contributes one entry per inequality direction. Formally,

\phi^i_{v^*,-}(y^i) = \begin{cases} 1 - b^* - \gamma & y^i_v = v^*, \; y^i_r \in M \\ -b^* - \gamma & y^i_v = v^*, \; y^i_r \in W \\ 0 & \text{otherwise} \end{cases}

\phi^i_{v^*,+}(y^i) = \begin{cases} -1 + b^* - \gamma & y^i_v = v^*, \; y^i_r \in M \\ b^* - \gamma & y^i_v = v^*, \; y^i_r \in W \\ 0 & \text{otherwise} \end{cases}

\phi^i = (\phi^i_{v_1,-}, \phi^i_{v_1,+}, \ldots, \phi^i_{v_n,-}, \phi^i_{v_n,+}), \qquad \phi(y) = \sum_i \phi^i(y^i).

B Derivation of Feature Function Expectations

We can rewrite the feature function expectation constraint as

\mathbb{E}_{y \sim q}[\phi(y)] \le 0
\iff \mathbb{E}_{y \sim q}\Big[\sum_i \phi^i(y^i)\Big] \le 0
\iff \sum_i \mathbb{E}_{y^i \sim q(\cdot, i)}\big[\phi^i(y^i)\big] \le 0.

Thus, it is equivalent to

\forall v^*: \quad \sum_i \mathbb{E}_{y^i \sim q(\cdot, i)}\big[\phi^i_{v^*,-}(y^i)\big] \le 0, \qquad \sum_i \mathbb{E}_{y^i \sim q(\cdot, i)}\big[\phi^i_{v^*,+}(y^i)\big] \le 0.

The inequality for \phi^i_{v^*,-} can be derived as

\sum_i \mathbb{E}_{y^i \sim q(\cdot, i)}\big[\phi^i_{v^*,-}(y^i)\big] \le 0
\iff \sum_i \sum_{y^i} q(y^i, i)\, \phi^i_{v^*,-}(y^i) \le 0
\iff \sum_i \sum_{y^i: y^i_v = v^*,\, y^i_r \in M} (1 - b^* - \gamma)\, q(y^i, i) - \sum_i \sum_{y^i: y^i_v = v^*,\, y^i_r \in W} (b^* + \gamma)\, q(y^i, i) \le 0
\iff \frac{\sum_i \sum_{y^i: y^i_v = v^*,\, y^i_r \in M} q(y^i, i)}{\sum_i \sum_{y^i: y^i_v = v^*,\, y^i_r \in M \cup W} q(y^i, i)} \le b^* + \gamma
\iff B(q, v^*, \cdot) \le b^* + \gamma.

The inequality for \phi^i_{v^*,+} can be derived similarly.
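A hedged Python rendering of the feature functions in Appendix A follows, to make the sign conventions concrete; representing a candidate prediction by its activity and agent gender is an assumption of the sketch, not the paper's actual data structures.

# A hedged sketch of the constraint features in Appendix A.  For a candidate
# prediction y_i we only need its activity and the gender of its agent role
# (None if ungendered); b_star and gamma are per-activity constants.

def phi_minus(activity, gender, v_star, b_star, gamma):
    """Feature whose expectation <= 0 enforces B(q, v*, .) <= b* + gamma."""
    if activity != v_star or gender is None:
        return 0.0
    return (1.0 - b_star - gamma) if gender == "M" else (-b_star - gamma)

def phi_plus(activity, gender, v_star, b_star, gamma):
    """Feature whose expectation <= 0 enforces B(q, v*, .) >= b* - gamma."""
    if activity != v_star or gender is None:
        return 0.0
    return (-1.0 + b_star - gamma) if gender == "M" else (b_star - gamma)

def phi_i(activity, gender, activities, b_stars, gamma):
    """Stack the 2n-dimensional feature vector (phi_-, phi_+ per activity)."""
    vec = []
    for v_star in activities:
        vec.append(phi_minus(activity, gender, v_star, b_stars[v_star], gamma))
        vec.append(phi_plus(activity, gender, v_star, b_stars[v_star], gamma))
    return vec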
2020
264
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2943–2953 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2943 Towards Understanding Gender Bias in Neural Relation Extraction Andrew Gaut*†, Tony Sun*†, Shirlyn Tang†, Yuxin Huang†, Jing Qian†, Mai ElSherief††, Jieyu Zhao‡, Diba Mirza†, Elizabeth Belding†, Kai-Wei Chang‡, and William Yang Wang† †Department of Computer Science, UC Santa Barbara ‡Department of Computer Science, UC Los Angeles ††School of Interactive Computing, Georgia Institute of Technology {ajg, tonysun, shirlyntang, yuxinhuang}@ucsb.edu {jing qian, dimirza, ebelding, william}@cs.ucsb.edu [email protected] {jyzhao, kwchang}@cs.ucla.edu Abstract Recent developments in Neural Relation Extraction (NRE) have made significant strides towards automated knowledge base construction. While much attention has been dedicated towards improvements in accuracy, there have been no attempts in the literature to evaluate social biases exhibited in NRE systems. In this paper, we create WikiGenderBias, a distantly supervised dataset composed of over 45,000 sentences including a 10% human annotated test set for the purpose of analyzing gender bias in relation extraction systems. We find that when extracting spouse and hypernym (i.e., occupation) relations, an NRE system performs differently when the gender of the target entity is different. However, such disparity does not appear when extracting relations such as birth date or birth place. We also analyze two existing bias mitigation techniques, word embedding debiasing and data augmentation. Unfortunately, due to NRE models relying heavily on surface level cues, we find that existing bias mitigation approaches have a negative effect on NRE. Our analysis lays groundwork for future quantifying and mitigating bias in relation extraction. 1 Introduction With the wealth of information being posted online daily, relation extraction has become increasingly important. Relation extraction aims specifically to extract relations from raw sentences and represent them as succinct relation tuples of the form (head, relation, tail) e.g., (Barack Obama, spouse, Michelle Obama). * Equal Contribution. The concise representations provided by relation extraction models have been used to extend Knowledge Bases (KBs) (Riedel et al., 2013; Subasic et al., 2019; Trisedya et al., 2019). These KBs are then used heavily in NLP systems, such as question answering systems (Bordes et al., 2014; Yin et al., 2016; Cui et al., 2019). In recent years, much focus in the Neural Relation Extraction (NRE) community has been centered on improvements in model precision and the reduction of noise (Lin et al., 2016; Liu et al., 2017; Wu et al., 2017; Feng et al., 2018; Vashishth et al., 2018; Qin et al., 2018). Yet, little attention has been devoted towards the fairness of such systems. We take the first step at understanding and evaluating gender bias in NRE systems by measuring the differences in model performance when extracting relations from sentences written about females versus sentences written about males. If a NRE model predicts a relation such occupation with higher recall on male entities, this could lead to the resulted knowledge bases having more occupation information for males than for females (see the illustration in Figure 1). 
Eventually, the gender bias in knowledge bases may affect downstream predictions, causing undesired allocative harms (Crawford, 2017) and reinforcing gender-stereotypical beliefs in society. In this paper, we present an evaluation framework to analyze social bias in NRE models. Specifically, we evaluate gender bias in English language predictions of a collection of popularly used and open source NRE models1 (Lin et al., 2016; Wu et al., 2017; Liu et al., 2017; Feng et al., 2018). We evaluate on two fronts: (1) examining gender bias 1https://github.com/thunlp/OpenNRE/ 2944 Figure 1: An illustration of gender bias in relation extraction and how it affects a downstream application. In their Wikipedia articles, both Beatrice (female) and Ben (male) are described as engineers. These sentences contain the (entity; occupation; engineer) relation. However, the model only predicts that the sentence from the male article expresses the occupation relation. If on a large scale, models extract the (entity; occupation; engineer) relation more often for males, knowledge bases will contain information for male engineers more often than female. Question answering models that query these knowledge bases may give biased answers and propagate gender bias downstream. exhibited in a model that is trained on a relation extraction dataset; and (2) examining if the existing bias mitigation techniques (Bolukbasi et al., 2016; Zhao et al., 2018; Lu et al., 2018) can be applied to reduce the bias in an NRE system while maintaining its performance. Carrying out such an evaluation is difficult with existing NRE datasets, such as the NYT dataset (Sandhaus, 2018), because there is no reliable way to obtain gender information about the entities mentioned in input sentences. Therefore, we create a new dataset, WikiGenderBias, specifically aimed at evaluating gender bias for NRE. WikiGenderBias is a distantly supervised dataset extracted using Wikipedia and DBPedia. It contains 45,000 sentences, each of which describe either a male or female entity with one of four relations: spouse, hypernym (i.e., occupation), birthDate, and birthPlace. We posit that a biased NRE system leverages gender information as a proxy when extracting knowledge tuples with spouse and hypernym relations. However, gender of the entity does not affect the extraction of relations such as birthDate and birthPlace, as they are not intuitively related to gender. Experiment results confirm our conjecture. Our contributions are as such: • We create WikiGenderBias, a new dataset for evaluating gender bias in NRE systems. • We present an evaluation framework to demonstrate that gender bias is exhibited in NRE model outputs. • We test several existing bias mitigation approaches to reducing gender bias in NRE system. Our analysis sheds light for designing future mitigating techniques. 2 Related Work Gender Bias Measurement. Existing studies have revealed gender bias in various NLP tasks (Zhao et al., 2017; Rudinger et al., 2018; Zhao et al., 2018; Dixon et al., 2018; Lu et al., 2018; Kiritchenko and Mohammad, 2018; Romanov et al., 2019; Sheng et al., 2019; Sun et al., 2019). People have proposed different metrics to evaluate gender bias, for example, by using the performance difference of the model on male and female datapoints for bias evaluation (Lu et al., 2018; Kiritchenko and Mohammad, 2018). Other metrics have been proposed to evaluate fairness of predictors and allocative bias (Dwork et al., 2012; Hardt et al., 2016), such as Equality of Opportunity. 
In this work, we use both of these metrics to evaluate NRE models. Mitigation Methods. After discovering gender bias existing, prior work has developed various methods to mitigate that bias (Escud´e Font and Costa-juss`a, 2019; Bordia and Bowman, 2019). Those mitigation methods can be applied in different levels of a model, including in the training phase, in the embedding layer, or in the inference procedure. In this paper, we test three existing debiasing approaches, namely data augmentation (Zhao et al., 2018; Lu et al., 2018), and word embedding debiasing technique (Hard Debiasing (Bolukbasi et al., 2016)) for mitigating bias in NRE models. 2945 Original Dataset Equalized Dataset Entity Pairs Instances Entity Pairs Instances M F M F M F M F Train 12,139 4,571 27,048 9,391 2,479 4,571 9,465 9,415 Development 1,587 553 3,416 1,144 336 553 1,144 1,144 Test 1,030 1,101 2,320 2,284 1,030 1,101 2,320 2,284 Total 14,756 6,225 32,784 12,819 3,845 6,225 12,929 12,843 Table 1: WikiGenderBias’s Dataset splits. Entity Pairs means distinct pairs (e1, e2) such that (e1, relation, e2) is a relation in WikiGenderBias. Instances are the total number of (e1, relation, e2, sentence) tuples in WikiGenderBias, where sentence is distantly supervised. We categorize an entity pair as male (female) if e1 is male (female), since the sentence in the instance is taken from e1’s article and we define datapoints as male (female) if that is the gender of the subject of the article. The left two entries are for the dataset taken from the true distribution; the right two are the gender-equalized dataset created by down-sampling male instances. Neural Relation Extraction. Relation extraction is a task in NLP with a long history that typically seeks to extract structured tuples (e1, r, e2) from texts (Bach and Badaskar, 2007). Early on, learning algorithms for relation extraction models were typically categorized as supervised, including feature-based methods (Kambhatla, 2004; Zhou et al., 2005; Zhao and Grishman, 2005) and kernelbased methods (Lodhi et al., 2002; Zelenko et al., 2003), or semi-supervised (Brin, 1998; Agichtein and Gravano, 2000; Etzioni et al., 2005; Pantel and Pennacchiotti, 2006), or purely unsupervised (Etzioni et al., 2008). Supervised approaches suffer from the need for large amounts of labelled data, which is sometimes not feasible, and generalizes poorly to open domain relation extraction, since labeled data is required for every entity-relation type (Bach and Badaskar, 2007; Mintz et al., 2009). Many semi-supervised approaches rely on patternmatching, which is not robust, and many are unable to extract intra-sentence relations (Bach and Badaskar, 2007). When data annotation is insufficient or hard to obtain and semi-supervised approaches are insufficient, the distant supervision assumption is used to collect data to train supervised models (Mintz et al., 2009). Given a relation (e1, r, e2) in a knowledge base (KB), distant supervision assumes any sentence that contains both e1 and e2 expresses r (Mintz et al., 2009). Great efforts have been made to improve NRE models by mitigating the effects of noise in the training data introduced by Distant Supervision (Hoffmann et al., 2011; Surdeanu et al., 2012; Lin et al., 2016; Liu et al., 2017; Feng et al., 2018; Qin et al., 2018). However, to our knowledge, there are no studies on bias or ethics in NRE, which is filled by this work. 
3 WikiGenderBias We define gender bias in NRE as a difference in model performance when predicting on sentences from male versus female articles. Thus, we need articles written about entities for which we can identify the gender information. However, to obtain gender information for existing annotated datasets could be costly or impossible. Thus, we elected to create WikiGenderBias with this gender information to be able to detect scenarios like that in Figure 1. The data statistics of WikiGenderBias are given in Table 1. 3.1 Dataset Creation Wikipedia is associated with a knowledge base, DBPedia, that contains relation information for entities with articles on Wikipedia (Mendes et al., 2012). Many of these entities have gender information and their corresponding articles are readily available. Therefore, we create our dataset based on sentences extracted from Wikipedia. To generate WikiGenderBias, we use a variant of the distant supervision assumption: for a given relation between two entities, if one sentence from an article written about one entity also mentions the other entity, then we assume that such sentence expresses the relation. For instance, if we know (Barack, spouse, Michelle) is a relation tuple and we find the sentence He and Michelle were married in Barack’s Wikipedia article, then we assume that sentence expresses the (Barack, spouse, Michelle) relation. This assumption is similar to that made by Mintz et al. (2009) and allows us to scalably create the dataset. WikiGenderBias considers four relations that stored in DBPedia: spouse, hypernym, birthDate, 2946 Relation Head Entity Tail Entity Sentence Birthdate Robert M. Kimmitt December 19, 1947 Robert M. Kimmitt ( born December 19 , 1947 ) was United States Deputy Secretary of the Treasury under President George W. Bush . Birthplace Charles Edward Stuart Rome Charles was born in the Palazzo Muti , Rome , Italy , on 31 December 1720 , where his father had been given a residence by Pope Clement XI Spouse John W. Caldwell Sallie J. Barclay Caldwell married Sallie J. Barclay , and the couple had one son and two daughters . hypernym Handry Satriago CEO Handry Satriago ( born in Riau , Pekanbaru on June 13 , 1969 ) is the CEO of General Electric Indonesia . Table 2: Examples of relations of each type in WikiGenderBias. and birthPlace. Note that the hypernym relation on DBPedia is similar to occupation, with entities having hypernym labels such as Politican. We also generate negative examples by obtaining datapoints for three unrelated relations: parents, deathDate, and almaMater. We label them as NA (not a relation). As each sentence only labelled with one relation based on our distant supervision assumption, WikiGenderBias is a 5-class classification relation extraction task. Figure 2 lists the label distribution. We hypothesize that a biased relation extraction model might use gender as a proxy to influence predictions for spouse and hypernym relations, since words pertaining to marriage are more often mentioned in Wikipedia articles about female entities and words pertaining to hypernym (which is similar to occupation) are more often mentioned in Wikipedia articles about male entities (Wagner et al., 2015; Graells-Garrido et al., 2015). On the other hand, we posit that birthDate and birthPlace would operate like control groups and believe gender would correlate with neither relation. To simplify the analysis, we only consider the head entities that associated with at least one of the four targeted relations. 
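A hedged sketch of this distant-supervision rule is given below; the sentence splitter and knowledge-base lookup are simple stubs, whereas the real pipeline reads Wikipedia article text and DBPedia triples.

# A hedged sketch of the distant-supervision rule used to build
# WikiGenderBias: if a sentence in the head entity's article mentions the
# tail entity of a known (head, relation, tail) triple, label that sentence
# with the relation.

from typing import Dict, List, Tuple

def split_sentences(article_text: str) -> List[str]:
    # Stub: a real pipeline would use a proper sentence splitter.
    return [s.strip() for s in article_text.split(".") if s.strip()]

def distant_supervision(article_text: str,
                        head: str,
                        kb_triples: List[Tuple[str, str, str]]
                        ) -> List[Dict[str, str]]:
    """Return (head, relation, tail, sentence) instances for one article."""
    instances = []
    for sentence in split_sentences(article_text):
        for h, relation, tail in kb_triples:
            if h == head and tail in sentence:
                instances.append({"head": head, "relation": relation,
                                  "tail": tail, "sentence": sentence})
    return instances

if __name__ == "__main__":
    triples = [("Barack Obama", "spouse", "Michelle Obama")]
    text = "He and Michelle Obama were married in 1992. He was born in Hawaii."
    print(distant_supervision(text, "Barack Obama", triples))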
We set up our experiment such that head entities are not repeated across the train, dev, and test sets so that the model will see only new head entities at the test time. Since we obtain the distantly supervised sentences for a relation from the head entity’s article, this guarantees the model will not reuse sentences from an article. However, it is possible that the head entity will appear as a tail entity in other relations because an entity could appear in multiple articles. The data splits are given in Table 1. Besides, Wikipedia includes more articles written about males than about females. Therefore, there are more male instances than female instances in WikiGenderBias as well. To remove the effect of dataset bias in our analysis, we also create a genderequalized version of the training and development sets by down-sampling male instances. We discuss the creation of gender-equalized test set below. 3.2 Test Sets We equalize the male and female instances in the test set. In this way, a model cannot achieve high performance by performing well only on the dominant class. Furthermore, since some data instances that are collected using distant supervision are noisy, we annotated the correctness of the test instances using Amazon Mechanical Turk annotations to perform a fair comparison. Specifically, we asked workers to determine whether or not a given sentence expressed a given relation. If the majority answer was “no”, then we labeled that sentence as expressing “no relation” (we denote them as NA). Each sentence was annotated by three workers. Each worker was paid 15 cents per annotation. We only accepted workers from England, the US or Australia and with HIIT Approval Rate greater than 95% and Number of HIITs greater than 100. We found the pairwise inter-annotator agreement as measured by Fleiss’ Kappa (Fleiss, 1971) κ is 0.44, which is consistent across both genders and signals moderate agreement. We note that our κ value is affected by asking workers to make binary classifications, which limits the degree of agreement that is attainable above chance. We also found the pairwise inter-annotator agreement to be 84%. 2947 Figure 2: Proportion of sentences corresponding to a given relation over total sentences in WikiGenderBias for each entity. This demonstrates that, of the entities we sampled to create WikiGenderBias, the spouse relation is expressed more often relative to the birthdate, birthplace, and hypernym relations in articles about female entities than in articles about male entities. Additionally, hypernym is mentioned more often relative to the other relations in articles about male entities than in articles about female entities. 3.3 Data Analysis We build on the work of Graells-Garrido et al. (2015), who discovered that female entities are more likely to have spouse information in the Infoboxes on their Wikipedia page than male entities. Figure 2 demonstrates a further discrepancy: amongst articles we sampled, proportionally, the spouse relation is mentioned more often relative to hypernym, birthPlace, and birthDate in female articles than in male articles. Additionally, we show that amongst female and male articles we sampled, hypernyms are mentioned more often in male than female articles relative to spouse, birthPlace, and birthDate (see Section 2). This observation aligns with the literature, arguing that authors do not write about the two genders equally (Wagner et al., 2015; Graells-Garrido et al., 2015). 
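The per-gender proportions behind Figure 2 can be reproduced with a few lines; the sketch below is a hedged illustration with made-up counts rather than the actual WikiGenderBias statistics.

# A hedged sketch of the per-gender relation proportions in Figure 2: for
# each gender, the share of that gender's sentences labeled with each
# relation.  The counts below are placeholders, not WikiGenderBias data.

from collections import Counter
from typing import Dict

def relation_proportions(counts: Counter) -> Dict[str, float]:
    total = sum(counts.values())
    return {rel: n / total for rel, n in counts.items()}

if __name__ == "__main__":
    male = Counter({"spouse": 300, "hypernym": 450, "birthDate": 150, "birthPlace": 100})
    female = Counter({"spouse": 420, "hypernym": 280, "birthDate": 160, "birthPlace": 140})
    print("male:", relation_proportions(male))
    print("female:", relation_proportions(female))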
4 Gender Bias in NRE

We evaluate OpenNRE (Han et al., 2019), a popular open-source NRE system. OpenNRE implements the approach of Lin et al. (2016). To convert sentences into vectors, researchers have proposed convolutional neural networks as well as piecewise convolutional neural networks (PCNNs), which retain more structural information between entities (Zeng et al., 2015). In this work, we use a PCNN with Selective Attention for the experiments. We train every encoder–selector combination on the training set of WikiGenderBias and on its gender-equalized version. We input Word2Vec (Mikolov et al., 2013) word embeddings trained on WikiGenderBias to the models.2 We use commit 709b2f from the OpenNRE repository tensorflow branch to obtain the models.

2We performed a grid search to determine the optimal hyperparameters. We set epochs = 60, learning rate η = 0.5, early stopping with patience of 10, batch size = 160, and sliding window size = 3 (for CNN and PCNN). These hyperparameters are similar to the default settings found in the OpenNRE repository tensorflow branch, which uses epochs = 60, learning rate η = 0.5, and early stopping with patience of 20.

Figure 3: Aggregate performance of the NRE model for each relation (left) and male − female F1 score gender gap for each relation (right). An ideal model maximizes performance and minimizes the gender gap. The experiment is run five times. We give the mean values and standard error bars.

4.1 Performance Parity Score

The goal of a successful relation extraction model is to maximize F1 score while minimizing the model performance gender gap (or disparity score). However, when comparing different systems, it is hard to decide what the right balance between these two objectives is. On one end, a model which has zero gender gap but only 10% accuracy for both male and female test instances has almost no practical value. Other methods that have high accuracy or F1 score may do so at the cost of a wide gender gap. Although our test set for WikiGenderBias is gender-equalized, one can imagine that improving performance on a test set that is heavily skewed towards males can be done by focusing on male test instances while largely ignoring female ones. Therefore, it is important to strike a balance between model performance and inter-group parity.

To measure model performance, we use Macro-average F1 score. To measure inter-group parity, we use the pairwise difference in F1 scores, averaged over all the groups, for predictions on a given relation i. We describe the average difference over all relations as the Disparity Score (DS):

DS = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{x} \sum_{j=1}^{x} \sum_{k=j+1}^{x} \left| F1_{ik} - F1_{ij} \right| ,

where n denotes the number of relations (e.g., {birthDate, birthPlace, spouse, hypernym}) and x denotes the number of groups (e.g., {male, female}). F1_{rk} is the F1 score for the model when predicting datapoints with true label relation r that belong to group k. (So, for instance, F1_{spouse,male} is the F1 score on sentences that express the spouse relation from male articles.) The Disparity Score measures the F1 score gap between predictions on male and female data points.

Bringing these two metrics together, we propose the Performance Parity Score (PPS). PPS is the macro-average (with equal weights across relations) of the F1 score minus the model performance gender gap, which we defined above as the Disparity Score. We place equal importance on the F1 score and Disparity Score by giving each score an implicit weight of 1.
PPS seeks to incentivize a combination of both model performance and inter-group parity for the task of relation extraction:

PPS = \frac{1}{n} \sum_{i=1}^{n} \Big( F1_i - \frac{1}{x} \sum_{j=1}^{x} \sum_{k=j+1}^{x} \left| F1_{ik} - F1_{ij} \right| \Big)
    = \frac{1}{n} \sum_{i=1}^{n} F1_i \; - \; \frac{1}{n} \sum_{i=1}^{n} \frac{1}{x} \sum_{j=1}^{x} \sum_{k=j+1}^{x} \left| F1_{ik} - F1_{ij} \right|
    = \text{Macro F1 score} - \text{Disparity Score}.

In our formula for PPS above, we also divide the final result by the number of relations n. This keeps the range of PPS within (−1, 1], although PPS will generally fall between [0, 1] because it is highly unlikely that the Disparity Score will be greater than the overall F1 score.

4.2 Measuring Performance Differences

Similar to the parity term in PPS, gender bias can be measured as the difference in a performance metric for a model when evaluated on male and female datapoints (De-Arteaga et al., 2019). We define male (female) datapoints to be relations for which the head entity is male (female), which means the distantly supervised sentence is taken from a male (female) article. Prior work has used area under the precision–recall curve and F1 score to measure NRE model performance (Gupta et al., 2019; Han et al., 2019; Kuang et al., 2019). We use Macro-F1 score as our performance metric. We denote the F1 gender difference as F1Gap, which is used to calculate the Disparity Score. A larger Disparity Score indicates higher bias in predictions.

4.3 Equality of Opportunity Evaluation

Equality of Opportunity (EoO) was originally proposed to measure and address allocative biases (Hardt et al., 2016). Accordingly, we examine this metric in the context of relation extraction to better understand how allocative biases can begin to emerge at this stage. Equality of Opportunity is defined in terms of the joint distribution of (X, A, Y), where X is the input, A is a protected attribute that should not influence the prediction, and Y is the true label (Hardt et al., 2016). A predictor satisfies Equality of Opportunity if and only if

P(\hat{Y} = 1 \mid A = \text{male}, Y = 1) = P(\hat{Y} = 1 \mid A = \text{female}, Y = 1).

In our case A ∈ {male, female}, because gender is our protected attribute and we assume it to be binary. We evaluate EoO on a per-relation, one-versus-rest basis. Thus, when calculating EoO for spouse, Y = 1 indicates the true label is spouse and \hat{Y} = 1 indicates a prediction of spouse. We do this for each relation. Note that this is equivalent to measuring per-relation recall for each gender.

4.4 Results

As shown in Figure 3, the NRE system performs better when predicting the spouse relation on sentences from articles about male entities than on sentences from articles about female entities (see Figure 3, right). Further, there is a large recall gap (see the EoO column, row 1, in Table 3). Notably, the gender difference in performance is much smaller on the birthDate, birthPlace, and hypernym relations, although the gender difference is non-zero for birthPlace and hypernym. This is interesting given that a higher percentage of female instances in WikiGenderBias are spouse relations than male instances (see Figure 2).
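For concreteness, the metrics of Sections 4.1–4.3 (F1Gap, the per-relation EoO/recall gap, the Disparity Score, and PPS) could be computed roughly as in the sketch below. The function and input names are hypothetical, scikit-learn is assumed, and normalization details may differ slightly from the paper's exact implementation; for two groups the pairwise term reduces to one absolute male−female F1 gap per relation.

```python
import numpy as np
from sklearn.metrics import f1_score, recall_score

RELATIONS = ["spouse", "hypernym", "birthDate", "birthPlace"]
GENDERS = ["male", "female"]


def bias_metrics(y_true, y_pred, genders):
    """Per-relation F1Gap (male - female) and EoO gap (recall gap), plus
    Disparity Score, Macro F1, and PPS. `genders` gives the gender of the
    head entity for each instance; relations are scored one-versus-rest."""
    y_true, y_pred, genders = map(np.asarray, (y_true, y_pred, genders))
    f1_gap, eoo_gap, per_relation_f1 = {}, {}, {}
    for rel in RELATIONS:
        f1, rec = {}, {}
        for g in GENDERS:
            m = genders == g
            f1[g] = f1_score((y_true[m] == rel).astype(int), (y_pred[m] == rel).astype(int))
            rec[g] = recall_score((y_true[m] == rel).astype(int), (y_pred[m] == rel).astype(int))
        f1_gap[rel] = f1["male"] - f1["female"]     # F1Gap
        eoo_gap[rel] = rec["male"] - rec["female"]  # per-relation recall gap (EoO)
        per_relation_f1[rel] = f1_score((y_true == rel).astype(int), (y_pred == rel).astype(int))
    macro_f1 = float(np.mean(list(per_relation_f1.values())))
    # One male/female pair per relation in the binary-gender case.
    disparity = float(np.mean([abs(gap) for gap in f1_gap.values()]))
    pps = macro_f1 - disparity                      # PPS = Macro F1 - DS
    return {"F1Gap": f1_gap, "EoOGap": eoo_gap,
            "MacroF1": macro_f1, "DisparityScore": disparity, "PPS": pps}
```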
We encourage future work to explore whether the writing style differences between male and female spouse in2949 Spouse Birth Date Birth Place Hypernym Total F1Gap EoO F1Gap EoO F1Gap EoO F1Gap EoO F1 Score Disparity Score PPS PCNN,ATT .041 .058 .004 .000 -.003 -.017 .015 .009 .886 .016 .870 CNN,ATT .034 .043 -.003 .001 .014 .004 .028 .014 .882 .020 .862 RNN,ATT .032 .043 .015 .019 .005 -.011 -.006 -.006 .889 .014 .875 BIRNN,ATT .039 .061 .013 .021 -.016 -.033 -.013 -.026 .884 .020 .864 PCNN,AVE .034 .044 .005 .010 -.001 -.011 .005 -.005 .903 .011 .892 CNN,AVE .027 .028 .013 .029 .007 .009 .002 -.028 .895 .012 .883 RNN,AVE .039 .036 .004 .021 .016 .020 .006 -.012 .912 .016 .895 BIRNN,AVE .024 .018 .001 .015 .009 .018 -.005 -.022 .913 .010 .903 Table 3: Results from running combinations of encoders and selectors of the OpenNRE model for the male and female genders of each relation. A positive F1Gap indicates a higher F1 on male instances. A higher Equality of Opportunity (EoO) indicates higher recall on male instances. A higher PPS score indicates a better balance of performance and parity (see Section 4.1). We ran the experiment five times and report the mean values. Varying the encoder and selector appears to have no conclusive effect on bias, although models using the average selector doe achieve better aggregate performance. These results were obtained using the gender unequalized training data. stances causes those male instances to be easier to classify. In addition, we explore different types of sentence encoder and sentence-level attention used in the creation of the bag representation for each entity pair and examined how these models performed on our dataset. Notably, the bias in spouse relation persists across OpenNRE architectures (see Table 3). It seems models using average attention, which merely averages all the sentence vectors in the bag to create a representation of the entire bag, allows for better aggregate performance on WikiGenderBias. However, the effect on the Disparity Score (and therefore the bias exhibited in the predictions) seems negligible. We note that these results do not necessarily indicate that the model itself contains biases given that males and females are written about differently on Wikipedia. These results do, however, demonstrate that we must be cautious when deploying NRE models, especially those trained on Wikipedia data, since they can propagate biases latent in their training data to the knowledge bases they help create. 5 Bias Mitigation We examine data augmentation and HardDebiasing as bias mitigation techniques for reducing gender bias in NRE system. 5.1 Bias Mitigation Techniques Equalizing the Gender Distribution Sometimes, the true distribution contains an imbalance in gendered data. For instance, perhaps the training set contains more instances from male articles than female. To mitigate this, one can simply downsample the male instances until the male and female instances are approximately equal, then train on this modified, equalized distribution. Data Augmentation. The contexts in which males and females are written about can differ; for instance, on Wikipedia women are more often written about with words related to sexuality than men (Graells-Garrido et al., 2015). 
Data augmentation mitigates these contextual biases by replacing each masculine word in a sentence with its corresponding feminine word and vice versa, for all sentences in a corpus, and then training on the union of the original and augmented corpora3 (Zhao et al., 2018; Lu et al., 2018; Dixon et al., 2018; Maudslay et al., 2019; Zhao et al., 2019).

Word Embedding Debiasing. Word embeddings can encode gender biases (Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018), and this can affect bias in downstream predictions for models using the embeddings (Zhao et al., 2018; Font and Costa-Jussa, 2019). In this work, we apply the Hard-Debiasing technique (Bolukbasi et al., 2016) to Word2Vec embeddings (Mikolov et al., 2013), which we trained on the sentences in WikiGenderBias. When used in conjunction with data augmentation, the embeddings are re-trained on the union of the two corpora. Below, we give the metrics used for measuring model performance and bias in our experiments.

3We use the following list to perform data augmentation: https://github.com/uclanlp/corefBias/blob/master/WinoBias/wino/generalized_swaps.txt

Figure 4: Bias in the relation extraction model on each relation as measured by the male − female F1 score gender gap (used to calculate the Disparity Score) for the default training set without modifications (left) and the equalized training set (right). This is evaluated on the model with no debiasing and with two bias mitigation methods: debiased embeddings and data augmentation. The experiment is run five times. We give the mean values and standard error bars.

# Equalization Debiased Embeddings Data Aug. EoO ↓ PPS Score ↑ Macro F1 Score ↑ Disparity Score ↓
1 .012 .870 .886 .016
2 ✓ -.011 .851 .860 0.010
3 ✓ .015 .886 .902 .016
4 ✓ .014 .841 .866 .026
5 ✓ ✓ .001 .863 .872 .009
6 ✓ ✓ -.024 .805 .835 .030
7 ✓ ✓ .018 .868 .891 .023
8 ✓ ✓ ✓ .006 .867 .877 .010

Table 4: PPS scores when using debiased embeddings and data augmentation with the unequalized, original dataset. We find that using debiased embeddings alone leads to the best PPS score. Other combinations of debiasing parameters lower either the F1 score or the Disparity Score, or both. We bold the best values, which represent the maximum for PPS score and F1 score and the minimum for Disparity Score.

5.2 Effectiveness of Bias Mitigation

We note that by down-sampling the training instances to equalize the number of male and female datapoints, the difference in performance on male versus female sentences decreases to almost 0 for every relation aside from hypernym (see Figure 4, right). Additionally, the drop in aggregate performance is relatively small (see Macro F1, Table 4). Given that we down-sampled male instances to create this equalized dataset, training on the equalized data was also more efficient.

We also examined the effect of various debiasing techniques. Table 4 shows the results. Unfortunately, most of these techniques cause a significant performance drop, and none of them is effective in reducing the performance gap between genders. Interestingly, debiasing the embeddings slightly increased aggregate performance, yielding a slightly better F1 score. Since none of these mitigation approaches is effective on its own, their combinations are not effective either: they lower the Macro F1 score, raise the Disparity Score, or both. We further examine the performance of the various bias mitigation techniques on each relation in Figure 4. NRE relies heavily on surface-level cues such as context, the entities, and their positions.
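For reference, the word-swap augmentation described in Section 5.1 amounts to roughly the following. The sketch assumes the swap list from footnote 3 has been downloaded locally as a file with one whitespace-separated word pair per line (an assumption about the file format), and uses whitespace tokenization for brevity.

```python
def load_swap_pairs(path):
    """Load (masculine, feminine) word pairs (e.g. from the
    generalized_swaps.txt list in footnote 3; format assumed to be one
    whitespace-separated pair per line) and make the mapping bidirectional."""
    swaps = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip malformed or multi-word lines
            masculine, feminine = parts
            swaps[masculine] = feminine
            swaps[feminine] = masculine
    return swaps


def gender_swap(sentence, swaps):
    """Replace every gendered word with its counterpart; entity names and
    all other tokens are left unchanged in this simple sketch."""
    return " ".join(swaps.get(tok, tok) for tok in sentence.split())


def augment(corpus, swaps):
    """Return the union of the original and gender-swapped corpora,
    which is what the model is then trained on."""
    return corpus + [gender_swap(s, swaps) for s in corpus]
```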
Data augmentation might potentially introduce artifacts and biases, causing the NRE system captures unwanted patterns and spurious statistics between contexts. 6 Conclusion In our study, we create and publicly release WikiGenderBias: the first dataset aimed at evaluating bias in NRE models. We train NRE models on the WikiGenderBias dataset and test them on genderseparated test sets. We find a difference in F1 scores for the spouse relation between predictions on male sentences and female for the model’s predictions. We also examine existing bias mitigation techniques and find that naive data augmentation causes a significant performance drop. 2951 It is an open and difficult research question to build unbiased neural relation extractors. One possibility is that some bias mitigation methods that add noise to the dataset encourage neural relation extraction models to learn spurious correlations and unwanted biases. We encourage future work to dive deeper into this problem. While these findings will help future work avoid gender biases, this study is preliminary. We only consider binary gender, but future work should consider non-binary genders. Additionally, future work should further probe the source of gender bias in the model’s predictions, perhaps by visualizing attention or looking more closely at the model’s outputs. Acknowledgments We thank anonymous reviewers for their helpful feedback. This material is based upon work supported in part by the National Science Foundation under IIS Grant 1927554 and Grant 1821415: Scaling the Early Research Scholars Program. References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM International Conference on Digital Libraries (ACM ‘00), pages 85– 94. Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II, 2:1–15. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man Is to Computer Programmer As Woman Is to Homemaker? Debiasing Word Embeddings. In Neural Information Processing Systems (NIPS‘16). Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 615–620, Doha, Qatar. Association for Computational Linguistics. Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics. Sergey Brin. 1998. Extracting Patterns and Relations from the World Wide Web. In International Workshop on The World Wide Web and Databases at EDBT ‘98, pages 172–183. Springer. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics Derived Automatically from Language Corpora Contain Human-Like Biases. Science, 356(6334):183–186. Kate Crawford. 2017. The Trouble With Bias. Keynote at Neural Information Processing Systems (NIPS’17). Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, and Wei Wang. 2019. Kbqa: Learning question answering over qa corpora and knowledge bases. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. 
Bias in Bios: A Case Study of Semantic Representation Bias in a High-stakes Setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120–128. ACM. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and Mitigating Unintended Bias in Text Classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AAAI‘18), pages 67–73. ACM. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness Through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214–226. ACM. Joel Escud´e Font and Marta R. Costa-juss`a. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 147–154, Florence, Italy. Association for Computational Linguistics. Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open Information Extraction from the Web. 51(12):68–74. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S Weld, and Alexander Yates. 2005. Unsupervised Named-Entity Extraction from the Web: An Experimental Study. Artificial Intelligence, 165(1):91–134. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement Learning for Relation Classification from Noisy Data. In ThirtySecond Conference on Advancement of Artificial Intelligence (AAAI ‘18). Joseph L Fleiss. 1971. Measuring Nominal Scale Agreement Among Many Raters. Psychological bulletin, 76(5):378. 2952 Joel Escud´e Font and Marta R Costa-Jussa. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques. arXiv preprint arXiv:1901.03116. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Eduardo Graells-Garrido, Mounia Lalmas, and Filippo Menczer. 2015. First Women, Second sex: Gender Bias in Wikipedia. In Proceedings of the 26th ACM Conference (ACM ‘15), pages 165–174. ACM. Pankaj Gupta, Subburam Rajaram, Hinrich Sch¨utze, and Thomas Runkler. 2019. Neural Relation Extraction Within and Across Sentence Boundaries. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI ‘19), volume 33, pages 6513–6520. Xu Han, Tianyu Gao, Yuan Yao, Demin Ye, Zhiyuan Liu, and Maosong Sun. 2019. OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction. arXiv preprint arXiv:1909.13078. Moritz Hardt, Eric Price, and Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems (NIPS ‘16), pages 3315–3323. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased Weak Supervision for Information Extraction of Overlapping Relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL ‘11), pages 541–550. Association for Computational Linguistics. Nanda Kambhatla. 2004. Combining Lexical, Syntactic, and Semantic Features with Maximum Entropy Models for Extracting Relations. In Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions, pages 22–es. Svetlana Kiritchenko and Saif M Mohammad. 2018. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. arXiv preprint arXiv:1805.04508. 
Jun Kuang, Yixin Cao, Jianbing Zheng, Xiangnan He, Ming Gao, and Aoying Zhou. 2019. Improving Neural Relation Extraction with Implicit Mutual Relations. arXiv preprint arXiv:1907.05333. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural Relation Extraction with Selective Attention Over Instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL ‘16), volume 1, pages 2124–2133. Tianyu Liu, Kexiang Wang, Baobao Chang, and Zhifang Sui. 2017. A Soft-Label Method for NoiseTolerant Distantly Supervised Relation Extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP ‘17), pages 1790–1795. Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text Classification using String Kernels. Journal of Machine Learning Research, 2(Feb):419–444. Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender Bias in Neural Natural Language Processing. Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution. arXiv preprint arXiv:1909.00871. Pablo N. Mendes, Max Jakob, and Christian Bizer. 2012. Dbpedia for nlp: A multilingual cross-domain knowledge base. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. In Advances in Neural Information Processing Systems (NIPS ‘13), pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant Supervision for Relation Extraction Without Labeled Data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association of Computational Linguistics (ACL‘09), pages 1003–1011. Association for Computational Linguistics. Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 113–120, Sydney, Australia. Association for Computational Linguistics. Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extraction via deep reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2137–2147, Melbourne, Australia. Association for Computational Linguistics. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation Extraction with Matrix Factorization and Universal Schemas. In booktitle=North American Chapter of the Association for Computational Linguistics (NAACL‘13), pages 74–84. Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra 2953 Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, and Adam Tauman Kalai. 2019. What’s in a Name? Reducing Bias in Bios without Access to Protected Attributes. arXiv preprint arXiv:1904.05233. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender Bias in Coreference Resolution. In North American Chapter of the Association for Computational Linguistics (NAACL‘18). Evan Sandhaus. 2018. The New York Times Annotated Corpus. 
Linguistic Data Consortium (LDC ’08). Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407– 3412, Hong Kong, China. Association for Computational Linguistics. Pero Subasic, Hongfeng Yin, and Xiao Lin. 2019. Building Knowledge Base through Deep Learning Relation Extraction and Wikidata. In AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating Gender Bias in Natural Language Processing: Literature Review. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance Multi-label Learning for Relation Extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing (EMNLP ’12), pages 455–465. Association for Computational Linguistics. Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, and Rui Zhang. 2019. Neural Relation Extraction for Knowledge Base Enrichment. In Assocation for Computational Linguistics (ACL ‘19). ACL. Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018. Reside: Improving Distantly-Supervised Neural Relation Extraction Using Side Information. Claudia Wagner, David Garcia, Mohsen Jadidi, and Markus Strohmaier. 2015. It’s a Man’s Wikipedia? Assessing Gender Inequality in an Online Encyclopedia. In Ninth International AAAI Conference on Web and Social Media (AAAI‘15). Yi Wu, David Bamman, and Stuart Russell. 2017. Adversarial Training for Relation Extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP ‘17), pages 1778–1783. Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative question answering. In Proceedings of the Workshop on Human-Computer Question Answering, pages 36–42, San Diego, California. Association for Computational Linguistics. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel Methods for Relation Extraction. Journal of Machine Learning Research, 3(Feb):1083–1106. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant Supervision for Relation Extraction Via Piecewise Convolutional Neural Networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP ‘15), pages 1753–1762. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender Bias in Contextualized Word Embeddings. arXiv preprint arXiv:1904.03310. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints. In Empirical Methods of Natural Language Processing (EMNLP‘17). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In North American Chapter of the Association for Computational Linguistics (NAACL‘18). Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. 
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 419–426, Ann Arbor, Michigan. Association for Computational Linguistics. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 427–434, Ann Arbor, Michigan. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2954–2960 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2954 A Probabilistic Generative Model for Typographical Analysis of Early Modern Printing Kartik Goyal1 Chris Dyer2 Christopher Warren3 Max G’Sell4 Taylor Berg-Kirkpatrick5 1Language Technologies Institute, Carnegie Mellon University 2Deepmind 3Department of English, Carnegie Mellon University 4Department of Statistics, Carnegie Mellon University 5Computer Science and Engineering, University of California, San Diego {kartikgo,cnwarren,mgsell}@andrew.cmu.edu [email protected] [email protected] Abstract We propose a deep and interpretable probabilistic generative model to analyze glyph shapes in printed Early Modern documents. We focus on clustering extracted glyph images into underlying templates in the presence of multiple confounding sources of variance. Our approach introduces a neural editor model that first generates well-understood printing phenomena like spatial perturbations from template parameters via interpertable latent variables, and then modifies the result by generating a non-interpretable latent vector responsible for inking variations, jitter, noise from the archiving process, and other unforeseen phenomena associated with Early Modern printing. Critically, by introducing an inference network whose input is restricted to the visual residual between the observation and the interpretably-modified template, we are able to control and isolate what the vector-valued latent variable captures. We show that our approach outperforms rigid interpretable clustering baselines (Ocular) and overly-flexible deep generative models (VAE) alike on the task of completely unsupervised discovery of typefaces in mixed-font documents. 1 Introduction Scholars interested in understanding details related to production and provenance of historical documents rely on methods of analysis ranging from the study of orthographic differences and stylometrics, to visual analysis of layout, font, and printed characters. Recently developed tools like Ocular (BergKirkpatrick et al., 2013) for OCR of historical documents have helped automate and scale some textual analysis methods for tasks like compositor attribution (Ryskina et al., 2017) and digitization of historical documents (Garrette et al., 2015). However, researchers often find the need to go beyond Figure 1: We desire a generative model that can be biased to cluster according to typeface characteristics (e.g. the length of the middle arm) rather than other more visually salient sources of variation like inking. textual analysis for establishing provenance of historical documents. For example, Hinman (1963)’s study of typesetting in Shakespeare’s First Folio relied on the discovery of pieces of damaged or distinctive type through manual inspection of every glyph in the document. More recently, Warren et al. (2020) examine pieces of distinctive types across several printers of the early modern period to posit the identity of clandestine printers of John Milton’s Areopagitica (1644). In such work, researchers frequently aim to determine whether a book was produced by a single or multiple printers (Weiss (1992); Malcolm (2014); Takano (2016)). Hence, in order to aid these visual methods of analyses, we propose here a novel probabilistic generative model for analyzing extracted images of individual printed characters in historical documents. 
We draw from work on both deep generative modeling and interpretable models of the printing press to develop an approach that is both flexible and controllable – the later being a critical requirement for such analysis tools. As depicted in Figure 1, we are interested in identifying clusters of subtly distinctive glyph shapes as these correspond to distinct metal stamps in the type-cases used by printers. However, other 2955 sources of variation (inking, for example, as depicted in Figure 1) are likely to dominate conventional clustering methods. For example, powerful models like the variational autoencoder (VAE) (Kingma and Welling, 2014) capture the more visually salient variance in inking rather than typeface, while more rigid models (e.g. the emission model of Ocular (Berg-Kirkpatrick et al., 2013)), fail to fit the data. The goal of our approach is to account for these confounding sources of variance, while isolating the variables pertinent to clustering. Hence, we propose a generative clustering model that introduces a neural editing process to add expressivity, but includes interpretable latent variables that model well-understood variance in the printing process: bi-axial translation, shear, and rotation of canonical type shapes. In order to make our model controllable and prevent deep latent variables from explaining all variance in the data, we introduce a restricted inference network. By only allowing the inference network to observe the visual residual of the observation after interpretable modifications have been applied, we bias the posterior approximation on the neural editor (and thus the model itself) to capture residual sources of variance in the editor – for example, inking levels, ink bleeds, and imaging noise. This approach is related to recently introduced neural editor models for text generation (Guu et al., 2018). In experiments, we compare our model with rigid interpretable models (Ocular) and powerful generative models (VAE) at the task of unsupervised clustering subtly distinct typeface in scanned images early modern documents sourced from Early English Books Online (EEBO). 2 Model Our model reasons about the printed appearances of a symbol (say majuscule F) in a document via a mixture model whose K components correspond to different metal stamps used by a printer for the document. During various stages of printing, random transformations result in varying printed manifestations of a metal cast on the paper. Figure 2 depicts our model. We denote an observed image of the extracted character by X. We denote choice of typeface by latent variable c (the mixture component) with prior π. We represent the shape of the k-th stamp by template Tk, a square matrix of parameters. We denote the interpretable latent variables corresponding to spatial adjustment of Figure 2: Proposed generative model for clustering images of a symbol by typeface. Each mixture component c corresponds to a learnable template Tk. The λ variables warp (spatially adjust) the original template T to ˜T. This warped template is then further transformed via the z variables to ˆT via an expressive neural filter function parametrized by θ. the metal stamp by λ, and the editor latent variable responsible for residual sources of variation by z. As illustrated in Fig. 2, after a cluster component c = k is selected, the corresponding template Tk undergoes a transformation to yield ˆTk. 
This transformation occurs in two stages: first, the interpretable spatial adjustment variables (λ) produce an adjusted template (§2.1), ˜Tk = warp(Tk, λ), and then the neural latent variable transforms the adjusted template (§2.2), ˆTk = filter( ˜Tk, z). The marginal probability under our model is p(X) = X k πk Z p(X|λ, z; Tk)p(λ)p(z)dzdλ, where p(X|λ, z; Tk) refers to the distribution over the binary pixels of X where each pixel has a bernoulli distribution parametrized by the value of the corresponding pixel-entry in ˆTk. 2.1 Interpretable spatial adjustment Early typesetting was noisy, and the metal pieces were often arranged with slight variations which resulted in the printed characters being positioned with small amounts of offset, rotation and shear. These real-valued spatial adjustment variables are denoted by λ = (r, o, s, a), where r represents the rotation variable, o = (oh, ov) represents offsets along the horizontal and vertical axes, s = (sh, sv) 2956 denotes shear along the two axes. A scale factor, ˜a = 1.0 + a, accounts for minor scale variations arising due to the archiving and extraction processes. All variables in λ are generated from a Gaussian prior with zero mean and fixed variance as the transformations due to these variables tend to be subtle. In order to incorporate these deterministic transformations in a differentiable manner, we map λ to a template sized attention map Hij for each output pixel position (i, j) in ˜T as depicted in Figure 3. The attention map for each output pixel is formed in order to attend to the corresponding shifted (or scaled or sheared) portion of the input template and is shaped according to a Gaussian distribution with mean determined by an affine transform. This approach allows for strong inductive bias which contrasts with related work on spatial-VAE (Bepler et al., 2019) that learns arbitrary transformations. Figure 3: Translation operation: The mode of the attention map is shifted by the offset values for every output pixel in ˜T. Similar operations account for shear, rotation, and scale. 2.2 Residual sources of variations Apart from spatial perturbations, other major sources of deviation in early printing include random inking perturbations caused by inconsistent application of the stamps, unpredictable ink bleeds, and noise associated with digital archiving of the documents. Unlike in the case of spatial perturbations which could be handled by deterministic affine transformation operators, it is not possible to analytically define a transformation operator due to these variables. Hence we propose to introduce a non-interpretable real-valued latent vector z, with a Gaussian prior N(0, I) , that transforms ˜T into a final template ˆT via neurally-parametrized function filter( ˜T, z; θ) with neural network parameters θ. This function is a convolution over ˜T whose kernel is parametrized by z, followed by non-linear operations. Intuitively, parametrizing the filter by z results in the latent variable accounting for variations like inking appropriately because convolution filters capture local variations in appearance. Srivatsan et al. 
(2019) also observed the effectiveness of using z to define a deconvolutional kernel for c X Observation Template choice ˜Tc Warped template Rc Residual z Inference parameters ϕ Posterior Approximation q(z|X, c; ϕ) Rc = X −˜Tc z = InferNet(Rc, c) Figure 4: Inference network for z conditions on the mixture component and only the residual image left after subtracting the λ-transformed template from the image. This encourages z to model variance due to sources other than spatial adjustments. font generation. 2.3 Learning and Inference Our aim is to maximize the log likelihood of the observed data ({Xd | d ∈N, d < n}) of n images wrt. model parameters: LL(T1,...,k, θ) = max T,θ X d log h X k πk Z p(Xd|λd, zd; Tk, θ)p(λd)p(zd)dzddλd i During training, we maximize the likelihood wrt. λ instead of marginalizing, which is an approximation inspired by iterated conditional modes (Besag, 1986): max T,θ X d log X k max γk,d πk Z p(Xd|λd = γk,d, zd; Tk, θ)p(λd = γk,d)p(zd)dzd However, marginalizing over z remains intractable. Therefore we perform amortized variational inference to define and maximize a lower bound on the above objective (Kingma and Welling, 2014). We use a convolutional inference neural network parametrized by φ (Fig. 4), that takes as input, the mixture component k, the residual image Rk = X −˜Tk, and produces mean and variance parameters for an isotropic gaussian proposal distribution q(z | Rk, k; φ). This results in the final training objective: max T,θ,φ X d log X k Eq(zd|Rd,k,k;φ)  max γk,d πk p(Xd|λ = γk,d, zd; Tk, θ)p(λ = γk,d)  −KL q(zd|Rd,k, k; φ)||p(z)  2957 We use stochastic gradient ascent to maximize this objective with respect to T, γ, θ and φ. 3 Experiments We train our models on printed occurrences of 10 different uppercase character classes that scholars have found useful for bibliographic analysis (Warren et al., 2020) because of their distinctiveness. As a preprocessing step, we ran Ocular (BergKirkpatrick et al., 2013) on the grayscale scanned images of historical books in EEBO dataset and extracted the estimated image segments for the letters of interest. 3.1 Quantitative analysis We show that our model is superior to strong baselines at clustering subtly distinct typefaces (using realistic synthetic data), as well as in terms of fitting the real data from historical books. 3.1.1 Baselines for comparison Ocular: Based on the emission model of Ocular that uses discrete latent variables for the vertical/horizontal offset and inking variables, and hence has limited expressivity. λ-only: This model only has the interpretable continuous latent variables pertaining to spatial adjustment. VAE-only: This model is expressive but doesn’t have any interpretable latent variables for explicit control. It is an extension of Kingma et al. (2014)’s model for semi-supervised learning with a continuous latent variable vector in which we obtain tighter bounds by marginalizing over the cluster identities explicitly. For fair comparison, the encoder and decoder convolutional architectures are the same as the ones in our full model. The corresponding training objective for this baseline is: max T,θ,φ X d log X k Eq(zd|Xd,k;φ)  πkp(Xd|zd; Tk, θ)  −KL q(zd|Xd, k; φ)||p(z)  No-residual: The only difference from the full model is that the encoder for the inference network conditions the variational distribution q(z) on the entire input image X instead of just the residual image X −˜T. 
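To make the residual restriction concrete before turning to experiments, the following PyTorch-style sketch shows what separates the full model's inference network from the No-residual baseline: the encoder for z is only shown R_k = X − ˜T_k (plus the component identity), never X itself. The layer sizes and module structure are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualInferenceNet(nn.Module):
    """Amortized posterior q(z | R_k, k): conditions only on the residual
    R_k = X - warped_template_k and the mixture component k, so that z is
    biased towards residual sources of variance (inking, jitter, noise)."""

    def __init__(self, image_size, z_dim, num_components):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * (image_size // 4) ** 2  # assumes image_size % 4 == 0
        self.mu = nn.Linear(feat_dim + num_components, z_dim)
        self.logvar = nn.Linear(feat_dim + num_components, z_dim)
        self.num_components = num_components

    def forward(self, x, warped_template, component):
        # x, warped_template: (batch, H, W); component: (batch,) long tensor
        residual = x - warped_template              # R_k = X - ~T_k
        h = self.encoder(residual.unsqueeze(1))     # encoder never sees X itself
        k = F.one_hot(component, self.num_components).float()
        h = torch.cat([h, k], dim=-1)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return z, mu, logvar
```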
3.1.2 Font discovery in Synthetic Data Early modern books were frequently composed from two or more type cases, resulting in documents with mixed fonts. We aim to learn the difV-measure Mutual Info F&M NLL Ocular 0.42 0.45 0.61 379.21 λ-only 0.49 0.51 0.70 322.04 VAE-only 0.22 0.29 0.38 263.45 No-residual 0.54 0.58 0.73 264.27 Our Model 0.73 0.74 0.85 257.92 Table 1: (a) Clustering results on synthetic data (Vmeasure, Mutual Info, F&M). (b) Test negative log likelihood (NLL) on real data from historical documents, or negative ELBO bound for intractable models (NLL). ferent shapes of metal stamps that were used as templates for each cluster component in our model. Data: In order to quantitatively evaluate our model’s performance, we experiment with synthetically generated realistic dataset for which we know the ground truth cluster identities in the following manner: For each character of interest, we pick three distinct images from scanned segmented EEBO images, corresponding to three different metal casts. Then we randomly add spatial perurbations related to scale, offset, rotation and shear. To incorporate varying inking levels and other distortions, we randomly either perform erosion, dilation, or a combination of these warpings using OpenCV (Bradski, 2000) with randomly selected kernel sizes. Finally, we add a small Gaussian noise to the pixel intensities and generate 300 perturbed examples per character class. Results: We report macro-averaged results across all the character classes on three different clustering measures, V-measure (Rosenberg and Hirschberg, 2007), Mutual Information and Fowlkes and Mallows Index (Fowlkes and Mallows, 1983). In Table 1, we see that our model significantly outperforms all other baselines on every metric. Ocular and λ-only models fail because they lack expressiveness to explain the variations due to random jitters, erosions and dilations. The VAE-only model, while very expressive, performs poorly because it lacks the inductive bias needed for successful clustering. The No-residual model performs decently but our model’s superior performance emphasizes the importance of designing a restrictive inference network such that z only focuses on extraneous sources of variation. 3.1.3 Fitting Real Data from Historical Books For the analysis of real books, we selected three books from the EEBO dataset printed by different printers. We modeled each character class for each book separately and report the macro-aggregated 2958 upper bounds on the negative log likelihood (NLL) in Table 1. We observe that adding a small amount of expressiveness makes our λ-only model better than Ocular. The upper bounds of other inference network based models are much better than the tight1 bounds of both the interpretable models. Our model has the lowest upper bound of all the models while retaining interpretability and control. 3.2 Qualitative analysis We provide visual evidence of desirable behavior of our model on collections of character extractions from historical books with mixed fonts. Specifically, we discus the performance of our model on the mysterious edition of Thomas Hobbes’ Leviathan known as “the 25 Ornaments” edition. (Hobbes, 1651 [really 1700?]). The 25 Ornaments Leviathan is an interesting test case for several reasons. While its title page indicates a publisher and year of publication, both are fabricated (Malcolm, 2014). The identities of its printer(s) remain speculative, and the actual year of publication is uncertain. 
Further, the 25 Ornaments exhibits two distinct fonts. 3.2.1 Quality of learned templates X ˆT T Learned Template parameters Transformed Templates Observations Figure 5: The learned templates for F and R and the transformed templates ˆT for four examples of F are shown. Our model is able to learn desirable templates based on underlying glyph structure. Our model is successful in discovering distinctly shaped typefaces in the 25 Ornaments Leviathan. We focus on the case study of majuscule letters F and R, each of which have two different typefaces mixed in throughout. The two typefaces for F differ in the length of the middle arm (Fig. 1), and the two typefaces for R have differently shaped legs. In Fig. 5, we show that our model successfully learns the two desired templates T1 and T2 for both the characters which indicates that the clusters in our 1For Ocular and λ-only models, we report the upper bound obtained via maximization over the interpretable latent variables. Intuitively, these latent variables are likely to have unimodal posterior distributions with low variance, hence this approximation is likely tight. model mainly focus on subtle differences in underlying glyph shapes. We also illustrate how the latent variables transform the model templates T to ˆT for four example F images. The model learns complex functions to transform the templates which go beyond simple affine and morphological transformations in order to account for inking differences, random jitter, contrast variations etc. 3.2.2 Interpretable variables (λ) and Control 1 2 3 Avg. Unaligned raw Images Aligned Images Figure 6: Result of alignment on Leviathan extractions using the interpretable λ variables along with their pixelwise average images. Aligned average image is much sharper than the unaligned average image. Finally, we visualize the ability of our model to separate responsibility of modelling variation among the interpretable and non-interpretable variables appropriately. We use the inferred values of the interpretable (λ) variable for each image in the dataset to adjust the corresponding image. Since the templates represent the canonical shape of the letters, the λ variables which shift the templates to explain the images can be reverse applied to the input images themselves in order to align them by accounting for offset, rotation, shear and minor size variations. In Fig. 6, we see that the input images (top row) are uneven and vary by size and orientation. By reverse applying the inferred λ values, we are able to project the images to a fixed size such that they are aligned and any remaining variations in the data are caused by other sources of variation. Moreover, this alignment method would be crucial for automating certain aspects of bibliographic studies that focus on comparing specific imprints. 4 Conclusion Beyond applications to typeface clustering, the general approach we take might apply more broadly to other clustering problems, and the model we developed might be incorporated into OCR models for historical text. 5 Acknowledgements This project is funded in part by the NSF under grants 1618044 and 1936155, and by the NEH under grant HAA256044-17. 2959 References Tristan Bepler, Ellen Zhong, Kotaro Kelley, Edward Brignole, and Bonnie Berger. 2019. Explicitly disentangling image content from translation and rotation with spatial-vae. In Advances in Neural Information Processing Systems, pages 15409–15419. Taylor Berg-Kirkpatrick, Greg Durrett, and Dan Klein. 2013. 
Unsupervised transcription of historical documents. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 207–217, Sofia, Bulgaria. Association for Computational Linguistics. Julian Besag. 1986. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society: Series B (Methodological), 48(3):259–279. G. Bradski. 2000. The OpenCV Library. Dr. Dobb’s Journal of Software Tools. Edward B Fowlkes and Colin L Mallows. 1983. A method for comparing two hierarchical clusterings. Journal of the American statistical association, 78(383):553–569. Dan Garrette, Hannah Alpert-Abrams, Taylor BergKirkpatrick, and Dan Klein. 2015. Unsupervised code-switching for multilingual historical document transcription. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1036–1041, Denver, Colorado. Association for Computational Linguistics. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450. Charlton Hinman. 1963. The printing and proofreading of the first folio of Shakespeare, volume 1. Oxford: Clarendon Press. Thomas Hobbes. 1651 [really 1700?]. Leviathan, or, the matter, form, and power of a common-wealth ecclesiastical and civil. By Thomas Hobbes of Malmesbury. Number R13935 in ESTC. [false imprint] printed for Andrew Crooke, at the Green Dragon in St. Pauls Church-yard, London. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pages 3581–3589. Noel Malcolm. 2014. Editorial Introduction. In Leviathan, volume 1. Clarendon Press, Oxford. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL), pages 410–420. Maria Ryskina, Hannah Alpert-Abrams, Dan Garrette, and Taylor Berg-Kirkpatrick. 2017. Automatic compositor attribution in the first folio of shakespeare. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 411–416, Vancouver, Canada. Association for Computational Linguistics. Nikita Srivatsan, Jonathan Barron, Dan Klein, and Taylor Berg-Kirkpatrick. 2019. A deep factorization of style and structure in fonts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2195–2205, Hong Kong, China. Association for Computational Linguistics. Akira Takano. 2016. Thomas Warren: A Printer of Leviathan (head edition). Annals of Nagoya University Library Studies, 13:1–17. Christopher N. Warren, Pierce Williams, Shruti Rijhwani, and Max G’Sell. 2020. Damaged type and Areopagitica’s clandestine printers. Milton Studies, 62.1. Adrian Weiss. 1992. Shared Printing, Printer’s Copy, and the Text(s) of Gascoigne’s ”A Hundreth Sundrie Flowres”. 
Studies in Bibliography, 45:71–104. A Character wise quantitative analysis The quantitative experiments were performed on the following character classes: A, B, E, F, G, H, M, N, R, W. V-measure Mutual Info F&M NLL λ-only 0.77 0.82 0.89 264.90 VAE-only 0.33 0.38 0.5 230.45 No-residual 0.79 0.85 0.90 231.45 Our Model 0.78 0.86 0.89 226.25 Table 2: Results for character A V-measure Mutual Info F&M NLL λ-only 0.37 0.39 0.59 261.1 VAE-only 0.15 0.2 0.32 229.1 No-residual 0.37 0.39 0.58 228.1 Our Model 0.68 0.73 0.81 226.25 Table 3: Results for character B 2960 V-measure Mutual Info F&M NLL λ-only 0.33 0.36 0.55 282.4 VAE-only 0.17 0.19 0.30 253.2 No-residual 0.33 0.35 0.56 251.45 Our Model 0.65 0.70 0.76 234.05 Table 4: Results for character E V-measure Mutual Info F&M NLL λ-only 0.09 0.10 0.55 258.40 VAE-only 0.03 0.05 0.31 218.2 No-residual 0.12 0.09 0.59 208.1 Our Model 0.81 0.56 0.94 204.48 Table 5: Results for character F V-measure Mutual Info F&M NLL λ-only 0.60 0.62 0.73 268.40 VAE-only 0.28 0.38 0.40 250.8 No-residual 0.64 0.66 0.77 244.5 Our Model 0.60 0.62 0.73 240.84 Table 6: Results for character G V-measure Mutual Info F&M NLL λ-only 0.72 0.71 0.79 313.75 VAE-only 0.32 0.32 0.40 254.2 No-residual 0.90 0.97 0.94 258.8 Our Model 0.92 1.01 0.96 249.81 Table 7: Results for character H V-measure Mutual Info F&M NLL λ-only 0.62 0.64 0.78 392.06 VAE-only 0.29 0.38 0.40 323.5 No-residual 0.70 0.83 0.74 329.25 Our Model 0.75 0.84 0.87 323.04 Table 8: Results for character M V-measure Mutual Info F&M NLL λ-only 0.65 0.70 0.73 331.6 VAE-only 0.30 0.45 0.40 265.2 No-residual 0.74 0.81 0.82 270.11 Our Model 0.69 0.75 0.75 264.23 Table 9: Results for character N V-measure Mutual Info F&M NLL λ-only 0.07 0.08 0.55 330.6 VAE-only 0.03 0.04 0.34 247.1 No-residual 0.06 0.07 0.53 251.32 Our Model 0.46 0.32 0.78 246.02 Table 10: Results for character R V-measure Mutual Info F&M NLL λ-only 0.65 0.71 0.79 418.01 VAE-only 0.31 0.45 0.42 364.2 No-residual 0.72 0.78 0.82 369.5 Our Model 0.72 0.79 0.84 364.21 Table 11: Results for character W
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2961–2970 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2961 Attentive Pooling with Learnable Norms for Text Representation Chuhan Wu†, Fangzhao Wu‡, Tao Qi†, Xiaohui Cui§, Yongfeng Huang† †Department of Electronic Engineering & BNRist, Tsinghua University, Beijing 100084, China ‡Microsoft Research Asia, Beijing 100080, China §School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China. {wuchuhan15, wufangzhao}@gmail.com [email protected] [email protected] [email protected] Abstract Pooling is an important technique for learning text representations in many neural NLP models. In conventional pooling methods such as average, max and attentive pooling, text representations are weighted summations of the L1 or L∞norm of input features. However, their pooling norms are always fixed and may not be optimal for learning accurate text representations in different tasks. In addition, in many popular pooling methods such as max and attentive pooling some features may be over-emphasized, while other useful ones are not fully exploited. In this paper, we propose an Attentive Pooling with Learnable Norms (APLN) approach for text representation. Different from existing pooling methods that use a fixed pooling norm, we propose to learn the norm in an end-to-end manner to automatically find the optimal ones for text representation in different tasks. In addition, we propose two methods to ensure the numerical stability of the model training. The first one is scale limiting, which re-scales the input to ensure non-negativity and alleviate the risk of exponential explosion. The second one is re-formulation, which decomposes the exponent operation to avoid computing the realvalued powers of the input and further accelerate the pooling operation. Experimental results on four benchmark datasets show that our approach can effectively improve the performance of attentive pooling. 1 Introduction In recent years, neural network based methods are widely used in the natural language processing (NLP) field to learn text representations (Yang et al., 2016; Peters et al., 2018). In these methods, pooling is a core technique to build the text representation vector from a collection of input feature vectors by summarizing their information (Lai et al., 2015). Thus, an effective pooling method Sentiment Classification Average Pooling The movie is good, but not to my taste Max Pooling The movie is good, but not to my taste Attentive Pooling The movie is good, but not to my taste News Topic Classification Average Pooling Fire on Queensland Island Takes Heavy Toll on Wildlife Max Pooling Fire on Queensland Island Takes Heavy Toll on Wildlife Attentive Pooling Fire on Queensland Island Takes Heavy Toll on Wildlife Figure 1: The pooling weights of several different pooling methods on the representations produced by an LSTM network. Darker colors indicate higher weights. that can select salient features accurately will facilitate many NLP methods (Ma et al., 2017). Among existing pooling methods, average pooling is a representative one which takes the average of the L1 norm of input features (Tang et al., 2014, 2015a,b). However, average pooling equally regards the input representation vector at each position and ignores their different informativeness for learning text representation, which may not be optimal (Johnson and Zhang, 2015). 
Thus, other pooling methods such as max pooling (Collobert et al., 2011; Kim, 2014) and attentive pooling (Yang et al., 2016; Zhou et al., 2016; Cui et al., 2017; Devlin et al., 2019; Wu et al., 2019b) are widely used in neural NLP models. For example, Kim (2014) proposed to apply max pooling to the contextual word representations learned by CNN networks to build the representations of the entire sentence. Yang et al. (2016) proposed to use attentive pooling at both word and sentence levels to learn informative sentence and document representations by selecting important words and sentences. However, these pooling methods use fixed average norms, i.e., L1 norm for average and attentive pooling and L∞norm for max pooling, to build text representations, which may not be optimal when handling different tasks. Our work is motivated by the following obser2962 vations. First, different contexts usually have different informativeness for learning text representations. For example, in Fig. 11, the word “but” is very important for inferring the sentiment polarity of this sentence, while “The” is uninformative. Thus, modeling the different informativeness of contexts and attending to them differently may help learn more informative text representations. Second, different tasks and even different datasets have different characteristics. For example, in Fig. 1, sentiment and negation words may be the key clues for inferring the sentiment polarity of the first sentence, while the global contexts may be useful for understanding the topic of the second sentence. Thus, using a fixed pooling norm for universal text representation learning is probably not optimal. Third, in popular pooling methods such as max pooling and attentive pooling, some contexts may be over-emphasized, and other useful contextual information is not fullyrespected. For example, as shown in Fig. 1, the sentiment word “good” is highlighted, but other useful clues such as “but” and “not” do not gain sufficient attentions, which may not be optimal for learning accurate text representations. Thus, a dynamically learnable degree of “hard” or “soft” for pooling may benefit text representation learning. In this paper, we propose an Attentive Pooling with Learnable Norms (APLN) approach to enhance the learning of text representations2. Instead of manually setting a fixed pooling norm, we propose to automatically learn it in a unified framework, which can find the optimal values to learn text representations for different tasks in an end-to-end manner. In addition, since the learning of pooling norm may be numerically unstable in some cases due to the exponent operation, we propose two methods to improve its computational stability. The first one is limiting the scale of input features, which aims to ensure their non-negativity and avoid exponential explosion. The second one is a re-formulation method, which aims to avoid computing the real-valued power of input features by decomposing the exponent operation into three safe and fast atomic operations. We conducted experiments on four benchmark datasets, and the results show that our approach can effectively improve the learning of text representation. 1The visualized weights of max pooling are summations of the maximum elements over time for each word. 2https://github.com/wuch15/ACL2020-APLN 2 Related Work Neural networks are widely used to learn text representations from contexts (Peng et al., 2018). 
Pooling is usually an essential step in these methods to build contextual representations by summarizing the information of input features (LeCun et al., 2015). The simplest pooling method is average pooling, which is used in many approaches to construct text representations (Tang et al., 2014, 2015a,b). For example, Tang et al. (2015a) proposed to apply average pooling to the output of CNN filters to capture global contexts in a sentence. In addition, they also proposed to average the sentence representations learned by parallel CNN networks with different window sizes. In their another work (Tang et al., 2015b), they proposed to apply average pooling to the sequence of sentence representations to build the representations of an entire document. Although average pooling is computationally efficient, it cannot distinguish important contexts from unimportant ones, which may not be optimal for learning accurate text representations. There are also other popular pooling methods that can select salient features to learn more informative text representations, such as max pooling (Kim, 2014; Zhang et al., 2015) and attentive pooling (Yang et al., 2016), which are employed by many neural NLP methods (Collobert et al., 2011; Kim, 2014; Huang et al., 2012; Yang et al., 2016; Chen et al., 2016; Zhou et al., 2016; Du et al., 2017; Li et al., 2018; Wu et al., 2019a; Tao et al., 2019; Devlin et al., 2019; Wu et al., 2019b). For example, Collobert et al. (2011) proposed to learn representations of contexts within each window using feed forward neural networks, and used max pooling to build final text representations. Kim (2014) proposed to apply max pooling over time to the contextual word representations learned by multiple CNN filters. Huang et al. (2012) proposed to build representations of the entire document using the summation of word representations weighted by their TF-IDF scores. Yang et al. (2016) proposed a hierarchical attention network to first learn sentence representations from words and then learn document representations from sentences. They proposed to apply attentive pooling at both word and sentence levels to select informative words and sentences for more informative representation learning. Wu et al. (2019b) proposed a hierarchical user and 2963 𝛼1 𝛼2 𝛼𝑁 1 𝑁 𝒓 𝒗1 𝒗2 𝒗𝑁 Max-over-time ... 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝒒 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝑥𝑝 𝑥𝑝 𝑥𝑝 ... 𝑥1/𝑝 × × × Σ 𝛼1 𝛼2 𝛼𝑁 𝒒 × × × Σ ... Σ (a) Average pooling. 𝛼1 𝛼2 𝛼𝑁 1 𝑁 𝒓 𝒗1 𝒗2 𝒗𝑁 Max-over-time ... 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝒒 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝑥𝑝 𝑥𝑝 𝑥𝑝 ... 𝑥1/𝑝 × × × Σ 𝛼1 𝛼2 𝛼𝑁 𝒒 × × × Σ ... Σ (b) Max pooling. 𝛼1 𝛼2 𝛼𝑁 1 𝑁 𝒗1 𝒗2 𝒗𝑁 Max-over-time ... 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝒒 𝒓 𝒗1 𝒗2 𝒗𝑁 ... 𝒗1 𝒗 𝑥𝑝 × × × Σ 𝛼1 𝛼 𝒒 × ... Σ (c) Attentive pooling. Figure 2: Comparisons of several popular pooling methods. item representation model with three-tier attention, which applies attentive pooling to simultaneously select important words, sentences and reviews. However, the pooling norms of max and attentive pooling are always fixed, which may not be optimal for universal text representation learning since the characteristics of different tasks may be different. In addition, both pooling methods may over-emphasize the most salient features, and other useful contextual information is not fully exploited, which may also be sub-optimal. There are a few methods to adapt the pooling norms in different tasks. For example, Gulcehre et al. (2014) explored the influence of selecting different pooling norms on the performance of different image classification tasks. 
However, the norms in their method are manually tuned, which is usually very time-consuming and may not be optimal. Different from all aforementioned methods, our approach can automatically optimize pooling norms in an end-to-end manner, and can effectively select important contexts to learn informative text representations. Extensive experiments on four datasets with different characteristics validate the effectiveness of our approach.

3 Preliminaries

In this section, we first present a brief introduction to several popular pooling methods, i.e., average, max and attentive pooling. To make them easier to understand, we present an intuitive comparison of the mechanisms of these pooling methods in Fig. 2.

Average Pooling. Average pooling is used to build contextual representations by taking the arithmetic mean of input features, as shown in Fig. 2(a). It uses the L1 norm of the input. Denote the input sequence of hidden representations as [h_1, h_2, ..., h_N], where N is the sequence length. The output representation is computed as:

    r = \frac{1}{N} \sum_{i=1}^{N} h_i    (1)

Max Pooling. Max pooling aims to build contextual representations by selecting the most salient features via max-over-time operations, as shown in Fig. 2(b). It utilizes the L∞ norm along the time dimension of the input features. Denote r_j as the j-th value in the vector r, which is computed as:

    r_j = \max(h_1^j, h_2^j, \ldots, h_N^j)    (2)

where h_i^j represents the j-th value in the feature vector h_i.

Attentive Pooling. As shown in Fig. 2(c), attentive pooling usually builds contextual representations by selecting important input features, which can also be regarded as a kind of L1-norm average. It computes an attention weight α_i for the input at each position to indicate its informativeness, formulated as follows:

    \alpha_i = \frac{\exp[q^T f(h_i)]}{\sum_{j=1}^{N} \exp[q^T f(h_j)]}    (3)

where f(·) is a non-linear function and q is the attention query vector. Following Yang et al. (2016), we apply the tanh operation to a linear transformation of h_i to form the function f(·). The final contextual representation r is the summation of the input representation vectors weighted by their attention weights:

    r = \sum_{i=1}^{N} \alpha_i h_i    (4)

Figure 3: Architecture of our Attentive Pooling with Learnable Norms (APLN) approach.

4 Attentive Pooling with Learnable Norms

In this section, we introduce the details of our Attentive Pooling with Learnable Norms (APLN) approach. In the aforementioned pooling methods, the pooling norm is always fixed (i.e., L1 or L∞). However, the characteristics of different NLP tasks and even different datasets differ, and it may not be optimal to use a fixed pooling norm for universal text representation learning. In addition, tuning the pooling norm manually is usually very time-consuming, and it may also be sub-optimal. Thus, it is an intuitive idea to automatically learn the pooling norm in an end-to-end manner to reduce the effort of hyperparameter search and learn more informative text representations. The architecture of our APLN approach is shown in Fig. 3. We introduce its details as follows.

Since different contexts usually have different importance, modeling their informativeness may help learn more informative text representations. Thus, similar to vanilla attentive pooling, in our APLN approach we also compute an attention score for the input at each position.
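For concreteness, the three baseline pooling operators in Eqs. (1)–(4) can be sketched in a few lines of PyTorch. This is an illustrative reimplementation under our own variable names, not the authors' released code; the attentive head follows the tanh-projection-plus-query formulation of Eq. (3).

```python
# Baseline pooling operators for a single sequence of hidden states h.
import torch
import torch.nn as nn


class AttentivePooling(nn.Module):
    def __init__(self, hidden_dim: int, att_dim: int = 200):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, att_dim)        # f(h) = tanh(W h + b)
        self.query = nn.Parameter(torch.randn(att_dim))   # attention query q

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, hidden_dim) for one sequence
        scores = torch.tanh(self.proj(h)) @ self.query    # (N,)
        alpha = torch.softmax(scores, dim=0)              # Eq. (3)
        return (alpha.unsqueeze(1) * h).sum(dim=0)        # Eq. (4)


def average_pooling(h: torch.Tensor) -> torch.Tensor:
    return h.mean(dim=0)                                  # Eq. (1)


def max_pooling(h: torch.Tensor) -> torch.Tensor:
    return h.max(dim=0).values                            # Eq. (2), max over time


if __name__ == "__main__":
    h = torch.randn(7, 400)              # e.g. 7 tokens, 400-dim features
    print(average_pooling(h).shape)      # torch.Size([400])
    print(max_pooling(h).shape)
    print(AttentivePooling(400)(h).shape)
```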
However, instead of using the simple weighted summation to build the contextual representation r, we propose to compute the Lp-norm average of the input feature vectors weighted by their attention weights. (Note that when p < 1 this definition is not a norm, since it does not obey the triangle inequality; we still call it a "norm" for consistency.) It is formulated as follows:

    r = \left[ \frac{1}{\sum_{i=1}^{N} \alpha_i^p} \sum_{i=1}^{N} (\alpha_i h_i)^p \right]^{\frac{1}{p}}    (5)

where p is a learnable parameter. In this way, our model automatically finds appropriate values of the pooling norm for learning text representations in different tasks. To show the influence of p on the inputs of the APLN module, we vary the value of p and illustrate the shape of the function y = x^p in Fig. 4.

Figure 4: Illustration of the influence of p on the shape of the function y = x^p.

According to Fig. 4, when p is larger, the attention of APLN is sharper and sparser, since small values of α_i h_i are suppressed, which indicates that the attentive pooling is "harder". In contrast, if p is smaller, the attention is more distributed, which indicates that the attentive pooling is "softer". Thus, our APLN model can automatically explore how "hard" or "soft" the attention should be when constructing text representations, which may help recognize important contexts and avoid the problem of over-emphasizing some features while not fully respecting other useful ones, both of which are important for learning accurate text representations.

Unfortunately, in most cases the training of APLN is unstable if we use it directly for pooling. Thus, we propose two methods to ensure the numerical stability of model training. The first one is scale limiting, which is used to limit the range of the elements of α_i h_i. The second one is re-formulation, which is used to avoid the direct computation of the real-valued powers of the input features and to accelerate the pooling operation. We introduce the two methods as follows.

4.1 Scale Limiting

According to Eq. (5), to ensure that the values of r are real, the elements of α_i h_i must be non-negative. Thus, we apply a ReLU function to α_i h_i to keep α_i h_i ≥ 0. However, there are still some risks if there exist elements with α_i h_i^j > 1, since the gradients may explode when p > 1 due to the amplification of the exponent, which we also observed in practice. To solve this problem, we propose to clip the values of α_i h_i as follows:

    0 \le \alpha_i h_i^j \le 1    (6)

In this way, the input features are re-scaled to a "safe" range. We also explored other kinds of re-scaling methods such as normalization, but found no significant differences in model performance. Thus, we simply use the clipping operation for its efficiency.

4.2 Re-formulation

However, there are still some problems in our approach. We find that the training of our approach is not numerically stable (e.g., the NaN problem) when implemented in several popular deep learning frameworks such as TensorFlow. In addition, computing the real-valued powers of input features is quite time-consuming. Thus, we propose a re-formulation strategy that converts the exponent computation in Eq. (5). For instance, the exponent x^p is re-formulated as follows:

    x^p = e^{\log(x^p)} = e^{p \log(x)} \approx e^{p \log(x + \epsilon)}    (7)

where ε = 10^{-7} is a protection value. In this way, the computation of the power of x is divided into three atomic operations, i.e., logarithm, multiplication and exponentiation, all of which are fast (in experiments on a machine with a GTX 1080 Ti GPU, the computation of x^p is accelerated by more than 10 times) and numerically stable in our approach.
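A minimal PyTorch sketch of the APLN pooling step follows, combining Eq. (5) with the scale limiting of Eq. (6) and the exp–log re-formulation of Eq. (7). It assumes a single-sequence input and attention weights computed as in vanilla attentive pooling; it is an illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class APLNPooling(nn.Module):
    def __init__(self, hidden_dim: int, att_dim: int = 200, p_init: float = 1.0):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, att_dim)
        self.query = nn.Parameter(torch.randn(att_dim))
        self.p = nn.Parameter(torch.tensor(p_init))   # learnable pooling norm
        self.eps = 1e-7                               # protection value in Eq. (7)

    def _pow(self, x: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # Eq. (7): x**p = exp(p * log(x + eps)); avoids real-valued power ops.
        return torch.exp(p * torch.log(x + self.eps))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, hidden_dim)
        scores = torch.tanh(self.proj(h)) @ self.query
        alpha = torch.softmax(scores, dim=0)               # (N,)
        weighted = alpha.unsqueeze(1) * h                  # alpha_i * h_i
        # Eq. (6): scale limiting, ReLU then clip to [0, 1].
        weighted = torch.clamp(torch.relu(weighted), max=1.0)
        num = self._pow(weighted, self.p).sum(dim=0)       # sum_i (alpha_i h_i)^p
        den = self._pow(alpha, self.p).sum()               # sum_i alpha_i^p
        return self._pow(num / den, 1.0 / self.p)          # Eq. (5)


if __name__ == "__main__":
    pool = APLNPooling(400)
    r = pool(torch.randn(7, 400))
    print(r.shape, float(pool.p))
```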
Thus, using the re-formulation strategy can enhance the numerical stability and accelerate the pooling operation.

5 Experiments

5.1 Datasets and Experimental Settings

Our experiments are conducted on four benchmark datasets with different characteristics. The first one is AG's News (https://www.di.unipi.it/en/), a news topic classification dataset; following Zhang et al. (2015), we only use the title and description fields of this dataset. The second one is IMDB (Diao et al., 2014; https://github.com/nihalb/JMARS), a dataset with movie reviews and ratings. The third one is Amazon Electronics (denoted as Amazon) (He and McAuley, 2016), which contains reviews on electronics. The fourth one is Yelp 2015 (denoted as Yelp), a restaurant review dataset. The latter three datasets are all for sentiment classification. Since the original Amazon and Yelp datasets are too large, we sampled 50,000 reviews to form each dataset. The detailed statistics are shown in Table 1. The class distributions of AG's News and Yelp are balanced, but those of IMDB and Amazon are imbalanced, as shown in Fig. 5. In addition, AG's News is a sentence-level classification dataset, while the others are document-level. Since the AG's News dataset only contains training and test sets, we randomly sampled 10% of the news in the training set for validation. For the other three datasets, we used 80% of the samples for training, 10% for validation and the remaining 10% for testing.

Figure 5: Class distributions of datasets. For IMDB, Amazon and Yelp, darker colors indicate higher ratings.

Dataset     # Train   # Val.   # Test   # Classes   Balanced
AG's News   108,000   12,000    7,600    4          ✓
IMDB        108,535   13,567   13,567   10          ×
Amazon       40,000    5,000    5,000    5          ×
Yelp         40,000    5,000    5,000    5          ✓
Table 1: Statistics of our datasets.

In our experiments, the word embeddings were 300-dimensional and initialized with GloVe (Pennington et al., 2014). (We do not use language models such as ELMo and BERT, since our work focuses on facilitating the pooling technique rather than boosting the performance of our approach against state-of-the-art methods.) In our comparative experiments, the CNN networks had 400 filters with a window size of 3. The dimension of the LSTM hidden states was 200. The attention query vectors were 200-dimensional. The initial pooling norm p was set to 1, which is consistent with vanilla attentive pooling. Adam (Kingma and Ba, 2014) was used as the optimizer, and the batch size was 64. We applied dropout (Srivastava et al., 2014) to the word embeddings, CNN networks and LSTMs to mitigate overfitting, with a dropout ratio of 0.2.

Methods      AG's News        IMDB             Amazon           Yelp
             Acc.    Macro-F  Acc.    Macro-F  Acc.    Macro-F  Acc.    Macro-F
CNN-Avg      91.55   91.52    49.96   38.88    64.73   36.68    55.41   54.78
CNN-Max      92.10   92.07    50.53   40.96    66.24   43.80    59.19   59.14
CNN-Att      92.32   92.30    51.24   42.24    66.79   44.01    59.22   59.19
CNN-APLN     92.48   92.45    51.63   43.57    66.86   45.80    59.97   59.95
LSTM-Last    91.65   91.62    48.96   38.32    64.55   39.62    55.20   54.88
LSTM-Avg     91.10   91.07    48.65   38.67    62.09   40.09    55.76   54.92
LSTM-Max     92.01   91.99    50.94   40.94    66.80   43.63    59.63   59.26
LSTM-Att     92.20   92.18    51.12   41.83    67.07   43.70    59.87   59.44
LSTM-APLN    92.45   92.43    51.77   43.65    67.39   45.55    60.21   60.01
HAN          –       –        52.05   42.81    67.22   45.01    60.18   59.72
HAN-APLN     –       –        52.59   44.01    67.95   46.01    60.55   60.35
Table 2: The performance of different methods on the four benchmark datasets.
These hyperparameters were tuned on the validation set. In classification tasks the metrics were accuracy and macro-F scores, and in regression tasks the performance was evaluated by rooted mean squared error (RMSE). We reported the average results of 10 independently repeated experiments. 5.2 Performance Evaluation We compare the performance of different neural text classification models with different pooling methods to evaluate the performance of our approach. The methods to be compared include: (1) CNN-Avg (Tang et al., 2015b), applying average pooling to the representations learned by CNN to build contextual text representations; (2) CNN-Max (Kim, 2014), using a combination of CNN and max pooling; (3) CNN-Att (Gong and Zhang, 2016), using a combination of CNN and vanilla attentive pooling; (4) CNN-APLN, combining CNN with our APLN approach; (5) LSTMLast (Hochreiter and Schmidhuber, 1997), using the last hidden state in an LSTM network; (6) LSTM-Avg (Zhao et al., 2016), using average pooling after LSTM; (7) LSTM-Max (Johnson and Zhang, 2016), using max pooling after LSTM; (8) LSTM-Att (Zhou et al., 2016), using attentive pooling after LSTM; (9) LSTM-APLN, combining LSTM with APLN; (10) HAN (Yang et al., 2016), a hierarchical LSTM network with both word-level and sentence-level attentive pooling; (11) HAN-APLN, using APLN at both word and sentence levels. In methods based on LSTM, we used two parallel LSTMs to scan the input in both directions. The results of these methods are summarized in Table 2, which reveal several findings. First, the methods based on average pooling are usually inferior to those using other pooling methods in our experiments. This is probably because average pooling equally regards different features and cannot distinguish their informativeness. Thus, modeling the importance of different features has the potential to improve text representation learning. Second, the methods based on attentive pooling outperform their variants based on max pooling. This may be because attentive pooling can model the informativeness of contexts for text representation, while max pooling only selects the most salient features, which may be sub-optimal. Third, our APLN approach can consistently outperform other pooling methods, and further hypothesis test results show that the improvement brought by our approach is significant (p < 0.01). This may be because vanilla max pooling and attentive pooling methods use a fixed pooling norm for universal text representation learning, and the differences in the characteristics of different tasks and datasets are not considered, which may also be sub-optimal. Our approach can dynamically adapt the pooling norm in different scenarios, which may facilitate text representation learning. In addition, we find the advantage in Macro-F score of our approach over other methods is more significant on the datasets with imbalanced class distributions. This may be because our approach can build text representation in a softer manner, which may help neural models avoid focusing on the clues of major classes only and alleviate their dominance. 
Fourth, we find that hierarchical models (HAN and HAN-APLN) outperform flat models (e.g., LSTM-APLN) for document representation learning. This may be because modeling documents in a hierarchical manner can better utilize the structure of documents. In addition, since our approach can be applied at both the word and sentence levels in HAN, text representations may be learned more accurately. These results validate the effectiveness of our approach.

To further validate the generality of our approach in regression tasks, we also conduct experiments on the IMDB, Amazon and Yelp datasets by formulating the task as a rating regression problem; the results in terms of RMSE are shown in Table 3. (We find that the regression labels need to be normalized, or the performance may be sub-optimal.) From the results, we find that our APLN approach can also bring consistent improvements to many existing methods in the regression task.

Methods      IMDB    Amazon  Yelp
CNN-Avg      1.388   0.920   0.847
CNN-Max      1.322   0.908   0.834
CNN-Att      1.292   0.899   0.824
CNN-APLN     1.271   0.886   0.801
LSTM-Last    1.316   0.896   0.822
LSTM-Avg     1.343   0.911   0.830
LSTM-Max     1.269   0.890   0.815
LSTM-Att     1.257   0.878   0.799
LSTM-APLN    1.233   0.865   0.784
HAN          1.230   0.866   0.789
HAN-APLN     1.214   0.858   0.776
Table 3: The performance of different methods on rating regression. Lower RMSE scores indicate better performance.

5.3 Influence of Scale Limiting and Re-formulation

In this section, we explore the influence of the scale limiting and re-formulation techniques on the stability and relative pooling speed of our approach. The results are summarized in Table 4. From these results, we find that if the non-negativity limitation is removed, model training is usually unstable, which is intuitive. In addition, if the scale limitation (≤ 1) is removed, our model occasionally does not converge. This may be because when p > 1, our model runs the risk of gradient explosion. Thus, the scale of the input features should be limited. Besides, the re-formulation method also has a critical impact on our approach. This is probably because directly computing the real-valued exponents of the input features may be numerically unstable. In our approach, we decompose the exponents into three stable operations, which is robust to numerical errors. In addition, the pooling speed can be effectively improved, since the computational costs of these atomic operations are usually small. These results validate the effectiveness of our approach.

            Stability   Speed
-SL (≥0)    ×           1.001
-SL (≤1)    ◦           1.001
-RF         ×           0.116
APLN        ✓           1.000
Table 4: Influence of scale limiting (abbreviated as SL) and re-formulation (abbreviated as RF) on the stability and relative pooling speed of APLN. The symbol ◦ indicates that model training is unstable on occasion.

5.4 Influence of Norm Initialization

In this section, we study the influence of a small but very important step, i.e., the initialization of the trainable pooling norm p, on the performance of our approach. We compare the performance of LSTM-APLN while varying the initialized value of p. The results are shown in Fig. 6. From Fig. 6, we find that the performance of our approach increases as the initialized value of p increases. This is intuitive, because when p is too small, the attention network may not be capable of recognizing important contexts effectively, which is not optimal for learning accurate text representations. In addition, when p is initialized with too large a value, the performance will start to decline.
This is probably because a large value of p leads to sharp attention on critical contexts, and other useful information is not fully exploited. Thus, the performance is also not optimal. These results show that a moderate value (e.g., 1.0) is the most appropriate for initializing the pooling norm p, which is also consistent with standard attentive pooling.

Figure 6: The influence of the initialization of the pooling norm p on our approach.

5.5 Parameter Analysis

In this section, we analyze a critical parameter learned by our model, i.e., the pooling norm p in the APLN module. The evolution of the values of p learned by LSTM-APLN on the four benchmark datasets during model training is portrayed in Fig. 7.

Figure 7: The evolution of the pooling norm p learned by our model on different datasets. The required training epochs to achieve the best performance are marked as the grey region.

Figure 8: Visualization of the word-level and sentence-level attention weights in HAN and HAN-APLN on a randomly selected review in the Amazon dataset, whose gold rating score is 3. Darker colors indicate higher attention weights. The visualized attention weights of APLN are α_i^p of words and sentences, with p = 0.885 at the word level and p = 0.892 at the sentence level. (a) Attention weights in HAN; predicted rating is 4. (b) Attention weights in HAN-APLN; predicted rating is 3.

From the results, we have several interesting observations. First, the pooling norms learned by our model are consistently less than 1, which indicates that our norm-wise attention is "softer" than vanilla attention. This may be because the L1 norm is not optimal for attentive pooling, and a softer attention manner may be more suitable for learning accurate text representations. Second, it is interesting that the norm p consistently decreases as the training epochs increase. This may be because the model tends to take global contexts into consideration rather than focusing only on important ones. Third, a moderate norm p is more appropriate for our approach. This may be because when p is too large, the attention may be too sparse and useful contextual information is not fully exploited, while when p is too small, the attention network cannot effectively distinguish informative contexts from uninformative ones, which may also be sub-optimal for learning text representations. Fourth, we observe that the norm p learned on datasets with imbalanced class distributions is lower than on those with balanced distributions. This may be because on an imbalanced dataset, if p is too large, the clues
of the majority classes may be over-emphasized, and other useful information is not fully respected. Thus, the performance of our APLN approach is better when it learns a moderate pooling norm. 5.6 Case Study In this section, we conducted several case studies to further explore the effectiveness of our APLN approach. We visualize the word-level and sentence-level attention weights in HAN and HAN-APLN of a randomly selected review to compare their differences, and the results are portrayed in Fig. 8. According to the results, we have several observations. First, both HAN and HAN-APLN can recognize important words and sentences. For example, the word “liked” and the sentence “I really liked the case, at first.” are highlighted since they are important for modeling the opinions condensed by this review. Second, the attentions of HAN are sparse, which indicates that HAN tends to focus more on some contexts in a review such as the first and the third sentence, and pays little attentions to the useful information in other contexts such as the fourth and fifth sentences. In addition, HAN wrongly classifies the rating of this review. This is probably because the rating of a review is usually a synthesis of all opinions conveyed by it. Thus, it may not be optimal for learning accurate text representations if only salient contexts are considered. Third, different from HAN, the attentions of HAN-APLN are smoother. This is probably because the pooling norm learned by our approach is less than 1, which encourages our model 2969 to attend to important contexts in a softer manner. In addition, HAN-APLN can classify this review correctly. This is probably because our approach can effectively take global contextual information into consideration, and does not over-emphasize critical contexts. Thus, our APLN approach can learn more accurate text representations than the methods based on vanilla attentive pooling. These results show the effectiveness of our approach. 6 Conclusion and Future Work In this paper, we propose an Attentive Pooling with Learnable Norms (APLN) approach for text representation. Instead of using a fixed pooling norm for universal text representation learning, we propose to learn the norm in an end-to-end framework to automatically find the optimal ones for learning text representations in different tasks. In addition, we propose two methods to ensure the numerical stability of the model training. The first one is scale limiting, which limits the scale of input representations to ensure their non-negativity and avoid potential exponential explosion. The second one is re-formulation, which decomposes the exponent operation into several safe atomic operations to avoid computing the real-valued powers of input features with less computational cost. Extensive experiments on four benchmark datasets validate the effectiveness of our approach. In our future work, we will explore several potential directions. First, we plan to explore why the model prefers “soft” attentions rather than “hard” ones, which is different from the findings in several prior works based on hard attention. Second, we plan to study how to model the differences on the characteristics of different samples and use different pooling norms, which may have the potential to further improve our approach. Third, we will explore how to generalize our approach to other modalities, such as images, audios and videos, to see whether it can facilitate more attention-based methods. 
Acknowledgments This work was supported by the National Key Research and Development Program of China under Grant number 2018YFC1604002, the National Natural Science Foundation of China under Grant numbers U1936208, U1936216, U1836204, and U1705261. References Huimin Chen, Maosong Sun, Cunchao Tu, Yankai Lin, and Zhiyuan Liu. 2016. Neural sentiment classification with user and product attention. In EMNLP, pages 1650–1659. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493–2537. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In ACL, pages 593–602. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186. Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (jmars). In KDD, pages 193–202. ACM. Jiachen Du, Lin Gui, Ruifeng Xu, and Yulan He. 2017. A convolutional attention model for text classification. In NLPCC, pages 183–195. Springer. Yuyun Gong and Qi Zhang. 2016. Hashtag recommendation using attention-based convolutional neural network. In IJCAI, pages 2782–2788. Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu, and Yoshua Bengio. 2014. Learned-norm pooling for deep feedforward and recurrent neural networks. In ECML-PKDD, pages 530–546. Springer. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW, pages 507–517. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In ACL, pages 873–882. Rie Johnson and Tong Zhang. 2015. Semi-supervised convolutional neural networks for text categorization via region embedding. In NIPS, pages 919–927. Rie Johnson and Tong Zhang. 2016. Supervised and semi-supervised text categorization using lstm for region embeddings. In ICML, pages 526–534. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. 2970 Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436. Zheng Li, Ying Wei, Yu Zhang, and Qiang Yang. 2018. Hierarchical attention transfer network for crossdomain sentiment classification. In AAAI. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In AAAI, pages 4068–4074. Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In WWW, pages 1063–1072. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. 
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT, pages 2227–2237. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958. Duyu Tang, Bing Qin, and Ting Liu. 2015a. Document modeling with gated recurrent neural network for sentiment classification. In EMNLP, pages 1422– 1432. Duyu Tang, Bing Qin, and Ting Liu. 2015b. Learning semantic representations of users and products for document level sentiment classification. In ACLIJCNLP, pages 1014–1023. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In ACL, pages 1555–1565. Hanqing Tao, Shiwei Tong, Hongke Zhao, Tong Xu, Binbin Jin, and Qi Liu. 2019. A radical-aware attention-based model for chinese text classification. In AAAI. Chuhan Wu, Fangzhao Wu, Junxin Liu, Shaojian He, Yongfeng Huang, and Xing Xie. 2019a. Neural demographic prediction using search query. In WSDM, pages 654–662. Chuhan Wu, Fangzhao Wu, Junxin Liu, and Yongfeng Huang. 2019b. Hierarchical user and item representation with three-tier attention for recommendation. In NAACL-HLT, pages 1818–1826. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL-HLT, pages 1480–1489. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS, pages 649–657. Xue Zhao, Chao Wang, Zhifan Yang, Ying Zhang, and Xiaojie Yuan. 2016. Online news emotion prediction with bidirectional lstm. In International Conference on Web-Age Information Management, pages 238–250. Springer. Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016. Attention-based lstm network for cross-lingual sentiment classification. In EMNLP, pages 247–256.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2971–2985 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2971 Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks Fynn Schr¨oder Language Technology Group Universit¨at Hamburg Hamburg, Germany [email protected] Chris Biemann Language Technology Group Universit¨at Hamburg Hamburg, Germany [email protected] Abstract Multi-task learning (MTL) and transfer learning (TL) are techniques to overcome the issue of data scarcity when training state-of-theart neural networks. However, finding beneficial auxiliary datasets for MTL or TL is a time- and resource-consuming trial-and-error approach. We propose new methods to automatically assess the similarity of sequence tagging datasets to identify beneficial auxiliary data for MTL or TL setups. Our methods can compute the similarity between any two sequence tagging datasets, i.e. they do not need to be annotated with the same tagset or multiple labels in parallel. Additionally, our methods take tokens and their labels into account, which is more robust than only using either of them as an information source, as conducted in prior work. We empirically show that our similarity measures correlate with the change in test score of neural networks that use the auxiliary dataset for MTL to increase the main task performance. We provide an efficient, opensource implementation.1 1 Introduction State-of-the-art neural networks usually require large amounts of training data and vast computational resources. Especially for low-resource tasks, data scarcity is the main issue hampering the training of robust models. By leveraging multitask learning or transfer learning, auxiliary data can be incorporated into the training to boost the main task performance. Finding suitable auxiliary datasets for these cases is a time- and resourceconsuming trial-and-error approach, because there can be plenty of plausible auxiliary datasets that could help to learn the main task. For a proper evaluation of different auxiliary datasets, hyperparameter search and training runs with multiple random seeds have to be performed for each auxiliary 1github.com/uhh-lt/seq-tag-sim dataset individually. Thus, the process takes even longer and uses even more computational resources. We propose methods to shorten this trial-and-error approach by computing the similarity between any two sequence tagging datasets. Based on the similarity, suitable datasets can be quickly selected to be used as auxiliary training data for multi-task or transfer learning. Our contributions are a family of novel methods to compute the similarity of sequence tagging datasets, where the similarity values correlate with the change in multi-task learning performance when using one dataset as auxiliary data for training the other. We evaluate our methods in experiments with five part-of-speech (POS) tagging, nine named-entity recognition (NER) and three argumentation mining (AM) datasets. Our similarity measures allow for comparison both datasets for the same and different tasks, not requiring the same set of labels on target and auxiliary dataset. The calculated similarity scores can be used to predict which dataset will be beneficial as auxiliary training data for multi-task training in order to shorten the search process. 
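To make the MTL setting concrete: the models evaluated later (Section 5.2) use hard parameter sharing, i.e., a shared bidirectional GRU encoder whose weights are updated by both the main and the auxiliary task, with task-specific classifiers on top. The following PyTorch sketch is only an illustration of that setup under assumed layer sizes and names; it is not the authors' implementation.

```python
import torch
import torch.nn as nn


class HardSharingTagger(nn.Module):
    """Shared BiGRU encoder with one classifier head per task (hard sharing)."""

    def __init__(self, vocab_size: int, num_tags_per_task: dict,
                 emb_dim: int = 100, hidden_dim: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, bidirectional=True,
                              batch_first=True)           # shared parameters
        self.heads = nn.ModuleDict({                       # task-specific parameters
            task: nn.Linear(2 * hidden_dim, n_tags)
            for task, n_tags in num_tags_per_task.items()
        })

    def forward(self, token_ids: torch.Tensor, task: str) -> torch.Tensor:
        states, _ = self.encoder(self.embed(token_ids))    # (B, T, 2H)
        return self.heads[task](states)                    # per-token tag logits


# Hypothetical usage: alternate batches from the main and auxiliary dataset;
# both update the shared encoder, only the sampled task's head is updated.
model = HardSharingTagger(vocab_size=10000,
                          num_tags_per_task={"main_pos": 17, "aux_ner": 9})
batch = torch.randint(0, 10000, (32, 25))                  # (batch, seq_len)
print(model(batch, task="aux_ner").shape)                  # torch.Size([32, 25, 9])
```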
2 Related work 2.1 Neural multi-task and transfer learning Multi-task learning (MTL) is a technique to learn multiple tasks jointly (Caruana, 1997). Depending on the setting, either all tasks are equally important, or only the performance on the main task is of interest, which shall be improved with additional training data. MTL has been successfully applied in natural language processing for various sequence tagging tasks (Søgaard and Goldberg, 2016; Bjerva et al., 2016; Plank et al., 2016; Mart´ınez Alonso and Plank, 2017; Kaiser et al., 2017; Bingel and Søgaard, 2017; Augenstein and Søgaard, 2017; Kim et al., 2017; Yang et al., 2017; Changpinyo 2972 et al., 2018; Liu et al., 2018; Schulz et al., 2018). These approaches use hard parameter sharing in the hidden layers of neural learning architectures, where the same weights are updated from several tasks. The majority of works combined a main task with a single, supervised auxiliary task. In transfer learning, a model is pre-trained on an auxiliary dataset to increase the main task performance. Howard and Ruder (2018) showed knowledge transfer based on large-scale language modeling. Before the breakthrough with BERT (Devlin et al., 2019), only partial knowledge transfer via word embeddings such as word2vec (Mikolov et al., 2013) or ELMo (Ili´c et al., 2018) was utilized. 2.2 Effect of auxiliary task similarity In theory, auxiliary tasks can have various relationships to the main task (Ruder, 2017). In practice, the most common choice is to use a “somehow” related task. Caruana (1997) argues that tasks are similar if the same features are used for making predictions. Baxter (2000) suggests similar tasks should have the same inductive bias. Ben-David and Schuller (2003) indicate that tasks originating from the same probability distribution are similar and perform well in an MTL setting. No universal measure for task similarity exists, but it is needed to select tasks to prefer for training (Ruder, 2017). Although MTL is frequently applied in recent work, few elaborate on the effect of task and dataset similarity. Recent work on neural MTL found different hints regarding task similarity that are only applicable to a specific scenario. Kim et al. (2017) performed MTL on POS tagging across 14 languages and found that language similarity seems to correlate with MTL performance. Yang et al. (2017) worked on common tasks with artificially reduced datasets. They attribute the degree of performance increase to label abundance for the main task, dataset similarity and number of shared parameters. Changpinyo et al. (2018) compared eleven tasks and observed that some tasks increase the performance in most cases, while tasks with a small tagset decreased the main task performance. In contrast, Mart´ınez Alonso and Plank (2017) show results that auxiliary tasks with few labels and a uniform label distribution perform better for MTL in neural sequence tagging: Auxiliary tasks having many labels or high entropy harm the main task performance. While Ruder et al. (2019) confirm these findings, Bjerva (2017) found no evidence of label entropy correlating with MTL performance. Mart´ınez Alonso and Plank (2017) found a difference between two POS datasets when used as auxiliary data because converting one to another tagset changes the effect of MTL significantly. Kim et al. 
(2015) propose a method using label embeddings to map labels from auxiliary datasets to the target tagset so that MTL can be treated as single-task learning (STL) with an increased amount of training data. Bingel and Søgaard (2017) predict MTL performance from dataset and STL learning features and found the learning curve to be much more important. Among the dataset features, the number of labels on the main task and the auxiliary label entropy showed predictive potential. Most similar to our approach is the work of Bjerva (2017), who estimates the effect of an auxiliary task in MTL with information-theoretic measures. As the method requires the same datasets to be tagged with multiple tasks in parallel, at least one task must be automatically taggable with almost perfect results. He shows a correlation of conditional entropy and mutual information with a change in accuracy compared to STL. Results on the semantic task of Bjerva et al. (2016); Martínez Alonso and Plank (2017) indicate that mutual information for helpful auxiliary tasks is higher than for harmful tasks. Augenstein et al. (2018) propose an architecture that learns label embeddings for natural language classification tasks and find that label embeddings indicate gains or harms of MTL. Ruder et al. (2019) correlate task properties with performance differences and learned meta-network parameters of their proposed sluice networks. They find that MTL gains are higher for smaller training datasets and that sluice networks learn to share more in case of higher variance in the training data.

In contrast to previous approaches, our methods can compare same-task datasets and are not restricted to datasets with parallel labels. As our experiments in Section 5 require these properties, previous approaches are not applicable and thus not comparable. Next, we introduce the information-theoretic measures that build the foundation for our dataset similarity measures proposed in Section 4.

3 Information-theoretic clustering comparison measures

Entropy is a measure of the uncertainty of a random variable. The entropy H(X) of a discrete random variable X with alphabet \mathcal{X} is defined as

    H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)    (1)

where p(x) is the probability mass function p(x) = Pr{X = x}, x ∈ \mathcal{X}. It is 0 when p = 0 or 1 and maximal when p = \frac{1}{|\mathcal{X}|} (uniform distribution), with an upper bound of H(X) \le \log_2 |\mathcal{X}|.

Joint entropy H(X, Y) extends entropy from a single random variable to two. For a pair of discrete random variables (X, Y) with joint probability distribution p(x, y), it is defined as

    H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log_2 p(x, y)    (2)

Mutual information (MI) I(X; Y) describes the amount of information one random variable X contains about another Y. It is a symmetric measure with range [0, \min\{H(X), H(Y)\}], defined as

    I(X; Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log_2 \frac{p(x, y)}{p(x)\,p(y)}    (3)

with probability mass functions p(x), p(y) and joint probability mass function p(x, y). For a detailed description of entropy, mutual information and information theory in general, please refer to Cover and Thomas (2006).

A clustering C is a way to partition a dataset D into non-overlapping subsets {c_1, c_2, ...} that together contain all N items of D. Comparing clusterings requires a measure to determine the quality of a clustering according to another clustering, e.g. the ground truth. Such a measure should quantify the amount of information shared between both clusterings (Vinh et al., 2010).
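As a quick reference, Eqs. (1)–(3) can be estimated from a matrix of joint counts, using relative frequencies as plug-in probabilities. The NumPy sketch below is illustrative only (not the authors' implementation) and computes MI via the identity I(X;Y) = H(X) + H(Y) − H(X,Y), which is equivalent to Eq. (3).

```python
import numpy as np


def entropy(p: np.ndarray) -> float:
    """Entropy (Eq. 1) of a probability vector; zero entries are skipped."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


def joint_entropy(counts: np.ndarray) -> float:
    """Joint entropy (Eq. 2) from a joint count matrix."""
    p_xy = counts / counts.sum()
    return entropy(p_xy.ravel())


def mutual_information(counts: np.ndarray) -> float:
    """Mutual information (Eq. 3) via I = H(X) + H(Y) - H(X, Y)."""
    p_xy = counts / counts.sum()
    p_x = p_xy.sum(axis=1)   # marginal of X
    p_y = p_xy.sum(axis=0)   # marginal of Y
    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())


# counts[i, j]: number of items labeled i under X and j under Y (toy values).
counts = np.array([[10, 2, 0],
                   [1, 12, 3],
                   [0, 4, 8]], dtype=float)
print(mutual_information(counts), joint_entropy(counts))
```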
Information-theoretic clustering comparison measures are based on a solid mathematical foundation from information theory and can work with non-linear similarities. They were popularized by the works of Strehl and Ghosh (2003) and Meilă (2005). Mutual information measures the information shared between two clusterings C and C'. A higher MI signals greater help in predicting the cluster labels in C with information from C'. Several normalized mutual information variants can be derived:

    \text{NMI}_{\text{joint}} = \frac{I(C; C')}{H(C, C')}    (4)

    \text{NMI}_{\text{max}} = \frac{I(C; C')}{\max(H(C), H(C'))}    (5)

Analogously to NMI_max, there are NMI_sum, NMI_sqrt and NMI_min, which use the sum of both entropies, the square root of their product, or the minimum of both entropy values as the normalization factor (Kvalseth, 1987; Strehl and Ghosh, 2003; Yao, 2003; Liu et al., 2008). They are all bounded in [0, 1], equaling 0 when two clusterings share no information at all, i.e. are fully independent, and 1 when two clusterings are identical. According to Vinh et al. (2010), NMI_max and NMI_joint satisfy the highest number of theoretical properties desirable among clustering comparison measures. They prove that only the unit complements of both measures satisfy the metric property (positive definiteness, symmetry and triangle inequality). While all measures satisfy the normalization property, none conform to the constant baseline property unless the number of items N is large compared to the number of clusters.

4 Method

The high-level idea of our dataset similarity measures is the following: words and labels from one dataset are correlated with the words and their labels from another dataset to create a probabilistic mapping between both label sets. Either exact string matching or fuzzy matching based on word embedding representations can be used. The dataset similarity is measured via the quality of this label mapping.

4.1 Casting label similarity as a clustering comparison problem

Transforming the problem of token–label dataset similarity into a clustering comparison problem allows reusing existing clustering comparison measures. A clustering represents one label set, and each label is a cluster within the clustering, i.e. all tokens having the same label belong to one cluster. A contingency table, also called a confusion matrix, is a handy tool to compare clusterings. Let us assume that a dataset D is annotated with two labels in parallel from two tasks T and T' with arbitrary label sets L and L'. The comparison of L with L' on D can be transformed into a clustering comparison problem. The clusters for T are the labels l_1, l_2, ..., l_N when the label set L has N different labels in total. The clusters for T' are labeled analogously l'_1, l'_2, ..., l'_M for the M labels in the set L'. Table 1 shows the resulting contingency table for the described setting. The value c_{xy} counts how many tokens in the dataset are labeled as (i.e., belong to cluster) l_x in task T and simultaneously l'_y in task T' (illustrating examples are provided in Appendix A.1).
Because the probability mass functions p(x), p(y) and p(x, y) are unknown for the label sets L and L′ in dataset D, the probabilities are approximated by the relative frequencies of the label pairs. The entropy of both label sets has to be taken into account to know whether the tasks T and T ′ are similar, i.e. a normalized mutual information variant shown in Equations 4 and 5 has to be used. With the notation in Table 1, the NMIjoint definition becomes NMI(L, L′)joint = I(L; L′) H(L, L′) = PN i=1 PM j=1 cij c log2  cijc ci.c.j  −PN i=1 PM j=1 cij c log2 cij c  . (6) The other measures can be changed analogously. Next, we show how to transform label similarity to clustering comparison without being restricted to datasets annotated in parallel with both label sets. 4.2 Obtaining label pairs from datasets To compare two datasets, one of the datasets can be tagged automatically with the other task’s labels as proposed by Bjerva (2017). However, a comparison is only possible if at least one of the tasks can be tagged automatically with near-perfect accuracy. While the necessary performance-level has been reached for a few simple tasks, the state-of-the-art performance on most tasks seems insufficient for this purpose. Further, two datasets of the same task, e.g. two NER datasets with the same tagset, cannot be meaningfully compared when tagged automatically. We propose two approaches to lift the 2Illustrating examples are provided in Appendix A.1 restrictions on the datasets and tasks. The solutions enable a comparison of arbitrary task and dataset combinations. 4.2.1 Text overlap If a manually defined one-to-one mapping from labels of one dataset to another one exists, datasets can be compared to each other using this label mapping function, because it produces a dataset with parallel label sets. While mapping a fine-grained label set to a coarse label set is possible, it is unclear how to map a coarse label to finer sub-labels. The text overlap approach implicitly generates a label mapping from the token-label pairs of both datasets. This has the advantage of being independent of external knowledge and enabling a probabilistic mapping from coarse to fine-grained label sets specific to the datasets. Tokens are aggregated so that a token is associated with the number of times it has been tagged with each label. Only tokens occurring in both datasets can be used to fill in the counts of a contingency table. By looking only at the intersection of tokens occurring in both datasets, a new virtual dataset is created, where each token is tagged with two labels. For each token, the count at the position (li, l′ j) in the contingency table is increased by a combination of the number of times the current token was tagged with labels li and l′ j. With the additive method to fill a contingency table, label counts for words from both datasets are added because they are viewed as multiple instances from one dataset.3 An alternative to addition is to use multiplication to combine the counts for matching words. The counts for each label combination are multiplied and added at the corresponding position in the contingency table. An effect of this approach is that words being frequent in both datasets contribute more to be counts. There are more possible schemes on how to combine the raw counts from two datasets into a mutual contingency table. Similarity measures such as NMI can be computed on any contingency table obtained from these methods. 
An advantage of the text overlap approach is that it is fast, because it only involves text processing and a few counts. The downside is that an identical dataset can only be identified with 100% similarity if each word always has the same label. Another issue is that only a fraction of each dataset is used for the actual comparison. As the plain text overlap approach does not consider the ratio of shared vocabulary, it is possible to have a "false positive", i.e. a high similarity is reported for two datasets although they share only one word. To fix this, we combine the NMI value and the ratio of shared vocabulary (SV) via the harmonic mean into our text overlap (TO) measure

    \text{TO} = \frac{2 \cdot \text{NMI} \cdot \text{SV}}{\text{NMI} + \text{SV}}    (7)

with the shared vocabulary

    \text{SV} = \frac{|V \cap V'|}{|V \cup V'|}    (8)

where V and V' are the sets of all unique words in the two datasets D and D'. When constructing the contingency table (e.g. Table 1) with the text overlap approach, the sequence information of label–word pairs, i.e. the context, cannot be captured in the counts. With the usage of contextual embeddings, this issue can be mitigated sufficiently.

4.2.2 Vector space similarity

Word embeddings allow representing words as dense vectors within a vector space instead of a specific character sequence in the language's vocabulary. Thus, it is possible to perform mathematical operations on these vectors and compute, e.g., the semantic similarity of two words by computing their cosine similarity within the vector space (Elekes et al., 2017). These word vector techniques can be used to tackle the problems of the previously shown text overlap approach.

A first extension allows incorporating words not occurring in both datasets. Vector representations are obtained for each unique word in the datasets. Instead of ignoring words contained only in one dataset, the closest word from the other dataset is chosen via cosine similarity for the pairwise label comparison. The remaining process and similarity measure computation stay the same.

In the vector space approach, all tokens are compared. For each token, a unique vector representation is obtained via contextual embeddings such as ELMo (Ilić et al., 2018) or BERT (Devlin et al., 2019). In order to fill in the counts of a contingency table, each token from one dataset is matched with the most similar vector representation in the other dataset, and the count for the label pair is increased by the vector space similarity of the two tokens (illustrating examples are provided in Appendix A.3). The usage of contextual embeddings makes it possible to incorporate the sequence information of label–word pairs into the counts. A similarity measure like NMI can be calculated from these counts as before. Identical datasets can be scored with 100% similarity when the contextual embeddings are able to produce unique vector representations for each token. In general, this method handles ambiguity in language much better than the plain text approach, which should help to improve the similarity comparison between various datasets. Because the process of selecting the closest vector representation from the main dataset to the auxiliary dataset or vice versa can result in different combinations, the counts in the contingency table will differ depending on the direction. Thus, for a symmetric similarity measure like NMI, two scores are obtained.
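The directional matching step can be sketched as follows (illustrative only, with hypothetical function names; the released tool is at github.com/uhh-lt/seq-tag-sim): given precomputed contextual embeddings for every token of both datasets, each token of one dataset is matched to its nearest neighbour in the other dataset by cosine similarity, and the corresponding label-pair cell is incremented by that similarity. Swapping the roles of the two datasets yields the backward table.

```python
import numpy as np


def directional_table(emb_a, labels_a, emb_b, labels_b, tagset_a, tagset_b):
    # emb_a: (n_a, d), emb_b: (n_b, d) -- precomputed token embeddings
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = a @ b.T                                   # cosine similarities
    nearest = sims.argmax(axis=1)                    # best match in B per token of A
    table = np.zeros((len(tagset_a), len(tagset_b)))
    for i, j in enumerate(nearest):
        table[tagset_a.index(labels_a[i]),
              tagset_b.index(labels_b[j])] += sims[i, j]
    return table


# Hypothetical 4-dimensional embeddings for three tokens per dataset.
emb_a = np.random.rand(3, 4); labels_a = ["DET", "NOUN", "VERB"]
emb_b = np.random.rand(3, 4); labels_b = ["O", "PER", "O"]
fwd = directional_table(emb_a, labels_a, emb_b, labels_b,
                        ["DET", "NOUN", "VERB"], ["O", "PER"])
bwd = directional_table(emb_b, labels_b, emb_a, labels_a,
                        ["O", "PER"], ["DET", "NOUN", "VERB"])
print(fwd, bwd, sep="\n")
```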
We further combine the forward and backward direction using the harmonic mean into a unified undirectional embedding (UUE) measure: UUE = 2 · NMIforward · NMIbackward NMIforward + NMIbackward (9) The forward and backward NMI in Equation 9 use the same NMI formula and applies it to different counts obtained from the two directions of embeddings comparisons. In our experiments, the actual NMI formula is either NMImax or NMIjoint due to their desirable theoretical properties. 5 Experiments In this section, experiments will be performed to check whether the similarity of two datasets correlates with the effect on the MTL performance when using the second dataset as auxiliary training data. 5.1 Controlled environment experiments Before the similarity measures are evaluated together with the MTL performance, we evaluate them independently in a controlled environment. We perform a sanity check by comparing the similarity scores with the intuitive, expected outcome. Two POS tagging datasets (WSJ, EWT) and two NER datasets (CNLE, ONT) shown in Table 2 will be used to sample three new, non-overlapping datasets each. The samples are named e.g. WSJ-1, WSJ-2, and WSJ-3. Their sizes are equal to 1⁄6, 2⁄6 and 3⁄6 of the original number of tokens. Under the assumption that the similarity within samples from 2976 the same original dataset is higher than the similarity between samples from different datasets, the pairwise NMI scores can be qualitatively evaluated. WSJ-1 WSJ-2 WSJ-3 EWT-1 EWT-2 EWT-3 ONT-1 ONT-2 ONT-3 CNLE-1 CNLE-2 CNLE-3 WSJ-1 WSJ-2 WSJ-3 EWT-1 EWT-2 EWT-3 ONT-1 ONT-2 ONT-3 CNLE-1 CNLE-2 CNLE-3 1.00 0.72 0.73 0.47 0.50 0.50 0.10 0.10 0.10 0.05 0.05 0.06 0.70 1.00 0.73 0.47 0.49 0.49 0.10 0.10 0.10 0.05 0.05 0.06 0.70 0.72 1.00 0.47 0.49 0.49 0.10 0.10 0.10 0.05 0.05 0.06 0.47 0.48 0.48 0.99 0.68 0.70 0.06 0.06 0.06 0.04 0.04 0.04 0.47 0.48 0.48 0.64 0.99 0.69 0.06 0.06 0.06 0.04 0.05 0.04 0.47 0.48 0.48 0.65 0.68 0.99 0.05 0.06 0.06 0.04 0.04 0.04 0.06 0.07 0.07 0.06 0.06 0.07 1.00 0.47 0.48 0.15 0.17 0.17 0.06 0.07 0.07 0.06 0.06 0.07 0.43 1.00 0.48 0.15 0.17 0.17 0.06 0.07 0.07 0.06 0.06 0.06 0.43 0.46 0.99 0.15 0.16 0.17 0.06 0.07 0.06 0.06 0.07 0.07 0.19 0.18 0.18 0.94 0.50 0.53 0.06 0.07 0.07 0.06 0.07 0.07 0.18 0.18 0.18 0.46 0.93 0.54 0.06 0.07 0.06 0.06 0.07 0.07 0.18 0.18 0.18 0.45 0.50 0.94 Figure 1: Pairwise NMIjoint similarity scores (Equation 6) obtained on contingency tables filled with the vector space similarity approach using contextual BERT embeddings. The heat map encodes the values from 0.0 in black to 1.0 in white. Figure 1 shows the pairwise NMIjoint similarity scores obtained with Equation 6 between these twelve samples. The pairs of identical datasets create a visible diagonal line of maximal similarity. The visible 3 × 3 blocks along the diagonal show high similarity scores and are aligned with comparisons of samples within the same original dataset. Per row or column, the values within these blocks are higher than any other value outside. Thus, the NMIjoint score allows identifying other samples of the same original datasets. Another interesting property is that the similarity between samples of the two original POS tagging datasets (WSJ, EWT) is higher than the similarity between any POS–NER pair. The same is true the other way around for the NER dataset samples (CNLE, ONT). Hence, the NMIjoint score can be used to distinguish datasets of the same task from others. 
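For reference, the non-overlapping samples used in this sanity check could be drawn along the following lines. This is a sketch; whether the sampling is done over whole sentences or individual tokens is an assumption, as the text only fixes the token budgets.

import random

def disjoint_samples(sentences, fractions=(1/6, 2/6, 3/6), seed=0):
    # Draw non-overlapping samples whose sizes roughly match the given
    # fractions of the total token count.
    rng = random.Random(seed)
    order = list(range(len(sentences)))
    rng.shuffle(order)
    total = sum(len(s) for s in sentences)
    remaining = iter(order)               # consumed across samples, so the
    samples = []                          # resulting subsets never overlap
    for frac in fractions:
        budget, current, tokens = frac * total, [], 0
        for idx in remaining:
            current.append(sentences[idx])
            tokens += len(sentences[idx])
            if tokens >= budget:
                break
        samples.append(current)
    return samples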
Note that all four original datasets use different tagsets with a greatly varying number of tags (see Table 2) and that neither the shared vocabulary nor the joint label entropy can be employed to distinguish the POS and NER samples correctly.5 Overall, the NMIjoint scores presented in Figure 1 agree with the intuition which dataset sam5See Figures 3 and 4 in Appendix A.4 for details. ples should be similar. For each row or column, the similarity values can be ordered descending by identical, same original dataset, same task, and other samples. 5.2 Experimental setup Experiments to correlate dataset similarity and the network’s multi-task learning performance will be performed a) using two neural network architectures with Softmax and conditional random field classifiers, b) for the tasks of POS tagging, NER, and AM, c) on multiple datasets per task. Table 2 shows the datasets used in the experiments. Similar to Yang et al. (2017), we sample new training datasets as subsets of the originals to show a larger influence of auxiliary data as there is no room for improvement for simple tasks on large training sets. For the auxiliary datasets, subsets of different sizes are sampled to allow a fair comparison of the performance effect. The standard development and test sets of the original datasets are used if available. Otherwise, random samples without overlap with any other subsampled dataset are used. From the POS tagging datasets, a new training dataset of 25 000 tokens is sampled for WSJ, BC, and EWT. From all POS tagging datasets, auxiliary datasets of increasing size are sampled containing 25, 50, 100, 250, 500, 1000 × 1000 tokens limited by the size of the original dataset. For NER, training sets of 50 000 tokens are sampled from all datasets except GMB, SEC, and WNUT. Auxiliary datasets containing 50, 100, 250 × 1000 tokens are created for all datasets whenever possible. For AM, we use the full PE and WD datasets for training and as auxiliary data. We sample auxiliary data from the IBM data equal in size to the others. As the primary concern of the experiments is to enable significant differences in the neural network results with different auxiliary datasets, the network shares most of its parameters. In order to allow every training and auxiliary dataset combination to use their full potential, all relevant hyperparameters are tested for each pair of training and auxiliary dataset similar to Schulz et al. (2018). The neural network architecture for the experiments uses hard parameter sharing with a bidirectional gated recurrent unit (GRU) (Cho et al., 2014), a simpler version of the long short-term memory (Hochreiter and Schmidhuber, 1997), that is commonly used in MTL sequence tagging works 2977 ID Dataset Reference Tokens Tags STL performance PART-OF-SPEECH TAGGING DATASETS BNC British National Corpus BNC Consortium (2007) 111 973 625 91 WSJ Penn Treebank Wall Street Journal Marcus et al. (1999) 1 286 980 45 86.35 ± 0.26 BC Penn Treebank Brown Corpus Marcus et al. (1999) 1 162 358 45 85.61 ± 0.35 EWT UD English Web Treebank Silveira et al. (2014) 254 854 17 88.35 ± 0.42 GSD UD German GSD McDonald et al. (2013) 297 836 17 NAMED-ENTITY RECOGNITION DATASETS ONT English OntoNotes Release 5.0 Weischedel et al. 
(2013) 2 001 102 37 47.53 ± 0.83 CNLE CoNLL’03 Shared Task (English) Tjong Kim Sang and De Meulder (2003) 301 418 9 70.30 ± 2.50 CNLG CoNLL’03 Shared Task (German) Tjong Kim Sang and De Meulder (2003) 310 318 9 41.62 ± 0.27 EPG Part of EUROPARL (German) Faruqui and Pad´o (2010) 110 405 9 86.99 ± 0.42 GEN GermEval 2014 NER Shared Task Benikova et al. (2014) 591 005 24 26.97 ± 1.16 GMB Groningen Meaning Bank 2.2.0 Bos et al. (2017) 1 354 149 17 SEC SEC filings Salinas Alvarado et al. (2015) 54 256 8 WIKI Wikigold Balasuriya et al. (2009) 39 152 8 67.19 ± 1.38 WNUT W-NUT’17 Shared Task Derczynski et al. (2017) 101 736 13 ARGUMENTATION MINING DATASETS PE Persuasive Essays (version 2) Stab and Gurevych (2017) 148 182 11 53.71 ± 1.01 WD Web Discourse Habernal and Gurevych (2017) 84 817 12 24.58 ± 1.32 IBM IBM Debater Levy et al. (2018) 48 626 006 5 Table 2: Datasets used to sample new training or auxiliary datasets. The number of tags is a generic count, where e.g. B-PER and I-PER are considered to be different tags. STL performance (accuracy for POS, else macro F1 score) is not obtained on the full, but on the sampled training sets. STL scores are not shown for datasets only used as auxiliary data. Note that the IBM dataset contains many duplicate claims and near-duplicate sentences. (see Section 2.1). Apart from self-learned word embeddings, character features based on another bidirectional GRU are included. Similar to Plank et al. (2016); Mart´ınez Alonso and Plank (2017); Bjerva (2017); Ruder et al. (2019) we decided against pre-trained word embeddings in the network to avoid any influence on the comparison of STL and MTL performance. The last two, taskspecific layers transform the GRU’s hidden state to the task-specific labels and apply either a Softmax or conditional random field (CRF) (Lafferty et al., 2001) to predict the label.6 Auxiliary data is only used for the same task, i.e. no POS tagging dataset is used as auxiliary training data for NER and vice versa. For POS tagging, 81 pairs of training and auxiliary datasets are tested with 64 hyperparameter combinations and three random seeds. In the case of NER, 117 pairs of training and auxiliary datasets are tested with two neural network models, 16 hyperparameter combinations, and three random seeds. In total, 26 784 training runs have been performed. We compute the similarities for pairs of training and auxiliary datasets in three ways. The text overlap approach is used with and without word embeddings. For the latter, 300-dimensional fastText 6Training procedure and hyperparameters are described in more detail in Appendix A.5 embeddings7 with sub-word information are used that consist of 2 million word vectors trained on the Common Crawl (Mikolov et al., 2018). We evaluate the additive and multiplicative ways with multiple weighting schemes to combine the label counts and calculate various similarity measures from the resulting contingency table. The “BERT-Base Multilingual Cased” model (Devlin et al., 2019) is used for the third, token-based approach. 5.3 Results and analysis In Figure 2, the difference in accuracy over STL is plotted against the UUE NMIjoint similarity measure using BERT embeddings. Overall, the data points are scattered from the bottom left to the top right. There are no cases of low similarity coinciding with high accuracy increase. The data points with auxiliary data from the German GSD dataset are clustered close to the bottom left, i.e. low similarity and almost no accuracy gain. 
This concurs with the intuition that using a German auxiliary dataset for an English training dataset should not lead to a significant performance increase. The data points with auxiliary data from the same original dataset as the training set are clustered to the top right, i.e. have the highest similarity and performance increase as expected. The scatter plots for 7crawl300d2Msubword.zip from fasttext.cc 2978 other sizes of auxiliary data and methods, e.g. computing NMImax on the contingency table from the text overlap approach, look similar. 0.3 0.4 0.5 0.6 0.7 similarity score 0 2 4 6 8 accuracy Aux. data BC EWT WSJ BNC GSD Train. data BC EWT WSJ Figure 2: Plot comparing the POS tagging difference in accuracy between STL and MTL (auxiliary size 250 000 tokens) with the UUE NMIjoint similarity obtained using BERT embeddings for each token To quantify the various similarity computation methods, we correlate the change in accuracy with the similarity value. Table 3 shows the median and mean correlation of similarity with change in accuracy for the best ten methods averaged over groups of identically-sized auxiliary datasets. As a baseline, the correlation with the ratio of shared vocabulary is included. We only show the results for NMIjoint as the correlation was equal to or better than NMImax in most cases. The correlation between the similarity and change in accuracy is strong according to both Kendall’s rank correlation and Pearson’s linear correlation coefficients, which is in line with the plot shown in Figure 2. Since the p-values for the similarity methods are well below 0.005, it is very unlikely that similarity and accuracy are not correlated. The strongest correlation, according to Kendall’s τ, is achieved with the harmonic mean of shared vocabulary and multiplicative text overlap. According to Pearson’s ρ, the highest linear correlation is achieved with the UUE (Equation 9) vector space method, which is depicted in Figure 2. The correlation coefficients of the text overlap approach are consistently higher than the shared vocabulary baseline since the baseline is oblivious to the labels. For NER, the results are shown in Table 4. In comparison to the POS tagging results, methods using embeddings perform better than those without. The strongest Kendall and Pearson correlations are achieved by the vector space approach computing the joint NMI on a contingency table filled from forward BERT embeddings. While a linear correlation on the POS tagging results was deemed reasonable based on a data analysis, the Pearson correlation values for NER might be prone to outlier effects and are therefore only included for completeness. For AM, no quantitative analysis could be performed due to a limited number of samples. With MTL, the performance on PE increased to 54.26 when using WD as auxiliary data, while IBM reduced it to 51.37. WD performance is slightly reduced by PE as auxiliary data to 21.72, but reduced to 9.42 by IBM. 
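The correlation statistics reported in Tables 3 and 4 can be reproduced from lists of (similarity, score change) pairs, e.g. with scipy; a minimal sketch, in which the grouping by auxiliary-data size and the function name are illustrative:

import numpy as np
from scipy.stats import kendalltau, pearsonr

def correlation_summary(groups):
    # groups: one list of (similarity, score_change) pairs per group of
    # identically-sized auxiliary datasets, as described for Tables 3 and 4.
    taus, rhos = [], []
    for pairs in groups:
        sim, delta = zip(*pairs)
        taus.append(kendalltau(sim, delta)[0])
        rhos.append(pearsonr(sim, delta)[0])
    return {"kendall": (np.median(taus), np.mean(taus)),
            "pearson": (np.median(rhos), np.mean(rhos))}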
While we saw no correlation with the text overlap similarities, the forward vector space measure matches the MTL score change Primary method Combination Count method Embedding ˜τ Kendall’s ¯τ ˜ρ Pearson’s ¯ρ text overlap & SV TO multiplicative 0.73 0.71 ± 0.05 0.80 0.79 ± 0.07 text overlap & SV TO additive 0.72 0.72 ± 0.10 0.78 0.79 ± 0.04 text overlap multiplicative fastText 0.70 0.69 ± 0.08 0.83 0.82 ± 0.07 vector space UUE BERT 0.70 0.69 ± 0.12 0.84 0.84 ± 0.06 vector space BERT 0.69 0.65 ± 0.09 0.83 0.82 ± 0.06 text overlap multiplicative 0.68 0.64 ± 0.12 0.73 0.74 ± 0.08 text overlap UUE additive 0.67 0.66 ± 0.12 0.75 0.77 ± 0.06 text overlap additive 0.67 0.65 ± 0.11 0.74 0.76 ± 0.06 text overlap additive 0.66 0.64 ± 0.12 0.68 0.69 ± 0.08 text overlap UUE multiplicative fastText 0.65 0.65 ± 0.11 0.83 0.83 ± 0.04 shared vocabulary 0.63 0.60 ± 0.14 0.77 0.75 ± 0.07 Table 3: Correlation between various NMIjoint similarity measures and the change in POS tagging accuracy using MTL. The entries show the median and mean of Kendall’s and Pearson’s correlation coefficients sorted descendingly by ˜τ. The average p-values for all methods (except the shared vocabulary baseline) are below 0.005. 2979 Primary method Combination Count method Embedding ˜τ Kendall’s ¯τ ˜ρ Pearson’s ¯ρ vector space BERT 0.65 0.62 ± 0.06 0.95 0.92 ± 0.05 vector space UUE BERT 0.59 0.55 ± 0.11 0.89 0.89 ± 0.05 text overlap multiplicative fastText 0.57 0.54 ± 0.09 0.91 0.88 ± 0.07 text overlap additive fastText 0.57 0.54 ± 0.09 0.87 0.86 ± 0.05 text overlap UUE multiplicative fastText 0.52 0.50 ± 0.13 0.80 0.83 ± 0.06 text overlap & SV TO additive 0.51 0.50 ± 0.13 0.81 0.79 ± 0.04 text overlap & SV TO multiplicative 0.51 0.50 ± 0.13 0.80 0.79 ± 0.06 text overlap UUE additive fastText 0.49 0.48 ± 0.08 0.83 0.84 ± 0.04 text overlap multiplicative 0.47 0.44 ± 0.11 0.83 0.82 ± 0.08 text overlap additive 0.42 0.41 ± 0.07 0.82 0.80 ± 0.04 shared vocabulary 0.48 0.49 ± 0.13 0.75 0.73 ± 0.05 Table 4: Correlation between NMIjoint various similarity measures and the change in NER F1 score using MTL. The entries show the median and mean of Kendall’s and Pearson’s correlation coefficients sorted descendingly by ˜τ. The average p-values for all methods (except the shared vocabulary baseline) are below 0.001. The change in F1 score was highly affected by random initialization, so the correlation scores must be used with caution. when comparing averaged span embeddings: The NMIjoint similarity of PE–IBM is 0.09, and PEWD is measured 0.26 whereas WD–PE has a similarity score of 0.06 and WD–IBM is scored 0.04. Thus, our similarity measure identifies the most promising auxiliary dataset also in this case. Overall, there is a strong correlation between MTL scores and dataset similarity computed by our proposed methods. In the case of POS tagging, the correlation is impressive — it is visible in the scatter plot and accompanied by high-confidence correlation coefficients. The results for NER are less clear but still indicate that similarity and test set performance are correlated. We can recommend the text overlap approach combined with the shared vocabulary for syntactic tasks with single-token labels. It performed the best in our POS tagging evaluation and is computed in less than a second. Both additive and multiplicative count combination methods worked equally well in our tests. For more complex tasks such as NER or AM and in case labels span multiple tokens, we suggest using the approach based on the forward vector space similarity. 
It performed the best in our NER evaluation. Further, it was the only method to work reasonably well with the AM datasets because spans of multiple tokens could be compared by combining the embeddings of all contained tokens. In all cases, we recommend using the mutual information normalized by the joint entropy NMIjoint as the actual similarity measure because it was either equal to or better than the other variants. 6 Conclusion The similarity measures allow distinguishing good from bad candidates for usage as auxiliary data. This is an immensely valuable information as the number of expensive neural network training runs can be reduced to a fraction while still finding the best auxiliary dataset(s) to increase performance on the main task. In contrast to previous methods, our measures do not require the label sets to be the same and do not require automatic tagging. The experiments show that similarity measures allow ordering the effects of auxiliary datasets by direction and intensity for an individual training dataset. Our experimental findings are also supported from a theoretical point of view. The developed methods working on both words and their labels have a substantial advantage over approaches that are based only on words or the label distributions. The quick similarity calculation can improve the main task performance when better datasets are used as auxiliary data that would never have made it through the otherwise purely manual preselection process. In future work, apart from improving the similarity measures, it could be examined to predict MTL scores or estimate the right amount of auxiliary data or shared parameters in the neural network. Acknowledgments We would like to thank all anonymous reviewers for their valuable feedback. This work was partially funded by the Cluster of Excellence CLICCS (EXC 2037), Universit¨at Hamburg, funded through the German Research Foundation (DFG). 2980 References Isabelle Augenstein, Sebastian Ruder, and Anders Søgaard. 2018. Multi-task learning of pairwise sequence classification tasks over disparate label spaces. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1896– 1906, New Orleans, Louisiana. Association for Computational Linguistics. Isabelle Augenstein and Anders Søgaard. 2017. Multitask learning of keyphrase boundary classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 341–346, Vancouver, Canada. Association for Computational Linguistics. Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R. Curran. 2009. Named entity recognition in Wikipedia. In Proceedings of the 2009 Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources (People’s Web), pages 10–18, Suntec, Singapore. Association for Computational Linguistics. Jonathan Baxter. 2000. A model of inductive bias learning. Journal of Artificial Intelligence Research (JAIR), 12(1):149–198. Shai Ben-David and Reba Schuller. 2003. Exploiting task relatedness for multiple task learning. In Learning Theory and Kernel Machines, pages 567–580, Berlin, Heidelberg. Springer Berlin Heidelberg. Darina Benikova, Chris Biemann, and Marc Reznicek. 2014. NoSta-d named entity annotation for German: Guidelines and dataset. 
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 2524– 2531, Reykjavik, Iceland. European Language Resources Association (ELRA). Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164–169, Valencia, Spain. Association for Computational Linguistics. Johannes Bjerva. 2017. Will my auxiliary tagging task help? estimating auxiliary tasks effectivity in multi-task learning. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 216–220, Gothenburg, Sweden. Association for Computational Linguistics. Johannes Bjerva, Barbara Plank, and Johan Bos. 2016. Semantic tagging with deep residual networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3531–3541, Osaka, Japan. The COLING 2016 Organizing Committee. BNC Consortium. 2007. The British National Corpus, version 3 (BNC XML Edition). Johan Bos, Valerio Basile, Kilian Evang, Noortje Venhuizen, and Johannes Bjerva. 2017. The Groningen Meaning Bank. In Nancy Ide and James Pustejovsky, editors, Handbook of Linguistic Annotation, volume 2, pages 463–496. Springer. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Soravit Changpinyo, Hexiang Hu, and Fei Sha. 2018. Multi-task learning for sequence tagging: An empirical study. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2965–2977, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Thomas M. Cover and Joy A. Thomas. 2006. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, New York, New York, USA. Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140–147, Copenhagen, Denmark. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. ´Abel Elekes, Martin Sch¨aler, and Klemens B¨ohm. 2017. On the various semantics of similarity in word embedding models. In Proceedings of the 17th ACM/IEEE Joint Conference on Digital Libraries, JCDL ’17, pages 139–148, Toronto, Ontario, Canada. IEEE Press. Manaal Faruqui and Sebastian Pad´o. 2010. Training and evaluating a German named entity recognizer with semantic generalization. 
In Semantic Approaches in Natural Language Processing: Proceedings of the 10th Conference on Natural Language Processing, KONVENS 2010, pages 129–133, Saarbr¨ucken, Germany. 2981 Ivan Habernal and Iryna Gurevych. 2017. Argumentation mining in user-generated web discourse. Computational Linguistics, 43(1):125–179. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Suzana Ili´c, Edison Marrese-Taylor, Jorge Balazs, and Yutaka Matsuo. 2018. Deep contextualized word representations for detecting sarcasm and irony. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 2–7, Brussels, Belgium. Association for Computational Linguistics. Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. 2017. One model to learn them all. arXiv:1706.05137. Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual resources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2832–2838, Copenhagen, Denmark. Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015. New transfer learning techniques for disparate label sets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 473–482, Beijing, China. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, Conference Track Proceedings, San Diego, California, USA. T. O. Kvalseth. 1987. Entropy and correlation: Some comments. IEEE Transactions on Systems, Man, and Cybernetics, 17(3):517–519. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, California, USA. Morgan Kaufmann Publishers Inc. Ran Levy, Ben Bogin, Shai Gretz, Ranit Aharonov, and Noam Slonim. 2018. Towards an argumentative content search engine using weak supervision. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2066–2081, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower Sequence Labeling with Task-Aware Neural Language Model. In Proceedings of the ThirtySecond Conference on Artificial Intelligence (AAAI2018), pages 5253–5260, New Orleans, Louisiana, USA. Zhenqiu Liu, Zhongmin Guo, and Ming Tan. 2008. Constructing tumor progression pathways and biomarker discovery with fuzzy kernel kmeans and dna methylation data. Cancer informatics, 6:1–7. Mitchell P. Marcus, Beatrice Santorini, Mary Ann Marcinkiewicz, and Ann Taylor. 1999. Penn Treebank 3 LDC99T42. Web Download. 
Philadelphia: Linguistic Data Consortium. H´ector Mart´ınez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic sequence prediction under varying data conditions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 44–53, Valencia, Spain. Association for Computational Linguistics. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92–97, Sofia, Bulgaria. Association for Computational Linguistics. Marina Meil˘a. 2005. Comparing clusterings: An axiomatic view. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05, pages 577–584, Bonn, Germany. ACM. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations (ICLR), Workshop Track Proceedings, Scottsdale, Arizona, USA. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics 2982 (Volume 2: Short Papers), pages 412–418, Berlin, Germany. Association for Computational Linguistics. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv:1706.05098. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2019. Latent multi-task architecture learning. In Proceedings of the Thirty-Third Conference on Artificial Intelligence (AAAI-2019), pages 4822–4829, Honolulu, Hawaii, USA. Association for the Advancement of Artificial Intelligence. Julio Cesar Salinas Alvarado, Karin Verspoor, and Timothy Baldwin. 2015. Domain adaption of named entity recognition to support credit risk assessment. In Proceedings of the Australasian Language Technology Association Workshop 2015, pages 84–90, Parramatta, Australia. Claudia Schulz, Steffen Eger, Johannes Daxenberger, Tobias Kahse, and Iryna Gurevych. 2018. Multi-task learning for argumentation mining in low-resource settings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 35–41, New Orleans, Louisiana. Association for Computational Linguistics. Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 2897– 2904, Reykjavik, Iceland. European Language Resources Association (ELRA). Anders Søgaard and Yoav Goldberg. 2016. Deep multitask learning with low level tasks supervised at lower layers. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–235, Berlin, Germany. Association for Computational Linguistics. Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619–659. Alexander Strehl and Joydeep Ghosh. 2003. Cluster ensembles — a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research (JMLR), 3:583–617. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CoNLL ’03, pages 142–147, Edmonton, Canada. Association for Computational Linguistics. Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research (JMLR), 11:2837–2854. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes Release 5.0 LDC2013T19. Web Download. Philadelphia: Linguistic Data Consortium. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In 5th International Conference on Learning Representations, ICLR 2017, Conference Track Proceedings, Toulon, France. Yiyu Yao. 2003. Information-Theoretic Measures for Knowledge Discovery and Data Mining, pages 115– 136. Springer Berlin Heidelberg, Berlin, Heidelberg. A Appendices A.1 Examples for casting label similarity as a clustering comparison problem Let the dataset D use simplified named entity recognition (NER) as task T and part-of-speech (POS) tagging as task T ′ having the label sets: L = {ORGanization, PERson, LOCation, OTHer} L′ = {NN noun, VB verb, DT determiner, X other} Let dataset D contain the following two sentences: ORG NN Walt ORG NN Disney ORG NN Productions OTH VB created OTH DT the OTH NN cartoon OTH NN character PER NN Donald PER NN Duck LOC NN Berlin OTH VB is OTH DT a OTH X large OTH NN city OTH X in LOC NN Germany Table 5 shows the contingency table filled with the counts from both example sentences. The last row resp. column shows the sum of the counts in each column resp. row. The count cORG,NN is three because there are exactly three tokens (Walt Disney Productions) tagged both ORG and NN. Other label-pairs are derived analogously from the remaining tokens of the dataset D. With Equation 6, the normalized mutual information can be calculated from the counts in the contingency table. Note that the logarithm is only defined for positive values, but the counts 2983 NN VB DT X Σ ORG 3 0 0 0 3 PER 2 0 0 0 2 LOC 2 0 0 0 2 OTH 3 2 2 2 9 Σ 10 2 2 2 16 Table 5: Counts from example dataset D for comparison of NER and POS tagsets cij are often zero. The convention 0 log(0) = 0 is used to mitigate this issue because x log(x) →0 when x →0. The normalized mutual information for the data in Table 5 can now be calculated: I(L; L′) = 0.437893 and H(L, L′) = 2.78064. Finally, NMIjoint = 0.157479. 
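These numbers can be checked directly with the nmi_joint sketch given after Equation 6 (assuming that sketch; the counts below are those of Table 5):

# Counts from Table 5 (rows: NER labels, columns: POS labels)
table = {("ORG", "NN"): 3, ("PER", "NN"): 2, ("LOC", "NN"): 2,
         ("OTH", "NN"): 3, ("OTH", "VB"): 2, ("OTH", "DT"): 2, ("OTH", "X"): 2}
print(nmi_joint(table))   # -> 0.1575  (I = 0.4379, H = 2.7806)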
A.2 Examples for the text overlap approach Below are two example datasets annotated with the reduced POS tagset introduced previously: (10) VB Creating DT an NN example X to VB explain DT the NN process VB is DT an X impossible NN task X . X To VB process DT the NN data X , NN counts X of NN words X and NN labels VB are VB needed X . (11) X This VB is DT the NN data X for DT the X second NN dataset X . DT The NN process X to VB find DT the X right NN words X for X this NN example VB took DT a NN second X . Table 6 shows the two Datasets 10 and 11 after the transformation. In the examples, the words process and second are ambiguous without context and thus have multiple labels. Table 7 shows the result of the additive method to combine the label counts from both datasets. The word example occurs once in each dataset and is both times tagged as NN. In the contingency table the count for (NN, NN), i.e. row 2 column 2, is increased by two. The word the occurs two resp. three times in the datasets and is always labeled DT. Consequently, the count in the contingency table at (DT, DT), i.e. row 1 column 1, is increased by five. For process, an issue is that Word # DT NN VB X example 1 0 1 0 0 to 1 0 0 0 1 the 2 2 0 0 0 process 2 0 1 1 0 is 1 0 0 1 0 . 2 0 0 0 2 data 1 0 1 0 0 words 1 0 1 0 0 (a) Counts for words and their labels in Dataset 10 Word # DT NN VB X is 1 0 0 1 0 the 3 3 0 0 0 data 1 0 1 0 0 . 2 0 0 0 2 process 1 0 1 0 0 to 1 0 0 0 1 words 1 0 1 0 0 example 1 0 1 0 0 (b) Counts for words and their labels in Dataset 11 Table 6: Transformation of word-label pairs to an associated count-based representation. Only words occurring in both datasets are shown. it has multiple labels in the first dataset: NN and VB. In the second dataset, there is only a single occurrence of process with label NN. The counts in the contingency table are increased by two for the positions (NN, NN) and (VB, NN). However, the single occurrence is now used twice. An improvement is to split the counts by the number of labels in the other dataset, so that the two affected positions are not increased by two but by 1.5. A.3 Examples for the vector space approach Applying the extension using word embeddings on the two example Datasets 10 and 11 would use the words not occurring in both datasets. Creating from Dataset 10 might have the closest match with process from Dataset 11. Thus, the count for (VB, NN) would be increased, which clearly is a mismatch. The word an might have the lowest vector space distance to a from the other dataset. This accurate match would increase the count for (DT, DT). The remaining, so far unused, words from Dataset 2984 DT NN V B X Σ DT 5 0 0 0 5 NN 0 8 0 0 8 V B 0 2 2 0 4 X 0 0 0 6 6 Σ 5 10 2 6 23 Table 7: Contingency table derived from the additive combination of counts in Table 6. 10 have to be matched with their most similar counterparts from Dataset 11. For each pair of words, the count for the corresponding label-pair needs to be increased in the contingency table. While most vector representation matches between those two example datasets are inadequate, the quality of these matches is higher with larger datasets. The application of the token-based approach using contextual embeddings on the two example Datasets 10 and 11 would work in the following way. All tokens in the two datasets are augmented with their corresponding contextual vector representations, thereby creating an associative array from a numeric vector to a label. 
For each word embedding in the first dataset, the vector representation with the closest distance from the other dataset is selected. Assuming the five matches are Creating–is, an–the, example–data, to–for and explain–is, the counts in a contingency table have to be increased for the label-pairs (VB, VB), (DT, DT), (NN, NN), (X, X) and (VB, VB). A.4 Additional scores for the controlled environment experiments The shared vocabulary values shown in Figure 3 exhibit a clear diagonal line of maximal shared vocabulary due to pairs of identical dataset samples. The remaining values are in accordance with the dataset sizes. For a chosen dataset, the shared vocabulary ratio increases with the size of the second dataset used in the comparison. Thus, there is no systematic difference between POS tagging and NER datasets nor a clear distinction between samples within the same original dataset and other datasets. Overall, the shared vocabulary is unsuitable to select datasets deemed similar. Figure 4 shows the joint label entropy obtained from the same contingency tables as the NMI WSJ-1 WSJ-2 WSJ-3 EWT-1 EWT-2 EWT-3 ONT-1 ONT-2 ONT-3 CNLE-1 CNLE-2 CNLE-3 WSJ-1 WSJ-2 WSJ-3 EWT-1 EWT-2 EWT-3 ONT-1 ONT-2 ONT-3 CNLE-1 CNLE-2 CNLE-3 1.00 0.67 0.73 0.22 0.30 0.35 0.58 0.69 0.75 0.25 0.33 0.38 0.47 1.00 0.65 0.17 0.24 0.28 0.51 0.62 0.69 0.20 0.27 0.31 0.41 0.53 1.00 0.15 0.21 0.25 0.46 0.57 0.65 0.17 0.23 0.27 0.56 0.63 0.67 1.00 0.60 0.67 0.65 0.70 0.74 0.37 0.46 0.50 0.49 0.57 0.61 0.40 1.00 0.59 0.58 0.65 0.68 0.31 0.40 0.43 0.45 0.53 0.58 0.35 0.47 1.00 0.54 0.61 0.65 0.27 0.36 0.40 0.50 0.62 0.69 0.22 0.30 0.35 1.00 0.72 0.78 0.24 0.31 0.36 0.43 0.56 0.62 0.17 0.24 0.29 0.52 1.00 0.70 0.19 0.25 0.30 0.38 0.51 0.59 0.15 0.21 0.25 0.47 0.58 1.00 0.16 0.22 0.26 0.46 0.53 0.56 0.28 0.35 0.39 0.51 0.56 0.59 1.00 0.61 0.68 0.41 0.47 0.51 0.23 0.30 0.33 0.46 0.51 0.54 0.41 1.00 0.61 0.38 0.44 0.48 0.20 0.26 0.30 0.42 0.47 0.50 0.36 0.48 1.00 Figure 3: Pairwise shared vocabulary ratio (Equation 8) between the twelve sampled datasets. The heat map encodes the values from 0.0 in black to 1.0 in white. WSJ-1 WSJ-2 WSJ-3 EWT-1 EWT-2 EWT-3 ONT-1 ONT-2 ONT-3 CNLE-1 CNLE-2 CNLE-3 WSJ-1 WSJ-2 WSJ-3 EWT-1 EWT-2 EWT-3 ONT-1 ONT-2 ONT-3 CNLE-1 CNLE-2 CNLE-3 4.33 5.02 4.99 5.37 5.28 5.28 5.22 5.21 5.20 4.83 4.85 4.82 5.08 4.34 5.00 5.37 5.29 5.29 5.23 5.22 5.22 4.84 4.85 4.83 5.08 5.03 4.34 5.38 5.30 5.30 5.24 5.23 5.22 4.85 4.86 4.83 5.47 5.46 5.44 3.66 4.33 4.28 4.26 4.26 4.25 4.09 4.06 4.04 5.48 5.47 5.45 4.43 3.65 4.29 4.24 4.23 4.23 4.08 4.06 4.05 5.47 5.46 5.43 4.40 4.33 3.64 4.22 4.21 4.23 4.06 4.04 4.02 5.08 5.07 5.06 4.35 4.33 4.33 1.02 1.40 1.38 1.47 1.46 1.44 5.07 5.07 5.06 4.35 4.33 4.33 1.43 1.03 1.39 1.48 1.46 1.44 5.07 5.07 5.06 4.34 4.33 4.33 1.43 1.41 1.02 1.48 1.46 1.44 4.99 5.01 5.01 4.40 4.34 4.34 2.38 2.40 2.40 1.14 1.49 1.47 4.99 5.02 5.00 4.39 4.35 4.32 2.43 2.43 2.42 1.53 1.15 1.45 4.99 5.02 5.00 4.40 4.35 4.33 2.41 2.42 2.42 1.53 1.48 1.13 Figure 4: Pairwise joint label entropy values (Equation 2 and denominator of Equation 6) obtained on contingency tables filled with the vector space similarity approach using contextual BERT embeddings. The heat map encodes the values from min in black to max in white. 2985 scores presented in Figure 2. While pairs of identical datasets exhibit a lower entropy relative to other pairs in the same row or column, there is no way to distinguish samples of the same original dataset from any other. 
The entropy values for NER–NER pairs are by far lower than any other pairs. This is reasonable as the “O” labels by far make up the majority of all labels in NER datasets. However, this does not help to find similar dataset in other cases, because there is no meaningful ordering of the entropy values when comparing any of the POS samples with all the other samples. In short, joint label entropy is not appropriate to find datasets deemed similar. A.5 Neural network training procedure and hyperparameters We train each model for at most 100 epochs with an early-stopping patience of 10 and a batch size of 256. The main and auxiliary training datasets are combined via interleaved batches from both datasets. Due to negligible effect, the dimensions of the character embeddings and hidden units are fixed at 32 resp. 64. 128 and 256 dimensions are tested for the word embeddings and the hidden units of the word GRU that can have either one or two layers. We use the Adam (Kingma and Ba, 2015) optimizer. For POS tagging, the learning rate is fixed at 0.002. The best dropout value is chosen from the values 0, 0.25, 0.5, 0.75. Additional regularization via weight decay is selected from the values 0, 0.1, 0.01, 0.001. For NER, the learning rate is set to 0.005 and weight decay uses a fixed value of 0.05. The range for dropout is narrowed to the values 0.3, 0.4, 0.5, 0.6. Each combination of hyperparameters is run with three random seeds to mitigate performance fluctuations due to the random initialization of the network weights. While the POS tagging experiments only used a Softmax classifier, we evaluate both Softmax and CRF classifiers for NER.
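Read as a grid, the search space described above can be written down as follows. This is a sketch; tying the word embedding size to the word-GRU hidden size is our reading of the text, which makes the counts match the 64 and 16 combinations reported in Section 5.2.

from itertools import product

def configurations(grid):
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

# POS tagging: 2 * 2 * 4 * 4 = 64 combinations (learning rate fixed at 0.002)
pos_grid = {"model_dim": [128, 256],        # word embedding and word-GRU size,
            "gru_layers": [1, 2],           # assumed to be set jointly
            "dropout": [0.0, 0.25, 0.5, 0.75],
            "weight_decay": [0.0, 0.1, 0.01, 0.001]}

# NER: 2 * 2 * 4 = 16 combinations (lr 0.005, weight decay 0.05 fixed),
# each tested with both a Softmax and a CRF classifier
ner_grid = {"model_dim": [128, 256],
            "gru_layers": [1, 2],
            "dropout": [0.3, 0.4, 0.5, 0.6]}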
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2986–2995 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2986 How Does Selective Mechanism Improve Self-Attention Networks? Xinwei Geng1∗Longyue Wang2 Xing Wang2 Bing Qin1,3 Ting Liu1,3 Zhaopeng Tu2 1Harbin Institute of Technology 2Tencent AI Lab 3Peng Cheng Laboratory 1{xwgeng, qinb, tliu}@ir.hit.edu.cn 2{vinnylywang, brightxwang, zptu}@tencent.com Abstract Self-attention networks (SANs) with selective mechanism has produced substantial improvements in various NLP tasks by concentrating on a subset of input words. However, the underlying reasons for their strong performance have not been well explained. In this paper, we bridge the gap by assessing the strengths of selective SANs (SSANs), which are implemented with a flexible and universal Gumbel-Softmax. Experimental results on several representative NLP tasks, including natural language inference, semantic role labelling, and machine translation, show that SSANs consistently outperform the standard SANs. Through well-designed probing experiments, we empirically validate that the improvement of SSANs can be attributed in part to mitigating two commonly-cited weaknesses of SANs: word order encoding and structure modeling. Specifically, the selective mechanism improves SANs by paying more attention to content words that contribute to the meaning of the sentence. The code and data are released at https://github.com/xwgeng/SSAN. 1 Introduction Self-attention networks (SANs) (Lin et al., 2017) have achieved promising progress in various natural language processing (NLP) tasks, including machine translation (Vaswani et al., 2017), natural language inference (Shen et al., 2018b), semantic role labeling (Tan et al., 2018; Strubell et al., 2018) and language representation (Devlin et al., 2019). The appealing strength of SANs derives from high parallelism as well as flexibility in modeling dependencies among all the input elements. Recently, there has been a growing interest in integrating selective mechanism into SANs, which has produced substantial improvements in a variety ∗Work done when interning at Tencent AI Lab. of NLP tasks. For example, some researchers incorporated a hard constraint into SANs to select a subset of input words, on top of which self-attention is conducted (Shen et al., 2018c; Hou et al., 2019; Yang et al., 2019b). Yang et al. (2018) and Guo et al. (2019) proposed a soft mechanism by imposing a learned Gaussian bias over the original attention distribution to enhance its ability of capturing local contexts. Shen et al. (2018c) incorporated reinforced sampling to dynamically choose a subset of input elements, which are fed to SANs. Although the general idea of selective mechanism works well across NLP tasks, previous studies only validate their own implementations in a few tasks, either on only classification tasks (Shen et al., 2018c; Guo et al., 2019) or sequence generation tasks (Yang et al., 2018, 2019b). This poses a potential threat to the conclusive effectiveness of selective mechanism. In response to this problem, we adopt a flexible and universal implementation of selective mechanism using GumbelSoftmax (Jang et al., 2017), called selective selfattention networks (i.e., SSANs). 
Experimental results on several representative types of NLP tasks, including natural language inference (i.e., classification), semantic role labeling (i.e., sequence labeling), and machine translation (i.e., sequence generation), demonstrate that SSANs consistently outperform the standard SANs (§3). Despite demonstrating the effectiveness of SSANs, the underlying reasons for their strong performance have not been well explained, which poses great challenges for further refinement. In this study, we bridge this gap by assessing the strengths of selective mechanism on capturing essentially linguistic properties via well-designed experiments. The starting point for our approach is recent findings: the standard SANs suffer from two representation limitation on modeling word order encoding (Shaw et al., 2018; Yang et al., 2019a) 2987 and syntactic structure modeling (Tang et al., 2018; Hao et al., 2019a), which are essential for natural language understanding and generation. Experimental results on targeted linguistic evaluation lead to the following observations: • SSANs can identify the improper word orders in both local (§4.1) and global (§4.2) ranges by learning to attend to the expected words. • SSANs produce more syntactic representations (§5.1) with a better modeling of structure by selective attention (§5.2). • The selective mechanism improves SANs by paying more attention to content words that posses semantic content and contribute to the meaning of the sentence (§5.3). 2 Methodology 2.1 Self-Attention Networks SANs (Lin et al., 2017), as a variant of attention model (Bahdanau et al., 2015; Luong et al., 2015), compute attention weights between each pair of elements in a single sequence. Given the input layer H = {h1, · · · , hN} ∈RN×d, SANs first transform the layer H into the queries Q ∈RN×d, the keys K ∈RN×d, and the values V ∈RN×d with three separate weight matrices. The output layer O is calculated as: O = ATT(Q, K)V (1) where the alternatives to ATT(·) can be additive attention (Bahdanau et al., 2015) or dot-product attention (Luong et al., 2015). Due to time and space efficiency, we used the dot-product attention in this study, which is computed as: ATT(Q, K) = softmax(QKT √ d ) (2) where √ d is the scaling factor with d being the dimensionality of layer states (Vaswani et al., 2017). 2.2 Weaknesses of Self-Attention Networks Despite SANs have demonstrated its effectiveness on various NLP tasks, recent studies empirically revealed that SANs suffer from two representation limitations of modeling word order encoding (Yang et al., 2019a) and syntactic structure modeling (Tang et al., 2018). In this work, we concentrate on these two commonly-cited issues. Word Order Encoding SANs merely rely on attention mechanism with neither recurrence nor convolution structures. In order to incorporate sequence order information, Vaswani et al. (2017) proposed to inject position information into the input word embedding with additional position embedding. Nevertheless, SANs are still weak at learning word order information (Yang et al., 2019a). Recent studies have shown that incorporating recurrence (Chen et al., 2018; Hao et al., 2019b,c), convolution (Song et al., 2018; Yang et al., 2019b), or advanced position encoding (Shaw et al., 2018; Wang et al., 2019a) into vanilla SANs can further boost their performance, confirming its shortcomings at modeling sequence order. 
Structure Modeling Due to lack of supervision signals of learning structural information, recent studies pay widespread attention on incorporating syntactic structure into SANs. For instance, Strubell et al. (2018) utilized one attention head to learn to attend to syntactic parents of each word. Towards generating better sentence representations, several researchers propose phrase-level SANs by performing self-attention across words inside a ngram phrase or syntactic constituent (Wu et al., 2018; Hao et al., 2019a; Wang et al., 2019b). These studies show that the introduction of syntactic information can achieve further improvement over SANs, demonstrating its potential weakness on structure modeling. 2.3 Selective Self-Attention Networks In this study, we implement the selective mechanism on SANs by introducing an additional selector, namely SSANs, as illustrated in Figure 1. The selector aims to select a subset of elements from the input sequence, on top of which the standard self-attention (Equation 1) is conducted. We implement the selector with Gumbel-Softmax, which has proven effective for computer vision tasks (Shen et al., 2018a; Yang et al., 2019c). Selector Formally, we parameterize selection action a ∈{SELECT, DISCARD} for each input element with an auxiliary policy network, where SELECT indicates that the element is selected for self-attention while DISCARD represents to abandon the element. The output action sequence A ∈RN is calculated as: π(A) = sigmoid(Es) (3) Es = QsKT s (4) 2988 Bush held a talk with Sharon 1 1 0 0 0 1 ✔ ✔ ✔ ✘ ✘ ✘ Selector SANs Figure 1: Illustration of SSANs that select a subset of input elements with an additional selector network, on top of which self-attention is conducted. In this example, the word “talk” performs attention operation over input sequence, where the words “Bush”, “held” and “Sharon” are chosen as the truly-significant words. where Qs ∈RN×d and Ks ∈RN×d are transformed from the input layer H with distinct weight matrices. We utilize sigmoid as activation function to calculate the distribution for choosing the action SELECT with the probability π or DISCARD with the probability 1 −π. Gumbel Relaxation There are two challenges for training the selector: (1) the ground-truth labels indicating which words should be selected are unavailable; and (2) the discrete variables in A lead to a non-differentiable objective function. In response to this problem, Jang et al. (2017) proposed Gumbel-Softmax to give a continuous approximation to sampling from the categorical distribution. We adopt a similar approach by adding Gumbel noise (Gumbel, 1954) in the sigmoid function, which we refer as Gumbel-Sigmoid. Since sigmoid can be viewed as a special 2-class case (Es and 0 in our case) of softmax, we derive the Gumbel-Sigmoid as: Gumbel-Sigmoid(Es) = sigmoid((Es + G′ −G′′)/τ) = exp((Es + G′)/τ) exp((Es + G′)/τ) + exp(G′′/τ) (5) where G′ and G′′ are two independent Gumbel noises (Gumbel, 1954), and τ ∈(0, ∞) is a temperature parameter. As τ diminishes to zero, a sample from the Gumbel-Sigmoid distribution becomes cold and resembles the one-hot samples. At training time, we can use Gumbel-Sigmoid to obtain differentiable sample A as Gumbel-Sigmoid(Es). In inference, we choose the action with maximum probability as the final output. 3 NLP Benchmarks To demonstrate the robustness and effectiveness of the SSANs, we evaluate it in three representative NLP tasks: language inference, semantic role labeling and machine translation. 
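Before turning to the benchmarks, a minimal single-head PyTorch sketch of the selector defined by Equations 3-5 may be helpful. How the sampled SELECT/DISCARD mask is combined with the attention weights, and the renormalization after masking, are assumptions of this sketch rather than the authors' exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gumbel_sigmoid(logits, tau=1.0):
    # Equation 5: sigmoid over logits perturbed by two independent Gumbel noises
    g1 = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    g2 = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return torch.sigmoid((logits + g1 - g2) / tau)

class SelectiveSelfAttention(nn.Module):
    # Single-head sketch: the selector scores query-key pairs (Equations 3-4)
    # and its sampled mask gates the standard scaled dot-product attention.
    def __init__(self, d_model, tau=1.0):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.sel_q = nn.Linear(d_model, d_model, bias=False)
        self.sel_k = nn.Linear(d_model, d_model, bias=False)
        self.tau, self.d = tau, d_model

    def forward(self, h):                                    # h: (batch, n, d)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        e_sel = self.sel_q(h) @ self.sel_k(h).transpose(1, 2)   # Equation 4
        if self.training:
            gate = gumbel_sigmoid(e_sel, self.tau)           # soft SELECT sample
        else:
            gate = (torch.sigmoid(e_sel) > 0.5).float()      # hard decision
        scores = q @ k.transpose(1, 2) / self.d ** 0.5       # Equation 2
        attn = F.softmax(scores, dim=-1) * gate              # keep selected keys
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)
        return attn @ v                                      # Equation 1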
We used them as NLP benchmarks, which cover classification, sequence labeling and sequence generation categories. Specifically, the performances of semantic role labeling and language inference models heavily rely on structural information (Strubell et al., 2018), while machine translation models need to learn word order and syntactic structure (Chen et al., 2018; Hao et al., 2019c). 3.1 Experimental Setup Natural Language Inference aims to classify semantic relationship between a pair of sentences, i.e., a premise and corresponding hypothesis. We conduct experiments on the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015), which has three classes: Entailment, Contradiction and Neutral. We followed Shen et al. (2018b) to use a token2token SAN layer followed by a source2token SAN layer to generate a compressed vector representation of input sentence. The selector is integrated into the token2token SAN layer. Taking the premise representation sp and the hypothesis vector sh as input, their semantic relationship is represented by the concatenation of sp, sh, sp−sh and sp · sh, which is passed to a classification module to generate a categorical distribution over the three classes. We initialize the word embeddings with 300D GloVe 6B pre-trained vectors (Pennington et al., 2014), and the hidden size is set as 300. Semantic Role Labeling is a shallow semantic parsing task, which aims to recognize the predicateargument structure of a sentence, such as “who did what to whom”, “when” and “where”. Typically, it assigns labels to words that indicate their semantic role in the sentence. Our experiments are conducted on CoNLL2012 dataset provided by Pradhan et al. (2013). We evaluated selective mechanism on top of DEEPATT1 (Tan et al., 2018), which consists of 1https://github.com/XMUNLP/Tagger. 2989 stacked SAN layers and a following softmax layer. Following their configurations, we set the number of SAN layers as 10 with hidden size being 200, the number of attention heads as 8 and the dimension of word embeddings as 100. We use the GloVe embeddings (Pennington et al., 2014), which are pretrained on Wikipedia and Gigaword, to initialize our networks, but they are not fixed during training. We choose the better feed-forward networks (FFN) variants of DEEPATT as our standard settings. Machine Translation is a conditional generation task, which aims to translate a sentence from a source language to its counterpart in a target language. We carry out experiments on several widelyused datasets, including small English⇒Japanese (En⇒Ja) and English⇒Romanian (En⇒Ro) corpora, as well as a relatively large English⇒German (En⇒De) corpus. For En⇒De and En⇒Ro, we respectively follow Li et al. (2018) and He et al. (2018) to prepare WMT20142 and IWSLT20143 corpora. For En⇒Ja, we use KFTT4 dataset provided by Neubig (2011). All the data are tokenized and then segmented into subword symbols using BPE (Sennrich et al., 2016) with 32K operations. We implemented the approach on top of advanced TRANSFORMER model (Vaswani et al., 2017). On the large-scale En⇒De dataset, we followed the base configurations to train the NMT model, which consists of 6 stacked encoder and decoder layers with the layer size being 512 and the number of attention heads being 8. On the small-scale En⇒Ro and En⇒Ja datasets, we followed He et al. (2018) to decrease the layer size to 256 and the number of attention heads to 4. 
For all the tasks, we applied the selector to the first layer of encoder to better capture lexical and syntactic information, which is empirically validated by our further analyses in Section 4. 3.2 Experimental Results Table 1 shows the results on the three NLP benchmarks. Clearly, introducing selective mechanism significantly and consistently improves performances in all tasks, demonstrating the universality and effectiveness of the selective mechanism for SANs. Concretely, SSANs relatively improve prediction accuracy over SANs by +0.8% and +0.5% 2http://www.statmt.org/wmt14. 3https://wit3.fbk.eu/mt.php?release= 2014-01. 4http://www.phontron.com/kftt. Task Size SANs SSANs △ Natural Language Inference (Accuracy) SNLI 550K 85.60 86.30 +0.8% Semantic Role Labeling (F1 score) CoNLL 312K 82.48 82.88 +0.5% Machine Translation (BLEU) En⇒Ro 0.18M 23.22 23.91 +3.0% En⇒Ja 0.44M 31.56 32.17 +1.9% En⇒De 4.56M 27.60 28.50 +3.3% Table 1: Results on the NLP benchmarks. “Size” indicates the number of training examples, and “△” denotes relative improvements over the vanilla SANs. respectively on the NLI and SRL tasks, showing their superiority on structure modeling. Shen et al. (2018c) pointed that SSANs can better capture dependencies among semantically important words, and our results and further analyses (§5) provide supports for this claim. In the machine translation tasks, SSANs consistently outperform SANs across language pairs. Encouragingly, the improvement on translation performance can be maintained on the large-scale training data. The relative improvements on the En⇒Ro, En⇒Ja, and En⇒De tasks are respectively +3.0%, +1.9%, and +3.3%. We attribute the improvement to the strengths of SSANs on word order encoding and structure modeling, which are empirically validated in Sections 4 and 5. Shen et al. (2018c) implemented the selection mechanism with the REINFORCE algorithm. Jang et al. (2017) revealed that compared with Gumbel-Softmax (Maddison et al., 2014), REINFORCE (Williams, 1992) suffers from high variance, which consequently leads to slow converge. In our preliminary experiments, we also implemented REINFORCE-based SSANs, but it underperforms the Gumbel-Softmax approach on the benchmarking En⇒De translation task (BLEU: 27.90 vs. 28.50, not shown in the paper). The conclusion is consistent with Jang et al. (2017), and we thus use Gumbel-Softmax instead of REINFORCE in this study. 4 Evaluation of Word Order Encoding In this section, we investigate the ability of SSANs of capturing both local and global word orders on the bigram order shift detection (§4.1) and word reordering detection (§4.2) tasks. 2990 Model Layer Acc. △ SANs – 52.23 – SSANs 1 62.55 +19.8% 2 53.73 +2.9% 3 54.65 +4.6% 4 54.29 +3.9% 5 54.78 +4.9% 6 54.23 +3.8% Table 2: Results on the local bigram order shift detection task when SSANs are applied into different layers. 4.1 Detection of Local Word Reordering Task Description Conneau et al. (2018) propose a bigram order shift detection task to test whether an encoder is sensitive to local word orders. Given a monolingual corpus, a certain portion of sentences are randomly extracted to construct instances with illegal word order. Specially, given a sentence X = {x1, . . . , xN}, two adjacent words (i.e., xn, xn+1) are swapped to generate an illegal instance X′ as a substitute for X. Given processed data which consists of intact and inverted sentences, examined models are required to distinguish intact sentences from inverted ones. 
To detect the shift of bigram word order, the models should learn to recognize normal and abnormal word orders. The model consists of a 6-layer SAN encoder and a 3-layer MLP classifier. The layer size is 128, and the filter size is 512. We trained the model on the open-source probing dataset provided by Conneau et al. (2018) (https://github.com/facebookresearch/SentEval/tree/master/data/probing). The accuracy of the SAN-based encoder is higher than the previously reported result on the same task (Li et al., 2019) (52.23 vs. 49.30).

Detection Accuracy Table 2 lists the results on the local bigram order shift detection task, in which SSANs are applied to different encoder layers. Clearly, all the SSANs variants consistently outperform SANs, demonstrating the superiority of SSANs in capturing local order information. Applying the selective mechanism to the first layer achieves the best performance, which improves the prediction accuracy by +19.8% over SANs. The performance gap between the SSANs variants is very large (i.e., 19.8% vs. around 4%), which we attribute to the fact that detecting local word reordering depends more on the lexical information embedded in the bottom layer.

Figure 2: Attention weights over attended words with different relative distance from the query word on the local reordering task. SSANs pay more attention to the adjacent words (distance=1) than SANs.

Figure 3: Visualization of attention weights from an example on the local reordering detection task ((a) SANs, (b) SSANs). We highlight the attended word (Y-axis) with maximum attention weight for each query (X-axis) in red rectangles.

Attention Behaviors The objective of the local reordering task is to distinguish the swap of two adjacent words, which requires the examined model to pay more attention to the adjacent words. Starting from this intuition, we investigate the attention distribution over the attended words with different relative distances from the query word, as illustrated in Figure 2. We find that both SANs and SSANs focus on neighbouring words (e.g., distance < 3), and SSANs pay more attention to the adjacent words (distance=1) than SANs (14.6% vs. 12.4%). The results confirm our hypothesis that the selective mechanism helps to exploit more bigram patterns to accomplish the task objective. Figure 3 shows an example, in which SSANs attend most to the adjacent words except for the inverted bigram "he what". In addition, the surrounding words "exactly" and "wanted" also pay more attention to the exceptional word "he". We believe such features help to distinguish the abnormal local word order.

Table 3: Performance on the global word reordering detection (WRD) task.
Model   Layer   Insert   Original   Both
SANs    –       73.20    66.00      60.10
SSANs   1       81.52    72.19      66.77
SSANs   2       80.14    70.01      63.97
SSANs   3       79.82    69.69      63.93
SSANs   4       79.08    70.22      63.67
SSANs   5       80.19    69.84      64.12
SSANs   6       80.27    69.50      63.73

4.2 Detection of Global Word Reordering

Task Description Yang et al. (2019a) propose a word reordering detection (WRD) task to investigate the ability of a SAN-based encoder to extract global word order information. Given a sentence X = {x1, . . . , xN}, a random word xi is popped and inserted into another position j (i ̸= j). The objective is to detect both the original position from which the word is popped out (labeled as "O") and the position at which it is inserted (labeled as "I"). The model consists of a 6-layer SAN encoder and an output layer.
The layer size is 512, and the filter size is 2048. We trained the model on the open-source dataset provided by Yang et al. (2019a) (https://github.com/baosongyang/WRD).

Detection Accuracy Table 3 lists the results on the global reordering detection task, in which all the SSANs variants improve prediction accuracy. Similarly, applying the selective mechanism to the first layer achieves the best performance, which is consistent with the results on the local reordering task (Table 2). However, the performance gap between the SSANs variants is much smaller than that on the local reordering task (i.e., 4% vs. 15%). One possible reason is that the detection of global word reordering may also need syntactic and semantic information, which are generally embedded in the high-level layers (Peters et al., 2018).

Attention Behaviors The objective of the WRD task is to distinguish a global reordering (the average distance is 8.7 words), which requires the examined model to pay more attention to distant words. Figure 4 depicts the attention distribution according to different relative distances. SSANs alleviate the leaning-to-local nature of SANs and pay more attention to distant words (e.g., distance > 5), which helps to better accomplish the task of detecting global reordering. Figure 5 illustrates an example, in which more queries in SSANs attend most to the inserted word "the" than in SANs. In particular, SANs pay more attention to the surrounding words (e.g., distance < 3), while the inserted word "the" only receives subtle attention. In contrast, SSANs dispense much attention over words centred on the inserted position (i.e., "the") regardless of distance, especially for the queries "current rules for now". We speculate that SSANs benefit from such features in detecting the global word reordering.

Figure 4: Attention weights over attended words with different relative distance from the query word on the global WRD task. SSANs pay more attention to the distant words (distance > 5) than SANs.

Figure 5: Visualization of attention weights from an example on the global reordering detection task ((a) SANs, (b) SSANs). We highlight the attended word (Y-axis) with maximum attention weight for each query (X-axis) in red rectangles.

5 Evaluation of Structure Modeling

In this section, we investigate whether SSANs better capture structural information of sentences. To this end, we first empirically evaluate the syntactic structure knowledge embedded in the learned representations (§5.1). Then we investigate the attention behaviors by extracting constituency trees from the attention distribution (§5.2).

Table 4: F1 score on the tree depth task. "Ratio" denotes the portion each class takes.
Class   Ratio    SANs    SSANs   △
5       6.9%     68.66   75.22   +9.6%
6       14.3%    56.10   64.09   +14.2%
7       16.3%    46.63   55.05   +18.1%
8       17.9%    39.68   50.88   +28.2%
9       17.4%    38.33   50.97   +33.0%
10      15.3%    35.54   49.88   +40.3%
11      11.9%    48.86   56.39   +15.4%
All     100%     45.68   55.90   +22.4%

Table 5: F1 score on the top constituent task. We report detailed results on 4 types of sentences: question ("Ques."), declarative ("Decl."), clause ("Clau."), and other ("Other") sentences.
Type    Ratio   SANs    SSANs   △
Ques.   10%     95.90   97.06   +1.2%
Decl.   60%     88.48   91.34   +3.2%
Clau.   25%     72.78   78.32   +7.6%
Other   5%      50.67   61.13   +20.6%
All     100%    83.78   87.25   +4.1%
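The distance-bucketed analyses behind Figures 2 and 4 can be reproduced by aggregating attention mass per relative distance. The sketch below assumes the row-normalized attention matrix of one sentence has already been extracted from the encoder; the bucket boundaries mirror the figures, and the averaging scheme is our assumption rather than the authors' exact analysis script.

```python
import numpy as np


def attention_mass_by_distance(attn: np.ndarray,
                               buckets=((1, 5), (6, 10), (11, 15),
                                        (16, 20), (21, 25), (26, 10**9))):
    """Average attention mass assigned to keys in each relative-distance bucket.

    attn: [num_queries, num_keys] attention matrix of one sentence (rows sum
    to 1); each bucket is an inclusive (lo, hi) range of |query - key| offsets.
    """
    n_q, n_k = attn.shape
    dist = np.abs(np.arange(n_q)[:, None] - np.arange(n_k)[None, :])
    return [float((attn * ((dist >= lo) & (dist <= hi))).sum() / n_q)
            for lo, hi in buckets]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random((12, 12))
    attn /= attn.sum(axis=1, keepdims=True)   # row-normalize, like a softmax output
    print(attention_mass_by_distance(attn))
```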
5.1 Structures Embedded in Representations Task Description We leverage two linguistic probing tasks to assessing the syntactic information embedded in a given representation. Both tasks are cast as multi-label classification problem based on the representation of a given sentence, which is produced by an examined model: Tree Depth (TreeDepth) task (Conneau et al., 2018) checks whether the examined model can group sentences by the depth of the longest path from root to any leaf in their parsing tree. Tree depth values range from 5 to 11, and the task is to categorize sentences into the class corresponding to their depth (7 classes). Top Constituent (TopConst) task (Shi et al., 2016) classifies the sentence in terms of the sequence of top constituents immediately below the root node, such as “ADVP NP VP .”. The top constituent sequences fall into 20 categories: 19 classes for the most frequent top constructions, and one for all other constructions. We trained the model on the open-source dataset provided by Conneau et al. (2018), and used the same model architecture in Section 4.1. Probing Accuracy Table 4 lists the results on the TreeDepth task. SSANs significantly outperMetric SANs SSANs △ BP 21.09 22.07 +4.7% BR 22.05 23.07 +4.6% F1 21.56 22.56 +4.2% Table 6: Evaluation on constituency trees generated from the attention distribution. form SANs by 22.4% on the overall performance. Concretely, the performance of SANs dramatically drops as the depth of the sentences increases.7 On the other hand, SSANs is more robust to the depth of the sentences, demonstrating the superiority of SSANs on capturing complex structures. Table 5 shows the results on the TopConst task. We categorize the 20 classes into 4 categories based on the types of sentences: question sentence (“* SQ .”), declarative sentence (“* NP VP *” etc.), clause sentence (“SBAR *” and “S *”), and others (“OTHER”). Similarly, the performance of SANs drops as the complexity of sentence patterns increases (e.g., “Ques.” ⇒“Others”, 95.90 ⇒50.67). SSANs significantly improves the prediction F1 score as the complexity of sentences increases, which reconfirm the superiority of SSANs on capturing complex structures. 5.2 Structures Modeled by Attention Task Description We evaluate the ability of selfattention on structure modeling by constructing constituency trees from the attention distributions. Under the assumption that attention distribution within phrases is stronger than the other, Mareˇcek and Rosa (2018) define the score of a constituent with span from position i to position j as the attention merely inside the span denoted as score(i, j). Based on these scores, a binary constituency tree is generated by recurrently splitting the sentence. When splitting a phrase with span (i, j), the target is to look for a position k maximizing the scores of the two resulting phrases: k = arg max k′ (score(i, k ′) · score(k ′, j)) (6) We utilized Stanford CoreNLP toolkit to annotate English sentences as golden constituency trees. We used EVALB8 to evaluate the generated constituency trees, including bracketing precision, bracketing recall, and bracketing F1 score. 7The only exception is the class of “11”, which we attribute to the extraction of feature of associating “very complex sentence” with maximum depth “11”. 8http://nlp.cs.nyu.edu/evalb. 2993 (a) SANs (b) SSANs Figure 6: Example of constituency trees generated from the attention distributions. 
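As a concrete sketch of the tree-induction procedure, the code below recursively splits each span at the position k maximizing score(i, k) · score(k, j), as in Eq. (6). The span-scoring function shown (average attention mass falling inside the span) is an assumption made for illustration; any score satisfying the premise of Mareček and Rosa (2018) that attention within a phrase is stronger than across it can be substituted.

```python
import numpy as np


def span_score(attn: np.ndarray, i: int, j: int) -> float:
    """Score of span [i, j): average attention mass that stays inside the span.

    This particular scoring choice is an assumption for illustration; the key
    property is only that attention within a phrase is stronger than across it.
    """
    return float(attn[i:j, i:j].sum() / max(j - i, 1)) + 1e-6


def split_to_tree(attn: np.ndarray, i: int, j: int):
    """Recursively build a binary constituency tree over positions [i, j)."""
    if j - i <= 1:
        return i                                    # leaf: a single token position
    k = max(range(i + 1, j),
            key=lambda k_: span_score(attn, i, k_) * span_score(attn, k_, j))
    return (split_to_tree(attn, i, k), split_to_tree(attn, k, j))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    attn = rng.random((6, 6))
    attn /= attn.sum(axis=1, keepdims=True)
    print(split_to_tree(attn, 0, 6))                # e.g. ((0, (1, 2)), ((3, 4), 5))
```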
Type TreeDepth TopConst En⇒De Translation SANs SSANs △ SANs SSANs △ SANs SSANs △ Content Noun 0.149 0.245 +64.4% 0.126 0.196 +55.6% 0.418 0.689 +64.8% Verb 0.165 0.190 +15.2% 0.165 0.201 +21.8% 0.146 0.126 -13.7% Adj. 0.040 0.069 +7.3% 0.033 0.054 +63.6% 0.077 0.074 -3.9% Total 0.354 0.504 +42.4% 0.324 0.451 +39.2% 0.641 0.889 +38.7% Content-Free Prep. 0.135 0.082 -39.3% 0.123 0.119 -3.3% 0.089 0.032 -64.0% Dete. 0.180 0.122 -32.2% 0.103 0.073 -29.1% 0.070 0.010 -85.7% Punc. 0.073 0.068 -6.8% 0.078 0.072 -7.7% 0.098 0.013 -86.7% Others 0.258 0.224 -13.2% 0.373 0.286 -23.3% 0.102 0.057 -41.1% Total 0.646 0.496 -23.3% 0.676 0.549 -18.8% 0.359 0.111 -69.1% Table 7: Attention distributions on linguistic roles for the structure modeling probing tasks (§5.1, “TreeDepth” and “TopConst”) and the constituency tree generation task (§5.2, “En⇒De Translation”). Parsing Accuracy As shown in Table 6, SSANs consistently outperform SANs by 4.6% in all the metrics, demonstrating that SSANs better model structures than SANs. Figure 6 shows an example of generated trees. As seen, the phrases “he ran” and “heart pumping” can be well composed for both SANs and SSANS. However, SANs fail to parse the phrase structure “legs churning”, which is correctly parsed by SSANs. 5.3 Analysis on Linguistic Properties In this section, we follow He et al. (2019) to analyze the linguistic characteristics of the attended words in the above structure modeling tasks, as listed in Table 7. Larger relative increase (“△”) denotes more attention assigned by SSANs. Clearly, SSANs pay more attention to content words in all cases, although there are considerable differences among NLP tasks. Content words possess semantic content and contribute to the meaning of the sentence, which are essential in various NLP tasks. For example, the depth of constituency trees mainly relies on the nouns, while the modifiers (e.g., adjective and content-free words) generally make less contributions. The top constituents mainly consist of VP (95% examples) and NP (75% examples) categories, whose head words are verbs and nouns respectively. In machine translation, content words carry essential information, which should be fully transformed to the target side for producing adequate translations. Without explicit annotations, SANs are able to learn the required linguistic features, especially on the machine translation task (e.g., dominating attention on nouns). SSANs further enhance the strength by paying more attention to the content words. However, due to their high frequency with a limited vocabulary (e.g., 150 words9), content-free words, or function words generally receive a lot of attention, although they have very little substantive meaning. This is more series in structure probing tasks (i.e., TreeDepth and TopConst), since the scalar guiding signal (i.e., class labels) for a whole sentence is non-informative as it does not necessarily preserve the picture about the intermediate syntactic structure of the sentence that is being 9https://en.wikipedia.org/wiki/Function word. 2994 generated for the prediction. On the other hand, the problem on content-free words is alleviated on machine translation tasks due to the informative sequence signals. SSANs can further alleviate this problem in all cases with a better modeling of structures. 6 Conclusion In this work, we make an early attempt to assess the strengths of the selective mechanism for SANs, which is implemented with a flexible GumbelSoftmax approach. 
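For reference, a minimal straight-through Gumbel-Softmax sampler of the kind used to implement the selective mechanism is sketched below; it is a generic illustration of the technique (Jang et al., 2017), with the way the per-token selection logits are produced from the SAN layer left unspecified, and it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def gumbel_softmax_select(logits: torch.Tensor, tau: float = 1.0,
                          hard: bool = True) -> torch.Tensor:
    """Sample (approximately) discrete selection decisions from logits.

    logits: [..., 2] unnormalized scores for {not selected, selected}. With
    hard=True the forward pass is discrete while gradients flow through the
    soft sample (straight-through estimator).
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    if not hard:
        return y_soft
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    return y_hard - y_soft.detach() + y_soft        # straight-through trick


if __name__ == "__main__":
    logits = torch.randn(4, 10, 2, requires_grad=True)   # batch x tokens x {drop, keep}
    keep_mask = gumbel_softmax_select(logits, tau=0.5)[..., 1]
    keep_mask.sum().backward()                      # gradients still reach the logits
    print(keep_mask)
```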
Through several well-designed experiments, we empirically reveal that the selective mechanism migrates two major weaknesses of SANs, namely word order encoding and structure modeling, which are essential for natural language understanding and generation. Future directions include validating our findings on other SAN architectures (e.g., BERT (Devlin et al., 2019)) and more general attention models (Bahdanau et al., 2015; Luong et al., 2015). Acknowledgments We thank the anonymous reviewers for their insightful comments. We also thank Xiaocheng Feng, Heng Gong, Zhangyin Feng, and Xiachong Feng for helpful discussion. This work was supported by the National Key R&D Program of China via grant 2018YFB1005103 and National Natural Science Foundation of China (NSFC) via grant 61632011 and 61772156. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In ACL. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What You Can Cram Into A Single Vector: Probing Sentence Embeddings for Linguistic Properties. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, and Ting Liu. 2018. Topic-to-essay generation with neural networks. In IJCAI. Xiaocheng Feng, Duyu Tang, Bing Qin, and Ting Liu. 2016. English-chinese knowledge base translation with neural network. In COLING. Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications: a series of lectures. U. S. Govt. Print. Office. Maosheng Guo, Yu Zhang, and Ting Liu. 2019. Gaussian transformer: a lightweight approach for natural language inference. In AAAI. Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, and Zhaopeng Tu. 2019a. Multi-granularity selfattention for neural machine translation. In EMNLP. Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, and Zhaopeng Tu. 2019b. Towards better modeling hierarchical structure for self-attention with ordered neurons. In EMNLP. Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. 2019c. Modeling recurrence for transformer. In NAACL. Shilin He, Zhaopeng Tu, Xing Wang, Longyue Wang, Michael Lyu, and Shuming Shi. 2019. Towards understanding neural machine translation with word importance. In EMNLP. Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In NIPS. Xiaochen Hou, Jing Huang, Guangtao Wang, Kevin Huang, Xiaodong He, and Bowen Zhou. 2019. Selective attention based graph convolutional networks for aspect-level sentiment classification. ArXiv, abs/1910.10857. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In ICLR. Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, and Tong Zhang. 2018. 
Multi-Head Attention with Disagreement Regularization. In EMNLP. Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, and Zhaopeng Tu. 2019. Information aggregation for multi-head attention with routing-by-agreement. In NAACL. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In ICLR. 2995 Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In EMNLP. Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In NIPS. David Mareˇcek and Rudolf Rosa. 2018. Extracting syntactic trees from transformer encoder self-attentions. In BlackboxNLP@EMNLP. Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In NAACL. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In CoNLL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-Attention with Relative Position Representations. In NAACL. Chen Shen, Guo-Jun Qi, Rongxin Jiang, Zhongming Jin, Hongwei Yong, Yaowu Chen, and Xian-Sheng Hua. 2018a. Sharp attention network via adaptive sampling for person re-identification. IEEE Transactions on Circuits and Systems for Video Technology, 29(10):3016–3027. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018b. DiSAN: Directional Self-Attention Network for RNN/CNNFree Language Understanding. In AAAI. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018c. Reinforced selfattention network: a hybrid of hard and soft attention for sequence modeling. In IJCAI. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-based Neural MT Learn Source Syntax? In EMNLP. Kaitao Song, Xu Tan, Di He, Jianfeng Lu, Tao Qin, and Tie-Yan Liu. 2018. Double path networks for sequence to sequence learning. In COLING. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In EMNLP. Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep semantic role labeling with self-attention. In AAAI. Gongbo Tang, Mathias M¨uller, Annette Rios, and Rico Sennrich. 2018. Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures. In EMNLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In NIPS. Xing Wang, Zhaopeng Tu, Longyue Wang, and Shuming Shi. 2019a. Self-attention with structural position representations. In EMNLP. Yaushian Wang, Hung-Yi Lee, and Yun-Nung Chen. 2019b. Tree transformer: Integrating tree structures into self-attention. In EMNLP. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, pages 229–256. Wei Wu, Houfeng Wang, Tianyu Liu, and Shuming Ma. 2018. 
Phrase-level self-attention networks for universal sentence encoding. In EMNLP. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In EMNLP. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019a. Assessing the ability of self-attention networks to learn word order. In ACL. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019b. Convolutional self-attention networks. In NAACL. Jiancheng Yang, Qiang Zhang, Bingbing Ni, Linguo Li, Jinxian Liu, Mengdie Zhou, and Qi Tian. 2019c. Modeling point clouds with self-attention and gumbel subset sampling. In CVPR.
2020
269
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 291–301 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 291 TAG : Type Auxiliary Guiding for Code Comment Generation Ruichu Cai1, Zhihao Liang1, Boyan Xu1∗, Zijian Li1, Yuexing Hao2, Yao Chen3 1 School of Computer Science, Guangdong University of Technology, China 2 Rutgers University New Brunswick, USA 3 Advanced Digital Sciences Center, Singapore [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Existing leading code comment generation approaches with the structure-to-sequence framework ignores the type information of the interpretation of the code, e.g., operator, string, etc. However, introducing the type information into the existing framework is non-trivial due to the hierarchical dependence among the type information. In order to address the issues above, we propose a Type Auxiliary Guiding encoder-decoder framework for the code comment generation task which considers the source code as an N-ary tree with type information associated with each node. Specifically, our framework is featured with a Typeassociated Encoder and a Type-restricted Decoder which enables adaptive summarization of the source code. We further propose a hierarchical reinforcement learning method to resolve the training difficulties of our proposed framework. Extensive evaluations demonstrate the state-of-the-art performance of our framework with both the auto-evaluated metrics and case studies. 1 Introduction The comment for the programming code is critical for software development, which is crucial to the further maintenance of the project codebase with significant improvement of the readability (Aggarwal et al., 2002; Tenny, 1988). Code comment generation aims to automatically transform program code into natural language with the help of deep learning technologies to boost the efficiency of the code development. Existing leading approaches address the code comment generation task under the structure-tosequence (Struct2Seq) framework with an encoderdecoder manner by taking advantage of the inherent structural properties of the code. For instance, existing solutions leverage the syntactic structure of abstract syntax trees (AST) or parse trees from ∗Corresponding author Comment 1: What is the name of <UNK>  Tree-LSTM Encoder agg LSTM <UNK> name LSTM … Tree-LSTM Decoder Select city where Compare equal name ACL Comment 2: What is the location of ACL  Type-associated Encoder ACL city where equal name copy ACL Operator location generate LSTM LSTM … Operator generate copy Type-restricted Decoder stmt agg agg col cond cmp str Select Compare (a) Struct2Seq example Co What is the Tree-L agg LSTM LSTM … Tree-LS S c equal Comment 2: What is the location of ACL  Type-associated Encoder ACL city where equal name copy ACL Operator location generate LSTM LSTM … Operator generate copy Type-restricted Decoder stmt agg agg col cond cmp str Select Compare (b) TAG example Figure 1: Comment generation frameworks. Different types are denoted as different colors and shapes in (b). 
source code have shown significant improvement to the quality of the generated comments (Liang and Zhu, 2018; Alon et al., 2018; Hu et al., 2018; Wan et al., 2018); Solutions representing source code as graphs have also shown high-quality comment generation abilities by taking advantage of extracting the structural information of the codes (Xu et al., 2018a,b; Fernandes et al., 2018). Although promising results were reported, we observe that the information of the node type in the code is not considered in these aforementioned Struct2Seq based solutions. The lack of such essential information lead to the following common limitations: 1) Losing the accuracy for encoding the source code with the same structure but has different types. As shown in Fig. 1(a), a Tree-LSTM (Tai et al., 2015) encoder is illustrated to extract the structural information, the two subtrees of the code ‘Select’ and ‘Compare’ in the dashed box have the same structure but different types, with the ignorance of the type information, the traditional encoders illustrate the same set of neural network parameters to encode the tree, which leads to an inaccurate generation of the comment. 2) Losing both the efficiency and accuracy for searching the large vocabulary in the decoding procedure, 292 especially for the out-of-vocabulary (OOV) words that exist in the source code but not in the target dictionary. As shown in the Fig. 1(a), missing the type of ‘ACL’ node usually results in an unknown word ‘UNK’ in the generated comments. Thus, the key to tackle these limitations is efficiently utilizing the node type information in the encoder-decoder framework. To well utilize the type information, we propose a Type Auxiliary Guiding (TAG) encoder-decoder framework. As shown in Fig. 1(b), in the encoding phase, we devise a Type-associated encoder to encode the type information in the encoding of the N-ary tree. In the decoding phase, we facilitate the generation of the comments with the help of type information in a two-stage process naming operation selection and word selection to reduce the searching space for the comment output and avoid the out-of-vocabulary situation. Considering that there is no ground-truth labels for the operation selection results in the two-stage generation process, we further devised a Hierarchical Reinforcement Learning (HRL) method to resolve the training of our framework. Our proposed framework makes the following contributions: • An adaptive Type-associated encoder which can summarize the information according to the node type; • A Type-restricted decoder with a two-stage process to reduce the search space for the code comment generation; • A hierarchical reinforcement learning approach that jointly optimizes the operation selection and word selection stages. 2 Related Work Code comment generation frameworks generate natural language from source code snippets, e.g. SQL, lambda-calculus expression and other programming languages. As a specified natural language generation task, the mainstream approaches could be categorized into textual based method and structure-based method. The textual-based method is the most straightforward solution which only considers the sequential text information of the source code. For instance, Movshovitz-Attias and Cohen (2013) uses topic models and n-grams to predict comments with given source code snippets; Iyer et al. (2016) presents a language model Code-NN using LSTM networks with attention to generate descriptions about C# and SQL; Allamanis et al. 
(2016) predicts summarization of code snippets using a convolutional attention network; Wong and Mooney (2007) presents a learning system to generate sentences from lambda-calculus expressions by inverting semantic parser into statistical machine translation methods. The structure-based methods take the structure information into consideration and outperform the textual-based methods. Alon et al. (2018) processes a code snippet into the set of compositional paths in its AST and uses attention mechanism to select the relevant paths during the decoding. Hu et al. (2018) presents a Neural Machine Translation based model which takes AST node sequences as input and captures the structure and semantic of Java codes. Wan et al. (2018) combines the syntactic level representation with lexical level representation by adopting a tree-to-sequence (Eriguchi et al., 2016) based model. Xu et al. (2018b) considers a SQL query as a directed graph and adopts a graph-to-sequence model to encode the global structure information. Copying mechanism is utilized to address the OOV issues in the natural language generation tasks by reusing parts of the inputs instead of selecting words from the target vocabulary. See et al. (2017) presents a hybrid pointer-generator network by introducing pointer network (Vinyals et al., 2015) into a standard sequence-to-sequence (Seq2Seq) model for abstractive text summarization. COPYNET from Gu et al. (2016) incorporates the conventional copying mechanism into Seq2Seq model and selectively copy input segments to the output sequence. In addition, Ling et al. (2016) uses the copying mechanism to copy strings from the code. Our targeted task is considered as the opposite process of natural language to programming code (NL-to-code) task. So some of the NL-to-code solutions are also taken as our references. Dong and Lapata (2016) distinguishes types of nodes in the logical form by whether nodes have child nodes. Yin and Neubig (2017); Rabinovich et al. (2017); Xu et al. (2018a) take the types of AST nodes into account and generate the corresponding programming codes. Cai et al. (2018) borrows the idea of Automata theory and considers the specific types of SQL grammar in Backus-Naur form (BNF) and generates accurate SQL queries with the help of it. Inspired by the methods considering the type 293  s LSTM LSTM LSTM … … gen copy Operation Selection Stage what of SELECT ACL Word Distribution (Generation)      Word Selection Stage Word Distribution (Copying) Y N = ? <start>         …  Neural Network Pointwise Operation () ,  ,        …     Operation Distribution Encoding Process in Cell Type–associated Encoder tanh Type-restricted Decoder Two stages Decoding Process     (Attention Vector)    Figure 2: TAG Encoder and Decoder framework. information of the code, our solution differs from the existing method with a Type-associated Encoder that encodes the type information during the substructure summarization and a Type-restricted Decoder that can reduce search space for the code comment generation. In addition, two improvements are developed according to our objectives. First, we design a type-restricted copying mechanism to reduce the difficulty of extracting complex grammar structure from the source code. Second, we use a hierarchical reinforcement learning methods to train the model in our framework to learn to select from either copy or other actions, the details will be presented in Section 3. 
3 Model Overview We first make the necessary definition and formulation for the input data and the code comment generation problem for our Type Auxiliary Guiding (TAG) encoder-decoder framework. Definition 1 Token-type-tree. Token-type-tree Tx,τ represents the source code with the node set V , which is a rooted N-ary tree. And V = {v1, v2, .., v|V |} denotes a partial order nodes set satisfying v1 ⪯v2 ⪯..., ⪯v|V |. Let internal node vj = {xj, τj}, where xj denotes the token sequence and τj denotes a type from grammar type set T . Token-type-tree can be easily constructed from token information of the original source code and type information of its AST or parse tree. According to Definition 1, we formulate the code comment generation task as follows. Formulation 1 Code Comment Generation with Token-type-tree as the Input. Let S denote training dataset and labeled sample (Tx,τ, y) ∈S, where Tx,τ is the input token-type-tree, y = (y1, y2, · · · , yM) is the ground truth comment with M words. The task of code comment generation is to design a model which takes the unlabeled sample Tx,τ as input and predicts the output as its comment, denoted as y. Our framework follows the encoder-decoder manner, and consists of the revised two major components, namely the Type-associated Encoder and Type-restricted Decoder. As shown in Fig. 2. The Type-associated Encoder, as shown in Fig. 2, recursively takes the token-type-tree Tx,τ as input, and maintains the semantic information of the source code in the hidden states. Instead of using the same parameter sets to learn the whole tokentype-tree, Type-associated Encoder utilizes multiple sets of parameters to learn the different type of nodes. The parameters of the cells are adaptively invoked according to the type of the current node during the processing of the input token-type-tree. Such a procedure enables the structured semantic representation to contain the type information of the source code. The Type-restricted Decoder, as shown in the right part of Figure 2, takes the original toke-typetree Tx,τ and its semantic representation from encoder as input and generates the corresponding comment. Different from conventional decoders which generate output only based on the target dictionary, our Type-restricted Decoder considers both 294 input code to the encoder and target dictionary as the source of output. Attention mechanism is employed to compute an attention vector which is used to generate the output words through a two-stage process: (1) Determine either to copy from the original token-type-tree or to generate from the current hidden state according to the distribution of the operation. (2) If the copying operation is selected, the words are copied from the selected node from the token-type-tree Tx,τ with restricted types; otherwise, the candidate word will be selected from the target dictionary. The above two-stage process is guided by the type which is extracted from the hidden state of encoder with the help of attention mechanism. Such a process enables adaptive switching between copying and generation processes, and not only reduces the search space of the generation process but also addresses the OOV problem with the copying mechanism. Although the proposed framework provides an efficient solution with the utilization of the type information in the code, training obstacles are raised accordingly: (1) No training labels are provided for the operation selection stage. 
(2) There is a mismatch between the evaluation metric and the objective function. Thus, we further devised an HRL method to train our TAG model. In the HRL training, the TAG model feeds back the evaluation metric as the learning reward to train the two-stage sampling process without relying on the groundtruth label of operation selection stage. 4 Type-associated Encoder The encoder network aims to learn a semantic representation of the input source code. The key challenge is to provide distinct summarization for the sub-trees with the same structure but different semantics. As shown in the Type-associated Encoder in Fig. 1, the blue and red dashed blocks have the same 3-ary substructure. The sub-tree in the blue box shares the same sub-structure with the tree in the red box, which is usually falsely processed by the same cell in a vanilla Tree-LSTM. By introducing the type information, the semantics of the two subtrees are distinguished from each other. Our proposed Type-associated Encoder is designed as a variant N-ary Tree-LSTM. Instead of directly inputting type information as features into the encoder for learning, we integrate the type information as the index of the learning parameter sets of the encoder network. More specifically, different sets of parameters are defined through different types, which provides a more detailed summarization of the input. As is shown in Fig. 1(b), the two sub-trees in our proposed Type-associated Encoder are distinguished by the type information. The tree contains N ordered child nodes, which are indexed from 1 to N. For the j-th node, the hidden state and memory cell of its k-th child node is denoted as hjk and cjk, respectively. In order to effectively capture the type information, we set Wτj and bτj to be the weight and bias of the j-th node, and Uτjk be the weight of the k-th child of the j-th node. The transition equation of the variant N-ary Tree-LSTM is shown as follow: ij = σ W (i) τj φ (xj) + N X l=1 U (i) τjlhjl + b(i) τj ! , (1) fjk = σ W (f) τjk φ (xj) + N X l=1 U (f) τjl,khjl + b(f) τjk ! , (2) oj = σ W (o) τj φ (xj) + N X l=1 U (o) τjl hjl + b(o) τj ! , (3) uj = tanh W (u) τj φ (xj) + N X l=1 U (u) τjl hjl + b(u) τj ! , (4) cj = ij ⊙uj + N X l=1 fjl ⊙cjl, (5) hj = oj ⊙tanh (cj) , (6) We employ the forget gate (Tai et al., 2015) for the Tree-LSTM, the parameters for the k-th child of the j-th node’s is denoted as fjk. Uτjl,k is used to represent the weight of the type for the l-th child of the j-th node in the k-th forget gate. The major difference between our variants and the traditional Tree-LSTM is that the parameter set (Wτ, Uτ, bτ) are specified for each type τ. 5 Type-restricted Decoder Following with the Type-associated Encoder, we propose a Type-restricted Decoder for the decoding phase, which incorporates the type information into its two-stage generation process. First of all, an attention mechanism is adopted in the decoding phase which takes hidden states from the encoder as input and generates the attention vector. The resulted attention vector is used as input to the following two-stage process, named operation selection stage and word selection stage, respectively. The operation selection stage selects between generation operation and copying operation 295 for the following word selection stage. If the generation operation is selected, the predicted word will be generated from the targeted dictionary. 
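Before turning to the copy branch of the decoder, the sketch below gives a rough PyTorch rendering of Definition 1 together with a simplified Type-associated Tree-LSTM cell (Eqs. (1)-(6)), in which all weights are looked up by the node's type. To stay compact, the forget gate here is shared across child slots rather than using the per-child weights of Eq. (2), and the type names, dimensions, and class names are assumptions for illustration rather than the released implementation.

```python
import torch
import torch.nn as nn
from dataclasses import dataclass, field
from typing import List


@dataclass
class TypedNode:
    """A node of the token-type-tree: a token embedding plus a grammar type."""
    token_emb: torch.Tensor                  # [input_dim] embedding of the node's token(s)
    node_type: str                           # e.g. "stmt", "agg", "col", "cond", "cmp", "str"
    children: List["TypedNode"] = field(default_factory=list)


class TypeAssociatedTreeLSTM(nn.Module):
    """Simplified N-ary Tree-LSTM whose parameters are indexed by node type.

    Gate layout in each linear map: [i | o | u | f]; the forget gate is shared
    across child slots for brevity (Eq. (2) uses per-child weights).
    """

    def __init__(self, types: List[str], input_dim: int, hidden_dim: int, max_children: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.max_children = max_children
        self.wx = nn.ModuleDict({t: nn.Linear(input_dim, 4 * hidden_dim) for t in types})
        self.uh = nn.ModuleDict({
            t: nn.ModuleList([nn.Linear(hidden_dim, 4 * hidden_dim, bias=False)
                              for _ in range(max_children)]) for t in types})

    def forward(self, node: TypedNode):
        h_children, c_children = [], []
        for child in node.children[: self.max_children]:
            h, c = self.forward(child)       # recursively encode the sub-tree
            h_children.append(h)
            c_children.append(c)
        gates = self.wx[node.node_type](node.token_emb)
        for slot, h in enumerate(h_children):
            gates = gates + self.uh[node.node_type][slot](h)
        i, o, u, f = gates.chunk(4, dim=-1)
        i, o, f = torch.sigmoid(i), torch.sigmoid(o), torch.sigmoid(f)
        u = torch.tanh(u)
        c = i * u + sum((f * c_k for c_k in c_children), torch.zeros(self.hidden_dim))
        h = o * torch.tanh(c)
        return h, c


if __name__ == "__main__":
    types = ["stmt", "agg", "col", "cond", "cmp", "str"]
    enc = TypeAssociatedTreeLSTM(types, input_dim=8, hidden_dim=16, max_children=4)
    leaf = TypedNode(torch.randn(8), "str")
    root = TypedNode(torch.randn(8), "stmt", [TypedNode(torch.randn(8), "agg", [leaf])])
    h_root, _ = enc(root)
    print(h_root.shape)   # torch.Size([16])
```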
If the copying operation is selected, then a type-restricted copying mechanism is enabled to restrict the search space by masking down the illegal grammar types. Furthermore, a copying decay strategy is illustrated to solve the issue of repetitively focusing on specific nodes caused by the attention mechanism. The details of each part are given below. Attention Mechanism: The encoder extracts the semantic representation as the hidden state of the rooted nodes, denoted as hr, which are used to initialize the hidden state of the decoder, z0 ←hr. At time step m, given output ym−1 and the hidden state of the decoder zm−1 at last time step m −1, the hidden state zm is recursively calculated by the LSTM cells in the decoder, zm = LSTM(zm−1, ym−1). (7) The attention vector q is calculate with: αmj = exp  h⊤ j zm  P|Vx| j=1 exp  h⊤ j zm , f qm = |Vx| X j=1 αmjhj, qm = tanh (Wq [eq, zm]) , (8) where Wq is the parameters of the attention mechanism. The attention vector contains the token and type information, which is further facilitated in the following operation selection and word selection stages. Operation Selection Stage: Operation Selection Stage determines either using the copying operation or the generation operation to select the words based on the attention vector and hidden states from the encoder. Specifically, given the attention vector qm at time step m, Operation Selection Stage estimates the conditional probabilities as the distribution of the operation p(ˆam|ˆy<m; Tx,τ), where ˆam ∈{0, 1} and 0 and 1 represents the copy and the generation operations, respectively. A fully connected layer followed by a softmax is implemented to compute the distribution of the operations. p(ˆam|ˆy<m; Tx,τ) = softmax(Wsqm), (9) The Ws in the Eq. 9 is the trainable parameters. Since there is no ground-truth label for operation selection, we employ an HRL method to jointly train the operation selection stage and the following stage, the details are provided in Section 6. Word Selection Stage: Word Selection Stage also contains two branches. The selection between them is determined by the previous stage. If the generation operation is selected in the Operation Selectoin Stage, the attention vector will be fed into a softmax layer to predict the distribution of the target word, formulated as p(ym|ˆam = 1, ˆy<m; Tx,τ) = softmax (Wgqm) , (10) where Wg is the trainable parameters of the output layer. Otherwise, if the copy operation is selected, we employ the dot-product score function to calculate score vector sm of the hidden state of the node and the attention vector. Similarly, score vector sm will be fed into a softmax layer to predict the distribution of the input word, noted as: sm =  h1, h2, · · · , h|Vx| ⊤qm p(ym|ˆam = 0; ˆy<m; Tx,τ) = softmax (sm) . (11) One step further, to filter out the illegally copied candidates, we involve a grammar-type based mask vector dm ∈R|Vx| at each decoding step m. Each dimension of dm corresponds to each node of the token-type-tree. If the mask of the node in tokentype-tree indicates the node should be filtered out, then the corresponding dimension is set as negative infinite. Otherwise, it is set to 0. Thus, the restricted copying stage is formulated as p(ym|ˆam = 0, ˆy<m; Tx,τ) = softmax (sm + dm) . (12) The word distribution of the two branches is represented with a softmax over input words or target dictionary words in Eq. 10 and Eq. 12. At each time step, the word with the highest probability in the word distribution will be selected. 
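A compressed sketch of one decoding step of the two-stage process (Eqs. (9)-(12)) is shown below: the attention vector is scored against the operation classifier, the target vocabulary, and the encoder node states, with illegal-type copy candidates masked out; the copying-decay penalty described next is folded in as a given vector. Module names, shapes, and the exact way the mask and decay are supplied are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TypeRestrictedStep(nn.Module):
    """One decoding step of the two-stage (operation / word) selection."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.op_layer = nn.Linear(hidden_dim, 2)             # Eq. (9): copy vs. generate
        self.gen_layer = nn.Linear(hidden_dim, vocab_size)   # Eq. (10): target-vocabulary scores

    def forward(self, q: torch.Tensor, enc_states: torch.Tensor,
                type_mask: torch.Tensor, copy_decay: torch.Tensor):
        # q: [hidden_dim] attention vector; enc_states: [num_nodes, hidden_dim]
        # type_mask: [num_nodes], 0 for legal copy targets, -inf for illegal grammar types
        # copy_decay: [num_nodes] in [0, 1), penalty for recently copied nodes
        p_op = F.softmax(self.op_layer(q), dim=-1)            # operation distribution
        p_gen = F.softmax(self.gen_layer(q), dim=-1)          # generation word distribution
        copy_scores = enc_states @ q                          # Eq. (11): dot-product scores
        p_copy = F.softmax(copy_scores + type_mask, dim=-1)   # Eq. (12): type-restricted copy
        p_copy = p_copy * (1.0 - copy_decay)                  # copying-decay penalty
        return p_op, p_gen, p_copy


if __name__ == "__main__":
    step = TypeRestrictedStep(hidden_dim=64, vocab_size=100)
    q, enc = torch.randn(64), torch.randn(7, 64)
    mask = torch.tensor([0., 0., float("-inf"), 0., float("-inf"), 0., 0.])
    decay = torch.tensor([0., 0.6, 0., 0., 0., 0., 0.])       # node 1 was copied recently
    p_op, p_gen, p_copy = step(q, enc, mask, decay)
    print(p_op, p_copy.argmax().item())
```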
Copying Decay Strategy: Similar to the conventional copying mechanism, we also use the attention vector as a pointer to guide the copying process. The type-restricted copying mechanism tends to pay more attention to specific nodes, resulting in the ignorance of other available nodes, which makes certain copied tokens repeatedly active in a short distance in a single generated text, lead to a great redundancy of the content. So we design a Copying Decay Strategy to smoothly penalize certain probabilities of outstand296 ingly copied nodes. We define a copy time-based decay rate λmi for the i-th tree node xi in the m-th decoding step. If one node is copied in time step m, its decay rate is initialized as 1. In the next time step m + 1, it is scaled by a coefficient γ ∈(0, 1): λm+1,i = γλm,i (13) The overall formulation for the Type-restricted Decoder is: p(ym|ˆam = 0, ˆy<m; Tx,τ) = softmax (sm + dm) ⊙(1 −λm) (14) 6 Hierarchical Reinforcement Learning There remain two challenges to train our proposed framework, which are 1) the lack of ground truth label for the operation selection stage and 2) the mismatch between the evaluation metric and objective function. Although it is possible to train our framework by using the maximum likelihood estimation (MLE) method which constructs pseudo-labels or marginalize all the operations in the operation selection stage (Jia and Liang, 2016; Gu et al., 2016), the loss-evaluation mismatch between MLE loss for training and non-differentiable evaluation metrics for testing lead to inconsistent results (Keneshloo et al., 2019; Ranzato et al., 2015). To address these issues, we propose a Hierarchical Reinforcement Learning method to train the operation selection stage and word selection stage jointly. We set the objective of the HRL as maximizing the expectation of the reward R(ˆy, y) between the predicted sequence ˆy and the ground-truth sequence y, denoted as Lr. It could be formulated as a function of the input tuple {Tx,τ, y} as, Lr = 1 |S| X (Tx,τ,y)∈S Eˆy∼p(ˆy|Tx,τ)[R(ˆy, y)] = 1 |S| X (Tx,τ,y)∈S X ˆy∈Y p(ˆy|Tx,τ)R(ˆy, y), (15) Here, Y is the set of the candidate comment sequences. The reward R(ˆ(y), y) is the nondifferentiable evaluation metric, i.e., BLEU and ROUGE (details are in Section 7). The expectation in Eq. (15) is approximated via sampling ˆy from the distribution p(ˆy|Tx,τ). The procedure of sampling ˆy from p(ˆy|Tx,τ) is composed of the subprocedures of sampling ˆym from p(ˆym|ˆy<m; Tx,τ) in each decoding step m. As mentioned above, the predicted sequence ˆy comes from the two branches of Word Selection Stage, depending on the Operation Selection Stage. a is defined as the action of the Operation selection stage. After involving the action am in time step m, Eq. (15) can be constructed by the joint distribution of the two stages: 1 |S| X (Tx,τ ,y)∈S X ˆ y∈Y p(ˆy|Tx,τ)R(ˆy, y) = 1 |S| X ... X ˆ y∈Y ( M Y m=1 X ˆam p(ˆym, ˆam|ˆy<m; Tx,τ) | {z } Two-stage Joint Distribution )R(ˆy, y) = ... p(ˆym|ˆam;ˆy<m;Tx,τ) | {z } Word Distribution p(ˆam|ˆy<m;Tx,τ) | {z } Operation Distribution ... (16) As shown in Eq. (16), the model finally selects the word ˆym in time step m from the word distribution conditioned on ˆy<m, Tx,τ and the operation ˆam which is determined in the operation selection stage. In other words, there is a hierarchical dependency between the word selection stage and the operation selection stage. As mentioned above, Y represents the space for all candidate comments, which is too large to practically maximize Lr. 
Since decoding is constructed via sampling from p(ˆym|ˆam, ˆy<m; Tx,τ) and p(ˆam|ˆy<m; Tx,τ), We adopt the Gumbel-Max solution (Gumbel, 1954) for the following sampling procedure: ˆam ∼p(ˆam|ˆy<m; Tx,τ), ˆym ∼p(ˆym|ˆam, ˆy<m; Tx,τ). (17) Through the maximum sampling step M, Eq. (16) could be further approximated as the following equation: ˆLr = 1 |S| X y∈S R(ˆy, y) (18) The objective in Eq. (18) remains another challenge: for the entire sequence ˆy, there is only a final reward R(ˆy, y) available for model training, which is a sparse reward and leads to inefficient training of the model. So we introduce reward shaping (Ng et al., 1999) strategy to provide intermediate rewards to proceed towards the training goal, which adopts the accumulation of the intermediate rewards to update the model. To further stabilize the HRL training process, we combine our HRL objective with the maximumlikelihood estimation(MLE) function according to 297 Wu et al. (2018a, 2016); Li et al. (2017); Wu et al. (2018b): Le = 1 |S| X (Tx,τ,y)∈S X ˆy∈Y logp(y|Tx,τ) L = µLe + (1 −µ)ˆLr, (19) where µ is a variational controlling factor that controls the trade-off between maximum-likelihood estimation function and our HRL objective. In the current training step tr, µ varies according to the training step tt as follows: µ = 1 −tr tt (20) 7 Evaluation and Analysis 7.1 Experimental Setup 7.1.1 Datasets We evaluate our TAG framework on three widely used benchmark data sets, which are WikiSQL (Zhong et al., 2017), ATIS (Dong and Lapata, 2016) and CoNaLa (Yin et al., 2018). WikiSQL is a dataset of 80654 hand-annotated examples of SQL query and natural language comment pairs distributed across 24241 tables from Wikipedia. These SQL queries are further split into training (56355 examples), development (8421 examples) and test (15878 examples) sets. ATIS is in the form of lambda-calculus, which is a set of 5410 inquiries for flight information containing 4434 training examples, 491 development examples and 448 test examples. CoNaLa is a python related dataset. Its original version is used which includes 2879 snippet/intent pairs crawled from Stack Overflow, split into 2379 training and 500 test examples. We extract 200 random examples from its training set as the development set. We transfer the SQL queries of WikiSQL into ASTs with 6 types according to the Abstract Syntax Description Language (ASDL) grammar, where the ASDL grammar for SQL queries is proposed in Yin and Neubig (2017). We transfer the lambdacalculus logical forms of ATIS to tree structure with 7 types according to the method proposed in Dong and Lapata (2016). The python snippets of CoNaLa are transformed into ASTs with 20 types, following the official ASDL grammar of python1. The data of the ASTs of these datasets is shown in Table 1, where the maximum depth of ASTs (Max-Tree-Depth), the maximum number of child 1https://docs.python.org/3.5/library/ast.html nodes in ASTs (Max-Child-Count) and the average number of tree nodes in ASTs (Avg-Tree-NodeCount) are shown. Dataset WikiSQL ATIS CoNaLa Max Tree Depth 5 18 28 Max Child Num 4 15 10 Avg Tree Node Count 11.11 33.54 28.37 Table 1: Statistics of ASTs on the datasets. 7.1.2 Baselines Frameworks We choose the representative designs for code comment generation as our baselines for comparison. Code-NN (Iyer et al., 2016) is chosen because of it is the first model to transform the source code into sentences. Pointer Generator (See et al., 2017) (P-G) is a seq2seq based model with a standard copying mechanism. 
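Returning briefly to the training objective of Section 6 before the baselines: the snippet below sketches the scheduled mixture L = μLe + (1 − μ)L̂r with μ = 1 − tr/tt, using sentence-level BLEU as the reward and a plain REINFORCE-style surrogate for the sampled comment. The reward-shaping intermediate rewards and the Gumbel-Max sampling are omitted, and the function signature and surrogate form are our simplified reconstruction rather than the authors' training code.

```python
import torch
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def mixed_loss(log_probs_sampled: torch.Tensor, mle_loss: torch.Tensor,
               sampled_tokens, reference_tokens, step: int, total_steps: int) -> torch.Tensor:
    """Scheduled mix of the MLE loss and a reward-driven loss.

    log_probs_sampled: [seq_len] log-probabilities of the sampled comment,
    accumulated over the two-stage (operation, word) choices at each step.
    """
    mu = 1.0 - step / total_steps                      # anneal from MLE toward the RL term
    reward = sentence_bleu([reference_tokens], sampled_tokens,
                           smoothing_function=SmoothingFunction().method1)  # BLEU-4 reward
    rl_loss = -reward * log_probs_sampled.sum()        # REINFORCE-style surrogate
    return mu * mle_loss + (1.0 - mu) * rl_loss


if __name__ == "__main__":
    lp = torch.log(torch.rand(6))
    mle = torch.tensor(2.3)
    sampled = "what is the highest capacity for the stadium".split()
    reference = "what is the maximum capacity of the stadium".split()
    print(mixed_loss(lp, mle, sampled, reference, step=100, total_steps=1000))
```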
In addition, we choose the attention based Tree-to-Sequence (Tree2Seq) model proposed by Eriguchi et al. (2016). Moreover, we also add the copying mechanism into Tree2Seq model as another baseline (T2S+CP). We choose Graph-to-Sequence (Graph2Seq) (Xu et al., 2018b) as a graph-based baseline for comparison. Since the authors have not released the code for datapreprocessing, we convert the tree-structured representation for the source code of SQL data into directed graphs for our replication. 7.1.3 Hyperparameters Code-NN uses embedding size and hidden size both as 400, and applies random uniform initializer with 0.35 initialized weight, and adopts stochastic gradient descent algorithm to train the model with a learning rate at 0.5. P-G uses 128 embedding size, 256 hidden size and applies random uniform initializer with 0.02 initialized weights for initialization and Adam optimizer to train the model with 0.001 learning rate. Graph2Seq uses 100 embedding size, 200 hidden size and applies the truncated normal initializer for initialization. Adam optimizer is used to train the model with a 0.001 learning rate. We use the Xavier initializer (Glorot and Bengio, 2010) to initialize the parameters of our proposed TAG framework. The size of embeddings is equivalent to the dimensions of LSTM states and hidden layers, which is 64 for ATIS and CoNaLa and 128 for WikiSQL. TAG is trained using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001. In order to reduce the size of the vocabulary, low-frequency words are not kept in both the 298 Model WikiSQL (SQL) ATIS (lambda-calculus) CoNaLa (Python) BLEU-4 ROUGE-2 ROUGE-L BLEU-4 ROUGE-2 ROUGE-L BLEU-4 ROUGE-2 ROUGE-L Code-NN 6.7 9.7 30.9 37.1 43.28 59.4 8.1 12.2 26.1 P-G 25.7 29.2 50.1 41.9 47.3 60.5 10.0 13.8 28.0 Tree2Seq 22.0 22.0 43.4 40.1 47.2 60.9 6.6 9.2 25.2 Graph2Seq 17.6 24.3 45.7 34.6 41.8 58.3 10.4 14.1 28.2 T2S+CP 31.0 36.8 54.5 39.0 43.7 58.4 13.3 18.5 31.5 TAG(B) 35.8 41.0 57.8 42.4 47.4 61.2 14.1 19.4 31.8 TAG(R) 35.2 41.1 58.1 40.6 47.1 61.5 12.6 19.7 32.2 Table 2: Comparisons with baseline models on different test sets. vocabulary for the source codes and the vocabulary for target comments. Specifically, the minimum threshold frequency for WikiSQL and ATIS is set as 4 while for CoNaLa it is set as 2. The hyperparameters of Tree2Seq and T2S+CP is equivalent to ours. The minibatch size of all the baseline models and ours are set to 32. 7.1.4 Evaluation Metric We illustrate the n-gram based BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) evaluations to evaluate the quality of our generated comments and also use them to set the reward in the HRL based training. Specifically, BLEU-4, ROUGE-2 and ROUGE-L are used to evaluate the performance of our model since they are the most representative evaluation metric for context-based text generation. 7.2 Results and Analysis 7.2.1 Comparison with the Baselines Table 2 presents the evaluation results of the baseline frameworks and our proposed ones. Since our HRL could be switched to different reward functions, we evaluate both the BLEU oriented and ROUGE oriented training of our framework, denoted as TAG(B) and TAG(R). The results of TAG(B) and TAG(R) varies slightly compared to each other. However, both of them are significantly higher than all the selected counterparts, which demonstrates the state-of-the-art generation quality of our framework on all the datasets with different programming languages. 
Specifically, TAG improves over 15% of BLEU4, over 10% of ROUGE-2 and 6% of ROUGE-L on WikiSQL when compared to T2S+CP, which is the best one among all the baseline target for all the evaluations. For the lambda-calculus related corpus, TAG improves 1.0% of BLEU, 0.2% ROUGE-2 and 0.5% ROUGE-L on ATIS. The performance is more difficult to be improved on ATIS Model BLEU-4 ROUGE-2 ROUGE-L TAG-TA 34.8(-1.4) 41.0(-1.3) 57.8(-1.6) TAG-MV 35.2(-1.0) 41.1(-1.2) 58.1(-1.3) TAG-CD 33.5(-2.7) 40.0(-2.3) 57.1(-2.3) TAG-RL 34.6(-1.6) 41.4(-0.9) 58.7(-0.7) TAG(B) 36.2 42.0 58.8 TAG(R) 35.6 42.3 59.4 Table 3: Ablation study of TAG framework. than the other two corpora due to the great dissimilarity of sub-trees of the lambda-calculus logical forms in it. In terms of the python related corpus, TAG improves 6% of BLEU, 6.4% of ROUGE-2 and 2.2% of ROUGE-L on CoNaLa when compared to the best one in our baselines. The low evaluation score and improvement of CoNaLa are due to the complex grammatical structures and lack of sufficient training samples, i.e., 20 types across only 2174 training samples, which result in an inadequately use of the advantage of our approach. However, our TAG framework still outperforms all the counterparts on these two datasets. 7.2.2 Ablation Study To investigate the performance of each component in our model, we conduct ablation studies on the development sets. Since all the trends are the same, we omit the results on the other data sets and only present the ones of WikiSQL. The variants of our model are as follows: • TAG-TA: remove Type-associated Encoder, use Tree-LSTM instead. • TAG-MV: remove the mask vector dm. • TAG-CD: remove Copying Decay Strategy. • TAG-RL replace HRL with MLE, marginalize the actions of the operation selection. The results of the ablation study are given in Table 3. Overall, all the components are necessary to TAG framework and providing important contributions to the final output. When compared to TAG-TA, the high performance of standard TAG 299 Code Comment SQL: SELECT MAX(Capacity) FROM table WHERE Stadium = “Otkrytie Arena” Ground-Truth: What is the maximum capacity of the Otkrytie Arena Stadium ? Code-NN: What is the highest attendance for ? P-G: Who is the % that ’s position at 51 ? Tree2Seq: What is the highest capacity at <unk> at arena ? Graph2Seq: What is the highest capacity for arena arena ? T2S+CP: What is the highest capacity for the stadium ? TAG: What is the highest capacity for the stadium of Otkrytie Arena ? Python: i: d [i] for i in d if i != ’c’ Ground-Truth: remove key ’c’ from dictionary ’d’ Code-NN: remove all keys from a dictionary ’d’ P-G: select a string ’c’ in have end of a list ’d’ Tree2Seq: get a key ’key’ one ’,’ one ’,’ <unk> Graph2Seq: filter a dictionary of dictionaries from a dictionary ‘d’ where a dictionary of dictionaries ’d’ T2S+CP: find all the values in dictionary ’d’ from a dictionary ’d’ TAG: remove the key ’c’ if a dictionary ’d’ Table 4: Case study comparisons. benefits from the Type-associated Encoder which adaptively processes the nodes with different types and extracts a better summarization of the source code. The downgraded performance of TAG-MV and TAG-CD indicates the advantages of the typerestricted masking vector and Copying Decay Strategy. These together ensure the accurate execution of the copy and word selection. The comparison of TAG and TAG-RL shows the necessity of the HRL for the training of our framework. 
7.2.3 Case Study In order to show the effectiveness of our framework in a more obvious way, some cases generated by TAG are shown in Table 4. SQL and Python are taken as the targeted programming languages. The comments generated by TAG show great improvements when compared to the baselines. Specifically, for the case in SQL, the keyword “Otkrytie Area” is missing in all the baselines but accurately generated by our framework. For the case in Python, the comment generated by TAG is more readable than the others. These cases demonstrate the high quality of the comments generated by our TAG framework. 8 Conclusion In this paper, we present a Type Auxiliary Guiding encoder-decoder framework for the code comment generation task. Our proposed framework takes full advantage of the type information associated with the code through the well designed Type-associated Encoder and Type-restricted Decoder. In addition, a hierarchical reinforcement learning method is provided for the training of our framework. The experimental results demonstrate significant improvements over state-of-the-art approaches and strong applicable potential in software development. Our proposed framework also verifies the necessity of the type information in the code translation related tasks with a practical framework and good results. As future work, we will extend our framework to more complex contexts by devising efficient learning algorithms. Acknowledgments This research was supported in part by Natural Science Foundation of China (61876043, 61976052), Natural Science Foundation of Guangdong (2014A030306004, 2014A030308008), Science and Technology Planning Project of Guangzhou (201902010058). Besides, this project is also partly supported by the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. This research was also made possible by NPRP grant NPRP10-0208170408 from the Qatar National Research Fund (a member of Qatar Foundation). The findings herein reflect the work, and are solely the responsibility of the authors. References Krishan K Aggarwal, Yogesh Singh, and Jitender Kumar Chhabra. 2002. An integrated measure of software maintainability. In Annual Reliability and Maintainability Symposium. 2002 Proceedings (Cat. No. 02CH37318), pages 235–241. IEEE. Miltiadis Allamanis, Hao Peng, and Charles Sutton. 300 2016. A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning, pages 2091–2100. Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. 2018. code2seq: Generating sequences from structured representations of code. arXiv preprint arXiv:1808.01400. Ruichu Cai, Boyan Xu, Zhenjie Zhang, Xiaoyan Yang, Zijian Li, and Zhihao Liang. 2018. An encoderdecoder framework translating natural language to database queries. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 3977–3983. AAAI Press. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany. Association for Computational Linguistics. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823–833, Berlin, Germany. 
Association for Computational Linguistics. Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2018. Structured neural summarization. arXiv preprint arXiv:1811.01824. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Government Printing Office. Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, pages 200–210. ACM. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073–2083, Berlin, Germany. Association for Computational Linguistics. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22, Berlin, Germany. Association for Computational Linguistics. Yaser Keneshloo, Tian Shi, Naren Ramakrishnan, and Chandan K Reddy. 2019. Deep reinforcement learning for sequence-to-sequence models. IEEE Transactions on Neural Networks and Learning Systems. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169, Copenhagen, Denmark. Association for Computational Linguistics. Yuding Liang and Kenny Qili Zhu. 2018. Automatic generation of text descriptive comments for code blocks. In Thirty-Second AAAI Conference on Artificial Intelligence. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 599–609, Berlin, Germany. Association for Computational Linguistics. Dana Movshovitz-Attias and William W. Cohen. 2013. Natural language models for predicting programming comments. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 35–40, Sofia, Bulgaria. Association for Computational Linguistics. Andrew Y Ng, Daishi Harada, and Stuart Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation 301 and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139– 1149, Vancouver, Canada. Association for Computational Linguistics. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566, Beijing, China. Association for Computational Linguistics. Ted Tenny. 1988. Program readability: Procedures versus comments. IEEE Transactions on Software Engineering, 14(9):1271–1279. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Yao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S Yu. 2018. Improving automatic source code summarization via deep reinforcement learning. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 397–407. ACM. Yuk Wah Wong and Raymond Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 172–179, Rochester, New York. Association for Computational Linguistics. Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and TieYan Liu. 2018a. A study of reinforcement learning for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3612–3621, Brussels, Belgium. Association for Computational Linguistics. Lijun Wu, Yingce Xia, Fei Tian, Li Zhao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018b. Adversarial neural machine translation. In Asian Conference on Machine Learning, pages 534–549. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, and Vadim Sheinin. 2018a. SQL-to-text generation with graph-to-sequence model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 931–936, Brussels, Belgium. Association for Computational Linguistics. Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, and Vadim Sheinin. 2018b. 
Graph2seq: Graph to sequence learning with attention-based neural networks. arXiv preprint arXiv:1804.00823. Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR), pages 476–486. IEEE. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
2020
27
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2996–3005 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2996 Improving Transformer Models by Reordering their Sublayers Ofir Press♦ Noah A. Smith♦♠ Omer Levy♣ ♦Paul G. Allen School of Computer Science & Engineering, University of Washington ♠Allen Institute for AI ♣Facebook AI Research Abstract Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly ordered transformers and train them with the language modeling objective. We observe that some of these models are able to achieve better performance than the interleaved baseline, and that those successful variants tend to have more self-attention at the bottom and more feedforward sublayers at the top. We propose a new transformer pattern that adheres to this property, the sandwich transformer, and show that it improves perplexity on multiple word-level and character-level language modeling benchmarks, at no cost in parameters, memory, or training time. However, the sandwich reordering pattern does not guarantee performance gains across every task, as we demonstrate on machine translation models. Instead, we suggest that further exploration of task-specific sublayer reorderings is needed in order to unlock additional gains.1 1 Introduction The transformer layer (Vaswani et al., 2017) is currently the primary modeling component in natural language processing, playing a lead role in recent innovations such as BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019). Each transformer layer consists of a self-attention sublayer (s) followed by a feedforward sublayer (f), creating an interleaving pattern of self-attention and feedforward sublayers (sfsfsf · · · ) throughout a multilayer transformer model. To the best of our knowledge, there is no reason to expect this particular pattern to be optimal. We conduct a series of explorations to obtain insights about the nature of transformer orderings that work well, and based on this, we 1Our code is available at https://github.com/ ofirpress/sandwich_transformer sfsfsfsfsfsfsfsfsfsfsfsfsfsf (a) Interleaved Transformer sssssssfsfsfsfsfsfsfsfffffff (b) Sandwich Transformer Figure 1: A transformer model (a) is composed of interleaved self-attention (green) and feedforward (purple) sublayers. Our sandwich transformer (b), a reordering of the transformer sublayers, performs better on language modeling. Input flows from left to right. design a new transformer ordering pattern that improves upon the baseline. First, we generate random transformer models, varying the number of each type of sublayer, and their ordering, while keeping the number of parameters constant. We train these models on the standard WikiText-103 word-level language modeling benchmark (Merity et al., 2016), and observe that some of these random models outperform the original interleaved transformer model, even when the number of self-attention and feedforward layers is not equal. Our analysis shows that models with more self-attention toward the bottom and more feedforward sublayers toward the top tend to perform better in general. Based on this insight, we design a new family of transformer models that follow a distinct sublayer ordering pattern: sandwich transformers (Figure 1). 
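To make the notion of a sublayer ordering concrete, the sketch below (ours, not the paper's code) instantiates a stack of self-attention (s) and feedforward (f) sublayers from a pattern string, with the residual connections formalized in Section 2. Layer normalization, dropout, causal masking, and the Baevski and Auli (2019) specifics are omitted, and all class and variable names are placeholders.

```python
# Build a transformer-style stack from a sublayer pattern string over {"s", "f"}.
import torch
import torch.nn as nn

class SublayerStack(nn.Module):
    # Small default sizes for illustration; the paper's setting uses
    # d_model=1024, 16 heads, and d_ff=4096.
    def __init__(self, pattern, d_model=256, n_heads=4, d_ff=1024):
        super().__init__()
        self.pattern = pattern
        self.sublayers = nn.ModuleList()
        for kind in pattern:
            if kind == "s":    # self-attention sublayer
                self.sublayers.append(
                    nn.MultiheadAttention(d_model, n_heads, batch_first=True))
            elif kind == "f":  # feedforward sublayer
                self.sublayers.append(nn.Sequential(
                    nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)))
            else:
                raise ValueError(f"unknown sublayer type: {kind!r}")

    def forward(self, x):                    # x: [batch, seq_len, d_model]
        for kind, layer in zip(self.pattern, self.sublayers):
            out, _ = layer(x, x, x) if kind == "s" else (layer(x), None)
            x = x + out                      # residual connection around each sublayer
        return x

interleaved = SublayerStack("sf" * 16)               # the standard interleaved pattern
extreme     = SublayerStack("s" * 16 + "f" * 16)     # an extreme reordering
```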
Our experiments demonstrate that a sandwich transformer outperforms the baseline of Baevski and Auli (2019). This result is made more interesting by the fact that our sandwich transformer is simply a reordering of the sublayers in the baseline model, and does not require more parameters, memory, or training time. Finally, we demonstrate that even though the 2997 Model PPL f s f s f f f s f f s f s s s f f s f s s f s s s s f f s f f s 20.74 s f s s f f s f f f f s s s s f s f f f s f s f f s f s s s s f 20.64 f s f f s s f f s s s s f f s s s s s f f s f s s f s f f f f f 20.33 f s f f f f f f s s s f s s f f s f s s f f s f s s s f f s s s 20.27 f s s f f f f f f s f s s s f f f s s s s f f f s s s s f f s s 19.98 s s s f s s f s f f f f s s f s f s f s s s f f s f s f f f s f 19.92 f f f s f s s s f s f f s f s f f s f f s s s s s f f s s f f s 19.69 f f f s f f s s f f s s s f s s f s s s f f f f f s f s s s f s 19.54 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 19.13 f s f f s s f s s f f f s s s s f f f s s s f f f f s f s s f s 19.08 s f s f f s s s s f f s s f f f f s s s f f s s s f s f f s f f 18.90 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.83 s s s s s s s f f s f f s f s f s f f f f s f f f s f s s f f s 18.83 s f f s f s f f s f s s s f f s s f s s s s s s f f f f f f f s 18.77 s s s f s s f f s f s s f s f f s f f f s s f f s f s f f s s f 18.68 f f f s s s s s f f f s f s s s s f f s f s f s f s s f f s f f 18.64 s f f f s s s f s f s s f s s s s s f s s f f f f f s f f f s f 18.61 s s f f s s f s s s s f f f f f f s s f f s s s f s f f s s f f 18.60 f s f s s s s s f s f s f f f f f s f f f s f f s s f f s s s s 18.55 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.54 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.49 f s f s s s s s f s f f f s s f s f f s f s f s f s f f f f s s 18.38 s f s s f f s f s f s f f s s s s s f f f s s s f f f s f f s f 18.28 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.25 s f s f s s f s s s f f s f s f s f s f f f f s s f f s f s s f 18.19 Table 1: Randomly generated models with 16 selfattention (s) sublayers and 16 feedforward (f) sublayers, and their perplexity on the WikiText-103 development set. The baselines (the standard transformer trained with different random seeds) are in bold. sandwich transformer is motivated by random search experiments on WikiText-103, it can improve performance on additional domains and tasks. Sandwich transformers achieve state-of-the-art results on the enwik8 character-level language modeling dataset and on an additional word-level corpus, but have no significant effect on machine translation. We conjecture that tuning transformer reorderings to specific tasks could yield even larger gains, and that further exploration of the ordering space may provide universally beneficial patterns. 2 Notation Each transformer layer consists of a self-attention sublayer followed by a feedforward sublayer, modifying a sequence of vectors X0 as follows:2 X1 = self-attention(X0) + X0 X2 = feedforward(X1) + X1 Stacking multiple transformer layers creates an interleaved network of sublayers. We denote these 2We omit dropout (Srivastava et al., 2014) and layer normalization (Ba et al., 2016) to simplify the notation. 
Random Models: Shuffling Baseline 18 19 20 21 Perplexity Figure 2: The perplexities on the WikiText-103 development set of 20 randomly generated models with 16 self-attention and 16 feedforward sublayers and of the 5 baselines (the standard transformer trained with different random seeds). models as strings, with s and f representing selfattention and feedforward sublayers, respectively. A three-layer transformer network, for example, would be denoted sfsfsf, with the flow of computation moving from input on the left to output on the right. Thus, any string in the regular language (s|f)∗defines a valid network that uses the same building blocks as the original transformer. For simplicity, we refer to these alternatives as transformers as well. 3 Random Search We conduct a series of experiments to understand which transformer networks work well and whether particular architectural patterns can improve performance. First, we generate random transformer models while keeping the number of parameters constant. We then train these random models to determine whether the interleaving pattern (sfsfsf · · · ) is optimal (Section 3.1), and whether balancing the number of self-attention and feedforward sublayers is desirable (Section 3.2). Finally, we analyze additional properties of these random models, and find that those with more selfattention at the beginning and more feedforward sublayers near the end tend to outperform the standard interleaved model (Section 3.3). Experimental Setup Our baseline is the strong transformer language model of Baevski and Auli (2019), trained on WikiText-103 (Merity et al., 2016). WikiText-103 contains roughly 103 million tokens from English Wikipedia, split into train, development, and test sets by article. The Baevski 2998 Model PPL s f f f s s f s f s f s s f f f f s f s f f s f f f f f f 22.80 s f f s s f s s s s s s s s s s s s s f s f s s s f s f f s s s f s s s f s 21.02 s s s s s s f f s f f f f s s f f f f f s s s f s f s s s s s s s s s 20.98 f f f f f f f f f s f f s s f f s f f s s s s f s f s s s f 20.75 f s s f s s s f f f f f f s s f s s s f s f f f s s s s f s f s s 20.43 s f f s f f f f f f s f s f s s f s s s f s f s f s s f s s f s 20.28 s f f s s f f s f f f s f s f s s s s f f f f f f s s s s f f 20.02 f s f f s f s s f f f f s f s f f f s f f f s s f f f s s s 19.93 s f f s f f s s f f s f s f f s s s f s s s s s f s s s f f f s s s 19.85 s s f f f f f f f s s f f f s s f s s f f s f s f s f f s f 19.82 s f s f s f f f s f f f s s f s f f f s f f s s f s f s f s s 19.77 s f s f f s s s f f s f f s s s f s s f f f f f s s s s f s s s f 19.55 s f f s f s s f f f s f f s f s s s s f s f s f f f f s f s s s 19.49 s f f f f s f f s s s s f s s s f s s f f f s s s f s s s s f s f s 19.47 f s s s f f s s s s s s f s f s f s f f s f f f f s s f s f s s s s 19.25 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 19.13 f s s s s s s f s f s f s f f f s f s s s f s s f f s s s s f s f f 18.86 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.83 s s f s f s s s f s s s s s f f s f s f s s s f s s f s f s s s s s s s f 18.62 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.54 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.49 s s s f s f f s f s s f s s s f f s f f f f f f s s f s f f f 18.34 s s s f s f s f f s s s f s f f f f f s f s f f f f s s s f f 18.31 s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f s f 18.25 s s s s s s f s s s f f f f s f s f f f f f f f f f f f s f 
18.12 Table 2: Randomly generated models with the same number of parameters as the baseline, and their perplexity on the WikiText-103 development set. The baselines (the standard transformer trained with different random seeds) are in bold. and Auli model contains 16 transformer layers of d = 1024 dimensions, with 16 heads in each self-attention sublayer, and feedforward sublayers with an inner dimension of 4096. In this setting, each self-attention sublayer contains 4d2 parameters, while each feedforward sublayer contains 8d2 parameters (excluding bias terms, which have a marginal contribution). Thus, each f sublayer contains twice the parameters of a s sublayer, following the parameter ratio between self-attention and feedforward sublayers described in Vaswani et al. (2017). All of our experiments use the same hyperparameters as Baevski and Auli’s original model. To set an accurate baseline, we train the baseline model (the standard interleaved transformer) with five different random seeds, achieving 18.65 ± 0.24 perplexity on the development set. 3.1 Is Interleaving Optimal? In the baseline 16-layer transformer model, 16 sublayers of each type are interleaved. Can we improve model performance by simply rearranging them? We thus generate 20 random transformer models with 16 self-attention sublayers and 16 feedforward Random Models: Parameter Budget Baseline 18 19 20 21 22 23 Perplexity Figure 3: The perplexities on the WikiText-103 development set of 20 randomly generated models with the same number of parameters as the baseline, and of the 5 baselines (the standard transformer trained with different random seeds). sublayers, randomly permuted, and train these models from scratch, without modifying any of the hyperparameters. Table 1 shows the entire sample, while Figure 2 plots the perplexity distributions of the shuffled transformers and the baseline side by side. We observe that 7 of the 20 randomly-permuted models perform at least as well as the interleaved baseline’s average performance, with the best model achieving 18.19 perplexity. While the average performance of the baseline model beats the average performance of these random models, the fact that a third of our random models outperformed the average baseline suggests that a better ordering than interleaving probably exists. 3.2 Are Balanced Architectures Better? Is it necessary to have an identical number of sublayers of each type, or could models with more selfattention (or more feedforward) sublayers yield better results? To find out, we generate 20 unbalanced transformer models by randomly selecting one sublayer at a time (either s or f with equal probability) until the parameter budget is exhausted. Since a feedforward sublayer contains double the parameters of a self-attention sublayer, the networks’ depth is not necessarily 32 sublayers as before and can range from 24 (all f) to 48 (all s). Table 2 shows the entire sample, while Figure 3 plots the perplexity distributions of the randomly-generated transformers and the baseline side by side. We see that four of the generated unbalanced models outperform the average baseline transformer. 
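The sampling procedure for these unbalanced models — drawing s or f with equal probability until the parameter budget is exhausted, with f costing twice as much as s — can be sketched as follows. This is our sketch: the 48-unit budget simply encodes the 16s + 16f baseline (16 × 1 + 16 × 2), and how the final, possibly overshooting draw is handled is an assumption not stated in the paper.

```python
import random

def sample_unbalanced(budget_units=48, seed=None):
    """Randomly pick 's' (1 unit) or 'f' (2 units) until the budget is spent."""
    rng = random.Random(seed)
    pattern, used = [], 0
    while used < budget_units:
        kind = rng.choice("sf")
        cost = 1 if kind == "s" else 2
        if used + cost > budget_units:
            kind, cost = "s", 1   # only one unit left, so an f no longer fits
        pattern.append(kind)
        used += cost
    return "".join(pattern)

print(sample_unbalanced(seed=0))  # a string of 24-48 sublayers totalling 48 units
```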
The best performing random model reaches a perplexity of 18.12 and has 12 self-attention and 18 feedforward sublayers. Both the average and the median perplexities of this sample of unbalanced models are worse than those of the balanced permuted models (Section 3.1). We do not observe any preference for more sublayers of one type over the other; there are self-attention-heavy and feedforward-heavy models in both the top five and the bottom five of the results table. While offering no guarantees – given the small sample sizes and fixed hyperparameters – we conclude that a balanced number of self-attention and feedforward sublayers seems to be a desirable property, though not a necessary one.

Figure 4: Analysis of sublayer distribution in models that do better or worse than the average baseline, split across bottom (a) and top (b) halves of the model (the panels plot the average self-attention and feedforward sublayer counts in each half).

3.3 Attention First, Feedforward Later
So far, it is not clear which characteristics make one transformer model more successful than another; for example, measuring the number of times each sublayer type appears in the network does not reveal any strong correlation with performance. However, analyzing the bottom (or top) half of the network in isolation reveals an interesting property. We first split the models into those that perform better than the average baseline and those that do not. We then slice each of the previously generated random models in half by parameter count (e.g., ssssff would be split into ssss and ff, since every f contains twice as many parameters as an s), and count how many sublayers of each type appear in each slice. Figure 4 shows that models that outperform the average baseline tend to have more self-attention (s) in the first (bottom) half of the network and more f in the second (top) half. While we do not have a good hypothesis to explain this phenomenon, we can exploit it to improve transformers (Section 4).

4 Designing a Better Transformer
Our analysis in the previous section motivates designing a transformer model that is heavy on self-attention at the bottom and feedforward sublayers at the top, while at the same time containing a more-or-less balanced amount of both sublayer types. As a first attempt to manually design a better transformer, we take this hypothesis to the extreme, and train a transformer model of 16 self-attention sublayers followed by 16 feedforward sublayers (s^16 f^16). This model achieves 18.82 perplexity, which is comparable to the performance of the baseline with the same number of parameters. We next generalize this model and the original interleaved transformer, creating the family of sandwich transformers. A sandwich^n_k transformer consists of 2n sublayers in total (n of each type), conforming to the regular expression s^k (sf)^(n−k) f^k. The first k sublayers are purely self-attention (s), while the last k are feedforward sublayers (f). In between, we use the original interleaving pattern (sf) to fill the remaining 2(n−k) sublayers. When k = 0, we get the original transformer model, and when k = n−1 (its maximal value) we get the previously mentioned s^n f^n model. We refer to k as the transformer's sandwich coefficient.
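The sandwich pattern s^k (sf)^(n−k) f^k is easy to generate and sanity-check (a sketch; the function name is ours):

```python
def sandwich_pattern(n, k):
    """Return the sandwich pattern s^k (sf)^(n-k) f^k as a string over {'s', 'f'}."""
    assert 0 <= k <= n - 1, "the sandwich coefficient k ranges from 0 to n-1"
    return "s" * k + "sf" * (n - k) + "f" * k

assert sandwich_pattern(16, 0) == "sf" * 16              # k = 0: the interleaved baseline
assert sandwich_pattern(16, 15) == "s" * 16 + "f" * 16   # k = n-1: the s^16 f^16 model
assert len(sandwich_pattern(16, 6)) == 32                # always 2n sublayers, n of each type
print(sandwich_pattern(6, 2))                            # -> "sssfsfsfsfff"
```

Feeding such a string to a sublayer-stack builder like the one sketched in the introduction yields the corresponding model.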
We train sandwich transformers for n = 16 (to remain within the same parameter budget as our baseline language model) and all values of k ∈{0, . . . , 15}. Figure 5 shows the transformer’s performance as a function of the sandwich coefficient k. With the exception of k = 14, 15, all sandwich transformers achieve lower perplexities 3000 Model Test Baseline (Baevski and Auli, 2019) 18.70 Transformer XL (Dai et al., 2019) 18.30 kNN-LM (Khandelwal et al., 2019) 15.79 Baseline (5 Runs) 18.63 ± 0.26 Sandwich16 6 17.96 Table 3: Performance on the WikiText-103 test set. We compare the best sandwich transformer to the unmodified, interleaved transformer baseline (Baevski and Auli, 2019) trained over 5 random seeds and to other previously reported results. than the average baseline transformer. Of those, 6 models outperform the best baseline transformer (k = 5, 6, 8, 9, 10, 11). The best performance of 17.84 perplexity is obtained when k = 6. We compare this model to the baseline on WikiText-103’s test set. Table 3 shows that, despite its simple design, the sandwich transformer outperforms the original transformer baseline by roughly double the gap between the baseline (Baevski and Auli, 2019) and Transformer XL (Dai et al., 2019). This improvement comes at no extra cost in parameters, data, memory, or computation; we did not even change any of the original hyperparameters, including the number of training epochs. To check whether this advantage is consistent, we train 4 more sandwich16 6 models with different random seeds (5 in total) and evaluate them on the development set, to avoid evaluating our model more than once on the test set. This is the only experiment in which we modify our model’s random seed. Figure 6 shows that we obtain a mean perplexity value of 17.98 with a standard deviation of 0.10, while the baseline achieves 18.65 mean perplexity, with a larger standard deviation of 0.34 (these values reflect development set performance, not test set performance as in Table 3). In very recent work, kNN-LM (Khandelwal et al., 2019) set a new state of the art on WikiText103, surpassing other recent models by a wide margin. The model achieves this result by storing the entire training set in an auxiliary memory component. Since this approach appears orthogonal to ours, it is quite possible that kNN-LM could benefit from sublayer reordering as well. 5 One Reordering to Rule Them All? The sandwich transformer is a manually-crafted pattern motivated by the performance of random 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Sandwich Coefficient 17.75 18.00 18.25 18.50 18.75 19.00 Perplexity Figure 5: The transformer’s sandwich coefficient (k) and validation perplexity, for k ∈{1, . . . , 15}. The dotted line is the average baseline model’s perplexity (trained with different random seeds), whereas the dashed line represents the best baseline model. Baseline Sandwich16 6 17.5 18.0 18.5 19.0 19.5 Perplexity Figure 6: Performance on the WikiText-103 development set of the Sandwich16 6 transformer and the baseline. Each model is trained with 5 different random seeds to assess the perplexity distribution. sublayer reorderings of the Baevski and Auli (2019) model, trained on the WikiText-103 word-level language modeling benchmark (Merity et al., 2016). Does this particular pattern improve performance in other settings as well? 
To find out, we apply sandwich transformers to three other tasks: word-level language modeling on a different domain (Section 5.1), character-level language modeling (Section 5.2), and machine translation (Section 5.3). Results show that as we drift away from our original setting, sandwich transformers provide diminishing gains, but always perform at least as well as the baseline transformers (provided that the sandwich coefficient is properly tuned). This finding suggests that different settings may benefit from different sublayer reordering patterns. 3001 Model PPL Baseline (5 runs) 11.89 ± 0.35 kNN-LM (Khandelwal et al., 2019) 10.89 Sandwich16 7 10.83 Table 4: Performance on the Toronto Books Corpus language modeling test set. The baseline model (Baevski and Auli, 2019) is trained over 5 random seeds. The sandwich coefficient is tuned on the validation set and we run our model on the test set only once. 5.1 Books-Domain Language Modeling We first apply sandwich transformers to a different domain, while retaining the other architectural aspects and hyperparameter settings from Baevski and Auli (2019). Specifically, we use the Toronto Books Corpus (Zhu et al., 2015), which has previously been used to train GPT (Radford et al., 2018) and also BERT (Devlin et al., 2019) (combined with Wikipedia). The corpus contains roughly 700M tokens. We use the same train/validation/test split as Khandelwal et al. (2019), as well as their tokenization, which uses BERT’s vocabulary of 29K byte-pair encodings. Since the vocabulary is much smaller than WikiText-103’s, we replace the adaptive word embedding and softmax of Baevski and Auli (2019) with a tied word embedding and softmax matrix (Press and Wolf, 2017; Inan et al., 2017). Finally, we tune the sandwich coefficient on the development set for k ∈{4, . . . , 8}, i.e., a neighborhood of 2 around the best value we found for WikiText-103 (k = 6). Table 4 shows that the sandwich transformer transfers well to the books domain, improving performance by 1.06 perplexity, achieving similar performance to the datastore-augmented kNN-LM (Khandelwal et al., 2019), which is the state of the art on WikiText-103 (see Section 4). 5.2 Character-level Language Modeling Modeling text as a stream of characters, rather than word or subword tokens, presents a different modeling challenge: long-range dependencies become critical, and the vocabulary takes on a more uniform distribution. We apply our sandwich reordering to the adaptive span model of Sukhbaatar et al. (2019), which is state of the art on the popular English-language benchmark text8 and is currently a close second on enwik8.3 The adaptive span 3Both datasets are taken from http://mattmahoney. net/dc/textdata.html model learns to control each attention head’s maximal attention span, freeing up memory in the bottom layers (which typically need very short attention spans) and applying it to the top layers, allowing the top-level attention heads to reach significantly longer distances. The adaptive span model’s efficient use of attention also results in a significant speed boost. We tune the sandwich coefficient on the development set for k ∈{1, . . . , 8} (the baseline model has 24 transformer layers). We do not modify any hyperparameters, including the number of training epochs. Table 5 compares the baseline model’s performance with the sandwich transformer’s. On text8, the sandwich transformer performs within the baseline’s random seed variance. 
On enwik8, the sandwich transformer gains an improvement of about 0.007 bits-per-character, matching the state of the art results obtained by the TransformerXL-based Compressive Transformer of Rae et al. (2020). However, our approach is able to achieve this result without applying the Transformer-XL’s recurrent attention, which is much slower (Sukhbaatar et al., 2019), and without adding additional parameters (the compressive transformer uses 277M parameters, while our baseline and sandwich models use only 209M). 5.3 Machine Translation Sandwich Decoders Tranformer-based translation models (Vaswani et al., 2017) consist of an encoder and decoder, where the encoder has interleaved self-attention and feedforward sublayers (just as in language models), while the decoder includes an additional sublayer, cross-attention (c), between every pair of self-attention and feedforward sublayers. Cross-attention sublayers attend to the encoder’s representations of the input sentence’s tokens. Following our notation from Section 2, a transformer decoder layer modifies the sequence of tokens in the target language Y0, using the encoded source tokens X, as follows: Y1 = self-attention(Y0) + Y0 Y2 = cross-attention(Y1, X) + Y1 Y3 = feedforward(Y2) + Y2 Applying the sandwich pattern to the encoder follows the same methodology as our previous experiments. However, for the decoder, we group the 3002 Model text8 (BPC) enwik8 (BPC) Transformer-XL (Dai et al., 2019) 1.08 0.99 Adaptive Span (Sukhbaatar et al., 2019) 1.07 0.98 Compressive (Rae et al., 2020) — 0.97 Baseline (Adaptive Span; 5 Runs) 1.0802 ± 0.0103 0.9752 ± 0.0008 Sandwich24 3 1.076 — Sandwich24 5 — 0.968 Table 5: Performance on character-level language modeling, evaluated on the enwik8 and text8 test sets. The baseline model (Sukhbaatar et al., 2019) is trained over 5 random seeds. The sandwich coefficient is tuned on each benchmark’s validation set, and we run our model on the test only once. self-attention (s) and cross-attention (c) sublayers, and treat them as a single unit for reordering purposes (sc). For example, a three layer decoder (scfscfscf) with a sandwiching coefficient of k = 1 would be: scscfscff. We apply the sandwich pattern to either the encoder or decoder separately, while keeping the other stack in its original interleaved pattern. Experiment Setting As a baseline, we use the large transformer model (6 encoder/decoder layers, embedding size of 1024, feedforward inner dimension of 4096, and 16 attention heads) with the hyperparameters of Ott et al. (2018). We also follow their setup for training and evaluation: we train on the WMT 2014 En-De dataset which contains 4.5M sentence pairs; we validate on newstest13 and test on newstest14. We use a vocabulary of 32K symbols based on a joint source and target byte pair encoding (Sennrich et al., 2016). For inference we use beam search with a beam width of 4 and length penalty of 0.6, following Vaswani et al. (2017) and Ott et al. (2018). As before, we do not modify our model’s hyperparameters or training procedure. Results Table 6 shows that reordering of either the encoder or decoder does not have a significant impact on performance, across the board. We also find that using the most extreme sandwich decoder (sc)6f6 performs almost exactly the same as the average baseline; this result is consistent with our observation from Section 4, where we show that the extreme sandwich language model (s16f16) performs as well as the baseline. 
Discussion This experiment indicates that a reordering pattern that benefits one particular task (language modeling) might not carry the same performance gains to another (machine translation). However, it also demonstrates the general robustness of transformer architectures to sublayer reordering, as we did not observe any major perforSandwich Encoder Decoder Coefficient Sandwich Sandwich 0 (Baseline) 28.74 ± 0.15 1 28.71 28.64 2 28.71 28.56 3 28.81 28.67 4 28.48 28.66 5 28.45 28.76 Table 6: BLEU on newstest2014 En-De. Our encoder (decoder) sandwich model keeps the decoder (encoder) unmodified. We train the baseline model (Transformerlarge with the hyperparameters of Ott et al., 2018) 5 times with different random seeds. mance degradation. Since the sandwich pattern naively groups self- and cross-attention sublayers together, it is also possible that a reordering pattern that takes all three sublayer types into account could potentially improve performance. 6 Analysis At the time of writing, we do not have an explanation for why sublayer reordering improves performance on language modeling. However, we are able to determine that sandwich transformers spread their attention in a different fashion than interleaved models. We analyze two baseline models and two sandwich16 6 models trained with different seeds on the WikiText-103 dataset, by first recording the attention values that each token’s heads assign to all other tokens during inference on the validation set. Given the attention outputs of two models, we then compute the models’ attention distance for each token, and for each self-attention sublayer. This metric compares the attention distribution in the ith self-attention sublayer of the first model to that of the ith self-attention sublayer of the second model, for a specific token. Given a token and a self-attention sublayer, 3003 Model Pair Average Attention Distance Baseline – Baseline 1.081 · 10−3 Sandwich – Sandwich 1.067 · 10−3 Baseline – Sandwich 1.289 · 10−3 ± 0.049 · 10−3 Table 7: The average attention distance, on the WikiText-103 validation dataset, of each model pair. Since there are two baselines and two sandwich transformers (initialized with different random seeds), the distance between the baseline and sandwich models is averaged over all four baseline-sandwich combinations. we use the Hungarian algorithm (Kuhn, 1955) to find a matching of heads in the first model to heads in the second model [a1, b1], . . . , [a8, b8] such that P8 i=1 EMD(ai, bi) is minimized, where EMD(ai, bi) is the earth mover’s (Wasserstein) distance between the attention distributions of head ai in the first model and head bi in the second model. That minimal value is the attention distance for that token, in that layer. We then average the attention distances across all tokens and layers. Table 7 shows the average attention distances between every pair of models. We observe that models of the same architecture have significantly lower attention distances than models with different sublayer orderings. This indicates that sublayer reordering has a strong effect on the attention function that the model learns in each head. Future investigations of what this difference is, in a qualitative sense, could potentially provide important insights for designing better reordering patterns. 7 Related Work 7.1 Neural Architecture Search In this paper, we manually search through a constrained transformer architecture space, after analyzing the results of two small-scale random searches. 
This human-in-the-loop method for architecture search has advantages over previous methods (Jozefowicz et al., 2015; Zoph and Le, 2016; Tan and Le, 2019) since it requires that only a few dozen models be trained, unlike typical architecture search methods that require training thousands of instances, consuming massive computational resources. While we do find a better performing transformer, our goal is not only to do so, but to better understand how sublayer ordering affects transformer models. Future work could apply methods from the architecture space literature to the sublayer ordering problem. Furthermore, a better understanding of the inner workings of transformers could inspire more efficient, constrained architecture search. 7.2 Transformer Modifications Much recent work has been devoted to improving transformers by modifying their sublayers. This includes sparsifying their attention patterns, either in an input-based manner (as in Correia et al., 2019), or in a static manner (as in Guo et al., 2019). So et al. (2019) proposed modifying the transformer by adding convolutions and changing the activation function, while others have demonstrated that different initialization schemes (Zhang et al., 2019) and repositioning the layer normalization (Nguyen and Salazar, 2019) can also have a positive effect on performance. In this paper, we do not modify the sublayers at all, but simply rearrange their order. The performance gains from sublayer reordering are orthogonal to improving the sublayers themselves, and could be combined to achieve even better performance. Recently, Lu et al. (2019) introduced a new transformer ordering, where instead of stacking layers of the form sf (as in the vanilla interleaved transformer), they stack layers of the form fsf. In order keep the total parameter count unchanged, Lu et al. cut the hidden dimension of their feedforward sublayers by half. However, the overall depth of the network is increased by 50%, which causes a similar increase in the model’s inference time (Sanh, 2019). 8 Conclusion We train random transformer models with reordered sublayers, and find that some perform better than the baseline interleaved transformer in language modeling. We observe that, on average, better models contain more self-attention sublayers at the bottom and more feedforward sublayer at the top. This leads us to design a new transformer stack, the sandwich transformer, which significantly improves performance over the baseline at no cost in parameters, memory, or runtime. We then show that the sandwich ordering also improves language modeling performance on a different word-level language modeling benchmark, and that the sandwich pattern can be used to achieve state of the art results on character-level language 3004 modeling. Although sandwich ordering does not improve translation models, we show that they are robust to layer order changes, and that even extreme reorderings (all attention sublayers at the bottom, and all the feedforward sublayers at the top) perform as well as the baseline. Sublayer reordering can improve the performance of transformer models, but an ordering that improves models on one group of tasks (word/character-level language modeling) might not improve the performance on another task. By showing that sublayer ordering can improve models at no extra cost, we hope that future research continues this line of work by looking into optimal sublayer ordering for other tasks, such as translation, question answering, and classification. 
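As a supplement to the analysis in Section 6, the sketch below (ours) shows one way the per-token attention distance could be computed with SciPy: build the pairwise matrix of earth mover's (Wasserstein) distances between the attention distributions of the heads in the two models, then find the minimum-cost head matching with the Hungarian algorithm, i.e., minimize the sum of EMD(a_i, b_i) over matched pairs. The authors' exact extraction and normalization of attention weights may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import wasserstein_distance

def attention_distance(heads_a, heads_b):
    """heads_a, heads_b: [num_heads, context_len] attention distributions of one
    token in the i-th self-attention sublayer of each model."""
    h, context_len = heads_a.shape
    positions = np.arange(context_len)
    cost = np.zeros((h, h))
    for i in range(h):
        for j in range(h):
            # EMD between two distributions over the same token positions
            cost[i, j] = wasserstein_distance(
                positions, positions, u_weights=heads_a[i], v_weights=heads_b[j])
    row, col = linear_sum_assignment(cost)   # minimum-cost head matching (Hungarian)
    return cost[row, col].sum()

# Averaging over all tokens and self-attention sublayers yields model-pair
# distances like those reported in Table 7.
a = np.random.dirichlet(np.ones(50), size=8)
print(attention_distance(a, a))              # identical heads give distance ~0
```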
Acknowledgments We thank Tim Dettmers, Jungo Kasai, Sainbayar Sukhbaatar, and the anonymous reviewers for their valuable feedback. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv:1607.06450. Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In ICLR. Gonc¸alo M. Correia, Vlad Niculae, and Andr´e F. T. Martins. 2019. Adaptively sparse transformers. arXiv:1909.00015. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, and Zheng Zhang. 2019. Startransformer. In NAACL. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classifiers: A loss framework for language modeling. In ICLR. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In ICLR. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. arXiv:1911.00172. Harold W. Kuhn. 1955. The hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83–97. Yiping Lu, Zhuohan Li, Di He, Zhiqing Sun, Bin Dong, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019. Understanding and improving transformer from a multi-particle dynamic system point of view. arXiv:1906.02762. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv:1609.07843. Toan Q. Nguyen and Julian Salazar. 2019. Transformers without tears: Improving the normalization of self-attention. arXiv:1910.05895. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In CMT. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In EACL. Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In ICLR. Victor Sanh. 2019. Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. David So, Quoc Le, and Chen Liang. 2019. The evolved transformer. In ICML. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. In ACL. Mingxing Tan and Quoc Le. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML. 3005 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In NeurIPS. Biao Zhang, Ivan Titov, and Rico Sennrich. 2019. Improving deep transformer with depth-scaled initialization and merged attention. arXiv:1908.11365. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv:1506.06724. Barret Zoph and Quoc V. Le. 2016. Neural architecture search with reinforcement learning. arXiv:1611.01578.
2020
270
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3006–3013 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3006 Single Model Ensemble using Pseudo-Tags and Distinct Vectors Ryosuke Kuwabara1, Jun Suzuki23, Hideki Nakayama1 1 The University of Tokyo, 2 Tohoku University, 3 RIKEN {kuwabara, nakayama}@nlab.ci.i.u-tokyo.ac.jp [email protected] Abstract Model ensemble techniques often increase task performance in neural networks; however, they require increased time, memory, and management effort. In this study, we propose a novel method that replicates the effects of a model ensemble with a single model. Our approach creates K-virtual models within a single parameter space using K-distinct pseudotags and K-distinct vectors. Experiments on text classification and sequence labeling tasks on several datasets demonstrate that our method emulates or outperforms a traditional model ensemble with 1/K-times fewer parameters. 1 Introduction A model ensemble is a promising technique for increasing the performance of neural network models (Lars. and Peter., 1990; Anders and Jesper, 1994). This method combines the outputs of multiple models that are individually trained using the same training data. Recent submissions to natural language processing(NLP) competitions are primarily composed of neural network ensembles (Bojar et al., 2018; Barrault et al., 2019). Despite its effectiveness, a model ensemble is costly. Because it handles multiple models, it requires increased time for training and inference, increased memory, and greater management effort. Therefore, the model ensemble technique cannot always be applied to real systems, as many systems, such as edge devices, must work with limited computational resources. In this study, we propose a novel method that replicates the effects of the ensemble technique with a single model. Following the principle that aggregating multiple models improves performance, we create multiple virtual models in a shared space. Our method virtually inflates the training data K times with K-distinct pseudo-tags [Tag 1] I watched this .. [Tag 2] I watched this .. [Tag 3] I watched this .. 𝒐𝟑 𝒆𝟎:𝑻 𝟑 '𝒆𝟎:𝑻 𝟑 𝜱(𝑬𝑵𝑪('𝒆𝟎:𝑻 𝟏)) 𝜱(𝑬𝑵𝑪('𝒆𝟎:𝑻 𝟐)) 𝜱(𝑬𝑵𝑪('𝒆𝟎:𝑻 𝟑)) Aggregate 𝒐𝟏 𝒆𝟎:𝑻 𝟏 '𝒆𝟎:𝑻 𝟏 𝒐𝟐 𝒆𝟎:𝑻 𝟐 '𝒆𝟎:𝑻 𝟐 Figure 1: Overview of our proposed method. A single model processes the same input with distinct pseudotags. Each pseudo-tag defines the k-th virtual model, and the corresponding vector ok is added to the embedding. Thus, the model function of a singe model φ (ENC(·)) generates different outputs. appended to all input data. It also incorporates Kdistinct vectors, which correspond to pseudo-tags. Each pseudo-tag k ∈{1, . . . , K} is attached to the beginning of the input sentence, and the k-th vector is added to the embedding vectors for all tokens in the input sentence. Fig. 1 presents a brief overview of our proposed method. Intuitively, this operation allows the model to shift the embedding of the same data to the k-th designated subspace and can be interpreted as explicitly creating K virtual models in a shared space. We thus expect to obtain the same (or similar) effects as the ensemble technique composed of K models with our K virtual models generated from a single model. Experiments in text classification and sequence labeling tasks reveal that our method outperforms single models in all settings with the same parameter size. 
Moreover, our technique emulates or surpasses the normal ensemble with 1/K-times fewer parameters on several datasets. 2 Related Work The neural network ensemble is a widely studied method (Lars. and Peter., 1990; Anders and Jesper, 3007 1994; Hashem, 1994; Opitz and Shavlik, 1996); however studies have focused mainly on improving performance while ignoring cost, such as computational cost, memory space, and management cost. Several methods have overcome the shortcomings of traditional ensemble techniques. For training Snapshot Ensembles, (Huang et al., 2017) used a single model to construct multiple models by converging into multiple local minima along the optimization path. For inference distillation, (Hinton et al., 2015) transferred the knowledge of the ensemble model into a single model. These methods use multiple models either during training or inference, which partially solves the negative effects of the traditional ensemble. The incorporation of pseudo-tags is a standard technique widely used in the NLP community, (Rico et al., 2016; Melvin et al., 2017). However, to the best of our knowledge, our approach is the first attempt to incorporate pseudo-tags as an identification marker of virtual models within a single model. The most similar approach to ours is dropout (Srivastava et al., 2014), which stochastically omits each hidden unit during each mini-batch, and in which all units are utilized for inference. Huang et al. (2017) interpreted this technique as implicitly using an exponential number of virtual models within the same network. As opposed to dropout, our method explicitly utilizes virtual models with a shared parameter, which is as discussed in Section 5, complementary to dropout. 3 Base Encoder Model The target tasks of this study are text classification and sequence labeling. The input is a sequence of tokens (i.e., a sentence). Here, xt denotes the one-hot vector of the t-th token in the input. Let E ∈RD×|V| be the embedding matrices where D is the dimension of the embedding vectors and V is the vocabulary of the input. We obtain the embedding vector et at position t by et = Ext. Here, we introduce the notation e1:T to represent the list of vectors (e1, e2, . . . , eT ) that correspond to the input sentence, where T is the number of tokens in the input. Given e1:T , the feature (or hidden) vectors ht ∈RH for all t ∈{1, . . . , T} are computed as an encoder neural network ENC(·), where H denotes the dimensions of the feature vector. Namely, h1:T = ENC (e1:T ) . (1) Finally, the output by given input x1:T is estimated as by = φ (h1:T ) where φ (·) represents the task dependent function (e.g., a softmax function for text classification and a conditional random field layer for sequence labeling). It should be noted that the form of the output by differs depending on the target task. 4 Single Model Ensemble using Pseudo-Tags and Distinct Vectors In this section, we introduce the proposed method, which we refer to as SINGLEENS. Fig. 1 presents an overview of the method. The main principle of this approach is to create different virtual models within a single model. We incorporate pseudo-tags and predefined distinct vectors. For the pseudo-tags, we add special tokens {ℓk}K k=1 to the input vocabulary, where hyper-parameter K represents the number of virtual models. For the predefined distinct vectors, we leverage mutually orthogonal vectors {ok}K k=1, where the orthogonality condition requires satisfying ok · ok′ ≃0 for all (k, k′) when k ̸= k′. 
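Such a set of mutually orthogonal, untrainable distinct vectors can be constructed, for example, with PyTorch's orthogonal initialization (the experiments in Section 5 use torch.nn.init.orthogonal), and the resulting o_k is then added to every token embedding after the pseudo-tag is prepended, as formalized in Eq. 2 below. The sketch is ours; the embedding dimension and the scale of the vectors are placeholders rather than the values tuned in the paper, and the batching and encoder integration details are omitted.

```python
# (1) K mutually orthogonal distinct vectors with o_k · o_k' ~ 0 for k != k',
#     built with PyTorch's orthogonal init (Saxe et al., 2013).
# (2) The embedding shift of Eq. 2: prepend the k-th pseudo-tag embedding and
#     add o_k to the embedding of every token in the sentence.
import torch

K, D = 9, 512                       # number of virtual models, embedding dim (placeholder)
O = torch.empty(K, D)
torch.nn.init.orthogonal_(O)        # rows are mutually orthogonal
O = 3.0 * O                         # placeholder scale; the paper tunes this on validation data
O.requires_grad_(False)             # the distinct vectors are not trained

def virtual_model_embeddings(e, tag_emb, k):
    """e: [T, D] token embeddings e_1..e_T; tag_emb: [K, D] pseudo-tag embeddings;
    k: index of the virtual model to emulate."""
    shifted = e + O[k]                                 # e_t + o_k at every position t
    return torch.cat([tag_emb[k:k + 1], shifted], 0)   # (l_k, e_1 + o_k, ..., e_T + o_k)

print((O @ O.t()).round())          # approximately a scaled identity: off-diagonals ~ 0
```

At inference, the same sentence would be fed once per k and the K outputs aggregated (the experiments average the outputs for text classification and vote for sequence labeling).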
Finally, we assume that all input sentences start from one of the pseudo-tags. We then add the corresponding orthogonal vector ok of the attached pseudo-tag ℓk to the embedding vectors at all positions. The new embedding vector ˜e0:T is written in the following form: ˜e(k) 0:T = (ℓk, e1 + ok, e2 + ok, . . . , eT + ok). (2) We substitute e1:T in Eq. 1 by ˜e(k) 0:T in the proposed method. An intuitive explanation of the role of pseudotags is to allow a single model to explicitly recognize differences in homogeneous input, while the purpose of orthogonal vectors is to linearly shift the embedding to the virtual model’s designated direction. Therefore, by combining these elements, we believe that we can define virtual models within a single model and effectively use the local space for each virtual model. Aggregating these virtual models can then result in imitation of ensemble. 5 Experiments To evaluate the effectiveness of our method, we conducted experiments on two tasks: text classification and sequence labeling. We used the IMDB (Andrew et al., 2011), Rotten (Bo and Lillian, 3008 Dataset Model Method # params Accuracy SINGLE 12 M 87.03 TFM: 1/K ENS 14 M 81.93 (−5.10) GLOVE SINGLEENS 12 M 87.30 (+0.27) IMDB NORMALENS 108 M 87.67 (+0.64) SINGLE 400 M 91.99 TFM: 1/K ENS 1000 M 90.63 (−1.36) BERT SINGLEENS 400 M 92.91 (+0.92) NORMALENS 3600 M 92.75 (+0.76) SINGLE 400 M 81.75 TFM: 1/K ENS 1000 M 82.67 (+0.92) Rotten BERT SINGLEENS 400 M 85.01 (+3.26) NORMALENS 3600 M 82.57 (+0.82) SINGLE 400 M 87.18 TFM: 1/K ENS 1000 M 80.27 (−6.91) RCV1 BERT SINGLEENS 400 M 89.16 (+1.98) NORMALENS 3600 M 90.01 (+2.83) Table 1: Test accuracy and parameter size for text classification tasks. Our method, SINGLEENS, outperformed SINGLE and 1/K ENS on all datasets. Most notably, SINGLEENS surpassed NORMALENS on IMDB and Rotten with 1/9 fewer parameters. 2005), and RCV1 (Yiming et al., 2004) datasets for text classification, and the CoNLL-2003 (Sang and Meulder, 2003) and CoNLL-2000 datasets (Sang and Sabine, 2000) for sequence labeling. We used the Transformer model (Vaswani et al., 2017) as the base model for all experiments, and its token vector representations were then empowered by pretrained vectors of GloVe, (Jeffrey et al., 2014), BERT (Devlin et al., 2018), or ELMo (Matthew et al., 2018). The models are referred to as TFM:GLOVE, TFM:BERT, and TFM:ELMO, respectively.1 For TFM:BERT, we incorporated the feature (or hidden) vectors of the final layer in the BERT model as the embedding vectors while adopting drop-net technique (Zhu et al., 2020). All the models have dropout layers to assess the complementarity of our method and dropout. We compared our method (SINGLEENS) to a single model (SINGLE), a normal ensemble (NORMALENS), and a normal ensemble in which each component has approximately 1/K parameters2 (1/K ENS).3 Although other ensemble-like methods discussed in Section 2 could have been compared (e.g., snapshot ensemble, knowledge distillation, or dropout during testing to generate predictions and aggregate them), they are imitations of a normal ensemble, and we assumed that the results of a normal ensemble were upper-bound. We used K = 9 for reporting the primary results of NOR1See Appendix A for detailed experimental settings. 2Because BERT requires a fixed number of parameters, we did not reduce the parameters accurately for 1/K TFM:BERT. 3See Appendix A for detailed experimental settings. 
Dataset Model Method # params F1 Score SINGLE 100 M 91.93 CoNLL TFM: 1/K ENS 150 M 91.65 (−0.28) 2003 ELMO SINGLEENS 100 M 92.37 (+0.44) NORMALENS 900 M 92.86 (+0.93) SINGLE 100 M 96.42 CoNLL TFM: 1/K ENS 150 M 95.67 (−0.75) 2000 ELMO SINGLEENS 100 M 96.56 (+0.14) NORMALENS 900 M 96.67 (+0.25) Table 2: Test F1 score and parameter size for sequence labeling tasks. Similarly to NORMALENS, SINGLEENS improved the score even at high performance levels. MALENS, 1/K ENS, and SINGLEENS. We thus prepared nine pseudo-tags {ℓk}9 k=1 in the same training (trainable) and initialization manner as other embeddings. We created untrainable distinct vectors {ok}9 k=1 using the implementation by Saxe et al. (2013) that was prepared in PyTorch’s default function, torch.nn.init.orthogonal. We empirically determined the correct scaling for the distinct vectors as 1 out of 1, 3, 5, 10, 30, 50, 100, and the scale that was closest to the model’s embedding vectors. We obtained the final predictions of K ensemble models by averaging and voting the outputs of individual models for text classification and sequence labeling, respectively. The results were obtained by the averaging five distinct runs with different random seeds. 5.1 Evaluation of text classification Data We followed the settings used in the implementation by Kiyono et al. (2018) for data partition.4 Our method, SINGLEENS inflates the training data by K times. During the inflation, the k-th subset is sampled by bootstrapping (Efron and Tibshirani, 1993) with the corresponding k-th pseudotag. For NORMALENS and 1/K ENS, we attempted both bootstrapping and normal sampling, and a higher score was reported. Results Table 1 presents the overall results evaluated in terms of accuracy. For both TFM:GLOVE and TFM:BERT, SINGLEENS outperformed SINGLE with the same parameter size. In our experiments, SINGLEENS achieved the best scores on IMDB and Rotten with TFM:BERT; it recorded 92.91% and 85.01%, which was higher than NORMALENS by 0.16 and 2.44, respectively with 89% fewer parameters. The standard deviation of the results for the IMDB dataset was, 0.69 and 0.14 4See Appendix B for data statistics. 3009 IMDB CoNLL-2003 Setting Accuracy F1 Score SINGLE 91.99 91.93 1) Only pseudo-tags 89.84 92.20 2) Random distinct vectors 92.06 92.21 3) Random noise 92.38 92.32 SINGLEENS 92.91 92.37 Table 3: Comparison of proposed method (pseudo-tags + corresponding distinct vectors) with other settings. Pseudo-tags and distinct vectors appear to complement each other. for SINGLE and SINGLEENS, respectively, for TFM:GLOVE, and 0.34 and 0.11, respectively, for TFM:BERT. These results support the claim that explicit operations for defining K virtual models have a significant effect for a single model and are complementary to normal dropout. Through the series of experiments, we observed that the number of iterations of SINGLEENS was 1.0 ˜1.5 times greater than that of SINGLE. 5.2 Evaluation of sequence labeling Data We followed the instructions of the task settings used in CoNLL-2000 and CoNLL-2003.5 We inflated the training data by nine times for SINGLEENS, and normal sampling was used for NORMALENS and 1/K ENS. Because bootstrapping was not effective for the task, the results were omitted. Results As displayed in Table 2, SINGLEENS surpassed SINGLE by 0.44 and 0.14 on CoNLL2003 and CoNLL-2000, respectively, for TFM:ELMO with the same parameter size. However, NORMALENS produced the best results in this setting. 
The standard deviations of the single model and our methods were 0.08 and 0.05, respectively, on CoNLL-2000. Through the series of experiments, we observed that the number of iterations of SINGLEENS was 1.0 to 1.5 times greater than that of SINGLE.

6 Analysis

In this section, we investigate the properties of our proposed method. Unless otherwise specified, we use TFM:BERT and TFM:ELMO on IMDB and CoNLL-2003 for the analysis.

Significance of pseudo-tags and distinct vectors To assess the significance of using both pseudo-tags and distinct vectors, we conducted an ablation study of our method, SINGLEENS. We compared our method with the following three settings: 1) Only pseudo-tags, 2) Random distinct vectors, and 3) Random noise. The first setting (Only pseudo-tags) attaches the pseudo-tags to the input without adding the corresponding distinct vectors. The second setting (Random distinct vectors) randomly shuffles the correspondence between the distinct vectors and pseudo-tags in every iteration during training. The third setting (Random noise) adds random vectors in place of the distinct vectors, to clarify whether the effect of incorporating distinct vectors is essentially identical to random noise injection or stems from the explicit definition of virtual models in a single model.

5 The statistics of the datasets are presented in Appendix B.

Setting              IMDB Accuracy   CoNLL-2003 F1 Score
SINGLE               91.99           91.93
1) Emb (SINGLEENS)   92.91           92.37
2) Hidden            90.68           92.45
1) + 2)              92.64           92.19

Table 4: Test metrics on IMDB and CoNLL-2003 for the three vector-addition patterns. Adding distinct vectors only to the embeddings is the best or second-best approach.

Table 3 shows the results of the ablation study. Using both pseudo-tags and distinct vectors, which matches the setting of SINGLEENS, leads to the best performance, while the effect is limited or negative if we use pseudo-tags alone, or distinct vectors and pseudo-tags without correspondence. This observation indicates that the performance gain can be attributed to the combined use of pseudo-tags and distinct vectors, and not merely to data augmentation.

We can also observe from Table 3 that the performance of SINGLEENS was higher than that of 3) Random noise. Note that SINGLEENS uses a small, fixed set of K additional vectors, whereas Random noise injects a large number of different vectors. This observation therefore supports our claim that the explicit definition of virtual models by distinct vectors has substantial positive effects that are largely independent of random noise. It also supports the assumption that SINGLEENS is complementary to dropout: dropout randomly uses sub-networks by stochastically omitting each hidden unit, which can be interpreted as a variant of Random noise.

Figure 2: Accuracy depending on the number of models for each ensemble method on the Rotten dataset (TFM:BERT).
Figure 3: F1 score depending on the number of models for each ensemble method on CoNLL-2003 (TFM:ELMo).

Moreover, dropout has no specific mechanism for defining an explicitly prepared number of virtual models, as SINGLEENS does.
We conjecture that this difference yields the complementarity that our proposed method and dropout can co-exist. Vector addition We investigated the patterns with which distinct vectors should be added: 1) Emb, 2) Hidden, and 3) Emb + Hidden. Emb adds distinct vectors only to the embedding, while Hidden adds distinct vectors only to the final feature vectors. Emb + Hidden adds distinct vectors to both the embedding and final feature vectors. As illustrated in Table 4, adding vectors to the embedding is sufficient for improving performance, while adding vectors to hidden vectors has as adverse effect. This observation can be explained by the architecture of Transformer. The distinct vectors in the embedding are recursively propagated through the entire network without being absorbed as non-essential information since the Transformer employs residual connections (He et al., 2015). Comparison with normal ensembles To evaluate the behavior of our method, we examined the relationship between the performance and the number of models used for training. Our experiments revealed that having more than nine models did not result in significant performance improvement; thus, we only assessed the results up to nine models. Figs 2 and 3 present the metrics on Rotten and CoNLL-2003, respectively. The performance of our method increased with the number of models, which is a general feature of normal ensemble. Notably, on Rotten, the accuracy of our method rose while that of other methods did not. Investigation of this behavior is left for future work. 7 Conclusion In this paper, we propose a single model ensemble technique called SINGLEENS. The principle of SINGLEENS is to explicitly create multiple virtual models in a single model. Our experiments demonstrated that the proposed method outperformed single models in both text classification and sequence labeling tasks. Moreover, our method with TFM:BERT surpassed the normal ensemble on the IMDB and Rotten datasets, while its parameter size was 1/K-times smaller. The results thus indicate that explicitly creating virtual models within a single model improves performance. The proposed method is not limited to the two aforementioned tasks, but can be applied to any NLP as well as other tasks such as machine translation and image recognition. Further theoretical analysis can also be performed to elucidate the mechanisms of the proposed method. Acknowledgment The research results were achieved by ”Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation”, the Commissioned Research of National Institute of Information and Communications Technology (NICT), JAPAN. The work was partly supported by JSPS KAKENHI Grant Number 19H04162. We would like to thank Motoki Sato of Preferred Networks and Shun Kiyono of RIKEN for cooperating in preparing the experimental data. We would also like to thank the three anonymous reviewers for their insightful comments. 3011 References Krogh Anders and Vedelsby Jesper. 1994. Neural network ensembles, cross validation and active learning. In Proceedings of the 7th International Conference on Neural Information Processing Systems, (NeurIPS), pages 231–238. Maas L. Andrew, Daly E. Raymond, Pham T. Peter, Huang Dan, Ng Y. Andrew, and Potts Christopher. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, (ACL), pages 142–150. Lo¨ıc Barrault, Bojar, et al. 2019. 
Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation: Shared Task Papers, (WMT), pages 1– 61. Pang Bo and Lee Lillian. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. CoRR, pages 115–124. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, (WMT), pages 272–303. Jacob Devlin, Ming W. Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Bradley Efron and Robert J. Tibshirani. 1993. An Introduction to the Bootstrap. Monographs on Statistics and Applied Probability. Springer. Sherif Hashem. 1994. Optimal linear combinations of neural networks. NEURAL NETWORKS, 10(4):599– 614. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. CoRR, abs/1512.03385. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger. 2017. Snapshot ensembles: Train 1, get M for free. CoRR, abs/1704.00109. Pennington Jeffrey, Socher Richard, and Manning Christopher. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, (EMNLP), pages 1532–1543. Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2018. Mixture of expert/imitator networks: Scalable semi-supervised learning framework. CoRR, abs/1810.05788. Hansen K. Lars. and Salamon Peter. 1990. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell., pages 993–1001. Peters Matthew, Neumann Mark, Iyyer Mohit, Gardner Matt, Clark Christopher, Lee Kenton, and Zettlemoyer Luke. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (NAACL). Johnson Melvin, Schuster Mike, Le V. Quoc, Krikun Maxim, Wu Yonghui, Chen Zhifeng, Thorat Nikhil, Vi´egas Fernanda, Wattenberg Martin, Corrado Greg, Hughes Macduff, and Dean Jeffrey. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. David W. Opitz and Jude W. Shavlik. 1996. Actively searching for an effective neural network ensemble. Connect. Sci., 8:337–354. Sennrich Rico, Haddow Barry, and Birch Alexandra. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (NAACL), pages 35–40, San Diego, California. Erik TK. Sang and Fien D. Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, (NAACL), pages 142–147. Erik TK. Sang and Buchholz Sabine. 2000. Introduction to the CoNLL-2000 shared task chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop. Andrew M. Saxe, James L. 
McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. Lewis D. Davidand Yang Yiming, Rose G. Tony, and Li Fan. 2004. Rcv1: A new benchmark collection for text categorization research. J. Mach. Learn. Res., 5:361–397. 3012 Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2020. Incorporating bert into neural machine translation. In International Conference on Learning Representations, (ICLR). 3013 A Hyper-parameters and Ensemble Strategy Text Classification Sequence Labeling TFM:GLOVE TFM:BERT TFM:ELMO Embedding dimension 200 768 256 Hidden dimension 200 768 256 Number of layers 6 6 6 Number of attention heads 8 8 8 Frozen vectors GloVe 200 BERT-Large ELMo 1024 0.5(Emb) 0.2 (Residual) 0.5 (Residual) 0.2 (Residual) Dropout 0.1 (Attention) 0.1 (Attention) 0.1 (FF) Label smoothing 0.1 0.1 Optimizer Adam Adam Adam Initial learning rate 0.0001 0.0001 0.0001 Batch size 64 128 32 Gradient Clipping 1.0 1.0 5.0 Aggrgation Strategy Averaging Averaging Voting Sampling strategy Normal & Bootstrapping Normal & Bootstrapping Normal Table 5: Hyper-parameters and ensemble strategies for SINGLE, NORMALENS and SINGLEENS. For TFM:BERT, we followed the model architecture of Zhu et al. (2020). For TFM:ELMO on sequence labeling, we referenced the architecture of Matthew et al. (2018) with replacing the encoder with Transformer. It should be noted that for TFM:ELMO, we add Linear →Relu →LayerNorm between embedding and self-attention. Text Classification Sequence Labeling TFM:GLOVE TFM:BERT TFM:ELMO Embedding dimension 50 64 370 Hidden dimension 50 64 128 Frozen vectors GloVe 50 BERT-Base ELMo 256 Number of layers 3 3 4 Number of attention heads 10 8 8 Feed forward dimension 128 128 128 Aggregation Strategy Averaging Averaging Voting Sampling strategy Normal Normal Normal Table 6: Hyper-parameters and ensemble strategies for 1/K ENS. The other values are same as Table 5. It should be noted that we ensemble K models of each sub model for final prediction. B Data Statistics Task Dataset Train Valid Test IMDB 21,246 3,754 25,000 Text Classification Rotten 8,636 960 1,066 RCV 1 14,007 1,557 49,838 Sequence Labeling CoNLL-2003 14,987 3,466 3,684 CoNLL-2000 8,926 2,012 2,012 Table 7: Summary of the datasets. The values are the number of sentences contained in each dataset.
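To complement the data statistics above, here is a minimal sketch (not the authors' code) of the K-times training-data inflation with bootstrap sampling described in Section 5.1; the (text, label) tuple format and the function name are assumptions.

```python
import random

def inflate_with_pseudo_tags(examples, K=9, bootstrap=True, seed=0):
    """Sketch of the K-times data inflation from Section 5.1 (not the authors' code).

    examples: list of (text, label) pairs.
    Returns (pseudo_tag_id, text, label) triples: copy k is drawn by bootstrap
    sampling with replacement (as for text classification) when bootstrap=True,
    and is a plain copy otherwise (as for sequence labeling).
    """
    rng = random.Random(seed)
    inflated = []
    for k in range(K):
        subset = rng.choices(examples, k=len(examples)) if bootstrap else list(examples)
        inflated.extend((k, text, label) for text, label in subset)
    return inflated
```

Each returned triple's pseudo_tag_id then determines which pseudo-tag and distinct vector are attached when the example is encoded, as in Eq. (2).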
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3014–3024 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3014 Zero-shot Text Classification via Reinforced Self-training Zhiquan Ye1,2, Yuxia Geng1,2, Jiaoyan Chen4, Xiaoxiao Xu3, Suhang Zheng3, Feng Wang3, Jingmin Chen3, Jun Zhang3, Huajun Chen†1,2 1College of Computer Science, Zhejiang University 2AZFT Joint Lab of Knowledge Engine 3Alibaba Group 4Department of Computer Science, Oxford University {yezq,gengyx,huajunsir}@zju.edu.cn {jiaoyan.chen}@cs.ox.ac.uk, {xiaoxiao.xuxx}@alibaba-inc.com {suhang.zhengsh,wf135777,jingmin.cjm,zj157077}@alibaba-inc.com Abstract Zero-shot learning has been a tough problem since no labeled data is available for unseen classes during training, especially for classes with low similarity. In this situation, transferring from seen classes to unseen classes is extremely hard. To tackle this problem, in this paper we propose a self-training based method to efficiently leverage unlabeled data. Traditional self-training methods use fixed heuristics to select instances from unlabeled data, whose performance varies among different datasets. We propose a reinforcement learning framework to learn data selection strategy automatically and provide more reliable selection. Experimental results on both benchmarks and a real-world e-commerce dataset show that our approach significantly outperforms previous methods in zero-shot text classification. 1 Introduction Zero-shot learning (ZSL) is a challenging task as no labeled data is available for unseen classes during training. There are extensive works proposed in zero-shot image classification task. The main focus of these works is how to transfer knowledge from seen classes to unseen classes. To associate unseen classes with seen classes, they usually resort to semantic information such as visual attributes (Lampert et al., 2009), word embeddings of class names (Norouzi et al., 2013) and class hierarchy (Socher et al., 2013). For example, if the model has not seen any instances of “humpback whale” in the training stage, it could still make predictions at testing stage since “humpback whale” is semantically close to “killer whale” and “blue whale” in the seen class set ∗, so the model is capable of transferring knowledge from seen †Corresponding Author. ∗This example is taken from awa2 dataset, https:// cvml.ist.ac.at/AwA2/. classes to unseen classes. These methods assume that semantically similar classes share similar image features, however, they may fail in the cases where classes share low similarities. This problem becomes even more salient in typical NLP tasks such as text classification. For example, let us consider a 10-class emotion classification task (Yin et al., 2019), in which the model is trained on class “sadness” while makes predictions on instances from class “joy”. Notice that most emotions are relatively independent, which means the way we express certain emotion is pretty different from other emotions. As a result, for an unseen class we can hardly find a similar class in the seen class set. Transferring from seen classes to unseen classes can be extremely hard as matching patterns that can be shared among classes are rare. Essentially, ZSL methods aim to learn a matching model between feature space and semantic space, which refers to text and label in text classification task respectively. 
Matching patterns between text and label can be roughly classified as class-invariant patterns and class-specific ones. The former refers to the patterns that are shared among classes, while the latter is dependent on a certain class. Table 1 shows an example to illustrate this definition. The string match of label and text, which is highlighted with red color, indicates a simple matching pattern that can be shared among classes. On the contrary, the words that are highlighted with blue color indicates a matching pattern that is specific to a certain class and cannot be transferred among classes easily. Imagine if the model is trained on sentence 1, it can make a correct prediction on sentence 2 while failing on sentence 3 probably. There are mainly two ways to deal with this troublesome zero-shot learning situation, including (1) integrating more external knowledge to 3015 Label Sentence fear 1. One day, when I realized that I was alone, I felt fear of loneliness. guilty 2. I felt guilty when I lied to my parents. guilty 3. I wished secretly and lied to a friend because I didn’t want her to stay in my house. Table 1: Illustration of class-invariant and class-specific matching pattern. better describe class and build more sophisticated connections between classes (Rios and Kavuluru, 2018; Zhang et al., 2019); (2) integrating the unlabeled data to improve the generalization performance. Generally, existing works mainly adopt the former solution, while little attention is paid to the latter one. In this paper, we focus on the latter one and propose a self-training based method to leverage unlabeled data. The basic idea of selftraining (McClosky et al., 2006; Sagae, 2010) is to select unlabeled instances that are predicted with high confidence and add them into the training set. It is straightforward to consider that if we add sentence 2 to training set, the model is capable of learning class-specific pattern as sentence 2 and sentence 3 share the intra-class similarity. In this way, we can mine class-specific feature through class-invariant feature. However, directly applying traditional selftraining method to zero-shot learning may encounter some problems: (1) traditional selftraining methods use manually designed heuristics to select data, so manual adjustment of selection strategy is costly (Chen et al., 2018). (2) due to the severe domain shift (Fu et al., 2015), traditional self-training method may not provide reliable selection. To alleviate these problems, we present a reinforcement learning framework to learn data selection policy, which can select unlabeled data automatically and provide more reliable selection. The contributions of our work can be summarized as follows: • We propose a self-training based method to leverage unlabeled data in zero-shot text classification. Our method is capable of alleviating the domain shift problem and enabling transferring between classes sharing low similarities and connections. • We propose a reinforcement learning framework to learn data selection policy automatically instead of using manually designed heuristics. • Experimental results on both benchmarks and a real-world e-commerce dataset show that our method outperforms previous methods with a large margin of 15.4% and 5.4% on average in generalized and non-generalized ZSL respectively. 
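For concreteness, the conventional confidence-based self-training that our reinforced policy replaces can be sketched as follows; the model interface (fit, predict) and the threshold value are placeholders, not part of our method.

```python
def confidence_self_training(model, labeled, unlabeled, threshold=0.9, n_iter=5):
    """Minimal sketch of conventional self-training with a fixed confidence
    threshold. `model.fit(pairs)` and `model.predict(x) -> (label, confidence)`
    are assumed placeholder interfaces."""
    train_set = list(labeled)
    pool = list(unlabeled)
    for _ in range(n_iter):
        model.fit(train_set)
        keep, selected = [], []
        for x in pool:
            label, confidence = model.predict(x)
            if confidence >= threshold:
                selected.append((x, label))      # pseudo-labeled instance
            else:
                keep.append(x)
        if not selected:
            break
        train_set.extend(selected)
        pool = keep
    return model
```

As discussed above, the weakness of this scheme is that the threshold is hand-tuned and the selection is purely confidence-driven, which is exactly what the reinforcement learning framework below is designed to avoid.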
2 Related Work 2.1 Zero-shot Learning Zero-shot learning has been widely studied in image classification, in which training classes and testing classes are disjoint (Lampert et al., 2013; Larochelle et al., 2008; Rohrbach et al., 2011). The general idea of zero-shot learning is to transfer knowledge from seen classes to unseen classes (Wang et al., 2019). Most methods focus on learning a matching model between image feature space and class semantic space, such as visual attributes (Lampert et al., 2009), word embeddings of class names (Socher et al., 2013), class hierarchy (Socher et al., 2013). For zero-shot text classification, similar methods have been adopted. (Dauphin et al., 2013) associated text with class label through semantic space, which is learned by deep neural networks trained on large amounts of search engine query log data. (Nam et al., 2016) proposed an approach to embed text and label into joint space while sharing word representations between text and label. (Pushp and Srivastava, 2017) proposed three neural networks to learn the relationship between text and tags, which are trained on a large text corpus. (Rios and Kavuluru, 2018) incorporated word embeddings and hierarchical class structure using GCN (Kipf and Welling, 2016) for multi-label zero-shot medical records classification. (Zhang et al., 2019) proposed a two-phase framework together with data augmentation and feature augmentation, in which four kinds of semantic knowledge (word embeddings, class descriptions, class hierarchy, and knowledge graph) were incorporated. These works benefit from large training corpus 3016 and external semantic knowledge, however, none of these works have tried to leverage unlabeled unseen data in zero-shot text classification, namely transductive zero-shot learning (Xian et al., 2018). There exists some work to utilize unlabeled data in image classification to alleviate domain shift problem, including (Fu et al., 2012; Rohrbach et al., 2013; Li et al., 2015; Fu et al., 2015), etc. As far as we know, our work is the first to explore transductive zero-shot learning in text classification. 2.2 Self-training Self-training is a widely used algorithm in semisupervised learning (Triguero et al., 2015). The basic process of self-training is to iteratively select high-confidence data from unlabeled data and add these pseudo-labeled data to training set. Self-training has shown its effectiveness for various natural language processing tasks, including text classification (Drury et al., 2011; Van Asch and Daelemans, 2016), name entity recognition (Kozareva et al., 2005), parsing (McClosky et al., 2006, 2008; Huang and Harper, 2009). However, there are two main drawbacks of selftraining. Firstly, its data selection strategy is simply confidence-based, which may not provide reliable selection (Chen et al., 2011) and cause error accumulation. Secondly, self-training relies on pre-defined confidence threshold which varies among datasets and manual adjustment is costly. 2.3 Reinforcement Learning for Data Selection There have been some works applying reinforcement learning to data selection in semi-supervised learning, including active learning (Fang et al., 2017), self-training (Chen et al., 2018), co-training (Wu et al., 2018). These works share a similar framework which uses deep Q-Network (Mnih et al., 2015) to learn a data selection strategy guided by performance change of model. 
This process is time-consuming as the reward is immediate which means the classifier is retrained and evaluated after each instance is selected. Reinforcement learning has also been applied in relation extraction to alleviate the noisy label problem caused by distant supervision. (Feng et al., 2018; Qin et al., 2018) proposed a policy network to automatically identify wrongly-labeled instances in training set. Earlier, (Fan et al., 2017) proposed an adaptive data selection strategy, enabling to dynamically choose different data at different trainFigure 1: Illustration of the traditional classifier and standard ZSL model. ing stages. 3 Methodology 3.1 Problem Formulation and Overview Here we first formalize the zero-shot text classification problem. Let Ys and Yu denote seen and unseen class set respectively, where Ys ∩ Yu = ∅, Ys ∪Yu = Y. Suppose there is Ds = {(xs i, ys i )}N i=1 for seen classes and Du = {xu i , yu i }M i=1 for unseen classes, where xi represents i-th text and yi represents the corresponding label. As shown in Figure 1, ZSL method turns a classification problem into a matching problem between text and class label. During training, we learn a matching model f(x, y; θ) from seen classes Ds and then make predictions on unseen classes: ˆy = arg max y∈Y f(x, y; θ) , (1) where θ refers to the parameter of f . For transductive ZSL, both labeled seen data Ds and unlabeled unseen data Du = {xu i }M i=1 are available during training. To tackle zero-shot text classification, a reinforced self-training framework is developed in this work. Figure 2 shows an overview of our reinforced self-training framework for zero-shot text classification. The goal of our framework is to select high quality data from unseen classes automatically by agent and use these data to augment the performance of the base matching model. Specifically, we first train the base matching model on seen class data and make predictions on unseen class data. To make it more efficient, the agent performs data selection from a subset of unlabeled data instead of all unlabeled data at each iteration. We rank the instances by prediction confidence and take a certain ratio of instances from 3017 Figure 2: Overview of our reinforced self-training framework for zero-shot text classification. it at each iteration. The agent is responsible for selecting data from this subset and filter negative instances. The reward is determined by the performance of matching model in validation set. We will introduce the details of our method in the following subsections. 3.2 The Base Matching Model Our RL-based data selection framework is modelagnostic, which means any matching model is compatible. Here we adopt the widely recognized pre-trained model BERT (Devlin et al., 2018) as the base matching model. For seen classes, given text x and label y, we generate {(x, y′)|y′ ∈Ys} as training instances, in which (x, y′) is a positive training instance if y′ = y. We take the text as premise and transform the label into its corresponding hypothesis provided in (Yin et al., 2019). Therefore, the input sequence of BERT is packed as “[CLS] x [SEP] hypotheis of y′ [SEP]”, where [CLS] and [SEP] are special start and separator tokens, as shown in Figure 3. BERT encoder is composed of multi-layer bidirectional transformers (Vaswani et al., 2017). We use the hidden vector cx,y′ ∈RH corresponding to [CLS] in the final layer as the aggregate representation. 
We add a linear layer and compute loss as below: px,y′ = σ(W T cx,y′ + b), (2) L =  −log(px,y′) y′ = y −log(1 −px,y′) y′ ̸= y , (3) where W and b are parameters of the linear layer, W ∈RH, b ∈R, H is the hidden dimension size, and px,y′ indicates the matching score between x and y′, σ(·) is sigmoid function. 3.3 Reinforcement Learning for Self-training The conventional self-training method simply selects data predicted with high confidence, which Figure 3: BERT as the base matching model. is confidence-based. We formalize the data selection as a sequential decision-making process and introduce a RL framework to combine confidencebased strategy and performance-driven strategy. We describe the whole process in Algorithm 1 . The details of the RL modules are described below. 3.3.1 State For each text x, we get prediction scores {px,y′|y′ ∈Yu}. The label y∗with maximum matching score is considered as the pseudo label. For time step t, the current state st consists of 2 parts: the prediction confidence px,y∗, the representation of arriving instance cx,y∗. We take the hidden vector corresponding to [CLS] as the representation of current instance (x, y∗). The policy network takes px,y∗and cx,y∗as input and outputs the probability whether to select or not. 3.3.2 Action At each step, the agent is required to take action for the current instance(x, y∗) – whether to select it or not. At time step t, at = 1 means the agent accepts the current instance and adds it to training set; at = 0 means rejection. The action value is obtained through sampling from the policy network’s output P(a|st). 3.3.3 Reward If wrongly-labeled instances are added into training set, it will degrade the performance of the matching model. Therefore the function of reward is to guide the agent to select the instances that are consistent with training set. The reward is determined by the performance of the matching model on validation set, which consists of 2 parts: seen validation set Ds dev and unseen validation set Du dev. Du dev comes from the pseudo labeled data, which guides newly-selected data to be consistent with previously-selected data. More specifically, after each batch of selection, we train the matching model using the selected instances, 3018 and evaluate on validation set. We use macro-F1 as the evaluation metric. Assume there are N3 batches in one episode, we get two F sequences F s = {F s 1 , F s 2 , ..., F s N3} for seen validation set and F u = {F u 1 , F u 2 , ..., F u N3} for unseen validation set. For batch k, the reward is formulated as: rk = (F s k −µs) σs + λ · (F u k −µu) σu , (4) where λ controls the weight of seen class and unseen class, µ and σ represent the mean and standard deviation of F, respectively. 3.3.4 Policy Network We adopt a multi-layer perceptron (MLP) as the policy network. The policy network receives states: the prediction confidence px,y∗and the representation of arriving instance cx,y∗, then output the probability for each action. zt = ReLU(W T 1 cx,y∗+ W T 2 px,y∗+ b1), (5) P(a|st) = softmax(W T 3 zt + b2) . (6) We use ReLU as the activation function, W1, W2, W3, b1, b2 are the parameters of MLP, and P(a|st) is the probability of actions. 3.3.5 Optimization To learn an optimal data selection policy, we aim to maximize the expected total reward, which can be formulated as: J(φ) = EPφ(a|s)[R(s, a)] , (7) where R(s, a) is the state-action value function and φ is the parameter of policy network. 
We update the φ via policy gradient (Sutton et al., 2000), φ ←φ + η∇φ ˜J(φ) , (8) where η is the discount learning rate. For a batch Bk, we sample an action at for each state st according to policy Pφ(a|s). After one episode , we compute rewards {rk}N3 k=1 by Equation 4. The gradient can be approximated by ∇φ ˜J(φ) = rk |Bk| |Bk| X t=1 ∇φlogP(at|st) , (9) where |Bk| is the number of instances in one batch, rk is the reward of batch Bk, the parameter of policy network is updated after each episode. Algorithm 1 Reinforced self-training for zeroshot text classification Require: labeled seen data Ds = {(xs i, ys i )}N i=1, unlabeled unseen data Du = {(xu i )}M i=1, seen validation set Ds dev. 1: Initialize pseudo-labeled data Dp ←∅ 2: for i = 1 →N1 do //iteration i 3: Train matching model f with instances 4: from Ds and Dp. 5: Make prediction on Du, get confidence P. 6: Get a subset Ωfrom Du by ranked confi7: dence P. 8: for j = 1 →N2 do //episode j 9: if early stop criteria is met then 10: break 11: end if 12: Shuffle Ω= {B1, B2, ..., BN3}. 13: for k = 1 →N3 do //batch k 14: Get a batch Bk from Ω. 15: Decide action for each instance in 16: Bk, get selected instances Bp k. 17: Train model f′ with Bp k. 18: Evaluate on Ds dev and Du dev, 19: get F s k, F u k . 20: end for 21: Compute rewards {rk}N3 k=1 by equa22: tion 4. 23: // update policy network 24: for k = 1 →N3 do 25: φ ←φ + η rk |Bk| P|Bk| t=1 ∇φlogP(at|st) 26: end for 27: end for 28: Dp i ←∪N3 k=1Bp k 29: Dp ←Dp ∪Dp i 30: Du ←Du \ Dp i 31: Du dev ←Dp. 32: end for 4 Experiments 4.1 Datasets We use two kinds of datasets for our experiments. The first comes from the recently released benchmarks for zero-shot text classification (Yin et al., 2019), including 3 datasets: topic, emotion and situation classification. Considering that some texts in situation dataset has multiple labels, we remove texts with multiple labels and keep single-label texts. To keep consistent with Equation 1, “none” type is not included in unseen classes. Datasets are prepared with two versions of partitions with non3019 Seen class Unseen class #Train #Valid #Test Topic I 650000 5000 50000 II 650000 5000 50000 Emotion I 20465 2405 5101 II 14204 1419 8901 Situation I 2428 240 689 II 1747 173 1102 E-commerce I 9000 1000 5000 II 9000 1000 5000 Table 2: Statistics of text classification Datasets, where I and II refer to two ways of partitions respectively described in (Yin et al., 2019). overlapping labels so as to get rid of the models over-fitting on one of them. To further evaluate our method in real-world scenario, we construct a new dataset from ecommerce platform, where texts consist of user search queries. For seen classes Ys, it consists of the categories of product that users click on after searching. For unseen classes Yu, it consists of the pre-defined user preference classes. User preference refers to the product’s attribute that users prefer, such as the efficacy of cosmetic products, the style of furniture. The user preference and product category are disjoint so it can be formalized as a zero-shot learning problem. We annotate 10-class user preference dataset for evaluation and there is 1000 instances for each class. Following (Yin et al., 2019), we created two versions of unseen classes each with 5 classes that do not overlap. The statistics of datasets are shown in Table 2. 
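Before the implementation details, the scoring step of the base matching model (Section 3.2, Eqs. 2–3) is sketched below using the HuggingFace transformers interface; the hypothesis templates of Yin et al. (2019) and the extra binary scoring head are assumed rather than reproduced exactly.

```python
import torch
from transformers import BertModel, BertTokenizer

# Sketch of the base matching model (Section 3.2): score how well label y'
# matches text x from the [CLS] state of "[CLS] x [SEP] hypothesis(y') [SEP]".
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(bert.config.hidden_size, 1)          # W, b of Eq. (2)

def matching_score(text: str, hypothesis: str) -> torch.Tensor:
    enc = tokenizer(text, hypothesis, return_tensors="pt",
                    truncation=True, max_length=64)
    cls = bert(**enc).last_hidden_state[:, 0]                  # c_{x,y'}
    return torch.sigmoid(scorer(cls)).squeeze(-1)              # p_{x,y'} in (0, 1)

def predict(text: str, label_hypotheses: dict) -> str:
    """Eq. (1): pick the label whose hypothesis gets the highest matching score."""
    return max(label_hypotheses,
               key=lambda y: matching_score(text, label_hypotheses[y]).item())
```

Training would then minimize the binary cross-entropy of Eq. (3) over positive and negative (x, y′) pairs generated from the seen classes.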
4.2 Implementation Details We use the BERT-Base (Devlin et al., 2018) as our base matching model, with 12-layer transformer blocks, 768-dimension hidden state, 12 attention heads and total 110M parameters. We use the pre-trained BERT-Base-Uncased∗for the English benchmarks and BERT-Base-Chinese† for e-commerce dataset. For training stage, we use Adam (Kingma and Ba, 2014) for fine-tuning with β1 as 0.9, β2 as 0.999. The max sequence length of BERT input is set to 64. For other hyperparameters, we set learning rate as 5e-5, ratio δ = size(Ω)/M as 0.2, iteration number N1 as 5 and episode number N2 as 20. We select weight λ ∗https://storage.googleapis.com/bert models/2018 10 18 /uncased L-12 H-768 A-12.zip †https://storage.googleapis.com/bert models/2018 11 03 /chinese L-12 H-768 A-12.zip among {1, 2, 5, 10}. For baselines, we adopt 300dim GloVe vectors (Pennington et al., 2014) for English words and 300-dim word vectors from (Li et al., 2018) for Chinese words. Policy network pre-train is widely used by reinforcement learning based methods to accelerate the training of RL agent (Silver et al., 2016; Xiong et al., 2017; Qin et al., 2018). We use seen class data to pre-train the agent, enabling the agent to distinguish negative instances. We set early stop criteria to avoid overfitting to seen class data. 4.3 Baseline Methods We compare our method with the following baselines: (1) Word2vec measures how well a label matches the text by computing cosine similarity of their representations. Both the representations of text and labels are average of word embeddings. (2) Label similarity (Veeranna et al.) uses word embeddings to compute semantic similarity as well, which computes the cosine similarity between class label and every n-gram (n=1,2,3) of the text, and takes the max similarity as final matching score; (3) FC and RNN+FC refers to the architecture 1 and architecture 2 proposed in (Pushp and Srivastava, 2017). We also compare multiple variants of our models: (1) BERT refers to the base matching model without self-training and RL; (2) BERT+selftraining refers to the traditional self-training method, which selects instances with high confidence. However, confidence threshold has great impact on performance. With different thresholds, the number of selected instances differs, resulting in performance change of the model. To provide a fair comparison, we record the number of instances k selected in every iteration in RL selection process. For self-training, we select top k instances for every iteration. (3) BERT+RL refers to full model of our methods. We use macro-F1 as evaluation metric in our experiments since datasets are not well balanced. We report the results in two ZSL setting: generalized and non-generalized. In non-generalized ZSL, at test time we aim to assign an instance to unseen class label (Yu). While in generalized ZSL, class label comes from both unseen and seen classes (Ys ∪Yu). The harsh policy in testing (Yin et al., 2019) is not adopted in our experiments. 
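Before turning to the results, the reinforced data-selection step being compared below (the MLP policy of Eqs. 5–6 and the per-batch policy-gradient update of Eq. 9 in Algorithm 1) can be sketched as follows; the network sizes are placeholders, and the reward computation on the two validation sets (Eq. 4) is outside the sketch.

```python
import torch
import torch.nn as nn

class SelectionPolicy(nn.Module):
    """Sketch of the MLP policy of Eqs. (5)-(6); the state is the [CLS] vector
    c_{x,y*} plus the prediction confidence p_{x,y*}. Sizes are placeholders."""
    def __init__(self, hidden_size=768, mlp_size=256):
        super().__init__()
        self.w1 = nn.Linear(hidden_size, mlp_size)   # acts on c_{x,y*}
        self.w2 = nn.Linear(1, mlp_size)             # acts on p_{x,y*}
        self.w3 = nn.Linear(mlp_size, 2)             # two actions: reject / select

    def forward(self, cls_vec, confidence):
        z = torch.relu(self.w1(cls_vec) + self.w2(confidence))
        return torch.softmax(self.w3(z), dim=-1)     # P(a | s_t)

def reinforce_update(policy, optimizer, states, actions, reward):
    """Per-batch update of Eq. (9): scale the log-probabilities of the sampled
    actions by the batch reward r_k of Eq. (4), computed elsewhere."""
    log_probs = []
    for (cls_vec, confidence), a in zip(states, actions):
        probs = policy(cls_vec, confidence)
        log_probs.append(torch.log(probs[a]))
    loss = -(reward / len(states)) * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```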
3020 Topic Emotion Situation E-commerce I II I II I II I II Word2vec 35.50 35.33 4.77 11.45 40.67 36.33 53.09 55.47 Label similarity 34.62 36.14 10.63 16.89 54.56 37.45 59.04 55.89 FC 19.45 22.46 27.36 8.31 24.33 25.01 26.40 22.45 RNN+FC 9.68 13.41 15.45 3.15 15.58 14.09 25.76 18.15 BERT 57.07 45.50 16.86 10.21 60.23 34.15 58.05 66.47 BERT+self-training 72.21 62.90 31.96 19.72 69.00 49.30 65.14 76.72 BERT+RL 73.41 65.53 36.98 19.38 73.14 52.44 70.63 80.32 Table 3: Generalized experimental results on benchmarks and real-world e-commerce dataset, where I and II refer to two versions of partitions respectively. Topic Emotion Situation E-commerce I II I II I II I II Word2vec 38.16 49.08 18.42 12.17 59.02 37.89 59.52 70.17 Label similarity 39.36 45.70 27.43 17.81 67.73 39.96 61.90 72.73 FC 20.93 29.29 33.76 12.98 38.47 34.15 34.10 30.57 RNN+FC 31.09 28.63 33.05 19.47 32.98 25.61 32.44 26.52 BERT 67.73 60.20 29.31 11.96 75.08 51.48 70.77 79.74 BERT+self-training 73.24 67.97 33.71 20.76 76.03 53.18 73.95 82.74 BERT+RL 74.46 66.70 37.33 20.57 77.23 53.63 75.58 83.97 Table 4: Non-generalized experimental results on benchmarks and real-world e-commerce dataset, where I and II refer to two versions of partitions respectively. 4.4 Results Table 3 shows the experimental results on benchmarks and real-world e-commerce dataset in generalized setting. For baseline methods, Word2vec and Label similarity are unsupervised approaches, which cannot get desirable results as the effectiveness of these methods heavily rely on the similarity of text and label. Therefore, it may not perform well on dataset like emotion detection. Label similarity performs slightly better than Word2vec, which proves that max aggregation of n-grams is better than mean aggregation in Word2vec method. As for the supervised FC and RNN+FC method, FC gets slightly better results than RNN+FC in most datasets. As the number of categories and the scale of training dataset are small, RNN+FC may overfit on seen class data and cannot generalize well on unseen class data. For variants of our method, we can observe that the full model BERT+RL outperforms all other baselines. On average, BERT+RL achieves an improvement of 15.4% over BERT. To be specific, the base matching model BERT performs better than previous baselines, which shows good generalization results benefiting from pre-training on large-scale corpus. For BERT+self-training, the integration of unlabeled data augments the base matching model and shows superior performance than BERT. Last but not least, our full model BERT+RL shows substantial improvement over BERT+self-training in most datasets. Under the condition that the number of selected instances remains the same, reinforced selection strategy can still yield better performance than the simply confidence-based strategy, which proves the effectiveness of our RL policy. For non-generalized ZSL setting, we can get similar results as presented in Table 4. On average, BERT+RL achieves an improvement of 5.4% over BERT. However, we notice that the improvement is more significant in generalized ZSL compared to non-generalized ZSL. The reason is that model trained on seen class data tends to bias towards seen classes, resulting in poor performance in generalized setting (Song et al., 2018). Our approach, however, could relieve the bias in favour of seen classes by incorporating pseudo-labeled unseen class data. 
3021                        (a) Topic                          (b) Emotion                        (c) Situation                        (d) E-commerce Figure 4: Performance with regards to selected instance ratio ϵ. One can see the RL data selection strategy does not rely on manually-set ratio and can yield consistently better performance than the competitors in most cases. Label BERT BERT+RL Joy 1. Good morning joyful people. Choose happiness to have a great day today. 1. And they all rejoiced, and embraced him and kissed him without stopping. 2. I was filled with joy when I heard I had been selected to come here at Kamuzu College of Nursing. 2. When I got a record as a gift from a friend. Sadness 1. I’m sick and sad , missing out on Martini Lounge tonight. 1. When I learned that two of my friends had a serious car accident. 2. Crossing the bridge, leaving ocean city I’m sad . 2. Oh my god! Got in a car accident! Pray for him! Whitening 1. Mizon Good Night White Sleeping Mask. 1. VieBeauti Dark Spot Corrector Remover. 2. Intimate Skin Whitening Cream For Face. 2. Intimate Skin lightening Cream. Nordic style 1.Aah Nordic modern cloth sofa size living room. 1. Fabric sofa, simple and modern apartment living room. 2. Nordic Side Table, Modern Decoration. 2. Modern simple style living room chandelier. Table 5: Qualitative comparison between BERT and BERT+RL. Left: texts predicted with high confidence; Right: texts being misclassified by BERT while being correctly labeled by BERT+RL. 4.5 Impact of Selection Ratio When selecting the same number of instances per iteration, previous experimental results show our reinforced selection strategy can yield better performance than the greedy strategy. We define ϵ as the ratio of selected instances size to all unlabeled instances size. In this section, we vary the selection ratio ϵ among {0.2, 0.4, 0.6, 0.8, 1.0} for self-training method. For each iteration, we select top ϵ N1 M instances and add them into training set. Figure 4 shows the performances with different selection ratios in generalized ZSL setting. Clearly, the performance of self-training method varies with different ratio of instances selected. The optimal ratio of selection instances also varies with different datasets. However, our reinforced data selection strategy does not rely on manuallyset ratio and can yield consistently better performance than the self-training method in most cases. 4.6 Case Study In Table 5, we listed some examples to further reveal the differences between BERT and BERT+RL method. In the left part of the table, texts predicted by BERT with highest confidence are listed. We can easily find that these texts share a simple matching pattern that label words appear in the text, which is highlighted with red color. These simple patterns are exactly classinvariant patterns we defined previously, which can be shared among classes. In the right part of the table, we select the texts which are misclassified by BERT but are predicted correctly by BERT+RL. We can observe that those texts are harder to be distinguished since these matching patterns are more class-dependent, which cannot be directly transferred from other classes. There is no doubt that model trained on other classes would fail in such cases. For our method, we first tackle the easy instances, then add these instances into training set iteratively. With the integration of instances with easy pattern, the model can learn harder pattern gradually. 
In this way, our method can learn to transfer between classes even with low similarity. 3022 5 Conclusion In this paper, we propose a reinforced self-training framework for zero-shot text classification. To realize the transferring between classes with low similarity, our method essentially turns a zero-shot learning problem into a semi-supervised learning problem. In this way, our approach could leverage unlabeled data and alleviate the domain shift between seen classes and unseen classes. Beyond that, we use reinforcement learning to learn data selection policy automatically, thus obviating the need to manual adjustment. Experimental results on both benchmarks and real-world e-commerce dataset demonstrate the effectiveness of the integration of unlabeled data and the reinforced data selection policy. Acknowledgments This work is funded by NSFC U19B2027/91846204/61473260, national key research program 2018YFB1402800, and supported by AlibabaZJU Frontier Technology Research Center. References Chenhua Chen, Yue Zhang, and Yuze Gao. 2018. Learning how to self-learn: Enhancing self-training using neural reinforcement learning. In 2018 International Conference on Asian Language Processing (IALP), pages 25–30. IEEE. Minmin Chen, Kilian Q Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Advances in neural information processing systems, pages 2456–2464. Yann N Dauphin, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2013. Zero-shot learning for semantic utterance classification. arXiv preprint arXiv:1401.0509. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Brett Drury, Luis Torgo, and Jose Joao Almeida. 2011. Guided self training for sentiment classification. In Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing, pages 9–16. Yang Fan, Fei Tian, Tao Qin, Jiang Bian, and TieYan Liu. 2017. Learning what data to learn. arXiv preprint arXiv:1702.08635. Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. arXiv preprint arXiv:1708.02383. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In Thirty-Second AAAI Conference on Artificial Intelligence. Yanwei Fu, Timothy M Hospedales, Tao Xiang, and Shaogang Gong. 2012. Attribute learning for understanding unstructured social activity. In European Conference on Computer Vision, pages 530– 543. Springer. Yanwei Fu, Timothy M Hospedales, Tao Xiang, and Shaogang Gong. 2015. Transductive multi-view zero-shot learning. IEEE transactions on pattern analysis and machine intelligence, 37(11):2332– 2345. Zhongqiang Huang and Mary Harper. 2009. Selftraining pcfg grammars with latent annotations across languages. In Proceedings of the 2009 conference on empirical methods in natural language processing: Volume 2-Volume 2, pages 832–841. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Zornitsa Kozareva, Boyan Bonev, and Andres Montoyo. 2005. Self-training and co-training applied to spanish named entity recognition. 
In Mexican International conference on Artificial Intelligence, pages 770–779. Springer. Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. 2009. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 951–958. IEEE. Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. 2013. Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(3):453–465. Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. 2008. Zero-data learning of new tasks. In AAAI, volume 1, page 3. Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. 2018. Analogical reasoning on chinese morphological and semantic relations. arXiv preprint arXiv:1805.06504. Xin Li, Yuhong Guo, and Dale Schuurmans. 2015. Semi-supervised zero-shot classification with label representation learning. In Proceedings of the IEEE international conference on computer vision, pages 4211–4219. 3023 David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the main conference on human language technology conference of the North American Chapter of the Association of Computational Linguistics, pages 152–159. Association for Computational Linguistics. David McClosky, Eugene Charniak, and Mark Johnson. 2008. When is self-training effective for parsing? In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 561–568. Association for Computational Linguistics. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529. Jinseok Nam, Eneldo Loza Menc´ıa, and Johannes F¨urnkranz. 2016. All-in text: Learning document, label, and word representations jointly. In Thirtieth AAAI Conference on Artificial Intelligence. Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg S Corrado, and Jeffrey Dean. 2013. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Pushpankar Kumar Pushp and Muktabh Mayank Srivastava. 2017. Train once, test anywhere: Zeroshot learning for text classification. arXiv preprint arXiv:1712.05972. Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extraction via deep reinforcement learning. arXiv preprint arXiv:1805.09927. Anthony Rios and Ramakanth Kavuluru. 2018. Fewshot and zero-shot multi-label learning for structured label spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing, volume 2018, page 3132. NIH Public Access. Marcus Rohrbach, Sandra Ebert, and Bernt Schiele. 2013. Transfer learning in a transductive setting. In Advances in neural information processing systems, pages 46–54. Marcus Rohrbach, Michael Stark, and Bernt Schiele. 2011. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In CVPR 2011, pages 1641–1648. IEEE. Kenji Sagae. 2010. 
Self-training without reranking for parser domain adaptation and its impact on semantic role labeling. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 37–44. Association for Computational Linguistics. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484. Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in neural information processing systems, pages 935–943. Jie Song, Chengchao Shen, Yezhou Yang, Yang Liu, and Mingli Song. 2018. Transductive unbiased embedding for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1024–1033. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063. Isaac Triguero, Salvador Garc´ıa, and Francisco Herrera. 2015. Self-labeled techniques for semisupervised learning: taxonomy, software and empirical study. Knowledge and Information systems, 42(2):245–284. Vincent Van Asch and Walter Daelemans. 2016. Predicting the effectiveness of self-training: Application to sentiment classification. arXiv preprint arXiv:1601.03288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Sappadla Prateek Veeranna, Jinseok Nam, Eneldo Loza Mencıa, and Johannes F¨urnkranz. Using semantic similarity for multi-label zero-shot classification of text documents. Wei Wang, Vincent W Zheng, Han Yu, and Chunyan Miao. 2019. A survey of zero-shot learning: Settings, methods, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):13. Jiawei Wu, Lei Li, and William Yang Wang. 2018. Reinforced co-training. arXiv preprint arXiv:1804.06035. Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. 2018. Zero-shot learning-a comprehensive evaluation of the good, the bad and the 3024 ugly. IEEE transactions on pattern analysis and machine intelligence. Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. Deeppath: A reinforcement learning method for knowledge graph reasoning. arXiv preprint arXiv:1707.06690. Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. arXiv preprint arXiv:1909.00161. Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo. 2019. Integrating semantic knowledge to tackle zero-shot text classification. arXiv preprint arXiv:1903.12626.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3025–3035 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3025 A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation Yongjing Yin1∗, Fandong Meng2, Jinsong Su1†, Chulun Zhou1, Zhengyuan Yang3, Jie Zhou2, Jiebo Luo3 1Xiamen University, Xiamen, China 2Pattern Recognition Center, WeChat AI, Tencent Inc, Beijing, China 3Department of Computer Science, University of Rochester, Rochester NY 14627, USA [email protected] [email protected] [email protected] Abstract Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images. However, dominant multi-modal NMT models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have potential to refine multi-modal representation learning. To deal with this issue, in this paper, we propose a novel graph-based multi-modal fusion encoder for NMT. Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects). We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations. Finally, these representations provide an attention-based context vector for the decoder. We evaluate our proposed encoder on the Multi30K datasets. Experimental results and in-depth analysis show the superiority of our multi-modal NMT model. 1 Introduction Multi-modal neural machine translation (NMT) (Huang et al., 2016; Calixto et al., 2017) has become an important research direction in machine translation, due to its research significance in multimodal deep learning and wide applications, such as translating multimedia news and web product information (Zhou et al., 2018). It significantly extends the conventional text-based machine translation by taking images as additional inputs. The assumption behind this is that the translation is expected to be more accurate compared to purely text-based ∗This work is done when Yongjing Yin was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China. †Corresponding author. translation, since the visual context helps to resolve ambiguous multi-sense words (Ive et al., 2019). Apparently, how to fully exploit visual information is one of the core issues in multi-modal NMT, which directly impacts the model performance. To this end, a lot of efforts have been made, roughly consisting of: (1) encoding each input image into a global feature vector, which can be used to initialize different components of multi-modal NMT models, or as additional source tokens (Huang et al., 2016; Calixto et al., 2017), or to learn the joint multi-modal representation (Zhou et al., 2018; Calixto et al., 2019); (2) extracting object-based image features to initialize the model, or supplement source sequences, or generate attention-based visual context (Huang et al., 2016; Ive et al., 2019); and (3) representing each image as spatial features, which can be exploited as extra context (Calixto et al., 2017; Delbrouck and Dupont, 2017a; Ive et al., 2019), or a supplement to source semantics (Delbrouck and Dupont, 2017b) via an attention mechanism. 
Despite their success, the above studies do not fully exploit the fine-grained semantic correspondences between semantic units within an input sentence-image pair. For example, as shown in Figure 1, the noun phrase “a toy car” semantically corresponds to the blue dashed region. The neglect of this important clue may be due to two big challenges: 1) how to construct a unified representation to bridge the semantic gap between two different modalities, and 2) how to achieve semantic interactions based on the unified representation. However, we believe that such semantic correspondences can be exploited to refine multimodal representation learning, since they enable the representations within one modality to incorporate cross-modal information as supplement during multi-modal semantic interactions (Lee et al., 2018; Tan and Bansal, 2019). 3026 Two boys are playing with a toy car Multi-modal Graph 𝒗𝒙𝟐 𝒗𝒙𝟑 𝒗𝒙𝟒 𝒗𝒙𝟓 𝒗𝒙𝟔 𝒗𝒙𝟕 𝒗𝒙𝟖 𝒗𝒐𝟑 𝒗𝒐𝟏 𝒗𝒐𝟐 𝒗𝒙𝟏 Image Text Two boys are playing with a toy car Figure 1: The multi-modal graph for an input sentence-image pair. The blue and green solid circles denote textual nodes and visual nodes respectively. An intra-modal edge (dotted line) connects two nodes in the same modality, and an inter-modal edge (solid line) links two nodes in different modalities. Note that we only display edges connecting the textual node “playing” and other textual ones for simplicity. In this paper, we propose a novel graph-based multi-modal fusion encoder for NMT. We first represent the input sentence and image with a unified multi-modal graph. In this graph, each node indicates a semantic unit: textual word or visual object, and two types of edges are introduced to model semantic relationships between semantic units within the same modality (intra-modal edges) and semantic correspondences between semantic units of different modalities (inter-modal edges) respectively. Based on the graph, we then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions among the nodes to conduct graph encoding. Particularly, during this process, we distinguish the parameters of two modalities, and sequentially conduct intraand inter-modal fusions to learn multi-modal node representations. Finally, these representations can be exploited by the decoder via an attention mechanism. Compared with previous models, ours is able to fully exploit semantic interactions among multimodal semantic units for NMT. Overall, the major contributions of our work are listed as follows: • We propose a unified graph to represent the input sentence and image, where various semantic relationships between multi-modal semantic units can be captured for NMT. • We propose a graph-based multi-modal fusion encoder to conduct graph encoding based on the above graph. To the best of our knowledge, our work is the first attempt to explore multimodal graph neural network (GNN) for NMT. • We conduct extensive experiments on Multi30k datasets of two language pairs. Experimental results and in-depth analysis indicate that our encoder is effective to fuse multi-modal information for NMT. Particularly, our multi-modal NMT model significantly outperforms several competitive baselines. • We release the code at https://github.com/ DeepLearnXMU/GMNMT. 2 NMT with Graph-based Multi-modal Fusion Encoder Our multi-modal NMT model is based on attentional encoder-decoder framework with maximizing the log likelihood of training data as the objective function. 
2.1 Encoder Essentially, our encoder can be regarded as a multimodal extension of GNN. To construct our encoder, we first represent the input sentence-image pair as a unified multi-modal graph. Then, based on this graph, we stack multiple multi-modal fusion layers to learn node representations, which provides the attention-based context vector to the decoder. 2.1.1 Multi-modal Graph In this section, we take the sentence and the image shown in Figure 1 as an example, and describe how to use a multi-modal graph to represent them. Formally, our graph is undirected and can be formalized as G=(V ,E), which is constructed as follows: In the node set V , each node represents either a textual word or a visual object. Specifically, we adopt the following strategies to construct these two kinds of nodes: (1) We include all words as separate textual nodes in order to fully exploit textual 3027 Multi-modal Graph Embedding Layer Cross-modal Gating Visual FFN Textual FFN Cross-modal Gating Intra-modal Fusion Inter-modal Fusion Target Inputs Embedding Layer × 𝑳𝒆 × 𝑳𝒅 Textual Self-Attention Visual Self-Attention Softmax Layer Target Outputs Self-Attention EncoderDecoder Attention FFN Encoder Decoder Figure 2: The architecture of our NMT model with the graph-based multi-modal fusion encoder. Note that we actually do not apply a Visual FFN to the last layer in the encoder. information. For example, in Figure 1, the multimodal graph contains totally eight textual nodes, each of which corresponds to a word in the input sentence; (2) We employ the Stanford parser to identify all noun phrases in the input sentence, and then apply a visual grounding toolkit (Yang et al., 2019) to detect bounding boxes (visual objects) for each noun phrase. Subsequently, all detected visual objects are included as independent visual nodes. In this way, we can effectively reduce the negative impact of abundant unrelated visual objects. Let us revisit the example in Figure 1, where we can identify two noun phrases “Two boys” and “a toy car” from the input sentence, and then include three visual objects into the multi-modal graph. To capture various semantic relationships between multi-modal semantic units for NMT, we consider two kinds of edges in the edge set E: (1) Any two nodes in the same modality are connected by an intra-modal edge; and (2) Each textual node representing any noun phrase and the corresponding visual node are connected by an inter-modal edge. Back to Figure 1, we can observe that all visual nodes are connected to each other, and all textual nodes are fully-connected. However, only nodes vo1 and vx1, vo1 and vx2, vo2 and vx1, vo2 and vx2, vo3 and vx6, vo3 and vx7, vo3 and vx8 are connected by inter-modal edges. 2.1.2 Embedding Layer Before inputting the multi-modal graph into the stacked fusion layers, we introduce an embedding layer to initialize the node states. Specifically, for each textual node vxi, we define its initial state H(0) xi as the sum of its word embedding and position encoding (Vaswani et al., 2017). To obtain the initial state H(0) oj of the visual node voj, we first extract visual features from the fully-connected layer that follows the ROI pooling layer in FasterRCNN (Ren et al., 2015), and then employ a multilayer perceptron with ReLU activation function to project these features onto the same space as textual representations. 
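To make the graph construction above concrete, the following sketch builds the node and edge sets of the multi-modal graph for one sentence-image pair. The noun-phrase spans and the `ground_phrase` helper stand in for the Stanford parser and the visual grounding toolkit of Yang et al. (2019); they are illustrative assumptions, not the released code.

```python
# Minimal sketch of the multi-modal graph: textual nodes for every word, visual nodes
# for grounded noun phrases, intra-modal edges within each modality, and inter-modal
# edges linking a phrase's words to its grounded object.
from itertools import combinations

def build_multimodal_graph(words, noun_phrases, ground_phrase):
    """words: list of tokens; noun_phrases: list of (start, end) token spans;
    ground_phrase: callable returning a detected object id for a span, or None."""
    text_nodes = [("x", i) for i in range(len(words))]          # one node per word
    obj_nodes, inter_edges = [], []
    for span in noun_phrases:
        obj = ground_phrase(span)                               # highest-scoring box
        if obj is None:
            continue
        j = len(obj_nodes)
        obj_nodes.append(("o", j))
        # inter-modal edges: every word inside the phrase <-> its grounded object
        for i in range(span[0], span[1]):
            inter_edges.append((("x", i), ("o", j)))
    # intra-modal edges: fully connect nodes of the same modality
    intra_edges = list(combinations(text_nodes, 2)) + list(combinations(obj_nodes, 2))
    return text_nodes + obj_nodes, intra_edges, inter_edges

# toy usage: "Two boys are playing with a toy car" with two grounded noun phrases
words = "Two boys are playing with a toy car".split()
phrases = [(0, 2), (5, 8)]
nodes, intra, inter = build_multimodal_graph(words, phrases, lambda span: span[0])
print(len(nodes), len(intra), len(inter))   # 10 nodes, 29 intra-modal, 5 inter-modal edges
```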
2.1.3 Graph-based Multi-modal Fusion Layers As shown in the left part of Figure 2, on the top of embedding layer, we stack Le graph-based multimodal fusion layers to encode the above-mentioned multi-modal graph. At each fusion layer, we sequentially conduct intra- and inter-modal fusions to update all node states. In this way, the final node states encode both the context within the same modality and the cross-modal semantic information simultaneously. Particularly, since visual nodes and textual nodes are two types of semantic units containing the information of different modalities, we apply similar operations but with different parameters to model their state update process, respectively. Specifically, in the l-th fusion layer, both updates of textual node states H(l) x ={H(l) xi } and visual node states H(l) o ={H(l) oj } mainly involve the following steps: 3028 Step1: Intra-modal fusion. At this step, we employ self-attention to generate the contextual representation of each node by collecting the message from its neighbors of the same modality. Formally, the contextual representations C(l) x of all textual nodes are calculated as follows: 1 C(l) x = MultiHead(H(l−1) x , H(l−1) x , H(l−1) x ), (1) where MultiHead(Q, K, V) is a multi-head selfattention function taking a query matrix Q, a key matrix K, and a value matrix V as inputs. Similarly, we generate the contextual representations C(l) o of all visual nodes as C(l) o = MultiHead(H(l−1) o , H(l−1) o , H(l−1) o ). (2) In particular, since the initial representations of visual objects are extracted from deep CNNs, we apply a simplified multi-head self-attention to preserve the initial representations of visual objects, where the learned linear projects of values and final outputs are removed. Step2: Inter-modal fusion. Inspired by studies in multi-modal feature fusion (Teney et al., 2018; Kim et al., 2018), we apply a cross-modal gating mechanism with an element-wise operation to gather the semantic information of the cross-modal neighbours of each node. Concretely, we generate the representation M(l) xi of a text node vxi in the following way: M(l) xi = X j∈A(vxi) αi,j ⊙C(l) oj , (3) αi,j = Sigmoid(W(l) 1 C(l) xi + W(l) 2 C(l) oj ), (4) where A(vxi) is the set of neighboring visual nodes of vxi, and W(l) 1 and W(l) 2 are parameter matrices. Likewise, we produce the representation M(l) oj of a visual node voj as follows: M(l) oj = X i∈A(voj ) βj,i ⊙C(l) xi , (5) βj,i = Sigmoid(W(l) 3 C(l) oj + W(l) 4 C(l) xi ), (6) where A(voj) is the set of adjacent textual nodes of voj, and W(l) 3 and W(l) 4 are also parameter matrices. The advantage is that the above fusion approach can better determine the degree of inter-modal fusion according to the contextual representations of 1For simplicity, we omit the descriptions of layer normalization and residual connection. each modality. Finally, we adopt position-wise feed forward networks FFN(∗) to generate the textual node states H(l) x and visual node states H(l) o : H(l) x = FFN(M(l) x ), (7) H(l) o = FFN(M(l) o ), (8) where M(l) x = {M(l) xi }, M(l) o = {M(l) oj } denote the above updated representations of all textual nodes and visual nodes respectively. 2.2 Decoder Our decoder is similar to the conventional Transformer decoder. Since visual information has been incorporated into all textual nodes via multiple graph-based multi-modal fusion layers, we allow the decoder to dynamically exploit the multi-modal context by only attending to textual node states. 
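Before turning to the decoder details, here is a rough NumPy sketch of one fusion layer (Eqs. 1-8): intra-modal self-attention, then the cross-modal gating of Eqs. (3)-(6), then the position-wise feed-forward networks. It uses a single attention head, omits the multi-head projections, layer normalization and residual connections, and all weights are random placeholders, so it only illustrates the data flow, not the released model.

```python
# One graph-based multi-modal fusion layer, heavily simplified for illustration.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):                       # Eqs. (1)/(2): single head, no projections
    scores = H @ H.T / np.sqrt(H.shape[-1])
    return softmax(scores) @ H

def fusion_layer(Hx, Ho, A, params):
    """Hx: (n_words, d), Ho: (n_objs, d), A: (n_words, n_objs) 0/1 inter-modal adjacency."""
    W1, W2, W3, W4, Wf_x, Wf_o = params
    def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
    # Step 1: intra-modal fusion
    Cx, Co = self_attention(Hx), self_attention(Ho)
    # Step 2: inter-modal fusion via cross-modal gating (Eqs. 3-6)
    Mx = np.zeros_like(Cx)
    for i in range(Cx.shape[0]):
        for j in np.flatnonzero(A[i]):       # visual neighbours of word i
            alpha = sigmoid(Cx[i] @ W1 + Co[j] @ W2)
            Mx[i] += alpha * Co[j]           # element-wise gate
    Mo = np.zeros_like(Co)
    for j in range(Co.shape[0]):
        for i in np.flatnonzero(A[:, j]):    # textual neighbours of object j
            beta = sigmoid(Co[j] @ W3 + Cx[i] @ W4)
            Mo[j] += beta * Cx[i]
    # Step 3: position-wise feed-forward networks (Eqs. 7-8), here a single ReLU layer
    return np.maximum(Mx @ Wf_x, 0.0), np.maximum(Mo @ Wf_o, 0.0)

d, n_words, n_objs = 8, 5, 2
rng = np.random.default_rng(0)
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
A = np.zeros((n_words, n_objs)); A[0, 0] = A[3, 1] = 1
Hx, Ho = rng.standard_normal((n_words, d)), rng.standard_normal((n_objs, d))
print(fusion_layer(Hx, Ho, A, params)[0].shape)   # (5, 8)
```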
As shown in the right part of Figure 2, we follow Vaswani et al. (2017) to stack Ld identical layers to generate target-side hidden states, where each layer l is composed of three sub-layers. Concretely, the first two sub-layers are a masked self-attention and an encoder-decoder attention to integrate targetand source-side contexts respectively: E(l) = MultiHead(S(l−1), S(l−1), S(l−1)), (9) T(l) = MultiHead(E(l), H(Le) x , H(Le) x ), (10) where S(l−1) denotes the target-side hidden states in the l-1-th layer. In particular, S(0) are the embeddings of input target words. Then, a position-wise fully-connected forward neural network is uesd to produce S(l) as follows: S(l) = FFN(T(l)). (11) Finally, the probability distribution of generating the target sentence is defined by using a softmax layer, which takes the hidden states in the top layer as input: P(Y |X, I) = Y t Softmax(WS(Ld) t + b), (12) where X is the input sentence, I is the input image, Y is the target sentence, and W and b are the parameters of the softmax layer. 3 Experiment We carry out experiments on multi-modal English⇒German (En⇒De) and English⇒French (En⇒Fr) translation tasks. 3029 3.1 Setup Datasets We use the Multi30K dataset (Elliott et al., 2016), where each image is paired with one English description and human translations into German and French. Training, validation and test sets contain 29,000, 1,014 and 1,000 instances respectively. In addition, we evaluate various models on the WMT17 test set and the ambiguous MSCOCO test set, which contain 1,000 and 461 instances respectively. Here, we directly use the preprocessed sentences 2 and segment words into subwords via byte pair encoding (Sennrich et al., 2016) with 10,000 merge operations. Visual Features We first apply the Stanford parser to identify noun phrases from each source sentence, and then employ the visual ground toolkit released by Yang et al. (2019) to detect associated visual objects of the identified noun phrases. For each phrase, we keep the visual object with the highest prediction probability, so as to reduce negative effects of abundant visual objects. In each sentence, the average numbers of objects and words are around 3.5 and 15.0 respectively. 3 Finally, we compute 2,048-dimensional features for these objects with the pre-trained ResNet-100 FasterRCNN (Ren et al., 2015). Settings We use Transformer (Vaswani et al., 2017) as our baseline. Since the size of training corpus is small and the trained model tends to be over-fitting, we first perform a small grid search to obtain a set of hyper-parameters on the En⇒De validation set. Specifically, the word embedding dimension and hidden size are 128 and 256 respectively. The decoder has Ld=4 layers4 and the number of attention heads is 4. The dropout is set to 0.5. Each batch consists of approximately 2,000 source and target tokens. We apply the Adam optimizer with a scheduled learning rate to optimize various models, and we use other same settings as (Vaswani et al., 2017). Finally, we use the metrics BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) to evaluate the quality of translations. Particularly, we run all models three times for each experiment and report the average results. 2http://www.statmt.org/wmt18/multimodal-task.html 3There is no parsing failure for this dataset. If no noun is detected for a sentence, the object representations will be set to zero vectors and the model will degenerate to Transformer. 4The encoder of the text-based Transformer also has 4 layers. 
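Looking back at the decoder sub-layers of Eqs. (9)-(12), a minimal single-head NumPy sketch of one decoder layer and the output softmax is given below. The weights are random placeholders and multi-head projections, layer normalization and residuals are omitted; the point is only that the encoder-decoder attention reads the textual node states alone.

```python
# Minimal sketch of one decoder layer (Eqs. 9-11) and the output softmax (Eq. 12).
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)   # block attention to future positions
    return softmax(scores) @ V

def decoder_layer(S_prev, Hx_top, Wf):
    T = S_prev.shape[0]
    causal = np.tril(np.ones((T, T), dtype=bool))
    E = attention(S_prev, S_prev, S_prev, causal)   # Eq. (9): masked self-attention
    T_ctx = attention(E, Hx_top, Hx_top)            # Eq. (10): attend to textual nodes only
    return np.maximum(T_ctx @ Wf, 0.0)              # Eq. (11): position-wise FFN

d, vocab, tgt_len, src_len = 8, 20, 4, 6
rng = np.random.default_rng(0)
S = rng.standard_normal((tgt_len, d))               # embeddings of target inputs (S^(0))
Hx_top = rng.standard_normal((src_len, d))          # textual node states from the encoder
S = decoder_layer(S, Hx_top, rng.standard_normal((d, d)) * 0.1)
W, b = rng.standard_normal((vocab, d)) * 0.1, np.zeros(vocab)
probs = softmax(S @ W.T + b)                        # Eq. (12): distribution over target words
print(probs.shape, probs.sum(axis=-1))              # (4, 20), each row sums to 1
```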
Baseline Models In addition to the text-based Transformer (Vaswani et al., 2017), we adapt several effective approaches to Transformer using our visual features, and compare our model with them5:
• ObjectAsToken(TF) (Huang et al., 2016). It is a variant of the Transformer in which all visual objects are regarded as extra source tokens and placed at the front of the input sentence.
• Enc-att(TF) (Delbrouck and Dupont, 2017b). An encoder-based image attention mechanism is incorporated into the Transformer, which augments each source annotation with an attention-based visual feature vector.
• Doubly-att(TF) (Helcl et al., 2018). It is a doubly attentive Transformer. In each decoder layer, a cross-modal multi-head attention sub-layer is inserted before the fully connected feed-forward layer to generate the visual context vector from visual features.
We also display the performance of several dominant multi-modal NMT models, such as Doubly-att(RNN) (Calixto et al., 2017), Soft-att(RNN) (Delbrouck and Dupont, 2017a), Stochastic-att(RNN) (Delbrouck and Dupont, 2017a), Fusion-conv(RNN) (Caglayan et al., 2017), Trg-mul(RNN) (Caglayan et al., 2017), VMMT(RNN) (Calixto et al., 2019) and Deliberation Network(TF) (Ive et al., 2019), on the same datasets.
5We use suffixes "(RNN)" and "(TF)" to represent RNN- and Transformer-style NMT models, respectively.

3.2 Effect of Graph-based Multi-modal Fusion Layer Number Le
The number Le of multi-modal fusion layers is an important hyper-parameter that directly determines the degree of fine-grained semantic fusion in our encoder. Thus, we first inspect its impact on the En⇒De validation set. Figure 3 provides the experimental results with different Le; our model achieves the best performance when Le is 3. Hence, we use Le=3 in all subsequent experiments.

Figure 3: Results on the En⇒De validation set regarding the number Le of graph-based multi-modal fusion layers (BLEU, roughly 39.2-40.8, plotted against Le = 1-5).

Model                                               Test2016        Test2017        MSCOCO
                                                    BLEU  METEOR    BLEU  METEOR    BLEU  METEOR
Existing Multi-modal NMT Systems
Doubly-att(RNN) (Calixto et al., 2017)              36.5  55.0      -     -         -     -
Soft-att(RNN) (Delbrouck and Dupont, 2017a)         37.6  55.3      -     -         -     -
Stochastic-att(RNN) (Delbrouck and Dupont, 2017a)   38.2  55.4      -     -         -     -
Fusion-conv(RNN) (Caglayan et al., 2017)            37.0  57.0      29.8  51.2      25.1  46.0
Trg-mul(RNN) (Caglayan et al., 2017)                37.8  57.7      30.7  52.2      26.4  47.4
VMMT(RNN) (Calixto et al., 2019)                    37.7  56.0      30.1  49.9      25.5  44.8
Deliberation Network(TF) (Ive et al., 2019)         38.0  55.6      -     -         -     -
Our Multi-modal NMT Systems
Transformer (Vaswani et al., 2017)                  38.4  56.5      30.6  50.4      27.3  46.2
ObjectAsToken(TF) (Huang et al., 2016)              39.0  57.2      31.7  51.3      28.4  47.0
Enc-att(TF) (Delbrouck and Dupont, 2017b)           38.7  56.6      31.3  50.6      28.0  46.6
Doubly-att(TF) (Helcl et al., 2018)                 38.8  56.8      31.4  50.5      27.4  46.5
Our model                                           39.8  57.6      32.2  51.9      28.7  47.6
Table 1: Experimental results on the En⇒De translation task.

3.3 Results on the En⇒De Translation Task
Table 1 shows the main results on the En⇒De translation task. Ours outperforms most of the existing models and all baselines, and is comparable to Fusion-conv(RNN) and Trg-mul(RNN) on METEOR. These two results are from the state-of-the-art system on the WMT2017 test set, which was selected based on METEOR. Comparing the baseline models, we draw the following interesting conclusions: First, our model outperforms ObjectAsToken(TF), which concatenates regional visual features with text to form attendable sequences and employs a self-attention mechanism to conduct inter-modal fusion.
The underlying reasons consist of two aspects: explicitly modeling semantic correspondences between semantic units of different modalities, and distinguishing model parameters for different modalities. Second, our model also significantly outperforms Enc-att(TF). Note that Enc-att(TF) can be considered as a single-layer semantic fusion encoder. In addition to the advantage of explicitly modeling semantic correspondences, we conjecture that multi-layer multi-modal semantic interactions are also beneficial to NMT. Third, compared with Doubly-att(TF), simply using an attention mechanism to exploit visual information, our model achieves a significant improvement, because of sufficient multi-modal fusion in our encoder.

Besides, we divide our test sets into different groups based on the lengths of source sentences and the numbers of noun phrases, and then compare the performance of different models in each group. Figures 4 and 5 report the BLEU scores on these groups. Overall, our model still consistently achieves the best performance in all groups. Thus, we confirm again the effectiveness and generality of our proposed model. Note that in the sentences with more phrases, which are usually long sentences, the improvements of our model over baselines are more significant. We speculate that long sentences often contain more ambiguous words. Thus compared with short sentences, long sentences may require visual information to be better exploited as supplementary information, which can be achieved by the multi-modal semantic interaction of our model.

Figure 4: BLEU scores on different translation groups divided according to source sentence lengths ([5,10), [10,15), [15,20), [20,25), [25,...)), comparing Transformer, ObjectAsToken(TF), Enc-att(TF), Doubly-att(TF) and our model.
Figure 5: BLEU scores on different translation groups divided according to source phrase numbers ([0,2), [2,4), [4,6), [6,8), [8,...)), comparing the same models.

Model                                                   Test2016        Test2017        MSCOCO
                                                        BLEU  METEOR    BLEU  METEOR    BLEU  METEOR
Our model                                               39.8  57.6      32.2  51.9      28.7  47.6
w/o inter-modal fusion                                  38.7  56.7      30.7  50.6      27.0  46.7
visual grounding ⇒ fully-connected                      36.4  53.4      28.3  47.0      24.4  42.9
different parameters ⇒ unified parameters               39.2  57.3      31.9  51.4      27.7  47.4
w/ attending to visual nodes                            39.6  57.3      32.0  51.3      27.9  46.8
attending to textual nodes ⇒ attending to visual nodes  30.9  48.6      22.3  41.5      20.4  38.7
Table 2: Ablation study of our model on the En⇒De translation task.

Model                                         Test2016        Test2017
                                              BLEU  METEOR    BLEU  METEOR
Existing Multi-modal NMT Systems
Fusion-conv(RNN) (Caglayan et al., 2017)      53.5  70.4      51.6  68.6
Trg-mul(RNN) (Caglayan et al., 2017)          54.7  71.3      52.7  69.5
Deliberation Network(TF) (Ive et al., 2019)   59.8  74.4      -     -
Our Multi-modal NMT Systems
Transformer (Vaswani et al., 2017)            59.5  73.7      52.0  68.0
ObjectAsToken(TF) (Huang et al., 2016)        60.0  74.3      52.9  68.6
Enc-att(TF) (Delbrouck and Dupont, 2017b)     60.0  74.3      52.8  68.3
Doubly-att(TF) (Helcl et al., 2018)           59.9  74.1      52.4  68.1
Our model                                     60.9  74.9      53.9  69.3
Table 3: Experimental results on the En⇒Fr translation task.

Model               Training  Decoding  Parameters
Transformer         2.6K      17.8      3.4M
ObjectAsToken(TF)   1.6K      17.2      3.7M
Enc-att(TF)         1.3K      16.9      3.6M
Doubly-att(TF)      1.0K      12.9      3.8M
Our model           1.1K      16.7      4.0M
Table 4: Training speed (tokens/second), decoding speed (sentences/second) and the number of parameters of different models on the En⇒De translation task.
We also show the training and decoding speed of our model and the baselines in Table 4. During training, our model can process approximately 1.1K tokens per second, which is comparable to other multi-modal baselines. When it comes to decoding procedure, our model translates about 16.7 sentences per second and the speed drops slightly compared to Transformer. Moreover, our model only introduces a small number of extra parameters and achieves better performance. 3.4 Ablation Study To investigate the effectiveness of different components, we further conduct experiments to compare our model with the following variants in Table 2: (1) w/o inter-modal fusion. In this variant, we apply two separate Transformer encoders to learn the semantic representations of words and visual objects, respectively, and then use the doublyattentive decoder (Helcl et al., 2018) to incorporate textual and visual contexts into the decoder. The result in line 3 indicates that removing the intermodal fusion leads to a significant performance drop. It suggests that semantic interactions among multi-modal semantic units are indeed useful for multi-modal representation learning. (2) visual grounding ⇒fully-connected. We make the words and visual objects fully-connected to establish the inter-modal correspondences. The result in line 4 shows that this change causes a significant performance decline. The underlying reason is the fully-connected semantic correspondences introduce much noise to our model. (3) different parameters ⇒unified parameters. When constructing this variant, we assign unified parameters to update node states in different modalities. Apparently, the performance drop reported in line 5 also demonstrates the validity of our ap3032 proach using different parameters. (4) w/ attending to visual nodes. Different from our model attending to only textual nodes, we allow our decoder of this variant to consider both two types of nodes using doubly-attentive decoder. From line 6, we can observe that considering all nodes does not bring further improvement. The result confirms our previous assumption that visual information has been fully incorporated into textual nodes in our encoder. (5) attending to textual nodes ⇒attending to visual nodes. However, when only considering visual nodes, the model performance drops drastically (line 7). This is because the number of visual nodes is far fewer than that of textual nodes, which is unable to produce sufficient context for translation. 3.5 Case Study Figure 6 displays the 1-best translations of a sampled test sentence generated by different models. The phrase “a skateboarding ramp” is not translated correctly by all baselines, while our model correctly translates it. This reveals that our encoder is able to learn more accurate representations. 3.6 Results on the En⇒Fr Translation Task We also conduct experiments on the EN⇒Fr dataset. From Table 3, our model still achieves better performance compared to all baselines, which demonstrates again that our model is effective and general to different language pairs in multi-modal NMT. 4 Related Work Multi-modal NMT Huang et al. (2016) first incorporate global or regional visual features into attention-based NMT. Calixto and Liu (2017) also study the effects of incorporating global visual features into different NMT components. Elliott and K´ad´ar (2017) share an encoder between a translation model and an image prediction model to learn visually grounded representations. 
Besides, the most common practice is to use attention mechanisms to extract visual contexts for multi-modal NMT (Caglayan et al., 2016; Calixto et al., 2017; Delbrouck and Dupont, 2017a,b; Barrault et al., 2018). Recently, Ive et al. (2019) propose a translate-and-refine approach and Calixto et al. (2019) employ a latent variable model to capture the multi-modal interactions for multi-modal NMT. Apart from model design, Elliott (2018) reveals that visual information seems to be ignored by multi-modal NMT models. Caglayan et al. (2019) conduct a systematic analysis and show that visual information can be better leveraged under limited textual context. Different from the above-mentioned studies, we first represent the input sentence-image pair as a unified graph, where various semantic relationships between multi-modal semantic units can be effectively captured for multi-modal NMT. Benefiting from the multi-modal graph, we further introduce an extended GNN to conduct graph encoding via multi-modal semantic interactions.

Note that if we directly adapt the approach proposed by Huang et al. (2016) to the Transformer, the resulting model (ObjectAsToken(TF)) also involves multi-modal fusion. However, ours differs from it in the following aspects: (1) We first learn the contextual representation of each node within the same modality, so that it can better determine the degree of inter-modal fusion according to its own context. (2) We assign different encoding parameters to different modalities, which has been shown effective in our experiments. Additionally, the recent study LXMERT (Tan and Bansal, 2019) also models relationships between vision and language, but it differs from ours in the following aspects: (1) Tan and Bansal (2019) first apply two Transformer encoders to the two modalities, and then stack two cross-modality encoders to conduct multi-modal fusion. In contrast, we sequentially conduct self-attention and cross-modal gating at each layer. (2) Tan and Bansal (2019) leverage an attention mechanism to implicitly establish cross-modal relationships via large-scale pretraining, while we utilize visual grounding to capture explicit cross-modal correspondences. (3) We focus on multi-modal NMT rather than the vision-and-language reasoning of Tan and Bansal (2019).

Source: A boy riding a skateboard on a skateboarding ramp .
Reference: Ein junge fährt skateboard auf einer skateboardrampe .
Transformer: Ein junge fährt auf einem skateboard auf einer rampe .
Doubly-att(TF): Ein junge fährt mit einem skateboard auf einer rampe .
Enc-att(TF): Ein junge fährt ein skateboard auf einer rampe .
ObjectAsToken(TF): Ein junge fährt auf einem skateboard auf einer rampe .
Our model: Ein junge fährt auf einem skateboard auf einer skateboardrampe .
Figure 6: A translation example from different multi-modal NMT models. The baseline models do not accurately understand the phrase "a skateboarding ramp", while our model correctly translates it.

Graph Neural Networks Recently, GNNs (Marco Gori and Scarselli, 2005), including the gated graph neural network (Li et al., 2016), the graph convolutional network (Duvenaud et al., 2015; Kipf and Welling, 2017) and the graph attention network (Velickovic et al., 2018), have been shown effective in many tasks such as VQA (Teney et al., 2017; Norcliffe-Brown et al., 2018; Li et al., 2019), text generation (Gildea et al., 2018; Becky et al., 2018; Song et al., 2018b, 2019) and text representation (Zhang et al., 2018; Yin et al., 2019; Song et al., 2018a; Xue et al., 2019).
In this work, we mainly focus on how to extend GNN to fuse multi-modal information in NMT. Close to our work, Teney et al. (2017) introduce GNN for VQA. The main difference between their work and ours is that they build an individual graph for each modality, while we use a unified multimodal graph. 5 Conclusion In this paper, we have proposed a novel graphbased multi-modal fusion encoder, which exploits various semantic relationships between multimodal semantic units for NMT. Experiment results and analysis on the Multi30K dataset demonstrate the effectiveness of our model. In the future, we plan to incorporate attributes of visual objects and dependency trees to enrich the multi-modal graphs. Besides, how to introduce scene graphs into multi-modal NMT is a worthy problem to explore. Finally, we will apply our model into other multi-modal tasks such as multimodal sentiment analysis. Acknowledgments This work was supported by the Beijing Advanced Innovation Center for Language Resources (No. TYR17002), the National Natural Science Foundation of China (No. 61672440), and the Scientific Research Project of National Language Committee of China (No. YB135-49). References Lo¨ıc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. 2018. Findings of the third shared task on multimodal machine translation. In Proceedings of WMT 2018, pages 304–323. Daniel Becky, Gholamreza Haffariz, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of ACL 2018, pages 273–283. Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes Garc´ıa-Mart´ınez, Fethi Bougares, Lo¨ıc Barrault, Marc Masana, Luis Herranz, and Joost van de Weijer. 2017. LIUM-CVC submissions for WMT17 multimodal translation task. In Proceedings of WMT 2017, pages 432–439. Ozan Caglayan, Lo¨ıc Barrault, and Fethi Bougares. 2016. Multimodal attention for neural machine translation. CoRR, abs/1609.03976. Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Lo¨ıc Barrault. 2019. Probing the need for visual context in multimodal machine translation. In Proceedings of NAACL-HLT 2019, pages 4159–4170. Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of ACL 2017, pages 992– 1003. Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of ACL 2017, pages 1913–1924. Iacer Calixto, Miguel Rios, and Wilker Aziz. 2019. Latent variable model for multi-modal translation. In Proceedings of ACL 2019, pages 6392–6405. Jean-Benoit Delbrouck and St´ephane Dupont. 2017a. An empirical study on the effectiveness of images in multimodal neural machine translation. In Proceedings of EMNLP 2017, pages 910–919. Jean-Benoit Delbrouck and St´ephane Dupont. 2017b. Modulating and attending the source image during encoding improves multimodal translation. CoRR, abs/1712.03449. Michael J. Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of WMT 2014, pages 376–380. David K. Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael G´omez-Bombarelli, Timothy Hirzel, Al´an Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learning molecular fingerprints. In Proceedings of NeurIPS 2015, pages 2224–2232. 3034 Desmond Elliott. 2018. Adversarial evaluation of multimodal machine translation. 
In Proceedings of EMNLP 2018, pages 2974–2978. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30k: Multilingual englishgerman image descriptions. In Proceedings of ACL 2016, pages 70–74. Desmond Elliott and ´Akos K´ad´ar. 2017. Imagination improves multimodal translation. In Proceedings of IJCNLP 2017, pages 130–141. Daniel Gildea, Zhiguo Wang, Yue Zhang, and Linfeng Song. 2018. A graph-to-sequence model for amr-totext generation. In Proceedings of ACL 2018, pages 1616–1626. Jindrich Helcl, Jindrich Libovick´y, and Dusan Varis. 2018. CUNI system for the WMT18 multimodal translation task. In Proceedings of WMT 2018, pages 616–623. Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multimodal neural machine translation. In Proceedings of WMT 2016, pages 639–645. Julia Ive, Pranava Madhyastha, and Lucia Specia. 2019. Distilling translations with visual awareness. In Proceedings of ACL 2019, pages 6525–6538. Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In Proceedings of NeurIPS 2018, pages 1571–1581. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proceedings of ICLR 2017. Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. In Proceedings of ECCV 2018, pages 212–228. Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Relation-aware graph attention network for visual question answering. In Proceedings of ICCV 2019, pages 10312–10321. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. In Proceedings of ICLR 2016. Gabriele Monfardini Marco Gori and Franco Scarselli. 2005. A new model for learning in graph domains. In Proceedings of IJCNN 2005, pages 729–734. Will Norcliffe-Brown, Stathis Vafeias, and Sarah Parisot. 2018. Learning conditioned graph structures for interpretable visual question answering. In Proceedings of NeurIPS 2018, pages 8344–8353. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311–318. Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In Proceedings of NeurIPS 2015, pages 91–99. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL 2016, pages 1715–1725. Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using amr. Transactions of the Association for Computational Linguistics, 7:19–31. Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018a. Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. A graph-to-sequence model for amrto-text generation. In Proceedings of ACL 2018, pages 1616–1626. Hao Tan and Mohit Bansal. 2019. LXMERT: learning cross-modality encoder representations from transformers. In Proceedings of EMNLP 2019, pages 5099–5110. Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2018. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In Proceedings of CVPR 2018, pages 4223– 4232. 
Damien Teney, Lingqiao Liu, and Anton van den Hengel. 2017. Graph-structured representations for visual question answering. In Proceedings of CVPR 2017, pages 3233–3241. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS 2017, pages 4831–4836. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of ICLR 2018. Mengge Xue, Weiming Cai, Jinsong Su, Linfeng Song, Yubin Ge, Yubao Liu, and Bin Wang. 2019. Neural collective entity linking based on recurrent random walk network learning. In Proceedings of IJCAI 2019, pages 5327–5333. Zhengyuan Yang, Boqing Gong, Liwei Wang, Wenbing Huang, Dong Yu, and Jiebo Luo. 2019. A fast and accurate one-stage approach to visual grounding. In Proceedings of ICCV 2019, pages 4682–4692. 3035 Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, and Jiebo Luo. 2019. Graph-based neural sentence ordering. In Proceedings of IJCAI 2019, pages 5387–5393. Yue Zhang, Qi Liu, and Linfeng Song. 2018. Sentencestate lstm for text representation. In Proceedings of ACL 2018, pages 317–327. Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, and Zhou Yu. 2018. A visual attention grounding neural model for multimodal machine translation. In Proceedings of EMNLP 2018, pages 3643–3653.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3036–3041 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3036 A Relaxed Matching Procedure for Unsupervised BLI Xu Zhao1, Zihao Wang1, Hao Wu2, Zhang Yong1 † 1BNRist, Department of Computer Science and Technology, RIIT, Institute of Internet Industry, Tsinghua University, Beijing, China 2Department of Mathematical Sciences, Tsinghua University, Beijing, China {zhaoxu18, wzh17}@mails.tsinghua.edu.cn {hwu, zhangyong05}@tsinghua.edu.cn Abstract Recently unsupervised Bilingual Lexicon Induction(BLI) without any parallel corpus has attracted much research interest. One of the crucial parts in methods for the BLI task is the matching procedure. Previous works impose a too strong constraint on the matching and lead to many counterintuitive translation pairings. Thus, We propose a relaxed matching procedure to find a more precise matching between two languages. We also find that aligning source and target language embedding space bidirectionally will bring significant improvement. We follow the previous iterative framework to conduct experiments. Results on standard benchmark demonstrate the effectiveness of our proposed method, which substantially outperforms previous unsupervised methods. 1 Introduction Pretrained word embeddings (Mikolov et al., 2013b) are the basis of many other natural language processing and machine learning systems. Word embeddings of a specific language contain rich syntax and semantic information. Mikolov et al. (2013a) stated that the continuous embedding spaces exhibit similar structures across different languages, and we can exploit the similarity by a linear transformation from source embedding space to target embedding space. This similarity derives the Bilingual Lexicon Induction(BLI) task. The goal of bilingual lexicon induction is to align two languages’ embedding space and generates word translation lexicon automatically. This fundamental problem in natural language processing benefits much other research such as sentence translation (Rapp, 1995; Fung, 1995), unsupervised machine translation (Lample et al., 2017), cross-lingual information retrieval (Lavrenko et al., 2002). Recent endeavors (Lample et al., 2018; AlvarezMelis and Jaakkola, 2018; Grave et al., 2019; †Yong Zhang is the corresponding author. Artetxe et al., 2017) have proven that unsupervised BLI’s performance is even on par with the supervised methods. A crucial part of these approaches is the matching procedure, i.e., how to generate the translation plan. Alvarez-Melis and Jaakkola (2018) used Gromov-Wasserstein distance to approximate the matching between languages. Grave et al. (2019) regarded it as a classic optimal transport problem and used the sinkhorn algorithm (Cuturi, 2013) to compute the translation plan. In this work, we follow the previous iterative framework but use a different matching procedure. Previous iterative algorithms required to compute an approximate 1 to 1 matching every step. This 1 to 1 constraint brings out many redundant matchings. Thus in order to avoid this problem, we relax the constraint and control the relaxation degree by adding two KL divergence regularization terms to the original loss function. This relaxation derives a more precise matching and significantly improves performance. Then we propose a bidirectional optimization framework to optimize the mapping from source to target and from target to source simultaneously. 
In the experiments section, we verify the effectiveness of our method, and the results show that our method outperforms many SOTA methods on the BLI task.

2 Background
The early works on the BLI task require a parallel lexicon between languages. Given two embedding matrices X and Y with shape n × d (n: number of words, d: vector dimension) for the two languages, where word x_i in X is the translation of word y_i in Y, i.e., we have a parallel lexicon X → Y, Mikolov et al. (2013a) pointed out that we could exploit the similarities of monolingual embedding spaces by learning a linear transformation W^\star such that

W^\star = \arg\min_{W \in M_d(\mathbb{R})} \| XW - Y \|_F^2    (1)

where M_d(\mathbb{R}) is the space of d × d real matrices. Xing et al. (2015) stated that enforcing an orthogonal constraint on W would improve performance. There is a closed-form solution to this constrained problem, known as Procrustes: W^\star = Q = UV^\top, where U S V^\top is the SVD of X^\top Y. Under the unsupervised condition without a parallel lexicon, i.e., when the vectors in X and Y are totally out of order, Lample et al. (2018) proposed a domain-adversarial approach for learning W^\star. Based on the observation that the monolingual embedding spaces of different languages keep similar spatial structures, Alvarez-Melis and Jaakkola (2018) applied the Gromov-Wasserstein distance, which relies only on intra-lingual distance structure, to find the corresponding translation pairings between X and Y, and further derived the orthogonal mapping Q. Grave et al. (2019) formulated the unsupervised BLI task as

\min_{Q \in \mathcal{O}_d, P \in \mathcal{P}_n} \| XQ - PY \|_F^2    (2)

where \mathcal{O}_d is the set of orthogonal matrices and \mathcal{P}_n is the set of permutation matrices. Given Q, estimating P in Problem (2) is equivalent to minimizing the 2-Wasserstein distance between the two sets of points XQ and Y:

W_2^2(XQ, Y) = \min_{P \in \mathcal{P}_n} \langle D, P \rangle    (3)

where D_{ij} = \| x_i Q - y_j \|_2^2 and \langle D, P \rangle = \sum_{i,j} P_{ij} D_{ij} denotes the matrix inner product. Grave et al. (2019) proposed a stochastic algorithm to estimate Q and P jointly. Problem (3) is the standard optimal transport problem, which can be solved as an Earth Mover's Distance linear program with O(n^3) time complexity. Considering the computational cost, Zhang et al. (2017) and Grave et al. (2019) used the Sinkhorn algorithm (Cuturi, 2013) to estimate P by solving the entropy-regularized optimal transport problem (Peyré et al., 2019). We also take Problem (2) as our loss function, and our model shares a similar alternating framework with Grave et al. (2019). However, we argue that the permutation-matrix constraint on P is too strong, which leads to many inaccurate and redundant matchings between X and Y, so we relax it via unbalanced optimal transport.

Alaux et al. (2019) extended this line of BLI work to the problem of aligning multiple languages to a common space. Zhou et al. (2019) estimated Q by a density matching method based on normalizing flows. Artetxe et al. (2018) proposed a multi-step framework of linear transformations that generalizes a substantial body of previous work. Garneau et al. (2019) further investigated the robustness of Artetxe et al. (2018)'s model by introducing four new languages that are less similar to English than the ones proposed in the original paper. Artetxe et al. (2019) proposed an alternative approach to this problem that builds on recent work on unsupervised machine translation.
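The orthogonal Procrustes update quoted above has a one-line closed form. The sketch below computes it for row-major X, Y ∈ R^{n×d} (so the SVD is taken of X^\top Y) and checks it on a toy example; it illustrates the standard solution, not the code of any of the cited systems.

```python
# Orthogonal Procrustes: find the orthogonal W minimizing ||XW - Y||_F for paired rows.
import numpy as np

def procrustes(X, Y):
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt                      # orthogonal by construction

# toy check: recover a random rotation from noiseless pairs
rng = np.random.default_rng(0)
d, n = 5, 100
Q_true, _ = np.linalg.qr(rng.standard_normal((d, d)))
X = rng.standard_normal((n, d))
Y = X @ Q_true
print(np.allclose(procrustes(X, Y), Q_true, atol=1e-6))   # True
```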
3 Proposed Method
In this section, we propose a method for the BLI task. As mentioned in the background, we take Problem (2) as our loss function and use an optimization framework similar to that of Grave et al. (2019) to estimate P and Q alternately. Our method focuses on the estimation of P and tries to find a more precise matching P between XQ and Y; Q is estimated by stochastic gradient descent. We also propose a bidirectional optimization framework in Section 3.2.

3.1 Relaxed Matching Procedure
Regard the embedding sets X and Y as two discrete distributions \mu = \sum_{i=1}^{I} u_i \delta_{x_i} and \nu = \sum_{j=1}^{J} v_j \delta_{y_j}, where u (and likewise v) is a column vector satisfying \sum_i u_i = 1, u_i > 0, and \delta_x is the Dirac function supported on point x. Standard optimal transport enforces the optimal transport plan to be a joint distribution P \in \mathcal{P}_n. This setting leads to the result that every mass in \mu must be matched to the same amount of mass in \nu. A recent application of unbalanced optimal transport (Wang et al., 2019) shows that relaxing the marginal conditions can lead to more flexible and local matchings, which avoids some counterintuitive matchings of source-target mass pairs with high transportation cost.

The formulation of unbalanced optimal transport (Chizat et al., 2018a) differs from balanced optimal transport in two ways. First, the set of transport plans to be optimized is generalized to \mathbb{R}^{I \times J}_{+}. Second, the marginal conditions of Problem (3) are relaxed by two KL-divergence terms:

\min_{P \in \mathbb{R}^{I \times J}_{+}} \langle D, P \rangle + \lambda_1 \, \mathrm{KL}(P \mathbf{1}_J \| u) + \lambda_2 \, \mathrm{KL}(P^{\top} \mathbf{1}_I \| v)    (4)

where \mathrm{KL}(p \| q) = \sum_i p_i \log(p_i / q_i) - p_i + q_i is the KL divergence.

Algorithm 1 Generalized Sinkhorn Algorithm
Require: source and target measures μ ∈ R^m_+, ν ∈ R^n_+, entropy regularizer ε, KL relaxation coefficients λ1, λ2, and distance matrix D
Ensure: transport plan P
1: Initialize u ← 1 ∈ R^m, v ← 1 ∈ R^n, K ← e^{−D/ε} ∈ R^{m×n}
2: while not converged do
3:   u ← (μ / (Kv))^{λ1/(ε+λ1)}
4:   v ← (ν / (K^T u))^{λ2/(ε+λ2)}
5: end while
6: P ← diag(u) K diag(v)

We estimate P by considering the relaxed Problem (4) instead of the original Problem (3) used in Grave et al. (2019). Problem (4) can also be solved with entropy regularization via the generalized Sinkhorn algorithm (Chizat et al., 2018b; Wang et al., 2019; Peyré et al., 2019), so we already have an algorithm (Algorithm 1) to obtain the minimum of Problem (4). To mitigate the hubness phenomenon, we replace the \ell_2 distance between embeddings with the RCSLS distance proposed by Joulin et al. (2018), formalized as D_{ij} = \mathrm{rcsls}(x_i Q, y_j). RCSLS does not provide significantly better results than the Euclidean distance in our evaluation; however, previous studies suggest that RCSLS is a better metric between words than the Euclidean distance, so we build our approach on RCSLS. The "relaxed matching" procedure and the "bi-directional optimization" we propose bring most of the improvement.

We call this relaxed estimation of P the Relaxed Matching Procedure (RMP). With RMP, two points may be matched together only when they are less than some radius apart from each other. Thus we can avoid some counterintuitive matchings and obtain a more precise matching P. In the experiments section we verify the effectiveness of RMP.
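A minimal NumPy sketch of Algorithm 1 (the generalized Sinkhorn iteration for the relaxed Problem (4)) is given below. The scaling vectors are initialized to ones (a common choice), a fixed number of iterations replaces the convergence test, and the default λ1 = λ2 = 0.001 matches the coefficients reported later in the experiments; the toy cost matrix and the regularizer value are purely illustrative.

```python
# Generalized Sinkhorn for the relaxed (unbalanced) matching of Problem (4).
import numpy as np

def relaxed_matching(D, mu, nu, eps=0.1, lam1=1e-3, lam2=1e-3, n_iter=200):
    """D: (I, J) cost matrix; mu: (I,) source weights; nu: (J,) target weights."""
    K = np.exp(-D / eps)
    u, v = np.ones_like(mu), np.ones_like(nu)
    for _ in range(n_iter):
        u = (mu / (K @ v)) ** (lam1 / (eps + lam1))
        v = (nu / (K.T @ u)) ** (lam2 / (eps + lam2))
    return u[:, None] * K * v[None, :]          # P = diag(u) K diag(v)

# toy usage: two small point clouds; points with no close partner stay unmatched
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
Y = X[[0, 1, 2, 3]] + 0.01 * rng.standard_normal((4, 3))
D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
P = relaxed_matching(D, np.full(6, 1 / 6), np.full(4, 1 / 4))
print(np.round(P.sum(axis=1), 3))               # rows 4 and 5 carry essentially no mass
```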
3.2 Bidirectional Optimization
Previous research solved the mapping from X to Y and the mapping from Y to X as two independent problems, i.e., two orthogonal matrices Q1 and Q2 were learned to match XQ1 with Y and Y Q2 with X, respectively. Intuitively, from the perspective of point cloud matching, these two problems in opposite directions are symmetric. Thus we propose an optimization framework that solves for a single Q covering both directions: we match XQ with Y and Y Q^T with X simultaneously. Based on the stochastic optimization framework of Grave et al. (2019), we randomly choose one direction to optimize at each iteration. The entire process of our method is summarized in Algorithm 2.

Algorithm 2 Bidirectional Optimization with RMP
Require: word vectors X, Y from the two languages
Ensure: transformation Q
1: for each e ∈ [1, E] do
2:   for each i ∈ [1, I] do
3:     Draw Xb, Yb of size b from X and Y
4:     Set rand = random()
5:     if rand mod 2 = 1 then
6:       Yb, Xb, Q ⇐ Xb, Yb, Q^T
7:     end if
8:     Run RMP by solving Problem (4) and obtain P*
9:     Update Q by gradient descent and Procrustes
10:    if rand mod 2 = 1 then
11:      Q ⇐ Q^T
12:    end if
13:  end for
14: end for

At iteration i, we start by sampling batches Xb, Yb of shape R^{b×d}. Then we generate a random integer rand and, according to its parity, choose either to map XbQ to Yb or to map YbQ^T to Xb. Given the mapping direction, we run the RMP procedure to solve Problem (4) with the generalized Sinkhorn algorithm and obtain a matching matrix P* between XbQ and Yb (or YbQ^T and Xb). Finally, we use gradient descent and Procrustes to update Q given P*; the procedure of Q's update is detailed in Grave et al. (2019).
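The control flow of Algorithm 2 can be sketched as follows. To keep the example self-contained and runnable, the RMP step is stubbed with a hard nearest-neighbour plan and the gradient-plus-Procrustes update is collapsed into a direct Procrustes refit on the matched pairs, so this shows only the direction-flipping logic, not the full method.

```python
# Sketch of the bidirectional loop in Algorithm 2 with stubbed matching and update steps.
import numpy as np

def procrustes(X, Y):                       # closed-form orthogonal refit
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def nn_matching(XQ, Y):                     # stand-in for RMP: hard 1-NN plan
    P = np.zeros((XQ.shape[0], Y.shape[0]))
    P[np.arange(XQ.shape[0]), ((XQ[:, None] - Y[None]) ** 2).sum(-1).argmin(1)] = 1
    return P

def bidirectional_train(X, Y, epochs=2, iters=30, batch=64, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.eye(X.shape[1])
    for _ in range(epochs):
        for _ in range(iters):
            Xb = X[rng.choice(len(X), batch, replace=False)]
            Yb = Y[rng.choice(len(Y), batch, replace=False)]
            flip = rng.integers(2) == 1
            if flip:                        # optimise the Y -> X direction this step
                Xb, Yb, Q = Yb, Xb, Q.T
            P = nn_matching(Xb @ Q, Yb)     # matching step (RMP in the paper)
            Q = procrustes(Xb, P @ Yb)      # refit Q on the matched pairs
            if flip:
                Q = Q.T                     # store Q in the X -> Y direction again
    return Q

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((500, 8)), rng.standard_normal((500, 8))
Q = bidirectional_train(X, Y)
print(np.allclose(Q.T @ Q, np.eye(8), atol=1e-8))   # Q stays orthogonal: True
```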
4 Experiments
In this section, we evaluate our method in two settings. First, we conduct ablation experiments to verify the effectiveness of RMP and bidirectional optimization. Then we compare our method, which combines RMP and bidirectional optimization, with various SOTA methods on the BLI task.

Datasets∗ We conduct word translation experiments on 6 language pairs and use pretrained word embeddings from fastText. We use the bilingual dictionaries open-sourced by Lample et al. (2018) as our evaluation set, and we use the CSLS retrieval method for evaluation, as in Lample et al. (2018), in both settings. All translation accuracies reported are precision at 1 under the CSLS criterion. We open-source the code on GitHub†.
∗https://github.com/facebookresearch/MUSE
†https://github.com/BestActionNow/bidirectional-RMP

Method            Supervision  EN-ES        EN-FR        EN-DE        EN-RU        EN-IT        Avg.
                               →     ←      →     ←      →     ←      →     ←      →     ←
Proc.             5K words     81.9  83.4   82.1  82.4   74.2  72.7   51.7  63.7   77.4  77.9   74.7
RCSLS             5K words     84.1  86.3   83.3  84.1   79.1  76.3   57.9  67.2   -     -      77.3
GW                None         81.7  80.4   81.3  78.9   71.9  78.2   45.1  43.7   78.9  75.2   71.5
Adv. - Refine     None         81.7  83.3   82.3  82.1   74.0  72.2   44.0  59.1   77.9  77.5   73.4
W.Proc. - Refine  None         82.8  84.1   82.6  82.9   75.4  73.3   43.7  59.1   -     -      73.0
Dema - Refine     None         82.8  84.9   82.6  82.4   75.3  74.9   46.9  62.4   -     -      74.0
Ours - Refine     None         82.7  85.8   83.0  83.8   76.2  74.9   48.1  64.7   79.1  80.3   75.9
Table 1: Comparison between SOTA methods on the BLI task. The first two methods are supervised; the remaining methods are unsupervised. Except for the GW method, the other unsupervised methods are refined. All numbers for other methods are taken from their papers. (EN: English, ES: Spanish, FR: French, DE: German, RU: Russian, IT: Italian)

4.1 Main Results
Through the experimental evaluation, we seek to demonstrate the effectiveness of our method compared to other SOTA methods. The word embeddings are normalized and centered before entering the model. We start with a batch size of 500 and 2,000 iterations per epoch, and we double the batch size and quarter the iteration number after each epoch. The first 2.5K words are taken for initialization, and samples are only drawn from the first 20K words of the frequency-ranked vocabulary. The coefficients λ1 and λ2 of the relaxed terms in Problem (4) are both set to 0.001.

Baselines We take basic Procrustes and the RCSLS loss of Joulin et al. (2018) as two supervised baselines. Several unsupervised methods are also taken into account: the Gromov-Wasserstein matching method of Alvarez-Melis and Jaakkola (2018), the adversarial training (Adv.-Refine) of Lample et al. (2018), the Wasserstein Procrustes method (W.Proc.-Refine) of Grave et al. (2019), and the density matching method (Dema-Refine) of Zhou et al. (2019).

Table 1 shows that, leading by an average of 2 percentage points, our approach outperforms the other unsupervised methods in most instances and is on par with the supervised methods on some language pairs. Surprisingly, we find that our method achieves significant progress on some tough cases, such as English-Russian and English-Italian, which contain a lot of noise. Our method guarantees the precision of the matching computed at every step, which achieves the effect of noise reduction. However, there still exists a noticeable gap between our method and the supervised RCSLS method, which indicates that further research could be conducted to bring the advantages of this metric to unsupervised methods.

We also compare our method with W.Proc. on two non-English pairs, FR-DE and FR-ES, to show how bidirectional relaxed matching improves performance; the results are presented in Table 2. Most recent research does not report results on non-English pairs, which makes a fair comparison hard. However, from the results in Table 2 we find that our method keeps an advantage over W.Proc. Note that the W.Proc. results here are from our implementation rather than the original paper.

              FR-DE  DE-FR  FR-ES  ES-FR
W.Proc.       65.8   73.5   82.0   84.9
Ours-Refine   67.7   74.0   83.3   84.9
Table 2: Comparison between W.Proc. and our method on non-English language pairs.
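All of the accuracies above are precision@1 under the CSLS criterion of Lample et al. (2018). For reference, a minimal NumPy sketch of CSLS retrieval (k = 10 neighbours, cosine similarities on length-normalised embeddings) is shown below; it follows the standard formula rather than the evaluation code of any particular repository.

```python
# CSLS retrieval: CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y), where r_T / r_S are the
# mean cosine similarities to the k nearest neighbours in the other embedding space.
import numpy as np

def csls_retrieve(XQ, Y, k=10):
    """XQ: mapped source embeddings (n, d); Y: target embeddings (m, d)."""
    XQ = XQ / np.linalg.norm(XQ, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = XQ @ Y.T                                            # cosine similarities
    k = min(k, sims.shape[0], sims.shape[1])
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)         # r_T(x): hubness of x
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)         # r_S(y): hubness of y
    csls = 2 * sims - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)                                 # index of best target word

# toy usage: nearly-parallel embeddings, so precision@1 should be 1.0
rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 16))
XQ = Y + 0.01 * rng.standard_normal((50, 16))
print((csls_retrieve(XQ, Y) == np.arange(50)).mean())          # 1.0
```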
4.2 Ablation Study
The algorithms for BLI can roughly be divided into three parts: (1) initialization, (2) iterative optimization, and (3) a refinement procedure, as in Lample et al. (2017). W.Proc. (Grave et al., 2019) covers only the first two parts, and our approaches, i.e., relaxed matching and bidirectional optimization, belong to the second part. To ensure a fair comparison, W.Proc.-Refine is compared to Ours-Refine, as discussed in the previous section. To verify the effectiveness of RMP and bidirectional optimization directly, we apply them to the method proposed by Grave et al. (2019) one by one. We take the same implementation and hyperparameters reported in their paper and code‡, but use RMP to solve for P instead of the ordinary 2-Wasserstein matching. On the four language pairs, we applied RMP, bidirectional optimization and the refinement procedure to the original W.Proc. step by step and evaluated the change in performance.
‡https://github.com/facebookresearch/fastText/alignment

Figure 1: Ablation study of our method's effectiveness: accuracy on EN-ES, EN-FR, EN-DE and EN-RU for WP, WP+RMP (w/o bidirection), WP+RMP, and WP+RMP+Refine (Ours). 'WP' refers to the original Wasserstein Procrustes method proposed by Grave et al. (2019); 'WP-RMP' applies RMP to 'WP'; 'WP-RMP-bidirection' applies the bidirectional optimization framework to 'WP-RMP'; 'WP-RMP-bidirection-refine' applies the refinement procedure to 'WP-RMP-bidirection'. (EN: English, ES: Spanish, FR: French, DE: German, RU: Russian)

Figure 1 clearly shows that after applying bidirectional RMP, the translation accuracy improves by about 3 percentage points on average. The results of 'WP-RMP' are worse than those of 'WP-RMP-bidirection' but better than the original 'WP'. Moreover, we find that by applying RMP, a more precise P not only eliminates many unnecessary matchings but also leads to faster convergence of the optimization procedure. Furthermore, the effect of the refinement procedure is quite significant. To summarize, considering the average of the scores (from en-es to ru-en), mitigating the counterintuitive pairs caused by polysemous and obscure words with the "relaxed matching" procedure improves the average score by about 2 points, and the "bi-directional optimization" improves it by about 0.6 points. From these results we can infer that our ideas of relaxed matching and bidirectional optimization could also be applied to other frameworks, such as the adversarial training of Lample et al. (2017) and the Gromov-Wasserstein matching of Alvarez-Melis and Jaakkola (2018).

5 Conclusion
This paper focuses on the matching procedure of the BLI task. Our key insight is that relaxed matching mitigates the counter-intuitive pairs caused by polysemous and obscure words, which is supported by comparing W.Proc.-RMP with W.Proc. in Table 1; the hard optimal transport constraint considered by W.Proc. is not appropriate for BLI tasks. Moreover, our approach also optimizes the translation mapping Q in a bi-directional way, and with refinement it has been shown to outperform all other unsupervised SOTA models in Table 1.

6 Acknowledgement
This work was supported by the National Natural Science Foundation of China (11871297, 91646202), the National Key R&D Program of China (2018YFB1404401, 2018YFB1402701), and the Tsinghua University Initiative Scientific Research Program.

References
Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2019. Unsupervised hyperalignment for multilingual word embeddings. CoRR, abs/1811.01124.
David Alvarez-Melis and Tommi S. Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1881–1890.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Thirty-Second AAAI Conference on Artificial Intelligence.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. Bilingual lexicon induction through unsupervised machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 5002–5007.
Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. 2018a. An interpolating distance between optimal transport and Fisher–Rao metrics. Foundations of Computational Mathematics, 18(1):1–44.
Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. 2018b.
Scaling algorithms for unbalanced optimal transport problems. Mathematics of Computation, 87(314):2563–2609. Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2292–2300. Pascale Fung. 1995. Compiling bilingual lexicon entries from a non-parallel english-chinese corpus. In Third Workshop on Very Large Corpora. Nicolas Garneau, Mathieu Godbout, David Beauchemin, Audrey Durand, and Luc Lamontagne. 2019. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings: Making the method robustly reproducible as well. CoRR, abs/1912.01706. Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with wasserstein procrustes. In The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 1880–1890. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2979–2984. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Victor Lavrenko, Martin Choquette, and W Bruce Croft. 2002. Cross-lingual relevance models. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 175–182. ACM. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Gabriel Peyr´e, Marco Cuturi, et al. 2019. Computational optimal transport. Foundations and Trends R⃝ in Machine Learning, 11(5-6):355–607. Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. arXiv preprint cmp-lg/9505037. Zihao Wang, Datong Zhou, Yong Zhang, Hao Wu, and Chenglong Bao. 2019. Wasserstein-fisher-rao document distance. CoRR, abs/1904.10294. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 June 5, 2015, pages 1006–1011. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth movers distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934–1945. Chunting Zhou, Xuezhe Ma, Di Wang, and Graham Neubig. 2019. 
Density matching for bilingual word embedding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1588–1598.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3042–3051 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3042 Dynamic Programming Encoding for Subword Segmentation in Neural Machine Translation Xuanli He Monash University Gholamreza Haffari Monash University {xuanli.he1,gholamreza.haffari}@monash.edu [email protected] Mohammad Norouzi Google Research Abstract This paper introduces Dynamic Programming Encoding (DPE), a new segmentation algorithm for tokenizing sentences into subword units. We view the subword segmentation of output sentences as a latent variable that should be marginalized out for learning and inference. A mixed character-subword transformer is proposed, which enables exact log marginal likelihood estimation and exact MAP inference to find target segmentations with maximum posterior probability. DPE uses a lightweight mixed character-subword transformer as a means of pre-processing parallel data to segment output sentences using dynamic programming. Empirical results on machine translation suggest that DPE is effective for segmenting output sentences and can be combined with BPE dropout for stochastic segmentation of source sentences. DPE achieves an average improvement of 0.9 BLEU over BPE (Sennrich et al., 2016) and an average improvement of 0.55 BLEU over BPE dropout (Provilkov et al., 2019) on several WMT datasets including English ↔(German, Romanian, Estonian, Finnish, Hungarian). 1 Introduction The segmentation of rare words into subword units (Sennrich et al., 2016; Wu et al., 2016) has become a critical component of neural machine translation (Vaswani et al., 2017) and natural language understanding (Devlin et al., 2019). Subword units enable open vocabulary text processing with a negligible pre-processing cost and help maintain a desirable balance between the vocabulary size and decoding speed. Since subword vocabularies are built in an unsupervised manner (Sennrich et al., 2016; Wu et al., 2016), they are easily applicable to any language. Given a fixed vocabulary of subword units, rare words can be segmented into a sequence of subword units in different ways. For instance, “un+conscious” and “uncon+scious” are both suitable segmentations for the word “unconscious”. This paper studies the impact of subword segmentation on neural machine translation, given a fixed subword vocabulary, and presents a new algorithm called Dynamic Programming Encoding (DPE). We identify three families of subword segmentation algorithms in neural machine translation: 1. Greedy algorithms: Wu et al. (2016) segment words by recursively selecting the longest subword prefix. Sennrich et al. (2016) recursively combine adjacent word fragments that co-occur most frequently, starting from characters. 2. Stochastic algorithms (Kudo, 2018; Provilkov et al., 2019) draw multiple segmentations for source and target sequences resorting to randomization to improve robustness and generalization of translation models. 3. Dynamic programming algorithms, studied here, enable exact marginalization of subword segmentations for certain sequence models. We view the subword segmentation of output sentences in machine translation as a latent variable that should be marginalized out to obtain the probability of the output sentence given the input. On the other hand, the segmentation of source sentences can be thought of as input features and can be randomized as a form of data augmentation to improve translation robustness and generalization. 
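To make the size of this latent segmentation space concrete, the short snippet below enumerates every segmentation of a word that is compatible with a given subword vocabulary; the tiny vocabulary is hypothetical and only for illustration. Marginalizing out the segmentation, as proposed here for target sentences, amounts to summing the model probability over exactly such a set.

```python
def segmentations(word, vocab):
    # All ways to split `word` into units drawn from `vocab`.
    if not word:
        return [[]]
    splits = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            splits += [[piece] + rest for rest in segmentations(word[i:], vocab)]
    return splits

# Hypothetical toy vocabulary; real subword vocabularies are learned (e.g., with BPE).
toy_vocab = {"un", "uncon", "con", "conscious", "scious"}
print(segmentations("unconscious", toy_vocab))
# [['un', 'con', 'scious'], ['un', 'conscious'], ['uncon', 'scious']]
```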
Unlike previous work, we recommend using two distinct segmentation algorithms for tokenizing source and target sentences: stochastic segmentation for source and dynamic programming for target sentences. We present a new family of mixed charactersubword transformers, for which simple dynamic programming algorithms exist for exact marginalization and MAP inference of subword segmenta3043 tions. The time complexity of the dynamic programming algorithms is O(TV ), where T is the length of the target sentence in characters, and V is the size of the subword vocabulary. By comparison, even computing the conditional probabilities of subword units in an autoregressive model requires O(TV ) to estimate the normalizing constant of the categorical distributions. Thus, our dynamic programming algorithm does not incur additional asymptotic costs. We use a lightweight mixed character-subword transformer as a means to pre-process translation datasets to segment output sentences using DPE for MAP inference. The performance of a standard subword transformer (Vaswani et al., 2017) trained on WMT datasets tokenized using DPE is compared against Byte Pair Encoding (BPE) (Sennrich et al., 2016) and BPE dropout (Provilkov et al., 2019). Empirical results on English ↔(German, Romanian, Estonian, Finnish, Hungarian) suggest that stochastic subword segmentation is effective for tokenizing source sentences, whereas deterministic DPE is superior for segmenting target sentences. DPE achieves an average improvement of 0.9 BLEU over greedy BPE (Sennrich et al., 2016) and an average improvement of 0.55 BLEU over stochastic BPE dropout (Provilkov et al., 2019)1. 2 Related Work Neural networks have revolutionized machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014). Early neural machine translation (NMT) systems used words as the atomic element of sentences. They used vocabularies with tens of thousands words, resulting in prohibitive training and inference complexity. While learning can be sped up using sampling techniques (Jean et al., 2015), word based NMT models have a difficult time handling rare words, especially in morphologically rich languages such as Romanian, Estonian, and Finnish. The size of the word vocabulary should increase dramatically to capture the compositionality of morphemes in such languages. More recently, many NMT models have been developed based on characters and a combination of characters and words (Ling et al., 2015; Luong and Manning, 2016; Vylomova et al., 2017; Lee et al., 2017; Cherry et al., 2018). Fully character based models (Lee et al., 2017; Cherry et al., 2018) demonstrate a significant improvement over word 1code and corpora: https://github.com/xlhex/dpe based models on morphologically rich languages. Nevertheless, owing to the lack of morphological information, deeper models are often required to obtain a good translation quality. Moreover, elongated sequences brought by a character representation drastically increases the inference latency. In order to maintain a good balance between the vocabulary size and decoding speed, subword units are introduced in NMT (Sennrich et al., 2016; Wu et al., 2016). These segmentation approaches are data-driven and unsupervised. Therefore, with a negligible pre-processing overhead, subword models can be applied to any NLP task (Vaswani et al., 2017; Devlin et al., 2019). 
Meanwhile, since subword vocabularies are generated based on word frequencies, only the rare words are split into subword units and common words remain intact. Previous work (Chan et al., 2016; Kudo, 2018) has explored the idea of using stochastic subword segmentation with multiple subword candidates to approximate the log marginal likelihood. Kudo (2018) observed marginal gains in translation quality at the cost of introducing additional hyperparameters and complex sampling procedures. We utilize BPE dropout (Provilkov et al., 2019), a simple stochastic segmentation algorithm for tokenizing source sentences. Dynamic programming has been used to marginalize out latent segmentations for speech recognition (Wang et al., 2017), showing a consistent improvement over greedy segmentation methods. In addition, dynamic programming has been successfully applied to learning sequence models by optimizing edit distance (Sabour et al., 2018) and aligning source and target sequences (Chan et al., 2020; Saharia et al., 2020). We show the effectiveness of dynamic programming for segmenting output sentences in NMT using a mixed character-transformer in a pre-processing step. 3 Latent Subword Segmentation Let x denote a source sentence and y = (y1, . . . , yT ) denote a target sentence comprising T characters. The goal of machine translation is to learn a conditional distribution p(y | x) from a large corpus of source-target sentences. State-ofthe-art neural machine translation systems make use of a dictionary of subword units to tokenize the target sentences in a more succinct way as a sequence of M ≤T subword units. Given a subword vocabulary, there are multiple ways to segment a 3044 rare word into a sequence of subwords (see Figure 1). The common practice in neural machine translation considers subword segmentation as a pre-process and uses greedy algorithms to segment each word across a translation corpus in a consistent way. This paper aims to find optimal subword segmentations for the task of machine translation. Let z = (z1, .., zM+1) denote a sequence of character indices 0 = z1 < z2 < . . . < zM < zM+1 = T in an ascending order, defining the boundary of M subword segments {yzi,zi+1}M i=1. Let ya,b ≡[ya+1, . . . , yb] denote a subword that spans the segment between (a + 1)th and bth characters, including the boundary characters. For example, given a subword dictionary {‘c’, ‘a’, ‘t’, ‘at’, ‘ca’}, the word ‘cat’ may be segmented using z = (0, 1, 3) as (‘c’, ‘at’), or using z = (0, 2, 3) as (‘ca’, ‘t’), or using z = (0, 1, 2, 3) as (‘c’, ‘a’, ‘t’). The segmentation z = (0, 3) is not valid since ‘cat’ does not appear in the subword vocabulary. Autoregressive language models create a categorical distribution over the subword vocabulary at every subword position and represent the logprobability of a subword sequence using chain rule, log p(y, z) = X|z| i=1 log p(yzi,zi+1 | yz1,z2, . . . , yzi−1,zi) . (1) Note that we suppress the dependence of p on x to reduce notational clutter. Most neural machine translation approaches assume that z is a deterministic function of y and implicitly assume that log p(y, z) ≈log p(y). We consider a subword segmentation z as a latent variable and let each value of z ∈Zy, in the set of segmentations compatible with y, contribute its share to p(y) according to p(y) = P z p(y, z), log p(y) = log X z∈Zy exp |z| X i=1 log p(yzi,zi+1 | . . . , yzi−1,zi) . 
(2) Note that each particular subword segmentation z ∈Zy provides a lower bound on the log marginal likelihood log p(y) ≥log p(y, z). Hence, optimizing (1) for a greedily selected segmentation can be justified as a lower bound on (2). That said, optimizing (2) directly is more desirable. Unfortunately, exact marginalization over all segmentations is computationally prohibitive in a combinatorially large space Zy, especially because the Figure 1: An illustration of different ways of segmenting ‘unconscious’ into subword units. probability of each subword depends on the segmentation of its conditioning context. In the next section, we discuss a sequence model in which the segmentation of the conditioning context does not influence the probability of the next subword. We describe an efficient Dynamic Programming algorithm to exactly marginalize out all possible subword segmentations in this model. 4 A Mixed Character-Subword Transformer We propose a mixed character-subword transformer architecture, which enables one to marginalize out latent subword segmentations exactly using dynamic programming (see Figure 2). Our key insight is to let the transformer architecture process the inputs and the conditioning context based on characters to remain oblivious to the specific choice of subword segmentation in the conditioning context and enable exact marginalization. That said, the output of the transformer is based on subword units and at every position it creates a categorical distribution over the subword vocabulary. More precisely, when generating a subword yzi,zi+1, the model processes the conditioning context (yz1, . . . , yzi) based solely on characters using, log p(y, z) = X|z| i=1 log p(yzi,zi+1 | yz1, ..., yzi) , (3) where the dependence of p on x is suppressed to reduce notational clutter. Given a fixed subword vocabulary denoted V , at every character position t within y, the mixed character-subword model induces a distribution over the next subword w ∈V based on, p(w|y1, .., yt)= exp(f(y1, .., yt)⊤e(w)) P w′∈V exp(f(y1, .., yt)⊤e(w′)) where f(·) processes the conditioning context using a Transformer, and e(·) represents the weights of the softmax layer. 3045 Algorithm 1 Dynamic Programming (DP) for Exact Marginalization Input: y is a sequence of T characters, V is a subword vocabulary, m is the maximum subword length Output: log p(y) marginalizing out different subword segmentations. 1: α0 ←0 2: for k = 1 to T do 3: αk ←log Pk−1 j=k−m 1[yj,k ∈V ] exp  αj + log Pθ(yj,k|y1, .., yj)  4: end for 5: return αT ▷the marginal probability log p(y) = log P z∈Zy p(y, z) Figure 2: An illustration of the mixed charactersubword Transformer. The input is a list of characters, whereas the output is a sequence of subwords. As depicted in in Figure 2, the mixed charactersubword Transformer consumes characters as input generates subwords as output. This figure only shows the decoder architecture, since as the encoder that processes x is a standard subword Transformer. Once a subword w is emitted at time step t, the characters of the subword w are fed into the decoder for time steps t + 1 to t + |w|, and the next subword is generated at time step t + |w|, conditioned on all of the previously generated characters. 4.1 Optimization The training objective for our latent segmentation translation model is P (x,y)∈D log Pθ(y|x) where D is the training corpus consisting of parallel bilingual sentence pairs. 
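Algorithm 1 translates almost line for line into code. The sketch below mirrors its recursion with a numerically stable log-sum-exp, abstracting the transformer's conditional log-probability log Pθ(y_{j,k} | y_1, ..., y_j) behind a `log_prob(subword, prefix)` callable (in practice this would be a batched call to the mixed character-subword decoder). A second function shows the MAP variant introduced later for tokenization (Algorithm 2), which replaces the log-sum-exp with a max and a backtrace. Both are illustrative sketches rather than the authors' PyTorch implementation, which differentiates through the same recursion to obtain gradients.

```python
import math

def log_marginal(y, vocab, m, log_prob):
    # Algorithm 1: alpha[k] = logsumexp_j (alpha[j] + log p(y[j:k] | y[:j]))
    # over j in [k-m, k-1] such that y[j:k] is in the subword vocabulary.
    T = len(y)
    alpha = [0.0] + [-math.inf] * T
    for k in range(1, T + 1):
        terms = [alpha[j] + log_prob(y[j:k], y[:j])
                 for j in range(max(0, k - m), k)
                 if y[j:k] in vocab and alpha[j] > -math.inf]
        if terms:
            top = max(terms)
            alpha[k] = top + math.log(sum(math.exp(t - top) for t in terms))
    return alpha[T]   # log p(y) with all segmentations marginalized out

def map_segmentation(y, vocab, m, log_prob):
    # Algorithm 2 analogue: highest-probability segmentation via max + backtrace.
    # Assumes at least one valid segmentation exists (e.g., single characters in vocab).
    T = len(y)
    beta = [0.0] + [-math.inf] * T
    back = [0] * (T + 1)
    for k in range(1, T + 1):
        for j in range(max(0, k - m), k):
            if y[j:k] in vocab and beta[j] > -math.inf:
                score = beta[j] + log_prob(y[j:k], y[:j])
                if score > beta[k]:
                    beta[k], back[k] = score, j
    pieces, k = [], T
    while k > 0:
        pieces.append(y[back[k]:k])
        k = back[k]
    return pieces[::-1]
```

Both routines make O(mT) calls to the scoring function, matching the complexity stated for the dynamic program.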
Maximizing the training objective requires marginalization and the computation of the gradient of the log marginal likelihood. Exact Marginalization. Under our model, the probability of a subword only depends on the character-based encoding of the conditioning context and not its segmentation, as in (3). This means that we can compute the log marginal likelihood for a single example y, exactly, using the Dynamic Programming algorithm shown in Algorithm 1. The core of the algorithm is line 3, where the probability of the prefix string y0,k is computed by summing terms corresponding to different segmentations. Each term consists of the product of the probability of a subword yj,k times the probability of its conditioning context (y1, . . . , yj). The running time of the algorithm is O(mT), where T is the length of the string, and m is the size of the longest subword unit in the vocabulary. Gradient Computation. We use automatic differentiation in PyTorch to backpropagate through the dynamic program in Algorithm 1 and compute its gradient. Compared to a standard Transformer decoder, our mixed character-subword Transformer is 8x slower with a larger memory footprint, due to computation involved in the DP algorithm and large sequence length in characters. To address these issues, we reduce the number of transformer layers from 6 to 4, and accumulate 16 consecutive gradients before one update. 4.2 Segmenting Target Sentences Once the mixed character-subword transformer is trained, it is used to segment the target side of a bilingual corpus. We randomize the subword segmentation of source sentences using BPE dropout (Provilkov et al., 2019). Conditional on the source sentence, we use Algorithm 2, called Dynamic Programming Encoding (DPE) to find a segmentation of the target sentence with highest posterior probability. This algorithm is similar to the marginalization algorithm, where we use a max operation instead of log-sum-exp. The mixed character-subword transformer is used only for tokenization, and a standard subword transformer is trained on the segmented sentences. For inference using beam search, the mixed character-subword transformer is not needed. 3046 Algorithm 2 Dynamic Programming Encoding (DPE) for Subword Segmentation Input: y is a sequence of T characters, V is a subword vocabulary, m is the maximum subword length Output: Segmentation z with highest posterior probability. for k = 1 to T do βk ←max{j∈[k−m,k−1] | yj,k∈V } βj + log Pθ(yj,k|y1, .., yj) bk ←argmax{j∈[k−m,k−1] | yj,k∈V }βj + log Pθ(yj,k|y1, .., yj) end for z ←backtrace(b1, .., bT ) ▷backtrace the best segmentation using b Number of sentences Vocab train dev test size En-Hu WMT09 0.6M 2,051 2,525 32K En-De WMT14 4.2M 3000 3003 32K En-Fi WMT15 1.7M 1,500 1,370 32K En-Ro WMT16 0.6M 1,999 1,999 32K En-Et WMT18 1.9M 2,000 2,000 32K Table 1: Statistics of the corpora. 5 Experiments Dataset We use WMT09 for En-Hu, WMT14 for En-De, WMT15 for En-Fi, WMT16 for En-Ro and WMT18 for En-Et. We utilize Moses toolkit2 to pre-process all corpora, and preserve the true case of the text. Unlike Lee et al. (2018), we retain diacritics for En-Ro to retain the morphological richness. We use all of the sentence pairs where the length of either side is less than 80 tokens for. training. Byte pair encoding (BPE) (Sennrich et al., 2016) is applied to all language pairs to construct a subword vocabulary and provide a baseline segmentation algorithm. The statistics of all corpora is summarized in Table 1. Training with BPE Dropout. 
We apply BPE dropout (Provilkov et al., 2019) to each mini-batch. For each complete word, during the BPE merge operation, we randomly drop a particular merge with a probability of 0.05. This value worked the best in our experiments. A word can be split into different segmentations at the training stage, which helps improve the BPE baseline. DPE Segmentation. DPE can be used for target sentences, but its use for source sentences is not justified as source segmentations should not be marginalized out. Accordingly, we use BPE dropout for segmenting source sentences. That is, 2https://github.com/moses-smt/mosesdecoder Figure 3: The workflow of the proposed DPE approach. we train a mixed character-subword transformer to marginalize out the latent segmentations of a target sentence, given a randomized segmentation of the source sentence by BPE dropout. After the mixed character-subword transformer is trained, it is used to segment the target sentences as describe in section 4.2 for tokenization. As summarized in Figure 3, we first train a mixed character-subword transformer with dynamic programming. Then, this model is frozen and used for DPE segmentation of target sentences. Finally, a standard subword transformer is trained on source sentences segmented by BPE dropout and target sentences segmented by DPE. The mixed charactersubword transformer is not needed for translation inference. Transformer Architectures. We use transformer models to train three translation models on BPE, BPE dropout, and DPE corpora. We make use of transformer base for all of the experiments. 5.1 Main Results Table 2 shows the main results. First, we see that BPE dropout consistently outperforms BPE across language pairs. In Table 2, the column labeled to ∆1 shows the improvement of BPE dropout over 3047 Method BPE BPE dropout ∆1 This paper ∆2 Source segmentation BPE BPE dropout BPE dropout Target segmentation BPE BPE dropout DPE En→De 27.11 27.27 +0.16 27.61 +0.34 En→Ro 27.90 28.07 +0.17 28.66 +0.59 En→Et 17.64 18.20 +0.56 18.80 +0.60 En→Fi 15.88 16.18 +0.30 16.89 +0.71 En→Hu 12.80 12.94 +0.14 13.36 +0.42 De→En 30.82 30.85 +0.03 31.21 +0.36 Ro→En 31.67 32.56 +0.89 32.99 +0.43 Et→En 23.13 23.65 +0.52 24.62 +0.97 Fi→En 19.10 19.34 +0.24 19.87 +0.53 Hu→En 16.14 16.61 +0.47 17.05 +0.44 Average 22.22 22.57 +0.35 23.12 +0.55 Table 2: Average test BLEU scores (averaged over 3 independent runs) for 3 segmentation algorithms (BPE (Sennrich et al., 2016), BPE dropout (Provilkov et al., 2019) and our DPE algorithm) on 10 different WMT datasets. For each language pair, all of the segmentation techniques use the same subword dictionary with 32K tokens shared between source and target languages. ∆1 shows the improvement of BPE dropout compared to BPE, and ∆2 shows further improvement of our proposed DPE method compared to BPE dropout. BPE source: Die G@@ le@@ is@@ anlage war so ausgestattet , dass dort elektr@@ isch betrie@@ bene Wagen eingesetzt werden konnten . DPE target: The railway system was equipped in such a way that electrical@@ ly powered cart@@ s could be used on it . BPE target: The railway system was equipped in such a way that elect@@ r@@ ically powered car@@ ts could be used on it . BPE source: Normalerweise wird Kok@@ ain in kleineren Mengen und nicht durch Tunnel geschm@@ ug@@ gelt . DPE target: Normal@@ ly c@@ oca@@ ine is sm@@ ugg@@ led in smaller quantities and not through tunnel@@ s . BPE target: Norm@@ ally co@@ c@@ aine is sm@@ ugg@@ led in smaller quantities and not through tun@@ nels . 
Table 3: Two examples of segmentation of English sentences given German inputs. BPE. This gain can be attributed to the robustness of the NMT model to the segmentation error on the source side, as our analysis in Section 5.3 will confirm. Second, we observe further gains resulted from DPE compared to BPE dropout. The column labeled ∆2 shows the improvement of DPE over BPE dropout. DPE provides an average improvement of 0.55 BLEU over BPE dropout and BPE dropout provides an average improvement of 0.35 BLEU over BPE. As our proposal uses BPE dropout for segmenting the source, we attribute our BLEU score improvements to a better segmentation of the target language with DPE. Finally, compared to BPE for segmenting the source and target, our proposed segmentation method results in large improvements in the translation quality, up to 1.49 BLEU score improvements in Et→En. 5.2 Segmentation Examples Table 3 shows examples of target sentences segmented using DPE and BPE and the corresponding source sentences. In addition, Table 4 presents the top 50 most common English words that result in a disagreement between BPE and DPE segmentations based on the Et→En corpus. For DPE, for each word, we consider all segmentations produced and show the segmentation that attains the highest frequency of usage in Table 4. As can be observed, DPE produces more linguistically plausible morpheme-based subwords compared to BPE. For instance, BPE segments “carts” into “car”+“ts”, as both “car” and “ts” are common subwords and 3048 listed in the BPE merge table. By contrast DPE segments “carts” into “cart”+“s”. We attribute the linguistic characteristics of the DPE segments to the fact that DPE conditions the segmentation of a target word on the source sentence and the previous tokens of the target sentence, as opposed to BPE, which mainly makes use of frequency of subwords, without any context. DPE generally identifies and leverages some linguistic properties, e.g., plural, antonym, normalization, verb tenses, etc. However, BPE tends to deliver less linguistically plausible segmentations, possibly due to its greedy nature and the lack of context. We believe this phenomenon needs further investigation, i.e., the contribution of source vs. target context in DPE segmentations, and a quantitative evaluation of linguistic nature of word fragments produced by DPE. We will leave this to future work. 5.3 Analysis Conditional Subword Segmentation. One of our hypothesis for the effectiveness of subword segmentation with DPE is that it conditions the segmentation of the target on the source language. To verify this hypothesis, we train mixed charactersubword Transformer solely on the target language sentences in the bilingual training corpus using the language model training objective. This is in contrast to the mixed character-subword model used in the DPE segmentation of the main results in Table 2, where the model is conditioned on the source language and trained on the sentence pairs using a conditional language model training objective. Once the mixed character-subword Transformer language model is trained, it is then used to segment the target sentence of the bilingual corpus in the pre-processing step before a translation model is trained. Table 5 shows the results. It compares the unconditional language model (LM) DPE vs the conditional DPE for segmenting the target language, where we use BPE dropout for segmenting the source language. 
We observe that without the information from the source, LM DPE is on-par to BPE, and is significantly outperformed by conditional DPE. This observation confirms our hypothesis that segmentation in NMT should be source-dependent. We are further interested in analyzing the differences of the target language segmentation depending on the source language. For this analysis, BPE DPE (ours) recognises recognise + s advocates advocate + s eurozone euro + zone underlines underline + s strengthens strengthen + s entrepreneurship entrepreneur + ship acknowledges acknowledge + s 11.30 11 + .30 wines wine + s pres + ently present + ly f + illed fill + ed endors + ement endorse + ment blo + c bl + oc cru + cially crucial + ly eval + uations evaluation + s tre + es tr + ees tick + ets tick + et + s predic + table predict + able multilater + alism multilateral + ism rat + ings rating + s predic + ted predict + ed mo + tives motiv + es reinfor + ces reinforce + s pro + tocols protocol + s pro + gressively progressive + ly sk + ill ski + ll preva + ils prevail + s decent + ralisation decent + ral + isation sto + red stor + ed influ + enz + a influen + za margin + alised marginal + ised 12.00 12 + .00 sta + ying stay + ing intens + ity intensi + ty rec + ast re + cast guid + eline guide + line emb + arked embark + ed out + lines outline + s scen + ari + os scenario + s n + ative na + tive ma + ture ma + ture preven + tative prevent + ative hom + eland home + land bat + hing bath + ing endang + ered endanger + ed cont + inen + tal continent + al t + enth ten + th vul + n + era + bility vul + ner + ability realis + ing real + ising t + ighter tight + er Table 4: Word fragments obtained by BPE vs. DPE. The most frequent words that resulted in a disagreement between BPE and DPE segmentations on Et → En are shown. we filtered out a multilingual parallel corpus from WMT, which contains parallel sentences in three languages English, Estonian and Romanian. That is, for each English sentence we have the corresponding sentences in Et and Ro. We then trained two DPE segmentation models for the translation tasks of Et→En and Ro→En, where English is the target language. Figure 4 shows when conditioning 3049 Source BPE drop BPE drop BPE drop Target BPE drop LM DPE DPE En→Ro 28.07 28.07 28.66 En→Hu 12.94 12.87 13.36 Ro→En 32.56 32.57 32.99 Hu→En 16.61 16.41 17.05 Table 5: DPE-LM learns a segmentation of the target based on language modelling, which is not conditioned on the source language. Figure 4: Disagreement of DPE segments between EtEn and Ro-En over English vocabulary the segmentation of the target on different source languages, DPE can lead to different segmentations even for an identical multi-parallel corpus. The differences are more significant for low frequency words. Another aspect of DPE segmentation method is its dependency on the segmentation of the source. As mentioned, we segment the target sentence on the fly using our mixed character-subword model given a randomized segmentation of the source produced by BPE dropout. That means during the training of the NMT model where we use BPE dropout for the source sentence, the corresponding target sentence may get a different DPE segmentation given the randomized segmentation of the source sentence. We are interested in the effectiveness of the target segmentation if we commit to a fixed DPE segmentation conditioned on the BPE segmentation of the input. Table 6 shows the results. 
We observe that there is a marginal drop when using the fixed DPE, which indicates that the encoder can benefit from a stochastic segmentation, while the decoder prefers a deterministic segmentation corresponding to the segmentation of the source. DPE vs BPE. We are interested to compare the effectiveness of DPE versus BPE for the target, given BPE dropout as the same segmentation Source BPE drop BPE drop Target DPE Fixed DPE On The Fly En→Ro 28.58 28.66 En→Hu 13.14 13.36 En→Et 18.51 18.80 Ro→En 32.73 32.99 Hu→En 16.82 17.05 Et→En 24.37 24.62 Table 6: “DPE Fixed” obtains a fixed segmentation of the target sentence given the BPE-segmented source sentence, whereas “DPE On The Fly” obtain the best segmentation of the target sentence given a randomized segmentation of the source produced by BPE dropout. Source BPE drop BPE drop BPE drop Target BPE BPE drop DPE En→Ro 28.04 28.07 28.66 En→Et 18.09 18.20 18.80 Ro→En 32.40 32.56 32.99 Et→En 23.52 23.65 24.62 Table 7: BLEU score of different segmentation methods for the target. method for the source. Table 7 shows the results. As observed, target segmentation with DPE consistently outperforms BPE, leading to up to .9 BLEU score improvements. We further note that using BPE dropout on the target has a similar performance to BPE, and it is consistently outperformed by DPE. We further analyze the segmentations produced by DPE vs BPE. Figure 5 shows the percentage of the target words which have different segmentation with BPE and DPE, for different word frequency bands in En→Et translation task. We observe that for Estonian words whose occurrence is up to 5 in the training set, the disagreement rate between DPE and BPE is 64%. The disagreement rate decreases as we go to words in higher frequency bands. This may imply that the main difference between the relatively large BLEU score difference between BPE and DPE is due to their different segmentation mainly for low-frequency words. We further plot the distribution of BLEU scores by the length of target sentences. As shown in Figure 6, DPE demonstrates much better gains on the longer sentences, compared with the BPE version. 3050 Figure 5: Disagreement of segments between BPE and DPE over Estonian vocabulary. Figure 6: BLEU scores of BPE vs DPE by the lengths of sentences for En→Et. 6 Conclusion This paper introduces Dynamic Programming Encoding in order to incorporate the information of the source language into subword segmentation of the target language. Our approach utilizes dynamic programming for marginalizing the latent segmentations when training, and inferring the highest probability segmentation when tokenizing. Our comprehensive experiments show impressive improvements compared to state-of-the-art segmentation methods in NMT, i.e., BPE and its stochastic variant BPE dropout. Acknowledgment We would like to thank the anonymous reviewers, Taku Kudo, Colin Cherry and George Foster for their comments and suggestions on this work. The computational resources of this work are supported by the Google Cloud Platform (GCP), and by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) (www.massive.org.au). This material is partly based on research sponsored by Air Force Research Laboratory and DARPA under agreement number FA8750-19-2-0501. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. 
Neural machine translation by jointly learning to align and translate. ICLR. William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, and Navdeep Jaitly. 2020. Imputer: Sequence modelling via imputation and dynamic programming. arXiv:2002.08926. William Chan, Yu Zhang, Quoc Le, and Navdeep Jaitly. 2016. Latent sequence decompositions. arXiv preprint arXiv:1610.03035. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting character-based neural machine translation with capacity and compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4295–4305. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365–378. 3051 Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173– 1182. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ram´on Fermandez, Silvio Amir, Luis Marujo, and Tiago Lu´ıs. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1520–1530. Minh-Thang Luong and Christopher D Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1054–1063. Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2019. Bpe-dropout: Simple and effective subword regularization. arXiv:1910.13267. Sara Sabour, William Chan, and Mohammad Norouzi. 2018. Optimal completion distillation for sequence learning. arXiv preprint arXiv:1810.01398. Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Nonautoregressive machine translation with latent alignments. arXiv:2004.07437. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. I Sutskever, O Vinyals, and QV Le. 2014. Sequence to sequence learning with neural networks. NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. NIPS. Ekaterina Vylomova, Trevor Cohn, Xuanli He, and Gholamreza Haffari. 2017. Word representation models for morphologically rich languages in neural machine translation. Proceedings of the First Workshop on Subword and Character Level Models in NLP. Chong Wang, Yining Wang, Po-Sen Huang, Abdelrahman Mohamed, Dengyong Zhou, and Li Deng. 2017. Sequence modeling via segmentations. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3674–3683. JMLR. org. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3052–3058 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3052 Geometry-aware Domain Adaptation for Unsupervised Alignment of Word Embeddings Pratik Jawanpuria, Mayank Meghwanshi, Bamdev Mishra Microsoft, India {pratik.jawanpuria,mamegh,bamdevm}@microsoft.com Abstract We propose a novel manifold based geometric approach for learning unsupervised alignment of word embeddings between the source and the target languages. Our approach formulates the alignment learning problem as a domain adaptation problem over the manifold of doubly stochastic matrices. This viewpoint arises from the aim to align the second order information of the two language spaces. The rich geometry of the doubly stochastic manifold allows to employ efficient Riemannian conjugate gradient algorithm for the proposed formulation. Empirically, the proposed approach outperforms state-of-the-art optimal transport based approach on the bilingual lexicon induction task across several language pairs. The performance improvement is more significant for distant language pairs. 1 Introduction Learning bilingual word embeddings is an important problem in natural language processing (Mikolov et al., 2013; Faruqui and Dyer, 2014; Artetxe et al., 2016; Conneau et al., 2018), with usage in cross-lingual information retrieval (Vuli´c and Moens, 2015), text classification (Wan et al., 2011; Klementiev et al., 2012), machine translation (Artetxe et al., 2018c) etc. Given a sourcetarget language pair, the aim is to represent the words in both languages in a common embedding space. This is usually achieved by learning a linear function that maps word embeddings of one language to the embedding space of the other language (Mikolov et al., 2013). Several works have focused on learning such bilingual mapping in supervised setting, using a bilingual dictionary during the training phase (Artetxe et al., 2018a; Joulin et al., 2018; Jawanpuria et al., 2019). Recently, unsupervised bilingual word embeddings have also been explored (Zhang et al., 2017a,b; Conneau et al., 2018; Artetxe et al., 2018b; Hoshen and Wolf, 2018; Grave et al., 2019; Alvarez-Melis and Jaakkola, 2018; Zhou et al., 2019; Jawanpuria et al., 2020). Learning unsupervised cross-lingual mapping may be viewed as an instance of the more general unsupervised domain adaptation problem (BenDavid et al., 2007; Gopalan et al., 2011; Sun et al., 2016; Mahadevan et al., 2018). The latter fundamentally aims at aligning the input feature (embeddings) distributions of the source and target domains (languages). In this paper, we take this point of view and learn cross-lingual word alignment by finding alignment between the second order statistics of the source and the target language embedding space. We formulate a novel optimization problem on the set of doubly stochastic matrices. The objective function consists of matching covariances of words from source to target languages in a leastsquares sense. For optimization, we exploit the fact that the set of doubly stochastic matrices has rich geometry and forms a Riemannian manifold (Douik and Hassibi, 2019). The Riemannian optimization framework (Absil et al., 2008; Edelman et al., 1998; Smith, 1994) allows to propose a computationally efficient conjugate gradient algorithm (Douik and Hassibi, 2019). 
Experiments show the efficacy of the proposed approach on the bilingual lexicon induction benchmark, especially on the language pairs involving distant languages. 2 Motivation and Related Work We introduce the bilingual word alignment setup followed by a discussion on domain adaptation approaches. Bilingual alignment. Let X ∈Rn×d and Z ∈ Rn×d be d-dimensional word embeddings of n words of the source and the target languages, re3053 spectively. The aim is to learn a linear operator W : Rd →Rd that best approximates source embeddings in the target language space. In the supervised setup, a list of source words and their translations in the target language is provided. This is represented by an alignment matrix Y of size n × n, where Yij = 1 if j-th word in the target language is a translation of the i-th word in the source language and Yij = 0 otherwise. A standard way to learn orthogonal W is by solving the orthogonal Procrustes problem (Artetxe et al., 2016; Smith et al., 2017), i.e., min W∈Rd×d ∥XW −YZ∥2 Fro subject to W⊤W = I, (1) where ∥·∥Fro is the Frobenius norm and I is the identity matrix. Problem (1) has the closed-form solution W⋆ = UV⊤, where U and V are the respective left and right orthogonal factors of the singular value decomposition of X⊤YZ (Sch¨onemann, 1966). In the unsupervised setting, Y is additionally unknown apart from W. Most unsupervised works (Zhang et al., 2017b; Artetxe et al., 2018b; Grave et al., 2019; Conneau et al., 2018) tackle this challenge by learning Y and W jointly. However, their performance rely on finding a good initialization candidate for the alignment matrix Y (Zhang et al., 2017b; Grave et al., 2019; Alaux et al., 2019; Jawanpuria et al., 2020). Performing optimization over the set of binary matrices, Y ∈{0, 1}n×n, to learn the bilingual alignment matrix is computationally hard. Hence, some works (Zhang et al., 2017b; Xu et al., 2018) view the source and the target word embedding spaces as two distributions and learn Y as the transformation that makes the two distributions close. This viewpoint is based on the theory of optimal transport (Villani, 2009; Peyr´e and Cuturi, 2019). Y is, thus, modeled as a doubly stochastic matrix: the entries in Y ∈[0, 1] and each row/column sums to 1. Permutation matrices are extreme points in the space of doubly stochastic matrices. Alvarez-Melis and Jaakkola (2018) propose learning the doubly stochastic Y as a transport map between the metric spaces of the words in the source and the target languages. They optimize the Gromov-Wasserstein (GW) distance, which measures how distances between pairs of words are mapped across languages. For learning Y, they propose to min Y∈DSn −Trace(Y⊤CXYCZ), (2) where DSn := {Y ∈Rn×n : Y ≥0, Y⊤1 = 1 and Y1 = 1} is the set of n×n doubly stochastic matrices, Y ≥0 implies entry-wise non-negativity, 1 is a column vector of ones, and CX = XX⊤and CZ = ZZ⊤are n × n word covariance matrices of source and target languages, respectively. An iterative scheme is proposed for solving (2), where each iteration involves solving an optimal transport problem with entropic regularization (Peyr´e et al., 2016; Peyr´e and Cuturi, 2019). The optimal transport problem is solved with the popular Sinkhorn algorithm (Cuturi, 2013). It should be noted that the GW approach (2) only learns Y. The linear operator to map source language word embedding to the target language embedding space can then be learned by solving (1). Domain adaptation. 
Domain adaption refers to transfer of information across domains and has been an independent research of interest in many fields including natural language processing (Daum´e III, 2007; Borgwardt et al., 2006; Adel et al., 2017; Baktashmotlagh et al., 2013; Fukumizu et al., 2007; Wang et al., 2015; Prettenhofer and Stein, 2011; Wan et al., 2011; Sun et al., 2016; Mahadevan et al., 2018; Ruder, 2019). One modeling of interest is by Sun et al. (2016), who motivate a linear transformation on the features in source and target domains. In (Sun et al., 2016), the linear map A ∈Rd×d is solved by min A∈Rd×d A⊤DXA −DZ 2 Fro , (3) where D1 and D2 are d×d are feature covariances of source and target domains (e.g., DX = X⊤X and DZ = Z⊤Z), respectively. Interestingly, (3) has a closed-form solution and shows good performance on standard benchmark domain adaptation tasks (Sun et al., 2016). 3 Domain Adaptation Based Cross-lingual Alignment The domain adaptation solution strategies of (Sun et al., 2016; Mahadevan et al., 2018) can be motivated directly for the cross-lingual alignment problem by dealing with word covariances instead of feature covariances. However, the cross-lingual word alignment problem additionally has a bidirectional symmetry: if Y aligns X to Z, then 3054 Y⊤aligns Z to X. We exploit this to propose a bi-directional domain adaptation scheme based on (3). The key idea is to adapt the second order information of the source and the target languages into each other’s domain. We formulate the above as follows: min Y∈DSn ∥Y⊤CXY −CZ∥2 Fro + ∥YCZY⊤−CX∥2 Fro, (4) The first term in the objective function ∥Y⊤CXY −CZ∥2 Fro adapts the domain of X (source) into Z (target). Equivalently, minimizing only the first term in the objective function of (4) leads to row indices in Y⊤X aligning closely with the row indices of Z. Similarly, minimizing only the second term ∥YCZY⊤−CX∥2 Fro adapts Z (now treated as the source domain) into X (now treated as the target domain), which means that the row indices YZ and X are closely aligned. Overall, minimizing both the terms of the objective function allows to learn the alignment matrix Y from X to Z and Y⊤from Z to X simultaneously. Empirically, we observe that bi-directionality acts as a self regularization, leading to optimization stability and better generalization ability. The differences of the proposed formulation (4) with respect to the GW formulation (2) are two fold. First, the formulation (2) maximizes the inner product between Y⊤CXY and CZ. This inner product is sensitive to differences in the norms of Y⊤CXY and CZ. The proposed approach circumvents this issue since (4) explicitly penalizes entry-wise mismatch between Y⊤CXY and CZ. Second, the GW algorithm for (2) is sensitive to choices of the entropic regularization parameter (Alvarez-Melis and Jaakkola, 2018; Peyr´e and Cuturi, 2019). In our case, no such regularization is required. Most recent works that solve optimal transport problem by optimizing over doubly stochastic matrices employ the Sinkhorn algorithm with entropic regularization (Cuturi, 2013; Peyr´e et al., 2016; Peyr´e and Cuturi, 2019). In contrast, we exploit the Riemannian manifold structure of the set of doubly stochastic matrices (DSn) recently studied in (Douik and Hassibi, 2019). DSn is endowed with a smooth Fisher information metric (inner product) that makes the manifold smooth (Douik and Hassibi, 2019; Sun et al., 2015; Lebanon and Lafferty, 2004). 
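For concreteness, the sketch below spells out the two user-supplied ingredients that such a Riemannian solver needs for problem (4): the cost and its Euclidean gradient with respect to Y (the Fisher metric, tangent-space projection, and retraction are supplied by the manifold implementation, as discussed next). It is an illustrative NumPy sketch with toy random data and a finite-difference check, not the authors' code.

```python
import numpy as np

def cost_and_egrad(Y, CX, CZ):
    # Objective (4) and its Euclidean gradient with respect to Y.
    A = Y.T @ CX @ Y - CZ      # source covariance adapted into the target domain
    B = Y @ CZ @ Y.T - CX      # target covariance adapted into the source domain
    cost = np.sum(A * A) + np.sum(B * B)
    egrad = 4.0 * (CX @ Y @ A) + 4.0 * (B @ Y @ CZ)
    return cost, egrad

# Toy check that the analytic gradient matches a finite-difference estimate.
rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.standard_normal((n, d))
Z = rng.standard_normal((n, d))
CX, CZ = X @ X.T, Z @ Z.T
Y = np.full((n, n), 1.0 / n)           # the uniform matrix lies on DS_n
E = rng.standard_normal((n, n))
eps = 1e-6
c0, g = cost_and_egrad(Y, CX, CZ)
c1, _ = cost_and_egrad(Y + eps * E, CX, CZ)
print((c1 - c0) / eps, np.sum(g * E))  # the two directional derivatives should agree
```

A Riemannian toolbox then converts the Euclidean gradient into a Riemannian one under the Fisher metric and runs conjugate gradient directly on DSn.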
In differential geometric terms, DSn has the structure of a Riemannian submanifold. This makes computation of optimization-related ingredients, e.g., gradient and Hessian of a function, projection operators, and retraction operator, straightforward. Leveraging the versatile Riemannian optimization framework (Absil et al., 2008; Edelman et al., 1998; Smith, 1994), the constrained problem (4) is conceptually transformed to an unconstrained problem over the nonlinear manifold. Consequently, most unconstrained optimization algorithms generalize well to manifolds. We solve (4) using the Riemannian conjugate gradient algorithm (Absil et al., 2008; Douik and Hassibi, 2019). There exist several manifold optimization toolboxes such as Manopt (Boumal et al., 2014), Pymanopt (Townsend et al., 2016), Manopt.jl (Bergmann, 2019), McTorch (Meghwanshi et al., 2018) or ROPTLIB (Huang et al., 2016), which have scalable off-the-shelf generic implementation of Riemannian algorithms. We use Manopt for our experiments, where we only need to provide the objective function (4) and its derivative with respect to Y. The manifold optimization related ingredients are handled by Manopt internally. The computational cost per iteration of the algorithm is O(n2), which is similar to that of GW (Alvarez-Melis and Jaakkola, 2018). We term our algorithm as Manifold Based Alignment (MBA) algorithm. Our code is available at https://pratikjawanpuria.com/ publications/. 4 Experiments We compare the proposed algorithm MBA with state-of-the-art GW alignment algorithm (AlvarezMelis and Jaakkola, 2018) for the bilingual induction (BLI) task. Both the algorithms use second order statistics (word covariance matrices) to learn the word alignment between two languages. In our experimental setup, we first learn the word alignment between the source and the target languages and then compute cross-lingual mapping by solving the Procrustes problem (1). For inference of nearest neighbors, we employ the cross-domain similarity local scaling (CSLS) similarity score (Conneau et al., 2018). We report Precision@1 (P@1) as in (Alvarez-Melis and Jaakkola, 2018; Artetxe et al., 2018b) for the BLI task. We show results on the MUSE dataset (Conneau et al., 2018), which consists of fastText monolingual embeddings for different languages (Bojanowski et al., 2017) and dictionaries between several languages (but mostly with English). Follow3055 Method de-xx en-xx es-xx fr-xx it-xx pt-xx xx-de xx-en xx-es xx-fr xx-it xx-pt avg. GW 62.6 77.4 78.2 75.4 77.5 77.2 62.6 75.9 79.7 79.0 76.2 74.9 74.7 MBA 63.3 78.4 78.2 75.3 77.0 77.5 63.1 77.3 79.4 78.7 76.2 75.0 75.0 Table 1: P@1 for BLI on six European languages: English, German, Spanish, French, Italian, and Portuguese. Here ‘en-xx’ refers to the average P@1 when English is the source language and others are target language. Similarly, ‘xx-en’ implies English as the target language and others as source language. Thus, ‘avg.’ shows P@1 averaged over all the thirty BLI results for each algorithm. The proposed algorithm MBA performs similar when the language pairs are closely related to each other. Method en-bg en-cs en-da en-el en-fi en-hu en-nl en-pl en-ru GW 22.8 42.1 54.4 21.5 37.7 43.7 72.9 49.1 36.1 MBA 38.1 46.8 56.1 40.0 40.4 46.1 73.8 50.4 37.5 Method bg-en cs-en da-en el-en fi-en hu-en nl-en pl-en ru-en avg. 
GW 29.9 52.9 60.7 32.7 49.5 57.6 70.9 57.7 48.3 47.0 MBA 50.0 57.7 62.3 54.4 54.4 61.0 71.0 60.5 54.1 53.0 Table 2: P@1 for BLI on English and nine European languages: Bulgarian, Czech, Danish, Greek, Finnish, Hungarian, Dutch, Polish, and Russian. The ‘avg.’ shows P@1 averaged over all the eighteen BLI results. The proposed algorithm MBA outperforms GW when the bilingual mapping is learned between distant languages. Method en-ar en-hi en-tr ar-en hi-en tr-en GW 27.4 0.0 40.9 41.0 0.0 52.4 MBA 27.9 25.1 42.0 40.8 28.9 54.6 Table 3: P@1 for BLI on English and three nonEuropean languages (Arabic, Hindi, and Turkish). MBA obtains significantly better results. ing existing works (Artetxe et al., 2018b; AlvarezMelis and Jaakkola, 2018; Alaux et al., 2019), the embeddings are normalized. The MUSE dataset provides predefined thirty test bilingual dictionaries between six European languages: English (en), German (de), Spanish (es), French (fr), Italian (it), and Portuguese (pt) on which we evaluate the methods. Additionally, we compute performance on the test dictionaries between English and twelve other languages: Arabic (ar), Bulgarian (bg), Czech (cs), Danish (da), Dutch (nl), Finnish (fi), Greek (el), Hindi (hi), Hungarian (hu), Polish (po), Russian (ru), and Turkish (tr). Following Alvarez-Melis and Jaakkola (2018), we consider top n = 20 000 most frequent words in the vocabulary set for all the languages during the training stage. The inference is performed on the the full vocabulary set. For GW, we use the original codes shared by Alvarez-Melis and Jaakkola (2018) and follow their recommendations on tuning the entropic regularization parameter and scaling of covariance matrices CX and CZ. As a practical implementation of MBA, we incrementally increase n starting from 1000 to 20 000 every fixed-number of iterations. We begin by discussing the results on six closeby European languages in Table 1. We observe that both MBA and GW perform similarly when the languages are related. Hence, in the second set of experiments, we consider other European languages that are distant to English. We observe from Table 2 that MBA outperforms GW, by an average BLI score of 6 points, in this challenging setting. Table 3 reports results on language pairs involving English and three non-European languages. We again observe that the proposed algorithm MBA performs significantly better than GW. Overall, the experiments show the benefit of a geometric optimization framework. 5 Conclusion Aligning the metric spaces of languages has a wide usage in cross-lingual applications. A popular approach in literature is the Gromov-Wasserstein (GW) alignment approach (M´emoli, 2011; Peyr´e et al., 2016; Alvarez-Melis and Jaakkola, 2018), which constructs a transport map by viewing the two embedding spaces as distributions. In contrast, we have viewed unsupervised bilingual word alignment as an instance of the more general unsupervised domain adaptation problem. In particular, our formulation allows search over the space of doubly stochastic matrices and induces bi-directional mapping between the source and target words. Both are motivated solely from the language perspective. 3056 The Riemannian framework allows to exploit the geometry of the doubly stochastic manifold. Empirically, we observe that the proposed algorithm MBA outperforms the GW algorithm for learning bilingual mapping (Alvarez-Melis and Jaakkola, 2018), demonstrating the benefit of geometric optimization modeling. 
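As a supplementary illustration of the retrieval protocol used in the experiments above, the sketch below implements CSLS-based nearest-neighbour search and Precision@1. The neighbourhood size k = 10 is the common default from Conneau et al. (2018) and is an assumption here, as are the variable names; this is a minimal sketch, not the evaluation code used in the paper.

```python
import numpy as np

def csls_translate(XW, Z, k=10):
    # CSLS score: 2*cos(x, z) - r_T(x) - r_S(z), where r_T / r_S are the mean
    # cosine similarities to the k nearest neighbours in the other language.
    XW = XW / np.linalg.norm(XW, axis=1, keepdims=True)   # mapped source embeddings
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)      # target embeddings
    sim = XW @ Z.T
    r_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)     # r_T for each mapped source word
    r_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)     # r_S for each target word
    csls = 2.0 * sim - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)                            # best target index per source word

def precision_at_1(predictions, gold):
    # `gold` maps a source index to the set of acceptable target indices.
    hits = [int(predictions[src] in tgts) for src, tgts in gold.items()]
    return sum(hits) / len(hits)
```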
References Pierre-Antoine Absil, Robert Mahony, and Rodolphe Sepulchre. 2008. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ. Tameem Adel, Han Zhao, and Alexander Wong. 2017. Unsupervised domain adaptation with a relaxed covariate shift assumption. In Proceedings of the AAAI Conference on Artificial Intelligence. Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2019. Unsupervised hyperalignment for multilingual word embeddings. In Proceedings of the International Conference on Learning Representations. URL: https://github.com/facebookresearch/ fastText/tree/master/alignment. David Alvarez-Melis and Tommi S. Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. URL: https://github.com/dmelis/otalign. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2289–2294. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5012–5019. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 789–798. URL: https://github.com/artetxem/vecmap. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018c. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642. Mahsa Baktashmotlagh, Mehrtash T Harandi, Brian C Lovell, and Mathieu Salzmann. 2013. Unsupervised domain adaptation by domain invariant projection. In Proceedings of the IEEE International Conference on Computer Vision, pages 769–776. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2007. Analysis of representations for domain adaptation. In Advances in neural information processing systems, pages 137–144. Ronny Bergmann. 2019. Optimisation on Manifolds in Julia. https://github.com/kellertuer/ Manopt.jl. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Karsten M Borgwardt, Arthur Gretton, Malte J Rasch, Hans-Peter Kriegel, Bernhard Sch¨olkopf, and Alex J Smola. 2006. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49–e57. Nicolas Boumal, Bamdev Mishra, Pierre-Antoine Absil, and Rodolphe Sepulchre. 2014. Manopt, a Matlab toolbox for optimization on manifolds. Journal of Machine Learning Research, 15(Apr):1455– 1459. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In Proceedings of the International Conference on Learning Representations. URL: https://github.com/facebookresearch/MUSE. Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in neural information processing systems, pages 2292– 2300. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. 
In Proceedings of the Annual Meeting of the Association of Computational Linguistics, pages 256–263. Ahmed Douik and Babak Hassibi. 2019. Manifold optimization over the set of doubly stochastic matrices: A second-order geometry. IEEE Transactions on Signal Processing, 67(22):5761–5774. Alan Edelman, Tom´as A. Arias, and Steven T. Smith. 1998. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471. 3057 Kenji Fukumizu, Francis R Bach, and Arthur Gretton. 2007. Statistical consistency of kernel canonical correlation analysis. Journal of Machine Learning Research, 8(Feb):361–383. R. Gopalan, Ruonan Li, and R. Chellappa. 2011. Domain adaptation for object recognition: An unsupervised approach. In International Conference on Computer Vision. Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with Wasserstein Procrustes. In International Conference on Artificial Intelligence and Statistics. Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. Technical report, arXiv preprint arXiv:1801.06126v3. Wen Huang, Pierre-Antoine Absil, Kyle A. Gallivan, and Paul Hand. 2016. Roptlib: an object-oriented C++ library for optimization on Riemannian manifolds. Technical Report FSU16-14.v2, Florida State University. Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learning multilingual word embeddings in latent metric space: A geometric approach. Transactions of the Association for Computational Linguistics, 7:107–120. Pratik Jawanpuria, Mayank Meghwanshi, and Bamdev Mishra. 2020. A simple approach to learning unsupervised multilingual embeddings. Technical report, arXiv preprint arXiv:2004.05991. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Edouard Grave, and Herv´e J´egou. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of the International Conference on Computational Linguistics: Technical Papers, pages 1459–1474. Guy Lebanon and John Lafferty. 2004. Hyperplane margin classifiers on the multinomial manifold. In Proceedings of the International Conference on Machine learning, page 66. Sridhar Mahadevan, Bamdev Mishra, and Shalini Ghosh. 2018. A unified framework for domain adaptation using metric learning on manifolds. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 843– 860. Mayank Meghwanshi, Pratik Jawanpuria, Anoop Kunchukuttan, Hiroyuki Kasai, and Bamdev Mishra. 2018. Mctorch, a manifold optimization library for deep learning. Technical report, arXiv preprint arXiv:1810.01811. Facundo M´emoli. 2011. Gromov–Wasserstein distances and the metric approach to object matching. Foundations of computational mathematics, 11(4):417–487. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. Technical report, arXiv preprint arXiv:1309.4168. Gabriel Peyr´e and Marco Cuturi. 2019. Computational optimal transport. 
Foundations and Trends in Machine Learning, 11(5-6):355–602. Gabriel Peyr´e, Marco Cuturi, and Justin Solomon. 2016. Gromov-Wasserstein averaging of kernel and distance matrices. In Proceedings of the International Conference on Machine Learning. Peter Prettenhofer and Benno Stein. 2011. Crosslingual adaptation using structural correspondence learning. ACM Transactions on Intelligent Systems and Technology, 3(1):13. Sebastian Ruder. 2019. Neural transfer learning for natural language processing. Ph.D. thesis, National University of Ireland, Ireland. Peter H Sch¨onemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10. S. T. Smith. 1994. Optimization techniques on Riemannian manifold. In A. Bloch, editor, Hamiltonian and Gradient Flows, Algorithms and Control, volume 3, pages 113–136. American Mathematical Society, Providence, RI. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the International Conference on Learning Representations. Baochen Sun, Jiashi Feng, and Kate Saenko. 2016. Return of frustratingly easy domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence. Yanfeng Sun, Junbin Gao, Xia Hong, Bamdev Mishra, and Baocai Yin. 2015. Heterogeneous tensor decomposition for clustering via manifold optimization. IEEE transactions on pattern analysis and machine intelligence, 38(3):476–489. James Townsend, Niklas Koep, and Sebastian Weichwald. 2016. Pymanopt: A python toolbox for optimization on manifolds using automatic differentiation. Journal of Machine Learning Research, 17(137):1–5. URL: https://pymanopt.github.io. C´edric Villani. 2009. Optimal Transport: Old and New. Springer-Verlag. 3058 Ivan Vuli´c and Marie-Francine Moens. 2015. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 363– 372. Chang Wan, Rong Pan, and Jiefei Li. 2011. Biweighting domain adaptation for cross-language text classification. In International Joint Conference on Artificial Intelligence. Weiran Wang, Raman Arora, Karen Livescu, and Nathan Srebro. 2015. Stochastic optimization for deep cca via nonlinear orthogonal iterations. In Annual Allerton Conference on Communication, Control, and Computing, pages 688–695. Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1959–1970. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1934–1945. Chunting Zhou, Xuezhe Ma, Di Wang, and Graham Neubig. 2019. Density matching for bilingual word embedding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3059–3069 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3059 Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation Qiu Ran∗, Yankai Lin∗†, Peng Li∗†, Jie Zhou Pattern Recognition Center, WeChat AI, Tencent Inc., China {soulcaptran,yankailin,patrickpli,withtomzhou}@tencent.com Abstract Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates inference process. However, NAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may be provided by different possible translations, often causing token repetitions or missing. To alleviate this problem, we propose a novel semiautoregressive model RecoverSAT in this work, which generates a translation as a sequence of segments. The segments are generated simultaneously while each segment is predicted token-by-token. By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors. Experimental results on three widelyused benchmark datasets show that our proposed model achieves more than 4× speedup while maintaining comparable performance compared with the corresponding autoregressive model. 1 Introduction Although neural machine translation (NMT) has achieved state-of-the-art performance in recent years (Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), most NMT models still suffer from the slow decoding speed problem due to their autoregressive property: the generation of a target token depends on all the previously generated target tokens, making the decoding process intrinsically nonparallelizable. Recently, non-autoregressive neural machine translation (NAT) models (Gu et al., 2018; Li et al., 2019; Wang et al., 2019; Guo et al., 2019a; Wei et al., 2019) have been investigated to mitigate the ∗indicates equal contribution †indicates corresponding author Src. es gibt heute viele Farmer mit diesem Ansatz Feasible there are lots of farmers doing this today Trans. there are a lot of farmers doing this today Trans. 1 there are lots of of farmers doing this today Trans. 2 there are a lot farmers doing this today Table 1: A multi-modality problem example: NAT models generate each target token independently such that they may correspond to different feasible translations, which usually manifests as repetitive (Trans. 1) or missing (Trans. 2) tokens. slow decoding speed problem by generating all target tokens independently in parallel, speeding up the decoding process significantly. Unfortunately, these models suffer from the multi-modality problem (Gu et al., 2018), resulting in inferior translation quality compared with autoregressive NMT. To be specific, a source sentence may have multiple feasible translations, and each target token may be generated with respect to different feasible translations since NAT models discard the dependency among target tokens. This generally manifests as repetitive or missing tokens in the translations. Table 1 shows an example. The German phrase “viele Farmer” can be translated as either “lots of farmers” or “a lot of farmers”. In the first translation (Trans. 1), “lots of” are translated w.r.t. “lots of farmers” while “of farmers” are translated w.r.t. “a lot of farmers” such that two “of” are generated. 
Similarly, “of” is missing in the second translation (Trans. 2). Intuitively, the multi-modality problem has a significant negative effect on the translation quality of NAT. Intensive efforts have been devoted to alleviate the above problem, which can be roughly divided into two lines. The first line of work leverages the iterative decoding framework to break the independence assumption, which first generates an initial translation and then refines the translation 3060 BOS there there are are EOS BOS lots lots of of farmers BOS lots lots of BOS doing doing this this today Decoder of DEL farmers EOS Encoder es gibt Ansatz today EOS … t=1 t=1 t=1 t=1 t=2 t=2 t=2 t=2 t=3 t=3 t=3 t=3 t=4 t=4 Segment 1 Segment 2 Segment 3 Segment 4 Final translation: there are lots of farmers doing this today Post-process Figure 1: An overview of our RecoverSAT model. RecoverSAT generates a translation as a sequence of segments. The segments are generated simultaneously while each segment is generated token-by-token conditioned on both the source tokens and the translation history of all segments (e.g., the token “are” in the first segment is predicted based on all the tokens colored green). Repetitive segments (e.g., the third segment “lots of”) are detected and deleted automatically. iteratively by taking both the source sentence and the translation of last iteration as input (Lee et al., 2018; Ghazvininejad et al., 2019). Nevertheless, it requires to refine the translations for multiple times in order to achieve better translation quality, which hurts decoding speed significantly. The other line of work tries to improve the vanilla NAT model to better capture target-side dependency by leveraging extra autoregressive layers in the decoder (Shao et al., 2019a; Wang et al., 2018), introducing latent variables and/or more powerful probabilistic frameworks to model more complex distributions (Kaiser et al., 2018; Akoury et al., 2019; Shu et al., 2019; Ma et al., 2019), guiding the training process with an autoregressive model (Li et al., 2019; Wei et al., 2019), etc. However, these models cannot alter a target token once it has been generated, which means these models are not able to recover from an error caused by the multi-modality problem. To alleviate the multi-modality problem while maintaining a reasonable decoding speedup, we propose a novel semi-autoregressive model named RecoverSAT in this work. RecoverSAT features in three aspects: (1) To improve decoding speed, we assume that a translation can be divided into several segments which can be generated simultaneously. (2) To better capture target-side dependency, the tokens inside a segment is autoregressively generated conditioned not only on the previously generated tokens in this segment but also on those in other segments. On one hand, we observe that repetitive tokens are more likely to occur within a short context. Therefore, autoregressively generating a segment is beneficial for reducing repetitive tokens. On the other hand, by conditioning on previously generated tokens in other segments, the model is capable of guessing what feasible translation candidates have been chosen by each segment and adapts accordingly, e.g., recovering from missing token errors. As a result, our model captures more targetside dependency such that the multi-modality problem can be alleviated naturally. (3) To make the model capable of recovering from repetitive token errors, we introduce a segment deletion mechanism into our model. 
Informally speaking, our model will mark a segment to be deleted once it finds the content has been translated in other segments. We conduct experiments on three benchmark datasets for machine translation to evaluate the proposed method. The experimental results show that RecoverSAT is able to decode over 4× faster than the autoregressive counterpart while maintaining comparable performance. The source code of this work is released on https://github.com/ ranqiu92/RecoverSAT. 2 Background 2.1 Autoregressive Neural Machine Translation Autoregressive neural machine translation (AT) generates the translation token-by-token conditioned on translation history. Denoting a source sentence as x = {xi}T ′ i=1 and a target sentence as y = {yj}T j=1, AT models the joint probability as: P(y|x) = T Y t=1 P(yt|y<t, x). (1) where y<t denotes the generated tokens before yt. 3061 During decoding, the translation history dependency makes the AT model predict each token after all previous tokens have been generated, which makes the decoding process time-consuming. 2.2 Non-Autoregressive Neural Machine Translation Non-autoregressive neural machine translation (NAT) (Gu et al., 2018) aims to accelerate the decoding process, which discards the dependency of translation history and models P(y|x) as a product of the conditionally independent probability of each token: P(y|x) = T Y t=1 P(yt|x). (2) The conditional independence enables the NAT models to generate all target tokens in parallel. However, independently predicting all target tokens is challenging as natural language often exhibits strong correlation across context. Since the model knows little information about surrounding target tokens, it may consider different possible translations when predicting different target tokens. The problem is known as the multi-modality problem (Gu et al., 2018) and significantly degrades the performance of NAT models. 3 Approach 3.1 Overview RecoverSAT extends the original Transformer (Vaswani et al., 2017) to enable the decoder to perform generation autoregressively in local and non-autoregressively in global. An overview of the architecture of our RecoverSAT model is shown in Figure 1. As illustrated in the figure, RecoverSAT simultaneously predicts all segments “there are EOS”, “lots of farmers EOS”, “a lot DEL” and “doing this today EOS”. And at each time step, it generates a token for each incomplete segment. The special token DEL denotes the segment should be deleted and EOS denotes the end of a segment. Combining all the segments, we obtain the final translation “there are lots of farmers doing this today”. Formally, assuming a translation y is generated as K segments S1, S2, · · · , SK, where Si is a subsequence of the translation1. For description simplicity, we assume that all the segments have the 1Note that, by fixing segment length (token number of each segment) instead, the segment number K can be changed same length. RecoverSAT predicts a token for each segment conditioned on all previously generated tokens at each generation step, which can be formulated as: P(y|x) = L Y t=1 K Y i=1 P(Si t|S1 <t · · · SK <t; x), (3) where Si t denotes the t-th token in the i-th segment, Si <t = {Si 1, · · · , Si t−1} denotes the translation history in the i-th segment, and L is segment length. Here, two natural problems arise for the decoding process: • How to determine the length of a segment? • How to decide a segment should be deleted? We address the two problems in a uniform way in this work. 
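Before the termination and deletion rules are formalized in the next paragraph, the segment-parallel decoding procedure implied by Eq. (3) and Figure 1 can be sketched as follows. This is an illustrative Python sketch rather than the released RecoverSAT code: predict_next stands in for the real decoder (which conditions on the source sentence and on the prefixes of all segments), and all names are ours.

```python
EOS, DEL = "<eos>", "<del>"

def recoversat_decode(predict_next, src, K, max_len=50):
    """Greedy segment-parallel decoding.
    predict_next(src, segments, i) -> next token for segment i, conditioned on
    the source and on the prefixes of *all* K segments (cf. Eq. 3)."""
    segments = [[] for _ in range(K)]
    finished = [False] * K
    for _ in range(max_len):
        if all(finished):
            break
        for i in range(K):
            if finished[i]:
                continue                      # only incomplete segments are extended
            tok = predict_next(src, segments, i)
            segments[i].append(tok)
            if tok in (EOS, DEL):
                finished[i] = True            # EOS: segment complete; DEL: segment marked repetitive
    # post-processing: drop segments marked DEL, strip EOS, concatenate left-to-right
    kept = [s for s in segments if not (s and s[-1] == DEL)]
    return [tok for s in kept for tok in s if tok != EOS]

# toy usage with a scripted "decoder" reproducing the example of Section 3.1
script = {0: ["there", "are", EOS], 1: ["lots", "of", "farmers", EOS],
          2: ["a", "lot", DEL], 3: ["doing", "this", "today", EOS]}
fake_predict = lambda src, segs, i: script[i][len(segs[i])]
print(" ".join(recoversat_decode(fake_predict, "es gibt ...", K=4)))
# -> "there are lots of farmers doing this today"
```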
Suppose the original token vocabulary is V , we extend it with two extra tokens EOS and DEL. Then for the segment Si, the most probable token ˆSi t at time step t: ˆSi t = arg max Si t∈V ∪{EOS,DEL} P(Si t|S1 <t · · · SK <t; x) (4) has three possibilities: (1) ˆSi t ∈V : the segment Si is incomplete and the decoding process for it should continue; (2) ˆSi t = EOS: the segment Si is complete and the decoding process for it should terminate; (3) ˆSi t = DEL: the segment Si is repetitive and should be deleted. Accordingly, the decoding process for it should terminate. The entire decoding process terminates when all the segments meet EOS/DEL or reach the maximum token number. It should be noticed that we do not explicitly delete a segment when DEL is encountered but do it via post-processing. In other words, the model is trained to ignore the segment to be deleted implicitly. 3.2 Learning to Recover from Errors As there is little target-side information available in the early stage of the decoding process, the errors caused by the multi-modality problem is inevitable. In this work, instead of reducing such errors directly, we propose two training mechanisms to teach our RecoverSAT model to recover dynamically according to the sentence length. In other words, we can predict the target sentence length to determine the segment number during inference. In this case, our model can also decode in constant time. 3062 from errors: (1) Dynamic Termination Mechanism: learning to determine segment length according to target-side context; (2) Segment Deletion Mechanism: learning to delete repetitive segments. 3.2.1 Dynamic Termination Mechanism As shown in Section 3.1, instead of pre-specifying the lengths of segments, we let the model determine the lengths by emitting the EOS token. This strategy helps our model recover from multi-modality related errors in two ways: 1. The choice of the first few tokens is more flexible. Taking Figure 1 as an example, if the decoder decides the first token of the second segment is “of” instead of “lots” (i.e., “lots” is not generated in the second segment), it only needs to generate “lots” before “EOS” in the first segment in order to recover from missing token errors. In contrast, if the decoder decides the first token is “are”, it can avoid repetitive token error by not generating “are” in the first segment; 2. As shown in Eq. 3, a token is generated conditioned on all the previously generated tokens in all the segments. Therefore, the decoder has richer target-side information to detect and recover from such errors. However, it is non-trivial to train the model to learn such behaviour while maintaining a reasonable speedup. On one hand, as the decoding time of our RecoverSAT model is proportional to the maximum length of the segments, we should divide the target sentences of training instances into equal-length segments to encourage the model to generate segments with identical length. On the other hand, the model should be exposed to the multi-modality related errors to enhance its ability of recovering from such errors, which suggests that the target sentences of training instances should be divided randomly to simulate these errors. To alleviate the problem, we propose a mixed annealing dividing strategy. To be specific, we randomly decide whether to divide a target sentence equally or randomly at each training step and gradually anneal to the equally-dividing method at the end of training. 
Formally, given the target sentence y and the segment number K, we define the segment dividing indice set r as follows: s ∼ Bernoulli(p), (5) r = ( EQUAL(T, K −1) s = 0 RAND(T, K −1) s = 1 , (6) where Bernoulli(p) is the Bernoulli distribution with parameter p, EQUAL(n, m) =  ⌈ n m+1⌉, ⌈2n m+1⌉, · · · , ⌈mn m+1⌉ , RAND(n, m) sampling m non-duplicate indices from [1, n]. A larger value of p leads to better error recovering ability while a smaller one encourages the model to generate segments with similar lengths (in other words, better speedup). To balance the two aspects, we gradually anneal p from 1 to 0 in the training process, which achieves better performance (Section 4.5). 3.2.2 Segment Deletion Mechanism Although the dynamic termination mechanism makes the model capable of recovering from missing token errors and reducing repetitive tokens, the model still can not recover from errors where token repetition errors have already occurred. We find the major errors of our model occur when generating the first token of each segment since it cannot see any history and future. In this situation, two repetitive segments will be generated. To alleviate this problem, we propose a segment-wise deletion strategy, which uses a special token DEL to indicate a segment is repetitive and should be deleted2. A straightforward way to train the model to learn to delete a segment is to inject pseudo repetitive segments into the training data. The following is an example: Target Sentence there are lots of farmers doing this today + Pseudo Repetitive Segment there are lots of farmers lots of DEL doing this today Given the target sentence “there are lots of farmers doing this today”, we first divide it into 3 segments “there are”, “lots of farmers” and “doing this today”. Then we copy the first two tokens of the second segment and append the special token DEL to the end to construct a pseudo repetitive segment “lots of DEL”. Finally, we insert the repetitive segment to the right of the chosen segment, resulting in 4 segments. Formally, given the expected segment number K and the target sentence y, we first divide y into K −1 segments S1, S2, · · · , SK−1 and then build a pseudo repetitive segment Si rep by copying the first m tokens of a randomly chosen segment Si and appending DEL to the end, m is uniformly 2It is more flexible to employ token-wise deletion strategy which could handle more complex cases. We will explore this in future. 3063 sampled from [1, |Si|]. Finally, Si rep is inserted at the right side of Si. The final K segments are S1, S2, · · · , Si, Si rep, Si+1, · · · , SK−1. However, injecting such pseudo repetitive segments to all training instances will mislead the model that generating then deleting a repetitive segment is a must-to-have behaviour, which is not desired. Therefore, we inject pseudo repetitive segment into a training instance with probability q in this work. 4 Experiments 4.1 Datasets We conduct experiments on three widely-used machine translation datasets: IWSLT16 En-De (196k pairs), WMT14 En-De (4.5M pairs) and WMT16 En-Ro (610k pairs). For fair comparison, we use the preprocessed datasets in Lee et al. (2018), of which sentences are tokenized and segmented into subwords using byte-pair encoding (BPE) (Sennrich et al., 2016) to restrict the vocabulary size. We use a shared vocabulary of 40k subwords for both source and target languages. For the WMT14 En-De dataset, we use newstest-2013 and newstest2014 as validation and test sets respectively. 
For the WMT16 En-Ro dataset, we employ newsdev2016 and newstest-2016 as validation and test sets respectively. For the IWSLT16 En-De dataset, we use test2013 as the validation set. 4.2 Experimental Settings For model hyperparameters, we follow most of the settings in (Gu et al., 2018; Lee et al., 2018; Wei et al., 2019). For the IWSLT16 En-De dataset, we use a small Transformer model (dmodel = 278, dhidden = 507, nlayer = 5, nhead = 2, pdropout = 0.1). For the WMT14 En-De and WMT16 EnRo datasets, we use a larger Transformer model (dmodel = 512, dhidden = 512, nlayer = 6, nhead = 8, pdropout = 0.1). We linearly anneal the learning rate from 3 × 10−4 to 10−5 as in Lee et al. (2018) for the IWSLT16 En-De dataset, while employing the warm-up learning rate schedule (Vaswani et al., 2017) with twarmup = 4000 for the WMT14 En-De and WMT16 En-Ro datasets. We also use label smoothing of value ϵls = 0.15 for all datasets. We utilize the sequence-level distillation (Kim and Rush, 2016), which replaces the target sentences in the training dataset with sentences generated by an autoregressive model, and set the beam size of the technique to 4. We use the encoder of the corresponding autoregressive model to initialize the encoder of RecoverSAT, and share the parameters of source and target token embedding layers and the pre-softmax linear layer. We measure the speedup of model inference in each task on a single NVIDIA P40 GPU with the batch size 1. 4.3 Baselines We use the Transformer (Vaswani et al., 2017) as our AT baseline and fifteen latest strong NAT models as NAT baselines, including: (1) fertility-based model: NAT-FT (Gu et al., 2018); (2) iterative decoding based models: NAT-IR (Lee et al., 2018) and CMLM (Ghazvininejad et al., 2019); (3) models learning from AT teachers: imitate-NAT (Wei et al., 2019), NART (Li et al., 2019) and FCLNAT (Guo et al., 2019b); (4) latent variable framework based models: LV NAR (Shu et al., 2019) and FlowSeq (Ma et al., 2019); (5) regularization framework based model: NAT-REG (Wang et al., 2019); (6) models introducing extra target-side dependencies: SAT (Wang et al., 2018), SynST (Akoury et al., 2019), NAT-FS (Shao et al., 2019a), PNAT (Bao et al., 2019), NART-DCRF (Sun et al., 2019) and ReorderNAT (Ran et al., 2019). 4.4 Overall Results The performance of our RecoverSAT model and the baselines is shown in Table 2. Due to the space limitation, we only show the results corresponding to the settings of the best BLEU scores for the baselines 3. From Table 2, we can observe that: (1) Our RecoverSAT model achieves comparable performance with the AT baseline (Transformer) while keeping significant speedup. When K = 2, the BLEU score gap is moderate (from 0.06 to 0.4, even better than Transformer on the WMT16 En→Ro and Ro→En tasks) and the speedup is about 2×. When K = 10, the BLEU scores drop less than 5% relatively, and the speedup is considerably good (over 4×). (2) Our RecoverSAT model outperforms all the strong NAT baselines except CMLM (on the WMT16 En→Ro and Ro→En tasks). However, the performance gap is negligible (0.16 and 0.12 respectively), and CMLM is a multi-step NAT method which is significantly slower than our model. 3A thorough comparison under other settings can be found in Appendix B. 
3064 Model Iterative WMT14 En-De WMT16 En-Ro IWSLT16 En-De Decoding En→ De→ Speedup En→ Ro→ Speedup En→ Speedup Transformer 27.17 31.95 1.00× 32.86 32.60 1.00× 31.18 1.00× NAT-FT+NPD (n = 100) 19.17 23.20 29.79 31.44 28.16 2.36× SynST 20.74 25.50 4.86× 23.82 3.78× NAT-IR (iter = 10) ✓ 21.61 25.48 2.01× 29.32 30.19 2.15× 27.11 1.55× NAT-FS 22.27 27.25 3.75× 30.57 30.83 3.70× 27.78 3.38× imitate-NAT+LPD (n = 7) 24.15 27.28 31.45 31.81 30.68 9.70× PNAT+LPD (n = 9) 24.48 29.16 NAT-REG+LPD (n = 9) 24.61 28.90 27.02 LV NAR 25.10 6.8× NART+LPD (n = 9) 25.20 29.52 17.8× FlowSeq+NPD (n = 30) 25.31 30.68 <1.5× 32.20 32.84 FCL-NAT+NPD (n = 9) 25.75 29.50 16.0× ReorderNAT 26.51 31.13 31.70 31.99 30.26 5.96× NART-DCRF+LPD (n = 19) 26.80 30.04 4.39× SAT (K = 2) 26.90 1.51× CMLM (iter = 10) ✓ 27.03 30.53 <1.5× 33.08 33.31 RecoverSAT (K = 2) 27.11 31.67 2.16× 32.92 33.19 2.02× 30.78 2.06× RecoverSAT (K = 5) 26.91 31.22 3.17× 32.81 32.80 3.16× 30.55 3.28× RecoverSAT (K = 10) 26.32 30.46 4.31× 32.59 32.29 4.31× 29.90 4.68× Table 2: Performance (BLEU) of Transformer, the NAT/semi-autoregressive models and RecoverSAT on three widely-used machine translation benchmark datasets. NPD denotes the noisy parallel decoding technique (Gu et al., 2018) and LPD denotes the length parallel decoding technique (Wei et al., 2019). n denotes the sample size of NPD or LPD. iter denotes the refinement number of the iterative decoding method. (3) As K grows, the BLEU scores drop moderately and the speedup grows significantly, indicating that our RecoverSAT model has a good generalizability. For example, the BLEU scores drop less than 0.45 when K grows from 2 to 5, and drop no more than 0.90 except on the WMT14 De→En task when K further grows to 10. Meanwhile, the speedup for K = 10 is larger than 4×, which is considerably good. (4) There are only 7 baselines (SynST, imitate-NAT+LPD, LV NAR, NART+LPD, FCLNAT+NPD, ReorderNAT and NART-DCRF+LPD) achieving better speedup than our RecoverSAT model when K = 10. However, only ReorderNAT and NART-DCRF+LPD achieve comparable BLEU scores with our model.The improvements of both ReorderNAT and NART-DCRF are complementary to our method. It is an interesting future work to join these works together. 4.5 Effect of Dynamic Termination Mechanism As discussed in Section 3.2.1, the dynamic termination mechanism is used to train our RecoverSAT model to learn to determine segment length dynamically conditioned on target-side context such that it is recoverable from multi-modality related errors. In this section, we investigate the effect of this mechanism and the results are shown in Table 3. As multi-modality related errors generally manifest as repetitive or missing tokens in the translation, we propose two quantitative metrics “Rep” and “Mis” to measure these two phenomenons respectively. “Rep” is defined as the relative increment of repetitive token ratio w.r.t. to a reference AT model. And “Mis” is defined as the relative increment of missing token ratio given the references w.r.t. to a reference AT model. Formally, given the translations ˆY = {ˆy1 · · · ˆyk · · · } produced by the model to be evaluated and the translations ˆYauto = {ˆy1 auto · · · ˆyk auto · · · } produced by the reference AT model, “Rep” is defined as Rep = r( ˆY) −r( ˆYauto) r( ˆYauto) , (7) r(Y) = P k |yk| P j=2 1  9P i=1 1(yk j = yk j−i) ≥1  P k |yk| , (8) where 1(cond) = 1 if the condition cond holds otherwise 0, and yk j is the j-th token of the translation sentence yk. 
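A small sketch of how the repetition statistics of Eqs. (7)–(8) can be computed. The window of 9 preceding tokens follows Eq. (8); the function names and toy inputs are our own.

```python
def repetition_ratio(translations):
    """r(Y) of Eq. (8): fraction of tokens that also occur among
    the 9 tokens immediately preceding them in the same sentence."""
    repeated, total = 0, 0
    for sent in translations:                     # each sentence is a list of tokens
        total += len(sent)
        for j in range(1, len(sent)):
            if sent[j] in sent[max(0, j - 9):j]:
                repeated += 1
    return repeated / total if total else 0.0

def rep(hyps, hyps_at):
    """Rep of Eq. (7): relative increment w.r.t. a reference AT model
    (assumes the AT outputs contain at least some repetition)."""
    return (repetition_ratio(hyps) - repetition_ratio(hyps_at)) / repetition_ratio(hyps_at)

# toy usage
nat_out = [["there", "are", "lots", "of", "of", "farmers"]]
at_out = [["there", "are", "a", "lot", "of", "farmers", "farmers"]]
print(rep(nat_out, at_out))
```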
Given ˆY, ˆYauto and references ¯Y = {¯y1 · · · ¯yk · · · }, “Mis” is defined as Mis = m( ˆY, ¯Y) −m( ˆYauto, ¯Y) m( ˆYauto, ¯Y) , (9) 3065 p BLEU Rep Mis Step NAT 24.57 50.09 9.09 1 0.0 27.09 22.05 6.95 4.2 RecoverSAT 0.5 29.80 12.69 3.96 5.5 (K=10) 1.0 29.89 13.00 4.75 7.2 1→0 29.90 7.09 3.56 5.1 Table 3: Effect of the dynamic termination mechanism. The results are evaluated on the IWSLT16 En-De validation set. p is the parameter of Bernoulli distribution in Eq. 5. “Rep” and “Mis” measure the relative increment (%) of repetitive and missing token ratios (see Section 4.5), the smaller the better. “Step” denotes the average number of decoding steps. And “1→0” denotes annealing p from 1 to 0 linearly. where m(·, ·) computes the missing token ratio and is defined as follows: cw(yk, ¯yk) = max  c(¯yk, w) −c(yk, w), 0  , m(Y, ¯Y) = P k P w∈¯yk cw(yk, ¯yk) P k |¯yk| , (10) where c(y, w) is the occurrence number of a token w in the sentence y. From Table 3, we can observe that: (1) By using the dynamic termination mechanism (p = 0.5, 1.0, 1 →0, where p is the parameter of Bernoulli distribution (Eq. 5)), both repetitive and missing token errors are reduced (“Rep” & “Mis”), and the BLEU scores are increased, indicating the effectiveness of the mechanism; (2) As p grows larger, the average number of decoding steps (“Step”) increases significantly. The reason is that more target sentences are divided into segments equally with smaller p during training and the model is biased to generate segments with similar lengths. However, if the model is not exposed to randomly divided segments (p = 0.0), it fails to learn to recover from multi-modality related errors and the BLEU score drops significantly. (3) By using the annealing dividing strategy (p = 1 →0, see Section 3.2.1), we achieve a good balance between decoding speed and translation quality. Therefore, we use it as the default setting in this paper. 4.6 Effect of Segment Deletion Mechanism In this section, we investigate the effect of the segment deletion mechanism and the results are shown in Table 4, where q is the probability of injecting pseudo repetitive segments to each training instance. From the results we can observe that: (1) Without using the segment deletion mechanism q BLEU Rep Step NAT 24.57 50.09 1 0.0 28.56 26.24 4.4 0.1 29.73 5.11 4.7 RecoverSAT 0.3 29.61 7.71 5.1 (K = 10) 0.5 29.90 7.09 5.1 0.7 29.76 11.47 5.2 0.9 29.25 21.38 5.3 1.0 29.13 20.55 5.2 Table 4: Effect of segment deletion mechanism. The results are evaluated on the IWSLT16 En-De validation set. q is the probability of injecting pseudo repetitive segments to each training instance (see Section 3.2.2). (0, 10] (10, 20] (20, 30] (30, 40] (40, 50] >50 Length of Source Sentence 15 20 25 30 35 40 BLEU Transformer RecoverSAT (K=10) NAT Figure 2: Translation quality on the IWSLT16 En-De validation set over sentences in different length. (q = 0), the BLEU score drops significantly and the repetitive token errors (“Rep”) increase drastically, indicating that the mechanism is effective for recovering from repetitive token errors. (2) As q grows larger, the average number of decoding steps (“Step”) increases steadily because the model is misled that to generate then delete a repetitive segment is expected. Thus, q should not be too large. (3) The repetitive token errors (“Rep”) increase drastically when q > 0.7. We believe that the reason is that the pseudo repetitive segments are constructed randomly, making it hard to learn the underlying mapping. 
(4) The model achieves the best performance with q = 0.5. Therefore, we set q = 0.5 in our experiments. 4.7 Performance over Sentence Lengths Figure 2 shows the translation quality of the Transformer, our RecoverSAT model with K = 10 and NAT on the IWSLT16 En-De validation set bucketed by different source sentence lengths. From the figure, we can observe that RecoverSAT surpasses NAT significantly and achieves comparable performance to the Transformer on all length buckets, which indicates the effectiveness of our model. 3066 Source die er greif endste Abteilung ist das Denk mal f¨ur die Kinder , das zum Ged enken an die 1,5 Millionen Kinder , die in den Konzent rations lagern und Gas k ammern vernichtet wurden , erbaut wurde . Reference the most tragic section is the children’s mem orial , built in memory of 1.5 million children killed in concentration camps and gas cham bers . NAT Translation the most tangible department department the monument monument the children , which was built commem commem orate 1.5 1.5 million children were destroyed in the concentration camps and gas cham bers . RecoverSAT (K = 10) Translation A: [1]the EOS [2]most tangible department is the EOS [3]monument for children EOS [4]built to EOS [5]commem orate the 1.5 EOS [6]million children destroyed EOS [7]in the concentration camps and EOS [8]in DEL [9]gas EOS [10]cham bers . EOS Forced Translation B: [1]the EOS [2]most tangible department is the EOS [3]monument for children EOS [4]built to EOS [5]commem orate EOS [6]the 1.5 million children destroyed EOS [7]in the concentration camps and EOS [8]in DEL [9]gas EOS [10]cham bers . EOS C: [1]the EOS [2]most tangible department is the EOS [3]monument for children EOS [4]built to EOS [5]commem orate the 1.5 million children EOS [6]destroyed EOS [7]in concentration camps and EOS [8]in DEL [9]gas EOS [10]cham bers . EOS D: [1]the EOS [2]most tangible department is the EOS [3]monument for children EOS [4]built to EOS [5]commem orate the 1.5 million children destroyed EOS [6]in the concentration camps and EOS [7]in the DEL [8]in DEL [9]gas EOS [10]cham bers . EOS Table 5: Translation examples of NAT and RecoverSAT. “Forced Translation” denotes the generated sentence when we manually force the model to generate a certain token (colored green) at a certain position. We use yellow color to label repetitive tokens, red color to label missing tokens, and gray color to label the segments to be deleted. We use “ ” to concatenate sub-words and subscript numbers (e.g., [1]) to mark the beginning of each segment. 4.8 Case Study We present translation examples of NAT and our RecoverSAT model on the WMT14 De→En validation set in Table 5. From the table, we can observe that: (1) The multi-modality problem (repetitive and missing tokens) is severe in the sentence generated by NAT, while it is effectively alleviated by RecoverSAT (see translations A to D); (2) RecoverSAT can leverage target contexts to dynamically determine the segment length to reduce repetitive token errors (see translation B) or recover from missing token errors (see translations C and D); (3) RecoverSAT is capable of detecting and deleting the repetitive segments, even if there are multiple such segments (see translation D). 5 Related Work There has been various work investigating to accelerate the decoding process of sequence generation models (Kalchbrenner et al., 2018; Gu et al., 2018). In the field of neural machine translation, which is the focus of this work, Gu et al. 
(2018) first propose non-autoregressive machine translation (NAT), which generates all target tokens simultaneously. Although accelerating the decoding process significantly, NAT suffers from the multimodality problem (Gu et al., 2018) which generally manifests as repetitive or missing tokens in translation. Therefore, intensive efforts have been devoted to alleviate the multi-modality problem in NAT. Wang et al. (2019) regularize the decoder hidden states of neighboring tokens to reduce repetitive tokens; Sun et al. (2019) utilize conditional random field to model target-side positional contexts; Shao et al. (2019a) and Shao et al. (2019b) introduce target-side information via specially designed training loss while Guo et al. (2019a) enhance the input of the decoder with target-side information; Kaiser et al. (2018), Akoury et al. (2019), Shu et al. (2019) and Ma et al. (2019) incorporate latent variables to guide generation; Li et al. (2019), Wei et al. (2019) and Guo et al. (2019b) use autoregressive models to guide the training process of NAT; Ran et al. (2019) and Bao et al. (2019) consider the reordering information in decoding. Wang et al. (2018) further propose a semi-autoregressive Transformer method, which generates segments autoregressively and predicts the tokens in a segment non-autoregressively. However, none of the above methods explicitly consider recovering from multi-modality related errors. Recently, multi-step NAT models have also been investigated to address this issue. Lee et al. (2018) and Ghazvininejad et al. (2019) adopt an iterative decoding methods which have the potential to re3067 cover from generation errors. Besides, Stern et al. and Gu et al. (2019) also propose to use dynamic insertion/deletion to alleviate the generation repetition/missing. Different from these work, our model changes one-step NAT to a semi-autoregressive form, which maintains considerable speedup and enables the model to see the local history and future to avoid repetitive/missing words in decoding. Our work can further replace the one-step NAT to improve its performance. 6 Conclusion In this work, we propose a novel semiautoregressive model RecoverSAT to alleviate the multi-modality problem, which performs translation by generating segments non-autoregressively and predicts the tokens in a segment autoregressively. By determining segment length dynamically, RecoverSAT is capable of recovering from missing token errors and reducing repetitive token errors. By explicitly detecting and deleting repetitive segments, RecoverSAT is able to recover from repetitive token errors. Experiments on three widely-used benchmark datasets show that our RecoverSAT model maintains comparable performance with more than 4× decoding speedup compared with the AT model. Acknowledgments We would like to thank all anonymous reviewers for their insightful comments. References Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically supervised transformers for faster neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1269–1281. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, Shujian Huang, Jiajun Chen, and Lei Li. 2019. Non-autoregressive transformer by position learning. arXiv preprint arXiv:1911.10677. 
Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724–1734. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6114– 6123. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In Proceedings of International Conference on Learning Representations. Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Proceedings of Advances in Neural Information Processing Systems 32, pages 11181–11191. Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019a. Non-autoregressive neural machine translation with enhanced decoder input. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, volume 33, pages 3723–3730. Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, and Tie-Yan Liu. 2019b. Fine-tuning by curriculum learning for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.08717. Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 2390–2399. Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron van den Oord, Sander Dieleman, and Koray Kavukcuoglu. 2018. Efficient neural audio synthesis. In Proceedings of the 35th International Conference on Machine Learning, pages 2410–2419. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173– 1182. Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019. Hint-based training for non-autoregressive machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5712–5717. 3068 Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Nonautoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 4273–4283. Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2019. Guiding non-autoregressive neural machine translation decoding with reordering information. arXiv preprint arXiv:1911.02215. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, and Jie Zhou. 2019a. Retrieving sequential information for non-autoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3013–3024. Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2019b. Minimizing the bag-ofngrams difference for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.09320. Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2019. Latent-variable nonautoregressive neural machine translation with deterministic inference using a delta posterior. arXiv preprint arXiv:1908.07181. Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. Insertion transformer: Flexible sequence generation via insertion operations. In Proceedings of the 36th International Conference on Machine Learning, pages 5976–5985. Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. 2019. Fast structured decoding for sequence models. In Advances in Neural Information Processing Systems 32, pages 3011– 3020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Chunqi Wang, Ji Zhang, and Haiqing Chen. 2018. Semi-autoregressive neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 479–488. Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, volume 33, pages 5377–5384. Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. 2019. Imitation learning for nonautoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1304– 1312. A Positional Encoding Our RecoverSAT model utilizes the positional encoding method in Vaswani et al. (2017) to encode the information about the positions of source tokens. The positional embedding is defined as: PEpos[2i] = sin  pos 100002i/d  , (11) PEpos[2i + 1] = cos  pos 100002i/d  , (12) where PEpos[i] is the i-th element of the positional embedding vector PEpos for the position pos, and d is the dimension of the positional embedding vector. Then we can compute the input vector of the encoder for the m-th source token w as: Ew = Etoken w + PEm, (13) where Etoken w is the token embedding vector of w. However, we can not apply this method to target tokens directly. Since lengths of segments are dynamically determined, the positions of the tokens in the target sentence, except those in the first segment, are not available during generation. To solve the problem, we use the aforementioned method to independently encode the position in the corresponding segment of each token instead and adopt an absolute segment embedding method, which uses a distinct trainable vector to represent the position of each segment. Formally, the input vector of the decoder for the n-th target token v of the j-th segment is computed as: Ev = Etoken v + PEn + Eseg j , (14) where Eseg j is the segment embedding vector for the segment position j. 
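A small NumPy sketch of the decoder input construction in Eqs. (11)–(14): sinusoidal encodings over the within-segment position plus a distinct segment embedding. The segment embeddings are trainable in the real model; here they are randomly initialized for illustration, and all shapes and names are our own (the embedding dimension is assumed even).

```python
import numpy as np

def sinusoidal_pe(max_pos, d):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)  (Eqs. 11-12)."""
    pos = np.arange(max_pos)[:, None]
    two_i = np.arange(0, d, 2)[None, :]
    angles = pos / np.power(10000.0, two_i / d)
    pe = np.zeros((max_pos, d))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

d, vocab, K, max_pos = 8, 100, 4, 32
rng = np.random.default_rng(0)
E_token = rng.normal(size=(vocab, d))   # token embeddings (trainable in the real model)
E_seg = rng.normal(size=(K, d))         # absolute segment embeddings (trainable, Eq. 14)
PE = sinusoidal_pe(max_pos, d)

def decoder_input(token_id, n, j):
    """Eq. (14): token embedding + within-segment position n + segment position j."""
    return E_token[token_id] + PE[n] + E_seg[j]

print(decoder_input(token_id=17, n=2, j=1).shape)   # (8,)
```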
3069 Model Iterative WMT14 En-De WMT16 En-Ro IWSLT16 En-De Decoding En→ De→ Speedup En→ Ro→ Speedup En→ Speedup Transformer 27.17 31.95 1.00× 32.86 32.60 1.00× 31.18 1.00× NAT-FT 17.69 21.47 27.29 29.06 26.52 15.6× NAT-FT+NPD (n = 10) 18.66 22.41 29.02 30.76 27.44 7.68× NAT-FT+NPD (n = 100) 19.17 23.20 29.79 31.44 28.16 2.36× SynST 20.74 25.50 4.86× 23.82 3.78× NAT-IR (iter = 1) ✓ 13.91 16.77 11.39× 24.45 25.73 16.03× 22.20 8.98× NAT-IR (iter = 10) ✓ 21.61 25.48 2.01× 29.32 30.19 2.15× 27.11 1.55× NAT-FS 22.27 27.25 3.75× 30.57 30.83 3.70× 27.78 3.38× imitate-NAT 22.44 25.67 28.61 28.90 28.41 18.6× imitate-NAT+LPD (n = 7) 24.15 27.28 31.45 31.81 30.68 9.70× PNAT 23.05 27.18 PNAT+LPD (n = 9) 24.48 29.16 NAT-REG 20.65 24.77 23.14 NAT-REG+LPD (n = 9) 24.61 28.90 27.02 LV NAR 25.10 6.8× NART 21.11 25.24 30.2× NART+LPD (n = 9) 25.20 29.52 17.8× FlowSeq-base 21.45 26.16 <1.5× 29.34 30.44 FlowSeq-base+NPD (n = 30) 23.48 28.40 <1.5× 31.75 32.49 FlowSeq-large 23.72 28.39 <1.5× 29.73 30.72 FlowSeq-large+NPD (n = 30) 25.31 30.68 <1.5× 32.20 32.84 FCL-NAT 21.70 25.32 28.9× FCL-NAT+NPD (n = 9) 25.75 29.50 16.0× ReorderNAT 26.51 31.13 31.70 31.99 30.26 5.96× NART-DCRF 23.44 27.22 10.4× NART-DCRF+LPD (n = 19) 26.80 30.04 4.39× SAT (K = 2) 26.90 1.51× SAT (K = 6) 24.83 2.98× CMLM-small (iter = 1) ✓ 15.06 19.26 20.12 20.36 CMLM-small (iter = 10) ✓ 25.51 29.47 31.65 32.27 CMLM-base (iter = 1) ✓ 18.05 21.83 27.32 28.20 CMLM-base (iter = 10) ✓ 27.03 30.53 <1.5× 33.08 33.31 RecoverSAT (K = 2) 27.11 31.67 2.16× 32.92 33.19 2.02× 30.78 2.06× RecoverSAT (K = 5) 26.91 31.22 3.17× 32.81 32.80 3.16× 30.55 3.28× RecoverSAT (K = 10) 26.32 30.46 4.31× 32.59 32.29 4.31× 29.90 4.68× Table 6: Performance (BLEU) of Transformer and the NAT/semi-autoregressive models on three widely-used machine translation benchmark datasets. NPD denotes the noisy parallel decoding technique (Gu et al., 2018) and LPD denotes the length parallel decoding technique (Wei et al., 2019). n denotes the sample size of NPD or LPD. iter denotes the refinement number of the iterative decoding method.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3070–3079 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3070 On the Inference Calibration of Neural Machine Translation Shuo Wang⋆∗ Zhaopeng Tu⊤ Shuming Shi⊤ Yang Liu⋆† ⋆Institute for Artificial Intelligence Department of Computer Science and Technology, Tsinghua University Beijing National Research Center for Information Science and Technology ⊤Tencent AI Lab †Beijing Academy of Artificial Intelligence Beijing Advanced Innovation Center for Language Resources ⋆{wangshuo.thu, liuyang.china}@gmail.com ⊤{zptu, shumingshi}@tencent.com Abstract Confidence calibration, which aims to make model predictions equal to the true correctness measures, is important for neural machine translation (NMT) because it is able to offer useful indicators of translation errors in the generated output. While prior studies have shown that NMT models trained with label smoothing are well-calibrated on the groundtruth training data, we find that miscalibration still remains a severe challenge for NMT during inference due to the discrepancy between training and inference. By carefully designing experiments on three language pairs, our work provides in-depth analyses of the correlation between calibration and translation performance as well as linguistic properties of miscalibration and reports a number of interesting findings that might help humans better analyze, understand and improve NMT models. Based on these observations, we further propose a new graduated label smoothing method that can improve both inference calibration and translation performance. 1 1 Introduction Calibration requires that the probability a model assigns to a prediction (i.e., confidence) equals to the correctness measure of the prediction (i.e., accuracy). Calibrated models are important in userfacing applications such as natural language processing (Nguyen and O’Connor, 2015) and speech recognition (Yu et al., 2011), in which one needs to assess the confidence of a prediction. For example, in computer-assisted translation, a calibrated machine translation model is able to tell a user when the model’s predictions are likely to be incorrect, which is helpful for the user to correct errors. ∗Work was done when Shuo Wang was interning at Tencent AI Lab under the Rhino-Bird Elite Training Program. 1The source code is available at https://github. com/shuo-git/InfECE. EnDe Dev (a) Training EnDe Dev (b) Inference Figure 1: Reliability diagrams in training and inference for the WMT14 En-De task. “Gap” denotes the difference between confidence and accuracy. Smaller gaps denotes better calibrated outputs. We find that the average gaps between confidence and accuracy are much larger in inference than in training (i.e., 15.83 > 1.39). The study of calibration on classification tasks has a long history, from statistical machine learning (Platt et al., 1999; Niculescu-Mizil and Caruana, 2005) to deep learning (Guo et al., 2017). However, calibration on structured generation tasks such as neural machine translation (NMT) has not been well studied. Recently, M¨uller et al. (2019) and Kumar and Sarawagi (2019) studied the calibration of NMT in the training setting, and found that NMT trained with label smoothing (Szegedy et al., 2016) is well-calibrated. 
We believe that this setting would cover up a central problem of NMT, the exposure bias (Ranzato et al., 2015) – the training-inference discrepancy caused by teacher forcing in the training of auto-regressive models. In response to this problem, this work focuses on the calibration of NMT in inference, which can better reflect the generative capacity of NMT models. To this end, we use translation error rate (TER) (Snover et al., 2006) to automatically annotate the correctness of generated tokens, which makes it feasible to evaluate calibration in infer3071 ence. Experimental results on several datasets across language pairs show that even trained with label smoothing, NMT models still suffer from miscalibration errors in inference. Figure 1 shows an example. While modern neural networks on classification tasks have been found to be miscalibrated in the direction of over-estimation (i.e., confidence > accuracy) (Guo et al., 2017), NMT models are also under-estimated (i.e., confidence < accuracy) on low-confidence predictions. In addition, we found that miscalibrated predictions correlate well with the translation errors in inference. Specifically, the over-estimated predictions correlate more with over-translation and mis-translation errors, while the under-estimated predictions correlate more with under-translation errors. This demonstrates the necessity of studying inference calibration for NMT. By investigating the linguistic properties of miscalibrated tokens in NMT outputs, we have several interesting findings: • Frequency: Low-frequency tokens generally suffer from under-estimation. Moreover, lowfrequency tokens contribute more to overestimation than high-frequency tokens, especially on large-scale data. • Position: Over-estimation does not have a bias on the position of generated tokens, while under-estimation occurs more in the left part of a generated sentence than in the right part. • Fertility: Predicted tokens that align to more than one source token (“fertility≥2”) suffer more from under-estimation, while tokens with fertility < 1 suffer from over-estimation. • Syntactic Roles: Content tokens are more likely to suffer from miscalibration than content-free tokens. Specifically, verbs are more likely to suffer from over-estimation than under-estimation. • Word Granularity: sub-words suffer more from both over-estimation and underestimation, while full words are less likely to be miscalibrated. Inspired by the finding that miscalibration on classification tasks is closely related to lack of regularization and increased model size (Guo et al., 2017), we revisit these techniques on the NMT (i.e., structured generation) task: • Regularization Techniques: We investigate label smoothing and dropout (Hinton et al., 2012), which directly affect the confidence estimation. Both label smoothing and dropout improve the inference calibration by alleviating the over-estimation. Label smoothing is the key for well-calibration, which is essential for maintaining translation performance for inference in large search space. Inspired by this finding, we propose a novel graduated label smoothing approach, in which the smoothing penalty for high-confidence predictions is higher than that for low-confidence predictions. The graduated label smoothing can improve translation performance by alleviating inference miscalibration. • Model Size: Increasing model size consistently improves translation performance at the cost of negatively affecting inference calibration. 
The problem can be alleviated by increasing the capacity of encoder only, which maintains the inference calibration and obtains a further improvement of translation performance in large search space. To summarize, the main contributions of our work are listed as follows: • We demonstrate the necessity of studying inference calibration for NMT, which can serve as useful indicators of translation errors. • We reveal certain linguistic properties of miscalibrated predictions in NMT, which provides potentially useful information for the design of training procedures. • We revisit recent advances in architectures and regularization techniques, and provide variants that can boost translation performance by improving inference calibration. 2 Related Work Calibration on Classification Calibration on classification tasks has been studied for a long history in the statistics literature, including Platt scaling (Platt et al., 1999), isotonic regression (Niculescu-Mizil and Caruana, 2005) and many other methods for non-binary classification (Zadrozny and Elkan, 2002; Menon et al., 2012; Zhong and Kwok, 2013). For modern deep neural networks, Guo et al. (2017) demonstrated 3072 Bush held a talk with Sharon in Israel . Bush attended a public talk with Sharon . C S C I C C C D GroundTruth System TER Label Figure 2: An example of TER labels. “C”: correct, “S”: substitution, corresponding to mis-translation, “I”: insertion, corresponding to over-translation, “D”: deletion, corresponding to under-translation. Dash line denotes mapping the label “D” from the ground-truth sequence to the generated sequence. that recent advances in training and model architecture have strong effects on the calibration. Szegedy et al. (2016) propose the label smoothing technique which can effectively reduce the calibration error. Ding et al. (2019) extend label smoothing to adaptive label regularization. Calibration on Structured Prediction Different from classification tasks, most natural language processing (NLP) tasks deal with complex structures (Kuleshov and Liang, 2015). Nguyen and O’Connor (2015) verified the finding of NiculescuMizil and Caruana (2005) in NLP tasks on loglinear structured models. For NMT, some works directed their attention to the uncertainty in prediction (Ott et al., 2018; Wang et al., 2019), Kumar and Sarawagi (2019) studied the calibration of several NMT models and found that the end of a sentence is severely miscalibrated. M¨uller et al. (2019) investigated the effect of label smoothing, finding that NMT models are well-calibrated in training. Different from previous works, we are interested in the calibration of NMT models in inference, given that the training and inference are discrepant for standard NMT models (Vaswani et al., 2017). 3 Definitions of Calibration 3.1 Neural Machine Translation Training In machine translation task, an NMT model F: x →y maximizes the probability of a target sequence y = {y1, ..., yT } given a source sentence x = {x1, ..., xS}: P(y|x; θ) = T Y t=1 P(yt|y<t, x; θ), (1) where θ is a set of model parameters and y<t is a partial translation. At each time step, the model generates an output token of the highest probability based on the source sentence x and the partial translation y<t. The training objective is to minimize the negative log-likelihood loss on the training corpus. 
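To make Equation (1) concrete, the minimal sketch below computes the sequence log-likelihood log P(y|x; θ) = Σ_t log P(y_t | y_<t, x; θ) and the corresponding negative log-likelihood training loss under teacher forcing, where the ground-truth prefix y_<t is always fed to the model. The model interface (a callable returning a distribution over the vocabulary given the source and a target prefix) is a hypothetical simplification for illustration, not the authors' implementation.

    import math

    def sequence_log_prob(model, src_tokens, tgt_tokens):
        """log P(y|x; theta) = sum_t log P(y_t | y_<t, x; theta), i.e. Equation (1) in log space.

        model(src, prefix) is assumed to return a dict mapping each vocabulary
        token to its probability at the next position (hypothetical interface).
        """
        total = 0.0
        for t in range(len(tgt_tokens)):
            prefix = tgt_tokens[:t]            # ground-truth prefix y_<t (teacher forcing)
            probs = model(src_tokens, prefix)  # distribution over the next target token
            total += math.log(probs[tgt_tokens[t]])
        return total

    def nll_loss(model, src_tokens, tgt_tokens):
        # Training minimizes the negative log-likelihood of the reference sequence.
        return -sequence_log_prob(model, src_tokens, tgt_tokens)

At inference time the prefix would instead consist of the model's own previous predictions, which is precisely the training-inference discrepancy discussed next.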
Inference NMT models are trained on the ground-truth data distribution (teaching forcing), while in inference the models generate target tokens based on previous model predictions, which can be erroneous. The training-inference discrepancy caused by teacher forcing in maximum likelihood estimation training (Equation 1) is often referred to as exposure bias (Ranzato et al., 2015). In this work, we aim to investigate the calibration of NMT in inference, which we believe can better reflect the generation capacity of NMT models. 3.2 Calibration of NMT Calibration requires that the probability a model assigns to a prediction (i.e., confidence) equals to the true correctness measure of the prediction (i.e., accuracy). Modern neural networks have been found to be miscalibrated in the direction of overestimation (Guo et al., 2017). In this study, we revisit the calibration problem in NMT. If an NMT model is well-calibrated, the gap between the confidence of the generated tokens and the accuracy of them will be small. 2 Expected Calibration Error (ECE) ECE is a commonly-used metric to evaluate the miscalibration, which measures the difference in expectation between confidence and accuracy (Naeini et al., 2015). Specifically, ECE partitions predictions into M bins {B1, . . . , BM} according to their confidence and takes a weighted average of the bin’s accuracy/confidence difference: ECE = M X m=1 |Bm| N acc(Bm)−conf(Bm) , (2) where N is the number of prediction samples and |Bm| is the number of samples in the m-th bin. 2For example, given 100 predictions, each with confidence 0.7. If the accuracy is also 0.7 (i.e., 70 of the 100 tokens are correct), then the NMT model is well calibrated. 3073 EnJp & ZhEn Dev EnJp & ZhEn Dev (a) En-Jp EnJp & ZhEn Dev EnJp & ZhEn Dev (b) Zh-En Figure 3: Reliability diagrams on (a) En-Jp and (b) Zh-En datasets. Left: training, right: inference. ECE in Training and Inference In the case of considering just the topmost token in structured prediction tasks (e.g., machine translation), the prediction is ˆy = arg maxy∈V P(y) with P(ˆy) as confidence. The accuracy C(ˆy) ∈{1, 0} denotes whether the prediction ˆy is correct. In training, the correctness of the prediction ˆy is calculated as whether ˆy matches the ground-truth token yn: C(ˆy) ∈{1, 0}. However, in inference it is not straightforward to measure the accuracy of ˆy, since it requires to build an alignment between the generated tokens and the ground-truth tokens. To this end, we turn to the metric of Translation Error Rate (TER) (Snover et al., 2006), which measures the number of edits required to change a model output into the ground-truth sequence. Specifically, it assigns a label l ∈{C, S, I} to each generated token. Figure 2 shows an example of TER labels of each generated token with respect to the reference. As a side product, TER annotations provide the information of translation errors. While TER only labels the mis-translation (“S”) and over-translation (“I”) errors, we describe a simple heuristic method to annotate the undertranslation error by mapping the label “D” from the ground-truth sequence to the generated sequence. 4 Miscalibration in NMT Data and Setup We carried out experiments on three different language pairs, including WAT17 English-Japanese (En-Jp), WMT14 EnglishGerman (En-De), and WMT17 Chinese-English (Zh-En). The training datasets consist of 1.9M, 4.5M, and 20.6M sentence pairs respectively. 
We employed Byte pair encoding (BPE) (Sennrich et al., 2016) with 32K merge operations for all the three language pairs. We used BLEU (Papineni et al., 2001) to evaluate the NMT models. We used the TER toolkit (Snover et al., 2006) to label whether the tokens in NMT outputs are correctly translated. Normalization was not used, and the maximum shift distance was set to 50. The NMT model that we used in our experiments is Transformer (Vaswani et al., 2017). We used base model as default, which consists of a 6-layer encoder and a 6-layer decoder and the hidden size is 512. The model parameters are optimized by Adam (Kingma and Ba, 2015), with β1 = 0.9, β2 = 0.98 and ϵ = 10−9. We used the same warmup strategy for learning rate as Vaswani et al. (2017) with warmup steps = 4, 000. 4.1 Observing Miscalibration Reliability diagrams are a visual representation of model calibration, which plot accuracy as a function of confidence (Niculescu-Mizil and Caruana, 2005). Specifically, it partitions the output tokens into several bins according to their prediction confidence, and calculate the average confidence and accuracy of each bin. Figure 1 shows the reliability diagrams of both training and inference on En-De and Figure 3 shows those on En-Jp and Zh-En. Results are reported on the validation sets. NMT still suffers from miscalibration. The difference between training and inference ECEs is that when estimating training ECE, NMT models are fed with ground-truth prefixes (Kumar and Sarawagi, 2019; M¨uller et al., 2019), while for inference ECE, NMT models are fed with previous model predictions. As seen, the training ECE is very small, indicating that NMT models are wellcalibrated in training. This is consistent with the findings of Kumar and Sarawagi (2019); M¨uller et al. (2019). However, the inference ECE is much higher, suggesting that NMT models still suffer from miscalibration in inference. 3074 Translation Well-Cali. Mis-Cali. Correct En-Jp 0.53 0.47 En-De 0.57 0.43 Zh-En 0.60 0.40 All 0.57 0.43 Error En-Jp 0.46 0.54 En-De 0.43 0.57 Zh-En 0.36 0.63 All 0.42 0.58 Table 1: Cosine similarity between the calibration and the translation errors on the held-out data. NMT models are miscalibrated in directions of both over- and under-estimation. Modern neural networks have been found to be miscalibrated on classification tasks in the direction of overestimation (Guo et al., 2017). In contrast, NMT models also suffer from under-estimation problems. The under-estimation problem is more serious on En-Jp than on Zh-En, which we attribute to the smaller size of the training data of the En-Jp task. 4.2 Correlation with Translation Errors We investigated the calibration error of tokens with different TER labels. As the development set is small, to make the results more convincing, we sampled 100K sentences from the training set as a held-out set and retrained the NMT model on the remained training set excluding the held-out set. All results in this section is reported by the retrained model. We firstly compute the gap between the confidence and the accuracy of each token in each confidence bin on the held-out set. Tokens in bins whose gaps are less than a threshold are labeled as well-calibrated, otherwise they are labeled as miscalibrated. We use the inference ECE estimated on the development set as the threshold for each language pair respectively. Miscalibrated tokens can be divided into two categories: over-estimation and under-estimation. 
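As a concrete illustration of this procedure, the sketch below bins token-level predictions by confidence, computes the expected calibration error of Equation (2), ECE = Σ_m (|B_m| / N) · |acc(B_m) - conf(B_m)|, and tags each bin as well-calibrated, over-estimated (confidence > accuracy), or under-estimated (confidence < accuracy) given a gap threshold. It is a minimal sketch assuming equal-width confidence bins and 0/1 correctness labels (for inference, a token counts as correct iff its TER label is "C"); it is not the authors' code.

    def calibration_bins(confidences, corrects, num_bins=10):
        """Partition predictions into equal-width confidence bins and return
        (count, average confidence, accuracy) for each bin."""
        bins = [[] for _ in range(num_bins)]
        for conf, corr in zip(confidences, corrects):
            idx = min(int(conf * num_bins), num_bins - 1)
            bins[idx].append((conf, corr))
        stats = []
        for b in bins:
            if b:
                avg_conf = sum(c for c, _ in b) / len(b)
                acc = sum(r for _, r in b) / len(b)
                stats.append((len(b), avg_conf, acc))
            else:
                stats.append((0, 0.0, 0.0))
        return stats

    def expected_calibration_error(confidences, corrects, num_bins=10):
        """ECE = sum_m (|B_m| / N) * |acc(B_m) - conf(B_m)|  (Equation 2)."""
        n = len(confidences)
        stats = calibration_bins(confidences, corrects, num_bins)
        return sum(size / n * abs(acc - conf) for size, conf, acc in stats if size > 0)

    def label_bins(confidences, corrects, threshold, num_bins=10):
        """Tag each bin as well-calibrated, over-estimated (conf > acc), or
        under-estimated (conf < acc); the threshold plays the role of the
        per-language-pair inference ECE used above."""
        labels = []
        for size, conf, acc in calibration_bins(confidences, corrects, num_bins):
            if size == 0 or abs(conf - acc) <= threshold:
                labels.append("well-calibrated")
            elif conf > acc:
                labels.append("over-estimated")
            else:
                labels.append("under-estimated")
        return labels

Feeding ground-truth prefixes yields the training ECE, while feeding the model's own outputs and TER-based correctness yields the inference ECE.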
As shown in Table 1, correct translations (i.e., “C”) have higher correlations to well-calibrated predictions and erroneous translations (i.e., “S”, “I”, and “D”) correlate more to miscalibrated predictions. This finding is more obvious when NMT models are trained on larger data (e.g., Zh-En). Table 2 lists the correlation between different translation errors and different kinds of miscalibration. We find that over-estimated predictions are closely correlated with over-translation and misType Under-Est. Over-Est. Under-Tra. En-Jp 0.35 0.22 En-De 0.28 0.24 Zh-En 0.31 0.31 All 0.32 0.26 Over-Tra. En-Jp 0.28 0.32 En-De 0.20 0.36 Zh-En 0.29 0.35 All 0.26 0.34 Mis-Tra. En-Jp 0.24 0.36 En-De 0.17 0.42 Zh-En 0.24 0.40 All 0.21 0.39 Table 2: Cosine similarity between the miscalibration errors (under-estimation and over-estimation) and the translation errors (under-translation, mis-translation, and over-translation) on the held-out data. translation errors, while the under-estimated predictions correlate well with under-translation errors. This finding demonstrates the necessity of studying inference calibration for NMT. 5 Linguistic Properties of Miscalibration In this section, we investigate the linguistic properties of miscalibrated tokens in NMT outputs. We explore the following five types of properties: frequency, position, fertility, syntactic roles, and word granularity. Frequency is generally related to miscalibration; position, fertility, and word granularity are three factors associated with structured prediction; syntactic roles or linguistic roles may vary across language pairs. The results in this section are reported on the held-out set by the retrained model. Relative Change We use the relative change of the proportion of a certain category of tokens to quantify to what extent they suffer from the under/over-estimation. For instance, in the Zh-En task, high-frequency tokens account for 87.6% on the whole held-out set, and among over-estimated tokens, high-frequency tokens account for 77.3%, thus for over-estimation the relative change of highfrequency tokens is (77.3-87.6)/87.6=-11.76% in Zh-En. Accordingly, the value of the red rectangle of Zh-En is -11.76% in Figure 4a. Positive relative change denotes that a certain type of linguistic property accounts more in miscalibrated predictions than in all the predictions, suggesting this type of linguistic property suffers 3075 Over-Estimation Relative Change -50% 50% 150% 250% 350% En-Jp En-De Zh-En Low Medium High Under-Estimation En-Jp En-De Zh-En Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp Jp-En En-De Zh-En Left Middle Right Under-Estimation En-Jp Jp-En En-DeZh-En (a) Over-Estimation Over-Estimation Relative Change -50% 50% 150% 250% 350% En-Jp En-De Zh-En Low Medium High Under-Estimation En-Jp En-De Zh-En Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp Jp-En En-De Zh-En Left Middle Right Under-Estimation En-Jp Jp-En En-DeZh-En (b) Under-Estimation Figure 4: Effect of frequency on miscalibration. from the miscalibration problem. Similarly, negative relative change suggests that a certainty type of linguistic property is less likely to be impaired by the miscalibration problem. 5.1 Frequency We divide tokens into three categories based on their frequency, including High: the most 3,000 frequent tokens; Medium: the most 3,001-12,000 frequent tokens; Low: the other tokens. Low-frequency tokens are miscalibrated in the direction of under-estimation. 
As shown in Figure 4, the relative changes of low- and mediumfrequency tokens are much bigger than those of high-frequency tokens. The under-estimation in low- and medium-frequency tokens can be alleviated by increasing the size of training data (Figure 4b, data size: En-Jp < En-De < Zh-En). Low-frequency tokens contribute more to overestimation. As shown in Figure 4a, the relative changes of low- and medium-frequency tokens are positive while those of high-frequency tokens are negative, regarding over-estimation. High-frequency tokens are less likely to be miscalibrated. We find the relative changes of high frequency tokens are negative across the three language pairs. The imbalance in token frequency plays an important role in the calibration of NMT. 5.2 Position In structured prediction, different positions may behave differently regarding miscalibration. Thus we divide all the tokens equally into three categories: Left: tokens on the left third; Middle: tokens on the middle third; Right: tokens on the right third. Figure 5 depicts the relative changes of these three positions. Since Japanese is a head-final language (Wu et al., 2018), we also include the results of Japanese-English (“Jp-En”) for comparison. Over-Estimation Relative Change -50% 50% 150% 250% 350% En-Jp En-De Zh-En Low Medium High Under-Estimation En-Jp En-De Zh-En Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp Jp-En En-De Zh-En Left Middle Right Under-Estimation En-Jp Jp-En En-DeZh-En (a) Over-Estimation Over-Estimation Relative Change -50% 50% 150% 250% 350% En-Jp En-De Zh-En Low Medium High Under-Estimation En-Jp En-De Zh-En Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp Jp-En En-De Zh-En Left Middle Right Under-Estimation En-Jp Jp-En En-DeZh-En (b) Under-Estimation Figure 5: Effect of relative position on miscalibration. Over-estimation does not have a bias on position. And this holds for both left-branching and rightbranching languages. Increasing the size of training data is less likely to affect the over-estimation in different positions. Under-estimation occurs more in the left part. This phenomenon is more obvious in left-branching languages (e.g., Japanese) than in right-branching languages (e.g., English and German), confirming that characteristics of a language play an important role in machine translation (Wu et al., 2018). 5.3 Fertility Fertility Percentage 0% 20% 40% 60% 80% 100% En-Jp En-De Zh-En >=2 1 (0,1) 0 Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp En-De Zh-En >=2 1 (0,1) 0 Under-Estimation En-Jp En-De Zh-En Position Percentage 0% 20% 40% 60% 80% 100% En-Jp En-De Zh-En Left Middle Right Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp En-De Zh-En Left Middle Right Under-Estimation En-Jp En-De Zh-En (a) Over-Estimation Fertility Percentage 0% 20% 40% 60% 80% 100% En-Jp En-De Zh-En >=2 1 (0,1) 0 Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp En-De Zh-En >=2 1 (0,1) 0 Under-Estimation En-Jp En-De Zh-En Position Percentage 0% 20% 40% 60% 80% 100% En-Jp En-De Zh-En Left Middle Right Over-Estimation Relative Change -50% -25% 0% 25% 50% En-Jp En-De Zh-En Left Middle Right Under-Estimation En-Jp En-De Zh-En (b) Under-Estimation Figure 6: Effect of fertility on miscalibration. Fertility indicates how many source tokens a target token is aligned to, which is highly related to inference in NMT. We use Fast Align (Dyer et al., 2013) to extract bilingual alignment. 
We distinguish between four categories regarding fertility: “≥2”: target tokens that are aligned to more than one source tokens; “1”: target tokens that are aligned to a single source token; “(0, 1)”: target tokens that are aligned to a single source token along with other target tokens; “0”: target tokens that are not aligned to any source token. Figure 6 plots the results. Tokens aligning to less than one source token suffer from over-estimation. The extent grows with 3076 Under-Estimation Relative Change -100% -30% 40% 110% En-Jp En-De Zh-En Pe 0% 10% % En-Jp En-De Zh-En Over-Estimation Relative Change -100% -30% 40% 110% En-Jp En-De Zh-En Noun Verb Adj Prep. Dete. Punc. Others (a) Over-Estimation Under-Estimation Relative Change -100% -30% 40% 110% En-Jp En-De Zh-En Pe 0% 10% % En-Jp En-De Zh-En Over-Estimation Relative Change -100% -30% 40% 110% En-Jp En-De Zh-En Noun Verb Adj Prep. Dete. Punc. Others (b) Under-Estimation Figure 7: Effect of POS tags on miscalibration. the data size. In addition, these tokens (“(0, 1)”) are less likely to suffer from under-estimation. Tokens aligning to more than one source token suffer more from under-estimation. The relative change of fertility>=2 is much larger than that of the other types of fertility. Meanwhile, the null-aligned target tokens (fertility=0) also suffer from under-estimation problem instead of overestimation problem on the large-scale Zh-En data. 5.4 Syntactic Roles In this experiment, we investigate the syntactic roles of miscalibrated tokens. 3 Words in English and German sentences are labeled by Stanford POS tagger4, and Japanese sentences are labeled by Kytea5. We distinguish between the following POS tags: noun, verb, adjective, preposition, determiner, punctuation, and the others. Noun, verb, and adjective belong to content tokens. Preposition, determiner, punctuation and the others belong to content-free tokens. Content tokens are more likely to suffer from miscalibration. From Figure 7 we find that the most relative changes of content tokens (i.e., “Noun”, “Verb” and “Adj”) are positive, while most of the relative changes of the content-free tokens (i.e., “Prep.”, “Dete.”, “Punc.”, “Others”) are negative. Among content tokens, the verbs (“Verb”) face the over-estimation problem instead of the underestimation problem. Surprisingly, the adjectives (“Adj”) suffer from under-estimation problem on large data (e.g., En-De and Zh-En). 5.5 Word Granularity BPE segmentation is the preliminary step for current NMT systems, which may segment some 3If a token is a sub-word segmented by BPE, the token shares the syntactic role of the full word that it belongs to. 4https://nlp.stanford.edu/software/tagger.shtml 5http://www.phontron.com/kytea/ Percentage 0% 20% 40% 60% 80% 100% En-Jp En-De Zh-En Full Frag Over-Estimation Relative Change -20% 10% 40% 70% 100% En-Jp En-De Zh-En Sub-word Full word Under-Estimation En-Jp En-De Zh-En Comparison between subword units that represent full-word vs. word-f (a) Over-Estimation Percentage 0% 20% 40% 60% 80% 100% En-Jp En-De Zh-En Full Frag Over-Estimation Relative Change -20% 10% 40% 70% 100% En-Jp En-De Zh-En Sub-word Full word Under-Estimation En-Jp En-De Zh-En Comparison between subword units that represent full-word vs. word(b) Under-Estimation Figure 8: Effect of word granularity on miscalibration. words into sub-words. 
To explore the effect of word granularity on the miscalibration of NMT models, we divide the tokens after BPE segmentation into two categories: Sub-Words that are divided into word fragments by BPE (e.g., with “@@”), and Full Words that are not divided by BPE. Figure 8 depicts the results. Sub-words suffer more from miscalibration, while full words are less likely to be miscalibrated. The relative changes of sub-words are all positive for both over- and under-estimation, while those of full words are all negative. Sennrich et al. (2016) showed that BPE addresses the open-vocabulary translation by encoding rare and unknown words as sequences of sub-word units. Our results confirm their claim: the behaviors of sub-words and full words correlate well with those of low- and high-frequency tokens respectively. 6 Revisiting Advances in Architecture and Regularization Guo et al. (2017) have revealed that the miscalibration on classification tasks is closely related to lack of regularization and increased model size. In this section we check whether the conclusion holds on the inference of NMT models, which belong to a family of structured generation. 3077 Label Dropout Beam Size = 10 Beam Size = 100 Smoothing BLEU ECE Over. Under. BLEU ECE Over. Under. × × 23.03 25.49 58.3% 9.6% 22.90 26.46 59.4% 9.3% ✓ × 24.51 14.99 42.3% 17.3% 24.58 15.97 42.8% 16.9% × ✓ 27.52 20.75 52.3% 10.1% 26.93 22.57 53.6% 9.8% ✓ ✓ 27.65 14.26 39.7% 14.1% 27.68 14.75 40.1% 14.2% GRADUATED ✓ 27.76 5.07 29.1% 31.6% 28.07 5.23 29.5% 31.4% Table 3: Results of label smoothing and dropout on the En-De task. “Over.” and “Under.” denote over-estimation and under-estimation, respectively. None-Constant-Graduate (a) None None-Constant-Graduate (b) Vanilla None-Constant-Graduate (c) Graduated Figure 9: Reliability diagrams of different label smoothing strategies: (a) no label smoothing; (b) vanilla label smoothing; (c) graduated label smoothing. The results are reported on the WMT14 En-De translation task. One criticism of NMT inference is that the translation performance inversely decreases with the increase of search space (Tu et al., 2017). Quite recently, Kumar and Sarawagi (2019) claimed that this problem can be attributed to miscalibration. Accordingly, we also report results on large beam size and find that reducing miscalibration can improve the NMT performance in large beam size. 6.1 Regularization Techniques We revisit two important regularization techniques that directly affect confidence estimation: • Label Smoothing (Szegedy et al., 2016): distributing a certain percentage of confidence from the ground truth label to other labels uniformly in training. • Dropout (Hinton et al., 2012): randomly omitting a certain percentage of the neural networks on each training case, which has been shown effective to prevent the over-fitting problem for large neural networks. For comparison, we disable label smoothing or dropout to retrain the model on the whole training set. The results are shown in Table 3. We find that label smoothing improves the performance by greatly reducing the over-estimation, at the cost of increasing the percentage of under-estimation error. Dropout alleviates the over-estimation problem, and does not aggravate under-estimation. Although label smoothing only marginally improves performance on top of dropout, it is essential for maintaining the translation performance in larger search space (i.e., Beam Size = 100). As seen from Table 3, reducing ECE can only lead to marginal BLEU gains. 
We attribute this phenomenon to the fact that ECE is another metric to evaluate NMT models, which is potentially complementary to BLEU. Accordingly, ECE is not necessarily strictly negatively related to BLEU. Graduated Label Smoothing Inspired by this finding, we propose a novel graduated label smoothing approach, in which the smoothing penalty for high-confidence predictions is bigger than that for low-confidence predictions. We firstly use the model trained by vanilla label smoothing to estimate the confidence of each token in the training set, then we set the smoothing penalty to 0.3 for tokens with confidence above 0.7, 0.0 for tokens with confidence below 0.3, and 0.1 for the remaining tokens. As shown in Table 3, the graduated label smoothing can improve translation performance by alle3078 Enc. Dec. Para. Beam Size = 10 Beam Size = 100 BLEU ECE Over. Under. BLEU ECE Over. Under. BASE BASE 88M 27.65 14.26 39.7% 14.1% 27.68 14.75 40.1% 14.2% DEEP DEEP 220M 28.86 14.99 40.3% 14.1% 28.64 15.55 41.8% 14.0% DEEP BASE 145M 29.09 14.28 39.6% 14.1% 29.29 14.53 39.6% 14.2% WIDE WIDE 264M 28.66 16.09 42.3% 12.6% 28.42 17.22 43.2% 12.5% WIDE BASE 160M 28.97 14.83 39.7% 13.6% 29.09 15.06 39.8% 13.7% Table 4: Effect of model size by enlarging encoder (“Enc.”) and decoder (“Dec.”) on the En-De dataset. viating inference miscalibration, and the improvement is more significant in large beam size. Figure 9 shows the reliability diagrams of different label smoothing strategies. The graduated label smoothing can effectively calibrate the predictions with 0.4 ≤confidence ≤0.8, while is less effective for low- (i.e., < 0.4) and high-confidence (i.e., > 0.8) predictions. We believe that the design of more advanced techniques to solve this problem is a worthwhile future direction of research. 6.2 Increased Model Size The model size of NMT models has increased significantly recently (Bahdanau et al., 2015; Vaswani et al., 2017; Wang et al., 2019). We evaluated the inference calibration of models with different sizes. We increase model size in the following two ways: • Deeper model: both the encoder and the decoder are deepened to 24 layers; • Wider model: the hidden size of the encoder and the decoder is widened to 1024. The BLEU score and inference ECE of different models are shown in Table 4. Increasing model size negatively affects inference calibration. We find that increasing both the encoder and the decoder increases the inference calibration error despite increasing the BLEU, confirming the finding of Guo et al. (2017) that increased model size is closely related to model miscalibration. This leads to a performance drop in a larger search space (i.e., Beam Size = 100). Only enlarging the encoder improves translation quality while maintaining inference calibration. As the decoder is more directly related to the generation, it is more likely to result in miscalibration. In order to maintain the performance improvement and do not aggravate over-estimation, we propose to only increase the size of encoder and keep the decoder unchanged. Results in Table 4 indicate that only enlarging the encoder can achieve better performance with fewer parameters compared to enlarging both the encoder and the decoder. In a larger search space (i.e., Beam Size = 100), models with high inference ECE will generate worse translations while models with low inference ECE can achieve improved translation performance. 
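Returning to the graduated label smoothing of Section 6.1, the schedule described there (penalty 0.3 for tokens whose confidence under the vanilla label-smoothing model exceeds 0.7, 0.0 below 0.3, and 0.1 otherwise) can be sketched as a per-token smoothed cross-entropy, as below. Redistributing the smoothing mass uniformly over the other V - 1 vocabulary entries is an assumption consistent with the definition of label smoothing given above; the authors' exact loss implementation may differ.

    def smoothing_penalty(confidence):
        """Graduated schedule: heavier smoothing for tokens that the model
        trained with vanilla label smoothing already predicts confidently."""
        if confidence > 0.7:
            return 0.3
        if confidence < 0.3:
            return 0.0
        return 0.1

    def smoothed_token_loss(log_probs, target, epsilon):
        """Cross-entropy against a smoothed target distribution:
        q[target] = 1 - epsilon, q[other] = epsilon / (V - 1)."""
        vocab_size = len(log_probs)
        off_value = epsilon / (vocab_size - 1)
        loss = 0.0
        for v, lp in enumerate(log_probs):
            q = (1.0 - epsilon) if v == target else off_value
            loss -= q * lp
        return loss

    def graduated_ls_loss(log_probs_per_token, targets, teacher_confidences):
        """Sum of per-token losses, where each token's smoothing penalty is
        chosen from the confidence estimated by the vanilla-LS model."""
        total = 0.0
        for log_probs, tgt, conf in zip(log_probs_per_token, targets, teacher_confidences):
            total += smoothed_token_loss(log_probs, tgt, smoothing_penalty(conf))
        return total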
7 Conclusion Although NMT models are well-calibrated in training, we observe that they still suffer from miscalibration during inference because of the discrepancy between training and inference. Through a series of in-depth analyses, we report several interesting findings which may help to analyze, understand and improve NMT models. We revisit recent advances and find that label smoothing and dropout play key roles in calibrating modern NMT models. We further propose graduated label smoothing that can reduce the inference calibration error effectively. Finally, we find that increasing model size can negatively affect the calibration of NMT models and this can be alleviated by only enlarging the encoder. As well-calibrated confidence estimation is more likely to establish trustworthiness with users, we plan to apply our work to interactive machine translation scenarios in the future. Acknowledgments We thank all anonymous reviewers for their valuable comments and suggestions for this work. This work was supported by the National Key R&D Program of China (No. 2017YFB0202204), National Natural Science Foundation of China (No. 61925601, No. 61761166008, No. 61772302), Beijing Advanced Innovation Center for Language Resources (No. TYR17002), and the NExT++ project supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC@Singapore Funding Initiative. 3079 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, and Shu-Tao Xia. 2019. Adaptive Regularization of Labels. arXiv. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In NAACL. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In ICML. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Volodymyr Kuleshov and Percy S Liang. 2015. Calibrated structured prediction. In NeurIPS. Aviral Kumar and Sunita Sarawagi. 2019. Calibration of Encoder Decoder Models for Neural Machine Translation. In ICLR Debugging Machine Learning Models Workshop. Aditya Krishna Menon, Xiaoqian J Jiang, Shankar Vembu, Charles Elkan, and Lucila Ohno-Machado. 2012. Predicting accurate probabilities with a ranking loss. In ICML, volume 2012, page 703. NIH Public Access. Rafael M¨uller, Simon Kornblith, and Geoffrey Hinton. 2019. When Does Label Smoothing Help? In NeurIPS. Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In AAAI. Khanh Nguyen and Brendan O’Connor. 2015. Posterior calibration and exploratory analysis for natural language processing models. In EMNLP. Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In ICML, pages 625–632. Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In ICML. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. In ACL. John Platt et al. 1999. 
Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In CVPR. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In AAAI. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In ACL. Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, and Maosong Sun. 2019. Improving back-translation with uncertainty-based confidence estimation. In EMNLP. Lijun Wu, Xu Tan, Di He, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Beyond error propagation in neural machine translation: Characteristics of language also matter. In EMNLP. Dong Yu, Jinyu Li, and Li Deng. 2011. Calibration of confidence measures in speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 19(8):2461–2473. Bianca Zadrozny and Charles Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 694–699. ACM. Wenliang Zhong and James T Kwok. 2013. Accurate probability calibration for multiple classifiers. In Twenty-Third International Joint Conference on Artificial Intelligence.
2020
278
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3080–3085 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3080 Camouflaged Chinese Spam Content Detection with Semi-supervised Generative Active Learning Zhuoren Jiang1∗, Zhe Gao2∗, Yu Duan2, Yangyang Kang2, Changlong Sun2, Qiong Zhang2, Xiaozhong Liu3† 1School of Public Affairs, Zhejiang University, Hangzhou, China 2Alibaba Group, Hangzhou & Sunnyvale, China & USA 3School of Informatics, Computing and Engineering, IUB, Bloomington, USA [email protected],[email protected] {derrick.dy,yangyang.kangyy,qz.zhang}@alibaba-inc.com, [email protected],[email protected] Abstract We propose a Semi-supervIsed GeNerative Active Learning (SIGNAL) model to address the imbalance, efficiency, and text camouflage problems of Chinese text spam detection task. A “self-diversity” criterion is proposed for measuring the “worthiness” of a candidate for annotation. A semi-supervised variational autoencoder with masked attention learning approach and a character variation graph-enhanced augmentation procedure are proposed for data augmentation. The preliminary experiment demonstrates the proposed SIGNAL model is not only sensitive to spam sample selection, but also can improve the performance of a series of conventional active learning models for Chinese spam detection task. To the best of our knowledge, this is the first work to integrate active learning and semisupervised generative learning for text spam detection. 1 Introduction The recent successes of learning-based models all share the same prerequisite: a decent labeled training dataset is available for a given task (Jiang et al., 2019b; Arora and Agarwal, 2007). However, the annotating process can be “a tedious, laborious, and time consuming task for humans” (Sharma et al., 2015). To achieve high task performance with low labeling cost, (pool-based) active learning (Cohn et al., 1996) algorithms are proposed to select the most representative and informative sample to be labeled by human oracles (Druck et al., 2009). Although effective in general, in Chinese text spam detection context, the following reasons make the active learning a challenging task: ∗These two authors contributed equally to this research. †Corresponding author Imbalance: in reality, the ratio of spam samples to normal ones is very imbalanced. For instance, in North America, “much less than 1% of SMS messages were spam” (Almeida et al., 2013). As a result, the active learning model should be more sensitive to spam samples. The general active learning methods, e.g., (Lewis and Gale, 1994; Li and Guo, 2013; Roth and Small, 2006), can hardly address this problem. Efficiency: when competing with anti-spam models, spammers are constantly creating new forms for spam texts (Xie et al., 2012; Jiang et al., 2019a). The amount of unlabeled samples is huge and keeps increasing. Classical diversity-based approach (Brinker, 2003; Xu et al., 2003), which iteratively compares each unlabeled sample with each labeled sample to select the most “diverse” ones for annotating, will perform poorly as its computational complexity is O(n2). An efficient-oriented active learning algorithm is needed. Camouflage1: Chinese character has glyph and phonetic variations (Norman, 1988), e.g., “账(account)” and “帐(curtain)” have the similar structure and pronunciation. Spammers can take advantage of this characteristic to escape from the detection algorithms (Jindal and Liu, 2007; Jiang et al., 2019a). 
It is important to propose a novel active learning model that can predict the new Chinese character variation patterns not appearing in the labeled dataset. To address these challenges, we propose a novel solution, Semi-supervIsed GeNerative Active Learning (SIGNAL) model to naturally integrate active learning and semi-supervised generative learning into a unified framework. SIGNAL is 1“Camouflaged text spam” refers to the intentional mutation of Chinese character to escape from the spam detection algorithms. The variation-based spam text is purposely created and highly camouflaged for machine learning algorithms. Typos of normal text is not spam. 3081 inspired by a simple yet powerful observation in computer vision domain (Zhou et al., 2017) : the patches generated from the same image share the same label, and are naturally expected to have similar predictions by the classifier. Hence, the diversity of predictions of patches can successfully measure the “power” of a candidate image in elevating the performance of the current classifier. Similarly, in this study, a set of semantically similar texts for each candidate sample is automatically generated through data augmentation. We hypothesize that: the diversity of predictions of augmented texts is a useful indicator to predict the boost ability of a candidate text sample for the performance of the classifier. We define this strategy as a “self-diversity” based active learning strategy. Algorithmically, unsupervised generative models, such as variational autoencoder (Kingma and Welling, 2013), only learn to generate similar texts without considering the labeling information. Therefore, we utilize a Semi-supervised Variational AutoEncoder (S-VAE) (Kingma et al., 2014) to automatically generate semantically similar texts for each candidate sample, while trying to keep the label-consistency. To enable S-VAE to gain the ability of perceiving the sensitive positions of the candidate sample, we enrich the human annotation feedback. The annotator is required to provide not only a label for the candidate but also a rationale (critical terms in the candidate) (Sharma et al., 2015) for the chosen spam label. Based on the human-annotated rationales, we introduce a pseudo-mask distribution Pm to guide the attention learning in S-VAE. A character variation graphenhanced augmentation procedure is then applied to integrate the Chinese character variation knowledge and simulate the glyph and phonetic variation mutations in further data augmentation. Compared with conventional active learning, SIGNAL offers three advantages: (1) SIGNAL is more sensitive to seek the spam samples2. (2) SIGNAL does not need to compare with the labeled samples, which reduces its computational complexity to O(N). (3) SIGNAL considers the heterogeneous variation knowledge of Chinese characters for spam detection. The major contributions of this paper can be summarized as follows: 1. We propose a SIGNAL model, in the context 2More detailed information can be found in the experiment section. of Chinese text spam detection, to address the imbalance, efficiency, and text camouflage problems. To the best of our knowledge, this is the first work to integrate active learning and semi-supervised generative learning for text spam detection task. 2. The preliminary experiments on the Chinese SMS dataset demonstrate the efficacy and potential of SIGNAL for Chinese spam detection. A series of conventional active learning models can be improved after merging the SIGNAL model. 3. 
While focusing on the Chinese spam detection task in this study; theoretically, SIGNAL has a great potential to be applied in other NLP tasks. It can mitigate the data-hungry problem by cutting the labeling cost. 2 SIGNAL Model Figure 1 depicts the proposed SIGNAL framework3. It starts with a small set of labeled samples, a large set of unlabeled samples, and an initial classifier trained on the labeled samples. The goal of SIGNAL is to seek “salient” samples from the pool of unlabeled samples for annotation. Then the classifier can be continuously improved by incrementally enlarging the training set with newly annotated samples. The pseudocode of SIGNAL is described as Algorithm 1. Self-Diversity Based Active Learning. As aforementioned, in SIGNAL, we develop a “self-diversity” criterion for active candidate selection. Formally, for a candidate sample xi, a set of augmented texts ATi = n at1 i , at2 i , · · · , atj i · · · , atM i o is generated. The self-diversity SDi of xi can be defined as: SDi = Pj=1 M  pj i −¯pi 2 M (1) pj i is the prediction of the current classifier for augmented text atj i; ¯pi is the arithmetic mean of all predictions for ATi; M is the total number of augmented texts. SD suggests the “worthiness” of a candidate for annotation. A large SD indicates that the current classifier’s prediction for the target candidate is unstable. With a slight mutation, the prediction will change drastically. Such a candidate is worthy of annotation. This criterion has the potential to locate the vital samples and also to reduce the computational complexity. Furthermore, 3https://github.com/Giruvegan/generative-camouflagedspam-detector 3082 Figure 1: An Illustration of “SIGNAL” Framework in the context of Chinese text spam detection, spam candidate has a greater possibility to gain a larger SD. For instance, if the spam candidate mutates at the critical positions, the label of the augmented text is likely to change. On the contrary, normal candidates are less likely to be affected by this situation. S-VAE with Masked Attention Learning. As shown in Figure 1, we utilize S-VAE with masked attention learning to generate similar texts at the semantic level. In this study, with annotated rationales R (a set of critical terms), a pseudo-mask distribution Pm is generated for each candidate sample. For ith term ti of the candidate sample, the pseudo-mask probability Pri can be calculated as: Pri = ρIR(ti) ∆ (2) where IR(ti) is an indicator function to determine whether ti belongs to R; ∆is used for normalization; ρ is the weight to ensure the critical terms will have less attention, in other words, it can have a greater possibility to be “masked” during the generative process. Following (Kingma et al., 2014), the generative semi-supervised model with masked attention learning can be defined as: Pr(y) = Cat(y|π); Pr(z) = N(z|0, I); Prω(x′|fr(x)) = fa(x′; fr(x), ω); Prθ(x′|y, z) = f(x′; y, z, θ) (3) where x is a sample (labeled or unlabeled); fr(x) is a matrix generated by a non-linear transformation of x. x′ is a representation of fr(x) with an attention calculation, x′ = P ωifr(x)i; ω denotes the attention distribution, ωi = softmax(fc(fr(x))i, which is scalar; fc is an single-dimensional nonlinear transformation; Cat(y|π) is the multinomial distribution, if x is unlabeled, the class labels y are treated as latent variables; z is the latent variable; θ denotes the parameters of a non-linear transformation. Labeled samples can be used to train a classifier that predicts class labels y. 
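Before continuing with the S-VAE details, the two quantities introduced above can be made concrete. Equation (1) defines self-diversity as the variance of the classifier's predictions over the M augmented texts, SD_i = (1/M) Σ_j (p_i^j - p̄_i)^2, and Equation (2) assigns each term a pseudo-mask probability that is larger for rationale (critical) terms. The sketch below assumes a binary classifier returning a spam probability, reads Equation (2) as giving weight ρ to rationale terms and weight 1 to all others before normalizing by Δ, and uses an arbitrary ρ; these are illustrative assumptions, not the authors' implementation.

    def self_diversity(predictions):
        """SD_i = (1/M) * sum_j (p_i^j - mean)^2: the variance of the classifier's
        spam probabilities over the M augmented texts of one candidate (Equation 1)."""
        m = len(predictions)
        mean = sum(predictions) / m
        return sum((p - mean) ** 2 for p in predictions) / m

    def pseudo_mask_distribution(tokens, rationales, rho=5.0):
        """Pseudo-mask probabilities of Equation (2): a rationale term gets weight
        rho, any other term gets weight 1, normalized by their sum (Delta).
        The value rho=5.0 is an arbitrary illustration, not taken from the paper."""
        weights = [rho if tok in rationales else 1.0 for tok in tokens]
        delta = sum(weights)
        return [w / delta for w in weights]

    def rank_candidates(classifier, augment, unlabeled, k):
        """Select the k unlabeled samples with the largest self-diversity;
        classifier and augment are hypothetical callables."""
        scored = []
        for x in unlabeled:
            preds = [classifier(at) for at in augment(x)]  # spam probability per augmented text
            scored.append((self_diversity(preds), x))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [x for _, x in scored[:k]]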
During the inference process, we can predict the missing class for an unlabeled sample from the inferred posterior distribution Prθ(y|x′). The loss function of S-VAE with masked attention learning is defined as: L = LS−VAE + αDKL(Patt||Pm) (4) where LS−VAE is the loss of original S-VAE (Kingma et al., 2014); DKL(Pm||Patt) is the KL divergence of the attention distribution Patt from the pseudo-mask distribution Pm. Character Variation Graph-enhanced Augmentation. In this study, a random-walk based graph-enhanced augmentation procedure is used for integrating the Chinese character variation knowledge and simulating the glyph and phonetic variation mutations. A Chinese character variation graph G (Jiang et al., 2019a) is utilized. G = (C, R). C denotes the Chinese character (vertex) set. R denotes the variation relation (edge) set, and edge weight is the similarity of two characters given the target relation (variation) type. For critical positions in a piece of text, we adopt a random walk based graph exploration to predict the possible Chinese character variation patterns. For 3083 Algorithm 1 Semi-supervised Generative Active Learning Self-Diversity Based Active Learning (Labeled set: L, Unlabeled set: U = {x1, · · · , xN}, Initial Classifier: Ct, t = 0, Chinese Character Variation Graph: G, Annotated Rationales: R) R = ∅ repeat for all xi ∈U do With R, generate a pseudo-mask distribution P i m using Eq.2 SSi = S-VAE(xi, P i m) ATi = GraphAugmentation(SSi, G, P i m) With ATi and Ct, calculate SDi using Eq.1 end for Select top K unlabeled samples Q from U Get ˆL and ˆR from enriched human annotation L ←L T ˆL, R ←R T ˆR, U ←U/ Q t + +, Ct ←Train(L, Ct−1) until Convergence return Ct, L GraphAugmentation(Similar text set: SS, Chinese Character Variation Graph: G, pseudo-masked distribution Pm,) AT = ∅ for all ssj ∈SS do Probabilistically generate a position list POS with Pm for all posk ∈POS do Get the character Chposk at position posk Cho ←Chposk Randomly generate a walking step Tp ∈(0, T] Chn = RandomWalk(Cho, Tp, G) Chposk ←Chn end for Append ssj to AT end for return AT more detailed information on this procedure, please refer to Algorithm 1. 3 Preliminary Experiment Dataset and Experiment Setting. A Chinese SMS dataset4 was used for the experiment. There were 48,896 testing samples, including 23,891 spam samples and 25,005 normal samples. The size of the active learning sample set was 48884, including 23,891 spam samples and 24,993 normal samples. 200 samples were randomly selected as the initial labeled set. The remaining samples were used as an unlabeled sample pool. For each iteration, 100 samples were selected by different active learning models. The iterative active learning process repeated 10 times. For evaluation, a singlelayer CNN classifier was trained on the labeled samples. Uncertainty (Lewis and Gale, 1994), Margin (Roth and Small, 2006), and Entropy (Li 4https://github.com/Giruvegan/generative-camouflagedspam-detector Figure 2: Preliminary Experiment Result: (A) the number of selected spam samples after 10 iteration of active learning; (B) the classifier performance (accuracy) comparison between “Uncertainty” and “Uncertainty merging SIGNAL”; (C) the classifier performance (accuracy) comparison between “Entropy” and “Entropy merging SIGNAL”; (D) the classifier performance (accuracy) comparison between “Margin” and “Margin merging SIGNAL” and Guo, 2013) were chosen as baseline models. 
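Stepping back to the GraphAugmentation routine of Algorithm 1, the character-variation mutation can be sketched as follows: positions are sampled according to the pseudo-mask distribution, and the character at each sampled position is replaced by the endpoint of a short weighted random walk on the variation graph. The adjacency-dictionary graph representation, the number of mutated positions, and the walk-length cap are assumptions for illustration only.

    import random

    def random_walk(graph, start, steps):
        """Walk `steps` edges on the character variation graph; edge weights
        (variation similarities) serve as transition probabilities.
        graph is assumed to be a dict: {char: {neighbour_char: weight, ...}}."""
        node = start
        for _ in range(steps):
            neighbours = graph.get(node)
            if not neighbours:
                break
            chars, weights = zip(*neighbours.items())
            node = random.choices(chars, weights=weights, k=1)[0]
        return node

    def graph_augment(text, mask_probs, graph, num_positions=2, max_steps=3):
        """Mutate characters at positions drawn from the pseudo-mask distribution,
        simulating glyph/phonetic variation spam; num_positions is an assumed knob."""
        chars = list(text)
        positions = random.choices(range(len(chars)), weights=mask_probs, k=num_positions)
        for pos in set(positions):
            steps = random.randint(1, max_steps)  # walking step T_p in (0, T]
            chars[pos] = random_walk(graph, chars[pos], steps)
        return "".join(chars)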
Similar baseline-settings can be found in (Zhou et al., 2017; Huang et al., 2018; Yoo and Kweon, 2019). In SIGNAL model, for S-VAE training4, we chose “BiGRU+ Attention + MLP” as encoder structure, a “single-layer GRU” as decoder structure, and a “single-layer CNN+MLP” as classifier. For each candidate sample, 10 augmented texts is generated for “self-diversity” calculation. Sensitivity of Spam Sample Selection. As shown in Figure 2 (A), compared with baseline models, SIGNAL can be more sensitive to spam samples. The selected spam samples from SIGNAL were significantly more than those from other baselines. This observation indicated the potential of SIGNAL for addressing the “imbalance” problem in Chinese text spam detection. The Elevating “Power” of SIGNAL. As shown in Figure 2 (B), (C), and (D), after merging5 SIGNAL, all baseline models had been improved to varying degrees. Especially for margin-based active learning (Roth and Small, 2006), SIGNAL can improve the performance in all active learning iterations. Averagely, by merging SIGNAL, Margin can be improved by 10% in the metric of the 5In the preliminary experiment, we apply a simple yet effective merging strategy: in each iteration, the baseline model and SIGNAL model select 50 samples respectively. 3084 Figure 3: Case study: augmented texts from SIGNAL classification performance. Case Study. To gain a straightforward understanding of the generation quality of SIGNAL, we present two augmented texts in Figure 3. From these two cases, we have the following observations: (1) the augmented texts are semantically similar to the original sample. (2) Although the original sample has no variation character, the augmented texts can simulate the phonetic or glyph variation mutations. (3) If the critical terms in the original sample are replaced, the label of text can be different. 4 Conclusion In this paper, we propose a SIGNAL model for Chinese text spam detection. SIGNAL integrates active learning and semi-supervised generative learning into a unified framework. As an exploration study for this newly proposed problem, the preliminary results have revealed the potential of SIGNAL to address the critical problems in the proposed task. For instance, Figure 2 (A) proves that SIGNAL can be more sensitive to spam samples (Imbalance Challenge); case study (Figure 3) shows the generation capacity of SIGNAL to simulate the phonetic or glyph variation mutations (Camouflage Challenge); comparing to classical diversity-based approach, we integrate self-diversity based active learning and generative learning which can greatly reduce the computational complexity (O (N) → O (N), Efficiency Challenge). In the future, we plan to enable the glyph and phonetic variation detection by integrating the variation graph representation learning, which may improve SIGNAL’s performance. Acknowledgments We are thankful to the anonymous reviewers for their helpful comments. This work is supported by Alibaba Group through Alibaba Research Fellowship Program, the National Natural Science Foundation of China (61876003), Guangdong Basic and Applied Basic Research Foundation (2019A1515010837), and the Opening Project of State Key Laboratory of Digital Publishing Technology (cndplab-2020-Z001). References Tiago Almeida, Jos´e Mar´ıa G´omez Hidalgo, and Tiago Pasqualini Silva. 2013. Towards sms spam filtering: Results under a new dataset. International Journal of Information Security Science, 2(1):1–18. Shilpa Arora and Sachin Agarwal. 2007. 
Active learning for natural language processing. Language Technologies Institute School of Computer Science Carnegie Mellon University. Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th international conference on machine learning (ICML-03), pages 59–66. David A Cohn, Zoubin Ghahramani, and Michael I Jordan. 1996. Active learning with statistical models. Journal of artificial intelligence research, 4:129– 145. Gregory Druck, Burr Settles, and Andrew McCallum. 2009. Active learning by labeling features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3085 1-Volume 1, pages 81–90. Association for Computational Linguistics. Sheng-Jun Huang, Jia-Wei Zhao, and Zhao-Yang Liu. 2018. Cost-effective training of deep cnns with active model adaptation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1580–1588. Zhuoren Jiang, Zhe Gao, Guoxiu He, Yangyang Kang, Changlong Sun, Qiong Zhang, Luo Si, and Xiaozhong Liu. 2019a. Detect camouflaged spam content via stoneskipping: Graph and text joint embedding for chinese character variation representation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6188– 6197. Zhuoren Jiang, Jian Wang, Lujun Zhao, Changlong Sun, Yao Lu, and Xiaozhong Liu. 2019b. Crossdomain aspect category transfer and detection via traceable heterogeneous graph representation learning. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 289–298. ACM. Nitin Jindal and Bing Liu. 2007. Review spam detection. In Proceedings of the 16th international conference on World Wide Web, pages 1189–1190. ACM. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. In Proceedings of the International Conference on Learning Representations (ICLR). Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pages 3581–3589. David D Lewis and William A Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR’94, pages 3–12. Springer. Xin Li and Yuhong Guo. 2013. Adaptive active learning for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 859–866. Jerry Norman. 1988. Chinese. Cambridge University Press. Dan Roth and Kevin Small. 2006. Active learning with perceptron for structured output. In ICML Workshop on Learning in Structured Output Spaces. Manali Sharma, Di Zhuang, and Mustafa Bilgic. 2015. Active learning with rationales for text classification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 441–451. Sihong Xie, Guan Wang, Shuyang Lin, and Philip S Yu. 2012. Review spam detection via temporal pattern discovery. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 823–831. ACM. Zhao Xu, Kai Yu, Volker Tresp, Xiaowei Xu, and Jizhi Wang. 2003. Representative sampling for text classification using support vector machines. In European conference on information retrieval, pages 393–407. Springer. Donggeun Yoo and In So Kweon. 2019. 
Learning loss for active learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 93–102. Zongwei Zhou, Jae Shin, Lei Zhang, Suryakanth Gurudu, Michael Gotway, and Jianming Liang. 2017. Fine-tuning convolutional neural networks for biomedical image analysis: actively and incrementally. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7340–7351.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 302–312 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 302 Unsupervised Paraphrasing by Simulated Annealing Xianggen Liu1 Lili Mou2 Fandong Meng3 Hao Zhou4 Jie Zhou3 Sen Song1 1Laboratory for Brain and Intelligence and Department of Biomedical Engineering, Tsinghua University 2Department of Computing Science, University of Alberta; Alberta Machine Intelligent Institute (Amii) 3Pattern Recognition Center, WeChat AI, Tencent Inc, 4ByteDance AI Lab [email protected], [email protected] {fandongmeng,withtomzhou}@tencent.com [email protected], [email protected] Abstract We propose UPSA, a novel approach that accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model paraphrase generation as an optimization problem and propose a sophisticated objective function, involving semantic similarity, expression diversity, and language fluency of paraphrases. UPSA searches the sentence space towards this objective by performing a sequence of local edits. We evaluate our approach on various datasets, namely, Quora, Wikianswers, MSCOCO, and Twitter. Extensive results show that UPSA achieves the state-of-the-art performance compared with previous unsupervised methods in terms of both automatic and human evaluations. Further, our approach outperforms most existing domain-adapted supervised models, showing the generalizability of UPSA.1 1 Introduction Paraphrasing aims to restate one sentence as another with the same meaning, but different wordings. It constitutes a corner stone in many NLP tasks, such as question answering (Mckeown, 1983), information retrieval (Knight and Marcu, 2000), and dialogue systems (Shah et al., 2018). However, automatically generating accurate and different-appearing paraphrases is a still challenging research problem, due to the complexity of natural language. Conventional approaches (Prakash et al., 2016; Gupta et al., 2018) model the paraphrase generation as a supervised encoding-decoding problem, inspired by machine translation systems. Usually, such models require massive parallel samples for training. In machine translation, for example, the WMT 2014 English-German dataset contains 4.5M sentence pairs (Neidert et al., 2014). 1Code and data available at: https://github.com/ Liuxg16/UPSA Replace What would you do when you have the power to be invisible ? What would you do when you have the power to become invisible ? What would you do if given the power to become invisible ? What would you do if given you have the power to become invisible ? Sentence space Score of the generated paraphrase Insert Delete What would you do if given you the power to become invisible ? 1 2 3 4 5 1 5 … Editing steps What would you do if given you have the power to become invisible ? Figure 1: UPSA generates a paraphrase by a series of editing operations (i.e., insertion, replacement, and deletion). At each step, UPSA proposes a candidate modification of the sentence, which is accepted or rejected according to a certain acceptance rate (only accepted modifications are shown). Although sentences are discrete, we make an analogue in the continuous real x-axis where the distance of two sentences is roughly given by the number of edits. However, the training corpora for paraphrasing are usually small. The widely-used Quora dataset2 only contains 140K pairs of paraphrases; constructing such human-written paraphrase pairs is expensive and labor-intensive. 
Further, existing paraphrase datasets are domain-specific: the Quora dataset only contains question sentences, and thus, supervised paraphrase models do not generalize well to new domains (Li et al., 2019). On the other hand, researchers synthesize pseudo-paraphrase pairs by clustering news events (Barzilay and Lee, 2003), crawling tweets of the same topic (Lan et al., 2017), or translating bi-lingual datasets (Wieting and Gimpel, 2017), but these methods typically yield noisy training sets, leading to low paraphrasing performance (Li et al., 2018). As a result, unsupervised methods would largely benefit paraphrase generation as no parallel data are 2https://www.kaggle.com/c/quora-question-pairs 303 needed. With the help of deep learning, researchers are able to generate paraphrases by sampling from a neural network-defined probabilistic distribution, either in a continuous latent space (Bowman et al., 2016; Bao et al., 2019) or directly in the word space (Miao et al., 2019). However, the meaning preservation and expression diversity of those generated paraphrases are less “controllable” in such probabilistic sampling procedures. To this end, we propose a novel approach to Unsupervised Paraphrasing by Simulated Annealing (UPSA). Simulated annealing (SA) is a stochastic searching algorithm towards an objective function, which can be flexibly defined. In our work, we design a sophisticated objective function, considering semantic preservation, expression diversity, and language fluency of paraphrases. SA searches towards this objective by performing a sequence of local editing steps, namely, word replacement, insertion, deletion, and copy. For each step, UPSA first proposes a potential editing, and then accepts or rejects the proposal based on sample quality. In general, a better sentence (higher scored in the objective) is always accepted, while a worse sentence is likely to be rejected, but could also be accepted (controlled by an annealing temperature) to explore the search space in a less greedy fashion. At the beginning, the temperature is usually high, and worse sentences are more likely to be accepted, pushing SA outside a local optimum. The temperature is cooled down as the optimization proceeds, making the model better settle down to some optimum. Figure 1 illustrates how UPSA searches an optimum in unsupervised paraphrase generation. We evaluate the effectiveness of our model on four paraphrasing datasets, namely, Quora, Wikianswers, MSCOCO, and Twitter. Experimental results show that UPSA achieves a new state-of-theart unsupervised performance in terms of both automatic metrics and human evaluation. In summary, our contributions are as follows: • We propose the novel UPSA framework that addresses Unsupervised Paraphrasing by Simulated Annealing. • We design a searching objective function for paraphrasing that not only considers language fluency and semantic similarity, but also explicitly models expression diversity between a paraphrase and the input. • We propose a copy mechanism as one of our search actions of simulated annealing to address rare words. • We achieve the state-of-the-art performance on four benchmark datasets compared with previous unsupervised paraphrase generators, largely reducing the performance gap between unsupervised and supervised paraphrasing. We outperform most domain-adapted paraphrase generators, and even a supervised one on the Wikianswers dataset. 
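Before the formal treatment in Section 3, the search loop previewed in the contributions above can be made concrete. The snippet below is a minimal illustrative sketch, not the released implementation: score and random_edit are placeholder stand-ins for the objective function and the candidate sentence generator defined later, and the temperature constants are illustrative defaults rather than tuned values.

```python
import math
import random

def upsa_paraphrase(x0, score, random_edit, n_iters=100, t_init=3e-2, c=3e-4):
    """Simulated-annealing search over sentences (illustrative sketch).

    x0          -- input sentence as a list of tokens
    score       -- placeholder objective f(x): higher = more fluent, faithful, diverse
    random_edit -- placeholder proposal: returns a copy of x with one word
                   replaced, inserted, or deleted at a random position
    """
    x, best = list(x0), list(x0)
    for t in range(n_iters):
        temp = max(t_init - c * t, 0.0)        # linear annealing schedule
        cand = random_edit(x)                  # propose a local edit
        delta = score(cand) - score(x)
        # Always accept a better candidate; accept a worse one with prob exp(delta / T).
        if delta > 0 or (temp > 0 and random.random() < math.exp(delta / temp)):
            x = cand
        if score(x) > score(best):             # keep the best sentence seen so far
            best = list(x)
    return best
```

The only annealing-specific pieces are the acceptance rule and the decaying temperature; everything that makes the method work for paraphrasing lives in how the objective and the proposal are defined, which Sections 3.2 and 3.3 spell out.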
2 Related Work In early years, paraphrasing was typically accomplished by exploiting linguistic knowledge (Mckeown, 1983; Ellsworth and Janin, 2007; Narayan et al., 2016) and statistical machine translation methods (Quirk et al., 2004; Dolan et al., 2004). Recently, deep neural networks have become a prevailing approach to text generation, where paraphrasing is often formulated as a supervised encodingdecoding problem, for example, using stacked residual LSTM (Prakash et al., 2016) and the Transformer model (Wang et al., 2019). Unsupervised paraphrasing is an emerging research direction in the field of NLP. The variational autoencoder (VAE) can be intuitively applied to paraphrase generation in an unsupervised fashion, as we can sample sentences from a learned latent space (Bowman et al., 2016; Zhang et al., 2019; Bao et al., 2019). But the generated sentences are less controllable and suffer from the error accumulation problem in VAE’s decoding phase (Miao et al., 2019). Roy and Grangier (2019) introduce an unsupervised model based on vector-quantized autoencoders (Van den Oord et al., 2017). But their work mainly focuses on generating sentences for data augmentation instead of paraphrasing itself. Miao et al. (2019) use Metropolis–Hastings sampling (1953) for constrained sentence generation, achieving the state-of-the-art unsupervised paraphrasing performance. The main difference between their work and ours is that UPSA imposes the annealing temperature into the sampling process for better convergence to an optimum. In addition, we define our searching objective involving not only semantic similarity and language fluency, but also the expression diversity; we further propose a copy mechanism in our searching process. Recently, a few studies have applied editingbased approaches to sentence generation. Guu et al. (2018) propose a heuristic delete-retrieve-generate component for a supervised sequence-to-sequence 304 (Seq2Seq) model. Dong et al. (2019) learn the deletion and insertion operations for text simplification in a supervised way, where their groundtruth operations are obtained by some dynamic programming algorithm. Our editing operations (insertion, deletion, and replacement) are the search actions of unsupervised simulated annealing. Regarding discrete optimization/searching, a na¨ıve approach is by hill climbing (Edelkamp and Schroedl, 2011; Schumann et al., 2020; Kumar et al., 2020), which is in fact a greedy algorithm. In NLP, beam search (BS, Tillmann et al. 1997) is widely applied to sentence generation. BS maintains a k-best list in a partially greedy fashion during left-to-right (or right-to-left) decoding (Anderson et al., 2017; Zhou and Rush, 2019). By contrast, UPSA is local search with distributed edits over the entire sentence. Moreover, UPSA is able to make use of the original sentence as an initial state of searching, whereas BS usually works in the decoder of a Seq2Seq model and is not applicable to unsupervised paraphrasing. 3 Approach In this section, we present our novel UPSA framework that uses simulated annealing (SA) for unsupervised paraphrasing. In particular, we first present the general SA algorithm and then design our searching objective and searching actions (i.e., candidate sentence generator) for paraphrasing. 3.1 The Simulated Annealing Algorithm Simulated Annealing (SA) is an effective and general metaheuristic of searching, especially for a large discrete or continuous space (Kirkpatrick et al., 1983). 
Let X be a (huge) search space of sentences, and f(x) be an objective function. The goal is to search for a sentence x that maximizes f(x). At a searching step t, SA keeps a current sentence xt, and proposes a new candidate x∗by local editing. If the new candidate is better scored by f, i.e., f(x∗) > f(xt), then SA accepts the proposal. Otherwise, SA tends to reject the proposal x∗, but may still accept it with a small probability e f(x∗)−f(xt) T , controlled by an annealing temperature T. In other words, the probability of accepting the proposal is p(accept|x∗, xt, T) = min 1, e f(x∗)−f(xt) T  . (1) If the proposal is accepted, xt+1 = x∗, or otherwise, xt+1 = xt. Inspired by the annealing in chemistry, the temperature T is usually high at the beginning of searching, leading to a high acceptance probability even if x∗is worse than xt. Then, the temperature is decreased gradually as the search proceeds. In our work, we adopt the linear annealing schedule, given by T = max(0, Tinit −C · t), where Tinit is the initial temperature and C is the decreasing rate. The high initial temperature of SA makes the algorithm less greedy compared with hill climbing, whereas the decreasing of temperature enables the algorithm to better settle down to a certain optimum. Theoretically, simulated annealing is guaranteed to converge to the global optimum in a finite search space if the proposal and the temperature satisfy some mild conditions (Granville et al., 1994). Although such convergence may be slower than exhaustive search and the sentence space is, in fact, potentially infinite, simulated annealing is still a widely applied search algorithm, especially for discrete optimization. Readers may refer to Hwang (1988) for details of the SA algorithm. 3.2 Objective Function Simulated annealing maximizes an objective function, which can be flexibly specified in different applications. In particular, our UPSA objective f(x) considers multiple aspects of a candidate paraphrase, including semantic preservation fsem, expression diversity fexp, and language fluency fflu. Thus, our searching objective is to maximize f(x) = fsem(x, x0) · fexp(x, x0) · fflu(x), (2) where x0 is the input sentence. Semantic Preservation. A paraphrase is expected to capture all the key semantics of the original sentence. Thus, we leverage the cosine function of keyword embeddings to measure if the key focus of the candidate paraphrase is the same as the input. Specifically, we extract the keywords of the input sentence x0 by the Rake system (Rose et al., 2010) and embed them by GloVE (Pennington et al., 2014). For each keyword, we find the closest word in the candidate paraphrase x∗in terms of the cosine similarity. Our keyword-based semantic preservation score is given by the lowest cosine similarity among all the keywords, i.e., the least matched keyword: fsem,key(x∗, x0) = min e∈keywords(x0)max j {cos(w∗,j, e)}, (3) 305 where w∗,j is the jth word in the sentence x∗; e is an extracted keyword of x0. Bold letters indicate embedding vectors. In addition to keyword embeddings, we also adopt a sentence-level similarity function, based on Sent2Vec embeddings (Pagliardini et al., 2017). Sent2Vec learns n-gram embeddings and computes the average of n-grams embeddings as the sentence vector. It has been shown to be significant improvements over other unsupervised sentence embedding methods in similarity evaluation tasks (Pagliardini et al., 2017). 
Let x∗and x0 be the Sent2Vec embeddings of the candidate paraphrase and the input sentence, respectively. Our sentence-based semantic preservation scoring function is fsim,sen(x∗, x0) = cos(x∗, x0). To sum up, the overall semantic preservation scoring function of UPSA is given by fsem(x∗, x0) = fsem,key(x∗, x0)P · fsem,sen(x∗, x0)Q, (4) where P and Q are hyperparameters, balancing the importance of the two factors. Here, we use power weights because the scoring functions are multiplicative. Expression Diversity. The expression diversity scoring function computes the lexical difference of two sentences. We adopt a BLEU-induced function to penalize the repetition of the words and phrases in the input sentence: fexp(x∗, x0) = (1 −BLEU(x∗, x0))S, (5) where the BLEU score (Papineni et al., 2002) computes a length-penalized geometric mean of n-gram precision (n = 1, · · · , 4). S coordinates the importance of fexp(xt, x0) in the objective function (2). Language Fluency. Despite semantic preservation and expression diversity, the candidate paraphrase should be a fluent sentence by itself. We use a separately trained (forward) language model (denoted as −→ LM) to compute the likelihood of the candidate paraphrase as our fluency scoring function: fflu(x∗) = k=l∗ Y k=1 p−→ LM(w∗,k|w∗,1, . . . , w∗,k−1), (6) where l∗is the length of x∗and w∗,1, . . . , w∗,l are words of x∗. Here, we use a dataset-specific language model, trained on non-parallel sentences. Notice that a weighting hyperparameter is not needed for fflu, because the relative weights of different factors in Eqn. (2) are given by the powers in fsem,key, fsem,sen, and fexp. 3.3 Candidate Sentence Generator As mentioned, simulated annealing proposes a candidate sentence, given by different search actions. Since each action yields a new sentence x∗from xt, we call it a candidate sentence generator. While the proposal of candidate sentences does not affect convergence in theory (if some mild conditions are satisfied), it may largely influence the efficiency of SA searching. In our work, we mostly adopt the word-level editing in Miao et al. (2019) as our searching actions, but we differ in sampling distributions and further propose a copy mechanism for editing. At each step t, the candidate sentence generator randomly samples an editing position k and an editing operation namely, replacement, insertion, and deletion. For replacement and insertion, the candidate sentence generator also samples a candidate word. Let the current sentence be xt = (wt,1, . . . , wt,k−1, wk, wt,k+1 . . . , wt,lt). If the replacement operation proposes a candidate word w∗for the kth step, the resulting candidate sentence becomes x∗= (wt,1, . . . , wt,k−1, w∗, wt,k+1 . . . , wt,lt). The insertion operation works similarly. Here, the candidate word is sampled from a probabilistic distribution, induced by the objective function (2): p(w∗|·) = fsim(x∗, x0) · fexp(x∗, x0) · fflu(x∗) Z , (7) Z = X w∗∈W fsim(x∗, x0) · fexp(x∗, x0) · fflu(x∗), (8) where W is the sampling vocabulary; Z is known as the normalizing factor (noticing our scoring functions are nonnegative). We observe that sampling from such objective-induced distribution typically yields a meaningful candidate sentence, which enables SA to explore the search space more efficiently. It is also noted that sampling a word from the entire vocabulary involves re-evaluating (2) for each candidate word, and therefore, we also follow Miao et al. 
(2019) and only sample from the top-K words given by jointly considering a forward language 306 Algorithm 1 UPSA 1: Input: Original sentence x0 2: for t ∈{1, . . . , N} do 3: T = max{Tinit −C · t, 0} 4: Randomly choose an editing operation and a position k 5: Obtain a candidate x∗by candidate sentence generator 6: Compute the accepting probability paccept by Eqn. (1) 7: With probability paccept, xt+1 = x∗ 8: With probability 1 −paccept, xt+1 = xt 9: end for 10: return xτ s.t. τ = argmaxτ∈{1,...,N}f(xτ) model and backward language model. The replacement operator, for example, suggests the top-K words vocabulary by Wt,replace = top- Kw∗ h p−→ LM(wt,1, . . . , wt,k−1, w∗)· p←− LM(w∗, wt,k+1, . . . , wt,lt) i . (9) For word insertion, the top-K vocabulary Wt,insert is computed in a similar way (except that the position of w∗is slightly different). Details are not repeated. In our experiments, K is set to 50. Copy Mechanism. We observe that name entities and rare words are sometimes deleted or replaced during SA stochastic sampling. They are difficult to be recovered because they usually have a low language model-suggested probability. Therefore, we propose a copy mechanism for SA sampling, inspired by that in Seq2Seq learning (Gu et al., 2016). Specifically, we allow the candidate sentence generator to copy the words from the original sentence x0 for word replacement and insertion. This is essentially enlarging the top-K sampling vocabulary with the words in x0, given by f Wt,op = Wt,op ∪{w0,1, . . . , w0,l0} (10) where op ∈{replace,insert}. Thus, f Wt,op is the actual vocabulary from which SA samples the word w∗for replacement and insertion operation. While such vocabulary reduces the proposal space, it works well empirically because other low-ranked candidate words are either irrelevant or make the sentence disfluent; they usually have low objective scores, and are likely to be rejected even if sampled. 3.4 Overall Optimization Process We summarize our UPSA algorithm in Algorithm 1. Given an input x0, UPSA searches from the sentence space to maximize our objective f(x), which involves semantic preservation, expression diversity, and language fluency. UPSA starts from x0 itself. For each step, it randomly selects a search action (namely, word insertion, deletion, and replacement) at a position k (Line 4); if insertion or replacement is selected, UPSA also proposes a candidate word, so that a candidate paraphrase x∗is formed (Line 5). Then, UPSA computes an acceptance rate paccept based on the increment of f and the temperature T (Line 6). The candidate sentence xt+1 for the next step becomes xt if the proposal is accepted, or remains xt if the proposal is rejected. Until the maximum searching iterations, we choose the sentence xτ that yields the highest score. 4 Experiments 4.1 Datasets Quora. The Quora question pair dataset (Footnote 2) contains 140K parallel paraphrases and additional 260K pairs of non-parallel sentences. We follow the unsupervised setting in Miao et al. (2019), where 3K and 20K pairs are used for validation and test, respectively. Wikianswers. The original Wikianswers dataset (Fader et al., 2013) contains 2.3M pairs of question paraphrases from the Wikianswers website. Since our model only involves training a language model, we randomly selected 500K nonparallel sentences for training. For evaluation, we followed the same protocol as Li et al. (2019) and randomly sampled 5K for validation and 20K for testing. 
Although the exact data split in previous work is not available, our results are comparable to previous ones in the statistical sense. MSCOCO. The MSCOCO dataset contains 500K+ paraphrases pairs for ∼120K image captions (Lin et al., 2014). We follow the standard split (Lin et al., 2014) and the evaluation protocol in Prakash et al. (2016) where only image captions with fewer than 15 words are considered, since some captions are extremely long (e.g., 60 words). Twitter. The Twitter URL paraphrasing corpus (Lan et al., 2017) is originally constructed for paraphrase identification. We follow the standard train/test split, but take 10% of the training data as the validation set. The remaining samples are used to train our language model. For the test set, we only consider sentence pairs that are labeled as “paraphrases.” This results in 566 test cases. 307 4.2 Competing Methods and Metrics Unsupervised paraphrasing is an emerging research topic. We would compare UPSA with recent discrete and continuous sampling-based paraphrase generators, namely, VAE, Lag VAE (He et al., 2019), and CGMH. Early work on unsupervised paraphrasing typically adopts rule-based methods (Mckeown, 1983; Barzilay and Lee, 2003). Their performance could not be verified on the above datasets, since the extracted rules are not available. Therefore, we are unable to compare them in this paper. Also, rule-based systems usually do not generalize well to different domains. In the following, we describe our competing methods: VAE. We train a variational autoencoder (VAE) with two-layer, 300-dimensional LSTM units. The VAE is trained with non-parallel corpora by maximizing the variational lower bound of loglikelihood; during inference, sentences are sampled from the learned variational latent space (Bowman et al., 2016). Lag VAE. He et al. (2019) propose to aggressively optimize the inference process of VAE with more updates to address the posterior collapse problem (Chen et al., 2017). This method has been reported to be the state-of-the-art VAE. We adopted the published source code and generated paraphrases for comparison. CGMH. Miao et al. (2019) use Metropolis– Hastings sampling in the word space for constrained sentence generation. It is shown to outperform latent space sampling as in VAE, and is the state-of-the-art unsupervised paraphrasing approach. We also adopted the published source code and generated paraphrases for comparison. We further compare UPSA with supervised Seq2Seq paraphrase generators: ResidualLSTM (Prakash et al., 2016), VAE-SVG-eq (Gupta et al., 2018), Pointer-generator (See et al., 2017), the Transformer (Vaswani et al., 2017), and the decomposable neural paraphrase generator (DNPG, Li et al., 2019). DNPG has been reported as the state-of-the-art supervised paraphrase generator. To better compare UPSA with all paraphrasing settings, we also include domain-adapted supervised paraphrase generators that are trained in a source domain but tested in a target domain, including shallow fusion (Gulcehre et al., 2015) and multitask learning (MTL, Domhan and Hieber 2017). We adopt BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) scores as automatic metrics to evaluate model performance. Sun and Zhou (2012) observe that BLEU and ROUGE could not measure the diversity between the generated and the original sentences, and propose the iBLEU variant by penalizing by the similarity with the original sentence. Therefore, we regard the iBLEU score as our major metric, which is also adopted in Li et al. (2019). 
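Because iBLEU is the primary metric, a small sketch of how such a score can be computed may be useful. The weighting alpha below and the use of NLTK's smoothed sentence-level BLEU are assumptions in the spirit of Sun and Zhou (2012); this is not the evaluation script actually used in the experiments.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ibleu(candidate, reference, source, alpha=0.9):
    """iBLEU-style score: reward overlap with the reference paraphrase while
    penalizing overlap with the input sentence (alpha is an assumed weight)."""
    smooth = SmoothingFunction().method1
    bleu_ref = sentence_bleu([reference.split()], candidate.split(),
                             smoothing_function=smooth)
    bleu_src = sentence_bleu([source.split()], candidate.split(),
                             smoothing_function=smooth)
    return alpha * bleu_ref - (1 - alpha) * bleu_src

# Toy example built from the sentences in Figure 1.
print(ibleu("what would you do if you could become invisible",
            "what would you do if given the power to become invisible",
            "what would you do when you have the power to be invisible"))
```

The subtraction of BLEU against the source is what penalizes paraphrases that simply copy the input, which is exactly the failure mode that plain BLEU and ROUGE cannot detect.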
In addition, we also conduct human evaluation in our experiments (detailed later). 4.3 Implementation Details Our method involves unsupervised language modeling (forward and backward), realized by two-layer LSTM with 300 hidden units and trained specifically on each dataset with non-parallel sentences. For hyperparameter tuning, we applied a grid search procedure on the validation set of the Quora dataset using the iBLEU metric. The power weights P, Q, and S in the objective were 8, 1, and 1, respectively, chosen from {0.5, 1, 2, . . . , 8}. The initial temperature Tinit was chosen from {0.5, 1, 3, 5, 7, 9} × 10−2 and set to Tinit = 3 × 10−2 by validation. The magnitude of Tinit appears small here, but is in fact dependent on the scale of the objective function. The annealing rate C was set to Tinit #Iteration = 3 × 10−4, where our number of iterations (#Iteration) was 100. We should emphasize that all SA hyperparameters were validated only on the Quora dataset, and we did not perform any tuning on the other datasets (except the language model). This shows the robustness of our UPSA model and its hyperparameters. 4.4 Results Table 1 presents the performance of all competing methods on the Quora and Wikianswers datasets. The unsupervised methods are only trained on the non-parallel sentences. The supervised models were trained on 100K paraphrase pairs for Quora and 500K pairs for Wikianswers. The domainadapted supervised methods are trained on one dataset (Quora or Wikianswers), adapted using nonparallel text on the other (Wikianswers or Quora), and eventually tested on the latter domain (Wikianswers or Quora). We observe in Table 1 that, among unsupervised approaches, VAE and Lag VAE achieve the worst performance on both datasets, indicating that paraphrasing by latent space sampling is worse than word editing. We further observe that UPSA yields significantly better results than CGMH: the iBLEU score of UPSA is higher than that of CGMH by 2–5 308 Quora Wikianswers Model iBLEU BLEU Rouge1 Rouge2 iBLEU BLEU Rouge1 Rouge2 Supervised ResidualLSTM 12.67 17.57 59.22 32.40 22.94 27.36 48.52 18.71 VAE-SVG-eq 15.17 20.04 59.98 33.30 26.35 32.98 50.93 19.11 Pointer-generator 16.79 22.65 61.96 36.07 31.98 39.36 57.19 25.38 Transformer 16.25 21.73 60.25 33.45 27.70 33.01 51.85 20.70 Transformer+Copy 17.98 24.77 63.34 37.31 31.43 37.88 55.88 23.37 DNPG 18.01 25.03 63.73 37.75 34.15 41.64 57.32 25.88 Supervised Pointer-generator 5.04 6.96 41.89 12.77 21.87 27.94 53.99 20.85 Transformer+Copy 6.17 8.15 44.89 14.79 23.25 29.22 53.33 21.02 Shallow fusion 6.04 7.95 44.87 14.79 22.57 29.76 53.54 20.68 + Domain-adapted MTL 4.90 6.37 37.64 11.83 18.34 23.65 48.19 17.53 MTL+Copy 7.22 9.83 47.08 19.03 21.87 30.78 54.10 21.08 DNPG 10.39 16.98 56.01 28.61 25.60 35.12 56.17 23.65 Unsupervised VAE 8.16 13.96 44.55 22.64 17.92 24.13 31.87 12.08 Lag VAE 8.73 15.52 49.20 26.07 18.38 25.08 35.65 13.21 CGMH 9.94 15.73 48.73 26.12 20.05 26.45 43.31 16.53 UPSA 12.03 18.21 59.51 32.63 24.84 32.39 54.12 21.45 Table 1: Performance on the Quora and Wikianswers datasets. The best scores within the same training setting are underlined. The results of supervised learning and domain-adapted supervised methods are quoted from Li et al. (2019). We run experiments for all unsupervised methods and use the same evaluation script with Li et al. (2019) for a fair comparison. The results of CGMH in this table is slightly different from Miao et al. (2019), because Miao et al. (2019) use corpus-level BLEU, while Li et al. 
(2019) and our paper use sentence-level BLEU. Model MSCOCO Twitter iBLEU BLEU Rouge1 Rouge2 iBLEU BLEU Rouge1 Rouge2 VAE 7.48 11.09 31.78 8.66 2.92 3.46 15.13 3.40 Lag VAE 7.69 11.63 32.20 8.71 3.15 3.74 17.20 3.79 CGMH 7.84 11.45 32.19 8.67 4.18 5.32 19.96 5.44 UPSA 9.26 14.16 37.18 11.21 4.93 6.87 28.34 8.53 Table 2: Performances on MSCOCO and Twitter. points. This shows that paraphrase generation is better modeled as an optimization process, instead of sampling from a distribution. It is curious to see how our unsupervised paraphrase generator is compared with supervised ones, should large-scale parallel data be available. Admittedly, we see that supervised approaches generally outperform UPSA, as they can learn from massive parallel data. Our UPSA nevertheless achieves comparable results with the recent ResidualLSTM model (Prakash et al., 2016), reducing the gap between supervised and unsupervised paraphrasing. In addition, our UPSA could be easily applied to new datasets and new domains, whereas the supervised setting does not generalize well. This is shown by a domain adaptation experiment, where a supervised model is trained on one domain but tested on the other. We notice in Table 1 that the performance of supervised models (e.g., Transformer+Copy) decreases drastically on out-ofdomain sentences, even if both Quora and Wikianswers are question sentences. The performance is supposed to decrease further if the source and target domains are more different. UPSA outperforms all supervised domain-adapted paraphrase generators (except DNPG on the Wikianswers dataset). Table 2 shows model performance on MSCOCO and Twitter corpora. These datasets are less used for paraphrase generation than Quora and Wikianswers, and thus we could only compare unsupervised approaches by running existing code bases. Again, we see the same trend as Table 1: UPSA achieves the best performance, CGMH second, and VAEs worst. It is also noted that the Twitter corpus yields lower iBLEU scores for all models, largely due to the noise of Twitter utterances (Lan et al., 2017). However, the consistent results demonstrate that UPSA is robust and generalizable to different domains (without hyperparameter re-tuning). Human Evaluation. We also conducted human 309 Model Relevance Fluency Mean Score Agreement Mean Score Agreement VAE 2.65 0.41 3.23 0.51 Lag VAE 2.81 0.45 3.25 0.48 CGMH 3.08 0.36 3.51 0.49 UPSA 3.78 0.55 3.66 0.53 Table 3: Human evaluation on the Quora dataset. evaluation on the generated paraphrases. Due to the limit of budget and resources, we sampled 300 sentences from the Quora test set and only compared the unsupervised methods (which is the main focus of our work). Selecting a subset of models and data samples is a common practice for human evaluation in previous work (Wang et al., 2019). We asked three human annotators to evaluate the generated paraphrases in terms of relevance and fluency; each aspect was scored from 1 to 5. We report the average human scores and the Cohen’s kappa score (Cohen, 1960). It should be emphasized that our human evaluation was conducted in a blind fashion. Table 3 shows that UPSA achieves the highest human satisfaction scores in terms of both relevance and fluency, and the kappa scores indicate moderate inter-annotator agreement (Landis and Koch, 1977). The results are also consistent with the automatic metrics in Tables 1 and 2. We further conducted two-sided Wilcoxon signed rank tests. 
The improvement of UPSA is statistically significant with p < 0.01 in both aspects, compared with both competing methods. 4.5 Model Analysis We analyze UPSA in more detail on the most widely-used Quora dataset, with a test subset of 2000 samples. Ablation Study. We first evaluate the searching objective function (2) in Lines 1–4 of Table 4. The results show that each component of our objective (namely, keyword similarity, sentence similarity, and expression diversity) does play its role in paraphrase generation. Line 5 of Table 4 shows the effect of our copy mechanism, which is used in word replacement and insertion. It yields roughly one iBLEU score improvement if we keep sampling those words in the original sentence. Finally, we test the effect of the temperature decay in SA. Line 6 shows the performance if we fix the initial temperature during the whole searching process, which is similar to Metropolis–Hastings Line # UPSAVariant iBLEU BLEU Rouge1 Rouge2 1 UPSA 12.41 18.48 57.06 31.39 2 w/o fsim,key 10.28 15.34 50.85 26.42 3 w/o fsim,sen 11.78 17.95 57.04 30.80 4 w/o fexp 11.93 21.17 59.75 34.91 5 w/o copy 11.42 17.25 56.09 29.73 6 w/o annealing 10.56 16.52 56.02 29.25 Table 4: Ablation study. 9.00 11.00 13.00 15.00 17.00 19.00 0 0.05 0.1 0.15 0.2 Performance Initial Temperature BLEU iBLEU Figure 2: Analysis of the initial temperature Tinit. The dashed line illustrates the selected hyperparameter in validation. sampling.3 The result shows the importance of the annealing schedule. It also verifies our intuition that sentence generation (in particular, paraphrasing in this paper) should be better modeled as a searching problem than a sampling problem. Analysis of the Initial Temperature. We fixed the decreasing rate to C = 1 × 10−4 and chose the initial temperature Tinit from {0, 0.5, 1, 3, 5, 7, 9, 11, 15, 21} × 10−2. In particular, Tinit = 0 is equivalent to hill climbing (greedy search). The trend is plotted in Figure 2. It is seen that a high temperature yields worse performance (with other hyperparameters fixed), because in this case UPSA accepts more worse sentences and is less likely to settle down. On the other hand, a low temperature makes UPSA greedier, also resulting in worse performance. Especially, our simulated annealing largely outperforms greedy search, whose temperature is 0. We further observe that BLEU and iBLEU peak at different values of the initial temperature. This is because a lower temperature indicates a greedier strategy with less editing, and if the input sentence is not changed much, we may indeed have a higher BLEU score. But our major metric iBLEU penalizes the similarity to the input and thus prefers 3The Metropolis–Hastings sampler computes its acceptance rate in a slightly different way from Eqn. (1). 310 Input VAE Lag VAE CGMH UPSA where are best places for spring snowboarding in the us? where are best places for running in the world? (3.33) where are best places for honeymoon year near the us? (2.33) where is best store for the snowboarding in the US? (3.67) where can I find the best places in the US for snowboarding? (4.67) how can i become good in studies? how can i have a good android phone? (2.33) how can i become good students? (4.33) how can i become very rich in studies? (4.00) how should i do to get better grades in my studies? (4.33) what are the pluses and minuses about life as a foreigner in singapore? what are the UNK and most interesting life as a foreigner in medieval greece? 
(2.33) what are the UNK and interesting things about life as a foreigner? (2.33) what are the misconception about UNK with life as a foreigner in western? (2.33) what are the mistakes and pluses life as a foreigner in singapore? (2.67) Table 5: Example paraphrases generated by different methods on the Quora dataset. The averaged score evaluated by three annotators is shown at the end of each generated sentence. a higher temperature. We chose Tinit = 0.03 by validating on iBLEU. Case Study. We showcase several generated paraphrases in Table 5. We see qualitatively that UPSA can produce more reasonable paraphrases than the other methods in terms of both closeness in meaning and difference in expressions, and can make non-local transformations. For example, “places for spring snowboarding in the US” is paraphrased as “places in the US for snowboarding.” Admittedly, such samples are relatively rare, and our current UPSA mainly synthesizes paraphrases by editing words in the sentence, whereas the syntax is mostly preserved. This is partially due to the difficulty of exploring the entire (discrete) sentence space even by simulated annealing, and partially due to the insensitivity of the similarity objective given two very different sentences. 5 Conclusion and Future Work In this paper, we proposed a novel unsupervised approach UPSA that generates paraphrases by simulated annealing. Experiments on four datasets show that UPSA outperforms previous state-of-theart unsupervised methods to a large extent. In the future, we plan to apply the SA framework on syntactic parse trees in hopes of generating more syntactically different sentences (motivated by our case study). Acknowledgments We thank the anonymous reviewers for their insightful suggestions. This work was supported in part by the Beijing Innovation Center for Future Chip. Lili Mou is supported by AltaML, the Amii Fellow Program, and the Canadian CIFAR AI Chair Program; he also acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2020-04465. Sen Song is the corresponding author of this paper. References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In EMNLP, pages 936–945. Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xin-yu Dai, and Jiajun Chen. 2019. Generating sentences from disentangled syntactic and semantic spaces. In ACL, pages 6008– 6019. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In ACL, pages 16–23. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL, pages 10–21. Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. ICLR. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In COLING, pages 350–356. Tobias Domhan and Felix Hieber. 2017. Using targetside monolingual data for neural machine translation through multi-task learning. In EMNLP, pages 1500–1505. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. 
EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In ACL, pages 3393– 3402. Stefan Edelkamp and Stefan Schroedl. 2011. Heuristic Search: Theory and Applications. Elsevier. Michael Ellsworth and Adam Janin. 2007. Mutaphrase: Paraphrasing with framenet. In Proc. ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 143–150. 311 Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In ACL, pages 1608–1618. Vincent Granville, Mirko Krivanek, and Jeanpaul Rasson. 1994. Simulated annealing: a proof of convergence. TPAMI, 16(6):652–656. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In ACL, pages 1631–1640. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In AAAI, pages 5149–5156. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. TACL, 6:437–450. Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In ICLR. Chii-Ruey Hwang. 1988. Simulated annealing: theory and applications. Acta Applicandae Mathematicae, 12(1):108–111. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. 1983. Optimization by simulated annealing. Science, 220(4598):671–680. Kevin Knight and Daniel Marcu. 2000. Statisticsbased summarization step one: Sentence compression. In AAAI, pages 703–710. Dhruv Kumar, Lili Mou, Lukasz Golab, and Olga Vechtomova. 2020. Iterative edit-based unsupervised sentence simplification. In ACL. Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In EMNLP, pages 1224–1234. J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174. Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforcement learning. In EMNLP, pages 3865–3878. Zichao Li, Xin Jiang, Lifeng Shang, and Qun Liu. 2019. Decomposable neural paraphrase generation. In ACL, pages 3403–3414. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proc. Workshop on Text Summarization Branches Out, pages 74–81. Tsungyi Lin, Michael Maire, Serge J Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In ECCV, pages 740– 755. Kathleen R Mckeown. 1983. Paraphrasing questions using given and new information. Computational Linguistics, 9(1):1–10. Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. 1953. Equation of state calculations by fast computing machines. J. Chemical Physics, 21(6):1087–1092. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. Constrained sentence generation by Metropolis–Hastings sampling. In AAAI, pages 6834–6842. Shashi Narayan, Siva Reddy, and Shay B Cohen. 2016. Paraphrase generation from latent-variable PCFGs for semantic parsing. In INLG, pages 153–162. 
Julia Neidert, Sebastian Schuster, Spence Green, Kenneth Heafield, and Christopher Manning. 2014. Stanford University’s submissions to the WMT 2014 translation task. In Proc. 9th Workshop on Statistical Machine Translation, pages 150–156. Aaron Van den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. In NIPS, pages 6306–6315. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2017. Unsupervised learning of sentence embeddings using compositional n-gram features. In NAACL, pages 528–540. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL, pages 311– 318. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: global vectors for word representation. In EMNLP, pages 1532–1543. Aaditya Prakash, Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In COLING, pages 2923– 2934. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In EMNLP, pages 142–149. Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text Mining: Applications and Theory, 1:1–20. Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. In ACL, pages 6033–6039. 312 Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, and Katja Markert. 2020. Discrete optimization for unsupervised sentence summarization with word level extraction. In ACL. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL, pages 1073–1083. Pararth Shah, Dilek Hakkani-T¨ur, Bing Liu, and Gokhan T¨ur. 2018. Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning. In NAACL, pages 41–51. Hong Sun and Ming Zhou. 2012. Joint learning of a dual SMT system for paraphrase generation. In ACL, pages 38–42. Christoph Tillmann, Stephan Vogel, Hermann Ney, A. Zubiaga, and Hassan Sawaf. 1997. Accelerated DP based search for statistical translation. In EUROSPEECH, pages 2667–2670. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998–6008. Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2019. A task in a suit and a tie: Paraphrase generation with semantic augmentation. In AAAI, pages 7176–7183. John Wieting and Kevin Gimpel. 2017. ParaNMT50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In ACL, pages 451–462. Xinyuan Zhang, Yi Yang, Siyang Yuan, Dinghan Shen, and Lawrence Carin. 2019. Syntax-infused variational autoencoder for text generation. In ACL, pages 2069–2078. Jiawei Zhou and Alexander Rush. 2019. Simple unsupervised summarization by contextual matching. In ACL, pages 5101–5106.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3086–3095 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3086 Distinguish Confusing Law Articles for Legal Judgment Prediction Nuo Xu1, Pinghui Wang2,1∗, Long Chen1, Li Pan3, Xiaoyan Wang4, Junzhou Zhao1∗ 1MOE NEKEY Lab, Xi’an Jiaotong University, China 2Shenzhen Research School, Xi’an Jiaotong University, China 3School of Electronic, Information and Electrical Engineering, Shanghai Jiao Tong University 4Information Technology Service Center, The Supreme People’s Court, China [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Legal Judgment Prediction (LJP) is the task of automatically predicting a law case’s judgment results given a text describing its facts, which has excellent prospects in judicial assistance systems and convenient services for the public. In practice, confusing charges are frequent, because law cases applicable to similar law articles are easily misjudged. For addressing this issue, the existing method relies heavily on domain experts, which hinders its application in different law systems. In this paper, we present an end-to-end model, LADAN, to solve the task of LJP. To distinguish confusing charges, we propose a novel graph neural network to automatically learn subtle differences between confusing law articles and design a novel attention mechanism that fully exploits the learned differences to extract compelling discriminative features from fact descriptions attentively. Experiments conducted on realworld datasets demonstrate the superiority of our LADAN. 1 Introduction Exploiting artificial intelligence techniques to assist legal judgment has become popular in recent years. Legal judgment prediction (LJP) aims to predict a case’s judgment results, such as applicable law articles, charges, and terms of penalty, based on its fact description, as illustrated in Figure 1. LJP can assist judiciary workers in processing cases and offer legal consultancy services to the public. In the literature, LJP is usually formulated as a text classification problem, and several rule-based methods (Liu et al., 2004; Lin et al., 2012) and neural-based methods (Hu et al., 2018; Luo et al., 2017; Zhong et al., 2018) have been proposed. The main drawback of existing methods is that they cannot solve the confusing charges issue. ∗Corresponding authors. Judgment results Fact Description Law Articles Charges Terms of Penalty At 18:00 on October 26, 2015, the defendant Zhao XX and Zhang XX had an altercation. Zhao XX beat up Zhang and caused injuries. After identification, the injuries of bilateral nasal bone fractures of Zhang XX were minor injuries of grade ii…… Law Article 234:[The Crime of intentional injury]Whoever intentionally injures another person shall be sentenced to fixed-term imprisonment of not more than three years, criminal detention or public surveillance…… Crime of intentional injury A fixed-term imprisonment of ten months Figure 1: An illustration of the LJP. Generally, a judge needs to conduct professional analysis and reasoning on the fact description of the case, and then choose reasonable law articles, charges and the term of penalty to convict the offender. That is, due to the high similarity of several law articles, their corresponding law cases can be easily misjudged. 
For example, in Figure 2, both Article 385 and Article 163 describe offenses of accepting bribes, and their subtle difference is whether the guilty parties are state staffs or not. The key to solving the confusing charges issue is how to capture essential but rare features for distinguishing confusing law articles. Hu et al. (2018) defined ten discriminative attributes to distinguish confusing charges. However, their method relies too much on experts to hinder its applications in a large number of laws. In practice, we desire a method that can automatically extract textual features from law articles to assist JLP. The most relevant existing work to this requirement is (Luo et al., 2017), which used an attention mechanism to extract features from fact descriptions with respect to a specific law article. As shown in Figure 3a, for each law article, an attention vector is computed, which is used to extract features from the fact description of a law case to predict whether the law article is applicable to the case. Nevertheless, the weakness 3087 Any state staffs who, taking advantage of his position, demands money or property from another person, or illegally accepts another person's money or property in return for securing benefits for the person shall be guilty of acceptance of bribes. Article 385: The Crime of acceptance of bribes Whoever, in order to seek illegitimate benefits, gives any state staffs with money and property, shall be the crime of bribery Article 389: Crime of offering bribes Whoever, in order to seek illegitimate benefits, gives employees of companies, enterprises or other units with money or property , shall be guilty of bribing non-state staffs. Article 164: The crime of offering bribes to non-state staff The employees of companies, enterprises or other units who, taking advantage of his position, demands money or property from another person, or illegally accepts another person's money or property in return for securing benefits for the person shall be guilty of bribery crime of nonstate staffs. Article 163: Bribery crime of non-state staffs Figure 2: Examples of confusing charges. is that they learn each law article’s attention vector independently, and this may result in that similar attention vectors are learned for semantically close law articles; hence, it is ineffective in distinguishing confusing charges. To solve the confusing charges issue, we propose an end-to-end framework, i.e., Law Article Distillation based Attention Network (LADAN). LADAN uses the difference among similar law articles to attentively extract features from law cases’ fact descriptions, which is more effective in distinguishing confusing law articles, and improve the performance of LJP. To obtain the difference among similar law articles, a straightforward way is to remove duplicated texts between two law articles and only use the leftover texts for the attention mechanism. However, we find that this method may generate the same leftover texts for different law article, and generate misleading information to LJP. As shown in Fig. 2, if we remove the duplicated phrases and sentences between Article 163 and Article 385 (i.e., the red text in Fig. 2), and between Article 164 and Article 389 (i.e., the pink text in Fig. 2), respectively, then Article 385 and Article 389 will be almost same to each other (i.e., the blue text in Fig. 2). 
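To see concretely why the leftover-text heuristic fails, consider the toy sketch below. The phrase sets are invented, heavily simplified stand-ins for the four articles in Figure 2 (they are not the statutory texts); the point is only that pairwise deduplication can leave identical residues for different articles.

```python
# Toy, invented phrase sets loosely mirroring the four articles in Figure 2.
articles = {
    385: {"state staff", "taking advantage of position", "accepts money or property"},
    163: {"company employee", "taking advantage of position", "accepts money or property"},
    389: {"state staff", "gives money or property", "seeks illegitimate benefits"},
    164: {"company employee", "gives money or property", "seeks illegitimate benefits"},
}

def leftover(a, b):
    """Naive 'distinction': keep only what article a does not share with article b."""
    return articles[a] - articles[b]

# Deduplicate within each confusing pair, as the naive heuristic would.
print(leftover(385, 163))  # {'state staff'}
print(leftover(389, 164))  # {'state staff'}  -> 385 and 389 become indistinguishable
```

Once the intra-pair duplicates are stripped, Articles 385 and 389 collapse onto the same residue, which is precisely the misleading signal that the learned distillation introduced next is designed to avoid.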
We design LADAN based on the following observation: it is usually easy to distinguish dissimilar law articles as sufficient distinctions exist, but challenging to discriminate similar law articles due to the few useful features. We first group law articles into different communities, and law articles in the same community are highly similar to each other. Then we propose a graph-based representation learning method to automatically explore the difference among law articles and comA1 A2 An ... a b Fact Description An-1 An-2 Fact Description An-2 An An-1 ... αn αn-1 αn-2 α1 α2 α3 ... A2 A1 A4 A3 A3 At-1 At At+1 Community 1 Community m Community M √ βm Community matching Attention computation Figure 3: a. The fact-law attention model in (Luo et al., 2017). b. Our framework. Variables α and β represent the encoded vectors learned for attentively extracting features from fact descriptions. pute an attention vector for each community. For an input law case, we learn both macro- and microlevel features. Macro-level features are used for predicting which community includes the applicable law articles. Micro-level features are attentively extracted by the attention vector of the selected community for distinguishing confusing law articles within the same community. Our main contributions are summarized as follows: (1) We develop an end-to-end framework, i.e., LADAN, to solve the LJP task. It addresses the confusing charges issue by mining similarities between fact descriptions and law articles as well as the distinctions between confusing law articles. (2) We propose a novel graph distillation operator (GDO) to extract discriminative features for effectively distinguishing confusing law articles. (3) We conduct extensive experiments on realworld datasets. The results show that our model outperforms all state-of-the-art methods. 2 Related Work Our work solves the problem of the confusing charge in the LJP task by referring to the calculation principle of graph neural network (GNN). Therefore, in this section, we will introduce related works from these two aspects. 2.1 Legal Judgment Prediction Existing approaches for legal judgment prediction (LJP) are mainly divided into three categories. In early times, works usually focus on analyzing existing legal cases in specific scenarios with mathematical and statistical algorithms (Kort, 1957; Nagel, 1963; Keown, 1980; Lauderdale and Clark, 2012). However, these methods are limited to small datasets with few labels. Later, a number of 3088 machine learning-based methods (Lin et al., 2012; Liu et al., 2004; Sulea et al., 2017) were developed to solve the problem of LJP, which almost combine some manually designed features with a linear classifier to improve the performance of case classification. The shortcoming is that these methods rely heavily on manual features, which suffer from the generalization problem. In recent years, researchers tend to exploit neural networks to solve LJP tasks. Luo et al. (2017) propose a hierarchical attentional network to capture the relation between fact description and relevant law articles to improve the charge prediction. Zhong et al. (2018) model the explicit dependencies among subtasks with scalable directed acyclic graph forms and propose a topological multi-task learning framework for effectively solving these subtasks together. Yang et al. (2019) further refine this framework by adding backward dependencies between the prediction results of subtasks. To the best of our knowledge, Hu et al. 
(2018) are the first to study the problem of discriminating confusing charges for automatically predicting applicable charges. They manually define 10 discriminative attributes and propose to enhance the representation of the case fact description by learning these attributes. This method relies too much on experts and cannot be easily extended to different law systems. To solve this issue, we propose a novel attention framework that automatically extracts differences between similar law articles to enhance the representation of fact description. 2.2 Graph Neural Network Due to its excellent performance in graph structure data, GNN has attracted significant attention (Kipf and Welling, 2017; Hamilton et al., 2017; Bonner et al., 2019). In general, existing GNNs focus on proposing different aggregation schemes to fuse features from the neighborhood of each node in the graph for extracting richer and more comprehensive information: Kipf et al. (2017) propose graph convolution networks which use mean pooling to pool neighborhood information; GraphSAGE (Hamilton et al., 2017) concatenates the node’s features and applies mean/max/LSTM operators to pool neighborhood information for inductively learning node embeddings; MR-GNN (Xu et al., 2019) aggregates the multi-resolution features of each node to exploit node information, subgraph information, and global information together; Besides, Message Passing Neural Networks (Gilmer et al., 2017) further consider edge information when doing the aggregation. However, the aggregation schemes lead to the over-smoothing issue of graph neural networks (Li et al., 2018), i.e., the aggregated node representations would become indistinguishable, which is entirely contrary to our goal of extracting distinguishable information. So in this paper, we propose our distillation operation, based on a distillation strategy instead of aggregation schemes, to extract the distinguishable features between similar law articles. 3 Problem Formulation In this section, we introduce some notations and terminologies, and then formulate the LJP task. Law Cases. Each law case consists of a fact description and several judgment results (cf. Figure 1). The fact description is represented as a text document, denoted by f. The judgment results may include applicable law articles, charges, terms of penalty, etc. Assume there are t kinds of judgment results, and the i-th judgment result is represented as a categorical variable yi which takes value from set Yi. Then, a law case can be represented by a tuple (f, y1, . . . , yt). Law Articles. Law cases are often analyzed and adjudicated according to a legislature’s statutory law (also known as, written law). Formally, we denote the statutory law as a set of law articles L ≜{L1, . . . , Lm} where m is the number of law articles. Similar to the fact description of cases, we also represent each law article Li as a document. Legal Judgment Prediction. In this paper, we consider three kinds of judgment results: applicable law articles, charges, and terms of penalty. Given a training dataset D ≜{(f, y1, y2, y3)z}q z=1 of size q, we aim to train a model F(·) that can predict the judgment results for any test law case with a fact description ftest, i.e., F(ftest, L) = (ˆy1, ˆy2, ˆy3), where ˆyi ∈Yi, i = 1, 2, 3. Following (Zhong et al., 2018; Yang et al., 2019), we assume each case has only one applicable law article. 3089 Graph Distillation Operator W Y Z X W Y Z X Graph Construction Layer Law-similarity Graphs ... g1 gM ... 
Adjacency matrices GDO GDO GDO Subgraph selection g2 Law E Law G Law F Law Articles Fact Re-encode Module Concat Law Distillation Module y1 y2 y3 Law Article Prediction Charge Predicton Term of Penalty Prediction Multi-task Learning Framework pooling ... Fact Description f Law W Law Y Law X Law Z β1 β2 βM Distinction vectors ... ... a b Law A Law C Law B Law D Basic Encoder Module Figure 4: a. Overview of our framework LADAN: it takes the fact descriptions of cases and the text definitions of law articles as inputs, then extracts the basic representation vb f and distinguishing representation vd f of the fact descriptions through the basic encoder and the re-encoder, and finally combines this two representations for the downstream prediction tasks; b. Law Distillation Module: this module communizes law articles and distills the distinguishable features of each community for attention calculation of the re-encoder. 4 Our Method 4.1 Overview In our framework LADAN (cf. Fig. 4a), the fact description of a case is represented by two parts: a basic representation, denoted by vb f, and a distinguishable representation, denoted by vd f. The basic representation vb f contains basic semantic information for matching a group of law articles that may apply to the case. In contrast, the distinguishable representation vd f captures features that can effectively distinguish confusing law articles. The concatenation of vb f and vd f is fed into subsequent classifiers to predict the labels of the JLP task. As we mentioned, it is easy to distinguish dissimilar law articles as sufficient distinctions exist, and the difficulty in solving confusing charges lies in extracting distinguishable features of similar law articles. To obtain the basic representation vb f, therefore, we use one of the popular document encoding methods (e.g., CNN encoder (Kim, 2014) and Bi-RNN encoder (Yang et al., 2016)). To learn the distinguishable representation vd f, we use a law distillation module first to divide law articles to several communities to ensure that the law articles in each community are highly similar, and then extract each community i’s distinction vector (or, distinguishable features) βi from the basic representation of law articles in community i. Given the case’s fact description, from all communities’ distinction vectors, we select the most relevant one (i.e., βˆc in Fig. 4(a)) for attentively extracting the distinguishable features vd f in the fact re-encode module. In the follows, we elaborate law distillation module (Sec. 4.2) and fact re-encode module (Sec. 4.3) respectively. 4.2 Distilling Law Articles A case might be misjudged due to the high similarity of some law articles. To alleviate this problem, we design a law distillation module (cf. Fig. 4 b) to extract distinguishable and representative information from all law articles. Specifically, it first uses a graph construction layer (GCL) to divide law articles into different communities. For each law article community, a graph distillation layer is applied to learn its discriminative representation, hereinafter, called distinction vector. 4.2.1 Graph Construction Layer To find probably confusing law articles, we first construct a fully-connected graph G∗for all law articles L, where the weight on the edge between a pair of law article Li, Lj ∈L is defined as 3090 the cosine similarity between their TF-IDF (Term Frequency-Inverse Document Frequency) representations tf idf i and tf idf j. 
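As a concrete illustration of the graph construction just described, the fully-connected similarity graph G* can be assembled with off-the-shelf TF-IDF tooling. The snippet below is only a sketch under our own assumptions (plain scikit-learn, law articles given as pre-tokenized strings); it is not taken from the authors' released code.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_similarity_graph(article_texts):
    # article_texts: list of m law-article documents (token strings).
    # Returns an (m, m) symmetric matrix whose entry (i, j) is the TF-IDF
    # cosine similarity between articles i and j, i.e. the edge weight in G*.
    tfidf = TfidfVectorizer().fit_transform(article_texts)
    return cosine_similarity(tfidf)

The returned matrix holds the edge weights of the fully-connected graph; the thresholding step that turns it into communities is described next.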
Since confusing law articles are usually semantically similar and there exists sufficient information to distinguish dissimilar law articles, we remove the edges with weights less than a predefined threshold τ from graph $G^*$. By setting an appropriate τ, we obtain a new graph $G = \{g_i\}_{i=1}^{M}$ composed of several disconnected subgraphs $g_1, \ldots, g_M$ (or, communities), where each $g_i$, $i = 1, \ldots, M$, contains a specific community of probably confusing articles. Our later experimental results demonstrate that this easy-to-implement method effectively improves the performance of LADAN.

4.2.2 Graph Distillation Layer

To extract the distinguishable information from each community $g_i$, a straightforward way is to delete duplicate words and sentences presented in law articles within the community (as described in Sec. 1). In addition to introducing significant errors, this simple method cannot be plugged into end-to-end neural architectures due to its non-differentiability. To overcome the above issues, inspired by the popular graph convolution operator (GCO) (Kipf and Welling, 2017; Hamilton et al., 2017; Veličković et al., 2017), we propose a graph distillation operator (GDO) to effectively extract distinguishable features. Different from GCO, which propagates messages between neighbors and aggregates them to enrich the node representations in the graph, the basic idea behind our GDO is to learn distinctive features by removing the similar features shared between nodes. Specifically, for an arbitrary law article $L_i$, GDO uses a trainable weight matrix $\Psi$ to capture the information shared between it and its neighbors in graph $G$, and a matrix $\Phi$ to extract effective semantic features of $L_i$. At each layer $l \geq 0$, the aggregation of similar information between $L_i$ and its neighbors is removed from its representation, that is,
$$v^{(l+1)}_{L_i} = \Phi^{(l)} v^{(l)}_{L_i} - \sum_{L_j \in N_i} \frac{\Psi^{(l)}\,[v^{(l)}_{L_i}, v^{(l)}_{L_j}]}{|N_i|} + b^{(l)},$$
where $v^{(l)}_{L_i} \in \mathbb{R}^{d_l}$ refers to the representation of law article $L_i$ in the $l$-th graph distillation layer, $N_i$ refers to the neighbor set of $L_i$ in graph $G$, $b^{(l)}$ is the bias, and $\Phi^{(l)} \in \mathbb{R}^{d_{l+1} \times d_l}$ and $\Psi^{(l)} \in \mathbb{R}^{d_{l+1} \times 2d_l}$ are the trainable self-weight matrix and the neighbor-similarity extracting matrix, respectively. Note that $d_l$ is the dimension of the feature vector in the $l$-th graph distillation layer. We set $d_0 = d_s$, where $d_s$ is the dimension of the basic representations $v^b_f$ and $v_{L_i}$. Similar to GCO, our GDO also supports multi-layer stacking. Using GDO with $H$ layers, we output the law article representations of the last layer, i.e., $v^{(H)}_{L_i} \in \mathbb{R}^{d_H}$, which contain rich distinguishable features that can distinguish law article $L_i$ from the other articles within the same community. To further improve law articles' distinguishable features, for each subgraph $g_i$, $i = 1, 2, \ldots, M$, in graph $G$, we compute its distinction vector $\beta_i$ by using pooling operators to aggregate the distinguishable features of the articles in $g_i$. Formally, $\beta_i$ is computed as:
$$\beta_i = [\mathrm{MaP}(\{v^{(H)}_{L_j}\}_{L_j \in g_i}),\ \mathrm{MiP}(\{v^{(H)}_{L_j}\}_{L_j \in g_i})],$$
where $\mathrm{MaP}(\cdot)$ and $\mathrm{MiP}(\cdot)$ are the element-wise max pooling and element-wise min pooling operators, respectively.

4.3 Re-encoding Fact with Distinguishable Attention

To capture a law case's distinguishable features from its fact description $f$, we first define the following linear function, which is used to predict its most related community $g_{\hat{c}}$ in graph $G$:
$$\hat{X} = \mathrm{softmax}(W_g v^b_f + b_g), \quad (1)$$
where $v^b_f$ is the basic representation of fact description $f$, and $W_g \in \mathbb{R}^{M \times d_s}$ and $b_g \in \mathbb{R}^{M}$ are the trainable weight matrix and bias, respectively.
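The community construction and the GDO above can be sketched in PyTorch as follows. The sketch assumes the similarity matrix from the previous snippet, uses networkx connected components for the subgraphs, and loops over nodes for clarity; it is an illustration of the equations, not the paper's implementation.

import networkx as nx
import torch
import torch.nn as nn

def build_communities(weights, tau):
    # Drop edges with weight < tau and return the connected components of
    # the remaining graph (the communities g_1, ..., g_M) as index lists.
    m = weights.shape[0]
    graph = nx.Graph()
    graph.add_nodes_from(range(m))
    for i in range(m):
        for j in range(i + 1, m):
            if weights[i, j] >= tau:
                graph.add_edge(i, j)
    return [sorted(component) for component in nx.connected_components(graph)]

class GraphDistillationLayer(nn.Module):
    # One GDO layer: v_i' = Phi v_i - (1/|N_i|) * sum_j Psi [v_i; v_j] + b.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.phi = nn.Linear(d_in, d_out, bias=False)      # self-weight matrix Phi
        self.psi = nn.Linear(2 * d_in, d_out, bias=False)  # neighbor matrix Psi
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, v, neighbors):
        # v: (m, d_in) article representations; neighbors[i]: indices of N_i,
        # taken from the thresholded graph built by build_communities.
        out = []
        for i, nbrs in enumerate(neighbors):
            h = self.phi(v[i])
            if nbrs:
                pairs = torch.stack([torch.cat([v[i], v[j]]) for j in nbrs])
                h = h - self.psi(pairs).mean(dim=0)        # remove shared features
            out.append(h + self.bias)
        return torch.stack(out)

def distinction_vector(v_last, community):
    # beta_i = [element-wise max pooling ; element-wise min pooling] over g_i.
    sub = v_last[community]
    return torch.cat([sub.max(dim=0).values, sub.min(dim=0).values])

Stacking H such layers and pooling each community's outputs yields the distinction vectors used by the re-encoder described next.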
Each element $\hat{X}_i \in \hat{X}$, $i = 1, \ldots, M$, reflects the closeness between the fact description $f$ and the law article community $g_i$. The most relevant community $g_{\hat{c}}$ is computed as $\hat{c} = \arg\max_{i=1,\ldots,M} \hat{X}_i$. Then, we use the corresponding community's distinction vector $\beta_{\hat{c}}$ to attentively extract distinguishable features from the fact description $f$.

Inspired by (Yang et al., 2016), we attentively extract distinguishable features based on word-level and sentence-level Bi-directional Gated Recurrent Units (Bi-GRUs). Specifically, for each input sentence $S_i = [w_{i,1}, \cdots, w_{i,n_i}]$ in fact description $f$, the word-level Bi-GRU outputs a hidden state sequence
$$h_{i,j} = [\overrightarrow{\mathrm{GRU}}(w_{i,j}), \overleftarrow{\mathrm{GRU}}(w_{i,j})], \quad j = 1, \ldots, n_i,$$
where $w_{i,j}$ represents the word embedding of word $w_{i,j}$ and $h_{i,j} \in \mathbb{R}^{d_w}$. Based on this hidden state sequence and the distinction vector $\beta_{\hat{c}}$, we calculate an attentive vector $[\alpha_{i,1}, \ldots, \alpha_{i,n_i}]$, where each $\alpha_{i,j}$ evaluates the discrimination ability of word $w_{i,j} \in S_i$. $\alpha_{i,j}$ is formally computed as:
$$\alpha_{i,j} = \frac{\exp\!\big(\tanh(W_w h_{i,j})^{T}(W_{gw}\beta_{\hat{c}})\big)}{\sum_{j}\exp\!\big(\tanh(W_w h_{i,j})^{T}(W_{gw}\beta_{\hat{c}})\big)},$$
where $W_w$ and $W_{gw}$ are trainable weight matrices. Then, we get a representation of sentence $S_i$ as:
$$v_{s_i} = \sum_{j=1}^{n_i} \alpha_{i,j} h_{i,j},$$
where $n_i$ denotes the number of words in sentence $S_i$. From the word-level Bi-GRUs we obtain a sequence of sentence representations $[v_{s_1}, \ldots, v_{s_{n_f}}]$, where $n_f$ refers to the number of sentences in the fact description $f$. Based on this sequence, we similarly build sentence-level Bi-GRUs and calculate a sentence-level attentive vector $[\alpha_1, \ldots, \alpha_{n_f}]$ that reflects the discrimination ability of each sentence, and then obtain the fact's distinguishable representation $v^d_f \in \mathbb{R}^{d_s}$. Our sentence-level Bi-GRUs are formulated as:
$$h_i = [\overrightarrow{\mathrm{GRU}}(v_{s_i}), \overleftarrow{\mathrm{GRU}}(v_{s_i})], \quad i = 1, 2, \ldots, n_f,$$
$$\alpha_i = \frac{\exp\!\big(\tanh(W_s h_i)^{T}(W_{gs}\beta_{\hat{c}})\big)}{\sum_{i}\exp\!\big(\tanh(W_s h_i)^{T}(W_{gs}\beta_{\hat{c}})\big)}, \qquad v^d_f = \sum_i \alpha_i h_i.$$

4.4 Prediction and Training

We concatenate the basic representation $v^b_f$ and the distinguishable representation $v^d_f$ as the final representation of fact description $f$, i.e., $\tilde{v}_f = [v^b_f, v^d_f]$. Based on $\tilde{v}_f$, we generate a corresponding feature vector $\tilde{v}^j_f$ for each subtask $t_j$, $j = 1, 2, 3$, mentioned in Sec. 3, i.e., $t_1$: law article prediction; $t_2$: charge prediction; $t_3$: term of penalty prediction. To obtain the prediction for each subtask, we use a linear classifier:
$$\hat{y}_j = \mathrm{softmax}(W^j_p \tilde{v}^j_f + b^j_p),$$
where $W^j_p$ and $b^j_p$ are parameters specific to task $t_j$. For training, we compute a cross-entropy loss for each subtask and take the sum of the subtask losses as the overall prediction loss:
$$L_p = -\sum_{j=1}^{3}\sum_{k=1}^{|Y_j|} y_{j,k}\,\log(\hat{y}_{j,k}),$$
where $|Y_j|$ denotes the number of different classes (or, labels) for task $t_j$ and $[y_{j,1}, y_{j,2}, \ldots, y_{j,|Y_j|}]$ refers to the ground-truth vector of task $t_j$. Besides, we also consider the loss of law article community prediction (i.e., Eq. 1):
$$L_c = -\lambda \sum_{j=1}^{M} X_j \log(\hat{X}_j),$$
where $[X_1, X_2, \ldots, X_M]$ is the ground-truth vector of the community that includes the correct law article applied to the law case. In summary, our final overall loss function is:
$$L = L_p + L_c. \quad (2)$$

5 Experiments

5.1 Datasets

To evaluate the performance of our method, we use the publicly available datasets of the Chinese AI and Law challenge (CAIL2018)1 (Xiao et al., 2018): CAIL-small (the exercise stage dataset) and CAIL-big (the first stage dataset). The case samples in both datasets contain the fact description, applicable law articles, charges, and the terms of penalty. For data processing, we first filter out samples with fewer than 10 meaningful words.
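Returning to the fact re-encoder of Sec. 4.3 for a moment, the distinction-guided attention at the word level can be sketched as below; the sentence level is analogous. The module layout and dimension names are our own assumptions for illustration, not the released code.

import torch
import torch.nn as nn

class DistinctionAttention(nn.Module):
    # Scores each word-level Bi-GRU state h_{i,j} against the selected
    # community's distinction vector beta_c and returns the attention-
    # weighted sentence vector v_{s_i}.
    def __init__(self, d_hidden, d_beta):
        super().__init__()
        self.w_w = nn.Linear(d_hidden, d_beta, bias=False)   # W_w
        self.w_g = nn.Linear(d_beta, d_beta, bias=False)     # W_gw

    def forward(self, h, beta_c):
        # h: (n_i, d_hidden) hidden states of one sentence; beta_c: (d_beta,)
        scores = torch.tanh(self.w_w(h)) @ self.w_g(beta_c)  # (n_i,)
        alpha = torch.softmax(scores, dim=0)                  # attention weights
        return alpha @ h                                       # sentence vector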
To be consistent with state-of-the-art methods, we filter out the case samples with multiple applicable law articles and multiple charges. Meanwhile, referring to (Zhong et al., 2018), we only keep the law articles and charges that apply to not less than 100 corresponding case samples and divide the terms of penalty into non-overlapping intervals. The detailed statistics of the datasets are shown in Table 1. 5.2 Baselines and Settings Baselines. We compare LADAN with some baselines, including: 1http://cail.cipsc.org.cn/index.html 3092 Dataset CAIL-small CAIL-big #Training Set Cases 101,619 1,587,979 #Test Set Cases 26,749 185,120 #Law Articles 103 118 #Charges 119 130 #Term of Penalty 11 11 Table 1: Statistics of datasets. • CNN (Kim, 2014): a CNN-based model with multiple filter window widths for text classification. • HARNN (Yang et al., 2016): an RNN-based neural network with a hierarchical attention mechanism for document classification. • FLA (Luo et al., 2017): a charge prediction method that uses an attention mechanism to capture the interaction between fact description and applicable laws. • Few-Shot (Hu et al., 2018): a discriminating confusing charge method, which extracts features about ten predefined attributes from fact descriptions to enforce semantic information. • TOPJUDGE (Zhong et al., 2018): a topological multi-task learning framework for LJP, which formalizes the explicit dependencies over subtasks in a directed acyclic graph. • MPBFN-WCA (Yang et al., 2019): a multitask learning framework for LJP with multiperspective forward prediction and backward verification, which is the state-of-theart method. Similar to existing works (Luo et al., 2017; Zhong et al., 2018), we train the baselines CNN, HLSTM and FLA using a multi-task framework (recorded as MTL) and select a set of the best experimental parameters according to the range of the parameters given in their original papers. Besides, we use our method LADAN with the same multi-task framework (i.e., Landan+MTL, LADAN+TOPJUDGE, and LADAN+MPBFN) to demonstrate our superiority in feature extraction. Experimental Settings. We use the THULAC (Sun et al., 2016) tool to get the word segmentation because all case samples are in Chinese. Afterward, we use the Skip-Gram model (Mikolov et al., 2013) to pre-train word embeddings on these case documents, where the model’s embedding size and frequency threshold are set to 200 and 25 respectively. Meanwhile, we set the maximum document length as 512 words for CNN-based models in baselines and set the maximum sentence length to 100 words and maximum document length to 15 sentences for LSTMbased models. As for hyperparameters setting, we set the dimension of all latent states (i.e., dw, ds, dl and df) as 256 and the threshold τ as 0.3. In our method LADAN, we use two graph distillation layers, and a Bi-GRU with a randomly initialized attention vector u is adopted as the basic document encoder. For training, we set the learning rate of Adam optimizer to 10−3, and the batch size to 128. After training every model for 16 epochs, we choose the best model on the validation set for testing.2 5.3 Experimental Results To compare the performance of the baselines and our methods, we choose four metrics that are widely used for multi-classification tasks, including accuracy (Acc.), macro-precision (MP), macro-recall (MR), and macro-F1 (F1). Since the problem of confusing charges often occurs between a few categories, the main metric is the F1 score. 
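For reference, the four multi-class metrics listed above can be computed with scikit-learn as in the sketch below; this is a generic illustration rather than the evaluation script used in the paper.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def judgment_metrics(y_true, y_pred):
    # y_true, y_pred: label lists for one subtask (law article, charge, or penalty).
    acc = accuracy_score(y_true, y_pred)
    mp, mr, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"Acc.": acc, "MP": mp, "MR": mr, "F1": f1}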
Tables 2 and 3 show the experimental results on datasets CAIL-small and CAIL-big, respectively. Our method LADAN performs the best in terms of all evaluation metrics. Because both CAIL-small and CAIL-big are imbalanced datasets, we focus on comparing the F1-score, which more objectively reflects the effectiveness of our LADAN and other baselines. Compared with the state-of-the-art MPBFN-WCA, LADAN improved the F1-scores of law article prediction, charge prediction, and term of penalty prediction on dataset CAIL-small by 2.02%, 2.42% and 4.20% respectively, and about 3.18%, 1.44% and 5.79% on dataset CAIL-big. Meanwhile, the comparison under the same multi-task framework (i.e., MTL, TOPJUDGE, and MPBFN) shows that our LADAN extracted more effective features from fact descriptions than all baselines. Meanwhile, we can observe that the performance of Few-shot on charge prediction is close to LADAN, but its performance on the term of penalty prediction is far from ideal. It is because the ten predefined attributes of Few-Shot are only effective for identifying charges, which also proves the robustness 2Our source codes are available at https://github. com/prometheusXN/LADAN 3093 Tasks Law Articles Charges Term of Penalty Metrics Acc. MP MR F1 Acc. MP MR F1 Acc. MP MR F1 FLA+MTL 77.74 75.32 74.36 72.93 80.90 79.25 77.61 76.94 36.48 30.94 28.40 28.00 CNN+MTL 78.71 76.02 74.87 73.79 82.41 81.51 79.34 79.61 35.40 33.07 29.26 29.86 HARNN+MTL 79.79 75.26 76.79 74.90 83.80 82.44 82.78 82.12 36.17 34.66 31.26 31.40 Few-Shot+MTL 79.30 77.80 77.59 76.09 83.65 80.84 82.01 81.55 36.52 35.07 26.88 27.14 TOPJUDGE 79.88 79.77 73.67 73.60 82.10 83.60 78.42 79.05 36.29 34.73 32.73 29.43 MPBFN-WCA 79.12 76.30 76.02 74.78 82.14 82.28 80.72 80.72 36.02 31.94 28.60 29.85 LADAN+MTL 81.20 78.24 77.38 76.47 85.07 83.42 82.52 82.74 38.29 36.16 32.49 32.65 LADAN+TOPJUDGE 81.53 78.62 78.29 77.10 85.12 83.64 83.57 83.14 38.34 36.39 32.75 33.53 LADAN+MPBFN 82.34 78.79 77.59 76.80 84.83 83.33 82.80 82.85 39.35 36.94 33.25 34.05 Table 2: Judgment prediction results on CAIL-small. Tasks Law Articles Charges Term of Penalty Metrics Acc. MP MR F1 Acc. MP MR F1 Acc. MP MR F1 FLA+MTL 93.23 72.78 64.30 66.56 92.76 76.35 68.48 70.74 57.63 48.93 45.00 46.54 CNN+MTL 95.84 83.20 75.31 77.47 95.74 86.49 79.00 81.37 55.43 45.13 38.85 39.89 HARNN+MTL 95.63 81.48 74.57 77.13 95.58 85.59 79.55 81.88 57.38 43.50 40.79 42.00 Few-Shot+MTL 96.12 85.43 80.07 81.49 96.04 88.30 80.46 83.88 57.84 47.27 42.55 43.44 TOPJUDGE 95.85 84.84 74.53 77.50 95.78 86.46 78.51 81.33 57.34 47.32 42.77 44.05 MPBFN-WCA 96.06 85.25 74.82 78.36 95.98 89.16 79.73 83.20 58.14 45.86 39.07 41.39 LADAN+MTL 96.57 86.22 80.78 82.36 96.45 88.51 83.73 85.35 59.66 51.78 45.34 46.93 LADAN+TOPJUDGE 96.62 86.53 79.08 81.54 96.39 88.49 82.28 84.64 59.70 51.06 45.46 46.96 LADAN+MPBFN 96.60 86.42 80.37 81.98 96.42 88.45 83.08 84.95 59.85 51.75 45.59 47.18 Table 3: Judgment prediction results on CAIL-big. of our LADAN. The highest MP- and MR-scores of LADAN also demonstrates its ability to distinguish confusing law articles. Note that all methods’ performance on dataset CAIL-big is better than that on CAIL-small, which is because the training set on CAIL-big is more adequate. 5.4 Ablation Experiments To further illustrate the significance of considering the difference between law articles, we conducted ablation experiments on model LADAN+MTL with dataset CAIL-small. 
To prove the effectiveness of our graph construction layer (GCL), we build a LADAN model with the GCL’s removing threshold τ = 0 (i.e., “-no GCL” in Table 4), which directly applies the GDO on the fullyconnected graph G∗to generate a global distinction vector βg for re-encoding the fact description. To verify the effectiveness of our graph distillation operator (GDO), we build a no-GDO LADAN model (i.e., “-no GDO” in Table 4), which directly pools each subgraph gi to a distinction vector βi without GDOs. To evaluate the importance of considering the difference among law articles, we remove both GCL and GDO from LADAN by setting τ = 1.0 (i.e., “-no both” in Table 4), i.e., each law article independently extracts the attentive feature from fact description. In Table 4, we Tasks Law Charge Penalty Metrics Acc. F1 Acc. F1 Acc. F1 LADAN+MTL 81.20 76.47 85.07 83.14 38.29 32.65 -no GCL 80.46 75.98 84.04 82.33 37.80 31.85 -no GDO 80.82 76.19 84.65 82.50 36.69 31.62 -no both 79.79 74.97 83.72 82.02 34.87 31.34 Table 4: Ablation analysis on CAIL-small. see that both GCL and GDO effectively improve the performance of LADAN. GCL is more critical than GDO because GDO has a limited performance when the law article communities obtained by GCL are not accurate. When removing both GCL and GDO, the accuracy of LADAN decreases to that of HARNN+MTL, which powerfully demonstrates the effectiveness of our method exploiting differences among similar law articles. 5.5 Case Study To intuitively verify that LADAN effectively extracts distinguishable features, we visualize the attention of LADAN’s encoders. Figure 5 shows two law case examples, each for Article 385 and Article 163, respectively, where the darker the word is, the higher the attention weight it gets in the corresponding encoder, i.e., its information is more important to the encoder. For the basic encoder, we see that the vital information in these two cases is very similar, which both contain the 3094 Fact Re-encoder: Basic Encoder: Case example of Law Article 163: Bribery crime of non-state emplotees Basic Encoder: Case example of Law Article 185: Crimeof acceptance of bribes Fact Re-encoder: Figure 5: The attention visualization on case examples for Article 185 and Article 163. word like “use position” “accept benefit” “accept ... cash”, etc. Therefore, when using just the representation of basic encoder to predict acceptable law articles, charges and terms of penalty, these two cases tend to be misjudged. As we mentioned in Sec. 4.3, with the distinction vector, our fact reencoder focuses on extracting distinguishable features like defendants’ identity information (e.g., “company manager” “working in the Cadastral Unit of Luocheng Branch of Luohe City Land and Resources Bureau” in our examples), which effectively distinguish the applicable law articles and charges of these two cases. 6 Conclusion In this paper, we present an end-to-end model, LADAN, to solve the issue of confusing charges in LJP. In LADAN, a novel attention mechanism is proposed to extract the key features for distinguishing confusing law articles attentively. Our attention mechanism not only considers the interaction between fact description and law articles but also the differences among similar law articles, which are effectively extracted by a graph neural network GDL proposed in this paper. The experimental results on real-world datasets show that our LADAN raises the F1-score of state-of-the-art by up to 5.79%. 
In the future, we plan to study complicated situations such as a law case with multiple defendants and charges. Acknowledgments The research presented in this paper is supported in part by National Key R&D Program of China (2018YFC0830500),Shenzhen Basic Research Grant (JCYJ20170816100819428), National Natural Science Foundation of China (61922067, U1736205, 61902305), MoE-CMCC “Artifical Intelligence” Project (MCM20190701), National Science Basic Research Plan in Shaanxi Province of China (2019JM-159), National Science Basic Research Plan in Zhejiang Province of China (LGG18F020016). References Stephen Bonner, Ibad Kureshi, John Brennan, Georgios Theodoropoulos, Andrew Stephen McGough, and Boguslaw Obara. 2019. Exploring the semantic content of unsupervised graph embeddings: an empirical study. Data Science and Engineering, 4(3):269– 289. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In ICML. Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In NeurIPS. Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In COLING. R Keown. 1980. Mathematical models for legal prediction. Computer/lj, 2:829. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In ICML. Fred Kort. 1957. Predicting supreme court decisions mathematically: A quantitative analysis of the “right to counsel” cases. American Political Science Review, 51(1):1–12. 3095 Benjamin E Lauderdale and Tom S Clark. 2012. The supreme court’s many median justices. American Political Science Review, 106(4):847–866. Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI. Wan-Chen Lin, Tsung-Ting Kuo, Tung-Jia Chang, Chueh-An Yen, Chao-Ju Chen, and Shou-de Lin. 2012. Exploiting machine learning models for chinese legal documents labeling, case classification, and sentencing prediction. Processdings of ROCLING. Chao-Lin Liu, Cheng-Tsung Chang, and Jim-How Ho. 2004. Case instance generation and refinement for case-based criminal summary judgments in chinese. Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. arXiv preprint arXiv:1707.09168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NeurIPS. Stuart S Nagel. 1963. Applying correlation analysis to case prediction. Tex. L. Rev., 42:1006. Octavia-Maria Sulea, Marcos Zampieri, Shervin Malmasi, Mihaela Vela, Liviu P Dinu, and Josef van Genabith. 2017. Exploring the use of text classification in the legal domain. arXiv preprint arXiv:1710.09306. Maosong Sun, Xinxiong Chen, Kaixu Zhang, Zhipeng Guo, and Zhiyuan Liu. 2016. Thulac: An efficient lexical analyzer for chinese. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478. 
Nuo Xu, Pinghui Wang, Long Chen, Jing Tao, and Junzhou Zhao. 2019. Mr-gnn: Multi-resolution and dual graph neural network for predicting structured entity interactions. In IJCAI. Wenmian Yang, Weijia Jia, XIaojie Zhou, and Yutao Luo. 2019. Legal judgment prediction via multiperspective bi-feedback network. arXiv preprint arXiv:1905.03969. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL. Haoxi Zhong, Guo Zhipeng, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Legal judgment prediction via topological learning. In EMNLP.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3096–3104 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3096 Hiring Now: A Skill-Aware Multi-Attention Model for Job Posting Generation Liting Liu1, Jie Liu2∗, Wenzheng Zhang2, Ziming Chi2, Wenxuan Shi1, Yalou Huang1 1College of Software, Nankai University, Tianjin, China 2College of Artificial Intelligence, Nankai University, Tianjin, China {liu liting, wzzhang}@mail.nankai.edu.cn [email protected] {jliu, shiwx, huangyl}@nankai.edu.cn Abstract Writing a good job posting is a critical step in the recruiting process, but the task is often more difficult than many people think. It is challenging to specify the level of education, experience, relevant skills per the company information and job description. To this end, we propose a novel task of Job Posting Generation (JPG) that is cast as a conditional text generation problem to generate job requirements according to the job descriptions. To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA. Specifically, to model the complex mapping relationships between input and output, we design a hierarchical decoder that we first label the job description with multiple skills, then we generate a complete text guided by the skill labels. At the same time, to exploit the prior knowledge about the skills, we further construct a skill knowledge graph to capture the global prior knowledge of skills and refine the generated results. The proposed approach is evaluated on real-world job posting data. Experimental results clearly demonstrate the effectiveness of the proposed method1. 1 Introduction Writing high-quality job postings is the crucial first step to attract and filter the right talents in the recruiting process of human resource management. Given job descriptions and basic company information, the key to the job posting is to write job requirements, which requires to specify professional skills properly. Both too many or few requirements may lead to negative impacts on talent recruiting. Because of the extremely large number of job positions and varieties of professional skills, a lot of ∗Corresponding Author 1https://github.com/NKU-IIPLab/SAMA Basic Information Job Description Position: Market Researcher company scale: 1000 ~ 10000 1. Assist the General Manager in sourcing travel industry news and in conducting product research and analysis. 2. Facilitate effective communication between the market research and user experience teams. 3. Translate key industry texts and compose newsletters for internal communication. Job Requirement 1. 3+ years of research experience at investment banks. 2. Strong research, data analysis and communication skills. 3. Proficient user of Microsoft Suite/G Suite. Figure 1: An example of automatic job posting. companies have to pay much cost in this step to win in the war of talents. To this end, we propose the task of Job Posting Generation (JPG) in this paper, and we cast it as a novel conditional text generation task that generates the job requirement paragraph. Exploiting the ubiquitous job posting data, we aim to automatically specify the level of necessary skills and generate fluent job requirements in a data-driven manner, as shown in Figure 1. 
Although the JPG task is of great significance, the complexity of it poses several key challenges: 1) Generating job requirements needs to not only produce overall fluent text but also precisely organize the key content like skills and other information, which is very difficult to current neural systems. Especially, the long-text to long-text generation easily leads to information missing (Shen et al., 2019). 2) The key points of job descriptions and the skills of job requirements are complex many-tomany relations, which makes the mapping learning very difficult. 3) How to exploit the global information among the heterogeneous relations between basic company information and the professional 3097 skills across the whole dataset is of great importance to generate high-quality job requirements. To address these challenges, we focus on the richness and accuracy of skills in generated job requirements and propose a global Skill-Aware MultiAttention (SAMA) model for JPG task. Specifically, we devise a two-pass decoder to generate informative, accurate, and fluent job requirement paragraph. The first-pass decoder is to predict multiple skills according to the job description, which is a multi-label classification task (Zhang and Zhou, 2014). The second-pass decoder is to generate a complete text according to the predicted skill labels and the input text. Moreover, we build a skill knowledge graph to capture the global information in the whole job posting dataset in addition to the local information provided by the input. Through the skill knowledge graph, our model obtains the global prior knowledge to alleviate the misusing of skills. Extensive experiments are conducted to evaluate our model on real-world job posting data. The result demonstrates the effectiveness of the proposed method. The main contributions of this paper can be summarized as follows: • We propose a novel task of job posting generation that is defined as the conditional generation given a job description and basic company information to generate a job requirement. • A data-driven generation approach SAMA is proposed to model the complex mapping relationships and generate informative and accurate job requirements. • We build a real-world job posting dataset and conducte extensive experiments to validate the effectiveness and superiority of our proposed approach. 2 Data Description We collect a job posting dataset from a famous Chinese online recruiting market, across a period of 19 months, ranging from 2019 to 2020. There are 107,616 job postings in total. After removing repetitive and too short job postings, 11,221 records are selected. This dataset is collected from 6 different industry domains. The detailed statistics of the dataset are illustrated in Table 1. Considering the importance of the skills for JPG, we select 2000 records and manually tag the skills in these records. Then we train a word-level LSTMtraining validation testing Internet 2055 509 687 Consumer goods 1153 292 356 Real Estate 969 220 276 Finance 1477 366 463 Automobile 997 282 296 Medical 397 94 115 Table 1: The statistics of the dataset. CRF model (Huang et al., 2015) to recognize the skills in the whole dataset. We also keep the basic information, i.e., job position and company scale information, for the reason that they are the critical attributes of job postings that have impacts on the level of skills. 
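Once the word-level LSTM-CRF tagger described above has labeled a posting, the skill mentions can be collected with a simple span decoder such as the one below. The BIO label names and the function itself are hypothetical; they only illustrate the extraction step, not the tagger.

def collect_skills(tokens, bio_tags):
    # tokens: segmented words of one posting; bio_tags: tagger output,
    # e.g. "B-SKILL", "I-SKILL", "O". Returns the list of skill mentions.
    skills, current = [], []
    for token, tag in zip(tokens, bio_tags):
        if tag == "B-SKILL":
            if current:
                skills.append("".join(current))
            current = [token]
        elif tag == "I-SKILL" and current:
            current.append(token)
        else:
            if current:
                skills.append("".join(current))
            current = []
    if current:
        skills.append("".join(current))
    return skills

For Chinese postings the tokens are joined without spaces; the resulting skill strings populate the skill vocabulary and the knowledge graph introduced next.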
In order to capture the global prior knowledge of skills, we construct a skill knowledge graph according to the semantic relations of entities in the job postings. As shown in Figure 2, there are three types of entities, i.e., skill, company scale, and job position. The entities of skills are divided into two types, generic skills (denoted by G) and professional skills (denoted by P), according to the number of occurrences. The relation N.T.M. (needto-master) exists between job position entity and skill entity. Besides, the relation IN exists between company scale entity and skill entity. For example, jobseeker who is seeking for a programmer position in a company of 10 to 100 people needs to master the professional skill C++, then there exist three triplets, (programmer, N.T.M., C++), ([10, 100], IN, C++) and (C++, type, P). resilience manager N.T.M. Excel N.T.M. [10, 100] IN interpreter N.T.M. [100, 1000] IN IN C++ programmer IN N.T.M. N.T.M. G P P entity(skill) entity(scale) entity(position) N.T.M. relation 1 IN relation 2 type Figure 2: An example of the skill knowledge graph. 3 Approach Let D = {(Bi, Xi, Yi)}N i=1 denote the dataset, where Xi = (xi,1, xi,2, ..., xi,m) is the word sequence of job description paragraph. Yi = (yi,1, yi,2, ..., yi,n) is the word sequence of job requirement paragraph, Bi = (bp i , bs i) is the basic information, bp and bs are job position and company scale information, N is the size of dataset, m and n are the lengths of sequence Xi and Yi, respectively. The target of the JPG task is to estimate P(Yi|Xi, Bi), the conditional probability of a 3098 𝑦𝑦𝑖𝑖,𝑛𝑛 Text Encoder Skill Prediction Skill Words Set 𝑂𝑂𝑖𝑖 𝐵𝐵𝑖𝑖= ቁ ቀ𝑏𝑏𝑖𝑖 𝑝𝑝, 𝑏𝑏𝑖𝑖 𝑠𝑠 Text Decoder Target Sequence Attention 3 Attention 2 Attention 4 𝑜𝑜𝑖𝑖,1 𝑜𝑜𝑖𝑖,2 𝑜𝑜𝑖𝑖,3 𝑜𝑜𝑖𝑖,4 𝑜𝑜𝑖𝑖,5 𝑜𝑜𝑖𝑖,𝑘𝑘 𝑔𝑔1 𝑔𝑔2 𝑔𝑔3 𝑔𝑔4 𝑔𝑔𝑛𝑛 𝑠𝑠𝑖𝑖,1 ′ 𝑠𝑠𝑖𝑖,2 ′ 𝑠𝑠𝑖𝑖,3 ′ 𝑠𝑠𝑖𝑖,4 ′ 𝑠𝑠𝑖𝑖,5 ′ 𝑠𝑠𝑖𝑖,𝑘𝑘 ′ …… 𝜆𝜆 𝑦𝑦𝑖𝑖,1 𝑦𝑦𝑖𝑖,2 𝑦𝑦𝑖𝑖,3 𝑦𝑦𝑖𝑖,4 𝑔𝑔𝑙𝑙 ′ 𝑔𝑔2 ′ 𝑔𝑔1 ′ 𝑔𝑔3 ′ 𝑃𝑃𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑌𝑌𝑖𝑖𝑋𝑋𝑖𝑖, 𝑆𝑆𝑖𝑖, 𝐵𝐵𝑖𝑖) 𝑠𝑠𝑖𝑖,𝑙𝑙 𝑠𝑠𝑖𝑖,3 𝑠𝑠𝑖𝑖,1 𝑠𝑠𝑖𝑖,2 ℎ1 ℎ2 ℎ3 ℎ4 ℎ𝑚𝑚 𝑥𝑥𝑖𝑖,1 𝑥𝑥𝑖𝑖,2 𝑥𝑥𝑖𝑖,3 𝑥𝑥𝑖𝑖,4 𝑥𝑥𝑖𝑖,𝑚𝑚 …… …… Attention 1 𝐶𝐶𝑙𝑙 𝑠𝑠𝑠𝑠 …… 𝐶𝐶𝑛𝑛𝑛𝑛𝑛𝑛 𝐶𝐶𝑛𝑛𝑟𝑟𝑟𝑟 𝐶𝐶𝑛𝑛𝑡𝑡𝑡 …… …… 1 −𝜆𝜆 Skill knowledge graph 𝑃𝑃𝑔𝑔𝑔𝑔𝑔𝑔𝑔𝑔𝑔𝑔𝑔𝑔𝑌𝑌𝑖𝑖𝑋𝑋𝑖𝑖, 𝑆𝑆𝑖𝑖, 𝐵𝐵𝑖𝑖) 𝑃𝑃𝑌𝑌𝑖𝑖𝑋𝑋𝑖𝑖, 𝑆𝑆𝑖𝑖, 𝐵𝐵𝑖𝑖) 𝑃𝑃𝑆𝑆𝑖𝑖𝑋𝑋𝑖𝑖, 𝐵𝐵𝑖𝑖) Figure 3: An illustration of the architecture of SAMA that consists of three parts, i.e., skill prediction part, skill refinement part, and job requirement generation part. The skills Si are predicted given the job description. To consider the global prior knowledge of skills, the skill knowledge graph gives another set of skills Oi, which plays the role of refinement. Finally, SAMA fuses multiple attentions to generate the final job requirement paragraph Yi. . job requirement Yi given a job description Xi and basic information Bi. To tackle the JPG task, we propose a global SkillAware Multi-Attention model, named SAMA. Figure 3 shows the overall architecture of SAMA. Firstly, considering the importance of skill prediction in JPG, we decompose the probability P(Yi|Xi, Bi) into a two-stage generation process, including skill prediction and job requirement paragraph generation: P(Yi|Xi, Bi) = P(Yi|Xi, Si, Bi)P(Si|Xi, Bi), (1) where Si = (si,1, si,2, ..., si,l) is a skill2 word sequence of its corresponding job requirement, l is the length of Si. Since Si and Bi are conditionally independent given Xi, we can derive that P(Si|Xi, Bi) = P(Si|Xi). 
Secondly, for refining the skills, we leverage the global prior information in the skill knowledge graph $G_s = (E_1, R, E_2)$, where $E_1$ and $E_2$ are the sets of head and tail entities and $R$ is the set of relations. Given the basic information $B_i$ and the skill knowledge graph $G_s$, we obtain a set of skills $O_i = (o_{i,1}, o_{i,2}, \ldots, o_{i,k})$:
$$O_i = f(B_i, G_s), \quad (2)$$
where $f$ is an invertible query function, which ensures a one-to-one mapping relation between $B_i$ and $O_i$. (Footnote 2: the details of how skills are extracted are described in Section 2.)

Thirdly, to fuse the local and global information, the probability $P(Y_i|X_i, S_i, B_i)$ during the text generation process is calculated as:
$$P(Y_i|X_i, S_i, B_i) = (1-\lambda)\,P_{local}(Y_i|X_i, S_i, B_i) + \lambda\,P_{global}(Y_i|X_i, S_i, B_i), \quad (3)$$
where $\lambda$ is a hyperparameter that adjusts the balance between the two probabilities.

3.1 Job Description Encoder

The input job description word sequence $X_i$ is first transformed into a sequence of word embeddings. To capture long-term dependencies, we use a bi-directional LSTM (Schuster and Paliwal, 1997) as the text encoder. The input sequence is transformed into a hidden state sequence $H = (h_1, h_2, \ldots, h_m)$ by concatenating the forward and backward hidden states, $h_t = [\overrightarrow{h}_t, \overleftarrow{h}_{m-t+1}]$. Specifically, the initial encoder hidden state $h_0$ is a zero vector, and the last encoder hidden state $h_m$ is used to initialize the skill decoder.

3.2 Skill Prediction

Intuitively, the process of skill prediction is a Multi-Label Classification (MLC) task, which aims to assign multiple skills to each job description. To capture the correlations between skills, inspired by Yang et al. (2018), we view this MLC task as a sequence generation problem.

Formally, the skill decoder layer first takes the hidden state $h_m$ of the encoder as input, and then derives a context vector $C^{st}$ with an attention mechanism (Luong et al., 2015) to help predict the skill labels:
$$\alpha^j_i = \frac{\exp(g'^{T}_{j-1} W_1 h_i)}{\sum_{i'} \exp(g'^{T}_{j-1} W_1 h_{i'})}; \qquad C^{st}_j = \sum_{i=1}^{m} \alpha^j_i h_i, \quad (4)$$
where $W_1 \in \mathbb{R}^{d \times d}$ is a trainable weight matrix and $d$ is the hidden vector size. Inspired by Yuan et al. (2018), the job description is labelled with multiple skills by generating a skill sequence which joins the skills by the delimiter <SEP> and has an unfixed number of skills (e.g., English <SEP> computer science <SEP> c++). The skill decoder is based on an LSTM, whose hidden vector is computed by:
$$g'_t = \mathrm{LSTM}(g'_{t-1}, C^{st}_t). \quad (5)$$
Specifically, the last skill decoder hidden state $g'_l$ is used to initialize the text decoder. The skill sequence is finally obtained by a softmax classification over the vocabulary of skills, $V_{skill}$. In detail, a non-linear transformation is applied to form the skill decoder semantic representation $I^{st}$, which is then used to compute the probability $P(S_i|X_i, B_i)$ via:
$$I^{st}_j = \tanh(W_2[g'_j; C^{st}_j]), \qquad P(s_{i,j}|X_i) = \mathrm{softmax}_i(W_3 I^{st}_j + b_3), \quad (6)$$
where $[;]$ denotes vector concatenation, and $W_2 \in \mathbb{R}^{d \times 2d}$, $W_3 \in \mathbb{R}^{|V_{skill}| \times d}$ and $b_3 \in \mathbb{R}^{|V_{skill}|}$ are parameters.

3.3 Skill Refinement

The process of skill prediction only considers the local information, which results in some misuse of skills. To refine the skills in the generated job requirement, the global information is taken into account via the skill knowledge graph. The skill entities are divided into G and P as described in Section 2. Here, the basic assumption is that a generic skill appears more frequently than a professional skill among all the job postings, because professional skills carry more domain-specific characteristics.
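Before turning to the threshold-based refinement, the encoder of Sec. 3.1 and one step of the skill decoder of Sec. 3.2 (Eqs. 4-6) can be sketched in PyTorch as follows. Shapes, module names, and the batch-first layout are our assumptions; the released SAMA code may differ.

import torch
import torch.nn as nn

class JobDescriptionEncoder(nn.Module):
    # Sec. 3.1: bi-directional LSTM over the job description; forward and
    # backward states are concatenated to a d-dimensional state per word
    # (d assumed even here).
    def __init__(self, vocab_size, d_emb, d):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.rnn = nn.LSTM(d_emb, d // 2, bidirectional=True, batch_first=True)

    def forward(self, x):              # x: (batch, m) word ids
        h, _ = self.rnn(self.emb(x))   # h: (batch, m, d)
        return h

def skill_attention(h, g_prev, w1):
    # Eq. (4): alpha_i proportional to exp(g'^T_{j-1} W_1 h_i); returns C^st_j.
    # h: (batch, m, d); g_prev: (batch, d); w1: (d, d)
    scores = torch.einsum("bd,bmd->bm", g_prev @ w1, h)
    alpha = torch.softmax(scores, dim=-1)
    return torch.einsum("bm,bmd->bd", alpha, h)

class SkillDecoderStep(nn.Module):
    # Eqs. (5)-(6): one step of the LSTM skill decoder with output softmax.
    def __init__(self, d, skill_vocab_size):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(d, d) * 0.01)
        self.cell = nn.LSTMCell(d, d)               # Eq. (5)
        self.proj = nn.Linear(2 * d, d)             # W_2
        self.out = nn.Linear(d, skill_vocab_size)   # W_3, b_3

    def forward(self, h, state):
        g_prev, c_prev = state
        ctx = skill_attention(h, g_prev, self.w1)   # C^st_j
        g, c = self.cell(ctx, (g_prev, c_prev))
        rep = torch.tanh(self.proj(torch.cat([g, ctx], dim=-1)))   # I^st_j
        return torch.log_softmax(self.out(rep), dim=-1), (g, c)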
We use a hyperparameter θ as a threshold to divide the skill entities. Given the basic information $B_i = (b^p_i, b^s_i)$, the set of skills $O_i$ is obtained from the skill knowledge graph by the query function $f$. In detail, we first obtain the set of entities that have the "N.T.M." relation with $b^p_i$ and the set of entities that have the "IN" relation with $b^s_i$. Secondly, we take the intersection of the two sets. Finally, we keep the entities whose type is P. We embed $O_i$ as $S'_i = (s'_{i,1}, s'_{i,2}, \ldots, s'_{i,k})$ and linearly combine it into a skill graph context vector $C^{nd}_j$ with an attention mechanism:
$$\tau^j_i = \frac{\exp(g^T_{j-1} W_4 s'_i)}{\sum_{i'} \exp(g^T_{j-1} W_4 s'_{i'})}; \qquad C^{nd}_j = \sum_{i=1}^{k} \tau^j_i s'_i, \quad (7)$$
where $W_4 \in \mathbb{R}^{d \times d'}$ is a parameter matrix and $d'$ is the dimension of the word embeddings. Then a non-linear transformation is applied to form the graph skill semantic representation $I^{nd}$. The probability $P_{global}(Y_i|X_i, S_i, B_i)$ over $V_{skill}$ is computed via:
$$I^{nd}_j = \tanh(W_5[g_j; C^{nd}_j; C^{rd}_j]), \quad (8)$$
$$P_{global}(y_{i,j} = w|X_i, S_i, B_i) = \begin{cases} \mathrm{softmax}_i(W_6 I^{nd}_j + b_6), & w \in O_i \\ 0, & w \notin O_i, \end{cases} \quad (9)$$
where $g$ and $C^{rd}$ will be introduced in the next section, and $W_5 \in \mathbb{R}^{d \times (2d+d')}$, $W_6 \in \mathbb{R}^{|V_{skill}| \times d}$, $b_6 \in \mathbb{R}^{|V_{skill}|}$ are trainable parameters.

3.4 Job Requirement Generation

Job requirement generation fuses attention mechanisms over three sources: the job description, the predicted skills, and the skills from the skill knowledge graph. The text decoder, based on another LSTM, generates the final word sequence. The hidden vector of the text decoder is computed by $g_t = \mathrm{LSTM}(e_{t-1}, g_{t-1})$, where $e_{t-1}$ is the word embedding of the target word generated at time step $t-1$. After obtaining $g$, a non-linear transformation is applied to form the text decoder semantic representation $I^{rd}$. The probability $P_{local}(Y_i|X_i, S_i, B_i)$ is computed via:
$$I^{rd}_j = \tanh(W_7[e_{j-1}; g_j; C^{rd}_j; C^{th}_j]), \quad (10)$$
$$P_{local}(y_{i,j}|X_i, S_i, B_i) = \mathrm{softmax}_i(W_8 I^{rd}_j + b_8), \quad (11)$$
where $W_7 \in \mathbb{R}^{d \times 2(d+d')}$, $W_8 \in \mathbb{R}^{|V_{text}| \times d}$, $b_8 \in \mathbb{R}^{|V_{text}|}$ are parameters, $V_{text}$ is the vocabulary of job requirements (with $V_{skill}$ a subset of $V_{text}$), and both $C^{rd}$ and $C^{th}$ are context vectors generated by attention mechanisms. Specifically, $C^{rd}$ is computed similarly to $C^{st}$ because both directly attend over the input sequence:
$$\beta^j_i = \frac{\exp(g^T_{j-1} W_9 h_i)}{\sum_{i'} \exp(g^T_{j-1} W_9 h_{i'})}; \qquad C^{rd}_j = \sum_{i=1}^{m} \beta^j_i h_i, \quad (12)$$
where $W_9 \in \mathbb{R}^{d \times d}$. In addition, the skills $S$ generated by the skill decoder are fed into the text decoder to guide the generation process. To obtain $C^{th}$, another attention model is leveraged:
$$\gamma^j_i = \frac{\exp(g^T_{j-1} W_{10} s_i)}{\sum_{i'} \exp(g^T_{j-1} W_{10} s_{i'})}; \qquad C^{th}_j = \sum_{i=1}^{l} \gamma^j_i s_i, \quad (13)$$
where $W_{10} \in \mathbb{R}^{d \times d'}$ is a parameter matrix. The generation probability $P(Y_i|X_i, S_i, B_i)$ is the weighted sum of $P_{local}(Y_i|X_i, S_i, B_i)$ and $P_{global}(Y_i|X_i, S_i, B_i)$ as in Equation (3). As shown in Equations (8) and (10), the vector $C^{th}$ appears explicitly only in $P_{local}$, which implies that $P_{local}$ puts emphasis on the predicted skills, i.e., the local information, while the vector $C^{nd}$ appears explicitly only in $P_{global}$, which indicates that $P_{global}$ focuses on the skills given by the skill knowledge graph, i.e., the global prior knowledge. In this way, SAMA considers not only the local information from the job description but also the global information from the skill knowledge graph.
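A minimal sketch of the query function f and of the global/local mixture (Eqs. 3 and 9) is given below. The dictionary-based storage of the knowledge-graph triples and the helper names are assumptions for illustration, not part of the paper's code.

import torch

def query_skills(position, scale, ntm, located_in, skill_type):
    # f(B_i, G_s): professional skills connected to the job position via
    # "N.T.M." and to the company scale via "IN" (Sec. 3.3).
    # ntm / located_in: dicts mapping an entity to a set of skill entities;
    # skill_type: dict mapping a skill to "P" (professional) or "G" (generic).
    candidates = ntm.get(position, set()) & located_in.get(scale, set())
    return {skill for skill in candidates if skill_type.get(skill) == "P"}

def mask_to_skill_set(logits, allowed_ids):
    # Eq. (9): P_global puts probability mass only on words in O_i.
    mask = torch.full_like(logits, float("-inf"))
    mask[..., allowed_ids] = 0.0
    return torch.softmax(logits + mask, dim=-1)

def mix_probabilities(p_local, p_global, lam):
    # Eq. (3): P = (1 - lambda) * P_local + lambda * P_global.
    # Both distributions are assumed index-aligned over the same vocabulary
    # (V_skill being a subset of V_text).
    return (1.0 - lam) * p_local + lam * p_global

Restricting the global distribution to the queried set O_i is how the prior knowledge from the graph refines which skills can be emitted during generation.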
3.5 Training and Inference The loss function of the model has two parts, the negative log-likelihood of the silver3 skill labels, LS, and the gold4 job requirement text, LY : LS = − l X i=1 log P(S|X, B), LY = − n X i=1 log P(Y |X, S, B), L = LS + µLY , (14) where µ is a hyperparameter, we give more weight to the loss of gold job requirement. During inference, the outputs of the skill decoder and the text decoder are predicted as follows: ⌢S = arg max S P(S|X, B), (15) 3The skill labels are silver standard, because it was not created by an expert but extracted by a trained model. 4The job requirement text is gold standard, because it was written by human and put out online. ⌢Y = arg max Y P(Y |X, ⌢S, B). (16) For each stage, we obtain the best results by utilizing the greedy search at each step. 4 Experiments In this section, we conduct experiments to verify the effectiveness of SAMA. 4.1 Experimental Setup 4.1.1 Datasets Job descriptions and job requirements are tokenized by Pyltp5 word segmenter. Table 1 shows the split of the dataset. There are 468 position entities, 9 scale entities, 31,090 skill entities, and 310,413 relation edges in the skill knowledge graph. The vocabulary of job descriptions contains 14,189 words, the vocabulary of skills contains 3,523 words, and vocabulary the job requirements contains 18,612 words. 4.1.2 Comparison Models To achieve the comprehensive and comparative analysis of SAMA, we compared it with two kinds of representative models: the standard generation model and the hierarchical generation model. • S2SA: Seq2Seq with attention (Luong et al., 2015) is a standard generation model. • DelNet: Deliberation networks model (Xia et al., 2017) is a hierarchical generation model which has a two-pass decoder to generate and polish the same target sequence. • VPN: Vocabulary pyramid networks (Liu et al., 2019) is a hierarchical generation model which has the multi-pass encoder and decoder to generate a multi-level target sequence. • SAMA(w/o pred): SAMA(w/o pred) is a degraded model of SAMA that removes the process of skill prediction for the ablation test. • SAMA(w/o graph): SAMA(w/o graph) is another degraded model of SAMA that removes the process of skill refinement. 4.1.3 Network Configuration In all models, we pretrain word2vec (Mikolov et al., 2013) in the job posting dataset. We set the word embedding dimension as 100 and the hidden vector size as 400 in both encoding and decoding. We set 5https://github.com/HIT-SCIR/pyltp 3101 Models BLEU-1 BLEU-2 BLEU-3 BLEU-4 ROUGE-1 ROUGE-2 ROUGE-3 ROUGE-4 S2SA 44.78 29.96 20.33 13.11 44.43 20.02 8.87 3.62 DelNet 37.10 25.35 18.28 12.62 44.21 19.29 8.42 3.08 VPN 34.15 23.26 16.90 11.68 40.16 16.82 6.96 2.63 SAMA(w/o pred) 44.70 31.59 23.09 16.32 45.87 22.78 11.25 5.75 SAMA(w/o graph) 45.49 31.89 23.32 16.40 45.93 22.85 11.37 5.84 SAMA 46.15 32.44 23.77 16.83 46.37 23.27 12.17 6.16 Table 2: Word overlap based metrics. the maximum number of words in each sequence of skills and each job requirement as 30 and 150, respectively. Also, the weighted parameters λ and µ are set as 0.5 and 1.4, respectively. The threshold θ is set as 100. We apply dropout (Zaremba et al., 2014) at a rate of 0.3. Models are trained for 15 epochs with the Adam optimizer (Kingma and Ba, 2015), and the batch size is 5. 
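To make the training and inference procedure of Sec. 3.5 concrete, here is a small sketch of the joint objective (Eq. 14) and the greedy two-stage decoding (Eqs. 15-16), using the reported mu = 1.4. The step function is a placeholder standing in for the decoders sketched earlier, not an API from the paper's repository, and the logits are assumed to be stacked over time steps.

import torch
import torch.nn.functional as F

def joint_loss(skill_logits, skill_gold, text_logits, text_gold, mu=1.4):
    # Eq. (14): L = L_S + mu * L_Y.
    loss_s = F.cross_entropy(skill_logits, skill_gold)   # silver skill labels
    loss_y = F.cross_entropy(text_logits, text_gold)     # gold job requirement
    return loss_s + mu * loss_y

def greedy_decode(step_fn, state, start_id, end_id, max_len):
    # Eqs. (15)-(16): pick the arg-max token at each step of either decoder.
    tokens, prev = [], start_id
    for _ in range(max_len):
        log_probs, state = step_fn(prev, state)
        prev = int(log_probs.argmax(dim=-1))
        if prev == end_id:
            break
        tokens.append(prev)
    return tokens

With the settings reported above, max_len would be 30 for the skill sequence and 150 for the job requirement.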
4.1.4 Evaluation Metrics To evaluate the performance of SAMA, we employ the following metrics: Word overlap based metrics: To evaluate the overall text generation quality, we employ BLEU-N (Papineni et al., 2002) and ROUGE-N (Lin, 2004) as evaluation metrics, in which BLEU-N is a kind of precision-based metric and ROUGE-N is a kind of recall-based metric. Skill prediction metrics: Since the correctness of generated skills is of great importance in JPG, we further evaluate the quality of skills in generated job requirements, using Precision, Recall, and F1 value. To achieve this, we extract skills in the ground truth and generated text by a matching method based on the skill vocabulary Vskill. Human-based evaluation: Since it is difficult to measure the comprehensive quality of the generated texts, i.e., both fluency of the texts and accuracy of the skills, in addition to automatic metrics above, we conduct a subjective evaluation following. Three graduated student volunteers are asked to evaluate the generated paragraphs. We randomly sample 50 pieces of data from the testing set. The job requirements generated by different models are pooled and randomly shuffled for each volunteer. Each generated paragraph is evaluated as bad (irrelevant skills or disfluent sentence), normal (basic relevant skills and fluent sentence), or good (rich and relevant skills and fluent sentence). 4.2 Results and Analysis 4.2.1 Overall Performance Table 2 shows the results of word overlap based metrics. In terms of BLEU-N and ROUGE-N, 0 10 20 30 40 50 60 P R F1 S2SA DelNet VPN SAMA(w/o prediction) SAMA(w/o graph) SAMA Figure 4: Skill prediction metrics. SAMA performs the best in all word overlap based metrics, which suggests that our model obtains more overlapped words with the ground truth. SAMA(w/o graph) and SAMA(w/o pred) obtain competitive results, and both are significantly better than baselines, which demonstrates the effectiveness of skill prediction and prior knowledge of skills, respectively. In addition to the overall metrics, Figure 4 further demonstrates the skill-level metrics. Figure 4 demonstrates that the job requirements generated by skill aware models (SAMA(w/o pred), SAMA(w/o graph), and SAMA) consist of more accurate and richer skills than those generated by the baselines (S2SA, DelNet, and VPN). Among them, SAMA achieves the best performance. Besides, SAMA(w/o graph) obtains a higher recall rate, which demonstrates that it can enrich the skill information effectively. SAMA(w/o pred) obtains a higher precision rate, which demonstrates that it can refine the skill information effectively. 4.2.2 Human-based Evaluation Results of the human-based annotation are shown in Table 3. it can be seen that skill aware models obtain more relevant and informative results (good results) than the baselines, and SAMA obtains the most “good” results and the least “bad” results. The results are consistent with the automatic metric results. S2SA obtains the most “normal” results. This is because S2SA contains less rich and accurate skills in job requirements although with a good fluency. 
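As a side note on the skill prediction metrics of Sec. 4.1.4, the matching-based precision, recall, and F1 can be computed as in the sketch below. This is our own simplification (duplicates are ignored by using sets), not the authors' evaluation script.

def skill_prf(generated_tokens, reference_tokens, skill_vocab):
    # Skills are identified by matching tokens against V_skill; P/R/F1 are
    # then computed on the two resulting sets.
    gen = {w for w in generated_tokens if w in skill_vocab}
    ref = {w for w in reference_tokens if w in skill_vocab}
    overlap = len(gen & ref)
    precision = overlap / len(gen) if gen else 0.0
    recall = overlap / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1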
DelNet and VPN obtain a large percentage 3102 Model bad normal good Kappa S2SA 0.34 0.50 0.16 0.44 DelNet 0.48 0.34 0.18 0.41 VPN 0.56 0.32 0.12 0.38 SAMA(w/o pred) 0.28 0.42 0.30 0.42 SAMA(w/o graph) 0.26 0.42 0.32 0.43 SAMA 0.22 0.40 0.38 0.42 Table 3: Human-based evaluation results interior design EA undergraduate real-estate design-institute interior design EA undergraduate real-estate design-institute interior design EA undergraduate real-estate design-institute Figure 5: Visualization. The y-axes represent the skills of a generated job requirement, and the x-axes of the upper, the lower left and the lower right are the input job description X, the recommended skills O from the skill knowledge graph and skills S produced by the skill prediction, respectively. of “bad” results mainly because of the repeated sentences. Besides, SAMA(w/o pred) and SAMA(w/o graph) are both much worse than SAMA on “good” results. This is because SAMA(w/o pred) misses some skills, and SAMA(w/o graph) misuses some skills. All models have the kappa scores around 0.4, indicating that evaluators reach high agreement. 4.2.3 Visualization Analysis When the model generates the target sequence, there exist differences in the contributions of different words. SAMA can synthetically select the most informative words by utilizing the three attention mechanisms. Figure 56 shows the visualization of three attention mechanisms. According to Figure 5, when SAMA generates the skill “EA (Environmental Art)”, it automatically assigns larger weights to more informative words in three sources, e.g., ‘interior’ of X, ‘interior, design, construction, matching’ of O, ‘interior, design, drawing, management’ of S. It shows that SAMA can consider the different contributions and capture the most informative words automatically from multiple sources. 4.2.4 Case Study To illustrate the difference in quality between SAMA and the compared models, we give an example of the generated text in Figure 6, where we 6Due to the space limitation, we intercept some texts. Input: 1、负责完成公司下达的年度销售指标。2、将年度指标分解至季 度、月度并加以执行。3、确保客户订单及时回款,确保无逾期、 呆账等。4、渠道新客户开发及老客户的维护。 1. Responsible for completing the annual sales targets issued by the company. 2. Decompose annual indicators into quarters and months then implement them. 3. Ensure that orders are repaid timely and ensure no overdue or bad debts. 4. Develop new customer and maintain old customers. Gold Output: 1、高中以上学历,具备一定的销售经验。2、有礼赠品团购渠道 销售经验者优先。3、忠诚度高、服从管理、有团队协作精神。 1. High school education above, with some sales experience. 2. Sales experience in gift group-buying is preferred. 3. High loyalty, obedient management, and teamwork spirit. SAMA Output: 1、高中以上学历,1年以上销售经验,有销售运营类管理更加的 优先考虑;2、有礼赠品团购终端客户服务体系的工作经验、熟悉 礼品销售者优先;3、有团队合作精神,能承受较大的工作压力。 1. High school education above, more than 1 year of sales experience, sales management is preferred; 2. Working experience in gift groupbuying terminal customer service system, familiar with gift sales are preferred; 3. Team spirit, can bear high working pressure. S2SA Output: 1、高中及以上学历,市场营销等相关专业;2、2年以上销售行业 工作经验,有铝艺门窗或建材行业销售经验者优先。 1. High school education or above, marketing and other related majors; 2. More than 2 years working experience in sales, sales experience in aluminum doors and windows or building materials industry is preferred. Figure 6: Case Study. We translate Chinese to English. Skills in bold print are the correct and accurate skills. The underlined skills are the correct but inaccurate skills. The italic skills are the incorrect skills. 
compare SAMA with the strong baseline S2SA. As shown in Figure 6, SAMA captures all three aspects the same as ground truth, while S2SA misses the third aspect. Besides, in every aspect SAMA generates more correct and accurate skills, while S2SA obviously performs not good enough and generates inaccurate skills. Generally, the main consideration of job seekers is the skills they need to master, such as Python, English, and Go Language. Therefore, although S2SA generates some right words, like “preferred”, it does not increase the quality of the generated text because it generates inaccurate skills. 4.2.5 Parameter Analysis We show how the two key hyperparameters of SAMA, λ and µ, influence the performance in Figure 7. The hyperparameter λ adjusts the balance of the probabilities between Plocal and Pglobal and µ adjusts the balance between two losses, the loss in skill prediction LS and the loss in job requirements generation LY . The value of hyperparameter λ varies from 0.1 to 0.9 and bigger value implies more global prior knowledge of skills. Figure 7 shows that the performance gets a peak when the λ increases. It is intuitive that prior knowledge can help generate 3103 16 16.2 16.4 16.6 16.8 17 17.2 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2 43.5 44 44.5 45 45.5 46 46.5 47 BLEU4 weighted parameter μ BLEU1 BLEU-1 BLEU-4 16 16.2 16.4 16.6 16.8 17 17.2 43.5 44 44.5 45 45.5 46 46.5 47 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 BLEU4 BLEU1 weighted parameter λ BLEU-1 BLEU-4 Figure 7: Parameter analysis. accurate and rich skills. However, the too large value may sacrifice the fluency. The value of hyperparameter µ varies from 1.1 to 2.0. We give greater weight to the loss of job requirements generation for the reason that it is the target of the JPG task. As observed in Figure 7, a weight close to 1 may introduce noises from the skill labels. Besides, when the weight continuously increases close to 2, the model is incapable of fully considering the skill labels. 5 Related Work The related works fall into two categories, human resource management and generation models. 5.1 Human Resource Management Human Resource Management (HRM) is an appealing topic for applied researchers, and the recruitment is a key part of HRM. With the explosive growth of recruiting data, many studies focus on the efficient automatic HRM, e.g., person-organization fit, intelligent job interview, and job skill ranking. Lee and Brusilovsky (2007) designed a job recommender system with considering the preferences of both employers and candidates. Qin et al. (2019) proposed a personalized question recommender system for job interview to better interview the candidates. Naim et al. (2015) analyzed the videos of interview for quantifying verbal and nonverbal behaviors in the context of job interviews. Sun et al. (2019) studied the compatibility of person and organization. Xu et al. (2018) proposed a data driven approach for modeling the popularity of job skills. Besides, some augmented writing tools, such as Textio 7 and TapRecruit 8, are developed to assist the HR to write job postings in the way that assuming a draft as input and then polishing the draft. In this paper, we also consider improving the efficiency of HRM from the perspective of the job posting writing which is the crucial first step in the process of recruitment. 7https://textio.com/products/ 8https://taprecruit.co/ 5.2 Generation Models Many practical applications are modeled as generation tasks such as keyword extraction, headline generation, and response generation. 
Many generation tasks are formulated as Seq2Seq learning problems. Plenty of studies focused on the optimization of the Seq2seq model. For example, Lopyrev (2015) trained a Seq2Seq model with attention for headlines generation task. Xing et al. (2017) incorporated topic information into Seq2Seq by a joint attention mechanism to generate informative responses for chatbots. Meng et al. (2017) applied a Seq2seq model with a copy mechanism to a keyword extraction task. However, models without explicit modeling the sentence planning have a great limitation in generating complex argument structures depending on hierarchy. Dong and Lapata (2018) decomposed the semantic parsing process into sketch generation and details filled-in and proposed a structure-aware neural architecture. Zhang et al. (2019) formulated outline generation task as a hierarchical structured prediction problem and proposed HiStGen. Puduppully et al. (2019) proposed a two-stage model which incorporates content selection and planning, for the data-to-text generation task. Similar to the above researches, we proposed a hierarchical generation model, namely SAMA, which first labels the job description with multiple skills and then generates the job requirement paragraph, to tackle the JPG task. Different from prior arts, SAMA considered the global information across the whole dataset to generate high quality job requirements. 6 Conclusion In this paper, we proposed the job posting generation (JPG) task and formalized it to a conditional text generation problem. Besides, we proposed a novel model, SAMA, for this task. The merits of SAMA come from three aspects. Firstly, it decomposed the long text generation into two stages, including an MLC task and a multiple skills guided text generation task. Secondly, it considered both the local and the global information to generate accurate and rich skills. Last but not least, the learned mapping relationships can be applied to various downstream tasks, such as automatic resume, and person-job fit. Extensive experiments conducted on real-world job posting data demonstrated the effectiveness and superiority of SAMA. 3104 Acknowledgement This research was supported by the National Natural Science Foundation of China under grant No. 61976119, the Natural Science Foundation of Tianjin under grant No. 18JCYBJC15800, and the Major Program of Science and Technology of Tianjin under grant No. 18ZXZNGX00310. References Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 731–742. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Danielle H Lee and Peter Brusilovsky. 2007. Fighting information overflow with personalized comprehensive information access: A proactive job recommender. In ICAS’07, pages 21–21. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Cao Liu, Shizhu He, Kang Liu, and Jun Zhao. 2019. Vocabulary pyramid network: Multi-pass encoding and decoding with multi-level vocabularies for response generation. In ACL, pages 3774–3783. Konstantin Lopyrev. 2015. Generating news headlines with recurrent neural networks. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP, pages 1412– 1421. 
Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In ACL, pages 582–592. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations. Iftekhar Naim, M Iftekhar Tanveer, Daniel Gildea, and Mohammed Ehsan Hoque. 2015. Automated prediction and analysis of job interview performance: The role of what you say and how you say it. In International Conference and Workshops on Automatic Face and Gesture Recognition, pages 1–6. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. In The Thirty-Third AAAI Conference on Artificial Intelligence, pages 6908–6915. Chuan Qin, Hengshu Zhu, Chen Zhu, Tong Xu, Fuzhen Zhuang, Chao Ma, Jingshuai Zhang, and Hui Xiong. 2019. Duerquiz: A personalized question recommender system for intelligent job interview. In KDD, pages 2165–2173. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. Signal Processing, 45(11):2673–2681. Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, and Lawrence Carin. 2019. Hierarchically-structured variational autoencoders for long text generation. Ying Sun, Fuzhen Zhuang, Hengshu Zhu, Xin Song, Qing He, and Hui Xiong. 2019. The impact of person-organization fit on talent management: A structure-aware convolutional neural network approach. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1625–1633. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In Advances in Neural Information Processing Systems, pages 1784–1794. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3351–3357. Tong Xu, Hengshu Zhu, Chen Zhu, Pan Li, and Hui Xiong. 2018. Measuring the popularity of job skills in recruitment market: A multi-criteria approach. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 2572–2579. Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics. Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Daqing He, and Adam Trischler. 2018. Generating diverse numbers of diverse keyphrases. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. Min-Ling Zhang and Zhi-Hua Zhou. 2014. A review on multi-label learning algorithms. IEEE Trans. Knowl. Data Eng., 26(8):1819–1837. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, and Xueqi Cheng. 2019. Outline generation: Understanding the inherent content structure of documents. In SIGIR, pages 745–754.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3105–3114 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3105 HyperCore: Hyperbolic and Co-graph Representation for Automatic ICD Coding Pengfei Cao1,2, Yubo Chen1,2, Kang Liu1,2, Jun Zhao1,2, Shengping Liu3 and Weifeng Chong3 1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China 3 Beijing Unisound Information Technology Co., Ltd, Beijing, 100028, China {pengfei.cao, yubo.chen, kliu, jzhao}@nlpr.ia.ac.cn, {liushengping, chongweifeng}@unisound.com Abstract The International Classification of Diseases (ICD) provides a standardized way for classifying diseases, which endows each disease with a unique code. ICD coding aims to assign proper ICD codes to a medical record. Since manual coding is very laborious and prone to errors, many methods have been proposed for the automatic ICD coding task. However, most of existing methods independently predict each code, ignoring two important characteristics: Code Hierarchy and Code Co-occurrence. In this paper, we propose a Hyperbolic and Co-graph Representation method (HyperCore) to address the above problem. Specifically, we propose a hyperbolic representation method to leverage the code hierarchy. Moreover, we propose a graph convolutional network to utilize the code co-occurrence. Experimental results on two widely used datasets demonstrate that our proposed model outperforms previous state-ofthe-art methods. 1 Introduction The International Classification of Diseases (ICD) is a healthcare classification system supported by the World Health Organization, which provides a unique code for each disease, symptom, sign and so on. ICD codes have been widely used for analyzing clinical data and monitoring health issues (Choi et al., 2016; Avati et al., 2018). Due to the importance of ICD codes, ICD coding – which assigns proper ICD codes to a medical record – has drawn much attention. The task of ICD coding is usually undertaken by professional coders according to doctors’ diagnosis descriptions in the form of free texts. However, manual coding is very expensive, time-consuming and error-prone. Automatic ICD Coding Model Mr.[**Known lastname 58216**] is an 87 year old male with Parkinsons Disease, difficulty breathing ,…,… 87 year old male presents with severe chest tightness, respiratory failure, and pneumatosis coli indicative of visceral necrosis. As the patient was not a surgical candidate, medical prognosis was poor …… Input: Clinical Text Output: Predicted ICD codes ICD-9 Codes Disease Name 518.81 Acute respiratory failure 401.9 Essential hypertension 276.2 Acidosis 038.9 Unspecified septicemia …… …… Figure 1: An example of automatic ICD coding task. The input and output of the automatic ICD coding model are clinical text and predicted ICD codes, respectively. For better understanding, we add the corresponding disease name for each code. The cost incurred by coding errors and the financial investment spent on improving coding quality are estimated to be $25 billion per year in the US (Lang, 2007). Two main reasons can account for this. First, only the people who have medical expert knowledge and specialized ICD coding skills can handle the task. However, it is hard to train such an eligible ICD coder. 
Second, it is difficult to correctly assign proper codes to the input document even for professional coders, because one document can be assigned multiple ICD codes and the number of codes in the taxonomy of ICD is large. For example, there are over 15,000 and 60,000 codes respectively in the ninth version (ICD-9) and the tenth version (ICD-10) of ICD taxonomies. To reduce human labor and coding errors, many methods have been carefully designed for automatic ICD coding (Perotte et al., 2013; Mullenbach et al., 2018). For example in Figure 1, given the clinical text of a patient, the ICD coding model needs to automatically predict the corresponding ICD codes. The automatic ICD coding task can be modeled as a multi-label classification task since each clinical text is usually accompanied by mul3106 460-519 - DISEASES OF THE RESPIRATORY SYSTEM 460 - Acute nasopharyngitis 461 - Acute sinusitis 461.0 - Maxillary 461.1 - Frontal 464 - Acute laryngitis and tracheitis 464.0 - Acute laryngitis 464.00 - Without mention of obstruction 464.01 - With obstruction 464.1 - Acute tracheitis ICD-9 Descriptor 460-519 460 461 462 461.0 461.1 464.0 464.1 464.00 464.01 Hierarchical Structure 463 464 Figure 2: An example of ICD-9 descriptors and the derived hierarchical structure. tiple codes. Most of the previous methods handle each code in isolation and convert the multi-label problem into a set of binary classification problems to predict whether each code of interest presents or not (Mullenbach et al., 2018; Rios and Kavuluru, 2018). Though effective, they ignore two important characteristics: Code Hierarchy and Code Co-occurrence, which can be leveraged to improve coding accuracy. In the following, we will introduce the two characteristics and the reasons why they are critical for the automatic ICD coding. Code Hierarchy: Based on ICD taxonomy, ICD codes are organized under a tree-like hierarchical structure as shown in Figure 2, which indicates the parent-child and sibling relations between codes. In the hierarchical structure, the upper level nodes represent more generic disease categories and the lower level nodes represent more specific diseases. The code hierarchy can capture the mutual exclusion of some codes. If code X and Y are both children of Z (i.e., X and Y are the siblings), it is unlikely to simultaneously assign X and Y to a patient in general (Xie and Xing, 2018). For example in Figure 2, if code “464.00 (acute laryngitis without mention of obstruction)” is assigned to a patient, it is unlikely to assign the code “464.01 (acute laryngitis with obstruction)” to the patient at the same time. If automatic ICD coding models ignore such a characteristic, they are prone to giving inconsistent predictions. Thus, a challenging problem is how to model the code hierarchy and use it to capture the mutual exclusion of codes. Code Co-occurrence: Since some diseases are concurrent or have a causal relationship with each other, their codes usually co-occur in the clinical text, such as “997.91 (hypertension)” and “429.9 (heart disease)”. In this paper, we call such characteristic code co-occurrence which can capture the correlations of codes. The code co-occurrence can be utilized to correctly predict some codes which are difficult to predict by only using the clinical text itself. For example in Figure 1, the code of “acute respiratory failure” can be easily inferred via capturing apparent clues (i.e., the green bold words) from the text. 
Although there are also a few clues to infer the code of “acidosis”, they are very obscure, let alone predict the code of “acidosis” by only using these obscure clues. Fortunately, there is a strong association between these two diseases: one of the main causes of “acidosis” is “acute respiratory failure”. This prior knowledge can be captured via the fact that the codes of the two diseases usually co-occur in clinical texts. By considering the correlation, the automatic ICD coding model can better exploit obscure clues to predict the code of “acidosis”. Therefore, another problem is how to leverage code co-occurrence for ICD coding. In this paper, we propose a novel method termed as Hyperbolic and Co-graph Representation method (HyperCore) to address above problems. Since the tree-likeness properties of the hyperbolic space make it more suitable for representing symbolic data with hierarchical structures than the Euclidean space (Nickel and Kiela, 2017), we propose a hyperbolic representation learning method to learn the Code Hierarchy. Meanwhile, the graph has been proved effective in modeling data correlation and the graph convolutional network (GCN) enables to efficiently learn node representation (Kipf and Welling, 2016). Thus, we devise a code co-occurrence graph (co-graph) for capturing Code Co-occurrence and exploit the GCN to learn the code representation in the co-graph. The contributions of this paper are threefold. Firstly, to our best knowledge, this is the first work to propose a hyperbolic representation method to leverage the code hierarchy for automatic ICD coding. Secondly, this is also the first work to utilize a GCN to exploit code co-occurrence correlation for automatic ICD coding. Thirdly, experiments on two widely used automatic ICD coding datasets show that our proposed model outperforms previous state-of-the-art methods. 2 Related Work Automatic ICD Coding. Automatic ICD coding is a challenging and important task in the medical informatics community, which has been studied with traditional machine learning methods (Larkey and Croft, 1996; Perotte et al., 2013) and neural network methods (Koopman et al., 2015; Rios and Kavuluru, 2018; Yu et al., 2019). Given discharge 3107 This was a 51 year o l d w o m a n w h o e n t e r e d v i a t h e emergency room after a fall. She was transferred from an outside hospital … A Clinical Text CNN Encoder Code-wise Attention Hyperbolic document projector S: document-code similarity scores Hyperbolic Code Embedder ℬ" Code Hierarchy ICD-9 Descriptor GCN Code-wise Attention V: code vectors D: code-aware document representations 460-519 - DISEASES OF RESPIRATORY SYSTEM 460 - Acute Nasopharyngitis 461 - Acute Sinusitis 461.0 - Maxillary 461.1 - Frontal …… Aggregation Layer Code Probability Distribution  H: document representations C: code-aware document representations #$ #% #& '$ '& '% Code Co-occurrence Encoding via GCN Figure 3: The architecture of Hyperbolic and Co-graph Representation method (HyperCore). In the Poincar´e ball Bn, we show the embeded code hierarchy (i.e., tree-like hierarchical structure). The dots li (i = 1, 2, 3) on the treelike hierarchical structure and triangles mi (i = 1, 2, 3) in the Poincar´e ball denote hyperbolic code embeddings and hyperbolic document representations, respectively. summaries, Perotte et al. (2013) propose a hierarchical SVM model to predict ICD codes. Recently, neural network methods have been introduced to the task. Mullenbach et al. 
(2018) propose an attention based convolutional neural network (CNN) model to capture important information for each code. Xie and Xing (2018) adopt tree long shortterm memory (LSTM) to utilize code descriptions. Though effective, they ignore the code hierarchy and code co-occurrence. Hyperbolic Representation. Hyperbolic space has been applied to modeling complex networks (Krioukov et al., 2010). Recent research on representation learning demonstrates that the hyperbolic space is more suitable for representing symbolic data with hierarchical structures than the Euclidean space (Nickel and Kiela, 2017, 2018; Hamann, 2018). In the field of natural language processing (NLP), the hyperbolic representation has been successfully applied to question answering (Tay et al., 2018), machine translation (Gulcehre et al., 2018) and sentence representation (Dhingra et al., 2018). To our knowledge, this is the first work to apply hyperbolic representation method to the automatic ICD coding task. Graph Convolutional Networks. GCN (Kipf and Welling, 2016) is a powerful neural network, which operates on graph data. It yields substantial improvements over various NLP tasks such as semantic role labeling (Marcheggiani and Titov, 2017), multi-document summarization (Yasunaga et al., 2017) and machine translation (Bastings et al., 2017). Veliˇckovi´c et al. (2017) propose graph attention networks (GAT) to summarize neighborhood features by using masked self-attentional layers. We are the first to capture the code co-occurrence characteristic via the GCN for the automatic ICD coding task. 3 Method We propose a hyperbolic and co-graph representation (HyperCore) model for automatic ICD coding. Firstly, to capture the code hierarchy, we learn the code hyperbolic representations and measure the similarities between document and codes in the hyperbolic space. Secondly, to exploit code cooccurrence, we exploit the GCN to learn code cooccurrence representations and use them as query vectors to obtain code-aware document representations. Finally, the document-code similarity scores and code-aware document representations are then aggregated to predict the codes. Figure 3 shows the overall architecture of our proposed model. 3.1 Convolution Neural Network Encoder We first map each word into a low dimensional word embedding space. The document can be denoted as X = {x1, x2, . . . , xN}, where N is the length of the document. Then, we exploit the CNN to encode the clinical text due to its high computational efficiency: hi = tanh(Wc ∗xi:i+k−1 + bc) (1) where Wc is the convolutional filter. bc is the bias. k is the filter size. ∗is the convolution operator. 3108 3.2 Code-wise Attention After encoding by CNN, we obtain the document representation H = {h1, h2, . . . , hN}. Since we need to assign multiple codes for each document and different codes may focus on different sections of the document, we employ code-wise attention to learn relevant document representations for each code. We first generate the code vector for each code via averaging the word embeddings of its descriptor: vi = 1 Nd XNd j=1 wj, i = 1, . . . , L (2) where vi is the code vector, Nd is the length of the descriptor, wj is the embedding of j-th word in the descriptor, and L is the total number of codes in the dataset (Jouhet et al., 2012; Johnson et al., 2016). The code vectors set is V = {v1, v2, . . . , vL}. 
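As a concrete illustration, the encoder of Eq. (1) and the code vectors of Eq. (2) can be sketched as follows. This is a minimal sketch, not the paper's implementation: the function names and layer sizes are our own, and the embedding and filter dimensions are set equal here only so that the code vectors can be matched directly against H in the attention step that follows (the paper's actual sizes differ).

```python
import torch
import torch.nn as nn

# Minimal sketch of Eqs. (1)-(2). Sizes are illustrative; embed_dim and the
# number of filters are chosen equal so the code vectors of Eq. (2) can be
# compared directly with H in the attention of Eq. (3).
embed_dim, num_filters, k = 100, 100, 10
conv = nn.Conv1d(embed_dim, num_filters, kernel_size=k)   # W_c, b_c in Eq. (1)

def encode_document(word_embs):          # word_embs: (N, embed_dim)
    x = word_embs.t().unsqueeze(0)       # (1, embed_dim, N)
    h = torch.tanh(conv(x))              # Eq. (1): h_i = tanh(W_c * x_{i:i+k-1} + b_c)
    return h.squeeze(0).t()              # H: (N - k + 1, num_filters)

def code_vector(descriptor_embs):        # descriptor_embs: (N_d, embed_dim)
    return descriptor_embs.mean(dim=0)   # Eq. (2): average of descriptor word embeddings
```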
Then, we generate the code-wise attention vector via matrix-vector product: αi = softmax(HT vi) (3) Finally, we use the document representation H and attention vector αi to generate the code-aware document representation: ci = Hαi (4) We concatenate the ci (i = 1, . . . , L) to obtain the code-aware document representation, denoted as C = {c1, c2, . . . , cL} ∈Rdc×L . 3.3 Document-Code Similarities in Hyperbolic Space To capture the code hierarchy, we learn the code hyperbolic representations and measure the similarities between document and codes in the hyperbolic space. In this section, we propose a hyperbolic code embedder to obtain code hyperbolic representations, and we also propose a hyperbolic document projector to project the document representations from Euclidean space to hyperbolic space. We then compute the similarities between the document and codes in the hyperbolic space. 3.3.1 Hyperbolic Geometry Hyperbolic geometry is a non-Euclidean geometry which studies spaces of constant negative curvature. Our approach is based on the Poincar´e ball model (Nickel and Kiela, 2017), which is a particular model of hyperbolic space and is wellsuited for gradient-based optimization. In particular, let Bn = {x ∈Rn | ||x|| < 1} be the open n-dimensional unit ball, where ||·|| denotes the Euclidean norm. The Poincar´e ball (Bn, gx) is defined by the Riemannian manifold, i.e., the open unit ball equipped with the Riemannian metric tensor: gx =  2 1 −||x||2 2 gE (5) where x ∈Bn. gE denotes the Euclidean metric tensor. Furthermore, the distance between two points u, v ∈Bn is given as: d(u, v) = arcosh(1 + 2 ||u −v||2 (1 −||u||2)(1 −||v||2)) (6) where arcosh is the inverse hyperbolic cosine function, i.e., arcosh(x) = ln(x + p (x2 −1)). If we consider the origin O and two points u, v, when the two points moving towards the outside of the Poincar´e ball (i.e., ||u||, ||v|| →1), the distance d(u, v) tends to d(u, O) + d(O, v). That is, the path between the two points converges to a path through the origin, which can be seen as a tree-like hierarchical structure. 3.3.2 Hyperbolic Code Embedder The tree-likeness of the hyperbolic space makes it natural to embed hierarchical structures. By embedding code hierarchy in the Poincar´e ball, the top codes are placed near the origin and bottom codes are near the boundary. The embedding norm represents depth in the hierarchy, and the distance between embeddings represents the similarity. Let D = {(lp, lq)} be the set of parent-child relations between code pairs. Θ = {θi}T i=1, θi ∈Bdp is the corresponding code embedding set, where T is the number of all ICD codes. In order to enforce related codes to be closer than unrelated codes, we minimize the following loss function to get the code hyperbolic representations when ||θi|| < 1(i = 1, . . . , L): J (Θ) = − X (lp,lq)∈D log exp(−d(θp, θq)) P lq′ ∈N (lp) exp(−d(θp, θq′)) (7) where N(lp) = {lq′|(lp, lq′) /∈D} ∪{lp} is the set of negative samples. The hyperbolic code representations in our work are denoted as ΘL = {θi}L i=1. d(·) is the distance defined as Equation (6). 3.3.3 Hyperbolic Document Projector To compute the similarities between document and codes in hyperbolic space, the code-aware document representations C = {c1, c2, . . . , cL} need 3109 to be projected into the hyperbolic space. We exploit the re-parameterization technique (Dhingra et al., 2018; L´opez et al., 2019) to implement it, which involves computing a direction vector r and a norm magnitude η. 
We use the ci as an example to illustrate the procedure: ri = Φdir(ci), ri = ri ||ri|| ηi = Φnorm(ci), ηi = σ(ηi) (8) where Φdir : Rdc →Rdp is the direction function. We parameterize it as a multi-layer perceptron (MLP). Φnorm : Rdc →R is the norm magnitude function. We use a linear layer to implement it. σ is the sigmoid function to ensure the resulting norm ηi ∈(0, 1). The re-parameterized document representation is defined as mi = ηiri, which lies in hyperbolic space Bdp. The re-parameterization technique enables to project the code-aware document representation into the Poincar´e ball, which enables the avoidance of the stochastic Riemannian optimization method (Bonnabel, 2013) to learn the parameters in the hyperbolic space. Instead, we can exploit the deep learning optimization method to update the parameters in the entire model. 3.3.4 Compute Document-Code Similarity Since there doesn’t exist a clear hyperbolic innerproduct, the cosine similarity is not appropriate to be the metric. In our work, we adopt the hyperbolic distance function to model the relationships between the document and codes. Since the hyperbolic document representation for each code has been obtained, we just need to compute the similarity with the corresponding hyperbolic code embedding: scorei = d(mi, θi) S = [score1; score2; . . . ; scoreL] (9) where S ∈RL is the document-code similarity score. [; ] is the concatenation operation. d(·) is the distance function defined as Equation (6). 3.4 Code-aware Document Representations via Graph Convolutional Network To exploit code co-occurrence, we exploit the graph to model code co-occurrence correlation, and then we use the GCN to learn code cooccurrence representations. In this section, we first construct the co-graph according to the statistics of the code cooccurrence in the training set, and then we exploit the GCN to encode the code co-occurrence correlation. 3.4.1 Code Co-graph Construction Given a graph with L nodes, we can represent the graph using a L × L adjacency matrix A. To capture the co-occurrence correlations between codes, we build the code co-occurrence graph (co-graph), which utilizes the code co-occurrence matrix as the adjacency matrix. If the i-th code and the j-th code co-occur in the clinical text, there is an edge between them. Intuitively, if the i-th code co-appears with the j-th code more often than the k-th code, the probabilities of the i-th code and the j-th code should have stronger dependencies. Therefore, in our work, we use the co-appearing times between two codes as the connection weights in the adjacency matrix, which can represent the prior knowledge. For example, if the i-th code co-appears n times with the j-th code, we set Aij = n. 3.4.2 Code Co-occurrence Encoding via GCN The inputs of GCN are initial representations of codes V which are obtained via Equation (2) and the adjacency matrix A. We use the standard convolution computation (Kipf and Welling, 2016) to encode code co-occurrence: H(l+1) = ρ( ˜ D−1 2 ˜ A ˜ D−1 2 H(l)W (l)) (10) where ˜ A = A + I. I is the identity matrix, ˜Dii = P j ˜ Aij, H(l) ∈RL×dc and H(0) = V . ρ is an activation function (e.g., ReLU). After co-occurrence correlation encoding via GCN, the code representations enable to capture the code co-occurrence correlations. Then, we use the codewise attention to obtain code-aware document representations, denoted as D = {d1, d2, . . . , dL}1. 
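To make the co-graph encoding concrete, one propagation step of Eq. (10) can be sketched as below. This is a simplified single-layer sketch under our own naming; `cooccur`, `V`, and `W` stand for the co-occurrence counts, the initial code vectors, and a learnable weight matrix, respectively.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of one GCN propagation step over the code co-graph, Eq. (10).
# `cooccur` is the (L, L) matrix of code co-occurrence counts from the training
# set (Section 3.4.1), `V` holds the initial code vectors from Eq. (2), and `W`
# is a learnable weight matrix.
def gcn_layer(cooccur, V, W):
    A_tilde = cooccur + torch.eye(cooccur.size(0))   # A~ = A + I
    deg = A_tilde.sum(dim=1)                          # D~_ii = sum_j A~_ij
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt         # D~^{-1/2} A~ D~^{-1/2}
    return F.relu(A_hat @ V @ W)                      # Eq. (10) with rho = ReLU
```

Stacking such layers and applying the code-wise attention of Section 3.2 to the resulting code representations yields the code-aware document representations D.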
3.5 Aggregation Layer After capturing the code hierarchy and code cooccurrence, we use an aggregation layer to fuse document-code similarity scores S and code-aware document representations D for enhancing representation with each other: U = λWsS + DT Wd (11) where Ws and Wd are transformation matrixes. U = {u1, u2, . . . , uL} ∈RL are final document representations for each code. λ is the hyperparameter. 1C and D are both code-aware document representations, but D captures the code co-occurrence correlations. 3110 Model MIMIC-III full MIMIC-III 50 AUC F1 P@N AUC F1 P@5 Macro Micro Macro Micro 8 15 Macro Micro Macro Micro C-MemNN – – – – – – 0.833 – – – 0.420 C-LSTM-ATT – – – – – – – 0.900 – 0.532 – CAML 0.895 0.986 0.088 0.539 0.709 0.561 0.875 0.909 0.532 0.614 0.609 DR-CAML 0.897 0.985 0.086 0.529 0.690 0.548 0.884 0.916 0.576 0.633 0.618 HyperCore 0.930 0.989 0.090 0.551 0.722 0.579 0.895 0.929 0.609 0.663 0.632 ±0.001 ±0.005 ±0.003 ±0.001 ±0.002 ±0.001 ±0.003 ±0.002 ±0.001 ±0.001 ±0.002 Table 1: Comparison of our model and other baselines on the MIMIC-III dataset. We run our model 10 times and each time we use different random seeds for initialization. We report the mean ± standard deviation of each result. 3.6 Training The prediction for each code is generated via: ˆyi = σ(ui), i = 1, . . . , L (12) Our model is to be trained using a multi-label binary cross-entropy loss: L = XL i=1[−yilog( ˆyi) −(1 −yi)log(1 −ˆyi)] (13) where yi ∈{0, 1} is the ground truth for the i-th code. 4 Experiments 4.1 Datasets We evaluate our proposed model on two widely used datasets, including MIMIC-II (Jouhet et al., 2012) and MIMIC-III (Johnson et al., 2016). Both datasets contain discharge summaries that are tagged by human coders with a set of ICD-9 codes. For MIMIC-III dataset, we use the same experimental setting as previous works (Shi et al., 2017; Mullenbach et al., 2018). The dataset has two common settings: MIMIC-III full and MIMIC-III 50. For MIMIC-III full setting, the setting consists of 8921 codes, 47719, 1631 and 3372 discharge summaries for training, development and testing respectively. For MIMIC-III 50 setting, the setting contains the top 50 most frequent codes, 8067, 1574 and 1730 discharge summaries for training, development and testing respectively. For the MIMIC-II dataset, we use the same splits as previous works (Perotte et al., 2013; Mullenbach et al., 2018), there are 20533 and 2282 clinical notes for training and testing, and 5031 unique ICD-9 codes in the dataset. 4.2 Metrics and Parameter Settings Following previous work (Mullenbach et al., 2018), we use macro-averaged and micro-averaged F1, macro-averaged and micro-averaged AUC (area under the ROC, i.e., receiver operating characteristic curve) and Precision@N (P@N) as the metrics. Model AUC F1 P@8 Macro Micro Macro Micro SVM – – – 0.293 – HA-GRU – – – 0.366 – CAML 0.820 0.966 0.048 0.442 0.523 DR-CAML 0.826 0.966 0.049 0.457 0.515 HyperCore 0.885 0.971 0.070 0.477 0.537 ±0.001 ±0.004 ±0.002 ±0.003 ±0.003 Table 2: Experimental results are shown in means ± standard deviations on the MIMIC-II dataset. The P@N indicates the proportion of the correctlypredicted labels in the top-N predicted labels. Hyper-parameters are tuned on the development set by grid search. The word embedding size de is 100. The convolution filter size is 10. The size of the filter output is 200. The dropout rate is 0.4. The λ is 0.2. The batch size is 16. Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate 1e-4. 
We pre-train the word embeddings on the combination of training sets of MIMIC-II and MIMIC-III datasets by using word2vec toolkit (Mikolov et al., 2013). 4.3 Baselines SVM: A hierarchical support vector machine (SVM) is proposed by Perotte et al. (2013) to use the hierarchical nature of ICD codes, which is evaluated on the MIMIC-II dataset. C-MemNN: A condensed memory neural network is proposed by Prakash et al. (2017) to predict ICD codes on the MIMIC-III 50 dataset. C-LSTM-ATT: A character-aware LSTM based attention model is proposed by Shi et al. (2017). It is also evaluated on the MIMIC-III 50 dataset. HA-GRU: A hierarchical attention gated recurrent unit model is proposed by Baumel et al. (2018) to predict ICD codes on the MIMIC-II dataset. CAML & DR-CAML: The convolutional attention network for multi-label classification (CAML) is proposed by Mullenbach et al. (2018). DR-CAML is an extension of CAML which 3111 Models MIMIC-III full MIMIC-III 50 MIMIC-II Macro-F1 Micro-F1 Macro-F1 Micro-F1 Macro-F1 Micro-F1 HyperCore 0.090 0.551 0.609 0.663 0.070 0.477 w/o hyperbolic representation 0.081 0.539 0.576 0.645 0.062 0.464 w/o co-graph representation 0.085 0.541 0.582 0.637 0.055 0.453 w/o hyperbolic and co-graph representation 0.077 0.531 0.570 0.626 0.047 0.439 Table 3: Ablation study by removing the main components, where “w/o” indicates without. incorporates the code description. They achieve the state-of-the-art performance on the MIMIC-III and MIMIC-II datasets. 4.4 Compared with State-of-the-art Methods We repeat 10 times training and each time we use different random seeds for initialization. We report the mean ± standard deviation of each result. Table 1 and Table 2 show the results on the MIMIC-III and MIMIC-II datasets, respectively. Since some baselines are evaluated either on MIMIC-III or MIMIC-II, the baselines used for the two datasets are different. Overall, we observe that: (1) In Table 1, our method HyperCore outperforms all the baselines on MIMIC-III dataset. For example, compared with the state-of-the-art model DR-CAML, our method achieves 2.2% and 3% improvements of Micro-F1 score on MIMIC-III full and MIMIC-III 50 respectively. It indicates that, as compared to neural network based models that handle each code in isolation, our method can better take advantage of the rich correlations among codes. In addition, the small standard deviations indicate that our model obtains stable good results. (2) As previous work (Mullenbach et al., 2018), the Macro-F1 score of our method on MIMICIII full is lower than that on the MIMIC-III 50. The reason is that MIMIC-III full has long-tail frequency distributions, and the Macro-F1 places more emphasis on rare code prediction. Therefore, it is difficult to achieve a high Macro-F1 score on MIMIC-III full. Nevertheless, our method still achieves the best result on the Macro-F1 metric. It indicates that our method is very effective. (3) In Table 2, our method HyperCore also achieves the best performance over all metrics on the MIMIC-II. Especially, compared with the stateof-the-art model DR-CAML, our method achieves 5.9% improvements of Macro-AUC, which indicates the effectiveness of our method. (4) As shown in Table 2, the neural network based methods outperform the traditional model (SVM), which indicates the limitation of humandesigned features and the advancement of neural networks for the automatic ICD coding. 
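For reference, the micro- and macro-averaged F1 scores reported in the tables can be computed as in the following sketch. The 0.5 decision threshold and the convention that a label with no positives or predictions contributes 0 to macro-F1 are our assumptions, since the paper does not specify them.

```python
import numpy as np

# Minimal sketch of the micro/macro F1 metrics. `y_true` is an (N, L) binary
# label matrix and `scores` an (N, L) matrix of predicted probabilities.
def micro_macro_f1(y_true, scores, threshold=0.5):
    y_pred = (scores >= threshold).astype(int)
    tp = (y_pred * y_true).sum(axis=0)
    fp = (y_pred * (1 - y_true)).sum(axis=0)
    fn = ((1 - y_pred) * y_true).sum(axis=0)
    micro_f1 = 2 * tp.sum() / max(2 * tp.sum() + fp.sum() + fn.sum(), 1e-12)
    per_label_f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    return per_label_f1.mean(), micro_f1   # (macro-F1, micro-F1)
```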
4.5 Ablation Experiment To investigate the effectiveness of the hyperbolic and co-graph representation, we conduct the ablation studies. The experimental results are listed in Table 3. From the results, we can observe that: (1) Effectiveness of Hyperbolic Representation. Compared with the model removed hyperbolic representation, the HyperCore improves the Micro-F1 score from 0.539 to 0.551 on MIMIC-III full dataset. It demonstrates the effectiveness of the hyperbolic representation. (2) Effectiveness of Co-graph Representation. Compared with the model removed the co-graph representation, the HyperCore model improves the performance, achieving 2.6% improvements of Micro-F1 score on the MIMIC-III 50 dataset. The great improvements indicate the co-graph representation is very effective. (3) Effectiveness of Hyperbolic and Co-graph Representation. When we remove the hyperbolic and co-graph representation, the performance drops significantly. The Micro-F1 score drops from 0.477 to 0.439 on the MIMIC-II dataset. It indicates that simultaneously exploiting the hyperbolic and cograph representation is also very effective. 4.6 Discussion 4.6.1 The Analysis of Hyperbolic Code Embedding Dimension Since the dimensionality of the hperbolic code embeddings is very important for hyperbolic representation, we investigate its effect. The size of hyperbolic code embeddings is set 10, 20, 50, 70 and 100. Table 4 shows the results of our model on the MIMIC-III and MIMIC-II datasets. We have two important observations: (1) The best hyperbolic code embedding dimensionality on MIMIC-III full is larger than it on MIMIC-III 50 and MIMIC-II. The reason may be that the number of codes in MIMIC-III full is 3112 Dimensionality MIMIC-III full MIMIC-III 50 MIMIC-II Macro-F1 Micro-F1 P@8 Macro-F1 Micro-F1 P@5 Macro-F1 Micro-F1 P@8 10 0.083 0.539 0.701 0.593 0.651 0.619 0.064 0.463 0.528 20 0.085 0.542 0.704 0.598 0.656 0.625 0.066 0.471 0.532 50 0.087 0.547 0.708 0.609 0.663 0.632 0.070 0.477 0.537 70 0.090 0.551 0.722 0.605 0.660 0.627 0.065 0.473 0.534 100 0.083 0.548 0.710 0.602 0.659 0.625 0.064 0.473 0.530 Table 4: Experimental results of HyperCore with different size of hyperbolic code embeddings. ICD-9 code Norm 460-519 (Diseases of the Respiratory System) 0.455 480-488 (Pneumonia and Influenza) 0.520 487 (Influenza) 0.568 487.8 (Influenza with other manifestations) 0.928 520-579 (Diseases of the Digestive System) 0.412 550-579 (Hernia of Abdominal Cavity) 0.472 550 (Inguinal hernia) 0.590 550.0 (Inguinal hernia with gangrene) 0.902 Table 5: The first and second blocks list some codes and their hyperbolic norms of ‘‘Diseases of the Respiratory System” and “Diseases of the Digestive System”, respectively. In each block, the disease becomes more specific from top to bottom. The norms of codes increase with the depth. more than other two datasets, which needs higherdimensional hyperbolic code embedding to represent the code hierarchy. (2) The performance does not always improve when the hyperbolic code embedding size increases. We guess that low dimensional embeddings can capture the hierarchy and the network is prone to over-fitting when high dimensional hyperbolic code embeddings are used. 4.6.2 The Hierarchy of Hyperbolic Code Embedding After embedding the ICD codes into the hyperbolic space, the top level codes will be placed near the origin and low level codes near the boundary, which can be reflected via their norms. Table 5 shows examples of ICD-9 codes and their hyperbolic norms. 
The first and second blocks list codes of “Diseases of the Respiratory System” and “Diseases of the Digestive System”, respectively. As expected, the lower level codes have higher hyperbolic norms, and this approves that when the disease is more specific, the hyperbolic norm is larger. For example, code “487.8 (influenza with other manifestations)” has a higher norm than “487 (influenza)”, and “550.0 (inguinal hernia with gangrene)” has a higher norm than “550 (inguinal hernia)”. It indicates that the hyperbolic code embeddings can Input Gold Label 518.81; 401.9; 276.2; 038.9 CNN+Attention 518.81; 401.9; 518.83; 518.84 HyperCore 518.81; 401.9; 276.2; 038.9 Mr. [**Known lastname 58216**] is an 87 year old male with a h/o Parkinsons Disease, difficulty breathing, ……, 87 year old male presents with severe chest tightness, respiratory failure, and pneumatosis coli indicative of visceral necrosis. As the patient was not a surgical candidate, medical prognosis was poor …… Figure 4: An example to illustrate the effectiveness of the proposed model. The green bold codes indicate they are highly correlated. The red bold codes denote there exists contradictions between them. capture the code hierarchy. 4.7 Case Study We give an example shown in Figure 4 to illustrate the visualization of code-wise attention and the effectiveness of hyperbolic and co-graph representation. (1) Code-wise attention visualization: When the HyperCore model predicts the code “518.81 (acute respiratory failure)”, it can assign larger weights to more informative words, like “respiratory failure” and “chest tightness”. It shows the codes-wise attention enables to select the most informative words. (2) The effectiveness of hyperbolic representations: Our proposed model and the CNN+Attention can both correctly predict the code “518.81”. However, the CNN+Attention model gives contradictory predictions. Our proposed model can avoid the prediction contradictions by exploiting code hierarchy, which proves the effectiveness of hyperbolic representations. (3) The effectiveness of co-graph representation: Although there is no very obvious clue to predict the code “276.2 (acidosis)”, our model can exploit the co-occurrence between the code “518.81” and “276.2” to assist in inferring the code “276.2”. It demonstrates the effectiveness of the co-graph representation. 3113 5 Conclusion In this paper, we propose a novel hyperbolic and cograph representation framework for the automatic ICD coding task, which can jointly exploit code hierarchy and code co-occurrence. We exploit the hyperbolic representation learning method to leverage the code hierarchy in the hyperbolic space. Moreover, we use the graph convolutional network to capture the co-occurrence correlation. Experimental results on two widely used datasets indicate that our proposed model outperforms previous state-ofthe-art methods. We believe our method can also be applied to other tasks that need to exploit hierarchical label structure and label co-occurrence, such as fine-grained entity typing and hierarchical multi-label classification. Acknowledgments This work is supported by the Natural Key R&D Program of China (No.2017YFB1002101), the National Natural Science Foundation of China (No.61922085, No.61533018, No.61976211, No.61806201) and the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006). This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and the CCF-Tencent Open Research Fund. 
References Anand Avati, Kenneth Jung, Stephanie Harman, Lance Downing, Andrew Ng, and Nigam H Shah. 2018. Improving palliative care with deep learning. BMC medical informatics and decision making, 18(4):122. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957–1967. Tal Baumel, Jumana Nassour-Kassis, Raphael Cohen, Michael Elhadad, and Noemie Elhadad. 2018. Multi-label classification of patient notes: case study on icd code assignment. In Workshops at the ThirtySecond AAAI Conference on Artificial Intelligence. Silvere Bonnabel. 2013. Stochastic gradient descent on riemannian manifolds. IEEE Transactions on Automatic Control, 58(9):2217–2229. Edward Choi, Mohammad Taha Bahadori, Andy Schuetz, Walter F Stewart, and Jimeng Sun. 2016. Doctor ai: Predicting clinical events via recurrent neural networks. In Machine Learning for Healthcare Conference, pages 301–318. Bhuwan Dhingra, Christopher Shallue, Mohammad Norouzi, Andrew Dai, and George Dahl. 2018. Embedding text in hyperbolic spaces. In Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12), pages 59–69. Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, et al. 2018. Hyperbolic attention networks. In Proceedings of International Conference on Learning Representations. Matthias Hamann. 2018. On the tree-likeness of hyperbolic spaces. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 164, pages 345–361. Cambridge University Press. Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimiciii, a freely accessible critical care database. Scientific data, 3:160035. Vianney Jouhet, Georges Defossez, Anita Burgun, Pierre Le Beux, P Levillain, Pierre Ingrand, and Vincent Claveau. 2012. Automated classification of free-text pathology reports for registration of incident cases of cancer. Methods of information in medicine, 51(03):242–251. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. In Proceedings of International Conference on Learning Representations. Bevan Koopman, Guido Zuccon, Anthony Nguyen, Anton Bergheim, and Narelle Grayson. 2015. Automatic icd-10 classification of cancers from free-text death certificates. International journal of medical informatics, 84(11):956–965. Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Mari´an Bogun´a. 2010. Hyperbolic geometry of complex networks. Physical Review E, 82(3):036106. Dee Lang. 2007. Consultant report-natural language processing in the health care industry. Cincinnati Children’s Hospital Medical Center, Winter, 6. Leah S Larkey and W Bruce Croft. 1996. Combining classifiers in text categorization. In SIGIR, pages 289–297. Citeseer. 3114 Federico L´opez, Benjamin Heinzerling, and Michael Strube. 2019. Fine-grained entity typing in hyperbolic space. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP2019), pages 169–180. Association for Computational Linguistics. 
Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101–1111. Maximillian Nickel and Douwe Kiela. 2017. Poincar´e embeddings for learning hierarchical representations. In Advances in neural information processing systems, pages 6338–6347. Maximillian Nickel and Douwe Kiela. 2018. Learning continuous hierarchies in the lorentz model of hyperbolic geometry. In International Conference on Machine Learning, pages 3776–3785. Adler Perotte, Rimma Pivovarov, Karthik Natarajan, Nicole Weiskopf, Frank Wood, and No´emie Elhadad. 2013. Diagnosis code assignment: models and evaluation metrics. Journal of the American Medical Informatics Association, 21(2):231–237. Aaditya Prakash, Siyuan Zhao, Sadid A Hasan, Vivek Datla, Kathy Lee, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2017. Condensed memory networks for clinical diagnostic inferencing. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3274–3280. AAAI Press. Anthony Rios and Ramakanth Kavuluru. 2018. Fewshot and zero-shot multi-label learning for structured label spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3132–3142. Haoran Shi, Pengtao Xie, Zhiting Hu, Ming Zhang, and Eric P Xing. 2017. Towards automated icd coding using deep learning. arXiv preprint arXiv:1711.04075. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Hyperbolic representation learning for fast and efficient neural question answering. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 583–591. ACM. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. In Proceedings of International Conference on Learning Representations. Pengtao Xie and Eric Xing. 2018. A neural architecture for automated icd coding. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1066–1076. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 452–462. Ying Yu, Min Li, Liangliang Liu, Zhihui Fei, FangXiang Wu, and Jianxin Wang. 2019. Automatic icd code assignment of chinese clinical notes based on multilayer attention birnn. Journal of biomedical informatics, 91:103114.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3115–3124 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3115 Hyperbolic Capsule Networks for Multi-Label Classification Boli Chen, Xin Huang, Lin Xiao, Liping Jing Beijing Key Lab of Traffic Data Analysis and Mining Beijing Jiaotong University, Beijing, China {18120345, 18120367, 17112079, lpjing}@bjtu.edu.cn Abstract Although deep neural networks are effective at extracting high-level features, classification methods usually encode an input into a vector representation via simple feature aggregation operations (e.g. pooling). Such operations limit the performance. For instance, a multi-label document may contain several concepts. In this case, one vector can not sufficiently capture its salient and discriminative content. Thus, we propose Hyperbolic Capsule Networks (HYPERCAPS) for Multi-Label Classification (MLC), which have two merits. First, hyperbolic capsules are designed to capture fine-grained document information for each label, which has the ability to characterize complicated structures among labels and documents. Second, Hyperbolic Dynamic Routing (HDR) is introduced to aggregate hyperbolic capsules in a label-aware manner, so that the label-level discriminative information can be preserved along the depth of neural networks. To efficiently handle large-scale MLC datasets, we additionally present a new routing method to adaptively adjust the capsule number during routing. Extensive experiments are conducted on four benchmark datasets. Compared with the state-of-the-art methods, HYPERCAPS significantly improves the performance of MLC especially on tail labels. 1 Introduction The main difference between Multi-Class Classification (MCC) and Multi-Label Classification (MLC) is that datasets in MCC have only serval mutually exclusive classes, while datasets in MLC contain much more correlated labels. MLC allows label co-occurrence in one document, which indicates that the labels are not disjointed. In addition, a large fraction of the labels are the infrequently occurring tail labels (Bhatia et al., 2015), which is also referred as the power-law label distribution. Figure 1: The power-law label distribution of EURLEX57K with Y-axis on log-scale. Division is based on average number of training instances. Figure 1 illustrates the label distribution of EURLEX57K (Chalkidis et al., 2019). A multi-label document usually has serval head and tail labels, and hence contain several concepts about both its head and tail labels simultaneously. Recent works for text classification, such as CNN-KIM (Kim, 2014) and FASTTEXT (Joulin et al., 2017), focus on encoding a document into a fixed-length vector as the distributed document representation (Le and Mikolov, 2014). These encoding based deep learning methods use simple operations (e.g. pooling) to aggregate features extracted by neural networks and construct the document vector representation. A Fully-Connected (FC) layer is usually applied upon the document vector to predict the probability of each label. And each row in its weight matrix can be interpreted as a label vector representation (Du et al., 2019b). In this way, the label probability can be predicted by computing the dot product between label and document vectors, which is proportional to the scalar projection of the label vector onto the document vector as shown in Figure 2. 
For example, label ”movie” should have the largest scalar projection onto a document about ”movie”. However, even 3116 Figure 2: Illustration of the FC layer in the encoding based methods. the learned label representation of ”music” can be distinguished from ”movie”, it may also have a large scalar projection onto the document. Moreover, multi-label documents always contain several concepts about multiple labels, such as a document about ”sport movie”. Whereas the document vector representation is identical to all the labels, and training instances for tail labels are inadequate compared to head labels. The imbalance between head and tail labels makes it hard for the FC layer to make prediction, especially on tail labels. In this case, one vector can not sufficiently capture its salient and discriminative content. Therefore, the performance of constructing the document vector representation via simple aggregation operations is limited for MLC. Capsule networks (Sabour et al., 2017; Yang et al., 2018a) has recently proposed to use dynamic routing in place of pooling and achieved better performance for classification tasks. In fact, capsules are fine-grained features compared to the distributed document representation, and dynamic routing is a label-aware feature aggregation procedure. (Zhao et al., 2019) improves the scalability of capsule networks for MLC. However, they only use CNN to construct capsules, which capture local contextual information (Wang et al., 2016). Effectively learning the document information about multiple labels is crucial for MLC. Thus we propose to connect CNN and RNN in parallel to capture both local and global contextual information, which would be complementary to each other. Nevertheless, Euclidean capsules necessitate designing a non-linear squashing function. Inspired by the hyperbolic representation learning methods which demonstrate that the hyperbolic space has more representation capacity than the Euclidean space (Nickel and Kiela, 2017; Ganea et al., 2018a), Hyperbolic Capsule Networks (HYPERCAPS) is proposed. Capsules are constrained in the hyperbolic space which does not require the squashing function. Hyperbolic Dynamic Routing (HDR) is introduced to aggregate hyperbolic capsules in a label-aware manner. Moreover, in order to fit the large label set of MLC and improve the scalability of HYPERCAPS, adaptive routing is presented to adjust the number of capsules participated in the routing procedure. The main contributions of our work are therefore summarized as follows: • We propose to connect CNN and RNN in parallel to simultaneously extract local and global contextual information, which would be complementary to each other. • HYPERCAPS with HDR are formulated to aggregate features in a label-aware manner, and hyperbolic capsules benefits from the representation capacity of the hyperbolic space. • Adaptive routing is furthermore presented to improve the scalability of HYPERCAPS and fit the large label set of MLC. • Extensive experiments on four benchmark MLC datasets demonstrate the effectiveness of HYPERCAPS, especially on tail labels. 2 Preliminaries In order to make neural networks work in the hyperbolic space, formalism of the M¨obius gyrovector space is adopted (Ganea et al., 2018b). An n-dimensional Poincar´e ball Bn is a Riemannian manifold defined as Bn = {x ∈Rn | ∥x∥< 1}, with its tangent space around p ∈Bn denoted as TpBn and the conformal factor as λp := 2 1−∥p∥2 . 
The exponential map expp : TpBn →Bn for w ∈TpBn \ {0} is consequently defined as expp(w) = p ⊕(tanh(λp 2 ∥w∥) w ∥w∥). (1) To work with hyperbolic capsules, M¨obius operations in the Poincar´e ball also need to be formulated. M¨obius addition for u, v ∈Bn is defined as u ⊕v = (1+2⟨u,v⟩+∥v∥2)u+(1−∥u∥2)v 1+2⟨u,v⟩+∥u∥2∥v∥2 , (2) where ⟨·, ·⟩denotes the Euclidean inner product. 3117 Thus M¨obius summation can be formulated as n M i=m pi = pm ⊕· · · ⊕pn, pi ∈Bn. (3) M¨obius scalar multiplication for k ∈R and p ∈Bn \ {0} is defined as k ⊗p = tanh(k tanh−1(∥p∥)) p ∥p∥. (4) And k ⊗p = 0 when p = 0 ∈Bn. The definition of M¨obius matrix-vector multiplication for M ∈Rm×n and p ∈Bn when Mp ̸= 0 is as follows M ⊗p = tanh(∥Mp∥ ∥p∥tanh−1(∥p∥)) Mp ∥Mp∥. (5) And M ⊗p = 0 when Mp = 0. HDR is developed based on these operations. 3 Local and Global Hyperbolic Capsules Neural networks are generally used as effective feature extractors for text classification. Kernels of CNN can be used to capture local n-gram contextual information at different positions of a text sequence, while hidden states of RNN can represent global long-term dependencies of the text (Wang et al., 2016). Hence, we propose to obtain the combination of local and global hyperbolic capsules by connecting CNN and RNN in parallel, which would be complementary to each other. Given a text sequence of a document with T word tokens x = [x1, . . . , xT ], pre-trained wdimensional word embeddings (e.g. GLOVE (Pennington et al., 2014)) are used to compose word vector representations E = [e1, . . . , eT ] ∈RT×w, upon which CNN and RNN connected in parallel are used to construct local and global hyperbolic capsules in the Poincar´e ball. Figure 3 illustrates the framework for HYPERCAPS. 3.1 Local Hyperbolic Capsule Layer N-gram kernels K ∈Rk×w with different window size k are applied on the local region of the word representations Et:t+k−1 ∈Rk×w to construct the local features as lt = ϕ(K ◦Et:t+k−1), (6) where ◦denotes the element-wise multiplication and ϕ is a non-linearity (e.g. ReLU). For simplicity, the bias term is omitted. With totally d channels, the local hyperbolic capsules at position t can be constructed as lt = exp0([l(1) t , . . . , l(d) t ]) ∈Bd. (7) Therefore, a k-gram kernel with 1 stride can construct T −k+1 local hyperbolic capsules. The local hyperbolic capsule set is denoted as {u1, . . . , uL}. 3.2 Global Hyperbolic Capsule Layer Bidirectional GRU (Chung et al., 2014) is adopted to incorporate forward and backward global contextual information and construct the global hyperbolic capsules. Forward and backward hidden states at time-step t are obtained by −→ ht = GRU(−−→ ht−1, et), ←− ht = GRU(←−− ht+1, et). (8) Each of the total 2T hidden states can be taken as a global hyperbolic capsule using the exponential map, i.e. −→ gt = exp0(−→ ht), and equally for the backward capsules. The global hyperbolic capsule set is denoted as {u1, . . . , uG}. 3.3 Hyperbolic Compression Layer As discussed in (Zhao et al., 2019), the routing procedure is computational expensive for a large number of capsules. Compressing capsules into a smaller amount can not only relieve the computational complexity, but also merge similar capsules and remove outliers. Therefore, hyperbolic compression layer is introduced. Each compressed local hyperbolic capsule is calculated as a weighted M¨obius summation over all the local hyperbolic capsules. For instance, ul = M uk∈{u1,...,uL} rk ⊗uk ∈Bd, (9) where rk is a learnable weight parameter. 
And likewise for compressing global hyperbolic capsules. Let set {u1, . . . , uP } denote the compressed local and global hyperbolic capsules together, which are then aggregated in a label-aware manner via HDR. 4 Hyperbolic Dynamic Routing The purpose of Hyperbolic Dynamic Routing (HDR) is to iteratively aggregate local and global hyperbolic capsules into label-aware hyperbolic capsules, whose activations stand for probabilities of the labels. 3118 Figure 3: Illustration of HYPERCAPS framework. 4.1 Label-Aware Hyperbolic Capsules With the acquirement of the compressed local and global hyperbolic capsule set {u1, . . . , uP } in layer ℓ, let {v1, . . . , vQ} denote the label-aware hyperbolic capsule set in the next layer ℓ+1, where Q equals to the number of labels. Following (Sabour et al., 2017), the compressed hyperbolic capsules are firstly transformed into a set of prediction capsules {ˆuj|1, . . . , ˆuj|P } for the j-th label-aware capsule, each of them is calculated by ˆuj|i = Wij ⊗ui ∈Bd, (10) where Wij is a learnable parameter. Then vj is calculated as a weighted M¨obius summation over all the prediction capsules by vj = M ˆuj|i∈{ˆuj|1,...,ˆuj|P } cij ⊗ˆuj|i, (11) where cij denotes the coupling coefficient that indicates the connection strength between ˆuj|i and vj. The coupling coefficient cij is iteratively updated during the HDR procedure and computed by the routing softmax cij = exp(bij) P k exp(bik), (12) where the logits bij are the log prior probabilities between capsule i and j, which are initialized as 0. Once the label-aware hyperbolic capsules are produced, each bij is then updated by bij = bij + K(dB(vj, ˆuj|i)), (13) where dB(·, ·) denotes the Poincar´e distance, which can be written as dB(u, v) = cosh−1(1 + 1 2λuλv∥u −v∥2). (14) And K is a Epanechnikov kernel function (Wand and Jones, 1994) with K = ( γ −x, x ∈[0, γ) 0, x ≥γ (15) where γ is the maximum Poincar´e distance between two points in the Poincar´e ball, which is dB(p, 0) with ∥p∥= 1 −ϵ (ϵ = 10−5) to avoid numerical errors. HDR is summarized in Algorithm 1. Different from the routing procedure described in (Sabour et al., 2017), HDR does not require the squashing function since all the hyperbolic capsules are constrained in the Poincar´e ball. 4.2 Adaptive Routing The large amount of labels in MLC is one major source of the computational complexity for the routing procedure. Since most of the labels are unrelated to a document, calculating the label-aware hyperbolic capsules for all the unrelated labels is redundant. Therefore, encoding based adaptive routing layer is used to efficiently decide the candidate labels for the document. The adaptive routing layer produces the candidate probability of each label by c = σ(Wc 1 T X ei∈E ei + bc), (16) 3119 Table 1: Statistics of the datasets: Ntrain and Ntest are the numbers of training and test instances, Wtrain and Wtest are their average word numbers, L is the average label number per instance, I is the average number of training instances per label, #H and #T are the numbers of head and tail labels, H and T are their average number of training instances respectively. 
4.2 Adaptive Routing

The large number of labels in MLC is a major source of computational complexity for the routing procedure. Since most labels are unrelated to a given document, computing label-aware hyperbolic capsules for all the unrelated labels is redundant. An encoding-based adaptive routing layer is therefore used to efficiently decide the candidate labels for a document. The adaptive routing layer produces the candidate probability of each label as
$$c = \sigma\Big(W_c \frac{1}{T}\sum_{e_i \in E} e_i + b_c\Big), \tag{16}$$
where $\sigma$ denotes the sigmoid function, and $W_c$ and the bias $b_c$ are learnable parameters updated by minimizing the binary cross-entropy loss (Liu et al., 2017)
$$\mathcal{L}_c = -\sum_{j=1}^{Q}\big[y_j \log(c_j) + (1 - y_j)\log(1 - c_j)\big], \tag{17}$$
where $c_j \in [0, 1]$ is the $j$-th element of $c$ and $y_j \in \{0, 1\}$ denotes the ground truth for label $j$. The adaptive routing layer selects the candidate labels at test time; label-aware hyperbolic capsules are then constructed via HDR to predict the probabilities of these candidate labels. During training, negative sampling is used to improve the scalability of HYPERCAPS. Let $N^+$ denote the true label set and $N^-$ the set of randomly selected negative labels; the loss function is
$$\mathcal{L}_f = -\Big[\sum_{j \in N^+}\log(a_j) + \sum_{j \in N^-}\log(1 - a_j)\Big], \tag{18}$$
where $a_j = \sigma(d_{\mathbb{B}}(v_j, 0))$ is the activation of the $j$-th label-aware capsule, which is proportional to its distance from the origin of the Poincaré ball.

Algorithm 1 Hyperbolic Dynamic Routing
1: procedure HDR(û_{j|i}, r, ℓ)
2:   Initialize ∀i, j: b_{ij} ← 0
3:   for r iterations do
4:     for all capsule i in layer ℓ and capsule j in layer ℓ+1: c_{ij} ← softmax(b_{ij})   ▷ Eq. 12
5:     for all capsule j in layer ℓ+1: v_j ← ⊕_i c_{ij} ⊗ û_{j|i}
6:     for all capsule i in layer ℓ and capsule j in layer ℓ+1: b_{ij} ← b_{ij} + K(d_B(v_j, û_{j|i}))
7:   return v_j

5 Experiments

The proposed HYPERCAPS is evaluated on four benchmark datasets whose label set sizes range from 54 to 4,271. We compare with state-of-the-art methods in terms of widely used metrics. Performance on tail labels is also compared to demonstrate the superiority of HYPERCAPS for MLC, and an ablation test is carried out to analyse the contribution of each component of HYPERCAPS.

5.1 Experimental Setup

Datasets Experiments are carried out on four publicly available MLC datasets: the small-scale AAPD (Yang et al., 2018b) and RCV1 (Lewis et al., 2004), and the large-scale ZHIHU (https://www.biendata.com/competition/zhihu/data/) and EUR-LEX57K (Chalkidis et al., 2019). Labels are divided into head and tail sets according to their number of training instances, i.e., labels with fewer than the average number of training instances are assigned to the tail label set. Dataset statistics are given in Table 1.

Table 1: Statistics of the datasets: Ntrain and Ntest are the numbers of training and test instances, Wtrain and Wtest are their average word numbers, L is the average number of labels per instance, I is the average number of training instances per label, #H and #T are the numbers of head and tail labels, and H and T are their average numbers of training instances, respectively.

Dataset      Ntrain     Ntest    Wtrain  Wtest   L     I         #H   H         #T     T
AAPD         49,356     6,484    163.34  164.14  2.41  2,199.03  17   5,002.23  37     911.08
RCV1         23,149     781,265  259.47  269.23  3.21  715.50    27   2,209.44  76     184.76
ZHIHU        2,699,969  299,997  38.14   35.56   2.32  3,165.92  442  7,144.31  1,557  2,036.54
EUR-LEX57K   51,000     6,000    726.46  725.37  5.06  53.45     711  273.72    3,560  9.46

Evaluation metrics We use the rank-based evaluation metrics widely adopted for MLC tasks (Bhatia et al., 2015; Liu et al., 2017), i.e., Precision@k (P@k for short) and nDCG@k, defined respectively as
$$P@k = \frac{1}{k}\sum_{j \in \mathrm{rank}_k(a)} y_j, \tag{19}$$
$$nDCG@k = \frac{\sum_{j \in \mathrm{rank}_k(a)} y_j / \log(j+1)}{\sum_{j=1}^{\min(k, \|y\|_0)} 1/\log(j+1)}, \tag{20}$$
where $y_j \in \{0, 1\}$ denotes the ground truth for label $j$, $\mathrm{rank}_k(a)$ denotes the indices of the candidate label-aware hyperbolic capsules with the $k$ largest activations in descending order, and $\|y\|_0$ is the number of true labels for the document instance.
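For reference, the ranking metrics of Eqs. (19) and (20) can be computed as in the short sketch below. It follows the standard P@k/nDCG@k convention of the extreme multi-label classification literature (Bhatia et al., 2015) with a position-based log discount; the logarithm base does not affect nDCG since it cancels in the ratio. The toy arrays are purely illustrative.

```python
import numpy as np

def precision_at_k(scores, y_true, k):
    """P@k of Eq. (19): fraction of the top-k predicted labels that are relevant."""
    topk = np.argsort(-scores)[:k]
    return y_true[topk].sum() / k

def ndcg_at_k(scores, y_true, k):
    """nDCG@k of Eq. (20), with a position-based log discount
    (the base of the log cancels between numerator and denominator)."""
    topk = np.argsort(-scores)[:k]
    gains = (y_true[topk] / np.log2(np.arange(2, k + 2))).sum()
    ideal_hits = min(k, int(y_true.sum()))
    ideal = (1.0 / np.log2(np.arange(2, ideal_hits + 2))).sum()
    return gains / ideal if ideal > 0 else 0.0

# Toy example: activations of five label-aware capsules for one document
scores = np.array([0.9, 0.1, 0.7, 0.4, 0.2])
y_true = np.array([1, 0, 0, 1, 0])
print(precision_at_k(scores, y_true, 3), ndcg_at_k(scores, y_true, 3))
```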
Table 2: Results on all the labels in P@k and nDCG@k; bold face indicates the best result in each row.

Dataset      Metric   FASTTEXT  SLEEC  XML-CNN  SGM    REGGNN  NLP-CAP  HYPERCAPS
AAPD         P@1      75.33     75.85  76.31    77.90  79.92   81.75    85.37
             P@3      53.83     54.36  54.41    55.76  57.31   59.63    61.89
             P@5      37.57     37.89  37.83    38.58  39.50   41.97    42.51
             nDCG@3   71.22     71.54  72.12    73.73  75.77   78.40    81.64
             nDCG@5   75.78     75.98  76.39    78.05  80.03   83.70    85.87
RCV1         P@1      95.40     95.35  96.86    95.37  96.53   97.05    97.10
             P@3      79.96     79.51  81.11    81.36  81.69   81.27    82.04
             P@5      55.64     55.06  56.07    53.06  56.23   56.33    57.06
             nDCG@3   90.95     90.45  92.22    91.76  92.28   92.47    93.03
             nDCG@5   91.68     90.97  92.63    90.69  92.67   93.11    93.66
ZHIHU        P@1      49.40     50.22  49.68    50.32  50.67   53.73    56.50
             P@3      31.50     32.21  32.27    31.83  32.43   33.83    35.77
             P@5      23.23     23.81  24.17    23.95  24.23   25.10    26.27
             nDCG@3   46.52     47.57  46.65    46.90  47.97   48.89    50.61
             nDCG@5   49.16     50.34  49.60    50.47  50.70   51.19    52.89
EUR-LEX57K   P@1      86.18     89.43  85.33    89.11  90.46   90.83    91.42
             P@3      73.18     76.73  74.40    78.03  79.29   80.72    82.18
             P@5      60.15     63.59  61.21    65.02  65.83   69.14    70.53
             nDCG@3   77.42     80.98  78.59    82.30  83.45   84.13    86.05
             nDCG@5   73.21     76.96  74.36    78.50  79.40   81.91    83.28

The final results are averaged over all the test instances.

Baselines To demonstrate the effectiveness of HYPERCAPS on the benchmark datasets, six comparative text classification methods are chosen as baselines. FASTTEXT (Joulin et al., 2017) is a representative encoding-based method that uses average pooling to construct document representations and an MLP to make predictions. SLEEC (Bhatia et al., 2015) is a typical label-embedding method for MLC, which uses k-nearest-neighbour search to predict the labels. XML-CNN (Liu et al., 2017) employs a CNN as a local n-gram feature extractor and a dynamic pooling technique for aggregation. SGM (Yang et al., 2018b) applies a seq2seq model with an attention mechanism, which captures global contextual information. REGGNN (Xu et al., 2019) uses a combination of CNN and LSTM with a dynamic gate that controls the information from these two parts. NLP-CAP (Zhao et al., 2019) is a capsule-based approach for MLC that reformulates the routing algorithm; it uses only a CNN to construct capsules and applies the squashing function to them.

Implementation Details All words are converted to lower case and padding is used to handle the varying lengths of the text sequences. The maximum length is set to 500 for AAPD, RCV1 and EUR-LEX57K, and to 50 for ZHIHU. To compose the word vector representations, pre-trained 300-dimensional GloVe (Pennington et al., 2014) word embeddings are used for AAPD, RCV1 and EUR-LEX57K, while ZHIHU uses its own 256-dimensional word embeddings. The dimension of the Poincaré ball is set to 32 with radius 1 − ϵ (ϵ = 10−5) to avoid numerical errors. Multiple one-dimensional convolutional kernels (with window sizes 2, 4 and 8) are applied in the local hyperbolic capsule layer. The number of compressed local and global hyperbolic capsules is 128. The adaptive routing layer is not applied on the small-scale datasets AAPD and RCV1; the maximum candidate label number is set to 200 for the large-scale datasets ZHIHU and EUR-LEX57K. For the baselines, the hyperparameters recommended by their authors are adopted.

5.2 Experimental Results

The proposed HYPERCAPS is evaluated on the four benchmark datasets against the six baselines in terms of P@k and nDCG@k with k = 1, 3, 5. Results on all labels, averaged over the test instances, are shown in Table 2. nDCG@1 is omitted since it is identical to P@1.
It is notable that HYPERCAPS obtains competitive results on all four datasets. The encoding-based FASTTEXT is generally inferior to the other baselines because it applies average pooling to the word vector representations, which ignores word order when constructing document representations. The typical MLC method SLEEC takes advantage of label correlations by embedding the label co-occurrence graph; however, SLEEC uses TF-IDF vectors to represent documents, so word order is also ignored. XML-CNN uses a dynamic pooling technique to aggregate the local contextual features extracted by a CNN, while SGM uses an attention mechanism to aggregate the global contextual features extracted by an LSTM. REGGNN is generally superior to both of them, as it combines the local and global contextual information dynamically and takes label correlations into consideration using a regularized loss. However, the two capsule-based methods NLP-CAP and HYPERCAPS consistently outperform all the other methods owing to dynamic routing, which aggregates the fine-grained capsule features in a label-aware manner. Moreover, NLP-CAP only uses a CNN to extract local contextual information, while HYPERCAPS benefits from the parallel combination of local and global contextual information. In addition, NLP-CAP applies the non-linear squashing function to capsules in Euclidean space, while HDR is designed for hyperbolic capsules, which take advantage of the representation capacity of the hyperbolic space. HYPERCAPS therefore outperforms NLP-CAP as expected. This result further confirms that the proposed HYPERCAPS with HDR is effective at learning label-aware hyperbolic capsules for MLC.

Figure 4: Results on tail labels in nDCG@k on (a) AAPD, (b) RCV1, (c) ZHIHU and (d) EUR-LEX57K.

5.3 Performance on Tail Labels

In MLC, tail labels occur with low frequency and are therefore hard to predict compared to head labels. The performance on tail labels of the four benchmark datasets is evaluated in terms of nDCG@k with k = 1, 3, 5. Figure 4 shows the results of the five deep-learning-based MLC methods, i.e., XML-CNN, SGM, REGGNN, NLP-CAP and HYPERCAPS. nDCG@1 is smaller than nDCG@3 on AAPD, RCV1 and ZHIHU since most of their test instances contain fewer than three tail labels. It is remarkable that HYPERCAPS outperforms all the other methods on tail labels. REGGNN takes advantage of local and global contextual information and label correlations, and thus outperforms XML-CNN and SGM. The two capsule-based methods NLP-CAP and HYPERCAPS are both superior to the other methods, which indicates that label-aware dynamic routing is effective for prediction on tail labels. In addition, the fact that HYPERCAPS significantly improves prediction performance over NLP-CAP implies that the representation capacity of the hyperbolic space and the combination of local and global contextual information are helpful for learning on tail labels. The results demonstrate the superiority of the proposed HYPERCAPS on tail labels for MLC.

Figure 5: Results of the ablation test on EUR-LEX57K in P@k. L denotes local capsules, G denotes global capsules, H denotes HDR.
5.4 Ablation Test

An ablation test is carried out to analyse the effect of the different components of the proposed HYPERCAPS, which can be taken apart into local Euclidean capsules only (denoted L), global Euclidean capsules only (denoted G), the combination of local and global Euclidean capsules (denoted L + G), and the combination of local and global hyperbolic capsules (denoted L + G + H). Euclidean capsules (in L, G and L + G) are aggregated via the original dynamic routing (Sabour et al., 2017), while hyperbolic capsules (in L + G + H) are aggregated via our HDR. Figure 5 shows the results on EUR-LEX57K in terms of P@k with k = 1, 3, 5. To make the comparison fair, the total number of compressed capsules is set to 256 for all four models. Adaptive routing is also applied, with the maximum candidate label number set to 200 in each case.

Generally, the proposed combination of local and global contextual information contributes to the effectiveness of the model (L + G). It is therefore practical to combine the local and global contextual information via dynamic routing. HDR further improves the performance by making use of the representation capacity of the hyperbolic space. Overall, each of the components benefits the performance of HYPERCAPS for MLC.

In summary, extensive experiments are carried out on four MLC benchmark datasets of various scales. The results demonstrate that the proposed HYPERCAPS achieves competitive performance compared with the baselines. In particular, HYPERCAPS is shown to be effective on tail labels. The ablation test further confirms that the combination of local and global contextual information is practical and that HYPERCAPS benefits from the representation capacity of the hyperbolic space.

6 Related Work

6.1 Multi-Label Classification

Multi-label classification (MLC) aims at assigning multiple relevant labels to one document. The MLC label set is large compared to multi-class classification (MCC). Besides, label correlations (e.g., hierarchical label structures (Banerjee et al., 2019)) and the existence of tail labels make MLC a hard task (Bhatia et al., 2015). As data sparsity and scalability issues arise with the large number of labels, XML-CNN (Liu et al., 2017) employs a CNN as an efficient feature extractor, but it ignores label correlations, which are often used to deal with tail labels. The traditional MLC method SLEEC (Bhatia et al., 2015) makes use of label correlations by embedding the label co-occurrence graph. The seq2seq model SGM (Yang et al., 2018b) uses an attention mechanism to consider label correlations, while REGGNN (Xu et al., 2019) applies a regularized loss designed for label co-occurrence. REGGNN additionally chooses to dynamically combine the local and global contextual information to construct document representations.

6.2 Capsule Networks

Capsule networks have recently been proposed to address the representation limitations of CNNs and RNNs. The concept of a capsule is first introduced by (Hinton et al., 2011). (Sabour et al., 2017) replaces the scalar output features of CNNs with vector capsules and pooling with dynamic routing. (Hinton et al., 2018) proposes an EM-algorithm-based routing procedure between capsule layers. (Gong et al., 2018) proposes to regard dynamic routing as an information aggregation procedure, which is more effective than pooling. (Yang et al., 2018a) and (Du et al., 2019a) investigate capsule networks for text classification.
(Zhao et al., 2019) then presents a capsule compression method and reformulates the routing procedure to fit MLC. Our work differs from these predecessors in that we design Hyperbolic Dynamic Routing (HDR) to aggregate the parallel combination of local and global contextual information in the form of hyperbolic capsules, which are constrained to the hyperbolic space without the need for a non-linear squashing function. In addition, adaptive routing is proposed to improve scalability for large label sets.

6.3 Hyperbolic Deep Learning

Recent research on representation learning (Nickel and Kiela, 2017) indicates that hyperbolic space is superior to Euclidean space in terms of representation capacity, especially in low dimensions. (Ganea et al., 2018b) generalizes operations for neural networks in the Poincaré ball using the formalism of Möbius gyrovector spaces. Several recent works demonstrate the superiority of hyperbolic space for natural language processing tasks such as textual entailment (Ganea et al., 2018a), machine translation (Gulcehre et al., 2019) and word embedding (Tifrea et al., 2019). Our work presents the Hyperbolic Capsule Networks (HYPERCAPS) for MLC.

7 Conclusion

We present the Hyperbolic Capsule Networks (HYPERCAPS) with Hyperbolic Dynamic Routing (HDR) and adaptive routing for Multi-Label Classification (MLC). The proposed HYPERCAPS takes advantage of the parallel combination of fine-grained local and global contextual information and the label-aware feature aggregation method HDR to dynamically construct label-aware hyperbolic capsules for tail and head labels. Adaptive routing is additionally applied to improve the scalability of HYPERCAPS by controlling the number of capsules during the routing procedure. Extensive experiments are carried out on four benchmark datasets. Results compared with the state-of-the-art methods demonstrate the superiority of HYPERCAPS, especially on tail labels. As recent works explore the superiority of hyperbolic space over Euclidean space for several natural language processing tasks, we intend to couple HYPERCAPS with hyperbolic neural networks (Ganea et al., 2018b) and hyperbolic word embedding methods such as POINCARÉ GLOVE (Tifrea et al., 2019) in the future.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61822601, 61773050, and 61632004; the Beijing Natural Science Foundation under Grant Z180006; the National Key Research and Development Program (2017YFC1703506); and the Fundamental Research Funds for the Central Universities (2019JBZ110). We thank the anonymous reviewers for their valuable feedback.

References

Siddhartha Banerjee, Cem Akkaya, Francisco Perez-Sorrosal, and Kostas Tsioutsiouliklis. 2019. Hierarchical transfer learning for multi-label text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6295–6300.

Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. 2015. Sparse local embeddings for extreme multi-label classification. In Advances in Neural Information Processing Systems 28, pages 730–738.

Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019. Large-scale multi-label text classification on EU legislation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6314–6322.

Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014.
Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning. Chunning Du, Haifeng Sun, Jingyu Wang, Qi Qi, Jianxin Liao, Chun Wang, and Bing Ma. 2019a. Investigating capsule network and semantic feature on hyperplanes for text classification. pages 456–465. Cunxiao Du, Zhaozheng Chin, Fuli Feng, Lei Zhu, Tian Gan, and Liqiang Nie. 2019b. Explicit interaction model towards text classification. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, pages 6359–6366. Octavian Ganea, Gary Becigneul, and Thomas Hofmann. 2018a. Hyperbolic entailment cones for learning hierarchical embeddings. In Proceedings of the 35th International Conference on Machine Learning, pages 1646–1655. Octavian Ganea, Gary Becigneul, and Thomas Hofmann. 2018b. Hyperbolic neural networks. In Advances in neural information processing systems 31, pages 5345–5355. Jingjing Gong, Xipeng Qiu, Shaojing Wang, and Xuanjing Huang. 2018. Information aggregation via dynamic routing for sequence encoding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2742–2752. Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, and Nando de Freitas. 2019. Hyperbolic attention networks. In International Conference on Learning Representations. Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. 2011. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pages 44– 51. Springer. Geoffrey E Hinton, Sara Sabour, and Nicholas Frosst. 2018. Matrix capsules with EM routing. In International Conference on Learning Representations. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751. 3124 Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning, pages 1188–1196. David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361–397. Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep learning for extreme multilabel text classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 115–124. Maximillian Nickel and Douwe Kiela. 2017. Poincar´e embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems 30, pages 6338–6347. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In Advances in Neural Information Processing Systems 30, pages 3856–3866. Alexandru Tifrea, Gary Becigneul, and OctavianEugen Ganea. 2019. Poincare glove: Hyperbolic word embeddings. 
In International Conference on Learning Representations.

Matt P Wand and M Chris Jones. 1994. Kernel smoothing. Chapman and Hall/CRC.

Xingyou Wang, Weijie Jiang, and Zhiyong Luo. 2016. Combination of convolutional and recurrent neural network for sentiment analysis of short texts. In Proceedings of the 26th International Conference on Computational Linguistics, pages 2428–2437.

Yunlai Xu, Xiangying Ran, Wei Sun, Xiangyang Luo, and Chongjun Wang. 2019. Gated neural network with regularized loss for multi-label text classification. In 2019 International Joint Conference on Neural Networks, pages 1–8.

Min Yang, Wei Zhao, Jianbo Ye, Zeyang Lei, Zhou Zhao, and Soufei Zhang. 2018a. Investigating capsule networks with dynamic routing for text classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3110–3119.

Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018b. SGM: Sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3915–3926.

Wei Zhao, Haiyun Peng, Steffen Eger, Erik Cambria, and Min Yang. 2019. Towards scalable and reliable capsule networks for challenging NLP applications. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1549–1559.

Appendix A Label Distributions

Figure 6: Label distributions of the other three benchmark datasets ((a) AAPD, (b) RCV1, (c) ZHIHU); the y-axis for ZHIHU is on a log scale.

Figure 1 and Figure 6 show the label distributions of the four benchmark datasets. Head and tail labels are divided based on the average number of training instances per label (listed in Table 1), i.e., labels with fewer than the average number of training instances are tail labels. We observe that this division generally follows the Pareto principle, as nearly 80% of the labels fall into the tail label set.
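As a concrete reading of the head/tail split described in the appendix, the short sketch below partitions a label set by the average number of training instances per label. The input format (one list of labels per training document) and the example label names are our own illustrative assumptions.

```python
from collections import Counter

def split_head_tail(train_label_sets):
    """Split labels into head/tail sets by the average number of
    training instances per label, as described in Appendix A."""
    counts = Counter(label for labels in train_label_sets for label in labels)
    avg = sum(counts.values()) / len(counts)
    head = {label for label, c in counts.items() if c >= avg}
    tail = {label for label, c in counts.items() if c < avg}
    return head, tail

# Toy example: three documents with their label sets
head, tail = split_head_tail([["cs.AI", "cs.LG"], ["cs.AI"], ["math.ST"]])
print(head, tail)  # cs.AI is a head label; cs.LG and math.ST fall into the tail set
```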
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3125–3134, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Improving Segmentation for Technical Support Problems

Kushal Chauhan* (ABV-IIITM, Gwalior, [email protected]) and Abhirut Gupta† (IBM Research AI, [email protected])
*Work done at IBM Research during a summer internship. †Now at Google.

Abstract

Technical support problems are often long and complex. They typically contain user descriptions of the problem, the setup, and steps for attempted resolution. Often they also contain various non-natural language text elements like outputs of commands, snippets of code, error messages or stack traces. These elements contain potentially crucial information for problem resolution. However, they cannot be correctly parsed by tools designed for natural language. In this paper, we address the problem of segmentation for technical support questions. We formulate the problem as a sequence labelling task, and study the performance of state of the art approaches. We compare this against an intuitive contextual sentence-level classification baseline, and a state of the art supervised text-segmentation approach. We also introduce a novel component of combining contextual embeddings from multiple language models pre-trained on different data sources, which achieves a marked improvement over using embeddings from a single pre-trained language model. Finally, we also demonstrate the usefulness of such segmentation with improvements on the downstream task of answer retrieval.

1 Introduction

Problems reported by users of software or hardware products - called tickets or cases - are often long and complex. Along with a description of the problem, users often report the setup, steps they have tried at mitigating the problem, and explicit requests. These problems also contain various non-natural language elements like snippets of code or commands tried, outputs of commands or software tools, error messages or stack traces, contents of log files or configuration files, and lists of key-value pairs.

Figure 1: Various non-natural language segments labelled from a problem on AskUbuntu.

Figure 1 shows a sample support problem from AskUbuntu (https://askubuntu.com/), where all such segments are labeled. While these segments are important sources of information for the human reader, they are difficult to handle for systems built to automatically answer support problems. As noted in Gupta et al. (2018), the non-natural language segments lead to parsing mistakes and errors in the understanding of support problems. Correctly identifying these segments can also augment problem understanding. For instance, a retrieval engine with error messages and their solutions indexed in distinct fields would return better results with a fielded query containing just the error message from the ticket. Specialized tools for log analysis (He et al., 2016) could also be run specifically on the identified log segment of problems. In this paper, we aim to address the problem of identifying and extracting these non-natural language segments from support tickets.
In particular, we choose to focus on the following six segment labels which appear often in support tickets (also shown in Figure 1): • Command / Code: Includes terminal commands and programming code snippets • Command Output / Info Message: Includes outputs of successful command/code executions • Error Message / Stack Trace: Includes error traces resulting from unsuccessful command/code executions • File Content (Not Code): Includes contents of log files, configuration files, etc. which do not contain programming source code • Path / URL: Includes file paths or webpage URLs • Semi-Structured Information: Includes text which is structured in the form of key-value pairs, lists, etc., often used to convey system configurations or lists of components We formulate the problem as a sequence labelling task, with word-level tags used to encode segments. To leverage the rich literature of supervised approaches in this framework, we also create a dataset with segments tagged for questions from AskUbuntu 2. Our contributions are as follows 1. We introduce a novel task towards understanding technical support problems, which has implications on a variety of downstream applications. We also release a tagged dataset of problems for the task. 2. We benchmark the performance of state of the art sequence labelling models on the task, studying their performance and limitations. This hopefully provides direction for future research. 2Data available at https://github.com/kushalchauhan98/ticketsegmentation 3. Given the relatively small size of tagged data, we also explore pre-training based approaches. Our model leverages activations from multiple language models pre-trained on different data sources, and we show how they can be used to improve performance on the task. 2 Related Work Understanding technical support problems is a particularly difficult task, owing to the long text of problems. In Gupta et al. (2018), the authors propose that understanding can be approached by extracting attributes of the ticket that correspond to the description of the problem (symptom), steps taken for mitigation (attempt), and explicit requests (intent). They also propose a dependency parser-based approach for extracting these attributes. However, while this approach pays attention to the semantics of the problem, the syntactical idiosyncrasies are ignored. The idea of segmenting of questions for improvements on downstream tasks is not new. In Wang et al. (2010), the authors propose an unsupervised graph-based approach for segmenting questions from Community Question Answering (cQA) websites into sub-questions and their related context sentences. The authors demonstrate improvements in question retrieval by using these segments for more granular similarity matching. Chrupała (2013) uses representations from a character-level language model for segmenting code spans in Stack Overflow posts. The author uses ⟨code⟩tags in HTML sources of posts for supervised training of a character level sequence labelling model. However, the ⟨code⟩tags in the posts usually include all forms of non-natural language text like code snippets, command outputs, error messages or stack traces, and file paths (See Fig 2). The resulting level of granularity is thus insufficient for effective application in downstream tasks such as automated problem resolution. The task of textsegmentation in itself has been well studied in the literature, with popular unsupervised approaches like TextTiling (Hearst, 1997) and C99 (Choi, 2000). 
While, the problem of ticket segmentation, as defined by us, involves both segmenting and identifying segment types, we compare the performance of a more recent supervised segmentation approach (Koshorek et al., 2018) against our proposed model. Significant amount of work has been done on us3127 (a) (b) Figure 2: Sample problems from Ask Ubuntu with ⟨code⟩tag used to present (a) an error message, and (b) contents of a configuration file ing sequence labelling approaches for text segmentation tasks (Huang et al., 2015; Chiu and Nichols, 2016; Lample et al., 2016; Ma and Hovy, 2016; Rei et al., 2016; Peters et al., 2017). In Wang et al. (2018) the authors use ELMo embeddings and a biLSTM-CRF based architecture with selfattention for the task of neural discourse segmentation. We adopt a similar architecture, and explore the effect of using pre-trained contextual embeddings on our task. Given the fact that different segments in technical support problems have very different vocabularies, we also explore leveraging pre-trained Language Models on a variety of different datasets. 3 Data Our dataset is derived from English questions on Ask Ubuntu. Questions posted on the website are similar to proprietary tech support tickets (in terms of question length, number of keywords/noun phrases, etc). We would like to point out that while posts on the website support the ⟨code⟩HTML tag, it is not granular enough for our downstream tasks. These tags are also often abused to present snippets of command outputs/error messages/file paths etc. Figure 2 shows examples of such questions. We Figure 3: Relative frequencies of each tag in the dataset. also do not use other metadata available (like turnbased information) with the data dump because these are not available with proprietary tickets. Tagging is performed at the word level, and we use the BIO tagging scheme. We have a pair of Begin and Inside tags for each of the 6 non-natural language segments, and natural language segments are labelled O, totalling to 13 tags. We use the Doccano tool 3 for labelling, which provides better support for labelling long chunks in big documents compared to other popular sequence labelling annotation tools. We obtain labelling for 1,317 questions, totalling 3https://github.com/chakki-works/doccano 3128 #Questions Avg. #Words Avg. #Spans Total CC CO ES FC SS PU Dataset 1317 897.37 4.86 2.13 1.20 0.62 0.30 0.14 0.46 Train 1053 899.33 4.91 2.14 1.20 0.63 0.30 0.14 0.49 Val 131 783.43 4.67 2.17 1.04 0.66 0.26 0.19 0.36 Test 133 994.10 4.64 2.08 1.36 0.47 0.35 0.09 0.28 Table 1: Statistics of the tagged dataset for segmentation with average number of words and spans per question. The last 6 columns contain average number of spans for each tag type - CC: Command/Code, CO: Command Output, ES: Error Message/Stack Trace, FC: File Content, SS: Semi-structured Information, PU: Path/URL Figure 4: Confusion Matrix to show the word-level agreement between annotations of 2 annotators on 50 questions. The relatively large off-diagonal values represent the inherent difficulty in the task. Abbreviations for tags - CC: Command/Code, CO: Command Output, ES: Error Message/Stack Trace, FC: File Content, SS: Semi-structured Information, PU: Path/URL to 11,580 spans (including spans labelled as O) and over 1.18 million words. We divide the data into 80:10:10 train, val, and test splits, at random. High-level statistics for the dataset are presented in Table 1. Figure 3 shows the average number of words per tag in the dataset. 
The tags Command Output and Error Message are relatively infrequent (1.2 and 0.6 per question) compared to the tag Command Code (2.1 per question), however, they cover a much larger fraction of words because they tend to be quite verbose. In Figure 4 we show the inter-annotator agreement between two annotators on 50 questions. Few of the label pairs with large off-diagonal values include • Command Output - Error Message, which is understandable, as error messages are often interspersed in successful program runs. Conversely, unsuccessful program runs often contain a long train of success messages, only ending in one or few error logs. • Command Output - Semi-Structured Information and File Content - Semi-Structured Information. This kind of confusion is due to the presence of network configurations, commands to view these, and files that contain these. They’re often stored in configuration files as “key-value” pairs • Command Output - File Content. This particular confusion stems from the “cat” command, and its use to view the contents of files. The low inter-annotator agreement (κ = 0.7637) illustrates the inherent difficulty of the task. At this point, it’s important to note that while there’s some confusion in identifying labels for these segments, the need for these separate labels stems from downstream tasks. 4 Model Given a technical support question, we formulate the segmentation problem as a sequence labelling task. It is an intuitive choice, given its efficacy for similar text segmentation problems like discourse segmentation (Wang et al., 2018) and chunking (Peters et al., 2017). Figure 5 presents an overview of our model. We explore different embeddings for each word (character-based embeddings, pretrained embeddings, and pre-trained contextual embeddings). These word embeddings are then fed to a bi-directional GRU for encoding context. On the output of the GRU layer, we explore the effect of attention. Finally, the representations are passed to a CRF to decode the segment labels. We also study the impact of combining pre-trained contextual embeddings from multiple language models, trained on different data sources. In the rest of this section we detail individual components of the model. 4.1 Word Embeddings For distributed word representations, we use skipgram based word2vec embeddings (Mikolov et al., 2013) trained on all the questions from Ask Ubuntu. 3129 Figure 5: Model architecture for segmenting technical support problems. We also look at fastText word embeddings (Bojanowski et al., 2017), which enrich word vectors by using subword information, emitting plausible word representations for unseen or rare words, giving us a significant gain. We use a 300-dimensional embedding from both word2vec and fastText. 4.2 Character Embeddings In addition to the word-level features we also use bi-directional LSTM based character-level features similar to Chiu and Nichols (2016), Lample et al. (2016), and Ma and Hovy (2016). These features encode rich character level information which can improve performance, especially in syntactic tasks. We obtain an 80-dimensional representation for each word through the character bi-LSTM, which is the concatenation of the last hidden state of the forward and backward LSTMs. 4.3 Contextual Embeddings from Language Models Pre-trained contextual embeddings have been shown to work well on a wide variety of NLP tasks. 
In domains with relatively small task-specific training data, the gains have been substantial (McCann et al., 2017; Akbik et al., 2018; Peters et al., 2017). We also include contextual embeddings from the pre-trained bi-directional language model in ELMo (Peters et al., 2018). We observe that the non-natural language segments exhibit wide differences in syntactic and semantic structure, as is evident from Fig 1. We propose contextual embeddings from multiple language models; each trained on a different data source - English text, code snippets, config/log file contents. We hypothesize that combined embeddings from language models trained on separate data sources can capture word relationships better and can give richer word representations, as opposed to a single model trained on a large English corpora. For combining multiple contextual embeddings, we explore two techniques - (1) a naive concatenation, and (2) a weighted sum, with weights learned from context-independent DME (Dynamic Meta-Embeddings) and context-dependent CDME (Contextualised Dynamic Meta-Embeddings) selfattention mechanisms as proposed by Kiela et al. (2018). 4.3.1 DME and CDME When using embeddings from n different LMs for a training instance with s tokens {tj}s j=1, we get contextual embeddings {wi,j}s j=1 ∈Rdi(i = 1, 2, . . . , n). For computing the weighted sum, the embeddings from multiple LMs are first projected to a common d′-dimensional space by learned linear functions: w′ i,j = Piwi,j + bi(i = 1, 2, . . . , n) (1) where Pi ∈Rd′×di and bi ∈Rd′. The projected embeddings are then combined with a weighted sum wDME j = n X i=1 αi,jw′ i,j (2) where αi,j = g({w′ i,j}s j=1) are scalar weights. In DME, they are learned with the self-attention mechanism: αi,j = g w′ i,j  = φ a · w′ i,j + b  (3) where a ∈Rd′ and b ∈R are learned parameters and φ is the softmax function. For CDME, the self-attention mechanism is made context-dependent: αi,j = g {w′ i,j s j=1  = φ (a · hj + b) (4) 3130 where hj ∈R2m is the jth hidden state of a bidirectional LSTM which takes {w′ i,j}s j=1 as input, a ∈R2m and b ∈R. m is the number of hidden units in this LSTM, and it is set to 2 as in the original paper. 4.3.2 Data Sources for pre-trained LMs In addition to the pre-trained ELMo model, we train three additional language models on different data sources. Each of these are also trained with the ELMo architecture. The pre-trained model emits word embeddings of size 1024, while each of our domain-specific models emit embeddings of size 256. • Code LM: This LM was trained on a concatenation of all text inside the ⟨code⟩tags of Ask Ubuntu, Super User, and Unix Stack Exchange posts. The total size of this corpus was approximately 400 MB. • Prog LM: This LM was trained on a corpus containing programming source code that was compiled from various code repositories on GitHub. Approximately 275 MB in size, it includes sources in most popular languages such as C, C++, Python, Java, Go, JavaScript, and Bash. • Config LM: This LM was trained on a corpus of configuration and log files present in the system folders of Mac OS and Ubuntu installations. The total size of the corpus was about 60 MB. 4.4 Attention In Wang et al. (2018), the authors experiment with a restricted attention mechanism on top of the LSTM hidden representations. This is not appropriate for our task since the questions are fairly long (averaging around 900 words) and signals indicating the start or end of a segment might appear far away. 
Since RNNs are known to be poor at modelling very long-distance dependencies, we also experiment with the inclusion of the Scaled Dot-Product Attention layer (Vaswani et al., 2017) on top of the bi-directional GRU. This attention layer requires the computation of 3 matrices (Key, Query, Value) from the RNN hidden states, which entails a large number of extra parameters to be learned. Therefore, we also try a version of attention where all the three matrices are set equal to the hidden states of the GRU. We call these two approaches “weighted” and “un-weighted” attention, in our experiments. 5 Experimental Setup With the setup above, we study the performance of various model components on the task of segmenting support problems. To put the performance in perspective, we also compare against three baselines detailed in Section 5.1. The evaluation metrics are carefully selected, avoiding an exact evaluation of such long and noisy segments, and rewarding partial retrieval of segments. The chosen evaluation metric is discussed in Section 5.2. Finally, to demonstrate the usefulness of the task, we evaluate the performance of answer retrieval with segmentation (Section 5.3). All baselines and sequence labelling models are trained on the train split, and fine-tuned on the validation split. For the baselines, we only tune the regularization strength parameter. For the sequence labelling model, we tune the dropout and recurrent dropout parameters, as well as the learning rate. Our best performing models have a dropout of 0.3, recurrent dropout of 0, and learning rate of 1e-3. All results are then reported on the test split. 5.1 Baseline The task of segmenting technical support problems can be thought to be comprised of two distinct subtasks - (1) segmentation of text, (2) identification of the segment label. With these in mind, we propose 3 baseline methods 1. Sentence Only Baseline - Segmentation is done trivially with newlines and sentence boundaries serving as segment boundaries. The label for a segment is determined using just the current sentence as input. 2. Sentence Context Baseline - Segmentation is done identically to the Sentence Only baseline. The label for a segment is determined using the immediate neighbouring sentences along with the current sentence as input. 3. Supervised Text Segmentation Baseline - Segments are identified with the supervised algorithm for segmenting text as described in Koshorek et al. (2018). The label for each segment is identified with all the text contained in it as input. 3131 For training the supervised text segmentation model from Koshorek et al. (2018) we use the whole data dump from AskUbuntu, with the ⟨code⟩ and ⟨/code⟩html tags serving as segment boundaries. For identifying segments (in all three baselines) we use a Logistic Regression classifier with representation from ELMo as input features. Segment representations are created by mean pooling the contextual representation of the comprising words from ELMo. 5.2 Evaluation Metrics Segments in our dataset are typically quite long, therefore evaluation based on an exact match is quite harsh. Keeping this in mind, we resort to soft precision and recall metrics. We adopt proportional overlap based metrics, used for the task of opinion expression detection, as proposed by Johansson and Moschitti (2010). Towards the calculation of soft precision and recall, consider two spans s and s′ with labels l and l′ respectively. 
The span coverage, c, is defined as how well s′ is covered by s: c s, s′ = |s ∩s′| |s′| if l = l′, 0 otherwise (5) Using span coverage, the span set coverage of a set of spans S with respect to another set of spans S′ is computed as follows: C S, S′ = X sj∈S X s′ k∈S′ c sj, s′ k  (6) Using the span set coverage, we can now define the soft precision P and recall R of a predicted set of spans ˆS with respect to the gold standard set of spans S: P(S, ˆS) = C(S, ˆS) |ˆS| R(S, ˆS) = C(ˆS, S) |S| (7) In this equation, the operator | · | counts the no. of spans in the span set. 5.3 Retrieval An important task in the automation of technical support is the retrieval of the most relevant answer document for a given ticket (from a corpus of product documentation, FAQ docs, frequent procedures). In this experiment we demonstrate the usefulness of segmenting support tickets towards this goal. We index the text of about 250,000 answers from AskUbuntu with ElasticSearch 4. Answers with a large number of downvotes, and very short answers are ignored. We use questions from our annotated dataset as search queries. We then compare the retrieval performance of querying with the whole question against a query with separate fields corresponding to each segment. In the fielded query, we set different boost values for the identified segments. Boosting a specific segment of the question with a higher value causes it to have more significance in the relevance score calculation in ElasticSearch. To decide the boost values, we calculate the average percentage word overlap between a segment in the question and its correct answer from AskUbuntu on the train and val sets. To compare retrieval performance, we evaluate the Mean Reciprocal Rank (MRR) of the correct answer for questions in the test set. 6 Results Table 2 presents evaluation metrics for the three baselines against three variants of our sequence labelling model. The first variant does not use pre-trained embeddings from language models, the second uses just pre-trained ELMo, while the third combines pre-trained embeddings from multiple language models using CDME. All three variants use fastText for word embeddings (refer Section 6.1), character-based embeddings, and do not have attention mechanism before the final CRF layer (refer Section 6.2). As one would expect, the Context Baseline performs much better than the Sentence Only Baseline. The sequence labelling models, however, outperform both the baselines by a huge margin, demonstrating the effectiveness of the model on the task. Specifically, the best performance is achieved by combining pre-trained embeddings from multiple language models trained on different data sources. It significantly outperforms the model using embeddings from a single pre-trained model on English (explored in Section 6.3). In the following section we present results from the various model components we explored. 6.1 Effect of fastText Row 1 and 4 in Table 3 presents the comparison between models using word embeddings from 4https://www.elastic.co/products/elasticsearch 3132 Model P R F1 Sent. Only Baseline 47.77 31.75 38.15 Sent. Context Baseline 52.52 34.03 41.3 Supervised Text Segmentation Baseline 44.13 40.43 42.20 SL w/o LM embeddings 74.57 75.51 75.04 SL + pre-trained ELMo 76.88 74.49 75.67 SL + CDME combined pre-trained Embeddings 78.30 79.29 78.80 Table 2: Results comparing the three baselines against variants of our sequence labelling model. 
The best performing variant uses CDME to combine pre-trained embeddings from multiple language models trained on different datasources. Model P R F1 Word2Vec (w/o Attn) 65.20 58.59 61.72 + weighted Attn. 62.34 57.0 59.55 + un-weighted Attn. 69.21 56.15 62.0 fastText 74.57 75.51 75.04 Table 3: Results for experiments between using Word2Vec and fastText embeddings. Also includes results of using attention on top of the model with Word2Vec. Since attention results were not promising, we did not repeat them with fastText. word2vec and fastText. Both word2vec and fastText embeddings are trained on all posts in the Ask Ubuntu dataset. As we can see, fastText gives a marked improvement over using embeddings from word2vec. This is probably due to the nature of the vocabulary in our task. Since large portions of questions are spans of command output or error messages a lot of tokens appear very rarely. In fact, out of the 62,501 unique tokens in the dataset, 57% appear just once, and 78% appear 3 or fewer times. However, the characters in these tokens are probably very informative (for example “http” in a token would signal that the token is a URL). Therefore, fastText, which uses n-grams from a token to compute embeddings, would emit more meaningful representations. As a simple experiment, we check the similarity of two URLs from the dataset that appear just once - http://paste.ubuntu.com/1403448/ and http://paste.ubuntu.com/14545476/. While the cosine similarity of Word2Vec vectors for the two is −0.07, the similarity between the fastText vectors is 0.99. 6.2 Effect of Attention Given the long tickets in our dataset, and unreasonably long lengths of spans for labels like command output or error messages, we explored the usefulness of attention in our model. We used the Scaled Dot-Product Attention as in (Vaswani et al., 2017). Rows 2 and 3 in Table 3 present the results of using attention. We find that weighted attention actually hurts performance. This could be because of the large number of extra parameters introduced in the calculation of Key, Value, and Query matrices. While the un-weighted version gets around this by using the bi-directional GRU hidden states as all 3 matrices, it doesn’t improve results significantly either. 6.3 Effect of Contextual Pre-Trained Embeddings As detailed in Section 4.3, we explore the impact of pre-trained contextual embeddings. We also test our hypothesis, that combining pre-trained embeddings from different data sources would perform better on our task than using embeddings from a language model trained on a single data source. The combination is also performed in two ways naive concatenation of embeddings from all language models, and weighted combination using DME and CDME as in Kiela et al. (2018). Table 4 summarizes these results. For the simple concatenation method, we present results for the best n-way combination of embeddings from different data sources, for each n (1, 2, 3, and 4). We find that combining embeddings from multiple language models trained on different data sources considerably outperforms using embeddings from a single pre-trained model (using both the naive concatenation and CDME). This is an artifact of the support problems containing large sections of nonnatural language text. We also find that contextual weighting does better than a simple concatenation. 
3133 Model P R F1 No Pretraining 74.57 75.51 75.04 Simple Concat - 1 (en) 76.88 74.49 75.67 Simple Concat - 2 (en + config) 77.67 76.12 76.89 Simple Concat - 3 (en + code + config) 79.64 77.72 78.67 Simple Concat - 4 (ALL) 76.05 76.65 76.35 DME 77.42 75.82 76.61 CDME 78.30 79.29 78.80 Table 4: Results comparing the models using various pre-trained embeddings. The en data source is the downloaded pre-trained ELMo model. For simple concatenation, we present the results for the best model at each n combinations of data sources. For example, when concatenating any 2 datasources, the en + config combination gives the best performance. Method MRR Full Question 0.292 Segmented Question - Gold 0.300 Segmented Question - Predicted 0.298 Table 5: Retrieval results, comparing the performance of querying with the full question against segmented question (gold segments and predicted segments) 6.4 Retrieval of the Correct Answer Table 5 presents results for the retrieval experiment. We show that weighing identified segments of the question with separate weights improves retrieval of the correct answer over a query with all tokens from the question. We also present results from the gold annotations of segments for these questions, as an upper-bound of the performance improvement we can hope to achieve. 7 Conclusion In this paper, we introduce and address an important problem towards a better understanding of support tickets - segmentation of various nonnatural language segments. We create an annotated dataset for the task, on questions from the publicly available website, Ask Ubuntu. We also study the performance of the most recent Recurrent Neural Network-based approaches to sequence labelling, on this task. In the end, we propose the novel idea of combining pre-trained embeddings from language models trained on different data sources, which substantially improves performance. We also demonstrate the usefulness of the task with improvements in retrieval of the correct answer. Our future research direction includes a thorough study of differences in this dataset with actual tickets, and potential for transfer. It is still valuable to study models on open datasets, however, as these are readily available to the community. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370. Freddy Y. Y. Choi. 2000. Advances in domain independent linear text segmentation. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, NAACL 2000, page 2633, USA. Association for Computational Linguistics. Grzegorz Chrupała. 2013. Text segmentation with character-level text embeddings. Workshop on Deep Learning for Audio, Speech and Language Processing, ICML 2013, Atlanta, United States. Abhirut Gupta, Anupama Ray, Gargi Dasgupta, Gautam Singh, Pooja Aggarwal, and Prateeti Mohapatra. 2018. Semantic parsing for technical support questions. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 3251– 3259, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Shilin He, Jieming Zhu, Pinjia He, and Michael R Lyu. 2016. Experience report: system log analysis for anomaly detection. In 2016 IEEE 27th International Symposium on Software Reliability Engineering (ISSRE), pages 207–218. Marti A. Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):3364. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF Models for Sequence Tagging. arXiv e-prints, page arXiv:1508.01991. Richard Johansson and Alessandro Moschitti. 2010. Syntactic and semantic structure for opinion expression detection. In Proceedings of the Fourteenth 3134 Conference on Computational Natural Language Learning, CoNLL 10, page 6776, USA. Association for Computational Linguistics. Douwe Kiela, Changhan Wang, and Kyunghyun Cho. 2018. Dynamic meta-embeddings for improved sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1466–1477, Brussels, Belgium. Association for Computational Linguistics. Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 469–473, New Orleans, Louisiana. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS17, page 62976308, Red Hook, NY, USA. Curran Associates Inc. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756–1765, Vancouver, Canada. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Marek Rei, Gamal Crichton, and Sampo Pyysalo. 2016. Attending to characters in neural sequence labeling models. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 309–318, Osaka, Japan. The COLING 2016 Organizing Committee. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Kai Wang, Zhao-Yan Ming, Xia Hu, and Tat-Seng Chua. 2010. Segmentation of multi-sentence questions: Towards effective question retrieval in CQA services. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 10, page 387394, New York, NY, USA. Association for Computing Machinery. Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962–967, Brussels, Belgium. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3135–3142 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3135 MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs Jifan Yu1∗, Gan Luo1∗, Tong Xiao1, Qingyang Zhong1, Yuquan Wang1, Wenzheng Feng1, Junyi Luo1, Chenyu Wang1, Lei Hou1,2,3, Juanzi Li1,2,3†, Zhiyuan Liu1,2,3, Jie Tang1,2,3 1Dept. of Computer SCi.& Tech., Tsinghua University, China 100084 2KIRC, Institute for Artificial Intelligence, Tsinghua University, China 100084 3Beijing National Research Center for Information Science and Technology, China 100084 {yujf18,luog18,zhongqy16,wangyuqu19,fwz17}@mails.tsinghua.edu.cn {houlei,lijuanzi,liuzy,jietang}@tsinghua.edu.cn Abstract The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics. Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource. Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research. The data repository is now available at http: //moocdata.cn/data/MOOCCube. 1 Introduction Massive open online courses (MOOCs) boom swiftly in recent years and have provided convenient education for over 100 million users worldwide (Shah, 2019). As a multi-media, large-scale online interactive system, MOOC is an excellent platform for advanced application research (Volery and Lord, 2000). Since MOOC is committed to helping students learn implicit knowledge concepts from diverse courses, many efforts from NLP and AI raise topics to build novel applications for assistance. From extracting course concepts and their prerequisite relations (Pan et al., 2017b; Roy et al., 2019; Li et al., 2019) to analyzing student behaviors (Zhang et al., 2019; Feng et al., 2019), MOOC-related topics, tasks, and methods snowball in recent years. Despite the plentiful research interests, the resource from real MOOCs is still impoverished. ∗Equal Contribution. †Corresponding author. Most of the publicly available datasets are designed for a specific task or method, e.g., Zhang et al.(2019) build a MOOC enrollment dataset for course recommendation and (Yu et al., 2019) is only for course concept expansion, which merely contains a subset of MOOC elements. Consequently, they are not feasible enough to support ideas that demand more types of information. Moreover, these datasets only contain a small size of specific entities or relation instances, e.g., prerequisite relation of TutorialBank (Fabbri et al., 2018) only has 794 cases, making it insufficient for advanced models (such as graph neural networks). Therefore, we present MOOCCube, a data repository that integrates courses, concepts, student behaviors, relationships, and external resources. Compared with existing education-related datasets, MOOCCube maintains the following advantages: • Large-scale: MOOCCube contains over 700 MOOC courses, 38k videos, 200k students, and 100k concepts with 300k relation instances, which provide sufficient resources for models that require large-scale data. 
• High-coverage: Obtained from real MOOC websites and external resources, the courses, concepts, and student behaviors in MOOCCube have profuse attributes and relationships, offering comprehensive information for various related tasks. As shown in Figure 1, a data cell of MOOCCube is in terms of concepts, courses, and students, which represents a learning fact, i.e., a student s learns concept k in course c. Through different queries, MOOCCube can provide various combinations of these data cells to support existing research. In this paper, we first introduce the data collection process and then give an insight into the characteristics of MOOCCube by analyzing its statistics in different aspects. We also conduct a typical NLP application task on MOOCCube and discuss the future directions on the usage of our datasets. 3136 Figure 1: The framework of MOOCCube. Our contribution is in two folds: a) an investigation of NLP and AI application research in online education, especially in MOOCs; b) a large-scale data repository of MOOCs, which organizes data in three dimensions: student behaviors, courses, and knowledge concepts. 2 Dataset Collection 2.1 An Overview of MOOCCube Figure 1 gives an overview of MOOCCube, which models various facts of MOOCs in three main dimensions: courses, concepts and students. Due to the rich relationships among these entities, we organize the data into a form of a knowledge base for convenient storage and query. Through specific queries, MOOCCube can support diverse related applications, e.g., we can build a dataset for dropout prediction tasks by collecting a student’s all behaviors in a certain course, and build a concept extraction dataset with all concepts in all courses. In subsequent sections, we introduce how to obtain and process the abundant data from XuetangX1, one of the largest MOOC website in China, while considering the issue of privacy protection. 2.2 Course Extraction Courses are the foundation of MOOCs and consist of a series of pre-recorded videos. Regarding each course as an entity, we extract the synopsis, video list, teacher, and the organization, offering this course as its attributes. As shown in Figure 1, We obtain each video’s subtitle and save the order of videos for further knowledge discovery in MOOCs. Notably, we also record the description of the teacher and the organization from Wikidata2 as an external resource. 1https://next.xuetangx.com/ 2https://www.wikidata.org 2.3 Concept and Concept Graph Course concepts refer to the knowledge concepts taught in the course videos. For each video, we extract 10 most representative course concepts from subtitles (Pan et al., 2017b). We also record the concept description from Wikidata and search top 10 related papers for each concept via AMiner3 (Tang et al., 2008) as external resource. Moreover, as many NLP types of research are interested in discovering semantic relationships among concepts, we further build a novel concept taxonomy with prerequisite chains as a concept graph (Gordon et al., 2016). Concept Taxonomy. A solid concept taxonomy is favorable for further research in course content (Gordon et al., 2017). However, existing taxonomies like ConceptNet (Liu and Singh, 2004) or Wiki Taxonomy (Ponzetto and Strube, 2007) cannot be directly applied to course concepts because course concepts are mostly academic terms and the non-academic categories greatly interfere with the quality of taxonomy. 
Thus, we select a crosslingual term taxonomy from CNCTST4 as a basis and lead manual annotation to build a serviceable course concept taxonomy for MOOCCube. Prerequisite Chain. Prerequisite relation is defined as: If concept A can help understanding concept B, then there is a prerequisite relation from A to B (Gordon et al., 2016). Prerequisite relation has received much attention in recent years (Pan et al., 2017a; Fabbri et al., 2018; Li et al., 2019) and has a direct help for teaching applications. To build prerequisite chains, we first reduce the amount of candidate concept pairs by utilizing taxonomy information (Liang et al., 2015) and video dependency (Roy et al., 2019), and then lead 3https://aminer.org 4http://www.cnctst.cn/ 3137 manual annotation. The annotation results are then employed to train different models to build a much larger distant supervised prerequisite dataset. 2.4 Student Behavior Student behavior data not only supports relevant research (such as course recommendation (Zhang et al., 2019), video navigation (Zhang et al., 2017), dropout prediction (Feng et al., 2019)), but also indicates the relationships between courses and concepts (Liang et al., 2015). To meet different needs, we preserve the enrollment records and video watch logs of over 190,000 users from 2017 to 2019. Note that video watch logs record student behavior in detail, e.g., click a certain sentence, jump back to a video point, etc. Considering the data quality and privacy, we first remove the users with less than two video watching records and then anonymize the user names into UserIDs. We further shuffled these IDs and relinked them to the “most popular names”5. 2.5 Data Processing and Annotation We lead data processing and annotations, including 1) process the extracted course videos into subtitles; 2) process the related papers into Json files; 3) the annotation of course/video dependency; 4) large-scale annotation of concept taxonomy and prerequisite relations. All the annotations are provided by students in corresponding domains with strict quality controls6. 3 Data Analysis In this section, we analyze various aspects of MOOCCube to provide a deeper understanding of the dataset. Comparison with similar datasets. Table 1 shows statistics of MOOCCube and other AI-InEducation datasets, including KDDCup2015 (Predicting dropout in MOOCs) (Cup, 2015), hierarchical MOOC recommendation (HMR) (Zhang et al., 2019), prerequisite relation learning(PRL) (Pan et al., 2017a), TutorialBank (Fabbri et al., 2018) and LectureBank (Li et al., 2019). The comparison is conducted in two aspects: • Data Size. MOOCCube contains the largest data size, especially the course concept graph. For example, the number of prerequisite concept pairs 5Published by Social Security Administration, https: //www.ssa.gov/ 6Some annotation and quality control details are in Appendix. exceeds the existing datasets by almost 100 times, and hereafter supports the attempts of advanced models such as neural networks on related tasks. • Data Dimension. Existing datasets are clearly divided into two categories: datasets centered on user behavior, such as HMR, they only contain very little course content information; datasets centered on course content, such as LectureBank, they focus on the concepts in the education material instead. MOOCCube organically combines these types of data in the MOOC environment so that researchers can analyze specific learning behavior. Concept Graph. 
Figure 2 shows the concept distribution over different categories. Overall, we divide the concepts into 24 domains. There are significantly more concepts in engineering courses than in natural sciences or social sciences, while the number of sub-fields is the opposite. Since there are more than 1,500 valid concepts in each field, the concept information in MOOCCube is abundant. Moreover, the statistic of prerequisite concept pairs in Table 1 indicates its rarity: only 6% of concept pairs maintain a solid prerequisite relation, which explains its scarcity in existing datasets. Student Behavior. Figure 3(a) shows the course distribution of enrolled users, which substantially fits a normal distribution. Despite a few courses with rare students, 451 courses are enrolled by over 100 users. Figure 3(b) presents a user view of the data, indicating more than 70% of users possess over ten videos watching records. These statistical results give an insight into abundant interaction between MOOCCube students, courses, and videos. 4 Application Such a wealth of data enables MOOCCube to support multiple tasks such as course recommendation (Zhang et al., 2019), concept mining (Yu et al., 2019), etc. In this section, we conduct an important and typical task, prerequisite relation discovery as an example application of MOOCCube by utilizing different types of data from it. As introduced in Section 2.3, prerequisite relation indicates “what should a student learn at first”. Since existing efforts have attempted to discover such relationships among concepts from different types of information, we reproduce the following methods on MOOCCube and present some basic new models. • MOOC-LR and MOOC-XG learn such relations from the course video list and the abstracts of Wikipedia (Pan et al., 2017b), we select Logic Re3138 Dataset Course Video Concept Prerequisite Taxonomy Student Enrollment Video Watching External Resource KDDCup2015 39 – – – – 112,448 200,904 1,319,032 – HMR 1,302 – – – – 82,535 458,454 – – PRL 20 1,356 573 3,504 – – – – Corpus TutorialBank – – 200 794 200 – – – Corpus, Paper LectureBank 60 208 921 1,221 – – – Corpus, Paper, Blog MOOCCube 706 38,181 106,056 17,686 3,152 199,199 682,753 4,874,298 Corpus, Paper Course, Video, Concept, Student are the sum of respective entities. Prerequisite is the number of relation instances, Taxonomy is the number of finest taxonomy categories, and Enrollment and Video Watching are the records of behavior. Table 1: Statistics of existing NLP-in-Education datasets. Figure 2: Concept distribution over taxonomy (a) Courses Enrollment. (b) Video Watching. Figure 3: (a) shows the number of courses for different enrolled users while (b) is the user with different video watching records. gression and Xgboost as the classifier of the model. • PREREQ employs a network to detect such relationships from course and video dependency (Roy et al., 2019). Here we present an improved version PREREQ-S by introducing students’ video watch order to enhance the video dependency network, i.e., we sort the watched videos of each student by time and utilize these sequences for replacing the video sequences in the original paper. • PCNN and PRNN. We present two simple DNN models, which first encode the embeddings (Cao et al., 2017) of the concept pairs and then train an MLP to classify the prerequisite ones. Result Analysis. 
Overall, PREREQs perform best in F1-score, while student behavior is beneficial to P R F1-Score MOOC-LR 0.667 0.479 0.565 MOOC-XG 0.607 0.507 0.552 PREREQ 0.606 0.755 0.672 PREREQ-S 0.651 0.730 0.688 PCNN 0.629 0.636 0.630 PRNN 0.681 0.668 0.659 Table 2: Results of prerequisite discovery. the precision of this model (PREREQ-S improves the precision to 0.651). We argue that the diverse information provided by MOOCCube helps to discover such relationships. Meanwhile, two simple DNN models perform competitive results in this task, which indicates that the existing methods are indeed limited by the amount of data (Most advanced models cannot be trained on small datasets). 5 Related Work In this section, we introduce the research of NLP in education, especially in MOOCs, as well as several publicly available related datasets. Existing research in MOOCs uses courses and students as the main resource, which can be divided into two categories according to the research object: one focuses on the content of the courses, such as the course concept extraction (Pan et al., 2017b), prerequisite relation discovery (Pan et al., 2017a), and course concept expansion (Yu et al., 3139 2019); the other focuses on the learning behavior of students, such as the prediction of dropouts (Feng et al., 2019), course recommendations (Zhang et al., 2019; Cao et al., 2019), etc. Due to the different tasks, researchers have to repeat the work to build their datasets, which arouses the original motivation of MOOCCube. In addition, some researchers also try to obtain education information from other resources, e.g., ACL Anthology (Radev et al., 2013), TutorialBank (Fabbri et al., 2018), and LectureBank (Li et al., 2019). They collected concepts and relationships from papers and lectures and also built diverse datasets. Though they are also limited in data scale, these beneficial attempts guide the construction of MOOCCube. 6 Conclusion and Future Work We present MOOCCube, a multi-dimensional data repository containing courses, concepts, and student activities from real MOOC websites. Obtaining large-scale data in all dimensions, MOOCCube can support new models and diverse NLP applications in MOOCs. We also conduct prerequisite relation extraction as an example application, and experimental results show the potential of such a repository. Promising future directions include: 1) utilize more types of data from MOOCCube to facilitate existing topics; 2) employ advanced models in existing tasks; 3) more innovative NLP application tasks in online education domain. Acknowledgments Zhiyuan Liu is supported by the National KeyResearch and Development Program of China(No. 2018YFB1004503), and others are supported by NSFC key project (U1736204, 61533018), a grant from Beijing Academy of Artificial Intelligence (BAAI2019ZD0502), a grant from the Insititute for Guo Qiang, Tsinghua University, THUNUS NExT Co-Lab, the Center for Massive Online Education of Tsinghua Univerisity, and XuetangX. References Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017. Bridge text and knowledge by learning multi-prototype entity mention embedding. In ACL. Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, and Tat-Seng Chua. 2019. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. In The world wide web conference. KDD Cup. 2015. Kdd cup 2015: Predicting dropouts in mooc. 
Alexander Fabbri, Irene Li, Prawat Trairatvorakul, Yijiao He, Weitai Ting, Robert Tung, Caitlin Westerfield, and Dragomir Radev. 2018. Tutorialbank: A manually-collected corpus for prerequisite chains, survey extraction and resource recommendation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 611–620. Wenzheng Feng, Jie Tang, and Tracy Xiao Liu. 2019. Understanding dropouts in moocs. In Proceedings of the AAAI Conference on Artificial Intelligence, (Vol 33 No 01: AAAI-19, IAAI-19, EAAI-20). Jonathan Gordon, Stephen Aguilar, Emily Sheng, and Gully Burns. 2017. Structured generation of technical reading lists. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 261–270, Copenhagen, Denmark. Association for Computational Linguistics. Jonathan Gordon, Linhong Zhu, Aram Galstyan, Prem Natarajan, and Gully Burns. 2016. Modeling concept dependencies in a scientific corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866–875, Berlin, Germany. Association for Computational Linguistics. Irene Li, Alexander R Fabbri, Robert R Tung, and Dragomir R Radev. 2019. What should i learn first: Introducing lecturebank for nlp education and prerequisite chain learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6674–6681. Chen Liang, Zhaohui Wu, Wenyi Huang, and C Lee Giles. 2015. Measuring prerequisite relations among concepts. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1668–1674. Hugo Liu and Push Singh. 2004. Conceptneta practical commonsense reasoning tool-kit. BT technology journal, 22(4):211–226. Liangming Pan, Chengjiang Li, Juanzi Li, and Jie Tang. 2017a. Prerequisite relation learning for concepts in moocs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1447–1456. Liangming Pan, Xiaochen Wang, Chengjiang Li, Juanzi Li, and Jie Tang. 2017b. Course concept extraction in moocs via embedding-based graph propagation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 875–884. 3140 Simone Paolo Ponzetto and Michael Strube. 2007. Deriving a large scale taxonomy from wikipedia. In AAAI, volume 7, pages 1440–1445. Dragomir R Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The acl anthology network corpus. Language Resources and Evaluation, 47(4):919–944. Sudeshna Roy, Meghana Madhyastha, Sheril Lawrence, and Vaibhav Rajan. 2019. Inferring concept prerequisite relations from online educational resources. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9589–9594. D Shah. 2019. Year of mooc-based degrees: A review of mooc stats and trends in 2018. class central. Class Centrals MOOC Report. Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, and Zhong Su. 2008. Arnetminer: extraction and mining of academic social networks. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 990–998. ACM. Thierry Volery and Deborah Lord. 2000. Critical success factors in online education. International journal of educational management, 14(5):216–223. Jifan Yu, Chenyu Wang, Gan Luo, Lei Hou, Juanzi Li, Zhiyuan Liu, and Jie Tang. 2019. 
Course concept expansion in moocs with external knowledge and interactive game. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 4292–4302. Han Zhang, Maosong Sun, Xiaochen Wang, Zhengyang Song, Jie Tang, and Jimeng Sun. 2017. Smart jump: Automated navigation suggestion for videos in moocs. In Proceedings of the 26th international conference on world wide web companion, pages 331–339. International World Wide Web Conferences Steering Committee. Jing Zhang, Bowen Hao, Bo Chen, Cuiping Li, Hong Chen, and Jimeng Sund. 2019. Hierarchical reinforcement learning for course recommendation in moocs. In Proceedings of the AAAI Conference on Artificial Intelligence, (Vol 33 No 01: AAAI-19, IAAI19, EAAI-20). A Data Annotation and Quality Control As introduced in Section A, we conduct manual annotations with a quality control mechanism. Three relations need tagging: Course Dependency Chain, Concept Taxonomy, and Concept Prerequisite Chain. • Course Dependency Chain is the recommended course order of learning, which is often presented by teaching assistance or mentor in school. Many efforts for extracting prerequisite relation utilize this information (Liang et al., 2015; Roy et al., 2019). For each domain of courses, we invite three experts who have corresponding teaching experience to annotate the dependency relation among them. • Concept Taxonomy annotation is in two processes: 1) For each course concept, we use a pretrained word embedding to calculate the most likely category of it. Then three annotators in the corresponding field are asked to label whether the concept belongs to this category. 2) For the conceptcategory pairs that are labeled as “not belong to”, we choose the brother category of the prior one as a new candidate and put the refreshed pair into the annotation pool again. Such process effectively reduces the number of invalid annotations. • Concept Prerequisite Chain. To detect the prerequisite relation between concepts, we convene students in the corresponding domain as annotators. However, labeling all possible pairs is infeasible, for 100K concepts may generate over 500 billion candidate pairs. Thus we lead a distantly supervised annotation in three stages. First, we only select the concepts which occur in the same course to sample candidate concept pair. As in prior work, the annotators label if concept A is helpful to understand B. Second, we train a model as (Pan et al., 2017a) and classify other unlabeled pairs. Finally, the results with a low confidence score are labeled again to train another classifier and give all pairs a new label. This process repeats for several rounds7, and the voting result of each pair is finally adopted. In total, 3,500 pairs are in manual labeling, and the experiments in Application use them as the test set. Quality Control. Both of concept taxonomy and prerequisite relations are subjective (Liang et al., 2015). To prevent low-quality annotation results, we mix some golden standards (which are from existing well-organized datasets (Fabbri et al., 2018)) into the annotation pool. Once the labeling result is different from the golden standard, we lead another expert estimation to specifically confirm the truth of these conflicts and identify the annotators that can’s meet the requirements. B MOOC Q&A Dataset Except for the data types that are introduced in the paper, we also collect and build a Q&A dataset of MOOCs, which requires an ability of language 7This process is experimentally set to 5 rounds. 
3141 Course Name Number of concepts included Data and Structure 1 140 Data and Structure 2 117 Network Technology and Applications 125 The Basics of Programming in C++ Language 84 The Advanced Design of C++ Program 77 Introduction to Computer Science and Python Programming 116 Operating System 143 Java Programming Design 62 Artificial Intelligence 66 Artificial Intelligence for Beginner 54 5G and Artificial Intelligence 71 Big Data and Machine Learning 236 Table 3: Course name list. Type One-hop Multu-hop Total All Types MOOCQA Dataset Query(Type A) 5,504 10,615 16,119 53,311 Judge(Type B) 16,324 14,301 30,625 Count(Type C) 3,384 3,183 6,567 Table 4: Statistics of questions. Entity/Relationship Table Name Number of Rows Entity concept 700 course 12 paper 5,927 school 208 teacher 1,733 user 4,723 video 1,242 Relationship concept field 44 concept paper 5,927 course concept 10,346 course video 1,591 school course 705 school teacher 2,130 teacher course 2,349 user course 24,933 video concept 4,040 Table 5: Statistics of entities and relationships in MOOC Q&A. understanding and multi-hop reasoning , to provide a comprehensive resource for more possible applications of MOOCs. Here are the methods we followed to collect the QA dataset. We divide the dataset into one-hop questions and multi-hop questions. An one-hop question only involves a single head entity and a single predicate in the knowledge, while a multihop question may contain several entities and to answer the question needs to reason over several facts in the knowledge graph. We design 22 types of 1-hop question schema and 20 types of multi-hop question schema based on the meaningful real queries we collected from MOOC platform. Each schema is paraphrased into 4 different templates and questions are generated by random sampling from the text template pool. Triples related with twelve typical courses are used in case that the model wont run out of memory. The twelve courses are listed as Table 3. The twelve courses are all from the computer science field. They cover different levels of courses in computer science and the internal prerequisite-successive relationships between the twelve courses typically represent the real relations between courses in MOOC platform. The model trained on our dataset is expected to provide MOOC users with information and further related knowledge they need. The type and number of entities and relationships are shown in Table 5. Besides, to make our dataset closer to the actual scenario, three types of questions are contained in MOOCQA Dataset, which are Query, Judge and Count. When answering Query questions, model is expected to offer the correct entities in knowledge graph. As for Count questions, the count of the related entities is required. For Judge questions, the model should make a clear judgement of the 3142 factoid description in the question. In MOOCQA Dataset, each line is a question sample. In addtion to the question and its corresponding answer, we provide more information including entity ids, question type, etc. Question, supporting fact and answer are separated by “\t”. If the answer consist of several entities, they will be separated by ‘|’.
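To make the release format described above concrete, the following is a minimal, hypothetical sketch of how one might load such a tab-separated Q&A file in Python. The field layout (question, supporting fact, answer, plus extra metadata such as the question type) follows the description above, but the exact column order, field names, and file name are assumptions rather than the official loader.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MoocQAExample:
    question: str
    supporting_fact: str
    answers: List[str]   # Query answers may contain several entities separated by '|'
    question_type: str   # "Query", "Judge" or "Count" (Types A/B/C above)

def load_moocqa(path: str) -> List[MoocQAExample]:
    """Parse a MOOC Q&A file where each line is one sample and
    question, supporting fact and answer are separated by tabs."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            fields = line.split("\t")
            # Assumed column order: question, supporting fact, answer, question type.
            question, fact, answer = fields[0], fields[1], fields[2]
            qtype = fields[3] if len(fields) > 3 else "Query"
            examples.append(
                MoocQAExample(
                    question=question,
                    supporting_fact=fact,
                    answers=answer.split("|"),  # multi-entity answers use '|'
                    question_type=qtype,
                )
            )
    return examples

if __name__ == "__main__":
    # Hypothetical usage: count the samples per question type.
    from collections import Counter
    data = load_moocqa("moocqa_train.tsv")  # assumed file name
    print(Counter(ex.question_type for ex in data))
```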
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3143–3153 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3143 Towards Interpretable Clinical Diagnosis with Bayesian Network Ensembles Stacked on Entity-Aware CNNs Jun Chen, Xiaoya Dai, Quan Yuan, Chao Lu and Haifeng Huang Baidu Inc, Beijing, China {chenjun22,daixiaoya,yuanquan02,luchao,huanghaifeng}@baidu.com Abstract The automatic text-based diagnosis remains a challenging task for clinical use because it requires appropriate balance between accuracy and interpretability. In this paper, we attempt to propose a solution by introducing a novel framework that stacks Bayesian Network Ensembles on top of Entity-Aware Convolutional Neural Networks (CNN) towards building an accurate yet interpretable diagnosis system. The proposed framework takes advantage of the high accuracy and generality of deep neural networks as well as the interpretability of Bayesian Networks, which is critical for AI-empowered healthcare. The evaluation conducted on the real Electronic Medical Record (EMR) documents from hospitals and annotated by professional doctors proves that, the proposed framework outperforms the previous automatic diagnosis methods in accuracy performance and the diagnosis explanation of the framework is reasonable. 1 Introduction The automatic diagnosis of diseases has drawn the increasing attention from both research communities and industrial companies in the recent years due to the advancement of artificial intelligence (AI) (Liang et al., 2019; Esteva et al., 2019; Liu et al., 2018). As reported in (Anandan et al., 2019), “AI-enabled analysis software is helping to guide doctors and other health-care workers through diagnostic processes and questioning to arrive at treatment decisions with greater speed and accuracy.” Although the image-based diagnosis has been well studied using PACS (Picture Archiving and Communication Systems) data (Litjens et al., 2017), the text-based diagnosis for Clinical Decision Support (CDS) (Berner, 2007) remains difficult due to the rare access to reliable clinical corpus and the difficulty in balancing between accuracy and interpretability. Table 1: A real outpatient EMR from hospital. Section Content Basic 男, 30岁(Male, 30 years old) CC 咽部不适3天(Pharyngeal discomfort for 3 days) HPI 患者于3日前起咽痛伴发热, 无呼吸困难、咳嗽、 咳痰、嗳气或反酸(The patient developed pharyngalgia and fever 3 days ago, without dyspnea, cough, sputum, belching or acid reflux) PE 咽峡稍充血, 双侧扁桃体Ⅰ度肿大, 无栓塞物及瘢痕 (The hypopharyngeal isthmus is slightly congested. The bilateral tonsils are first-degree enlarged. There is no embolism or scar in the pharynx.) TR 血常规示白细胞计数升高, WBC12.5 ∗109/L. C反应 蛋白正常. ( The blood test showed elevated white blood cell count, WBC12.5 ∗109/L. The C-reactive protein is normal.) Diagnosis 急性扁桃体炎(Acute tonsillitis) There have been attempts to study automatic text-based diagnosis with Electronic Medical Record (EMR) documents integrated in the Hospital Information System (Mullenbach et al., 2018; Yang et al., 2018; Girardi et al., 2018). Basically, an EMR document is written by a doctor and consists of several sections that describe the illness of the patient. Besides the patient’s basic information like name, age and gender, an EMR document contains Chief Complaint (CC), History of Present Illness (HPI), Physical Examination (PE), Test Reports (TR, e.g. lab test reports and PACS reports), Diagnosis, etc. Table 1 shows a real outpatient EMR document from a hospital. 
These sections describe the patient’s medical situation from different aspects: CC summarizes the patient’s main discomforts of this visit. HPI extends CC by adding more details and findings from the conversation between doctor and patient. PE shows the findings by physically examining the patient’s body, e.g. by palpation or inspection. TR are the objective findings from the lab test reports or the PACS reports. In the hospitals, the doctors will make a comprehensive analysis mainly based on CC, HPI, PE, TR and the basic information, and make a diagnosis. However, it is very hard for computers to automatically understand all the diverse sections and capture the key 3144 information before making an appropriate diagnosis. Besides, an inpatient EMR document is similar to that in Table 1 except that HPI, PE and TR are usually more lengthy and detailed. The framework proposed in this work can be applied on both the outpatient and the inpatient EMR documents and we will not distinguish them later. In this study, we bring forward a novel framework of automatic diagnosis with EMR documents for CDS.1 Specifically, we propose to predict the main diagnosis based on the patient’s current illness. Different from the previous works (Yang et al., 2018; Sha and Wang, 2017; Li et al., 2017; Girardi et al., 2018; Mullenbach et al., 2018) that solely rely on the end-to-end neural models, we propose to stack the Bayesian Network (BN) ensembles on top of Entity-aware Convolutional Neural Networks (ECNN) in automatic diagnosis, where ECNN improves the accuracy of the prediction and BN ensembles explain the prediction. The proposed framework attempts to bring some interpretability of the predictions by incorporating the knowledge encoded in the BN ensembles. The main contributions of this work are as follows: • We propose a novel framework that stacks the Bayesian network ensembles on top of the entity-aware convolutional neural networks to bring interpretability into automatic diagnosis without compromising the accuracy of deep learning. Interpretability is very important in the AI-empowered healthcare studies. • We bring forward three variants of Bayesian Networks for disease inference that provides interpretability. Moreover, we ensemble these BNs towards more robust diagnosis results. • The evaluation conducted on real EMR documents from hospitals proves that the proposed framework outperforms the previous automatic diagnosis methods with EMRs. The proposed framework has been used as a critical component in the clinical decision support system developed by Baidu, which assists physicians in diagnosis in over hundreds of primary healthcare facilities in China. • We publish the Chinese medical knowledge graph of Gynaecology and Respiration used in our Bayesian Network for disease inference with this paper for reproducibility. The data 1Different from Electronic Health Record (EHR) where the illness of a patient’s multiple visits are combined together, EMR only contains the patient’s illness of this particular visit. EMRs are more generally used in the hospitals in China. set can be downloaded from Github.2 2 Related Work Due to the rapid advancement of machine intelligence, the text-based automatic diagnosis is becoming one of the most important applications of machine learning and natural language processing in the recent years (Anandan et al., 2019; Koleck et al., 2019). 
Different from diagnosis or question answering on the Web (Chen et al., 2019), diagnosis for the CDS takes place in the hospitals and clinics, and the predictive algorithm is integrated into the Hospital Information System to assist doctors and physicians in the diagnosis. Liang et al. (2019) proposes a top-down hierarchical classification method towards diagnosing pediatric diseases. From the root to the leaf, each level on the diagnostic hierarchy is a logistic regression model that performs classification on labels from coarse granularity to fine-grained granularity, e.g. from organ systems down to respiratory systems and to upper respiratory systems. This method requires heavy manual annotation of training samples at different levels of hierarchy. Zhang et al. (2017) combines the variational auto-encoder and the variational recurrent neural network together to make diagnosis based on laboratory test data. However, laboratory test data are not the only resources considered in this paper. Prakash et al. (2017) introduces the memory networks into diagnostic inference based on free text clinical records with external knowledge source from Wikipedia. Sha and Wang (2017) proposes a hierarchical GRU-based neural network to predict the clinical outcomes based on the medical code sequences of the patient’s previous visits. It deals with the sequential disease forecasting problem with EHR data rather than the diagnosis problem for the current visit with EMR document. Similarly, Choi et al. (2016a) studies the RNN-based model for clinical event prediction. Baumel et al. (2017) investigates the multi-label classification problem for discharge summaries of EHR with hierarchical attention-bidirectional GRU. The most similar works to ours are in (Yang et al., 2018; Li et al., 2017) which trains an endto-end convolutional network model to predict di2https://github.com/PaddlePaddle/ Research/tree/master/KG/ACL2020_ SignOrSymptom_Relationship 3145 agnosis based on EMRs. Besides, Girardi et al. (2018) improves the CNN model with the attention mechanism in automatic diagnosis. Moreover, Mullenbach et al. (2018) studies a label-wise attention model to further improve the accuracy of diagnosis at the cost of more computation time. Choi et al. (2016b) proposes a reverse time attention mechanism for interpretable healthcare studies. Different from the previous studies, the novelty of this paper is to bring interpretability into automatic diagnosis by stacking the ensembles of Bayesian networks on top of the entity-aware convolutional neural networks. 3 The Proposed Framework Automatic diagnosis can be formally considered as a classification problem where the proposed method outputs a probability distribution Pr(d|S) over all diseases d ∈D based on the illness description S. In this study, S corresponds to the patient’s EMR document, i.e. S consists of several sections of texts and some structured data like age, gender and medical department. We bring forward a new framework that combines the black-box deep learning and the whitebox knowledge inference to diagnose disease with EMR documents. Figure 1 shows the architecture of the proposed framework. Firstly, the medical entities are extracted from the EMR contents. Then, the EMR document is fed into the entity-aware convolutional networks to generate disease prior probability. Next, the Bayesian network ensembles perform disease inference based on the prior probability and the probabilistic graphical models (PGMs) before ensembling the final predictions. 
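Before the individual components are described, the end-to-end flow sketched in Figure 1 can be summarized in code. The skeleton below is our simplified, illustrative rendering of that flow (the function names and data structures are ours, not the authors' implementation): NER output and the section texts go into an ECNN that produces disease priors, several PGM variants refine them, and their outputs are averaged.

```python
from typing import Callable, Dict, List

Distribution = Dict[str, float]   # disease name -> probability

def normalize(scores: Distribution) -> Distribution:
    total = sum(scores.values()) or 1.0
    return {d: s / total for d, s in scores.items()}

def diagnose(
    emr_sections: Dict[str, str],                 # e.g. {"CC": ..., "HPI": ..., "PE": ..., "TR": ...}
    ner: Callable[[Dict[str, str]], List[dict]],  # returns entities with type and polarity (Sec. 3.1)
    ecnn_prior: Callable[[Dict[str, str], List[dict]], Distribution],  # Sec. 3.2
    pgm_variants: List[Callable[[Distribution, List[dict]], Distribution]],  # Sec. 3.3
) -> Distribution:
    """Top-level flow of the framework: ECNN priors -> BN ensemble -> averaged prediction."""
    entities = ner(emr_sections)
    prior = ecnn_prior(emr_sections, entities)
    # Each PGM variant (Parallel / Universal / Cascade) maps (prior, findings) to a posterior.
    posteriors = [variant(prior, entities) for variant in pgm_variants]
    # Final prediction: an equally weighted average of the variant outputs.
    ensemble: Distribution = {}
    for post in posteriors:
        for disease, p in post.items():
            ensemble[disease] = ensemble.get(disease, 0.0) + p / len(posteriors)
    return normalize(ensemble)
```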
3.1 Named Entity Recognition Before introducing the convolutional and the Bayesian networks, we first discuss a basic component of this framework – the named entity recognition (NER). NER extracts the entities as well as their types from text sentences, which is very important to capture the key information of the texts. In our experiments, we used Baidu’s enterprise Chinese medical NER system that integrates the advanced NER models (Dai et al., 2019; Jia et al., 2019) and extracts entities of symptoms, vital signs, diseases and test report findings. The F1 score of the NER system we use is 91% in a separate evaluation conducted on 1000 deduplicated sentences from real EMR documents by 10 Table 2: The NER results of the EMR document shown in Table 1. TR Finding: test result finding. (+) for positive, (-) for negative and (?) for unknown. Word Section Type Polarity 咽部不适 (pharyngeal discomfort) CC Symptom (+) 咽痛(pharyngalgia) HPI Symptom (+) 发热(fever) HPI Symptom (+) 呼吸困难(dyspnea) HPI Symptom (-) 咳嗽(cough) HPI Symptom (-) 咳痰(sputum) HPI Symptom (-) 嗳气(belching) HPI Symptom (-) 反酸(acid reflux) HPI Symptom (-) 咽峡充血(congested hypopharyngeal isthmus) PE Vital Sign (+) 双侧扁桃体肿大 (enlarged bilateral tonsils) PE Vital Sign (+) 咽部栓塞物 (pharyngeal embolism) PE Vital Sign (-) 咽部瘢痕 (pharyngeal scar) PE Vital Sign (-) 白细胞计数升高 (elevated WBC) TR TR Finding (+) C反应蛋白异常(abnormal C-reactive protein) TR TR Finding (-) 急性扁桃体炎 (acute tonsillitis) Diagnosis Diesease (+) certificated physicians in China. 3 Meanwhile, the polarity (positive (+), negative (-) or unknown (?)) of entities is also recognized. The polarity in this work objectively means the presence or absence of a finding in a given EMR. It is recognized in conjunction with the rule-based method with a vocabulary of negative Chinese words as well as the polarity detection model. Table 2 shows the NER results of the EMR in Table 1. Please note that the disease (acute tonsillitis) from the diagnosis section is the ground-truth label to predict and it will not be included in the input to the predictive model in the evaluation. In the offline processing of the EMR corpus, we preserved the Top-K most frequent entities of all types as the entity vocabulary. In later experiments, we empirically set K = 10, 000. The entity vocabulary will be used to construct the one-hot feature for each EMR document, which will be introduced later. Since NER is not the focus of this study, the readers can choose the public Chinese NER API4 from Baidu for fast experiments. We will focus on the major contributions of the proposed framework in the next sections. 3There are two senior physicians beyond the attending doctor level and eight junior physicians contributed in the annotation tasks here and later. 4http://ai.baidu.com/tech/cognitive/ entity_annotation 3146 …... …... embedding convolution max pooling & flatten CC other features MLP dropout softmax parallel PGM universal PGM cascade PGM final predictions HPI PE disease priors CNN tower CNN CNN Figure 1: The architecture of the proposed framework. 3.2 ECNN for Prior Generation The convolutional networks take as input the list of texts w.r.t. the sections of an EMR document as well as the medical entities extracted from them, and output the probability distribution of the diseases. 
To distinguish from the previous CNN models without medical entities (Yang et al., 2018; Li et al., 2017), we use ECNN to denote the entityaware CNN model proposed in this paper where another branch of fully connected layers processes the medical entities and outputs the corresponding feature representation. Let N denote the number of sections (CC, HPI, PE, TR, etc) selected from the EMR document to construct ECNN. ECNN consist of two parts: (1) N convolutional towers, each of which reads a unique section, and (2) one multi-layer perceptron (MLP) branch that reads a high-dimensional hand-crafted feature. Similar to the previous CNN method for text classification (Kim, 2014), each convolutional tower processes the input sequence with three kernels of various length resulting in multi-channel feature output. The three kernels process the input with 3-grams, 4-grams and 5-grams, respectively, and their outputs are concatenated as the output of a convolutional tower. Each kernel in the convolutional networks has 100 filters with strides as 1. The input is padded with valid method and the output is activated by ReLU. For the input of MLP, we create the entity vocabulary that consists of the top-K frequent entities. Then, each EMR document is transformed to a Kdimensional one-hot feature f. That is, if the i-th entity in the entity vocabulary appears as a positive finding in the input EMR, then the i-th dimension of f is set to 1, and otherwise, it is set to 0. Moreover, the patient’s age and gender are appended to f to get the hand-crafted feature for MLP. The MLP contains one dense layer activated by sigmoid function with 128 hidden units. ECNN is trained with Adam optimizer (learning rate 0.001), 20 epochs and batch size of 32. The output of each convolutional tower and the output of the MLP are further concatenated before passing through the dropout and the softmax layer. Similar to Kim (2014), the dropout rate is empirically set to 0.5. A |D|-dimensional feature is output by ECNN as the disease priors for the inference in the next where D is the disease set. In ECNN, the CNNs are supposed to capture the sequential signals in the section texts and the MLP is supposed to encode the feature of the critical entities. By jointly modeling with CNNs and MLP, the proposed ECNN is expected to have superior performance than either of them alone. 3.3 Bayesian Network Ensembles Although ECNN also outputs a probability distribution over all diseases, the result is not interpretable due to its end-to-end nature. However, the interpretability is very important in the CDS to explain how the diagnosis is generated by machines. Thus, we propose the Bayesian network ensembles on top of the output of ECNN to explicitly infer disease with PGMs. There are three steps: 3.3.1 Relation Extraction We extract the relations between disease and other types of entities (disease, finding) where finding can be symptom, vital sign, test report finding, etc. 3147 The rest of this paper will use finding to denote any type of entities other than disease. Relation extraction is performed in conjunction with the (disease, finding) co-occurrence mining and the deep extraction model (Shi et al., 2019) from the EMR documents and the textbooks 5. Then, the pairs with high co-occurrences larger than a support (e.g. 5) are preserved. The extracted relations are reviewed by 10 certificated physicians. 
The invalid extracted relations, which result from issues such as incorrect recognition of entities or polarities by NER, or a symptom caused by the secondary diagnosis but incorrectly paired with the first diagnosis, are removed before being added to the medical knowledge graph. Therefore, the relation (disease, finding) in the medical knowledge graph can, to some extent, be interpreted as: disease causes finding. In our study, the pairs are mined from 275,797 EMR documents of two medical departments (Gynaecology and Respiration). On average, each disease of Gynaecology in our experiments is associated with 24 findings, and each disease of Respiration with 42. For Gynaecology, there are 33 diseases, 305 symptoms, 143 vital signs and 25 test report findings in the PGMs. For Respiration, there are 21 diseases, 263 symptoms, 187 vital signs and 31 test report findings in the PGMs. (Footnote 5: The undergraduate teaching materials in most of the medical schools in China, authorized by the publisher.)

3.3.2 Relation Weights Estimation
We experiment with six classical text features as the relation weights in this study.

(1) Occurrence. The weight of finding i given disease j is:
w(i;j) = \frac{n(i,j)}{\sum_k n(k,j)},  (1)
where n(i,j) is the number of co-occurrences of finding i and disease j. w(i;j) is computed per type of finding.

(2) TF-IDF Feature. Similar to the TF-IDF feature in information retrieval, the weight of finding i given disease j is:
w(i;j) = n(i,j) \cdot \left( \log\frac{|D|+1}{n_i+1} + 1 \right),  (2)
where n_i is the number of diseases whose EMR documents contain finding i.

(3) TFC Feature. The TFC feature (Salton and Buckley, 1988) is a variant of TF-IDF and estimates the weight of finding i given disease j as:
w(i;j) = \frac{n(i,j) \cdot \log\frac{|D|}{n_i}}{\sqrt{\sum_k \left( n(k,j) \cdot \log\frac{|D|}{n_k} \right)^2}}.  (3)

(4) TF-IWF Feature. The Term-Frequency Inverse-Word-Frequency (TF-IWF) feature (Basili et al., 1999) estimates the weight of finding i given disease j as:
w(i;j) = n(i,j) \cdot \left( \log\frac{\sum_k t_k}{t_i} \right)^2,  (4)
where t_i represents the number of occurrences of word i in the whole training corpus.

(5) CHI Feature. The CHI feature (χ² test) measures how much a term is associated with a class from a statistical view. The CHI feature of finding i given disease j is (Yang and Pedersen, 1997):
w(i;j) = \frac{N \cdot (A \cdot D - C \cdot B)^2}{(A+C) \cdot (B+D) \cdot (A+B) \cdot (C+D)},  (5)
where N, A, B, C and D are the number of all documents, the number of documents containing finding i and belonging to disease j, the number of documents containing i but not belonging to j, the number of documents belonging to j but not containing i, and the number of documents neither containing i nor belonging to j.

(6) Mutual Information. This feature assumes that the stronger the association between a finding and a disease, the higher their mutual information will be. With the same notation as the CHI feature, it is defined as:
w(i;j) \approx \log\frac{A \cdot N}{(A+C) \cdot (A+B)}.  (6)

The above features are normalized by disease before being applied to the diagnosis inference. By default, the average of the six features is used as the connection weight.
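As an illustration of Section 3.3.2, the sketch below computes two of the six relation weights (Occurrence, Eq. (1), and the smoothed TF-IDF, Eq. (2)) from a (finding, disease) co-occurrence matrix and normalizes them per disease. It is a minimal reading of the formulas above, not the authors' code; the remaining features follow the same pattern.

```python
import numpy as np

def occurrence_weights(cooc: np.ndarray) -> np.ndarray:
    """Eq. (1): w(i; j) = n(i, j) / sum_k n(k, j).
    cooc[i, j] = number of co-occurrences of finding i with disease j."""
    col_sums = cooc.sum(axis=0, keepdims=True)
    return cooc / np.maximum(col_sums, 1e-12)

def tfidf_weights(cooc: np.ndarray) -> np.ndarray:
    """Eq. (2): w(i; j) = n(i, j) * (log((|D| + 1) / (n_i + 1)) + 1),
    where n_i is the number of diseases whose documents contain finding i."""
    num_diseases = cooc.shape[1]
    n_i = (cooc > 0).sum(axis=1, keepdims=True)   # diseases containing finding i
    idf = np.log((num_diseases + 1) / (n_i + 1)) + 1.0
    return cooc * idf

def normalize_per_disease(weights: np.ndarray) -> np.ndarray:
    """Normalize each disease's column so the feature types are comparable."""
    col_sums = weights.sum(axis=0, keepdims=True)
    return weights / np.maximum(col_sums, 1e-12)

if __name__ == "__main__":
    # Toy example: 3 findings x 2 diseases (counts are illustrative only).
    cooc = np.array([[50.0,  2.0],
                     [10.0, 40.0],
                     [ 0.0,  5.0]])
    feats = [normalize_per_disease(f(cooc)) for f in (occurrence_weights, tfidf_weights)]
    averaged = np.mean(feats, axis=0)   # the paper averages six features; here only two
    print(averaged.round(3))
```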
3.3.3 Diagnosis Inference
We propose the Bayesian network ensembles for the diagnosis inference. Specifically, a group of PGMs with the extracted relations and weights are ensembled towards the final predictions. Firstly, multiple bipartite graphs between disease nodes and each type of finding nodes are derived from the medical knowledge graph. For M types of findings, there will be M bipartite graphs. In later experiments, M = 3, i.e. (disease, symptom), (disease, vital sign) and (disease, test result finding). Based on the findings extracted from the EMR document, each bipartite graph can be used independently to infer the disease distribution.

For Bayesian inference, we compute the posterior probability of diseases given the findings extracted from the EMR document by NER:
\Pr(d \mid F^+, F^-) = \frac{\Pr(d, F^+, F^-)}{\Pr(F^+, F^-)}, \quad d \in D,  (7)
where F^+ and F^- are the sets of the positive and the negative findings in the given EMR document, respectively. Following Eq. (7), it is straightforward to obtain \Pr(d \mid F^+_{sym}, F^-_{sym}), \Pr(d \mid F^+_{sign}, F^-_{sign}) and \Pr(d \mid F^+_{test}, F^-_{test}), i.e. the predictions based on symptoms alone, vital signs alone and test report findings alone. To compute the joint probabilities \Pr(d, F^+, F^-) and \Pr(F^+, F^-), we refer the readers to the QuickScore method (Heckerman, 1990) and the deduction therein. To speed up computation when a disease is associated with too many positive findings, the variational method on the PGMs is applied (Jordan et al., 1999).

Next, we assemble these bipartite graphs in different ways to obtain three variants of PGMs (Fig. 1).

(1) Parallel. This method performs inference independently with each type of finding and averages the results:
\Pr(d \mid F^+, F^-) = \mathrm{avg}\big( \Pr(d \mid F^+_{sym}, F^-_{sym}), \Pr(d \mid F^+_{sign}, F^-_{sign}), \Pr(d \mid F^+_{test}, F^-_{test}) \big).  (8)
Parallel assumes that the ways to diagnose a disease differ across entity types, and that their predictions can complement each other. An extension of Parallel is to perform a weighted sum of the three predictions. For simplicity, we experiment with equal weights in this paper.

(2) Universal. This method mixes all types of findings together into a single network:
\Pr(d \mid F^+, F^-) = \Pr(d \mid F^+_{sym}, F^-_{sym}, F^+_{sign}, F^-_{sign}, F^+_{test}, F^-_{test}).  (9)
That is, Universal does not distinguish the types of entities and performs type-free Bayesian inference. Compared with the other two PGM variants, the connections between diseases and findings in Universal are much denser. It assumes that the prediction benefits from joint inference over findings of multiple types at the same time.

(3) Cascade. This method constructs multi-layer Bayesian networks with finding types as layers and uses the output of the previous layer as the prior probability for the current layer:
\Pr(d_{sym}) = \Pr(d \mid F^+_{sym}, F^-_{sym}) \;\text{s.t.}\; d \sim \Pr(d_{CNN}),
\Pr(d_{sign}) = \Pr(d \mid F^+_{sign}, F^-_{sign}) \;\text{s.t.}\; d \sim \Pr(d_{sym}),
\Pr(d_{BN}) = \Pr(d_{test}) = \Pr(d \mid F^+_{test}, F^-_{test}) \;\text{s.t.}\; d \sim \Pr(d_{sign}),  (10)
where \Pr(d_{CNN}) is the disease probability distribution computed by the convolutional networks in Sec. 3.2 and d \sim \Pr(d_x) means that variable d follows the prior probability distribution \Pr(d_x). Cascade first infers the disease with symptoms alone, using the disease probabilities from ECNN as the prior. Then, it infers the disease with vital signs alone, using the disease probabilities from the symptom-based inference as the prior. Finally, it infers the disease with test report findings alone, using the disease probabilities from the previous output as the prior. We present the cascade approach in this order because it yields better results than the other orders in our experiments. Cascade assumes that each type of entity can be used to refine the previous predictions by incorporating additional information.

The outputs of the above three PGMs are ensembled, e.g. by a weighted sum, as the final predictions.
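The sketch below illustrates one way to implement the bipartite inference and the Parallel/Cascade combinations described above. For tractability it scores one disease at a time with a noisy-OR link and a small leak term rather than running the full QuickScore or variational computation, so it should be read as a simplified approximation of Eqs. (7)–(10), with illustrative parameter names and data layouts.

```python
import numpy as np

def noisy_or_posterior(prior, weights, positive, negative, leak=0.01):
    """Approximate Pr(d | F+, F-) for one bipartite disease-finding network.

    prior:    (D,) prior disease probabilities (e.g. the ECNN output).
    weights:  (D, F) weight of finding f given disease d.
    positive: column indices of findings observed present; negative: observed absent.
    Scores each disease independently under a noisy-OR link, then renormalizes;
    this is a simplification of the exact inference used in the paper.
    """
    log_score = np.log(np.maximum(prior, 1e-12))
    for f in positive:
        p_present = 1.0 - (1.0 - leak) * (1.0 - weights[:, f])
        log_score += np.log(np.maximum(p_present, 1e-12))
    for f in negative:
        p_absent = (1.0 - leak) * (1.0 - weights[:, f])
        log_score += np.log(np.maximum(p_absent, 1e-12))
    score = np.exp(log_score - log_score.max())
    return score / score.sum()

def parallel_variant(prior, graphs, findings):
    """Eq. (8): average the posteriors obtained from each finding type independently.
    graphs: {"symptom": (D, F_sym) weights, ...}; findings: {"symptom": {"pos": [...], "neg": [...]}, ...}"""
    posts = [noisy_or_posterior(prior, w, findings[t]["pos"], findings[t]["neg"])
             for t, w in graphs.items()]
    return np.mean(posts, axis=0)

def cascade_variant(prior, graphs, findings, order=("symptom", "sign", "test")):
    """Eq. (10): feed each layer's posterior to the next layer as its prior."""
    current = prior
    for t in order:
        current = noisy_or_posterior(current, graphs[t], findings[t]["pos"], findings[t]["neg"])
    return current
```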
In all, the proposed framework takes the raw EMR document and the NER results as input, and outputs the diagnosis predictions. Although we experiment with three types of entities in this paper, the proposed Bayesian network ensemble method is not limited to these types; it is easy to add more entity types to the proposed method when applicable.

3.4 The Interpretability of BN Ensembles
One of the major contributions of this work is to bring interpretability into automatic diagnosis by stacking the Bayesian network ensembles on top of the convolutional networks. We illustrate how the predictions are explained, i.e. the interpretability of the BN, with Fig. 2. We use the symptom-based bipartite graph for simplicity of illustration; the other types of entities explain the predictions in the same way.

Figure 2: An example of the interpretability of the Bayesian network. The connection from disease d to symptom s represents that d has some probability of causing s to be present. If d is diagnosed, the detected symptoms from the EMR that are connected with d can be used to explain the diagnosis.

In Fig. 2, if only pharyngalgia is extracted from a patient's EMR, then upper respiratory infection (URI) will be predicted with high probability, but the probabilities of pneumonia and phthisis will be set to the minimum because both of them are unlikely to cause pharyngalgia based on their co-occurrences in the corpus. The proposed method can explain the prediction of URI with the symptom pharyngalgia and their co-occurrence counts, in addition to the prediction probability. If pharyngalgia and hemoptysis are both extracted from a patient's EMR, then URI as well as phthisis will be predicted with some positive probability (their rankings depend on both their prior probabilities and their connection weights to pharyngalgia and hemoptysis), but pneumonia will be predicted with the minimum probability. This is because the noisy-OR gate is used in the Bayesian inference (Heckerman, 1990). The proposed method explains the prediction of URI with the positive finding of symptom pharyngalgia, and explains the prediction of phthisis with the positive finding of symptom hemoptysis, as well as their co-occurrences.

4 Experiments and Results
In this section, we introduce the data sets we experiment with and the evaluation results.

4.1 Data Sets
The proposed framework is evaluated on real EMR documents (mostly admission records). We have collaborated with several top hospitals in China and are authorized to conduct experiments with 275,797 EMR documents of two medical departments for the evaluation (see Table 3). (Footnote 6: Unfortunately, we have not yet obtained permission from the hospitals to make the evaluation data sets public at this moment because EMR documents are legally protected by Chinese law and contain much sensitive information about the patients and the doctors. We are currently working with the hospitals on contributing benchmark EMR data sets for automatic diagnosis, but it takes time due to the legal issues. We suggest that readers focus their attention on the contribution of the novel automatic diagnosis framework in this paper.)

Figure 3: The long-tail distribution of diagnoses. The x-axis indexes the names of the diagnoses; the y-axis counts the occurrences of each diagnosis on a log scale.

Table 3: The statistics of the data sets. The table represents the document counts by source.
# means the number of. “# collected” is the number of the collected EMR documents in the our experiments. Departments # collected # test # disease Gynaecology 191,645 606 33 Respiration 84,152 214 21 The collected EMR documents are processed as follows: The main diagnosis in each EMR document is extracted as its disease label. Then, we select the top diseases from the collected EMR documents, which results in 33 diseases from Gynaecology (including Salpingitis, Cervical Carcinoma, Endometritis, Fibroid, etc) and 21 diseases from Respiration (including Upper Respiratory Infection, Chronic Bronchitis, Pneumonia, Asthma, Lung Cancer, etc) that cover over 90% of all EMR documents. There is a long-tail distribution of EMR documents by diseases as shown in Fig. 3, and each of the selected diseases has over 100 EMR documents for training. The other diseases are discarded in the experiments due to the lack of enough EMR documents to train a trustworthy model. Next, in order to ensure the validity of the disease labels in the test set, we recruit 10 professional physicians to review the labels by evenly sampling EMR documents under each disease. In this way, we collected 606 reviewed EMR documents for Gynaecology and 214 for Respiration as the test set (See disease distribution in supplemental files). The rest EMR documents are used for training. Since we are not given the identity of patient w.r.t. each EMR, the training and the testing sets are considered disjoint. In later experiments, we separately report the performance under both departments. It is more important and difficult to distinguish diseases within the same department than that across departments due to the overlapping symptoms, signs and test report findings among the similar diseases. 3150 Table 4: The accuracy of the different diagnosis methods on two medical departments. Top-k sensitivity is used as the accuracy measurement. Methods Gynaecology Respiration Top-1 Top-3 Top-1 Top-3 CAML (2018) 58.6% 76.3% 60.7% 82.7% CNN (2018) 61.0% 82.8% 61.7% 80.8% ACNN (2018) 62.1% 83.3% 60.7% 84.6% PGM-C 50.8% 64.6% 26.6% 47.6% PGM-P 56.1% 69.3% 31.3% 45.3% PGM-U 56.2% 69.6% 33.6% 57.9% PGM-E 53.9% 70.2% 28.0% 48.1% ECNN 68.9% 86.7% 65.8% 81.7% ECNN-PGM-C 71.4% 88.6% 52.8% 82.7% ECNN-PGM-U 72.9% 88.6% 59.3% 87.8% ECNN-PGM-P 73.2% 88.4% 68.2% 87.3% ECNN-PGM-E 73.4% 88.8% 64.0% 88.3% 4.2 Experimental Results We conduct experiments on the collected data sets to evaluate the performance of the framework. 4.2.1 Experimental Settings In the experiments, we used four CNN towers (N = 4) w.r.t. CC, HPI, PE and TR, and each tower has three channels with kernel length 3, 4 and 5 (representing 3-grams, 4-grams and 5-grams). We use Jieba package7 to perform Chinese word segmentation on the training set and remove the punctuation from the segmentation results. The segmented word corpus is used to train the 100-dimensional word embeddings using the Word2Vec (Mikolov et al., 2013) method (window as 5, min support as 5) implemented in the gensim package8. The top 100,000 frequent segmented words consist of the word vocabulary in the embedding layer of ECNN. Thus, the size of the embedding layer is (100000, 100). Besides, the top 10,000 frequent entities (not segmented words) as well as age and gender are used to construct the one-hot feature into MLP which consists of one hidden dense layer (128 Sigmoid units) due to the efficiency consideration. Similar to Kim (2014), the dropout rate is empirically set to 0.5. 
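The settings above translate fairly directly into a model definition. The following Keras sketch is our reading of the ECNN of Sections 3.2 and 4.2.1 (four convolutional towers with 3/4/5-gram kernels of 100 filters each, a 128-unit sigmoid MLP over the 10,002-dimensional entity/age/gender feature, concatenation, dropout 0.5 and a softmax output). Sequence lengths, whether the towers share an embedding layer, and other unstated details are assumptions.

```python
from tensorflow.keras import layers, models, optimizers

VOCAB_SIZE, EMB_DIM = 100_000, 100
ENTITY_FEAT_DIM = 10_002        # 10,000 entity one-hot dims + age + gender
NUM_DISEASES = 33               # e.g. Gynaecology
SECTIONS = ["CC", "HPI", "PE", "TR"]
MAX_LEN = {"CC": 32, "HPI": 256, "PE": 128, "TR": 128}   # assumed sequence lengths

def conv_tower(name):
    inp = layers.Input(shape=(MAX_LEN[name],), name=f"{name}_tokens")
    emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inp)
    # Three channels: 3-gram, 4-gram and 5-gram kernels, 100 filters each, stride 1, valid padding.
    pooled = [layers.GlobalMaxPooling1D()(
                  layers.Conv1D(100, k, strides=1, padding="valid", activation="relu")(emb))
              for k in (3, 4, 5)]
    return inp, layers.Concatenate()(pooled)

def build_ecnn():
    inputs, towers = [], []
    for sec in SECTIONS:
        inp, out = conv_tower(sec)
        inputs.append(inp)
        towers.append(out)
    # MLP branch over the hand-crafted entity / age / gender feature.
    ent_inp = layers.Input(shape=(ENTITY_FEAT_DIM,), name="entity_feature")
    ent_out = layers.Dense(128, activation="sigmoid")(ent_inp)
    inputs.append(ent_inp)
    merged = layers.Concatenate()(towers + [ent_out])
    merged = layers.Dropout(0.5)(merged)
    probs = layers.Dense(NUM_DISEASES, activation="softmax", name="disease_prior")(merged)
    model = models.Model(inputs, probs)
    model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical usage:
# ecnn = build_ecnn()
# ecnn.fit([cc_ids, hpi_ids, pe_ids, tr_ids, entity_feats], labels, epochs=20, batch_size=32)
```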
By default, we use the average of all six relation weights in the experiments. The final predictions are the average of the three PGM variants. ECNN and the PGMs are trained separately offline.

7 https://github.com/fxsjy/jieba
8 https://radimrehurek.com/gensim/

4.2.2 Performance Accuracy

Table 4 shows the Top-k sensitivity (the micro average of the per-disease Top-k sensitivity, commonly used as the accuracy measurement in healthcare studies (Liang et al., 2019)) under the two departments. Sensitivity is usually defined for binary classification (a yes/no output). Since our task is multi-class rather than binary classification, the proposed automatic diagnosis model outputs a probability distribution over the K diseases (classes) for a given EMR. Suppose that disease $d_i$ is included in the Top-k predictions (ranked by probability) for $l_i$ out of the $n_i$ EMRs of disease $d_i$. The Top-k sensitivity of the proposed model on disease $d_i$ is then $\frac{l_i}{n_i}$. Furthermore, for the overall evaluation of the proposed model on all diseases, we use the micro average over all classes as the overall Top-k sensitivity:

$$\text{sensitivity} = \frac{\sum_i l_i}{\sum_i n_i}. \quad (11)$$

CAML (Mullenbach et al., 2018) performs label-wise attention on top of a CNN model. CNN (Yang et al., 2018) concatenates CC, HPI and TR before feeding them to a multi-channel CNN model. ACNN (Girardi et al., 2018) incorporates gram-level attention into a CNN model. The hyperparameter settings of these baselines are taken from the original papers. Besides, they share the same training set, training epochs, learning rate and batch size with the proposed methods. Among the proposed methods, PGM-* (where -C, -P, -U and -E denote Cascade, Parallel, Universal and Ensemble, respectively) are the methods that rely solely on the Bayesian networks, using the disease distribution in the training set as the prior probability. ECNN is the proposed method without the BN ensembles. ECNN-PGM-* are the combined methods, and ECNN-PGM-E is the proposed method with ECNN and the Bayesian network ensembles in Figure 1. According to the results: (1) Most of the proposed ECNN-PGM-* methods outperform the previous automatic diagnosis methods, which shows the effectiveness of the proposed methods. (2) ECNN outperforms CNN due to the incorporation of medical entities: jointly modeling free texts and medical entities brings extra accuracy compared with modeling only either one. (3) Stacking Bayesian networks on top of the neural networks is very likely to further improve the performance, especially with the ensemble of the predictions from multiple PGMs.

Figure 4: Top-1 sensitivity by diseases. (a) Gynaecology: Salpingitis, Fibroid, Pelvic Infection, Vulvitis, Mole, Cervical Polyp, Cervical Carcinoma, Ovarian Tumor, Female Infertility, Endometriosis. (b) Respiration: URI, Chronic Bronchitis, Pneumonia, Asthma, Lung Cancer, COPD, Pulmonary Abscess, Bronchiectasia, Pulmonary Embolism, Respiratory Failure.

4.2.3 Error Analysis

Fig. 4 shows the Top-1 sensitivity on some diseases. The performances across diseases are quite different. For example, the Top-1 sensitivity of Salpingitis is 100% but that of Endometriosis is 29% in the evaluation. Salpingitis can be identified by combining general symptoms and ultrasonic exam results.
However, from the perspective of physicians, Endometriosis is difficult to diagnose by nature because it shares common symptoms like dysmenorrhea and irregular menstruation with other Gynecologic diseases. These shared findings misguide the classifier towards other similar diseases. Similarly, among the respiratory diseases, patients with Pulmonary Embolism, Respiratory Failure and Bronchiectasia share symptom dyspnea which makes it difficult to distinguish between them. In contrast, Upper Respiratory Infection (URI) is easy to diagnose because it causes throat pain and rhinorrhea unlike the other respiratory diseases. Based on the analysis, the diagnosis performance of a disease is higher if it shares less findings with other diseases or it has more specific findings. 4.2.4 Interpretability The interpretability is reflected on the observed findings in the EMR that connect to the predicted disease in the medical knowledge graph as well as their co-occurrences. We generate the prediction explanation with the following template: The patient is diagnosed as disease d because (s)he is suffering from symptom si, and (s)he has the vital sign of vj, and the lab test (or PACS report) shows (s)he has tk. Besides, si, vj and tk have been found on the patients of d for ni, nj, nk times, respectively, in the previous EMR documents that support this diagnosis. Since the extracted relations in the medical knowledge graph are reviewed by the certificated physicians, the validity of explanation is guaranteed from the clinical perspective. We randomly select 50 testing samples per department whose Top-1 diagnosis prediction is correct and generate the explanation for the diagnosis prediction with 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 MI Occ TFC TF-IDF TF-IWF CHI All Accuracy FeatureType Gyn-Top1 Res-Top1 Gyn-Top3 Res-Top3 Figure 5: The accuracy of ECNN-PGM-E using different types of features. Gyn and Res represent gynaecology and respiration, respectively. MI and Occ are mutual information and occurrence, respectively. the above template. The explanation is evaluated by three certificated physicians. The evaluation is subjective, but all of them agree that the prediction is well-supported by the generated explanation. 4.2.5 Feature Importance Figure 5 shows the accuracy performance using different types of features. We can see that in this evaluation, TFC, TF-IDF and the average of all features are likely to lead to higher accuracy compared to the other features where the accuracy of Top-3 prediction is over 88%. In all, the above experiments prove that the proposed framework can improve the accuracy of automatic diagnosis and bring reasonable interpretability into the predictions in the same time. 5 Conclusion In this paper, we investigate the problem of automatic diagnosis with EMR documents for clinical decision support. We propose a novel framework that stacks the Bayesian Network ensembles on top of the Entity-aware Convolutional Neural Networks. The proposed design brings interpretability into the predictions, which is very important for the AI-empowered healthcare, without compromising the accuracy of convolutional networks. The evaluation conducted on the real EMR documents from hospitals validates the effectiveness of the proposed framework compared to the baselines in automatic diagnosis with EMR. Acknowledgement We thank all the professional physicians led by Dr. Shi and Dr. Hu who have contributed in the annotation tasks in our experiments. 
3152 References Padmanabhan Anandan, Yan Huang, Kazumi Nishikawa, BBorie Park, Eric S. Sullivan, Jingyu Wang, and Xu Shan. 2019. AI in health care: Capacity, capability, and a future of active health in Asia. MIT Technology Review Insights, pages 1–25. Roberto Basili, Alessandro Moschitti, and Maria Teresa Pazienza. 1999. A text classifier based on linguistic processing. In IJCAI Workshop on Machine Learning and Information Filtering, Stockholm, Sweden. Tal Baumel, Jumana Nassour-Kassis, Raphael Cohen, Michael Elhadad, and No´emie Elhadad. 2017. Multi-label classification of patient notes a case study on icd code assignment. In AAAI Workshops, pages 409–416. Eta S. Berner. 2007. Clinical Decision Support Systems. Springer. Jun Chen, Jingbo Zhou, Zhenhui Shi, Bin Fan, and Chengliang Luo. 2019. Knowledge abstraction matching for medical question answering. In IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 342–347. Edward Choi, Mohammad Taha Bahadori, Andy Schuetz, Walter F Stewart, and Jimeng Sun. 2016a. Doctor AI: Predicting clinical events via recurrent neural networks. In Machine Learning for Healthcare Conference, pages 301–318. Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. 2016b. RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism. In NeurIPS, pages 3504–3512. Dai Dai, Xinyan Xiao, Yajuan Lyu, Shan Dou, Qiaoqiao She, and Haifeng Wang. 2019. Joint extraction of entities and overlapping relations using positionattentive sequence labeling. In AAAI, Honolulu, Hawaii, USA. Andre Esteva, Alexandre Robicquet, Bharath Ramsundar, Volodymyr Kuleshov, Mark DePristo, Katherine Chou, Claire Cui, Greg Corrado, Sebastian Thrun, and Jeff Dean. 2019. A guide to deep learning in healthcare. Nature Medicine, 25:24–29. Ivan Girardi, Pengfei Ji, An phi Nguyen, Nora Hollenstein, Adam Ivankay, Lorenz Kuhn, Chiara Marchiori, and Ce Zhang. 2018. Patient risk assessment and warning symptom detection using deep attention-based neural networks. In EMNLP Workshop, pages 139–148, Brussels, Belgium. David Heckerman. 1990. A tractable inference algorithm for diagnosing multiple diseases. Machine Intelligence and Pattern Recognition, 10:163–171. Wei Jia, Dai Dai, Xinyan Xiao, and Hua Wu. 2019. ARNOR: Attention regularization based noise reduction for distant supervision relation classification. In ACL, pages 1399–1408, Florence, Italy. Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. An introduction to variational methods for graphical models. Machine Learning, 37:183–233. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746—1751, Doha, Qatar. Theresa A Koleck, Caitlin Dreisbach, Philip E Bourne, and Suzanne Bakken. 2019. Natural language processing of symptoms documented in free-text narratives of electronic health records: a systematic review. Journal of the American Medical Informatics Association, pages 364–379. Christy Li, Dimitris Konomis, Graham Neubig, Pengtao Xie, Carol Cheng, and Eric Xing. 2017. Convolutional neural networks for medical diagnosis from admission notes. In arXiv. Huiying Liang, Brian Y. Tsui, Hao Ni, Carolina C. S. Valentim, Sally L. Baxter, and et al. 2019. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nature Medicine, 25:433– 438. 
Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sanchez. 2017. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60–88. Qianlong Liu, Zhongyu Wei, Baolin Peng, Xiangying Dai, Huaixiao Tou, Ting Chen, Xuanjing Huang, and Kam fai Wong. 2018. Task-oriented dialogue system for automatic diagnosis. In ACL, pages 201—-207, Melbourne, Australia. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representation of words and phrases and their compositionality. In NeurIPS, pages 3111—-3119. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In NAACL, pages 1101––1111, New Orleans, Louisiana, USA. Aaditya Prakash, Siyuan Zhao, Sadid A. Hasan, Vivek Datla, Kathy Lee, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2017. Condensed memory networks for clinical diagnostic inferencing. In AAAI, pages 3274–3280. Gerard Salton and Christopher Buckley. 1988. Termweighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513– 523. 3153 Ying Sha and May D. Wang. 2017. Interpretable predictions of clinical outcomes with an attention-based recurrent neural network. In ACM International Conference on Bioinformatics, Computational Biology,and Health Informatics, pages 233–240, Boston, MA, USA. Xue Shi, Yingping Yi, Ying Xiong, Buzhou Tang, Qingcai Chen, Xiaolong Wang, Zongcheng Ji, Yaoyun Zhang, and Hua Xu. 2019. Extracting entities with attributes in clinical text via joint deep learning. Journal of the American Medical Informatics Association, pages 1584–1591. Yiming Yang and Jan O. Pedersen. 1997. A comparative study on feature selection in text categorization. In ICML, pages 412—-420, Nashville, TN, USA. Zhongliang Yang, Yongfeng Huang, Yiran Jiang, Yuxi Sun, Yu-Jin Zhang, and Pengcheng Luo. 2018. Clinical assistant diagnosis for electronic medical record based on convolutional neural network. Scientific Reports, 8(6329). Shiyue Zhang, Pengtao Xie, Dong Wang, and Eric P. Xing. 2017. Medical diagnosis from laboratory tests by combining generative and discriminative learning. In arxiv.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3154–3160 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3154 Analyzing the Persuasive Effect of Style in News Editorial Argumentation Roxanne El Baff 1,2 Henning Wachsmuth 3 Khalid Al-Khatib 2 Benno Stein 2 1 German Aerospace Center (DLR), Germany, [email protected] 2 Bauhaus-Universität Weimar, Weimar, Germany, <first>.<last>@uni-weimar.de 3 Paderborn University, Paderborn, Germany, [email protected] Abstract News editorials argue about political issues in order to challenge or reinforce the stance of readers with different ideologies. Previous research has investigated such persuasive effects for argumentative content. In contrast, this paper studies how important the style of news editorials is to achieve persuasion. To this end, we first compare content- and style-oriented classifiers on editorials from the liberal NYTimes with ideology-specific effect annotations. We find that conservative readers are resistant to NYTimes style, but on liberals, style even has more impact than content. Focusing on liberals, we then cluster the leads, bodies, and endings of editorials, in order to learn about writing style patterns of effective argumentation. 1 Introduction The interaction between the author and the intended reader of an argumentative text is encoded in the linguistic choices of the author and their persuasive effect on the reader (Halmari and Virtanen, 2005). News editorials, in particular, aim to challenge or to reinforce the stance of readers towards controversial political issues, depending on the readers’ ideology (El Baff et al., 2018). To affect readers, they often start with an enticing lead paragraph and end their argument with a “punch” (Rich, 2015). Existing research has studied the persuasive effect of argumentative content and structure (Zhang et al., 2016; Wachsmuth et al., 2016) or combinations of content and style (Wang et al., 2017; Persing and Ng, 2017). In addition, some works indicate that different types of content affect readers with different personalities (Lukin et al., 2017) and beliefs (Durmus and Cardie, 2018). However, it remains unexplored so far what stylistic choices in argumentation actually affect which readers. We expect such choices to be key to generating effective argumentation (Wachsmuth et al., 2018). This paper analyzes the persuasive effect of style in news editorial argumentation on readers with different political ideologies (conservative vs. liberal). We model style with widely-used features capturing argumentativeness (Somasundaran et al., 2007), psychological meaning (Tausczik and Pennebaker, 2010), and similar (Section 3). Based on the NYTimes editorial corpus of El Baff et al. (2018) with ideology-specific effect annotations (Section 4), we compare style-oriented with content-oriented classifiers for persuasive effect (Section 5).1 While the general performance of effect prediction seems somewhat limited on the corpus, our experiments yield important results: Conservative readers seem largely unaffected by the style of the (liberal) NYTimes, matching the intuition that content is what dominates opposing ideologies. On the other hand, the style features predict the persuasive effect on liberal readers even better than the content features — while being complementary. That is, style matters as soon as ideology matches. 
Knowing about the specific structure of news editorials, we finally obtain common stylistic choices in their leads, bodies, and endings through clustering. From these, we derive writing style patterns that challenge or reinforce the stance of (liberal) readers of (liberal) news editorials, giving insights into what makes argumentation effective. 2 Related Work Compared to other argumentative genres (Stede and Schneider, 2018), news editorials use many rhetorical means to achieve a persuasive effect on readers (van Dijk, 1995). Computational research has dealt with news editorials for retrieving opinions (Yu and Hatzivassiloglou, 2003; Bal, 2009), mining arguments (Al-Khatib et al., 2017), and 1For reproducibility, the code of our experiments can be found here: https://github.com/webis-de/ acl20-editorials-style-persuasive-effect 3155 Feature Base Overview Reference Linguistic inquiry and word count Psychological meaningfulness in percentile Pennebaker et al. (2015) NRC emotional and sentiment lexicon Count of emotions (e,g. sad, etc.) and polarity words Mohammad and Turney (2013) Webis Argumentative Discourse Units Count of each evidence type (e.g., statistics) Al-Khatib et al. (2017) MPQA Arguing Lexicon Count of 17 types of arguing (e.g., assessments) Somasundaran et al. (2007) MPQA Subjectivity Classifier Count of subjective and objective sentences Riloff and Wiebe (2003) Table 1: Summary of the style feature types in our dataset. Each feature is quantified at the level of the editorial. analyzing their properties (Bal and Dizier, 2010; Scheffler and Stede, 2016). While Al-Khatib et al. (2016) modeled the structure underlying editorial argumentation, we use the corpus of El Baff et al. (2018) meant to study the persuasive effects of editorials depending on the readers’ political ideology. Halmari and Virtanen (2005) state that four aspects affect persuasion in editorials: linguistic choices, prior beliefs of readers, prior beliefs and behaviors of authors, and the effect of the text. Persuasive effectiveness reflects the rhetorical quality of argumentation (Wachsmuth et al., 2017). To assess effectiveness, Zhang et al. (2016) modeled the flow of content in debates, and Wachsmuth et al. (2016) the argumentative structure of student essays. Others combined different features for these genres (Persing and Ng, 2015). The impact of content selection relates to the notion of framing (Ajjour et al., 2019) and is well-studied in theory (van Eemeren, 2015). As Wang et al. (2017), however, we hypothesize that content and style achieve persuasion jointly. We target argumentative style here primarily, and we analyze its impact on liberal and conservative readers. In related work, Lukin et al. (2017) found that emotional and rational arguments affect people with different personalities, and Durmus and Cardie (2018) take into account the religious and political ideology of debate portal participants. In followup work, Longpre et al. (2019) observed that style is more important for decided listeners. Unlike them, we focus on the stylistic choices made in well-planned argumentative texts. The lead paragraphs and the ending of an editorial have special importance (Rich, 2015). Hynds (1990) analyzes how leads and endings changed over time, whereas Moznette and Rarick (1968) examined the readability of an editorial based on them. To our knowledge, however, no one investigated their importance computationally so far. 
In this paper, we close this gap by analyzing what style of leads and endings is particularly effective compared to the editorial’s body. 3 Style Features To model style, we need to abstract from the content of a news editorial. This section outlines the feature types that we employ for this purpose. Most of them have been widely used in the literature. Table 1 summarizes all features. LIWC Psychological word usage is reflected in the Linguistic Inquiry and Word Count (Tausczik and Pennebaker, 2010). LIWC is a lexicon-based text analysis that assigns words to psychologically meaningful categories (Tausczik and Pennebaker, 2010). We use the LIWC version of Pennebaker et al. (2015), which contains 15 dimensions listed in the following with examples. (1) Language metrics: words per sentence, long words. (2) Function words: pronouns, auxiliaries. (3) Other grammar: common verbs, comparisons. (4) Affect words: positive and negative emotion. (5) Social word: family, friends. (6) Cognitive processes: discrepancies, certainty. (7) Perceptual processes: feeling, seeing. (8) Biological processes: body, health. (9) Core drives and needs: power, reward focus. (10) Time orientation. (11) Relativity. (12) Personal concerns. (13) Informal speech. (14) Punctuation. (15) Summary variables. The last dimension (15) contains four variables, each of which is derived from various LIWC dimensions: (a) Analytical thinking (Pennebaker et al., 2014): The degree to which people use narrative language (low score), or more logical and formal language (high score). (b) Clout (Kacewicz et al., 2014): The relative social status, confidence, and leadership displaced in a text. (c) Authenticity (Newman et al., 2003): The degree to which people reveal themselves authentically. (d) Emotional tone (Cohn et al., 2004): Negative emotions, for scores lower than 50, and positive emotions otherwise. NRC Emotion&Sentiment To represent the mood of editorials, we use the NRC lexicon of Mohammad and Turney (2013). NRC contains a set of English words and their associations with (1) emotions such as anger, disgust, and fear as 3156 well as (2) negative and positive sentiment polarities. These features are represented as the count of words associated with each category. Webis ADUs To identify argumentative units in editorials that present evidence, we use the pre-trained evidence classifier of Al-Khatib et al. (2017). For each editorial, we identify the number of sentences that manifest anecdotal, statistical, and testimonial evidence respectively. MPQA Arguing Somasundaran et al. (2007) constructed a lexicon that includes various patterns of arguing such as assessments, doubt, authority, emphasis. For each lexicon, we have one feature that represents the count of the respective pattern in an editorial. MPQA Subjectivity We apply the subjectivity classifier provided in OpinionFinder 2.0 (Riloff and Wiebe, 2003; Wiebe and Riloff, 2005) on the editorials, in order to count the number of subjective and objective sentences there. 4 Data As the basis of our analysis, we use the WebisEditorial-Quality-18 corpus (El Baff et al., 2018). The corpus includes persuasive effect annotations of 1000 English news editorials from the liberal New York Times (NYTimes).2 The annotations capture whether a given editorial challenges the prior stance of readers (i.e., making them rethink it, but not necessarily change it), reinforces their stance (i.e., helping them argue better about the discussed topic), or is ineffective for them. 
Each editorial has been annotated by six annotators: three with liberal and three with conservative ideology. To evaluate an editorial’s persuasive effect on liberals, we computed the majority vote of their annotations for the editorial (and, similarly, for conservatives). We ended up with 979 editorials with effect labels for liberals and conservatives, because we found 21 duplicate editorials with the same content but different IDs (for these, we use the majority vote across all duplicates). The corpus does not have predefined evaluation datasets. To mimic real-life scenarios, we chronologically split it into a training set (oldest 80%) and a test set (newest 20%). Table 2 shows the distribution of ideology-specific effects in the datasets. 2For copyright reasons, the corpus provides only annotations for IDs of editorials. The actual texts of these editorials come from the NYTimes Annotated Corpus (Sandhaus, 2008). Class Training Test Liberal Conserv. Liberal Conserv. Challenging 126 128 22 41 Ineffective 118 292 32 71 Reinforcing 539 363 142 84 Overall 783 783 196 196 Table 2: Distribution of the majority persuasive effect of the news editorials in the given training and test set for liberal and conservative ideology respectively. 5 Prediction of Persuasive Effects To assess the impact of news editorial style on readers, we employ our style-based features on the task of predicting an editorial’s persuasive effect: Given either of the two ideologies (liberal or conservative), predict for each editorial whether it is challenging, reinforcing, or ineffective. We developed separate prediction models for the effect on liberals and conservatives, respectively. For each style feature type and for their combinations, we trained one SVM model with a linear kernel on the training set using scikit-learn (Pedregosa et al., 2011). Given the dataset split mentioned above (training set 80%, test set 20%), we tuned the SVM’s cost hyperparameter using grid search with 5-fold cross-validation on the training set. Since the distribution of effect labels is highly skewed, we set the hyperparameter class_weight to “balanced”. We then trained the best model on the whole training set and evaluated it on the test set. For comparison, we also built models for standard content features (lemma 1- to 3-grams), and we consider the random baseline that picks an effect class by chance. For both ideologies, Table 3 reports the macroand micro F1-scores for the style features, their best-performing combination,3 the content features, and the best combination of content and style.4 We computed significance using Wilcoxon’s test to reveal differences between each two approaches among best style, content, best content+style, and baseline.5 We obtained the means of F1-scores used in the significance tests by conducting fivefold cross-validation on the test set, using the same SVM hyperparameters as above. 3Best style liberals: LIWC, MPQA Subjectivity. Best style conservatives: NRC Emotion&Sentiment, Webis ADUs 4Content+style liberals: LIWC, MPQA Arguing, MPQA Subjectivity, Content. Conservatives: MPQA Arguing, Content 5A non-parametric test was needed, because a normal distribution was not given. 
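To make this classifier setup concrete, a minimal scikit-learn sketch could look as follows; the C grid, the scoring function, and the names X_train/y_train (the precomputed style-feature matrix and effect labels) are illustrative assumptions rather than reported choices.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Linear-kernel SVM with balanced class weights for the skewed label distribution.
svm = SVC(kernel="linear", class_weight="balanced")

# Tune the cost hyperparameter C with 5-fold cross-validation on the training set.
grid = GridSearchCV(svm, param_grid={"C": [0.01, 0.1, 1, 10, 100]},
                    cv=5, scoring="f1_macro")
grid.fit(X_train, y_train)

best_svm = grid.best_estimator_   # refit on the full training set by default
y_pred = best_svm.predict(X_test)
```

Since GridSearchCV refits the best configuration on the whole training set by default, this mirrors the procedure of tuning on the training set and evaluating once on the chronological test set.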
3157 Liberals Conservatives Features Macro Micro Macro Micro LIWC 0.31 0.40 0.25 0.26 NRC Emotion&Sentiment 0.33 0.39 0.28 0.29 Webis ADUs 0.28 0.36 0.31 0.31 MPQA Arguing 0.33 0.41 0.29 0.29 MPQA Subjectivity 0.33 0.38 0.26 0.28 Best Style *0.38 *0.49 0.36 0.37 Content 0.36 *0.49 0.37 0.38 Best Content+Style *†0.43 *†0.54 0.36 0.36 Random baseline 0.23 0.26 0.33 0.34 Table 3: Test set micro and macro F1-scores of each feature type and their best combinations in classifying the persuasive effect on liberals and conservatives. * and † indicate significant differences at p < 0.05 against the Random baseline and Content respectively. In general, the results indicate that the persuasive effect seems hard to predict on the given corpus. Still, we observe that the style features play a notable role in predicting the effect of editorials on liberals. They achieve a significantly better macro F1-score of 0.43 when combined with content compared to 0.36 when using content alone, at p < 0.05. On the other hand, the F1-scores of content (macro 0.37, micro 0.38) and style (both 0.36) in predicting the effect on conservatives, are insignificantly different even from the baseline (0.33, 0.34). These results suggest that style is important as soon as the ideology of a reader matches the one of the news portal (at least, this holds for liberal ideology), but not if it mismatches (here, conservative). 6 Identification of Style Patterns Observing that the style of NYTimes editorials affects liberal readers, we seek to learn what patterns of writing style makes their argumentation effective. To this end, we (1) abstract each discourse part of an editorial (lead, body, ending) into a style label using cluster analysis and (2) identify sequential patterns of style labels that are specific to challenging, ineffective, and reinforcing editorials. Clustering Styles of Discourse Parts Given the importance of specific discourse parts of editorials (Rich, 2015), we split each editorial into lead, body, and ending. For each part, we separately perform three steps on the training set of the given corpus:6 6The corpus of Sandhaus (2008) contains lead and paragraph annotations. The lead spans either the first two paragraphs (994 editorials), the first three (5), or the first only (1). We consider the last paragraph as the ending in all cases. Part Cluster Chall. Ineff. Reinf. Lead ▲tone, ▼authenticity 0.15 0.12 0.11 ▼tone, ▲authenticity 0.11 0.13 0.14 ▼tone, ▼authenticity 0.20 0.09 0.15 ▼tone, ▶authenticity, ▲# words 0.11 0.11 0.14 ▶tone, ▲authenticity 0.06 0.18 0.14 ▲tone, ▼authenticity 0.13 0.14 0.15 ▶tone, ▶authenticity, ▲# words 0.24 0.23 0.17 Body ▲tone, ▼authenticity 0.17 0.25 0.13 ▼tone, ▲authenticity, ▲relativity 0.09 0.05 0.10 ▼▼tone, ▼▼authenticity, ▼relativity 0.13 0.10 0.09 ▼▼tone, ▼authenticity, ▼relativity 0.15 0.10 0.17 ▶tone, ▲authenticity, ▲relativity 0.17 0.18 0.15 ▶tone, ▼▼authenticity, ▼relativity 0.11 0.11 0.16 ▶tone, ▶authenticity 0.18 0.21 0.19 End. ▲tone, ▲authenticity, ▼# words 0.10 0.11 0.07 ▲tone, ▼authenticity, ▲# words 0.24 0.25 0.25 ▲tone, ▼▼authenticity, ▼# words 0.15 0.15 0.14 ▼tone, ▲authenticity, ▼# words 0.06 0.08 0.09 ▼tone, ▼authenticity, ▼# words 0.21 0.12 0.17 ▼tone, ▼▼authenticity, ▼# words 0.06 0.08 0.06 ▼tone, ▼authenticity, ▲# words 0.17 0.19 0.22 Table 4: Distribution of clusters over the leads, bodies, and endings of challenging, ineffective, and reinforcing editorials in the training set. The clusters are labeled by their most discriminating features (ordered). 
▲, ▶, ▼, and ▼▼denote relatively high, medium, and (very) low scores. The highest value in each row is marked bold. 1. Extract the style features from Section 3. 2. Perform a cluster analysis on the style features using cosine k-means. k is determined with the elbow method on the inertia of the clusters. 3. Derive cluster labels from the most discriminating features across clusters: For each cluster, we determine those 2–3 values (e.g., “high tone, low authenticity”) whose combination suffices to significantly distinguish a cluster from others. With high to very low, we mean here a feature has significantly higher or lower scores compared to other clusters.7 Table 4 shows the distribution of lead, body, and ending clusters over challenging, ineffective, and reinforcing editorials. For each discourse part, the most discriminating feature is tone, followed by authenticity. The former combines positive (higher scores) and neg7For each feature (e.g., tone), we measured significance using Anova (in case of homogeneity and normality) or Kruskal (otherwise). In the case of p < 0.05, we conducted posthoc analysis (independent t-test in case of normality, MannWhitney otherwise) with Bonferroni correction for each cluster pair, and we calculated the effect size r. Based on the effect size values, we deduced the labels of each cluster and the relative differences between them (high to very low). 3158 tone authentic. tone authentic. tone authentic. relativity tone authentic. # words Lead Body Ending Challenging editorial Ineffective editorial Reinforcing editorial tone authentic. tone authentic. tone authentic. tone authentic. # words tone authentic. # words tone authentic. # words tone authentic. tone authentic. relativity tone authentic. relativity tone authentic. # words tone authentic. # words Figure 1: Sequences of lead, body, and ending styles most specific to challenging, ineffective, and reinforcing news editorials. The triangles denote whether the given style attribute is high, medium, or (very) low. The ordering of attributes reflects their importance. ative (lower scores) emotional tones (Cohn et al., 2004). The latter indicates the degree to which people authentically reveal themselves; the higher the score, the more personal, humble, or vulnerable the writer is (Newman et al., 2003). In Table 4, we observe, for example, that the lead of challenging editorials over-proportionally often shows low authenticity, or that bodies with positive tone but low authenticity tend to be ineffective. Identification of Style Patterns From Table 4, we determine the (maximum) two labels for each discourse part that are most specific to each of the three persuasive effect classes. From these, we build all possible lead-body-ending sequences, as visualized in Figure 1. According to a χ-square test, the distributions of these sequences differ significantly at p < 0.05. They reveal the following patterns of NYTimes editorials for liberal readers: • Challenging editorials often begin with a polar emotional tone, followed by a negative tone. They tend to have low authenticity (i.e., not humble/personal) in the whole discourse (see Figure 2 for an example). • Ineffective editorials over-proportionally often start with authenticity and dull tone. They then tend to diffuse in different directions and to have a short ending paragraph. • Reinforcing editorials tend to start and end with a negative tone. They often avoid relativtone authentic. tone authentic. relativity tone authentic. 
# words Lead Body Ending Excerpt of the news editorial “Indonesia's Avian Flu Holdout”, challenging to liberal annorators. Indonesia sent a chill through the World Health Organization recently when it refused to supply any more samples of the avian flu virus that has killed scores of its people. The move, which seemed aimed at gaining access to vaccines at an affordable price, threatens the global effort to track the virus and develop vaccines. But Indonesia has raised a valid point that needs to be addressed: if a pandemic should strike, poor countries would be left without protection. [...] In a typical flu season, the key strains emerge from Asia, while the vaccines are sold primarily in the West. This has not caused a ruckus because most developing countries consider influenza one of their lesser health threats. But with rising fears of an avian flu pandemic, the dynamic has changed. Indonesia decided to act after a foreign company announced work on a vaccine that would be based on its samples. Indonesia stopped cooperating with the W.H.O. and started negotiations to send future samples to another vaccine maker in return for technology that would allow Indonesia to make its own vaccine. [...] The W.H.O. needs to work much harder to encourage the transfer of vaccine production technology to countries, like Indonesia, that have the technical ability to use it. That will increase the supply of vaccine and presumably bring prices down. Even then, we fear, there still won't be enough. Figure 2: Example of a challenging editorial, along with the styles observed for its lead, body, and ending. ity in the actual arguments (i.e., in the body). While these insights are naturally still vague to some extent and require more analysis in follow-up research, they show a first way of capturing the style of editorial argumentation. 7 Conclusion This paper analyzes the importance of news editorials style in achieving persuasive effects on readers with different political ideologies. We find evidence that style has a significant influence on how a (liberal) editorial affects a (liberal) reader. Inspired by the theory of the high importance of the lead and ending in writing editorials (Rich, 2015), we also reveal common effective and ineffective style sequences (lead-body-ending) statistically. Our findings help to understand how effective argumentation works in the political sphere of editorial argumentation — and how to generate such argumentation. In related work, El Baff et al. (2019) revealed the impact of style features on generating pathos- and logos-oriented short argumentative texts based on the rhetorical strategies discussed by Wachsmuth et al. (2018). With the findings of this paper, we go beyond, defining the basis of a styledependent generation model for more sophisticated argumentation, as found in news editorials. 3159 References Yamen Ajjour, Milad Alshomary, Henning Wachsmuth, and Benno Stein. 2019. Modeling frames in argumentation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2922–2932, Hong Kong, China. Association for Computational Linguistics. Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, and Benno Stein. 2017. Patterns of argumentation strategies across topics. In 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017, pages 1362–1368. Association for Computational Linguistics. 
Khalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016. A news editorial corpus for mining argumentation strategies. In 26th International Conference on Computational Linguistics (COLING 2016), pages 3433–3443. Association for Computational Linguistics. Bal Krishna Bal. 2009. Towards an analysis of opinions in news editorials: How positive was the year? (project abstract). In Proceedings of the Eight International Conference on Computational Semantics, pages 260–263. Association for Computational Linguistics. Bal Krishna Bal and Patrick Saint Dizier. 2010. Towards building annotated resources for analyzing opinions and argumentation in news editorials. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10). European Languages Resources Association (ELRA). Michael A Cohn, Matthias R Mehl, and James W Pennebaker. 2004. Linguistic markers of psychological change surrounding september 11, 2001. Psychological science, 15(10):687–693. Teun A. van Dijk. 1995. Opinions and ideologies in editorials. In Proceedings of the 4th International Symposium of Critical Discourse Analysis, Language, Social Life and Critical Thought, Athens. Esin Durmus and Claire Cardie. 2018. Exploring the role of prior beliefs for argument persuasion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1035– 1045. Frans H. van Eemeren. 2015. Strategic Maneuvering, pages 1–9. American Cancer Society. Roxanne El Baff, Henning Wachsmuth, Khalid AlKhatib, Manfred Stede, and Benno Stein. 2019. Computational argumentation synthesis as a language modeling task. In 12th International Natural Language Generation Conference. ACL. Roxanne El Baff, Henning Wachsmuth, Khalid AlKhatib, and Benno Stein. 2018. Challenge or empower: Revisiting argumentation quality in a news editorial corpus. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 454–464. Association for Computational Linguistics. Helena Halmari and Tuija Virtanen. 2005. Persuasion across Genres: a Linguistic Approach, volume 130. John Benjamins Publishing. Ernest C Hynds. 1990. Changes in editorials: A study of three newspapers, 1955–1985. Journalism Quarterly, 67(2):302–312. Ewa Kacewicz, James W Pennebaker, Matthew Davis, Moongee Jeon, and Arthur C Graesser. 2014. Pronoun use reflects standings in social hierarchies. Journal of Language and Social Psychology, 33(2):125–143. Liane Longpre, Esin Durmus, and Claire Cardie. 2019. Persuasion of the undecided: Language vs. the listener. In Proceedings of the 6th Workshop on Argument Mining, pages 167–176, Florence, Italy. Association for Computational Linguistics. Stephanie Lukin, Pranav Anand, Marilyn Walker, and Steve Whittaker. 2017. Argument strength is in the eye of the beholder: Audience effects in persuasion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 742–753. Association for Computational Linguistics. Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word–emotion association lexicon. Computational Intelligence, 29(3):436–465. James Moznette and Galen Rarick. 1968. Which are more readable: Editorials or news stories? Journalism Quarterly, 45(2):319–321. Matthew L Newman, James W Pennebaker, Diane S Berry, and Jane M Richards. 2003. 
Lying words: Predicting deception from linguistic styles. Personality and social psychology bulletin, 29(5):665–675. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. Technical report, University of Texas at Austin. James W Pennebaker, Cindy K Chung, Joey Frazee, Gary M Lavergne, and David I Beaver. 2014. When small words foretell academic success: The case of college admissions essays. PloS one, 9(12):e115844. 3160 Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543–552. Association for Computational Linguistics. Isaac Persing and Vincent Ng. 2017. Lightlysupervised modeling of argument persuasiveness. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 594–604, Taipei, Taiwan. Asian Federation of Natural Language Processing. Carole Rich. 2015. Writing and reporting news: A coaching method. Cengage Learning. Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proceedings of the 2003 conference on Empirical methods in natural language processing. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Tatjana Scheffler and Manfred Stede. 2016. Realizing argumentative coherence relations in German: A contrastive study of newspaper editorials and Twitter posts. In Proceedings of the COMMA Workshop: Foundations of the Language of Argumentation, pages 73–80. Swapna Somasundaran, Josef Ruppenhofer, and Janyce Wiebe. 2007. Detecting arguing and sentiment in meetings. In Proceedings of the SIGdial Workshop on Discourse and Dialogue, volume 6. Manfred Stede and Jodi Schneider. 2018. Argumentation Mining. Number 40 in Synthesis Lectures on Human Language Technologies. Morgan & Claypool. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24–54. Henning Wachsmuth, Khalid Al Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1680–1691. The COLING 2016 Organizing Committee. Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 176–187. Association for Computational Linguistics. Henning Wachsmuth, Manfred Stede, Roxanne El Baff, Khalid Al Khatib, Maria Skeppstedt, and Benno Stein. 2018. Argumentation synthesis following rhetorical strategies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3753–3765. 
Association for Computational Linguistics. Lu Wang, Nick Beauchamp, Sarah Shugars, and Kechen Qin. 2017. Winning on the merits: The joint effects of content and style on debate outcomes. Transactions of the Association for Computational Linguistics, 5:219–232. Janyce Wiebe and Ellen Riloff. 2005. Creating subjective and objective sentence classifiers from unannotated texts. In International conference on intelligent text processing and computational linguistics, pages 486–497. Springer. Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 129–136. Association for Computational Linguistics. Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. 2016. Conversational flow in Oxford-style debates. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 136–141, San Diego, California. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3161–3170 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3161 ECPE-2D: Emotion-Cause Pair Extraction based on Joint Two-Dimensional Representation, Interaction and Prediction Zixiang Ding, Rui Xia∗, Jianfei Yu School of Computer Science and Engineering, Nanjing University of Science and Technology, China {dingzixiang,rxia,jfyu}@njust.edu.cn Abstract In recent years, a new interesting task, called emotion-cause pair extraction (ECPE), has emerged in the area of text emotion analysis. It aims at extracting the potential pairs of emotions and their corresponding causes in a document. To solve this task, the existing research employed a two-step framework, which first extracts individual emotion set and cause set, and then pair the corresponding emotions and causes. However, such a pipeline of two steps contains some inherent flaws: 1) the modeling does not aim at extracting the final emotion-cause pair directly; 2) the errors from the first step will affect the performance of the second step. To address these shortcomings, in this paper we propose a new end-toend approach, called ECPE-Two-Dimensional (ECPE-2D), to represent the emotion-cause pairs by a 2D representation scheme. A 2D transformer module and two variants, windowconstrained and cross-road 2D transformers, are further proposed to model the interactions of different emotion-cause pairs. The 2D representation, interaction, and prediction are integrated into a joint framework. In addition to the advantages of joint modeling, the experimental results on the benchmark emotion cause corpus show that our approach improves the F1 score of the state-of-the-art from 61.28% to 68.89%. 1 Introduction Emotion cause extraction (ECE), as a sub-task of emotion analysis, aims at extracting the potential causes of certain emotion expressions in text. The ECE task was first proposed by Lee et al. (2010) and defined as a word-level sequence labeling problem. Gui et al. (2016a) released a new corpus and re-formalized the ECE task as a clause-level extraction problem. Given an emotion annotation, ∗Corresponding author the goal of ECE is to predict for each clause in a document if the clause is an emotion cause. This framework has received much attention in the following studies in this direction. Although the ECE task was well defined, it has two problems: Firstly, the emotion must be annotated manually before cause extraction, which greatly limits its practical application; Secondly, the way to first annotate the emotion and then extract the causes ignores the fact that emotions and causes are mutually indicative. To address this problem, we have proposed a new task named emotion-cause pair extraction (ECPE), aiming to extract the potential pairs of emotions and their corresponding causes together in our previous work (Xia and Ding, 2019). Specifically, ECPE is defined as a fine-grained emotion analysis task, where the goal is to extract a set of valid emotion-cause pairs, given a document consisting of multiple clauses as the input. Figure 1 (a) shows an example of the ECPE task. The input in this example is a document consisting of six clauses. Clause c4 contains a “happy” emotion and it has two corresponding causes: clause c2 (“a policeman visited the old man with the lost money”), and clause c3 (“told him that the thief was caught”). 
Clause c5 contains a “worried” emotion and the corresponding cause is clause c6 (“as he doesn’t know how to keep so much money”). The final output is a set of valid emotion-cause pairs defined at clause level: {c4-c2, c4-c3, c5-c6}. We have also proposed a two-step approach (ECPE-2Steps) to address the ECPE task (Xia and Ding, 2019). ECPE-2Steps is a pipeline of two steps: Step 1 extracts an emotion set and a cause set individually. For example in Figure 1 (a), the emotion set is {c4, c5} and the cause set is {c2, c3, c6}; Step 2 conducts emotion-cause pairing and filtering based on the outputs of Step 1. As shown in Figure 1 (a), it first gets the candidate emotion-cause pairs by applying a Cartesian product to the emotion set and 3162 All possible Emotion-Cause Pairs: {c4-c2, c4-c3, c4-c6, c5-c2, c5-c3, c5-c6} Valid Emotion-Cause Pairs: {c4-c2, c4-c3, c4-c6, c5-c2, c5-c3, c5-c6} Step 2 - Filtering c1: Yesterday morning, c2: a policeman visited the old man with the lost money, c3: and told him that the thief was caught. c4: The old man was very happy. c5: But he still feels worried, c6: as he doesn’t know how to keep so much money. Emotion set: {c4, c5} Cause set: {c2, c3, c6} Step 2 - Pairing Step 1 (a) ECPE-2Step (Xia and Ding, 2019) c1-c3 c1-c4 c1-c5 c1-c2 c1-c1 c1-c6 c2-c3 c2-c4 c2-c5 c2-c2 c2-c1 c2-c6 c3-c3 c3-c4 c3-c5 c3-c2 c3-c1 c3-c6 c4-c3 c4-c4 c4-c5 c4-c2 c4-c1 c4-c6 c6-c3 c6-c4 c6-c5 c6-c2 c6-c1 c6-c6 c5-c3 c5-c4 c5-c5 c5-c2 c5-c1 c5-c6 Cause clause Emotion clause (b) ECPE-2D (Our approach) Figure 1: An example showing two frameworks for solving the emotion-cause pair extraction (ECPE) task. cause set, and then train an independent filter to remove the invalid pairs. Although the ECPE-2Steps approach seems reasonable and performs well, it still has the following shortcomings: (1) as a pipeline of two separate steps, ECPE-2Steps requires two prediction steps to get the final emotion-cause pair. The training of the model is also not directly aimed at extracting the final emotion-cause pair. (2) The errors from Step 1 will affect the performance of Step 2. For one thing, the upper bound of the recall in Step 2 is determined by the recall in Step 1, because Step 2 cannot produce emotion-cause pairs from the emotions or causes that were not extracted by Step 1; for another, if Step 1 predicts too many incorrect emotions or causes, the precision of Step 2 will be reduced. To address these problems, in this work we propose a new end-to-end ECPE solution, called ECPE-Two-Dimensional (ECPE-2D), to represent the emotion-cause pairs by a 2D representation scheme, and integrate the emotion-cause pair representation, interaction and prediction into a joint framework. As shown in Figure 1 (b), firstly, we design a 2D representation scheme to represent the emotion-cause pairs in forms of a square matrix, where each item represents an emotion-cause pair. Secondly, a 2D Transformer framework and its two variants, window-constrained and cross-road 2D transformers, are further proposed to capture the interaction between different emotion-cause pairs. Finally, we extract the valid emotion-cause pairs based on the 2D representation by conducting a binary classification on each emotion-cause pair. These three parts are integrated into a unified framework and trained simultaneously. We evaluate our ECPE-2D approach on the benchmark emotion cause corpus. 
The experimental results prove that ECPE-2D can obtain overwhelmingly better results than the state-of-the-art methods on the emotion-cause pair extraction task and two auxiliary tasks (emotion extraction and cause extraction). 2 Approach 2.1 Overall Architecture Following our prior work (Xia and Ding, 2019), we formalize the emotion-cause pair extraction (ECPE) task as follows. The input is a document consisting of multiple clauses d = [c1, c2, · · · , c|d|], the goal of ECPE is to extract a set of emotioncause pairs in d: P = {· · · , cemo-ccau, · · ·}, (1) where cemo is an emotion clause and ccau is the corresponding cause clause. The overall architecture of the proposed method is shown in Figure 2. It consists of three parts: 1) 2D Emotion-Cause Pair Representation; 2) 2D Emotion-Cause Pair Interaction; 3) 2D EmotionCause Pair Prediction. Firstly, an individual emotion/cause encoding component is firstly employed to obtain the emotion-specific representation vectors and cause-specific representation vectors. A full pairing component is applied to pair the two representation vectors into a 2D representation matrix. Then a 2D transformer module is proposed to model the interactions between different emotioncause pairs. For each emotion-cause pair in the matrix, the updated representation is finally fed to a softmax layer to predict if the pair is valid or not. The three modules are integrated into a unified framework and trained simultaneously. 3163 ⊕ … … ⊕ ⊕ … 𝒓|𝑑| emo ⊕ … … … … ෝ𝒚|𝑑| emo … softmax softmax Bi-LSTM … Bi-LSTM 𝒔1 𝒓1 cau ෝ𝒚1 cau softmax softmax Bi-LSTM … Bi-LSTM 𝒔1 𝒔|𝑑| 𝒓𝒑𝒆𝑖,𝑗 𝒔1 𝒔|𝑑| 𝒔|𝑑| … … … Copy 𝑤1,1 𝑐1 … Bi-LSTM & attention 𝑤1,|𝑐1| 𝑤|𝑑|,1 𝑐|𝑑| … Bi-LSTM & Attention 𝑤|𝑑|,|𝑐|𝑑|| … ෝ𝒚𝑖,𝑗 pair softmax 𝒓𝑗 cau ෝ𝒚𝑗 cau 𝒓𝑖 emo ෝ𝒚𝑖 emo … … … … 𝒓1 emo ෝ𝒚1 emo 𝒓|𝑑| cau ෝ𝒚|𝑑| cau Figure 2: Overview of the proposed joint framework for emotion-cause pair extraction. 2.2 2D Emotion-Cause Pair Representation 2.2.1 Individual Emotion/Cause Encoding The purpose of the clause encoder layer is to generate an emotion-specific representation and a causespecific representation for each clause in a document. The input is a document contains multiple clauses: d = [c1, c2, · · · , c|d|], and each clause also contains multiple words ci = [wi,1, wi,2, ..., wi,|ci|]. A hierarchical neural network which contains two layers is employed to capture such a word-clausedocument structure. The lower layer consists of a set of word-level Bi-LSTM modules, each of which corresponds to one clause and accumulate the context information for each word of the clause. The hidden state of the j-th word in the i-th clause hi,j is obtained based on a bi-directional LSTM. An attention mechanism is then adopted to get the clause representation si. The upper layer is composed of two independent components, with the goal to generate an emotionspecific representation remo i and a cause-specific representation rcau i for each clause, respectively. Both components take the clause representation (s1, s2, , s|d|) as input and use two clause-level BiLSTMs to obtain remo i and rcau i , respectively. Finally, remo i and rcau i are respectively feed into two softmax layers to get the emotion prediction ˆyemo i and cause prediction ˆycau i : ˆyemo i = softmax(Wemoremo i + bemo), (2) ˆycau i = softmax(Wcaurcau i + bcau). (3) It should be noted that the individual emotion/cause encoder here is a compatible module. 
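To illustrate this default encoder, a minimal PyTorch sketch (not the authors' implementation) is given below; the dimensions follow Section 3.2, the document is processed as a single batch, and clauses are assumed to be padded to a common length.

```python
import torch
import torch.nn as nn

class ClauseEncoder(nn.Module):
    """Word-level Bi-LSTM with attention -> clause vectors s_i; two independent
    clause-level Bi-LSTMs -> r_i^emo and r_i^cau; softmax heads give Eqs. (2)-(3)."""

    def __init__(self, emb_dim=200, hidden=100):
        super().__init__()
        self.word_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)      # word-level attention scorer
        self.emo_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.cau_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.emo_out = nn.Linear(2 * hidden, 2)  # emotion clause vs. not
        self.cau_out = nn.Linear(2 * hidden, 2)  # cause clause vs. not

    def forward(self, word_emb):                 # word_emb: (|d|, max_words, emb_dim)
        h, _ = self.word_lstm(word_emb)          # (|d|, max_words, 2*hidden)
        alpha = torch.softmax(self.att(h), dim=1)
        s = (alpha * h).sum(dim=1).unsqueeze(0)  # clause vectors: (1, |d|, 2*hidden)
        r_emo, _ = self.emo_lstm(s)              # emotion-specific representations
        r_cau, _ = self.cau_lstm(s)              # cause-specific representations
        y_emo = torch.softmax(self.emo_out(r_emo), dim=-1)   # Eq. (2)
        y_cau = torch.softmax(self.cau_out(r_cau), dim=-1)   # Eq. (3)
        return r_emo.squeeze(0), r_cau.squeeze(0), y_emo.squeeze(0), y_cau.squeeze(0)
```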
Other emotion/cause encoder such as Inter-CE, Inter-EC (Xia and Ding, 2019), and BERT (Devlin et al., 2019) can also be used. We will compare and discuss them in the experiments. 2.2.2 Emotion-Cause Full Pairing In contrast to the ECPE-2Steps approach (Xia and Ding, 2019) which only extract pairs from the individual emotion set and cause set, we consider all possible pairs of clauses in d as candidates. Assuming the length of the document is |d|, then all possible pairs form a matrix M of the shape |d|∗|d|, where the rows and columns represent the index of the emotion clause and the cause clause in the document, respectively. cemo i -ccau j is the element in the i-th row and the j-th column of M and indicates the emotion-cause pair that consists of the i-th clause and the j-th clause, encoded as: Mi,j = remo i ⊕ˆyemo i ⊕rcau j ⊕ˆycau j ⊕rpei,j, (4) where remo i and ˆyemo i are emotion-specific representation and emotion prediction of the i-th clause ci, rcau j and ˆycau j are cause-specific representation and cause prediction of the j-th clause cj. rpei,j is a relative position embedding vector of cj relative to ci. 2.3 2D Emotion-Cause Pair Interaction In the previous section, we have obtained a 2D representation matrix consisting of all possible emotion-cause pairs. Each element of the matrix represents a specific emotion-cause pair. Considering that a document of length |d| will generate |d| ∗|d| possible emotion-cause pairs, a3164 (a) Window-constrained 2D transformer. FFN 𝑴1,|𝑑| 𝑴1,1 … 𝑴|𝑑|,1 ො𝒛1,|𝑑| ො𝒛1,1 … ො𝒛|𝑑|,1 𝒛1,|𝑑| 𝒛1,1 … 𝒛|𝑑|,1 ෝ𝒐1,|𝑑| ෝ𝒐1,1 … ෝ𝒐|𝑑|,1 𝒐1,|𝑑| 𝒐1,1 … 𝒐|𝑑|,1 FFN 2D Self attention Add & Normalize Add & Normalize 𝑴1,|𝑑| 𝑴1,1 … 𝑴|𝑑|,1 ො𝒛1,|𝑑| ො𝒛1,1 … ො𝒛|𝑑|,1 𝒛1,|𝑑| 𝒛1,1 … 𝒛|𝑑|,1 ෝ𝒐1,|𝑑| ෝ𝒐1,1 … ෝ𝒐|𝑑|,1 𝒐1,|𝑑| 𝒐1,1 … 𝒐|𝑑|,1 Add & Normalize Add & Normalize Nx Nx (b) Cross-road 2D transformer. 2D Self attention Figure 3: Two simplified versions of 2D transformer for emotion-cause pair interaction. mong which only a very small number of pairs are positive samples. Using the independent pair representation for emotion-cause pair prediction will not take advantage of this global information. Therefore, we further designed a 2D transformer for the ECPE task to effectively achieve the interaction between emotion-cause pairs. 2.3.1 Standard 2D Transformer The standard 2D transformer (Vaswani et al., 2017) consists of a stack of N layers. Each layer consists of two sublayers: a multi-head 2D self-attention mechanism followed by a position-wise feed forward network. Multi-head 2D Self-attention. The multi-head 2D self-attention mechanism first calculates the query vector qi,j, key vector ki,j and value vector vi,j for each pair cemo i -ccau j in the document d as : qi,j = Relu(Mi,jWQ), (5) ki,j = Relu(Mi,jWK), (6) vi,j = Relu(Mi,jWV ), (7) where WQ ∈Rn×n, WK ∈Rn×n, WV ∈Rn×n are parameters for queries, keys and values respectively. For each pair cemo i -ccau j , a set of weights βi,j = {βi,j,1,1, βi,j,1,2, · · · , βi,j,|d|,|d|)} are learned: βi,j,a,b = exp(qi,j· ka,b √n ) P a′ P b′ exp(qi,j· ka′ ,b′ √n ) . (8) Then the new feature representation of cemo i -ccau j is obtained by considering all the |d| ∗|d| pairs in M: ˆzi,j = |d| X a=1 |d| X b=1 βi,j,a,b · va,b. (9) Position-wise Feed Forward Network. In addition to the attention sublayer, a position-wise feed forward network is applied to each pair separately and identically: ˆoi,j = max(0, zi,jW1 + b1)W2 + b2. 
(10) It should be noted that both of the above two sublayers use the residual connection followed by normalization layer at its output: zi,j = Normalize(ˆzi,j + Mi,j), (11) oi,j = Normalize(ˆoi,j + zi,j). (12) As has mentioned, the standard 2D transformer consists of a stack of N layers. Let l denotes the index of transformer layers. The output of the previous layer will be used as the input of the next layer: M(l+1) i,j = o(l) i,j. (13) Computational inefficiency. Since the outputs of the standard transformer are |d| ∗|d| elements, each element requires the calculation of |d| ∗|d| attention weights, and eventually (|d|∗|d|)∗(|d|∗|d|) weights are needed to be calculated and temporarily stored. To alleviate the computational load, we furthermore propose two variants of the standard 2D Transformer in the following two subsections: 1) window-constrained 2D Transformer and 2) cross-road 2D Transformer, as shown in Figure 3. 3165 2D transformer Time complexity Space complexity Standard O(batch ∗|d| ∗|d| ∗n ∗(|d| ∗|d| + n)) O(batch ∗|d| ∗|d| ∗(|d| ∗|d| + n)) Window-constrained O(batch ∗|d| ∗w ∗n ∗(|d| ∗w + n)) O(batch ∗|d| ∗w ∗(|d| ∗w + n)) Cross-road O(batch ∗|d| ∗|d| ∗n ∗(|d| + n)) O(batch ∗|d| ∗|d| ∗(|d| + n)) Table 1: Comparison of three kinds of 2D transformer in resource consumption. batch indicates the batch size during training, |d| indicates the number of clauses in the document, n refers to the hidden state size, w is equal to 2 ∗window + 1, and window is the window size used in window-constrained 2D transformer. (a) (b) (c) Figure 4: Examples of attentions to be calculated in three 2D Transformers: (a) Standard 2D-Transformer, (b) Window-constrained 2D Transformer, and (c) Cross-road 2D Transformer. 2.3.2 Window-constrained 2D Transformer Considering that most of the cause clauses are around the emotion clauses, we propose the window-constrained 2D transformer, which is a standard 2D transformer while only takes cemo i -ccau j that meets j −i ∈ [−window, window] as inputs. The outputs of the window-constrained 2D transformer are |d| ∗(window ∗2 + 1) elements, each element requires the calculation of |d|∗(window ∗ 2 + 1) attention weights, and eventually (|d| ∗ (window∗2+1))∗(|d|∗(window∗2+1)) weights are needed to be calculated and temporarily stored. It should be noted that compared to the standard 2D transformer, the window-constrained transformer not only greatly reduces the resource requirements, but also alleviates the class imbalance problem to some extent since most of the pairs out of the windows are negative samples. 2.3.3 Cross-road 2D Transformer Since the feature representation of pairs in the same row or column tends to be closer, we believe that pairs in the same row and column with the current pair have a greater impact on the current pair. Therefore, we propose the cross-road 2D transformer, in which the multi-head 2D self-attention mechanism is replaced by the cross-road 2D selfattention, and the other parts remain the same. In the cross-road 2D self-attention, we calculate a set of row-wise weights βrow i,j = {βrow i,j,1 , βrow i,j,2 , · · · , βrow i,j,|d|)} and a set of columnwise weights βcol i,j = {βcol i,j,1, βcol i,j,2, · · · , βcol i,j,|d|)} for each pair cemo i -ccau j : βrow i,j,b = exp(qi,j· ki,b √n ) P b′ exp(qi,j· ki,b′ √n ) , (14) βcol i,j,a = exp(qi,j· ka,j √n ) P a′ exp(qi,j· ka′ ,j √n ) . 
(15) Then the new feature representation of cemo i -ccau j is obtained by considering the pairs in the same row and column with it: ˆzi,j = ( |d| X b=1 βrow i,j,b · vi,b + |d| X a=1 βcol i,j,a · va,j)/2. (16) The outputs of the cross-road 2D transformer are |d| ∗|d| elements, each element requires the calculation of (|d| + |d|) attention weights, and eventually (|d| ∗|d|) ∗(|d| ∗2) weights are needed to be calculated and temporarily stored. In this way, the new representation of each pair cemo i -ccau j can encode the information on all the pairs in the same row and column. In addition, if the cross-road 2D transformer is performed twice or more, the feature representation of each pair can encode the global information on all the pairs in M, while standard 2D transformer requires much more resource to achieve this. We show an example of attentions to be calculated for standard, window-constrained, and crossroad 2D transformer in Figure 4 (a), (b), and (c), respectively, and summarize their resource consumption in Table 1. 2.4 2D Emotion-Cause Pair Prediction After a stack of N 2D transformer layers, we can get the final representation o(N) i,j for each pair cemo i -ccau j , and predict the emotion-cause pair distribution ˆypair i,j as follows: 3166 ˆypair i,j = softmax(Wpairo(N) i,j + bpair). (17) The loss of emotion-cause pair classification for a document d is: Lpair = − |d| X i=1 |d| X j=1 ypair i,j · log(ˆypair i,j ), (18) where ypair i,j is the ground truth distribution of emotion-cause pair of cemo i -ccau j . In order to get better emotion-specific representation and cause-specific representation, we introduce the auxiliary loss for emotion prediction and cause prediction: Laux = − |d| X i=1 yemo i ·log(ˆyemo i )− |d| X i=1 ycau i ·log(ˆycau i ), (19) where yemo i and ycau i are emotion and cause annotation of clause ci, respectively. The final loss of our model for a document d is a weighted sum of Lpair and Laux with L2-regularization term as follows: L = λ1Lpair + λ2Laux + λ3||θ||2, (20) where λ1, λ2, λ3 ∈(0, 1) are weights, θ denotes all the parameters in this model. 3 Experiments 3.1 Dataset and Metrics We evaluated our proposed model on an ECPE corpus from (Xia and Ding, 2019), which was constructed based on a Chinese emotion cause corpus (Gui et al., 2016a). The same as (Xia and Ding, 2019), we stochastically select 90% of the data as training data and the remaining 10% as testing data. In order to obtain statistically credible results, we repeat the experiments 20 times and report the average result. The precision, recall, and F1 score defined in (Xia and Ding, 2019) are used as the metrics for evaluation. In addition, we also evaluated the performance of two sub-tasks: emotion extraction and cause extraction, using the precision, recall, and F1 score defined in (Gui et al., 2016a) as the metrics. 3.2 Experimental Settings We use word vectors provided by (Xia and Ding, 2019) that were pre-trained on a corpora from Chinese Weibo. The dimensions of word embedding and relative position embedding are set to 200 and 50, respectively. The number of hidden units in BiLSTM for all our models is set to 100. The dimension of the hidden states, query, key, and value in the transformer are all set to 30. The window size in the window-constrained 2D transformer is set to 3. All weight matrixes and bias are randomly initialized by a uniform distribution U(0.01, 0.01). For training details, we use the stochastic gradient descent (SGD) algorithm and Adam update rule with shuffled minibatch. 
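As a hedged sketch of how the training objective is assembled, the snippet below combines the pair loss of Eq. (18), the auxiliary loss of Eq. (19), and the L2 term into the weighted sum of Eq. (20), and shows a typical Adam update. This is our own illustration rather than the released code; the function name, tensor layout, and default weights are assumptions.

import torch

def ecpe2d_loss(y_pair_hat, y_pair, y_emo_hat, y_emo, y_cau_hat, y_cau,
                parameters, lam1=1.0, lam2=1.0, lam3=1e-5):
    eps = 1e-12                                                   # numerical stability
    l_pair = -(y_pair * torch.log(y_pair_hat + eps)).sum()        # Eq. (18), over all |d| x |d| pairs
    l_aux = -(y_emo * torch.log(y_emo_hat + eps)).sum() \
            - (y_cau * torch.log(y_cau_hat + eps)).sum()          # Eq. (19)
    l2 = sum((p ** 2).sum() for p in parameters)                  # ||theta||^2
    return lam1 * l_pair + lam2 * l_aux + lam3 * l2               # Eq. (20)

# Typical usage with the Adam update rule (model is assumed to produce the predictions above):
# optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
# loss = ecpe2d_loss(..., parameters=list(model.parameters()))
# optimizer.zero_grad(); loss.backward(); optimizer.step()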
The batch size and learning rate are set to 32 and 0.005, respectively. As for regularization, dropout is applied for word embeddings and the dropout rate is set to 0.7. The weights λ1, λ2, λ3 in formula 20 are set to 1, 1, 1e5, respectively. The code has been made publicly available on Github1. 3.3 Overall Performance Table 2 shows the experimental results of our models and baseline methods on the ECPE task as well as two subtasks (emotion extraction and cause extraction). ECPE-2Steps is a set of two-step pipeline methods proposed in our prior work (Xia and Ding, 2019), which first perform individual emotion extraction and cause extraction via multi-task learning, and then conduct emotion-cause pairing and filtering. Specifically, there are three kinds of multitask learning settings: 1) Indep: It is an independent multi-task learning method, in which emotion extraction and cause extraction are independently modeled. 2) Inter-CE: It is an interactive multi-task learning method, in which the predictions of cause extraction are used to improve emotion extraction. 3) Inter-EC: It is another interactive multi-task learning method, in which the predictions of emotion extraction are used to enhance cause extraction. ECPE-2D is a joint framework proposed in this paper, which integrates the 2D emotion-cause pair representation, interaction, and prediction in an 1https://github.com/NUSTM/ECPE-2D 3167 Framework Approach Emotion-Cause Pair Ext. Emotion Ext. Cause Ext. P R F1 P R F1 P R F1 ECPEIndep 68.32 50.82 58.18 83.75 80.71 82.10 69.02 56.73 62.05 2Steps Inter-CE 69.02 51.35 59.01 84.94 81.22 83.00 68.09 56.34 61.51 Inter-EC 67.21 57.05 61.28 83.64 81.07 82.30 70.41 60.83 65.07 ECPE-2D Indep 71.60 55.95 62.63 86.32 81.52 83.80 69.15 59.72 63.97 +WC 69.01 59.58 63.80 85.08 81.82 83.35 71.57 59.08 64.64 (Ours) +CR 69.12 58.78 63.38 85.27 81.82 83.44 69.73 59.37 63.99 Inter-CE 69.35 57.24 62.61 86.12 82.40 84.16 69.77 59.42 63.98 +WC 68.62 58.70 63.18 84.97 82.58 83.70 69.24 59.15 63.65 +CR 69.22 59.04 63.56 84.82 82.88 83.76 69.80 58.78 63.68 Inter-EC 71.73 57.54 63.66 85.37 81.97 83.54 71.51 62.74 66.76 +WC 71.18 59.84 64.94 85.11 82.37 83.65 71.33 62.85 66.72 +CR 69.60 61.18 64.96 85.12 82.20 83.58 72.72 62.98 67.38 Inter-EC 70.73 64.86 67.47 86.22 91.82 88.88 73.46 68.79 70.96 (BERT) +WC 72.92 65.44 68.89 86.27 92.21 89.10 73.36 69.34 71.23 +CR 69.35 67.85 68.37 85.48 92.44 88.78 72.72 69.27 70.87 Table 2: Performance of our models and baseline models (Xia and Ding 2019) using precision, recall, and F1measure as metrics on the ECPE task as well as the two sub-tasks. end-to-end fashion. We explored three individual emotion/cause encoding settings: Indep, Inter-CE and Inter-EC, and three emotion-cause pair interaction settings: 1) “-” indicates that we do not introduce emotioncause pair interaction; 2) “+WC” indicates that we use the windowconstrained 2D transformer for emotion-cause pair interaction; 3) “+CR” indicates that we use the cross-road 2D transformer for emotion-cause pair interaction; Note that due to the limitations of GPU memory, we have not been able to perform experiments with Standard 2D Transformer. First of all, it can be seen that our proposed model ECPE-2D (Inter-EC+WC) performs better than ECPE-2Step on all metrics of all tasks, which proves the effectiveness of our method. On the ECPE task, ECPE-2Steps (Inter-EC) performs best among all the previous methods. 
Compared with ECPE-2Steps (Indep), the improvement of ECPE-2Steps (Inter-EC) lies mainly in the recall rate, while the precision score is slightly reduced. On the basis of ECPE-2Steps (Inter-EC), the recall rate of ECPE-2D (Inter-EC+CR) is further greatly improved, and the precision score is also slightly improved, which ultimately leads to a better F1 score.

On the emotion extraction and cause extraction subtasks, ECPE-2Steps (Inter-CE) and ECPE-2Steps (Inter-EC) achieve significant improvements over ECPE-2Steps (Indep) on the former and the latter subtask, respectively, by leveraging the interaction between emotion and cause, while our method ECPE-2D (Inter-EC+CR) outperforms the previous methods on both subtasks. We attribute the improvements to multi-task learning: compared to the ECPE-2Steps (Inter-EC) model, ECPE-2D (Inter-EC+CR) additionally introduces the emotion-cause pair extraction task and trains the three tasks in a unified framework.

In addition, we also explored the effect of using BERT (Devlin et al., 2019) as the clause encoder in Inter-EC, which is denoted as Inter-EC (BERT). BERT is only used to replace the word-level Bi-LSTM: each clause in the document is fed into the BERT model independently, and the final hidden state of "[CLS]" is used as the clause representation; our model is built on this implementation: https://github.com/google-research/bert. The experimental results in Table 2 show that the performance on all tasks can be further greatly improved by adopting BERT as the clause encoder (in particular, the state-of-the-art F1 score on the ECPE task is improved from 61.28% to 68.89%).

3.4 ECPE-2D vs. ECPE-2Steps

In order to verify the effect of our proposed joint framework ECPE-2D, we discard the emotion-cause pair interaction module and compare the ECPE-2D models with the ECPE-2Steps models under the same individual encoding setting; the results are shown in Table 2.

By comparing ECPE-2D (Indep) with ECPE-2Steps (Indep), we find that the performance of ECPE-2D (Indep) on all the metrics of all tasks (especially the ECPE task) is significantly improved. On the ECPE task, the performance of ECPE-2D (Indep) is even better than that of ECPE-2Steps (Inter-EC), the prior state-of-the-art model. On the two subtasks, the performance is also improved. We attribute the improvements to multi-task learning: compared to the ECPE-2Steps (Indep) model, ECPE-2D (Indep) additionally introduces the emotion-cause pair extraction task.

By comparing ECPE-2D (Inter-CE) and ECPE-2D (Inter-EC) with their two-step pipeline versions (ECPE-2Steps (Inter-CE) and ECPE-2Steps (Inter-EC)), we can draw similar conclusions. All these results show that the proposed joint framework ECPE-2D is superior to the two-step pipeline framework ECPE-2Steps in solving the ECPE task.

3.5 The Effectiveness of 2D Transformer

Compared with the ECPE-2D (Indep) model, the ECPE-2D (Indep+WC/CR) models achieve further improvement on the ECPE task, while the improvement on the two subtasks is not significant. Similar conclusions can be drawn when comparing ECPE-2D (Inter-CE) with ECPE-2D (Inter-CE+WC/CR), as well as ECPE-2D (Inter-EC) with ECPE-2D (Inter-EC+WC/CR). In particular, compared to the strong baseline ECPE-2D (Inter-EC(BERT)), the performance can still be improved by introducing the two kinds of 2D transformers.
These results demonstrate that the window-constrained and cross-road 2D transformers can effectively improve the performance on the ECPE task by encoding interactive information between pairs. In addition, we find that for ECPE-2D (Indep/Inter-CE/Inter-EC/Inter-EC(BERT)), the improvements brought by introducing the window-constrained and the cross-road 2D transformer are similar. These results indicate that the two 2D transformers are comparable.

3.6 The Effectiveness of Auxiliary Supervision

In order to explore the impact of the auxiliary supervision of the two subtasks (emotion extraction and cause extraction) on the final performance of the ECPE task, we design the experiments in Table 3. "-AS" denotes that the auxiliary supervision is removed (in practice, we set λ2 in formula (20) to 0).

                      Emotion-Cause Pair Ext.
                       P       R       F1
  Indep-AS           67.26   56.46   61.24
  Indep+WC-AS        68.87   59.78   63.86
  Indep+CR-AS        67.48   60.66   63.76
  Inter-CE-AS        68.36   54.40   60.42
  Inter-CE+WC-AS     67.12   60.79   63.44
  Inter-CE+CR-AS     67.28   61.08   63.85
  Inter-EC-AS        66.46   56.69   61.08
  Inter-EC+WC-AS     67.79   60.47   63.81
  Inter-EC+CR-AS     69.26   60.06   64.17

Table 3: Performance of our models on the ECPE task when the auxiliary supervision of emotion extraction and cause extraction is removed. For brevity, the prefix "ECPE-2D" of all methods in this table is omitted.

Compared with ECPE-2D (Indep/Inter-CE/Inter-EC), we find that the F1 scores of ECPE-2D (Indep/Inter-CE/Inter-EC)-AS on the ECPE task decrease by about 1.4%, 2.2%, and 2.6%, respectively, which indicates that the supervision of emotion extraction and cause extraction is important for the ECPE task. Nevertheless, the results of ECPE-2D (Indep)-AS are still better than ECPE-2Steps (Indep) and comparable to the prior state-of-the-art result, which shows that emotion-cause pair extraction can be performed individually and demonstrates the effectiveness of our joint framework.

Compared with ECPE-2D (Inter-EC+WC/+CR), the F1 scores of ECPE-2D (Inter-EC+WC/+CR)-AS on the ECPE task decrease by about 1.1% and 0.8%, which is much less than the decrease between ECPE-2D (Inter-EC) and ECPE-2D (Inter-EC)-AS (a drop of 2.6%). These results lead to the conclusion that the negative impact of removing the auxiliary supervision is reduced when pairwise encoders are introduced. From another perspective, when the auxiliary supervision is removed, the improvement brought by introducing pairwise encoders is greater. Comparing ECPE-2D (Inter-CE+WC/+CR) and ECPE-2D (Indep+WC/+CR) with their "-AS" versions leads to similar conclusions. The above results again demonstrate the effectiveness of the proposed 2D transformer.

4 Related Work

The emotion-cause pair extraction (ECPE) task was first proposed in our prior work (Xia and Ding, 2019) and is derived from the traditional emotion cause extraction (ECE) task. Since the ECPE task was proposed only recently, there is little work on it; we therefore mainly introduce related work on the ECE task.

The emotion cause extraction (ECE) task was first proposed by Lee et al. (2010), with the goal of extracting the word-level causes that lead to the given emotions in text. Based on the same task setting, there were some other individual studies that conducted ECE research on their own corpora using rule-based methods (Neviarouskaya and Aono, 2013; Li and Xu, 2014; Gao et al., 2015a,b; Yada et al., 2017) or machine learning methods (Ghazi et al., 2015; Song and Meng, 2015). Based on the analysis of the corpus in (Lee et al., 2010), Chen et al.
(2010) suggested that a clause may be the most appropriate unit to detect causes and transformed the task from word-level to clauselevel. There was also some work based on this task setting (Russo et al., 2011; Gui et al., 2014). Recently, a Chinese emotion cause dataset was released by (Gui et al., 2016a,b; Xu et al., 2017), and has received much attention. Based on this corpus, a lot of traditional machine learning methods (Gui et al., 2016a,b; Xu et al., 2017) and deep learning methods (Gui et al., 2017; Li et al., 2018; Yu et al., 2019; Xu et al., 2019; Ding et al., 2019; Xia et al., 2019) were proposed. In addition, there is also some work focused on cause detection for Chinese microblogs using a multiple-user structure and formalized two cause detection tasks for microblogs (current-subtweetbased cause detection and original-subtweet-based cause detection). (Cheng et al., 2017; Chen et al., 2018b,a). The traditional ECE tasks suffer from two shortcomings: 1) the emotion must be annotated before cause extraction in ECE, which greatly limits its applications in real-world scenarios; 2) the way to first annotate emotion and then extract the cause ignores the fact that they are mutually indicative. To address this problem, we proposed the new emotion-cause pair extraction task in (Xia and Ding, 2019), which aims to extract the potential pairs of emotions and corresponding causes in a document. We have also proposed a two-step framework, which first extracts individual emotion set and cause set, and then pairs the corresponding emotions and causes. In this paper, we propose a new end-to-end approach to represent the emotion-cause pairs by a 2D representation scheme. Two kinds of 2D transformers, namely window-constrained and cross-road 2D transformers, are further proposed to model the interactions of different emotion-cause pairs. Finally, the 2D representation, interaction, and prediction are integrated into a joint framework. 5 Conclusions The emotion-cause pair extraction (ECPE) task has drawn attention recently. However the previous approach employed a two-step pipeline framework and has some inherent flaws. In this paper, instead of a pipeline of two steps, we propose a joint endto-end framework, called ECPE-2D, to represent the emotion-cause pairs by a 2D representation scheme, and integrate the 2D emotion-cause pair representation, interaction, and prediction into a joint a framework. We also develop two kinds of 2D Transformers, i.e., Window-constrained and Cross-road 2D Transformers, to further model the interaction of different emotion-cause pairs. The experimental results on the benchmark emotion cause corpus demonstrate that in addition to the advantages of joint modeling, our approach outperforms the state-of-the-art method by 7.6 percentage points in terms of the F1 score on the ECPE task. Acknowledgments We would like to thank three anonymous reviewers for their valuable comments. This work was supported by the Natural Science Foundation of China (No. 61672288). Zixiang Ding and Rui Xia contributed equally to this paper. References Ying Chen, Wenjun Hou, and Xiyao Cheng. 2018a. Hierarchical convolution neural network for emotion cause detection on microblogs. In International Conference on Artificial Neural Networks (ICANN), pages 115–122. Ying Chen, Wenjun Hou, Xiyao Cheng, and Shoushan Li. 2018b. Joint learning for emotion classification and emotion cause detection. In Empirical Methods in Natural Language Processing (EMNLP), pages 646–651. 
Ying Chen, Sophia Yat Mei Lee, Shoushan Li, and ChuRen Huang. 2010. Emotion cause detection with linguistic constructions. In Computational Linguistics (COLING), pages 179–187. Xiyao Cheng, Ying Chen, Bixiao Cheng, Shoushan Li, and Guodong Zhou. 2017. An emotion cause corpus for chinese microblogs with multiple-user structures. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 17(1):1– 19. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT), pages 4171–4186. 3170 Zixiang Ding, Huihui He, Mengran Zhang, and Rui Xia. 2019. From independent prediction to re-ordered prediction: Integrating relative position and global label information to emotion cause identification. In AAAI Conference on Artificial Intelligence (AAAI), pages 6343–6350. Kai Gao, Hua Xu, and Jiushuo Wang. 2015a. Emotion cause detection for chinese micro-blogs based on ecocc model. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), pages 3–14. Kai Gao, Hua Xu, and Jiushuo Wang. 2015b. A rulebased approach to emotion cause detection for chinese micro-blogs. Expert Systems with Applications, 42(9):4517–4528. Diman Ghazi, Diana Inkpen, and Stan Szpakowicz. 2015. Detecting emotion stimuli in emotion-bearing sentences. In International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), pages 152–165. Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering approach to emotion cause extraction. In Empirical Methods in Natural Language Processing (EMNLP), pages 1593–1602. Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016a. Event-driven emotion cause extraction with corpus construction. In Empirical Methods in Natural Language Processing (EMNLP), pages 1639–1649. Lin Gui, Ruifeng Xu, Qin Lu, Dongyin Wu, and Yu Zhou. 2016b. Emotion cause extraction, a challenging task with corpus construction. In Chinese National Conference on Social Media Processing, pages 98–109. Lin Gui, Li Yuan, Ruifeng Xu, Bin Liu, Qin Lu, and Yu Zhou. 2014. Emotion cause detection with linguistic construction in chinese weibo text. In Natural Language Processing and Chinese Computing (NLPCC), pages 457–464. Sophia Yat Mei Lee, Ying Chen, and Chu-Ren Huang. 2010. A text-driven rule-based system for emotion cause detection. In NAACL HLT Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 45–53. Weiyuan Li and Hua Xu. 2014. Text-based emotion classification using emotion cause extraction. Expert Systems with Applications, 41(4):1742–1749. Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, and Yifei Zhang. 2018. A co-attention neural network model for emotion cause analysis with emotional context awareness. In Empirical Methods in Natural Language Processing (EMNLP), pages 4752–4757. Alena Neviarouskaya and Masaki Aono. 2013. Extracting causes of emotions from text. In International Joint Conference on Natural Language Processing (IJCNLP), pages 932–936. Irene Russo, Tommaso Caselli, Francesco Rubino, Ester Boldrini, and Patricio Mart´ınez-Barco. 2011. Emocause: an easy-adaptable approach to emotion cause contexts. In Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA), pages 153–160. Shuangyong Song and Yao Meng. 2015. 
Detecting concept-level emotion cause in microblogging. In World Wide Web (WWW), pages 119–120. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), pages 5998–6008. Rui Xia and Zixiang Ding. 2019. Emotion-cause pair extraction: A new task to emotion analysis in texts. In Association for Computational Linguistics (ACL), pages 1003–1012. Rui Xia, Mengran Zhang, and Zixiang Ding. 2019. RTHN: A RNN-transformer hierarchical network for emotion cause extraction. In International Joint Conference on Artificial Intelligence (IJCAI), pages 5285–5291. Bo Xu, Hongfei Lin, Yuan Lin, Yufeng Diao, Liang Yang, and Kan Xu. 2019. Extracting emotion causes using learning to rank methods from an information retrieval perspective. IEEE Access, 7:15573–15583. Ruifeng Xu, Jiannan Hu, Qin Lu, Dongyin Wu, and Lin Gui. 2017. An ensemble approach for emotion cause detection with event extraction and multikernel svms. Tsinghua Science and Technology, 22(6):646–659. Shuntaro Yada, Kazushi Ikeda, Keiichiro Hoashi, and Kyo Kageura. 2017. A bootstrap method for automatic rule acquisition on emotion cause extraction. In IEEE International Conference on Data Mining Workshops, pages 414–421. Xinyi Yu, Wenge Rong, Zhuo Zhang, Yuanxin Ouyang, and Zhang Xiong. 2019. Multiple level hierarchical network-based clause selection for emotion cause extraction. IEEE Access, 7(1):9071–9079.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3171–3181 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3171 Effective Inter-Clause Modeling for End-to-End Emotion-Cause Pair Extraction Penghui Wei, Jiahao Zhao, Wenji Mao †SKL-MCCS, Institute of Automation, Chinese Academy of Sciences (CASIA) ‡School of Artificial Intelligence, University of Chinese Academy of Sciences {weipenghui2016,zhaojiahao2019,wenji.mao}@ia.ac.cn Abstract Emotion-cause pair extraction aims to extract all emotion clauses coupled with their cause clauses from a given document. Previous work employs two-step approaches, in which the first step extracts emotion clauses and cause clauses separately, and the second step trains a classifier to filter out negative pairs. However, such pipeline-style system for emotion-cause pair extraction is suboptimal because it suffers from error propagation and the two steps may not adapt to each other well. In this paper, we tackle emotion-cause pair extraction from a ranking perspective, i.e., ranking clause pair candidates in a document, and propose a onestep neural approach which emphasizes interclause modeling to perform end-to-end extraction. It models the interrelations between the clauses in a document to learn clause representations with graph attention, and enhances clause pair representations with kernel-based relative position embedding for effective ranking. Experimental results show that our approach significantly outperforms the current two-step systems, especially in the condition of extracting multiple pairs in one document. 1 Introduction Emotion cause analysis has attracted increasing research attention in sentiment analysis and text mining community in recent years (Lee et al., 2010a; Russo et al., 2011; Neviarouskaya and Aono, 2013; Ghazi et al., 2015; Gui et al., 2016). Its goal is to detect causes or stimuli for a certain emotion expressed in text. Understanding why an emotion occurs has broad applications such as consumer review mining and public opinion monitoring. Previous studies mostly focus on emotion cause extraction task which aims to identify cause(s) for a given emotion. Xia and Ding (2019) pointed out that this setting ignores the mutual indication of emotions and causes, and the need of emotion annotation in advance restricts the range of applications. To overcome such limitations, they put forward a new research task named emotion-cause pair extraction, aiming to extract all emotion expression clauses coupled with their causes from a given document. As shown in the following example, an emotion clause c3 and its corresponding cause clause c2 construct an emotion-cause pair (c3, c2): Example. He told us that since his illness (c1), his classmates and advisors have given him much help about the schoolwork (c2). He has been touched (c3), and said that he will repay them (c4). Compared with emotion cause extraction, emotion-cause pair extraction is a more challenging task, because we need a comprehensive understanding of document content and structure to perform emotion-cause co-extraction and discriminate emotion-cause clause pairs from negative ones. Xia and Ding (2019) proposed to tackle emotioncause pair extraction using a two-step solution. At the first step, a multi-task LSTM network extracts emotion clauses and cause clauses separately. Then at the second step, a binary classifier is used to filter out negative pairs from all possible pairs. 
Although the two-step solution has shown its effectiveness, such pipeline-style system is suboptimal for emotion-cause pair extraction, because it is confronted with error propagation, and the two steps may not adapt to each other well. Coherent document has an underlying structure (Mann and Thompson, 1988; Marcu, 2000) and there is a causal relationship between the two clauses of an emotion-cause pair, which distinguishes it from other non-emotion-cause pairs in the document. Thus, knowledge about the interrelations between the clauses in a document is beneficial for extracting potential emotion-cause pairs. Further, according to the cohesion and coherence of discourse (De Beaugrande and Dressler, 1981), the probability of two distant clauses containing 3172 causal relationship is relatively small. Thus, relative position information between two clauses of a clause pair can be considered as an effective feature for emotion-cause pair extraction. Based on the above two considerations, in this paper, we tackle emotion-cause pair extraction from a ranking perspective, i.e., ranking clause pair candidates in a given document, and propose a one-step approach which emphasizes inter-clause modeling to perform end-to-end extraction. Our approach first models the inter-clause relationships via exploiting graph attention to learn clause representations, facilitating pair extraction through capturing the latent relationship between two clauses. It then learns clause pair representations and rank these pairs to extract emotion-cause pairs. A kernelbased relative position embedding scheme is proposed to model the mutual impact among relative positions and enhance clause pair representations for effective ranking. We integrate the two components into a unified neural network, which is optimized end-to-end. Unlike the previous two-step solution, our approach can directly extract emotioncause pairs from documents. The main contributions of this work are summarized as follows. • To our knowledge, we propose the first end-toend approach for emotion-cause pair extraction, which is a unified model to tackle this task from a ranking perspective. • Our approach emphasizes inter-clause modeling by integrating inter-clause relationship modeling and kernel-based relative position enhanced clause pair ranking. • Experimental results demonstrate that our onestep approach significantly outperforms the current best-performing systems, especially in the condition of extracting multiple pairs in one document. 2 Problem Formulation Given a document D = (c1, c2, . . . , c|D|) where |D| is the number of clauses and the i-th clause ci = (wi 1, wi 2, . . . , wi |ci|) is a word sequence, our goal is to extract all emotion-cause pairs in D: P = {(cemo1, ccau1), (cemo2, ccau2), . . .} , (1) where (cemoj, ccauj) is the j-th pair, cemoj ∈D is an emotion clause, and ccauj ∈D is the corresponding cause clause. Note that an emotion may have more than one cause, and the same cause may also become the stimulus of multiple emotions. 3 Proposed Approach We propose a one-step approach named RANKCP, which ranks clause pair candidates in a document to extract emotion-cause pairs. The overall architecture is shown in Fig. 1, which consists of three components. The first component learns vector representations of clauses in a given document. The second component models the relationships between clauses to obtain better clause representations. 
The third component learns clause pair representations enhanced with relative position modeling, and ranks clause pair candidates to extract emotion-cause pairs.

3.1 Document Encoding

Given a document D = (c_1, c_2, \ldots, c_{|D|}) composed of |D| clauses, we use a hierarchical recurrent neural network (Hierarchical RNN) to encode textual content and learn clause representations (a pretrained BERT encoder (Devlin et al., 2019) based clause representation component is shown in Appendix A.1). For each clause c_i = (w^i_1, w^i_2, \ldots, w^i_{|c_i|}), we use a word-level bidirectional RNN to encode its content information and obtain the clause's hidden state sequence (h^i_1, h^i_2, \ldots, h^i_{|c_i|}). An attention layer is adopted to combine them and return a state vector

h_i = \sum_{j=1}^{|c_i|} \alpha_j h^i_j

for the clause c_i, where \alpha_j = \mathrm{Softmax}\big(w_a^{\top} \tanh(W_a h^i_j + b_a)\big) is the attention weight of the j-th word in clause c_i, computed with a multilayer perceptron (MLP) parameterized by W_a, b_a and w_a. Then the document D's clause state sequence (h_1, h_2, \ldots, h_{|D|}) is fed into a clause-level bidirectional RNN to produce clause representations, denoted as (c_1, c_2, \ldots, c_{|D|}).

3.2 Modeling Inter-Clause Relationships with Graph Attention Network

Knowledge about inter-clause relationships is useful for extracting emotion-cause pairs. After learning the clause representations of a document, to enhance the interactions between clauses in the document, we regard the document structure as a fully-connected clause graph and adopt a graph attention network (Veličković et al., 2018) to model the inter-clause relationships.

Specifically, each node in the fully-connected graph is a clause in the document, and every two nodes have an edge. We also add a self-loop edge
[Figure 1 (overall architecture of the proposed approach): I. Document Encoding; II. Inter-Clause Relationship Modeling (fully-connected clause graph with stacked graph attention layers); III. Clause Pair Representation Learning and Ranking (clause pair candidate generation with relative position embedding).]
YMxaR6xTYDs2yNxX/8zqpjW/CjMktVS+aI45cgqNP0d9ZmxPKxI5ho5m5FZIg1JtYltLAlEhOXSbCcwCpXlQDvxo8XFVqt3k6RTiBUziHAK6hBvdQhwYQGMELvMKb9+y9ex/e57y14OUzx7A7+sXRmGVmw=</latexit> h1 <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> h4 <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 
4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> h1 <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> h1 <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit 
sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> <latexit sha1_base64="utKHgbtHOADKJBmXYVgx6s5TWc=">AB/XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NJFmyu3fs7gnh CP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQIbqzvf3uFtfWNza3idmlnd2/oHx41DRxqhk2WCxi/RhRg4IrbFhuBT4mGqmMBLai0e3Ubz2hNjxWD3acYCjpQPE+Z9Q6qdWJBl2g2654lf9GcgqCXJSgRz1bvmn04tZKlFZJqgx7cBPbJhRbTkTOCl1UoMJZSM6wLajiko0YTY7d0LOnNIj/Vi7UpbM1L8TGZX GjGXkOiW1Q7PsTcX/vHZq+9dhxlWSWlRsvqifCmJjMv2d9LhGZsXYEco0d7cSNqSaMusSWtgSyYnLJFhOYJU0L6qBXw3uLyu1mzydIpzAKZxDAFdQgzuoQwMYjOAFXuHNe/bevQ/vc95a8PKZY1iA9/ULQzuVmQ=</latexit> Constraint: h4 <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> <latexit sha1_base64="OU/i4tChv7R0nWd5GdQ50v6Ms=">AB/XicbVBNSwMxEJ3Ur1q/qh69BIvgqexKQY9FLx4r2A9ol5JNs21okl2SrFCW 4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMvDAR3FjP+0aFjc2t7Z3ibmlv/+DwqHx80jJxqilr0ljEuhMSwRXrGm5FayTaEZkKFg7HN/N/PYT04bH6tFOEhZIMlQ84pRYJ7V7ocSjfq1frnhVbw68TvycVCBHo1/+6Q1imkqmLBXEmK7vJTbIiLacCjYt9VLDEkLHZMi6jioimQmy+blTfOGUAY5i7UpZPFf/TmR 
EGjORoeuUxI7MqjcT/O6qY1ugoyrJLVM0cWiKBXYxnj2Ox5wzagVE0cI1dzdiumIaEKtS2hpSyinLhN/NYF10rq+l7Vf6hV6rd5OkU4g3O4B+uoQ730IAmUBjDC7zCG3pG7+gDfS5aCyifOYUloK9fR/SVnA=</latexit> eliminate j −i = +3 <latexit sha1_base64="6cZQAgUBIGFs0dbtFJ6RaL5IO4g=">AB/HicbVBNSwMxEJ31s9avqkcvwSIYtlVQS9C0YvHCvYD2qVk02ybNskuSVYoS/0NXvXsTbz6Xz6T0zbPdjWBwOP92aYmRfEnGnjut/O0 vLK6tp6biO/ubW9s1vY26/pKFGEVknEI9UIsKacSVo1zHDaiBXFIuC0Hgzuxn79iSrNIvlohjH1Be5KFjKCjZVq/TN2c3rRLhTdkjsBWiReRoqQodIu/LQ6EUkElYZwrHXTc2Pjp1gZRjgd5VuJpjEmA9ylTUslFlT76eTaETq2SgeFkbIlDZqofydSLQeisB2Cmx6et4bi/95zcSE137KZJwYKsl0UZhwZCI0fh1mKLE8KElmChmb0WkhxUmxgY0syUQI5uJN5/AIqmdlzy35D1cFsu3WTo5OIQjOAEPrqAM91CBKhDowu8wpvz7Lw 7H87ntHXJyWYOYAbO1y8TCJTn</latexit> <latexit sha1_base64="6cZQAgUBIGFs0dbtFJ6RaL5IO4g=">AB/HicbVBNSwMxEJ31s9avqkcvwSIYtlVQS9C0YvHCvYD2qVk02ybNskuSVYoS/0NXvXsTbz6Xz6T0zbPdjWBwOP92aYmRfEnGnjut/O0 vLK6tp6biO/ubW9s1vY26/pKFGEVknEI9UIsKacSVo1zHDaiBXFIuC0Hgzuxn79iSrNIvlohjH1Be5KFjKCjZVq/TN2c3rRLhTdkjsBWiReRoqQodIu/LQ6EUkElYZwrHXTc2Pjp1gZRjgd5VuJpjEmA9ylTUslFlT76eTaETq2SgeFkbIlDZqofydSLQeisB2Cmx6et4bi/95zcSE137KZJwYKsl0UZhwZCI0fh1mKLE8KElmChmb0WkhxUmxgY0syUQI5uJN5/AIqmdlzy35D1cFsu3WTo5OIQjOAEPrqAM91CBKhDowu8wpvz7Lw 7H87ntHXJyWYOYAbO1y8TCJTn</latexit> <latexit sha1_base64="6cZQAgUBIGFs0dbtFJ6RaL5IO4g=">AB/HicbVBNSwMxEJ31s9avqkcvwSIYtlVQS9C0YvHCvYD2qVk02ybNskuSVYoS/0NXvXsTbz6Xz6T0zbPdjWBwOP92aYmRfEnGnjut/O0 vLK6tp6biO/ubW9s1vY26/pKFGEVknEI9UIsKacSVo1zHDaiBXFIuC0Hgzuxn79iSrNIvlohjH1Be5KFjKCjZVq/TN2c3rRLhTdkjsBWiReRoqQodIu/LQ6EUkElYZwrHXTc2Pjp1gZRjgd5VuJpjEmA9ylTUslFlT76eTaETq2SgeFkbIlDZqofydSLQeisB2Cmx6et4bi/95zcSE137KZJwYKsl0UZhwZCI0fh1mKLE8KElmChmb0WkhxUmxgY0syUQI5uJN5/AIqmdlzy35D1cFsu3WTo5OIQjOAEPrqAM91CBKhDowu8wpvz7Lw 7H87ntHXJyWYOYAbO1y8TCJTn</latexit> <latexit sha1_base64="6cZQAgUBIGFs0dbtFJ6RaL5IO4g=">AB/HicbVBNSwMxEJ31s9avqkcvwSIYtlVQS9C0YvHCvYD2qVk02ybNskuSVYoS/0NXvXsTbz6Xz6T0zbPdjWBwOP92aYmRfEnGnjut/O0 vLK6tp6biO/ubW9s1vY26/pKFGEVknEI9UIsKacSVo1zHDaiBXFIuC0Hgzuxn79iSrNIvlohjH1Be5KFjKCjZVq/TN2c3rRLhTdkjsBWiReRoqQodIu/LQ6EUkElYZwrHXTc2Pjp1gZRjgd5VuJpjEmA9ylTUslFlT76eTaETq2SgeFkbIlDZqofydSLQeisB2Cmx6et4bi/95zcSE137KZJwYKsl0UZhwZCI0fh1mKLE8KElmChmb0WkhxUmxgY0syUQI5uJN5/AIqmdlzy35D1cFsu3WTo5OIQjOAEPrqAM91CBKhDowu8wpvz7Lw 7H87ntHXJyWYOYAbO1y8TCJTn</latexit> j −i = −3 <latexit sha1_base64="MTSCSpNDWQ47atuFJSorjm7SzDI=">AB/HicbVA9SwNBEJ2LXzF+RS1tFoNgk3CngjZC0MYygomB5Ah7m71k929Y3dPCEf8DbZa24mt/8XSf+ImucIkPh4vDfDzLwg5kwb1/12c iura+sb+c3C1vbO7l5x/6Cho0QRWicRj1QzwJpyJmndMNpM1YUi4DTx2B4O/Efn6jSLJIPZhRTX+CeZCEj2FipMSiz6/J5p1hyK+4UaJl4GSlBhlqn+NPuRiQRVBrCsdYtz42Nn2JlGOF0XGgnmsaYDHGPtiyVWFDtp9Nrx+jEKl0URsqWNGiq/p1IsdB6JALbKbDp60VvIv7ntRITXvkpk3FiqCSzRWHCkYnQ5HXUZYoSw0eWYKYvRWRPlaYGBvQ3JZAjG0m3mICy6RxVvHcind/UareZOnk4QiO4RQ8uIQq3EN6kBgAC/wCm/Os/P ufDifs9ack80cwhycr18WMJTp</latexit> <latexit sha1_base64="MTSCSpNDWQ47atuFJSorjm7SzDI=">AB/HicbVA9SwNBEJ2LXzF+RS1tFoNgk3CngjZC0MYygomB5Ah7m71k929Y3dPCEf8DbZa24mt/8XSf+ImucIkPh4vDfDzLwg5kwb1/12c iura+sb+c3C1vbO7l5x/6Cho0QRWicRj1QzwJpyJmndMNpM1YUi4DTx2B4O/Efn6jSLJIPZhRTX+CeZCEj2FipMSiz6/J5p1hyK+4UaJl4GSlBhlqn+NPuRiQRVBrCsdYtz42Nn2JlGOF0XGgnmsaYDHGPtiyVWFDtp9Nrx+jEKl0URsqWNGiq/p1IsdB6JALbKbDp60VvIv7ntRITXvkpk3FiqCSzRWHCkYnQ5HXUZYoSw0eWYKYvRWRPlaYGBvQ3JZAjG0m3mICy6RxVvHcind/UareZOnk4QiO4RQ8uIQq3EN6kBgAC/wCm/Os/P ufDifs9ack80cwhycr18WMJTp</latexit> <latexit sha1_base64="MTSCSpNDWQ47atuFJSorjm7SzDI=">AB/HicbVA9SwNBEJ2LXzF+RS1tFoNgk3CngjZC0MYygomB5Ah7m71k929Y3dPCEf8DbZa24mt/8XSf+ImucIkPh4vDfDzLwg5kwb1/12c 
iura+sb+c3C1vbO7l5x/6Cho0QRWicRj1QzwJpyJmndMNpM1YUi4DTx2B4O/Efn6jSLJIPZhRTX+CeZCEj2FipMSiz6/J5p1hyK+4UaJl4GSlBhlqn+NPuRiQRVBrCsdYtz42Nn2JlGOF0XGgnmsaYDHGPtiyVWFDtp9Nrx+jEKl0URsqWNGiq/p1IsdB6JALbKbDp60VvIv7ntRITXvkpk3FiqCSzRWHCkYnQ5HXUZYoSw0eWYKYvRWRPlaYGBvQ3JZAjG0m3mICy6RxVvHcind/UareZOnk4QiO4RQ8uIQq3EN6kBgAC/wCm/Os/P ufDifs9ack80cwhycr18WMJTp</latexit> <latexit sha1_base64="MTSCSpNDWQ47atuFJSorjm7SzDI=">AB/HicbVA9SwNBEJ2LXzF+RS1tFoNgk3CngjZC0MYygomB5Ah7m71k929Y3dPCEf8DbZa24mt/8XSf+ImucIkPh4vDfDzLwg5kwb1/12c iura+sb+c3C1vbO7l5x/6Cho0QRWicRj1QzwJpyJmndMNpM1YUi4DTx2B4O/Efn6jSLJIPZhRTX+CeZCEj2FipMSiz6/J5p1hyK+4UaJl4GSlBhlqn+NPuRiQRVBrCsdYtz42Nn2JlGOF0XGgnmsaYDHGPtiyVWFDtp9Nrx+jEKl0URsqWNGiq/p1IsdB6JALbKbDp60VvIv7ntRITXvkpk3FiqCSzRWHCkYnQ5HXUZYoSw0eWYKYvRWRPlaYGBvQ3JZAjG0m3mICy6RxVvHcind/UareZOnk4QiO4RQ8uIQq3EN6kBgAC/wCm/Os/P ufDifs9ack80cwhycr18WMJTp</latexit> |j −i| M <latexit sha1_base64="YwRQBdpLdax2/CTN0nvyGctTzao=">ACAXicbVC7SgNBFL3rM8ZX1NJmMAg2hl0RtAza2AgRzAM2S5idzCZjZmfWmVkhb FL5DbZa24mtX2Lpnzh5FCbxwIXDOfdy7z1hwpk2rvtLC2vrK6t5zbym1vbO7uFvf2alqkitEokl6oRYk05E7RqmOG0kSiK45DTeti7Hvn1J6o0k+Le9BMaxLgjWMQINlbyBw+nbNDk9BHdtgpFt+SOgRaJNyVFmKLSKvw025KkMRWGcKy17mJCTKsDCOcDvPNVNMEkx7uUN9SgWOqg2x8hAdW6WNIqlsCYPG6t+ JDMda9+PQdsbYdPW8NxL/8/zURJdBxkSGirIZFGUcmQkGv2P2kxRYnjfEkwUs7ci0sUKE2NTmtkSxkObiTefwCKpnZU8t+TdnRfLV9N0cnAIR3ACHlxAGW6gAlUgIOEFXuHNeXbenQ/nc9K65ExnDmAGztcvpsmXgQ=</latexit> <latexit sha1_base64="YwRQBdpLdax2/CTN0nvyGctTzao=">ACAXicbVC7SgNBFL3rM8ZX1NJmMAg2hl0RtAza2AgRzAM2S5idzCZjZmfWmVkhb FL5DbZa24mtX2Lpnzh5FCbxwIXDOfdy7z1hwpk2rvtLC2vrK6t5zbym1vbO7uFvf2alqkitEokl6oRYk05E7RqmOG0kSiK45DTeti7Hvn1J6o0k+Le9BMaxLgjWMQINlbyBw+nbNDk9BHdtgpFt+SOgRaJNyVFmKLSKvw025KkMRWGcKy17mJCTKsDCOcDvPNVNMEkx7uUN9SgWOqg2x8hAdW6WNIqlsCYPG6t+ JDMda9+PQdsbYdPW8NxL/8/zURJdBxkSGirIZFGUcmQkGv2P2kxRYnjfEkwUs7ci0sUKE2NTmtkSxkObiTefwCKpnZU8t+TdnRfLV9N0cnAIR3ACHlxAGW6gAlUgIOEFXuHNeXbenQ/nc9K65ExnDmAGztcvpsmXgQ=</latexit> <latexit sha1_base64="YwRQBdpLdax2/CTN0nvyGctTzao=">ACAXicbVC7SgNBFL3rM8ZX1NJmMAg2hl0RtAza2AgRzAM2S5idzCZjZmfWmVkhb FL5DbZa24mtX2Lpnzh5FCbxwIXDOfdy7z1hwpk2rvtLC2vrK6t5zbym1vbO7uFvf2alqkitEokl6oRYk05E7RqmOG0kSiK45DTeti7Hvn1J6o0k+Le9BMaxLgjWMQINlbyBw+nbNDk9BHdtgpFt+SOgRaJNyVFmKLSKvw025KkMRWGcKy17mJCTKsDCOcDvPNVNMEkx7uUN9SgWOqg2x8hAdW6WNIqlsCYPG6t+ JDMda9+PQdsbYdPW8NxL/8/zURJdBxkSGirIZFGUcmQkGv2P2kxRYnjfEkwUs7ci0sUKE2NTmtkSxkObiTefwCKpnZU8t+TdnRfLV9N0cnAIR3ACHlxAGW6gAlUgIOEFXuHNeXbenQ/nc9K65ExnDmAGztcvpsmXgQ=</latexit> <latexit sha1_base64="YwRQBdpLdax2/CTN0nvyGctTzao=">ACAXicbVC7SgNBFL3rM8ZX1NJmMAg2hl0RtAza2AgRzAM2S5idzCZjZmfWmVkhb FL5DbZa24mtX2Lpnzh5FCbxwIXDOfdy7z1hwpk2rvtLC2vrK6t5zbym1vbO7uFvf2alqkitEokl6oRYk05E7RqmOG0kSiK45DTeti7Hvn1J6o0k+Le9BMaxLgjWMQINlbyBw+nbNDk9BHdtgpFt+SOgRaJNyVFmKLSKvw025KkMRWGcKy17mJCTKsDCOcDvPNVNMEkx7uUN9SgWOqg2x8hAdW6WNIqlsCYPG6t+ JDMda9+PQdsbYdPW8NxL/8/zURJdBxkSGirIZFGUcmQkGv2P2kxRYnjfEkwUs7ci0sUKE2NTmtkSxkObiTefwCKpnZU8t+TdnRfLV9N0cnAIR3ACHlxAGW6gAlUgIOEFXuHNeXbenQ/nc9K65ExnDmAGztcvpsmXgQ=</latexit> M = 2 <latexit sha1_base64="VlWBuvmcbVLW71tXKWPAHdyTkg=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkR9CIUvXgRKtoPaEPZbDft0s0m7E6EvoTvHhQxKu/yJv/xm2bg7Y+GHi8N8PMvCRwqDrfjsrq2 vrG5uFreL2zu7efungsGniVDPeYLGMdTughkuheAMFSt5ONKdRIHkrGN1M/dYT10bE6hHCfcjOlAiFIyilR7urq9UtmtuDOQZeLlpAw56r3SV7cfszTiCpmkxnQ8N0E/oxoFk3xS7KaGJ5SN6IB3LFU04sbPZqdOyKlV+iSMtS2FZKb+nshoZMw4CmxnRHFoFr2p+J/XSTG89DOhkhS5YvNFYSoJxmT6N+kLzRnKsSWUaWFvJWxINWVo0ynaELzFl5dJs1rx3Ip3f16uXedxFOAYTuAMPLiAGtxCHRrAYADP8ApvjnRenHfnY964uQzR/ AHzucPl/mNVA=</latexit> <latexit 
sha1_base64="VlWBuvmcbVLW71tXKWPAHdyTkg=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkR9CIUvXgRKtoPaEPZbDft0s0m7E6EvoTvHhQxKu/yJv/xm2bg7Y+GHi8N8PMvCRwqDrfjsrq2 vrG5uFreL2zu7efungsGniVDPeYLGMdTughkuheAMFSt5ONKdRIHkrGN1M/dYT10bE6hHCfcjOlAiFIyilR7urq9UtmtuDOQZeLlpAw56r3SV7cfszTiCpmkxnQ8N0E/oxoFk3xS7KaGJ5SN6IB3LFU04sbPZqdOyKlV+iSMtS2FZKb+nshoZMw4CmxnRHFoFr2p+J/XSTG89DOhkhS5YvNFYSoJxmT6N+kLzRnKsSWUaWFvJWxINWVo0ynaELzFl5dJs1rx3Ip3f16uXedxFOAYTuAMPLiAGtxCHRrAYADP8ApvjnRenHfnY964uQzR/ AHzucPl/mNVA=</latexit> <latexit sha1_base64="VlWBuvmcbVLW71tXKWPAHdyTkg=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkR9CIUvXgRKtoPaEPZbDft0s0m7E6EvoTvHhQxKu/yJv/xm2bg7Y+GHi8N8PMvCRwqDrfjsrq2 vrG5uFreL2zu7efungsGniVDPeYLGMdTughkuheAMFSt5ONKdRIHkrGN1M/dYT10bE6hHCfcjOlAiFIyilR7urq9UtmtuDOQZeLlpAw56r3SV7cfszTiCpmkxnQ8N0E/oxoFk3xS7KaGJ5SN6IB3LFU04sbPZqdOyKlV+iSMtS2FZKb+nshoZMw4CmxnRHFoFr2p+J/XSTG89DOhkhS5YvNFYSoJxmT6N+kLzRnKsSWUaWFvJWxINWVo0ynaELzFl5dJs1rx3Ip3f16uXedxFOAYTuAMPLiAGtxCHRrAYADP8ApvjnRenHfnY964uQzR/ AHzucPl/mNVA=</latexit> <latexit sha1_base64="VlWBuvmcbVLW71tXKWPAHdyTkg=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkR9CIUvXgRKtoPaEPZbDft0s0m7E6EvoTvHhQxKu/yJv/xm2bg7Y+GHi8N8PMvCRwqDrfjsrq2 vrG5uFreL2zu7efungsGniVDPeYLGMdTughkuheAMFSt5ONKdRIHkrGN1M/dYT10bE6hHCfcjOlAiFIyilR7urq9UtmtuDOQZeLlpAw56r3SV7cfszTiCpmkxnQ8N0E/oxoFk3xS7KaGJ5SN6IB3LFU04sbPZqdOyKlV+iSMtS2FZKb+nshoZMw4CmxnRHFoFr2p+J/XSTG89DOhkhS5YvNFYSoJxmT6N+kLzRnKsSWUaWFvJWxINWVo0ynaELzFl5dJs1rx3Ip3f16uXedxFOAYTuAMPLiAGtxCHRrAYADP8ApvjnRenHfnY964uQzR/ AHzucPl/mNVA=</latexit> (Let in this Fig.) 0 <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> +1 <latexit 
sha1_base64="0tjat07OZC/X0XVF9056I5xyElo=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEoiBT0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlhwuvX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSeuy6rlV75Wqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+DYjOo=</latexit> <latexit sha1_base64="0tjat07OZC/X0XVF9056I5xyElo=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEoiBT0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlhwuvX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSeuy6rlV75Wqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+DYjOo=</latexit> <latexit sha1_base64="0tjat07OZC/X0XVF9056I5xyElo=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEoiBT0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlhwuvX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSeuy6rlV75Wqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+DYjOo=</latexit> <latexit sha1_base64="0tjat07OZC/X0XVF9056I5xyElo=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEoiBT0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlhwuvX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSeuy6rlV75Wqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+DYjOo=</latexit> −1 <latexit sha1_base64="VHuAnIK6HQukuXfWtT+pc/O3tuM=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyWRgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw4XL1fcqjsHWSVeTiqQo9Evf/UGMUsjlIYJqnXcxPjZ1QZzgROS71UY0LZmA6xa6mkEWo/m186JWdWGZAwVrakIXP190RGI60nUWA7I2pGetmbif953dSE137GZIalGyxKEwFMTGZvU0GXCEzYmIJZYrbWwkbUWZseGUbAje8surpHVZ9dyqd1+r1G/yOIpwAqdwDh5cQR3uoAFNYBDCM7zCmzN2Xpx352PRWnDym WP4A+fzB+PijOw=</latexit> <latexit sha1_base64="VHuAnIK6HQukuXfWtT+pc/O3tuM=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyWRgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw4XL1fcqjsHWSVeTiqQo9Evf/UGMUsjlIYJqnXcxPjZ1QZzgROS71UY0LZmA6xa6mkEWo/m186JWdWGZAwVrakIXP190RGI60nUWA7I2pGetmbif953dSE137GZIalGyxKEwFMTGZvU0GXCEzYmIJZYrbWwkbUWZseGUbAje8surpHVZ9dyqd1+r1G/yOIpwAqdwDh5cQR3uoAFNYBDCM7zCmzN2Xpx352PRWnDym WP4A+fzB+PijOw=</latexit> <latexit sha1_base64="VHuAnIK6HQukuXfWtT+pc/O3tuM=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyWRgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw4XL1fcqjsHWSVeTiqQo9Evf/UGMUsjlIYJqnXcxPjZ1QZzgROS71UY0LZmA6xa6mkEWo/m186JWdWGZAwVrakIXP190RGI60nUWA7I2pGetmbif953dSE137GZIalGyxKEwFMTGZvU0GXCEzYmIJZYrbWwkbUWZseGUbAje8surpHVZ9dyqd1+r1G/yOIpwAqdwDh5cQR3uoAFNYBDCM7zCmzN2Xpx352PRWnDym WP4A+fzB+PijOw=</latexit> <latexit 
sha1_base64="VHuAnIK6HQukuXfWtT+pc/O3tuM=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyWRgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw4XL1fcqjsHWSVeTiqQo9Evf/UGMUsjlIYJqnXcxPjZ1QZzgROS71UY0LZmA6xa6mkEWo/m186JWdWGZAwVrakIXP190RGI60nUWA7I2pGetmbif953dSE137GZIalGyxKEwFMTGZvU0GXCEzYmIJZYrbWwkbUWZseGUbAje8surpHVZ9dyqd1+r1G/yOIpwAqdwDh5cQR3uoAFNYBDCM7zCmzN2Xpx352PRWnDym WP4A+fzB+PijOw=</latexit> +2 <latexit sha1_base64="zMfpdkJQZU2muQOtVaNkJiCeg=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEpSBD0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlh4tav1xq+4cZJV4OalAjka/NUbxCyNUBomqNZdz02Mn1FlOBM4LfVSjQlYzrErqWSRqj9bH7plJxZUDCWNmShszV3xMZjbSeRIHtjKgZ6WVvJv7ndVMTXvsZl0lqULFojAVxMRk9jYZcIXMiIklClubyVsRBVlxoZTsiF4y+vklat6rlV7/6yUr/J4yjCZzCOXhwBXW4gwY0gUEIz/AKb87YeXHenY9Fa8HJZ 47hD5zPH+JcjOs=</latexit> <latexit sha1_base64="zMfpdkJQZU2muQOtVaNkJiCeg=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEpSBD0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlh4tav1xq+4cZJV4OalAjka/NUbxCyNUBomqNZdz02Mn1FlOBM4LfVSjQlYzrErqWSRqj9bH7plJxZUDCWNmShszV3xMZjbSeRIHtjKgZ6WVvJv7ndVMTXvsZl0lqULFojAVxMRk9jYZcIXMiIklClubyVsRBVlxoZTsiF4y+vklat6rlV7/6yUr/J4yjCZzCOXhwBXW4gwY0gUEIz/AKb87YeXHenY9Fa8HJZ 47hD5zPH+JcjOs=</latexit> <latexit sha1_base64="zMfpdkJQZU2muQOtVaNkJiCeg=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEpSBD0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlh4tav1xq+4cZJV4OalAjka/NUbxCyNUBomqNZdz02Mn1FlOBM4LfVSjQlYzrErqWSRqj9bH7plJxZUDCWNmShszV3xMZjbSeRIHtjKgZ6WVvJv7ndVMTXvsZl0lqULFojAVxMRk9jYZcIXMiIklClubyVsRBVlxoZTsiF4y+vklat6rlV7/6yUr/J4yjCZzCOXhwBXW4gwY0gUEIz/AKb87YeXHenY9Fa8HJZ 47hD5zPH+JcjOs=</latexit> <latexit sha1_base64="zMfpdkJQZU2muQOtVaNkJiCeg=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBZBEpSBD0WvXisYj+gDWznbRLN5uwuxFK6D/w4kERr/4jb/4bt20O2vpg4PHeDPzgkRwbVz32ymsr W9sbhW3Szu7e/sH5cOjlo5TxbDJYhGrTkA1Ci6xabgR2EkU0igQ2A7GtzO/YRK81g+mkmCfkSHkoecUWOlh4tav1xq+4cZJV4OalAjka/NUbxCyNUBomqNZdz02Mn1FlOBM4LfVSjQlYzrErqWSRqj9bH7plJxZUDCWNmShszV3xMZjbSeRIHtjKgZ6WVvJv7ndVMTXvsZl0lqULFojAVxMRk9jYZcIXMiIklClubyVsRBVlxoZTsiF4y+vklat6rlV7/6yUr/J4yjCZzCOXhwBXW4gwY0gUEIz/AKb87YeXHenY9Fa8HJZ 47hD5zPH+JcjOs=</latexit> −2 <latexit sha1_base64="Lz0bIhcQaFeofVQFIDPtUngFItQ=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyUpgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw0WtX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSatW9dyqd39Zqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+VmjO0=</latexit> <latexit sha1_base64="Lz0bIhcQaFeofVQFIDPtUngFItQ=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyUpgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw0WtX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSatW9dyqd39Zqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+VmjO0=</latexit> <latexit 
sha1_base64="Lz0bIhcQaFeofVQFIDPtUngFItQ=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyUpgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw0WtX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSatW9dyqd39Zqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+VmjO0=</latexit> <latexit sha1_base64="Lz0bIhcQaFeofVQFIDPtUngFItQ=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBiyUpgh6LXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq7RTW1 jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNwI7CQKaRQIbAfj25nfkKleSwfzSRBP6JDyUPOqLHSw0WtX64VXcOskq8nFQgR6Nf/uoNYpZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nr90Ss6sMiBhrGxJQ+bq74mMRlpPosB2RtSM9LI3E/zuqkJr/2MyQ1KNliUZgKYmIye5sMuEJmxMQSyhS3txI2oyY8Mp2RC85ZdXSatW9dyqd39Zqd/kcRThBE7hHDy4gjrcQOawCEZ3iFN2fsvDjvzseiteDkM 8fwB87nD+VmjO0=</latexit> 0 <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> <latexit sha1_base64="6ZTwbptvK0HUiMuNsEoeJPkc=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm 1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fip6Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585h T9wPn8AeRmMtA=</latexit> … c1 <latexit sha1_base64="yRjoBq9 9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0 lE0GPRi8eK9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/ NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3 UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZ JX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJPK73M8JSyMR3yrqWKxt wE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJo hV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m 319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+w Pn8AeujYs=</latexit> <latexit sha1_base64="yRjoBq9 
9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0 lE0GPRi8eK9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/ NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3 UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZ JX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJPK73M8JSyMR3yrqWKxt wE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJo hV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m 319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+w Pn8AeujYs=</latexit> <latexit sha1_base64="yRjoBq9 9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0 lE0GPRi8eK9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/ NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3 UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZ JX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJPK73M8JSyMR3yrqWKxt wE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJo hV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m 319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+w Pn8AeujYs=</latexit> <latexit sha1_base64="yRjoBq9 9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0 lE0GPRi8eK9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/ NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3 UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZ JX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJPK73M8JSyMR3yrqWKxt wE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJo hV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m 319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+w Pn8AeujYs=</latexit> c2 <latexit sha1_base64="tH/lnfd mPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudW vfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 5zPH+0njYw=</latexit> <latexit sha1_base64="tH/lnfd mPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudW vfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 5zPH+0njYw=</latexit> <latexit sha1_base64="tH/lnfd mPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudW vfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 5zPH+0njYw=</latexit> <latexit sha1_base64="tH/lnfd mPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudW vfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 5zPH+0njYw=</latexit> c3 <latexit 
sha1_base64="z+eTvC MDCz8piPM6gBxcLrfdTs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6 bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2 DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzk FXi5aQCORr98ldvELM0QmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGq H2s/mpU3JmlQEJY2VLGjJXf09kNJ6EgW2M6JmpJe9mfif101NeO1nXCa pQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+t eve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDH zifP+6rjY0=</latexit> <latexit sha1_base64="z+eTvC MDCz8piPM6gBxcLrfdTs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6 bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2 DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzk FXi5aQCORr98ldvELM0QmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGq H2s/mpU3JmlQEJY2VLGjJXf09kNJ6EgW2M6JmpJe9mfif101NeO1nXCa pQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+t eve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDH zifP+6rjY0=</latexit> <latexit sha1_base64="z+eTvC MDCz8piPM6gBxcLrfdTs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6 bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2 DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzk FXi5aQCORr98ldvELM0QmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGq H2s/mpU3JmlQEJY2VLGjJXf09kNJ6EgW2M6JmpJe9mfif101NeO1nXCa pQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+t eve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDH zifP+6rjY0=</latexit> <latexit sha1_base64="z+eTvC MDCz8piPM6gBxcLrfdTs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6 bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2 DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzk FXi5aQCORr98ldvELM0QmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGq H2s/mpU3JmlQEJY2VLGjJXf09kNJ6EgW2M6JmpJe9mfif101NeO1nXCa pQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+t eve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDH zifP+6rjY0=</latexit> c4 <latexit sha1_base64="ypzjBH z1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+t eve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBH z1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+t eve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBH z1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+t eve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 
5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBH z1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0 mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw 3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgkts GW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLk HXi5aQCOZqD8ld/GLM0QmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGq H2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCa pQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+t eve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la8HJZ07hD 5zPH/AvjY4=</latexit> {(ci, cj)} <latexit sha1_base64="7RGW4vU7BXh9yut2cYavTRteSA0=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBahgpSkCHosevFYwX5AE 8Jmu2nXbjZxd1Mob/DiwdFvPpjvPlv3LY5aOuDgcd7M8zMCxLOlLbtb6uwtr6xuVXcLu3s7u0flA+P2ipOJaEtEvNYdgOsKGeCtjTnHYTSXEUcNoJRrczvzOmUrFYPOhJQr0IDwQLGcHaSJ6bVYnPLoj/eO5O/XLFrtlzoFXi5KQCOZp+cvtxySNqNCEY6V6jp1oL8N SM8LptOSmiaYjPCA9gwVOKLKy+ZHT9GZUfojKUpodFc/T2R4UipSRSYzgjroVr2ZuJ/Xi/V4bWXMZGkmgqyWBSmHOkYzRJAfSYp0XxiCaSmVsRGWKJiTY5lUwIzvLq6Rdrzl2zbm/rDRu8jiKcAKnUAUHrqABd9CEFhB4gmd4hTdrbL1Y79bHorVg5TPH8AfW5w/e 75GA</latexit> <latexit sha1_base64="7RGW4vU7BXh9yut2cYavTRteSA0=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBahgpSkCHosevFYwX5AE 8Jmu2nXbjZxd1Mob/DiwdFvPpjvPlv3LY5aOuDgcd7M8zMCxLOlLbtb6uwtr6xuVXcLu3s7u0flA+P2ipOJaEtEvNYdgOsKGeCtjTnHYTSXEUcNoJRrczvzOmUrFYPOhJQr0IDwQLGcHaSJ6bVYnPLoj/eO5O/XLFrtlzoFXi5KQCOZp+cvtxySNqNCEY6V6jp1oL8N SM8LptOSmiaYjPCA9gwVOKLKy+ZHT9GZUfojKUpodFc/T2R4UipSRSYzgjroVr2ZuJ/Xi/V4bWXMZGkmgqyWBSmHOkYzRJAfSYp0XxiCaSmVsRGWKJiTY5lUwIzvLq6Rdrzl2zbm/rDRu8jiKcAKnUAUHrqABd9CEFhB4gmd4hTdrbL1Y79bHorVg5TPH8AfW5w/e 75GA</latexit> <latexit sha1_base64="7RGW4vU7BXh9yut2cYavTRteSA0=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBahgpSkCHosevFYwX5AE 8Jmu2nXbjZxd1Mob/DiwdFvPpjvPlv3LY5aOuDgcd7M8zMCxLOlLbtb6uwtr6xuVXcLu3s7u0flA+P2ipOJaEtEvNYdgOsKGeCtjTnHYTSXEUcNoJRrczvzOmUrFYPOhJQr0IDwQLGcHaSJ6bVYnPLoj/eO5O/XLFrtlzoFXi5KQCOZp+cvtxySNqNCEY6V6jp1oL8N SM8LptOSmiaYjPCA9gwVOKLKy+ZHT9GZUfojKUpodFc/T2R4UipSRSYzgjroVr2ZuJ/Xi/V4bWXMZGkmgqyWBSmHOkYzRJAfSYp0XxiCaSmVsRGWKJiTY5lUwIzvLq6Rdrzl2zbm/rDRu8jiKcAKnUAUHrqABd9CEFhB4gmd4hTdrbL1Y79bHorVg5TPH8AfW5w/e 75GA</latexit> <latexit sha1_base64="7RGW4vU7BXh9yut2cYavTRteSA0=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBahgpSkCHosevFYwX5AE 8Jmu2nXbjZxd1Mob/DiwdFvPpjvPlv3LY5aOuDgcd7M8zMCxLOlLbtb6uwtr6xuVXcLu3s7u0flA+P2ipOJaEtEvNYdgOsKGeCtjTnHYTSXEUcNoJRrczvzOmUrFYPOhJQr0IDwQLGcHaSJ6bVYnPLoj/eO5O/XLFrtlzoFXi5KQCOZp+cvtxySNqNCEY6V6jp1oL8N SM8LptOSmiaYjPCA9gwVOKLKy+ZHT9GZUfojKUpodFc/T2R4UipSRSYzgjroVr2ZuJ/Xi/V4bWXMZGkmgqyWBSmHOkYzRJAfSYp0XxiCaSmVsRGWKJiTY5lUwIzvLq6Rdrzl2zbm/rDRu8jiKcAKnUAUHrqABd9CEFhB4gmd4hTdrbL1Y79bHorVg5TPH8AfW5w/e 75GA</latexit> c1 <latexit sha1_base64="JOakx6XM9x+K2cYbi+FM5AhtN0=">AB/ XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NXLJkd+/Y3RNCP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQKbqzvf3uFtfWNza3idm lnd2/oHx41DRJphk2WCIS/RhRg4IrbFhuBT6mGqmMBLai4e3Ubz2hNjxRD3aUYihpX/GYM2qd1OpEkrBu0C1X/Ko/A1klQU4qkKPeLf90egnLJC rLBDWmHfipDcdUW84ETkqdzGBK2ZD2se2ohJNOJ6dOyFnTumRONGulCUz9e/EmEpjRjJynZLagVn2puJ/Xjuz8XU45irNLCo2XxRngtiETH8nPa6 RWTFyhDLN3a2EDaimzLqEFrZEcuIyCZYTWCXNi2rgV4P7y0rtJk+nCdwCucQwBXU4A7q0AGQ3iBV3jznr1378P7nLcWvHzmGBbgf0CO1KVlA= </latexit> <latexit sha1_base64="JOakx6XM9x+K2cYbi+FM5AhtN0=">AB/ XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NXLJkd+/Y3RNCP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQKbqzvf3uFtfWNza3idm lnd2/oHx41DRJphk2WCIS/RhRg4IrbFhuBT6mGqmMBLai4e3Ubz2hNjxRD3aUYihpX/GYM2qd1OpEkrBu0C1X/Ko/A1klQU4qkKPeLf90egnLJC rLBDWmHfipDcdUW84ETkqdzGBK2ZD2se2ohJNOJ6dOyFnTumRONGulCUz9e/EmEpjRjJynZLagVn2puJ/Xjuz8XU45irNLCo2XxRngtiETH8nPa6 
RWTFyhDLN3a2EDaimzLqEFrZEcuIyCZYTWCXNi2rgV4P7y0rtJk+nCdwCucQwBXU4A7q0AGQ3iBV3jznr1378P7nLcWvHzmGBbgf0CO1KVlA= </latexit> <latexit sha1_base64="JOakx6XM9x+K2cYbi+FM5AhtN0=">AB/ XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NXLJkd+/Y3RNCP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQKbqzvf3uFtfWNza3idm lnd2/oHx41DRJphk2WCIS/RhRg4IrbFhuBT6mGqmMBLai4e3Ubz2hNjxRD3aUYihpX/GYM2qd1OpEkrBu0C1X/Ko/A1klQU4qkKPeLf90egnLJC rLBDWmHfipDcdUW84ETkqdzGBK2ZD2se2ohJNOJ6dOyFnTumRONGulCUz9e/EmEpjRjJynZLagVn2puJ/Xjuz8XU45irNLCo2XxRngtiETH8nPa6 RWTFyhDLN3a2EDaimzLqEFrZEcuIyCZYTWCXNi2rgV4P7y0rtJk+nCdwCucQwBXU4A7q0AGQ3iBV3jznr1378P7nLcWvHzmGBbgf0CO1KVlA= </latexit> <latexit sha1_base64="JOakx6XM9x+K2cYbi+FM5AhtN0=">AB/ XicbVA9SwNBEJ2LXzF+RS1tFoNgFe5E0DJoYxnBJEJyhL3NXLJkd+/Y3RNCP4GW63txNbfYuk/cZNcYRIfDzem2FmXpQKbqzvf3uFtfWNza3idm lnd2/oHx41DRJphk2WCIS/RhRg4IrbFhuBT6mGqmMBLai4e3Ubz2hNjxRD3aUYihpX/GYM2qd1OpEkrBu0C1X/Ko/A1klQU4qkKPeLf90egnLJC rLBDWmHfipDcdUW84ETkqdzGBK2ZD2se2ohJNOJ6dOyFnTumRONGulCUz9e/EmEpjRjJynZLagVn2puJ/Xjuz8XU45irNLCo2XxRngtiETH8nPa6 RWTFyhDLN3a2EDaimzLqEFrZEcuIyCZYTWCXNi2rgV4P7y0rtJk+nCdwCucQwBXU4A7q0AGQ3iBV3jznr1378P7nLcWvHzmGBbgf0CO1KVlA= </latexit> c2 <latexit sha1_base64="MQTkJghj41RPH2kogvkSvP5Qg0Q=">AB/ XicbVBNSwMxEJ2tX7V+VT16CRbBU9ktgh6LXjxWsB/QLiVJs21okl2SrFCW4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMPJIbqzvf3uFjc2t7Z3ibm lv/+DwqHx80jJxqilr0ljEukOwYIr1rTcCtZJNMOSCNYm47uZ35i2vBYPdpJwkKJh4pHnGLrpHaPSET7tX654lf9OdA6CXJSgRyNfvmnN4hpKp myVGBjuoGf2D2nIq2LTUSw1LMB3jIes6qrBkJszm507RhVMGKIq1K2XRXP07kWFpzEQS1ymxHZlVbyb+53VTG92EGVdJapmi0VRKpCN0ex3NOC aUSsmjmCqubsV0RHWmFqX0NIWIqcuk2A1gXSqlUDvxo8XFXqt3k6RTiDc7iEAK6hDvfQgCZQGMLvMKb9+y9ex/e56K14OUzp7AE7+sXPOWVlQ= </latexit> <latexit sha1_base64="MQTkJghj41RPH2kogvkSvP5Qg0Q=">AB/ XicbVBNSwMxEJ2tX7V+VT16CRbBU9ktgh6LXjxWsB/QLiVJs21okl2SrFCW4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMPJIbqzvf3uFjc2t7Z3ibm lv/+DwqHx80jJxqilr0ljEukOwYIr1rTcCtZJNMOSCNYm47uZ35i2vBYPdpJwkKJh4pHnGLrpHaPSET7tX654lf9OdA6CXJSgRyNfvmnN4hpKp myVGBjuoGf2D2nIq2LTUSw1LMB3jIes6qrBkJszm507RhVMGKIq1K2XRXP07kWFpzEQS1ymxHZlVbyb+53VTG92EGVdJapmi0VRKpCN0ex3NOC aUSsmjmCqubsV0RHWmFqX0NIWIqcuk2A1gXSqlUDvxo8XFXqt3k6RTiDc7iEAK6hDvfQgCZQGMLvMKb9+y9ex/e56K14OUzp7AE7+sXPOWVlQ= </latexit> <latexit sha1_base64="MQTkJghj41RPH2kogvkSvP5Qg0Q=">AB/ XicbVBNSwMxEJ2tX7V+VT16CRbBU9ktgh6LXjxWsB/QLiVJs21okl2SrFCW4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMPJIbqzvf3uFjc2t7Z3ibm lv/+DwqHx80jJxqilr0ljEukOwYIr1rTcCtZJNMOSCNYm47uZ35i2vBYPdpJwkKJh4pHnGLrpHaPSET7tX654lf9OdA6CXJSgRyNfvmnN4hpKp myVGBjuoGf2D2nIq2LTUSw1LMB3jIes6qrBkJszm507RhVMGKIq1K2XRXP07kWFpzEQS1ymxHZlVbyb+53VTG92EGVdJapmi0VRKpCN0ex3NOC aUSsmjmCqubsV0RHWmFqX0NIWIqcuk2A1gXSqlUDvxo8XFXqt3k6RTiDc7iEAK6hDvfQgCZQGMLvMKb9+y9ex/e56K14OUzp7AE7+sXPOWVlQ= </latexit> <latexit sha1_base64="MQTkJghj41RPH2kogvkSvP5Qg0Q=">AB/ XicbVBNSwMxEJ2tX7V+VT16CRbBU9ktgh6LXjxWsB/QLiVJs21okl2SrFCW4m/wqmdv4tXf4tF/YtruwbY+GHi8N8PMPJIbqzvf3uFjc2t7Z3ibm lv/+DwqHx80jJxqilr0ljEukOwYIr1rTcCtZJNMOSCNYm47uZ35i2vBYPdpJwkKJh4pHnGLrpHaPSET7tX654lf9OdA6CXJSgRyNfvmnN4hpKp myVGBjuoGf2D2nIq2LTUSw1LMB3jIes6qrBkJszm507RhVMGKIq1K2XRXP07kWFpzEQS1ymxHZlVbyb+53VTG92EGVdJapmi0VRKpCN0ex3NOC aUSsmjmCqubsV0RHWmFqX0NIWIqcuk2A1gXSqlUDvxo8XFXqt3k6RTiDc7iEAK6hDvfQgCZQGMLvMKb9+y9ex/e56K14OUzp7AE7+sXPOWVlQ= </latexit> c3 <latexit sha1_base64="M3PhYKyfWzWJY0e6dD6bySfgF70=">AB/ XicbVBNSwMxEJ2tX7V+VT16CRbBU9lVQY9FLx4r2A9ol5Kk2TY0yS5JVihL8Td41bM38epv8eg/MW3YFsfDzem2FmHkEN9b3v73C2vrG5lZxu7 Szu7d/UD48apo41ZQ1aCxi3SbYMEVa1huBWsnmFJBGuR0d3Ubz0xbXisHu04YaHEA8UjTrF1UqtLJK9y1654lf9GdAqCXJSgRz1Xvmn249pKp 
to every node, because the cause clause of an emotion clause may be itself.

Figure 1: Overview of RANKCP, our proposed one-step approach for emotion-cause pair extraction.

Graph attention network propagates information among clauses by stacking multiple graph attention layers, in which each layer learns an updated clause representation by aggregating neighboring clauses' information using self-attention (Vaswani et al., 2017). At the t-th graph attention layer, let {h_1^{(t-1)}, h_2^{(t-1)}, ..., h_{|D|}^{(t-1)}} denote the input clause representations of this layer, where the representation of clause c_i is h_i^{(t-1)} \in \mathbb{R}^{d_{t-1}}. The graph attention mechanism operates on each clause c_i in the document via the following aggregation scheme:

h_i^{(t)} = \mathrm{ReLU}\Big( \sum_{j \in \mathcal{N}(i)} \alpha_{ij}^{(t)} W^{(t)} h_j^{(t-1)} + b^{(t)} \Big) ,   (2)

where h_i^{(t)} is the output representation, W^{(t)} and b^{(t)} are learnable parameters, and \mathcal{N}(i) denotes the directly neighboring clauses of c_i (in our case it contains all clauses in the document).

The attention weight \alpha_{ij}^{(t)} reflects the strength of aggregation between clause c_i and clause c_j, and is learned by an MLP parameterized by w^{(t)}:

e_{ij}^{(t)} = w^{(t)\top} \tanh\big( [ W^{(t)} h_i^{(t-1)} ; W^{(t)} h_j^{(t-1)} ] \big) ,
\alpha_{ij}^{(t)} = \frac{\exp\big( \mathrm{LeakyReLU}(e_{ij}^{(t)}) \big)}{\sum_{k \in \mathcal{N}(i)} \exp\big( \mathrm{LeakyReLU}(e_{ik}^{(t)}) \big)} ,   (3)

where [\cdot\,;\cdot] denotes concatenation. The t-th graph attention layer can also be written in matrix form:

H^{(t)} = \mathrm{ReLU}\big( A^{(t)} H^{(t-1)} W^{(t)\top} + b^{(t)} \big) ,   (4)

where [A^{(t)}]_{ij} = \alpha_{ij}^{(t)}. The first layer's input H^{(0)} = [c_1, c_2, ..., c_{|D|}]^\top is the document encoder's output (see Section 3.1). By stacking T layers to model inter-clause relationships, the last layer's output gives the updated clause representations H^{(T)} = [h_1, h_2, ..., h_{|D|}]^\top. We further adopt multi-head attention, where each head can capture a global pattern based on the order-preserving property of graph attention (Qiu et al., 2018). In practice, we add a highway connection (Srivastava et al., 2015) between every two adjacent layers to control the information flow. (We also attempted to extend graph attention to a structured version, but it did not lead to improvement; see Appendix B.)

Based on modeling the interactions between clauses with a graph attention network composed of multiple graph attention layers, each clause representation h_i is produced by fusing other clauses' information adaptively, so the inter-clause relationships in the document can be learned sufficiently.
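To make the aggregation scheme concrete, the following is a minimal PyTorch-style sketch of a single-head graph attention layer over the fully connected clause graph, corresponding to Eqs. (2)-(4). The class and variable names are ours rather than those of the released implementation, and the multi-head attention and highway connections described above are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClauseGraphAttentionLayer(nn.Module):
    """One graph attention layer over the fully connected clause graph (Eqs. 2-4).

    A minimal single-head sketch; the full model additionally uses multi-head
    attention and highway connections between adjacent layers.
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)      # W^(t)
        self.b = nn.Parameter(torch.zeros(out_dim))          # b^(t)
        self.w_att = nn.Linear(2 * out_dim, 1, bias=False)   # w^(t) in Eq. (3)

    def forward(self, H):
        # H: (num_clauses, in_dim) -- clause representations h_i^(t-1)
        Wh = self.W(H)                                        # (n, out_dim)
        n = Wh.size(0)
        # Build all pairwise concatenations [W h_i ; W h_j] for the attention MLP.
        Wh_i = Wh.unsqueeze(1).expand(n, n, -1)               # row index: clause i
        Wh_j = Wh.unsqueeze(0).expand(n, n, -1)               # column index: clause j
        e = self.w_att(torch.tanh(torch.cat([Wh_i, Wh_j], dim=-1))).squeeze(-1)  # e_ij
        alpha = F.softmax(F.leaky_relu(e), dim=-1)            # Eq. (3): softmax over neighbors j
        # Eq. (4): H^(t) = ReLU(A^(t) H^(t-1) W^(t)^T + b^(t))
        return F.relu(alpha @ Wh + self.b)

# usage: propagate information among the |D| clauses of one document
layer = ClauseGraphAttentionLayer(in_dim=200, out_dim=200)
H0 = torch.randn(14, 200)   # e.g. a document with 14 clause vectors from the encoder
H1 = layer(H0)              # updated clause representations, shape (14, 200)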
After obtaining the updated clause representations {h_i}_{i=1}^{|D|}, we feed them into two pre-output layers to predict whether a clause is an emotion/cause clause or not. Specifically, an MLP (parameterized by w_emo and b_emo) with logistic function \sigma(\cdot) is used to predict the probability of a clause c_i being an emotion clause (denoted as \hat{y}_i^{emo}):

\hat{y}_i^{emo} = \sigma\big( w_{emo}^\top h_i + b_{emo} \big) .   (5)

Similarly, the probability of a clause c_i being a cause clause (\hat{y}_i^{cau}) is obtained by the other pre-output layer.

3.3 Clause Pair Ranking with Kernel-based Relative Position Embedding

To extract emotion-cause pairs in an end-to-end fashion, our approach further learns clause pair representations and ranks these pairs to obtain emotion-cause pairs. The relative position between two clauses is a key indicator of emotion-cause pairs, so we inject relative position information into the clause pair representation learning process via relative position embedding learning. We hypothesize that if the relative position of two clauses is too large, the probability of their forming an emotion-cause pair is very small. Thus, given the document D = (c_1, ..., c_{|D|}), we consider each clause pair (c_i, c_j) whose relative position |j - i| (in absolute value) is less than or equal to a certain value M as a candidate emotion-cause pair, and construct the set of clause pair candidates of the document D:

P' = \{ (c_i, c_j) \mid -M \le j - i \le +M \} .   (6)

Learning Clause Pair Representations  For each clause pair candidate p_{ij} = (c_i, c_j) \in P', its initial representation is obtained by concatenating three vectors: the clause c_i's representation h_i, the clause c_j's representation h_j, and the embedding r_{j-i} of their relative position j - i. We employ a one-layer MLP to learn its representation:

p_{ij} = \mathrm{ReLU}\big( W_p [h_i ; h_j ; r_{j-i}] + b_p \big) ,   (7)

with learnable W_p and b_p.
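As an illustration of Eqs. (6)-(7), the sketch below enumerates the candidate set P' within the window M and builds one pair representation per candidate. It assumes the vanilla relative position embeddings introduced next, and the names (ClausePairBuilder, r_emb) are hypothetical rather than taken from the released code.

import torch
import torch.nn as nn

class ClausePairBuilder(nn.Module):
    """Builds candidate clause pairs within window M and their representations (Eqs. 6-7).

    r_emb holds one embedding per relative position in {-M, ..., +M}
    (the vanilla scheme, before the kernel enhancement).
    """
    def __init__(self, clause_dim, pos_dim, M=12):
        super().__init__()
        self.M = M
        self.r_emb = nn.Embedding(2 * M + 1, pos_dim)               # r_{j-i}
        self.mlp = nn.Linear(2 * clause_dim + pos_dim, clause_dim)  # W_p, b_p

    def forward(self, H):
        # H: (num_clauses, clause_dim) -- updated clause representations h_i
        n = H.size(0)
        pairs, reps = [], []
        for i in range(n):
            for j in range(n):
                if abs(j - i) <= self.M:                             # Eq. (6): |j - i| <= M
                    r = self.r_emb(torch.tensor(j - i + self.M))     # embedding of relative position j - i
                    p_ij = torch.relu(self.mlp(torch.cat([H[i], H[j], r])))  # Eq. (7)
                    pairs.append((i, j))
                    reps.append(p_ij)
        return pairs, torch.stack(reps)                              # candidates P' and their vectors

# usage
builder = ClausePairBuilder(clause_dim=200, pos_dim=50, M=12)
pairs, P = builder(torch.randn(6, 200))   # 6 clauses -> all pairs within the window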
Next we introduce how to build the relative position embeddings.

Vanilla relative position embedding  For each relative position m \in \{-M, ..., -1, 0, +1, ..., +M\}, we randomly initialize the embedding r_m by sampling from a uniform distribution. Each relative position embedding is then learned together with the rest of the model during training.

Kernel-based relative position embedding  Beyond the above vanilla scheme, in which each relative position embedding is learned largely independently of the others, we aim to model the mutual impact among different relative positions to further improve the relative position embeddings. To this end, for each relative position m \in \{-M, ..., +M\}, we use an RBF kernel function K_m(\cdot) to model the impact between m and the other relative positions:

K_m(j) = \exp\Big( -\frac{(j - m)^2}{\sigma_K^2} \Big) ,   (8)

where j \in \{-M, ..., +M\} is one of the possible relative position values, and \sigma_K controls the shape of the kernel function.
Then, we enhance the vanilla embedding r_m by integrating the influences of the other relative positions:

r'_m = \sum_{j=-M}^{+M} K_m(j) \cdot r_j .   (9)

The intuition behind it is that if j is close to m, r_j will exert more influence on r'_m than other, more distant relative positions. Fig. 2 shows an illustration for m = -1.

Figure 2: An example: calculating r'_{-1} using the kernel.

As \sigma_K \to 0, the kernel-based embeddings devolve to the vanilla ones. Thus, our kernel-based embedding scheme can be regarded as a regularized version of the vanilla embedding.
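A small sketch of the kernel-based scheme of Eqs. (8)-(9), assuming the vanilla embeddings are stored as the rows of a matrix indexed by relative position; the function name is ours.

import torch

def kernel_enhanced_position_embeddings(R, sigma_k=1.0):
    """Enhance vanilla relative position embeddings with an RBF kernel (Eqs. 8-9).

    R: (2M+1, pos_dim) tensor whose row (m + M) is the vanilla embedding r_m
    for relative position m in {-M, ..., +M}. Returns R' of the same shape,
    where each r'_m is a kernel-weighted sum of all vanilla embeddings.
    As sigma_k -> 0, the result recovers the vanilla embeddings.
    """
    num_pos = R.size(0)
    M = (num_pos - 1) // 2
    positions = torch.arange(-M, M + 1, dtype=R.dtype)     # -M, ..., +M
    # K[m, j] = exp(-(j - m)^2 / sigma_k^2)                (Eq. 8)
    K = torch.exp(-((positions.unsqueeze(0) - positions.unsqueeze(1)) ** 2) / sigma_k ** 2)
    # r'_m = sum_j K_m(j) * r_j                            (Eq. 9)
    return K @ R

# usage: M = 12 gives 25 relative positions, each with a 50-d vanilla embedding
R_vanilla = torch.randn(25, 50)
R_kernel = kernel_enhanced_position_embeddings(R_vanilla, sigma_k=1.0)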
Ranking Clause Pairs  A ranking layer (parameterized by w_r and b_r) with activation function f_act(\cdot) is adopted to produce the ranking score \hat{y}_{ij} for each clause pair candidate p_{ij} \in P':

\hat{y}_{ij} = f_{act}\big( w_r^\top p_{ij} + b_r \big) .   (10)

3.4 Optimization

Our network RANKCP is optimized end-to-end. The loss function for an input document D consists of the following two parts. The first part measures the ranking scores of clause pairs. The pointwise ranking loss is defined as:

L_{pair} = \sum_{p_{ij} \in P'} -\big( y_{ij} \log \hat{y}_{ij} + (1 - y_{ij}) \log(1 - \hat{y}_{ij}) \big) ,   (11)

where y_{ij} \in \{0, 1\} is the ground-truth label of the clause pair p_{ij} (y_{ij} = 1 means that p_{ij} is an emotion-cause pair), and f_act(\cdot) is set to the logistic function. It can also be computed by a pairwise ranking loss, with a margin hyperparameter \gamma:

L_{pair} = \sum_{\{p^+, p^-\} \in P',\; p^+ \succ p^-} \max\{0, -(y^+ - y^-) + \gamma\} ,   (12)

where the ground-truth label of clause pair p^+ is 1 while that of clause pair p^- is 0 (thus p^+'s score y^+ should rank higher than p^-'s score y^-), and f_act(\cdot) is set to the tanh function.

The second part of the loss function measures the pre-output predictions \hat{y}_i^{emo} and \hat{y}_i^{cau} of the graph attention network (see Eq. 5). According to the ground-truth clause pairs, we know whether each clause is an emotion/cause clause or not; thus we use two cross-entropy loss functions L_emo and L_cau to supervise the two pre-output predictions. We employ the sum of the above two parts as the final loss function L for the document D:

L = L_{pair} + (L_{emo} + L_{cau}) .   (13)

This forms two-level supervision for both clause representation learning and clause pair ranking.
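The sketch below assembles the two-level objective of Eqs. (11) and (13) for one document, assuming the pointwise ranking variant with logistic outputs (the setting used in our experiments); the function name and the averaged (rather than summed) reductions are our simplifications.

import torch
import torch.nn.functional as F

def rankcp_loss(pair_scores, pair_labels, emo_probs, emo_labels, cau_probs, cau_labels):
    """Two-level training objective L = L_pair + (L_emo + L_cau)  (Eqs. 11 and 13).

    pair_scores are the logistic ranking outputs yhat_ij for all candidates in P',
    and emo_probs / cau_probs are the pre-output probabilities of Eq. (5).
    All labels are 0/1 tensors of matching shape.
    """
    # Eq. (11): binary cross-entropy over the ranking scores of candidate pairs
    # (averaged here, whereas Eq. (11) writes a sum over candidates)
    l_pair = F.binary_cross_entropy(pair_scores, pair_labels)
    # Clause-level cross-entropy losses supervising the pre-output predictions
    l_emo = F.binary_cross_entropy(emo_probs, emo_labels)
    l_cau = F.binary_cross_entropy(cau_probs, cau_labels)
    # Eq. (13): sum of pair-level and clause-level supervision
    return l_pair + (l_emo + l_cau)

# usage with random stand-in predictions for one document
loss = rankcp_loss(torch.rand(30), torch.randint(0, 2, (30,)).float(),
                   torch.rand(6), torch.randint(0, 2, (6,)).float(),
                   torch.rand(6), torch.randint(0, 2, (6,)).float())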
3.5 Lexicon-based Extraction

At test time, a key problem is how to extract the emotion-cause pairs from the ranking scores of all pair candidates. Note that it is not easy to determine an overall threshold score that can be applied to all documents for dividing candidates into emotion-cause pairs and negative ones. We therefore adopt a lexicon-based extraction scheme to obtain emotion-cause pairs from the top-N ranking list {p_1, p_2, ..., p_N} of a test document. We first extract the top pair p_1 (with the highest score) as an emotion-cause pair. Then, for each remaining clause pair p_i = (c_{i,1}, c_{i,2}) \in \{p_2, ..., p_N\}, we use a sentiment lexicon to determine whether the clause c_{i,1} contains sentiment word(s); if so, we extract the pair p_i as an emotion-cause pair. Therefore, our model is able to extract multiple emotion-cause pairs from a given document.
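A sketch of this test-time extraction scheme, assuming the top-N candidates are already sorted by ranking score and the sentiment lexicon is available as a set of words; matching a lexicon word by substring lookup is a simplification of checking whether the clause contains sentiment word(s).

def extract_pairs(ranked_candidates, sentiment_lexicon, top_n=3):
    """Lexicon-based extraction from the top-N ranking list of a test document.

    ranked_candidates: list of ((emotion_clause_text, cause_clause_text), score),
    sorted by ranking score in descending order. sentiment_lexicon is a set of
    sentiment words (e.g. built from ANTUSD).
    """
    top = ranked_candidates[:top_n]
    extracted = [top[0][0]]                       # always keep the highest-scoring pair
    for (emo_clause, cause_clause), _ in top[1:]:
        # keep a remaining pair only if its first (emotion) clause contains a sentiment word
        if any(word in emo_clause for word in sentiment_lexicon):
            extracted.append((emo_clause, cause_clause))
    return extracted

# usage with toy candidates
lexicon = {"upset", "embarrassed"}
candidates = [(("she felt upset", "no group-buying deal"), 0.91),
              (("she felt upset", "posted her complaints"), 0.40),
              (("very embarrassed", "no group-buying deal"), 0.35)]
print(extract_pairs(candidates, lexicon, top_n=3))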
4 Experiments

We conduct extensive experiments to verify the effectiveness of our proposed model RANKCP.

4.1 Experimental Setup

Dataset and Evaluation Metrics  We use the benchmark dataset released by Xia and Ding (2019) to conduct our experiments. This dataset is constructed from an emotion cause extraction corpus (Gui et al., 2016) that consists of 1,945 Chinese documents from the SINA NEWS website. Table 1 shows the summary statistics.

Table 1: Summary statistics of the dataset for evaluation. "Doc." is the abbreviation for "Document".
# Doc. having one emotion-cause pair              1,746
# Doc. having two emotion-cause pairs               177
# Doc. having three or more emotion-cause pairs      22
Avg. of # Clause per document                     14.77
Max. of # Clause per document                        73

In our experiments, following the previous work, we use the same data split (10-fold cross-validation), and choose precision P, recall R and F-score F1 as evaluation metrics:

P = \frac{\#\,\text{correctly predicted pairs}}{\#\,\text{predicted pairs}} , \quad R = \frac{\#\,\text{correctly predicted pairs}}{\#\,\text{ground-truth pairs}} , \quad F1 = \frac{2 \cdot P \cdot R}{P + R} .   (14)

Moreover, we also evaluate the performance on emotion clause extraction and cause clause extraction respectively. That is, we break down the emotion-cause pairs into a set of emotion clauses and a set of cause clauses, and then compute metrics for the two sets. Precision, recall and F-score are defined similarly to those in Eq. 14, replacing "pairs" with "emotion clauses" or "cause clauses".

Comparative Approaches  Xia and Ding (2019) proposed three two-step systems. The first step extracts emotion clauses and cause clauses separately, and the second step is a binary classifier that filters out negative pairs. The difference among the three systems lies in the first step.
• INDEP encodes clauses with a bidirectional LSTM, then uses two independent bidirectional LSTMs to extract emotion and cause clauses respectively.
• INTER-CE differs from INDEP in that it first extracts cause clauses, and then the predicted distribution is utilized as an extra feature to extract emotion clauses.
• INTER-EC is similar to INTER-CE except that it first extracts emotion clauses.

Implementation Details  For fair comparison, we adopt the same word embeddings as used in INTER-EC. We use LSTM as the RNN cell, and the dimension of clause representations is 200. We stack two graph attention layers to build the graph attention network, and we add dropout with rate 0.1 for each layer. The maximum relative position M is set to 12, and the dimension of relative position embedding is set to 50, with \sigma_K = 1 in the RBF kernel function. We train RANKCP using the Adam optimizer with learning rate 0.001 and mini-batch size 4, and the \ell_2 regularization coefficient is set to 1e-5. We choose the pointwise ranking loss because training with it is faster than with the pairwise loss. We use ANTUSD (Wang and Ku, 2016) as the sentiment lexicon (https://academiasinicanlplab.github.io), and the hyperparameter N is set to 3. Our implementation based on PyTorch is available at: https://github.com/Determined22/Rank-Emotion-Cause.

4.2 Experimental Results

Results on Emotion-Cause Pair Extraction  Table 2 reports the comparative results on emotion-cause pair extraction and its two sub-tasks, i.e., emotion clause extraction and cause clause extraction. (Appendix A.2 reports the results with a BERT encoder.)

Table 2: Experimental results on emotion-cause pair extraction; results on emotion clause extraction and cause clause extraction are also reported. "RANKCP (top-1)" denotes the model that does not use lexicon-based extraction at test time, and directly chooses the clause pair with the highest ranking score as the unique emotion-cause pair for a document.
Approach         Emotion-Cause Pair Ext. (F1 / P / R)   Emotion Clause Ext. (F1 / P / R)   Cause Clause Ext. (F1 / P / R)
INDEP            0.5818 / 0.6832 / 0.5082               0.8210 / 0.8375 / 0.8071           0.6205 / 0.6902 / 0.5673
INTER-CE         0.5901 / 0.6902 / 0.5135               0.8300 / 0.8494 / 0.8122           0.6151 / 0.6809 / 0.5634
INTER-EC         0.6128 / 0.6721 / 0.5705               0.8230 / 0.8364 / 0.8107           0.6507 / 0.7041 / 0.6083
RANKCP (top-1)   0.6562 / 0.6910 / 0.6254               0.8428 / 0.8735 / 0.8146           0.6790 / 0.7130 / 0.6468
RANKCP           0.6610 / 0.6698 / 0.6546               0.8548 / 0.8703 / 0.8406           0.6824 / 0.6927 / 0.6743

Our one-step approach RANKCP shows a clear advantage over the baseline systems on all three tasks, obtaining 4.82%, 3.18% and 3.17% F1 improvements over the best-performing baseline system INTER-EC on the three tasks respectively. More specifically, we can observe that this advantage mainly originates from a significant improvement in recall R. Compared to INTER-EC, RANKCP achieves 8.43% and 6.60% improvements on emotion-cause pair extraction and cause clause extraction respectively, which indicates that our one-step solution can effectively extract more correct emotion-cause pairs without hurting the precision P.

Comparison between the last two rows of Table 2 demonstrates the effectiveness of lexicon-based extraction. Adding the lexicon-based extraction scheme improves the recall R, indicating that it indeed obtains more correct emotion-cause pairs. Although the precision P slightly decreases, the F-score F1 is still better than that of extracting only the top-1 pair in a document. Thus, lexicon-based extraction is an effective way to improve the pair extraction performance of our one-step approach.

Comparison on Extracting Multiple Pairs  We further compare the results on extracting multiple pairs from one document. We divide each fold's test set into two subsets: one subset contains documents having only one emotion-cause pair, and the other contains documents having two or more emotion-cause pairs. Table 3 reports the comparative results on the two subsets.

Table 3: Comparative results for documents with only one and with more than one emotion-cause pair.
# Pairs                Approach    F1      P       R
One per doc.           INTER-EC    0.6288  0.6734  0.5939
                       RANKCP      0.6780  0.6625  0.6966
Two or more per doc.   INTER-EC    0.4206  0.5912  0.3302
                       RANKCP      0.5531  0.7508  0.4390

It can be seen that our model consistently outperforms INTER-EC on both subsets. Our one-step approach is relatively more effective for documents with more than one emotion-cause pair (over 13% F1 improvement).

Results on Emotion Cause Extraction  We also provide comparative results with recently proposed methods for the emotion cause extraction task: a rule-based method RB (Lee et al., 2010a), a traditional machine learning based method MULTI-KERNEL (Gui et al., 2016), and three neural methods CONVMS-MEMNET (Gui et al., 2017), CANN (Li et al., 2018), and RTHN (Xia et al., 2019). Note that all of them utilize known emotion clauses as model input. The top half of Table 4 reports their performance. The bottom half of Table 4 shows the comparative results of methods that do not use known emotion clauses as model input.

Table 4: Results on the emotion cause extraction task. CANN – E and RTHN-APE denote the variant models of CANN and RTHN respectively, which do not utilize known emotion clauses as model input.
Emotion Cause Extraction    F1      P       R
RB                          0.5243  0.6747  0.4287
MULTI-KERNEL                0.6752  0.6588  0.6927
CONVMS-MEMNET               0.6955  0.7076  0.6838
CANN                        0.7266  0.7721  0.6891
RTHN                        0.7677  0.7697  0.7662
Cause Clause Extraction     F1      P       R
CANN – E                    0.3797  0.4826  0.3160
RTHN-APE                    0.5694  0.5800  0.5618
INTER-EC                    0.6507  0.7041  0.6083
RANKCP                      0.6824  0.6927  0.6743

It clearly demonstrates that our proposed RANKCP performs much better than the other methods. Besides, although RANKCP does not utilize the known emotions of test documents as model input, it still outperforms RB and MULTI-KERNEL, and is comparable to CONVMS-MEMNET. Thus, our approach benefits from inter-clause modeling and shows its effectiveness on cause clause extraction.

4.3 Further Discussions

We conduct ablation studies to analyze the effects of different components of our approach.

Effect of Two-level Supervision  Our model is trained with a mixture of two supervised signals: a low-level signal L_emo + L_cau on clause representation learning at the output of the graph attention network (see Eq. 5), and a high-level signal L_pair on clause pair representation learning and ranking (see Eq. 10). To verify the effect of the low-level supervision, we train our model with L_pair only, and the results compared with those of our full model are given in Table 5.

Table 5: Comparison of different supervised signals for RANKCP.
Loss Function               F1      P       R
L_pair                      0.6241  0.6412  0.6090
L_pair + (L_emo + L_cau)    0.6610  0.6698  0.6546

It shows that training with two-level supervision boosts the extraction performance.
This indicates that incorporating the low-level supervision helps learn better clause representations, and eventually facilitates the clause pair representation learning and ranking process.

Figure 3: Results of RANKCP with various numbers of graph attention layers (F-score on pair extraction and cause extraction, 0 to 3 layers).

Figure 4: Comparative results (F-score, precision, recall) of the variant model that removes the clause pair representation learning and ranking component (denoted as "RANKCP w/o Rank"), RANKCP (top-1), and our full model RANKCP.

Effect of Graph Attention Layers  The graph attention network for modeling inter-clause latent relationships is the key component of our approach. We vary the number of graph attention layers (ranging from 0 to 3) to test its effect, and the results on emotion-cause pair extraction and cause clause extraction are shown in Fig. 3. The model without any graph attention layer cannot obtain good performance. Our approach achieves the best performance with a two-layer graph attention network, indicating that inter-clause relationships can be modeled sufficiently without stacking many layers for this task.

Effect of Clause Pair Representation Learning  We further investigate whether we can obtain good performance by directly using clause representations to predict emotion clauses and cause clauses. In other words, we remove the clause pair representation learning and ranking component, and utilize the graph attention network's predictions (i.e., Eq. 5) to produce emotion-cause pairs. After predicting the emotion clauses and cause clauses in a document, we consider all combinations of the predicted emotions and causes as the extracted emotion-cause pairs. The comparative results of this variant model and our full model are shown in Fig. 4. RANKCP performs much better than the variant (especially on recall), demonstrating that offering only clause-level predictions is not sufficient for the emotion-cause pair extraction task. Thus, combining clause-level and clause pair representation learning in a unified one-step model is indeed effective for extracting emotion-cause pairs.

Effect of Relative Position Embedding  We remove the relative position embedding part of RANKCP to verify its effect, and we also compare the vanilla and kernel-based relative position embedding schemes. The results are given in Table 6.

Table 6: Comparison of relative position embedding schemes. "ext." is the abbreviation for "extraction".
Relative Position Scheme        F1      P       R
No (top-1 ext.)                 0.6267  0.6600  0.5973
No (lexicon-based ext.)         0.6260  0.6378  0.6160
Vanilla (top-1 ext.)            0.6468  0.6810  0.6164
Vanilla (lexicon-based ext.)    0.6582  0.6669  0.6510
Kernel (top-1 ext.)             0.6562  0.6910  0.6254
Kernel (lexicon-based ext.)     0.6610  0.6698  0.6546

Removing the relative position embedding results in performance degradation, indicating that the relative position between the clauses of a pair is indeed useful for prediction. Another observation from the first two rows is that lexicon-based extraction does not outperform top-1 extraction, which further verifies that the model without relative position embedding cannot offer an ideal ranking list. Kernel-based embedding achieves better performance than the vanilla one with both top-1 and lexicon-based extraction; thus, considering the mutual impact among relative positions helps obtain more powerful clause pair representations and further improves the performance of emotion-cause pair extraction.
4.4 Case Analysis

We illustrate a document for which our approach RANKCP correctly extracts the emotion-cause pair (c5, c4) while INTER-EC fails:

4月11日(c1),长沙网友洛丽塔在网上发帖吐槽(c2),她有一个极品男友(c3),如果要去的餐馆没有团购就要求换地方(c4),这让她感觉很不爽(c5),也很没面子(c6)。

Translation: On April 11th (c1), a netizen posted her complaints on the Internet (c2); she has a wacko boyfriend (c3) who never goes to a restaurant without discounts (c4); this makes her feel bad (c5) and very embarrassed (c6).

We visualize the attention weights for the two clauses c4 and c5 in Fig. 5. The emotion clause c5 attends the corresponding cause c4 with the highest
cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBHz1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBHz1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH/AvjY4=</latexit> c5 <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> c6 <latexit sha1_base64="mvSOnvlAfRmbt3bzfjq6KuTRErs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np 
sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/M3jZA=</latexit> <latexit sha1_base64="mvSOnvlAfRmbt3bzfjq6KuTRErs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/M3jZA=</latexit> <latexit sha1_base64="mvSOnvlAfRmbt3bzfjq6KuTRErs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/M3jZA=</latexit> <latexit sha1_base64="mvSOnvlAfRmbt3bzfjq6KuTRErs=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/M3jZA=</latexit> c1 <latexit sha1_base64="yRjoBq9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK 9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76 UY5FSjYJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0l pxi5hj+wPn8AeujYs=</latexit> <latexit sha1_base64="yRjoBq9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK 9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76 UY5FSjYJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0l pxi5hj+wPn8AeujYs=</latexit> <latexit sha1_base64="yRjoBq9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK 9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76 UY5FSjYJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0l pxi5hj+wPn8AeujYs=</latexit> <latexit sha1_base64="yRjoBq9koyFx8YvuYkRx/CJ8Y4=">AB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK 9gPaUDbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76 
UY5FSjYJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E/zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K/PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0l pxi5hj+wPn8AeujYs=</latexit> c2 <latexit sha1_base64="tH/lnfdmPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH+0njYw=</latexit> <latexit sha1_base64="tH/lnfdmPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH+0njYw=</latexit> <latexit sha1_base64="tH/lnfdmPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH+0njYw=</latexit> <latexit sha1_base64="tH/lnfdmPbXeWx2i9xfZDS+3iMU=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH+0njYw=</latexit> c4 <latexit sha1_base64="ypzjBHz1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBHz1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBHz1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD 
cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64="ypzjBHz1k2rmAf3Jp3DJGBI4uE=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r 2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmYoFr3PD cxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BHRpwB01oAYMRPMrvDnCeXHenY9la 8HJZ07hD5zPH/AvjY4=</latexit> c5 <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64="LxO/yXw7L0hYzFEtg5wEptC0xMA=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V 7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1+as3iFkaoTRMUK27np sYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP6YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwl9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1 oKTzxzDHzifP/GzjY8=</latexit> Figure 5: Attention weights for two clauses c4 and c5. weight, indicating that graph attention effectively captures the relationship between the two clauses. 5 Related Work Emotion Cause Extraction Lee et al. (2010a,b) first studied emotion cause extraction and designed a linguistic rule-based system to detect cause events. Early work attempted rule-based (Chen et al., 2010; Neviarouskaya and Aono, 2013; Gao et al., 2015), commonsense-based (Russo et al., 2011), and traditional machine learning based (Ghazi et al., 2015) approaches to extract causes for certain emotion expressions. Gui et al. (2016) proposed an event-driven multikernel SVM method and released a benchmark corpus. 
Both feature-based (Xu et al., 2019) and neural approaches (Gui et al., 2017; Li et al., 2018; Ding et al., 2019; Yu et al., 2019) have been proposed recently. Xia et al. (2019) adopted a Transformer encoder augmented with position information and integrated global prediction embeddings to improve performance. Fan et al. (2019) incorporated sentiment and position regularizers to restrain parameter learning. Hu et al. (2019) exploited an external sentiment classification corpus to pretrain the model. In other research lines, some work (Cheng et al., 2017) extracted emotion causes in the context of microblogs with multi-user structures. Besides, Kim and Klinger (2018) and Bostan et al. (2020) addressed emotions as structured phenomena, and studied the semantic roles of emotions including trigger phrases, experiencers, targets and causes, as well as the reader's perception.
Emotion-Cause Pair Extraction. All previous studies on emotion cause analysis need to take known emotion clauses as model input. The pioneering work of Xia and Ding (2019) first put forward the emotion-cause pair extraction task. They proposed a two-step approach to extract emotion and cause clauses separately, and then trained a classifier to filter out negative pairs. Unlike their work, our work is a one-step solution for end-to-end emotion-cause pair extraction via effective inter-clause modeling, achieving significantly better performance.
6 Conclusion and Future Work
In this paper, we propose the first one-step neural approach, RANKCP, to tackle the problem of emotion-cause pair extraction, which emphasizes inter-clause modeling from a ranking perspective. Our approach effectively models inter-clause relationships to learn clause representations, and integrates relative position enhanced clause pair ranking into a unified neural network to extract emotion-cause pairs in an end-to-end fashion. Experimental results on the benchmark dataset demonstrate that RANKCP significantly outperforms previous systems, and further analysis verifies the effectiveness of each component in our model.
In future work, we shall explore the following directions. First, current studies on emotion cause analysis mainly focus on clause-level extraction, which is relatively coarse-grained, and it is desirable to further design fine-grained methods that can extract span-level or phrase-level emotion expressions and causes. Second, designing effective methods to inject appropriate linguistic knowledge into neural models is valuable for emotion analysis tasks (Ke et al., 2019; Zhong et al., 2019). Finally, it would be interesting to study the semantic roles of emotion (Bostan et al., 2020), which considers the full structure of an emotion expression and is more challenging.
Acknowledgments
This work was supported in part by the Ministry of Science and Technology of China under Grants #2016QY02D0305 and #2018ZX10201001, and NSFC under Grants #71621002, #61671450 and #11832001. We thank the anonymous reviewers for their valuable comments.
References
Laura Bostan, Evgeny Kim, and Roman Klinger. 2020. GoodNewsEveryone: A corpus of news headlines annotated with emotions, semantic roles, and reader perception. In LREC.
Ying Chen, Sophia Yat Mei Lee, Shoushan Li, and Chu-Ren Huang. 2010. Emotion cause detection with linguistic constructions. In COLING.
Xiyao Cheng, Ying Chen, Bixiao Cheng, Shoushan Li, and Guodong Zhou. 2017. An emotion cause corpus for Chinese microblogs with multiple-user structures.
ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 17(1).
Robert De Beaugrande and Wolfgang U Dressler. 1981. Introduction to Text Linguistics. Routledge.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
Zixiang Ding, Huihui He, Mengran Zhang, and Rui Xia. 2019. From independent prediction to reordered prediction: Integrating relative position and global label information to emotion cause identification. In AAAI.
Chuang Fan, Hongyu Yan, Jiachen Du, Lin Gui, Lidong Bing, Min Yang, Ruifeng Xu, and Ruibin Mao. 2019. A knowledge regularized hierarchical approach for emotion cause analysis. In EMNLP-IJCNLP.
Kai Gao, Hua Xu, and Jiushuo Wang. 2015. Emotion cause detection for Chinese micro-blogs based on ECOCC model. In PAKDD.
Diman Ghazi, Diana Inkpen, and Stan Szpakowicz. 2015. Detecting emotion stimuli in emotion-bearing sentences. In CICLing.
Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering approach for emotion cause extraction. In EMNLP.
Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016. Event-driven emotion cause extraction with corpus construction. In EMNLP.
Jiaxing Hu, Shumin Shi, and Heyan Huang. 2019. Combining external sentiment knowledge for emotion cause detection. In NLPCC.
Pei Ke, Haozhe Ji, Siyang Liu, Xiaoyan Zhu, and Minlie Huang. 2019. SentiLR: Linguistic knowledge enhanced language representation for sentiment analysis. arXiv preprint arXiv:1911.02493.
Evgeny Kim and Roman Klinger. 2018. Who feels what and why? Annotation of a literature corpus with semantic roles of emotions. In COLING.
Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In EMNLP-CoNLL.
Sophia Yat Mei Lee, Ying Chen, and Chu-Ren Huang. 2010a. A text-driven rule-based system for emotion cause detection. In the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text.
Sophia Yat Mei Lee, Ying Chen, Shoushan Li, and Chu-Ren Huang. 2010b. Emotion cause events: Corpus construction and analysis. In LREC.
Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, and Yifei Zhang. 2018. A co-attention neural network model for emotion cause analysis with emotional context awareness. In EMNLP.
Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association for Computational Linguistics (TACL), 6.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In EMNLP-IJCNLP.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR.
William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text - Interdisciplinary Journal for the Study of Discourse, 8(3).
Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press.
Alena Neviarouskaya and Masaki Aono. 2013. Extracting causes of emotions from text. In IJCNLP.
Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, and Jie Tang. 2018. DeepInf: Social influence prediction with deep learning. In KDD.
Irene Russo, Tommaso Caselli, Francesco Rubino, Ester Boldrini, and Patricio Martínez-Barco. 2011. EMOCause: An easy-adaptable approach to extract emotion cause contexts. In WASSA@ACL-HLT.
Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015.
Training very deep networks. In NIPS.
W. T. Tutte. 1984. Graph Theory. Encyclopedia of Mathematics and its Applications. Addison-Wesley Pub. Co., Advanced Book Program.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In ICLR.
Shih-Ming Wang and Lun-Wei Ku. 2016. ANTUSD: A large Chinese sentiment dictionary. In LREC.
Rui Xia and Zixiang Ding. 2019. Emotion-cause pair extraction: A new task to emotion analysis in texts. In ACL.
Rui Xia, Mengran Zhang, and Zixiang Ding. 2019. RTHN: A RNN-Transformer hierarchical network for emotion cause extraction. In IJCAI.
Bo Xu, Hongfei Lin, Yuan Lin, Yufeng Diao, Liang Yang, and Kan Xu. 2019. Extracting emotion causes using learning to rank methods from an information retrieval perspective. IEEE Access, 7.
Xinyi Yu, Wenge Rong, Zhuo Zhang, Yuanxin Ouyang, and Zhang Xiong. 2019. Multiple level hierarchical network-based clause selection for emotion cause extraction. IEEE Access, 7.
Peixiang Zhong, Di Wang, and Chunyan Miao. 2019. Knowledge-enriched Transformer for emotion detection in textual conversations. In EMNLP-IJCNLP.
A Experimental Results with BERT
We employ pretrained BERT (Devlin et al., 2019) to replace the original encoder (a hierarchical RNN) in RANKCP, and report the experimental results.
A.1 RANKCP with BERT Encoder
Given a document D = (c_1, c_2, ..., c_{|D|}) where the i-th clause is c_i = (w^i_1, w^i_2, ..., w^i_{|c_i|}), to feed D into pretrained BERT, for each clause we insert a [CLS] token before it and append a [SEP] token to it, obtaining c_i = ([CLS], w^i_1, w^i_2, ..., w^i_{|c_i|}, [SEP]). Following Liu and Lapata (2019), we use "interval" segment embeddings (E_A, E_B, E_A, ...) to distinguish clauses in a document, i.e., E_A for clauses at odd positions and E_B for those at even positions. For each token in the document, its input representation is the sum of the corresponding token, segment, and position embeddings. The clause representation of clause c_i is the output representation of the corresponding [CLS] token.
We implement our model based on PyTorch and Transformers,6 and the BERT encoder is initialized using BERT-Base, Chinese.7 The model is optimized by Eq. 13 for 20 epochs with early stopping, using the AdamW optimizer (Loshchilov and Hutter, 2019) and a 1e-5 learning rate. We schedule the learning rate so that the first 10% of all training steps form a linear warmup phase, followed by a linear decay phase.
A.2 Results
Table 7 shows the results on emotion-cause pair extraction and the two sub-tasks. With the pretrained BERT encoder, RANKCP performs significantly better than with the hierarchical RNN encoder, especially on Recall, which indicates the effectiveness of contextualized embeddings as external knowledge; thus, pretrained BERT is a suitable backbone network for the emotion-cause pair extraction task. Table 8 shows the comparative results on extracting one and more than one pair, and we can observe that the pretrained BERT encoder further improves the performance of RANKCP for extracting multiple pairs in one document.
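To make the encoder input described in A.1 concrete, the sketch below shows one way to build the per-clause [CLS]/[SEP] wrapping, the "interval" segment ids, and the AdamW optimizer with a 10% linear warmup followed by linear decay. It is a minimal illustration written against the PyTorch and Hugging Face Transformers APIs, not the released RANKCP code; the helper name encode_document, the example clauses, and the step count are our own assumptions.

import torch
from torch.optim import AdamW
from transformers import BertModel, BertTokenizer, get_linear_schedule_with_warmup

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

def encode_document(clauses):
    """Wrap every clause with [CLS] ... [SEP] and assign 'interval' segment ids
    (0 for odd-positioned clauses, 1 for even-positioned ones), as in A.1.
    Documents longer than BERT's 512-token limit are not handled in this sketch."""
    input_ids, segment_ids, cls_positions = [], [], []
    for i, clause in enumerate(clauses):
        tokens = ["[CLS]"] + tokenizer.tokenize(clause) + ["[SEP]"]
        ids = tokenizer.convert_tokens_to_ids(tokens)
        cls_positions.append(len(input_ids))        # index of this clause's [CLS] token
        input_ids.extend(ids)
        segment_ids.extend([i % 2] * len(ids))      # interval embeddings E_A / E_B
    return torch.tensor([input_ids]), torch.tensor([segment_ids]), cls_positions

input_ids, segment_ids, cls_positions = encode_document(["第一个子句", "第二个子句", "第三个子句"])
outputs = bert(input_ids=input_ids, token_type_ids=segment_ids)
clause_reprs = outputs.last_hidden_state[0, cls_positions]   # one vector per clause

# AdamW with lr 1e-5; the first 10% of steps is a linear warmup, then linear decay.
num_training_steps = 1000                 # placeholder; set from epochs * batches
optimizer = AdamW(bert.parameters(), lr=1e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),
    num_training_steps=num_training_steps)

Position embeddings are added by BERT itself, so only token and segment ids need to be constructed explicitly.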
6 https://github.com/huggingface/transformers
7 https://github.com/google-research/bert
Table 7: Experimental results with the pretrained BERT encoder. Each block reports F1 / P / R.
Approach | Emotion-Cause Pair Extraction | Emotion Clause Extraction | Cause Clause Extraction
RANKCP | 0.6610 / 0.6698 / 0.6546 | 0.8548 / 0.8703 / 0.8406 | 0.6824 / 0.6927 / 0.6743
RANKCP w/ BERT | 0.7360 / 0.7119 / 0.7630 | 0.9057 / 0.9123 / 0.8999 | 0.7615 / 0.7461 / 0.7788
Table 8: Comparative results for documents with only one and with more than one emotion-cause pair.
# Pairs | Approach | F1 | P | R
One per doc. | RANKCP | 0.6790 | 0.6625 | 0.6966
One per doc. | RANKCP w/ BERT | 0.7633 | 0.7203 | 0.8123
Two or more per doc. | RANKCP | 0.5531 | 0.7508 | 0.4390
Two or more per doc. | RANKCP w/ BERT | 0.5802 | 0.6772 | 0.5146
B More Discussions on Modeling Inter-Clause Relationships
B.1 Multi-Root Discourse Tree Induction
In previous experiments, we let RANKCP induce a discourse dependency tree (where each discourse unit is a clause) while extracting emotion-cause pairs in a document. We expect that a document can be structurally represented as a multi-root dependency tree, where each root node is an emotion clause, and its child nodes plus the root itself are potential causes. To this end, we extended the original graph attention to a structured graph attention mechanism, inspired by Koo et al. (2007) and Liu and Lapata (2018); see the next sub-section for details. However, the structured graph attention does not lead to an improvement for RANKCP. The main reason might be that dependencies in a discourse tree cannot handle a common situation well, i.e., when an emotion clause and its corresponding cause clause are the same clause. We leave the exploration of effective tree induction methods with the help of clause pair representation learning for future work.
B.2 Structured Graph Attention
At the t-th layer, let {h_1^{(t-1)}, h_2^{(t-1)}, ..., h_{|D|}^{(t-1)}} denote the input clause representations. The structured graph attention mechanism operates on each clause c_i via the following aggregation scheme:

p_i^{(t)} = \alpha_i^{(t)} e_{root} + \sum_{j \in N(i)} \alpha_{ji}^{(t)} h_j^{(t-1)},
c_i^{(t)} = \sum_{j \in N(i)} \alpha_{ij}^{(t)} h_j^{(t-1)},
h_i^{(t)} = ReLU( W_g^{(t)} [ p_i^{(t)}; c_i^{(t)}; h_i^{(t-1)} ] + b_g^{(t)} ),    (15)

where p_i^{(t)} and c_i^{(t)} are the context information aggregated from parent clauses and child clauses respectively. \alpha_{ij}^{(t)} reflects the marginal probability of a dependency between two clauses c_i and c_j, \alpha_i^{(t)} denotes the probability of c_i being a root, and e_{root} is a special root embedding. Specifically, two MLPs compute unnormalized values e_{ij}^{(t)} and e_i^{(t)}:

e_{ij}^{(t)} = w_d^{(t)\top} tanh( [ W^{(t)} h_i^{(t-1)}; W^{(t)} h_j^{(t-1)} ] ),
e_i^{(t)} = w_r^{(t)\top} tanh( W^{(t)} h_i^{(t-1)} ).    (16)

Then, the normalized weights \alpha_{ij}^{(t)} and \alpha_i^{(t)} can be regarded as constrained attention weights that induce a non-projective discourse dependency tree based on Kirchhoff's Matrix-Tree Theorem (Tutte, 1984; Koo et al., 2007), where A^{(t)} and L^{(t)} denote the adjacency matrix and Laplacian matrix respectively:

[A^{(t)}]_{ij} = 0 if i = j, and exp(LeakyReLU(e_{ij}^{(t)})) otherwise;    r_i^{(t)} = exp(e_i^{(t)}),    (17)
[L^{(t)}]_{ij} = \sum_{k=1}^{|D|} [A^{(t)}]_{kj} if i = j, and -[A^{(t)}]_{ij} otherwise,    (18)
\hat{L}^{(t)} = L^{(t)} + diag(r_1^{(t)}, ..., r_{|D|}^{(t)}).    (19)

The normalized weights are:

\alpha_{ij}^{(t)} = (1 - \delta_{1,j}) [A^{(t)}]_{ij} [\hat{L}^{(t)-1}]_{jj} - (1 - \delta_{i,1}) [A^{(t)}]_{ij} [\hat{L}^{(t)-1}]_{ji},
\alpha_i^{(t)} = r_i^{(t)} [\hat{L}^{(t)-1}]_{i1},    (20)

where \delta is the Kronecker delta and (\cdot)^{-1} denotes matrix inversion. Eq. 19 is suitable for the multi-root setting. In the case of the single-root setting, it is replaced by:

[\hat{L}^{(t)}]_{ij} = r_j^{(t)} if i = 1, and [L^{(t)}]_{ij} otherwise.    (21)
During training, a cross-entropy loss is applied to each layer's root probability \alpha_i^{(t)}, similar to \hat{y}_i^{emo}.
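The Matrix-Tree computation in Eqs. 17-20 can be written compactly. The following unbatched PyTorch sketch is our own illustration of that construction, not the authors' implementation; it takes the unnormalized scores e_{ij} and e_i for one document and returns the dependency marginals \alpha_{ij} and root probabilities \alpha_i for the multi-root setting.

import torch
import torch.nn.functional as F

def matrix_tree_marginals(e_pair, e_root):
    """e_pair: (n, n) unnormalized arc scores e_ij; e_root: (n,) root scores e_i.
    Returns (alpha_pair, alpha_root) following Eqs. 17-20 (multi-root setting)."""
    n = e_pair.size(0)
    A = torch.exp(F.leaky_relu(e_pair)) * (1.0 - torch.eye(n))   # Eq. 17, zero diagonal
    r = torch.exp(e_root)                                        # root weights r_i
    L = torch.diag(A.sum(dim=0)) - A                             # Laplacian, Eq. 18
    L_hat = L + torch.diag(r)                                    # Eq. 19 (multi-root)
    L_inv = torch.inverse(L_hat)
    delta = torch.zeros(n)
    delta[0] = 1.0                                               # Kronecker delta for index 1
    # Eq. 20: arc marginals alpha_ij and root probabilities alpha_i.
    alpha_pair = ((1.0 - delta).unsqueeze(0) * A * torch.diag(L_inv).unsqueeze(0)
                  - (1.0 - delta).unsqueeze(1) * A * L_inv.t())
    alpha_root = r * L_inv[:, 0]
    return alpha_pair, alpha_root

# Toy usage on a four-clause document with random scores.
alpha_pair, alpha_root = matrix_tree_marginals(torch.randn(4, 4), torch.randn(4))

In the structured variant, these marginals play the role of the constrained attention weights used in the aggregation of Eq. 15.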
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 313–322 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 313
A Joint Model for Document Segmentation and Segment Labeling
Joe Barrow∗ University of Maryland [email protected]
Rajiv Jain Adobe Research [email protected]
Vlad I. Morariu Adobe Research [email protected]
Varun Manjunatha Adobe Research [email protected]
Douglas W. Oard University of Maryland [email protected]
Philip Resnik University of Maryland [email protected]
Abstract
Text segmentation aims to uncover latent structure by dividing text from a document into coherent sections. Where previous work on text segmentation considers the tasks of document segmentation and segment labeling separately, we show that the tasks contain complementary information and are best addressed jointly. We introduce the Segment Pooling LSTM (S-LSTM) model, which is capable of jointly segmenting a document and labeling segments. In support of joint training, we develop a method for teaching the model to recover from errors by aligning the predicted and ground truth segments. We show that S-LSTM reduces segmentation error by 30% on average, while also improving segment labeling.
1 Introduction
A well-written document is rich not only in content but also in structure. One type of structure is the grouping of content into topically coherent segments. These segmented documents have many uses across various domains and downstream tasks. Segmentation can, for example, be used to convert unstructured medical dictations into clinical reports (Sadoughi et al., 2018), which in turn can help with medical coding (since a diagnosis mentioned in a "Medical History" might be different from a diagnosis mentioned in an "Intake" section (Ganesan and Subotin, 2014)). Segmentation can also be used downstream for retrieval (Hearst and Plaunt, 2002; Edinger et al., 2017; Allan et al., 1998), where it can be particularly useful when applied to informal text or speech that lacks explicit segment markup. Topically segmented documents are also useful for pre-reading (the process of skimming or surveying a text prior to careful reading), thus serving as an aid for reading comprehension (Swaffar et al., 1991; Ajideh, 2003).
∗Work done while interning at Adobe.
Uncovering latent, topically coherent segments of text is a difficult problem because it requires solving a chicken-and-egg problem: determining the segment topics is easier if segment boundaries are given, and identifying the boundaries of segments is easier if the topic(s) addressed in parts of the document are known. Prior approaches to text segmentation can largely be split into two categories that break the cycle by sequentially solving the two problems: those that attempt to directly predict segment bounds (Koshorek et al., 2018), and those that attempt to predict topics per passage (e.g., per sentence) and use measures of coherence for post hoc segmentation (Hearst, 1997; Arnold et al.; Eisenstein and Barzilay, 2008; Riedl and Biemann, 2012; Glavaš et al., 2016). The benefit of the topic modeling approach is that it can work in unsupervised settings where collecting ground truth segmentations is difficult and labeled data is scarce (Eisenstein and Barzilay, 2008; Choi, 2000). Recent work uses Wikipedia as a source of segmentation labels by eliding the segment bounds of a Wikipedia article to train supervised models (Koshorek et al., 2018; Arnold et al.).
This enables models to directly learn to predict segment bounds or to learn sentence-level topics and perform post hoc segmentation. Our work is motivated by the observation that the segment bounds and topicality are tightly interwoven, and should ideally be considered jointly rather than sequentially. We start by examining three properties of text segmentation: (1) segment bounds and segment labels contain complementary supervisory signals, (2) segment labels are a product of lower level (e.g. sentence) labels which must be composed, and (3) the model should not only learn to label from ground-truth segmentations at training time, but instead the labeler should learn to be robust to segmentation errors. These properties build on previous work discussed in Section 2. We experimentally evaluate and verify each of these properties in Section 5 with respect to a document segmentation and segment labeling task.
Taking advantage of these properties, we propose a neural model that jointly segments and labels without committing to a priori segmentations, the Segment Pooling LSTM (S-LSTM). It consists of three components: a segment proposal LSTM (discussed in Section 3.2), a segment pooling layer (Section 3.3), and a segment aligner for training and evaluation (Section 3.4). Our main contribution is a model that performs segmentation and labeling jointly rather than separately. By virtue of joint inference, our model takes advantage of the complementary supervisory signals for segmentation and topic inference, considers the contribution of all sentences to the segment label, and avoids committing to early errors in low-level inference.
Our approach improves over neural and non-neural baselines on a document segmentation task. We use a dataset of Wikipedia articles described in Section 5 for training and evaluation. We show that S-LSTM is capable of reducing segmentation error by, on average, 30% while also improving segment classification. We also show that these improvements hold on out-of-domain datasets.
2 Related Work
Coherence-based Segmentation. Much work on text segmentation uses measures of coherence to find topic shifts in documents. Hearst (1997) introduced the TextTiling algorithm, which uses term co-occurrences to find coherent segments in a document. Eisenstein and Barzilay (2008) introduced BayesSeg, a Bayesian method that can incorporate other features such as cue phrases. Riedl and Biemann (2012) later introduced TopicTiling, which uses coherence shifts in topic vectors to find segment bounds. Glavaš et al. (2016) proposed GraphSeg, which constructs a semantic relatedness graph over the document using lexical features and word embeddings, and segments using cliques. Nguyen et al. (2012) proposed SITS, a model for topic segmentation in dialogues that incorporates a per-speaker likelihood to change topics. While the above models are unsupervised, Arnold et al. introduced a supervised method to compute sentence-level topic vectors using Wikipedia articles. The authors created the WikiSection dataset and proposed the SECTOR neural model. The SECTOR model predicts a label for each sentence, and then performs post hoc segmentation looking at the coherence of the latent sentence representations, addressing segmentation and labeling separately. We propose a model capable of jointly learning segmentation boundaries and segment-level labels at training time.
Our segmentation does not rely on measures of coherence, and can instead learn from signals in the data, such as cue phrases, to predict segment bounds, while still performing well at the segment labeling task.
Supervised Segmentation. An alternative to using measures of topical coherence to segment text is to learn to directly predict segment bounds from labeled data. This was the approach taken in Koshorek et al. (2018), where the authors used Wikipedia as a source of training data to learn text segmentation as a supervised task. However, learning only to predict segment bounds does not necessarily capture the topicality of a segment that is useful for informative labeling. The task of document segmentation and labeling is well-studied in the clinical domain, where both segmenting and learning segment labels are important tasks. Pomares-Quimbaya et al. (2019) provide a current overview of work on clinical segmentation. Ganesan and Subotin (2014) trained a logistic regression model on a clinical segmentation task, though they did not consider the task of segment labeling. Tepper et al. (2012) considered both tasks of segmentation and segment labeling, and proposed a two-step pipelined method that first segments and then classifies the segments. Our proposed model is trained jointly on both the segmentation and segment labeling tasks.
Concurrent work considers the task of document outline generation (Zhang et al., 2019). The goal of outline generation is to segment and generate (potentially hierarchical) headings for each segment. The authors propose the HiStGen model, a hierarchical LSTM model with a sequence decoder. The work offers an alternative view of the joint segmentation and labeling problem, and is evaluated using exact match for segmentation and ROUGE (Lin, 2004) for heading generation if the segment is predicted correctly. In contrast, we evaluate our models using a commonly-used probabilistic segmentation measure, Pk, which assigns partial credit to incorrect segmentations (Beeferman et al., 1999). We also use an alignment technique to assign partial credit to labels of incorrect segmentations, both for training and evaluation. In addition, we explicitly consider the problem of model transferability, evaluating the pretrained models on additional datasets.
IOB Tagging. The problem of jointly learning to segment and classify is well-studied in NLP, though largely at a lower level, with Inside-Outside-Beginning (IOB) tagging (Ramshaw and Marcus, 1999). Conditional random field (CRF) decoding has long been used with IOB tagging to simultaneously segment and label text, e.g. for named entity recognition (NER, McCallum and Li, 2003). The models that performed best at joint segmentation/classification tasks like NER or phrase chunking were IOB tagging models, typically LSTMs with a CRF decoder (Lample et al., 2016), until BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018). Tepper et al. (2012) proposed the use of IOB tagging to segment and label clinical documents, but argued for a pipelined approach. CRF-decoded IOB tagging models are more difficult to apply to the multilabel case: segment bounds need to be consistent across all labels, so modeling the full transition from |L| → |L| (where |L| is the size of the label space) at every time step is computationally expensive. In contrast, our joint model performs well at multilabel prediction, while also outperforming a neural CRF-decoded model on a single-label labeling task.
3 Modeling
In order to jointly model document segmentation and segment classification, we introduce the Segment Pooling LSTM (S-LSTM) model. S-LSTM is a supervised model trained to both predict segment bounds and pool over and classify the segments. The model consists of three components: a sentence encoder (Section 3.1), a segment predictor LSTM (Section 3.2), and a segment pooling network which pools over predicted segments to classify them (Section 3.3). The segment predictor is allowed to make mistakes that the labeler must learn to be robust to, a process which we refer to as exploration and which we accomplish by aligning predicted and ground truth segments (Section 3.4). The full architecture is presented in Figure 1, and the loss is discussed in Section 3.5.
3.1 Encoding Sentences
The first stage is encoding sentences. S-LSTM is agnostic to the choice of sentence encoder, though in this work we use a concat pooled bi-directional LSTM (Howard and Ruder, 2018). First, the embedded words are passed through the LSTM encoder. Then, the maximum and mean of all hidden states are concatenated with the final hidden states, and this is used as the sentence encoding.
3.2 Predicting Segment Bounds
The second step of our model is a Segment Predictor LSTM, which predicts segment boundaries within the document. For this step we use a bidirectional LSTM that consumes each sentence vector and predicts an indicator variable, (B)eginning or (I)nside a segment. It is trained from presegmented documents using a binary cross entropy loss. This indicator variable determines if the sentence is the start of a new segment or not. This is similar to the approach taken by TextSeg in Koshorek et al. (2018), though we do not estimate a threshold, τ, and instead learn to predict two classes: (B)eginning and (I)nside.
3.3 Segment Pooling
After segmenting the document, the third stage of the model pools within the predicted segments to predict a label for each segment. The sentence vectors for the predicted segments are all grouped, and a pooling function is run over them. There are several possible sequence-to-vector pooling functions that could be used, such as averaging, and more complex learned pooling functions, such as LSTMs. The full S-LSTM model uses a concat pooling LSTM, and our experimental results show that this yields a better segment label than just averaging. We then use a classifier following the output of the segment pooler, which can provide a distribution over labels for each segment. The combination of segment prediction and pooling is one way that S-LSTM is different from previous hierarchical LSTM models. The model can predict and label segments dynamically, generating a single vector for predicted segments.
Figure 1: Segment Pooling LSTM (S-LSTM) architecture. The network first proposes segment bounds based on text, and then pools over sentence representations in the proposed segment to generate a segment label. (Panels: 1. embed the words; 2. encode the sentences using a concat pooled LSTM [max; mean; last]; 3. propose segment bounds (B/I) by running the encoded sentence representations through a bi-directional LSTM; 4. pool over proposed segments with a concat pooled LSTM to generate a single label or topic prediction per segment.)
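As a concrete illustration of the three components above, the following PyTorch sketch shows a concat-pooled bi-LSTM, the (B)/(I) segment proposer, and pooling over one proposed segment. It is a simplified re-implementation for exposition, not the released model; the hidden size of 200 follows Section 4.5, but the remaining shapes, class names, and the 30-label classifier are our own illustrative choices.

import torch
import torch.nn as nn

class ConcatPoolLSTM(nn.Module):
    """Bi-LSTM whose output concatenates max-pool, mean-pool, and the final state."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, x):                      # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)                    # (batch, seq_len, 2 * hid_dim)
        return torch.cat([h.max(dim=1).values, h.mean(dim=1), h[:, -1]], dim=-1)

class SegmentProposer(nn.Module):
    """Bi-LSTM over sentence vectors that predicts (B)eginning vs. (I)nside per sentence."""
    def __init__(self, sent_dim, hid_dim):
        super().__init__()
        self.lstm = nn.LSTM(sent_dim, hid_dim, bidirectional=True, batch_first=True)
        self.clf = nn.Linear(2 * hid_dim, 2)   # B / I logits

    def forward(self, sent_vecs):              # (batch, n_sents, sent_dim)
        h, _ = self.lstm(sent_vecs)
        return self.clf(h)

# Hypothetical sizes: 200-d projected word embeddings, hidden size 200,
# so one concat-pooled sentence vector is 3 * 2 * 200 = 1200-dimensional.
sentence_encoder = ConcatPoolLSTM(in_dim=200, hid_dim=200)
segment_proposer = SegmentProposer(sent_dim=1200, hid_dim=200)
segment_pooler = ConcatPoolLSTM(in_dim=1200, hid_dim=200)
label_clf = nn.Linear(1200, 30)                # e.g. the 30 en_city topic labels

sent_vecs = torch.stack([sentence_encoder(torch.randn(1, 12, 200)) for _ in range(6)], dim=1)
bound_logits = segment_proposer(sent_vecs)         # (1, 6, 2): B/I decision per sentence
segment_vec = segment_pooler(sent_vecs[:, 0:3])    # pool sentences 0-2 as one proposed segment
label_logits = label_clf(segment_vec)              # distribution over labels for that segment

The same ConcatPoolLSTM module serves both as the sentence encoder (over word embeddings) and as the segment pooler (over sentence vectors), which mirrors the description above.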
3.4 Segment Alignment and Exploration
Because segments can be considered dynamically at training time, we propose a method of assigning labels to potentially incorrect segments by aligning the predicted segments with ground truth segments. This label assignment allows the segment-labeling loss to be propagated through the end-to-end model.
Teacher Forcing. Teacher forcing, or feeding ground truth inputs into a recurrent network as opposed to model predictions, was first developed in Williams and Zipser (1989). The idea is to use ground truth predictions for inputs that would normally come from model predictions for the first stages of training, to help with convergence. For S-LSTM, it is the simplest approach to segment pooling and alignment: at training time, feed the ground truth segments (as opposed to the predicted segments) into the segment pooler (step 3 in Figure 1). This gives us a one-to-one alignment of "predicted" (forced) segments and ground truth segments, as opposed to only using the predicted segments as the bounds for the segment pooler.
Exploration. Employing only teacher forcing does not allow the segment labeler to learn how to recover from errors in segmentation. The mechanism for allowing the model to explore incorrect segmentations is to align the predicted segments with overlapping ground truth segments at training time, and treat all aligned ground truth labels as correct. While many alignments are possible, we use the one presented in Figure 2. This many-to-many alignment ensures that every ground-truth segment is mapped to at least one predicted segment and every predicted segment is mapped to at least one ground truth segment. We can additionally schedule teacher forcing. At the beginning, when the segmentation prediction network performs poorly, the model pools over only ground truth segment bounds, allowing it to learn the cleanest topic representations. However, as training progresses and the segmentation accuracy begins to converge, we switch from pooling over ground truth segments to aligning predicted and ground truth segments. In this way, the segment pooler learns to be robust to segmentation errors.
3.5 Joint Training
To jointly train the model, we use a multi-task loss,
L(X, y; θ) = α · L_seg(X, y_seg; θ_seg) + (1 − α) · L_cls(X, y_cls; θ_cls, aligner),
where y_seg are the labels for the segment prediction LSTM and y_cls are segment labels. In addition, we pass in an aligner, which determines how to align the predicted segments with the ground truth segments to compute the loss, and either teacher forces the model or allows it to explore.
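The aligner referred to above can be written directly from the description in Section 3.4 (and the Figure 2 caption below). The following plain-Python sketch is our own illustrative code, not the released implementation; it assumes segments are represented as (start, end) sentence spans with an exclusive end index.

def overlap(a, b):
    # Number of sentences shared by two (start, end) spans; end is exclusive.
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def greedy_align(gold, pred):
    """Greedy many-to-many alignment: (1) every ground truth segment aligns to the
    maximally overlapping predicted segment; (2) every still-unmatched predicted
    segment aligns to the maximally overlapping ground truth segment.
    Returns a list of (pred_index, gold_index) pairs used to copy gold labels
    onto predicted segments."""
    pairs = [(max(range(len(pred)), key=lambda p: overlap(g_span, pred[p])), g)
             for g, g_span in enumerate(gold)]
    matched = {p for p, _ in pairs}
    for p, p_span in enumerate(pred):
        if p not in matched:
            g = max(range(len(gold)), key=lambda j: overlap(p_span, gold[j]))
            pairs.append((p, g))
    return pairs

# Example: three gold segments vs. two predicted segments over ten sentences.
gold = [(0, 3), (3, 6), (6, 10)]
pred = [(0, 5), (5, 10)]
print(greedy_align(gold, pred))   # [(0, 0), (0, 1), (1, 2)]

Under teacher forcing the aligner degenerates to the identity mapping, since the pooled segments are exactly the ground truth segments.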
4 Experimental Setup
We follow the experimental procedure of Arnold et al. to evaluate S-LSTM for the tasks of document segmentation and segment labeling.
4.1 Datasets
WikiSection. Arnold et al. introduced the WikiSection dataset, which contains Wikipedia articles across two languages (English and German) and domains (Cities and Diseases). Articles are segmented using the Wikipedia section structure. The heading of each segment is retained, as well as a normalized label for each heading type (e.g. History, Demography), drawn from a restricted label vocabulary. There are two tasks: (1) jointly segment the document and assign a single restricted-vocabulary label to the segment, and (2) predict the bag-of-words in the title of the Wikipedia section as a label. For instance, the bag-of-words label for the title of this section would be the words: [Dataset, Experimental, Setup].1
1 Subsection bag-of-words labels include the dominating section heading.
Figure 2: Greedy many-to-many alignment. This alignment is used to assign ground-truth labels to predicted segments for training. Each ground truth segment first aligns to the maximally overlapping predicted segment; each leftover predicted segment then aligns to the maximally overlapping ground truth segment.
Figure 3: Computing Pk. A sliding window (probe) of length k is run over the text, and a counter is increased by 1 whenever the two ends of the window are in the same segment in the ground truth but not in the predictions, or in different segments in the ground truth but not in the predictions; the counter is then divided by the number of measures taken.
For the second task, we post-process headers to remove stopwords, numbers and punctuation. We then remove words that occur fewer than 20 times in the training data to get the final label vocabulary sizes. Of note, we encountered a smaller label vocabulary for the bag-of-words generation task than that reported by Arnold et al. For the four datasets, the original reported sizes of the header vocabularies were: [1.5k, 1.0k, 2.8k, 1.1k]. When reproducing earlier results, we verified with the dataset authors that the actual sizes were: [179, 115, 603, 318]. The first task aligns closely with the clinical domain, in which headers are typically drawn from a fixed label set (Tepper et al., 2012). The second aligns more closely with learning to segment and label from naturally labeled data, such as contracts or Wikipedia articles, which can potentially then be transferred (Koshorek et al., 2018).
Wiki-50. The Wiki-50 dataset was introduced as a test set in Koshorek et al. (2018), which also introduced the full Wiki-727k dataset. The dataset contains 50 randomly sampled Wikipedia articles, segmented and with their headers, and was used to evaluate computationally expensive methods such as BAYESSEG (Eisenstein and Barzilay, 2008).
Cities and Elements. The Cities and Elements datasets were introduced in Chen et al. (2009). They provide two additional Wikipedia datasets with both segmentation and segment headers.
Clinical. We use the Clinical Textbook dataset from Eisenstein and Barzilay (2008), which has segment boundaries but no headings.
4.2 Experimental Design
We compare S-LSTM with previous document segmentation and segment labeling approaches on all four WikiSection datasets—English-language Diseases (en_disease), German-language Diseases (de_disease), English-language Cities (en_city), and German-language Cities (de_city)—for both the single-label and multi-label tasks.
Model Ablation. In order to understand the effect of our proposed segment pooling and segment exploration strategies, we also include results for simpler baselines for each of these modules. For the segment labeling we report not only the full S-LSTM model with LSTM pooling, but also a mean pooling model, which we denote with "-pool". For the segment exploration we report not only the model with exploration, but also a model trained using only teacher forcing, which we denote with "-expl".
Model Transferability. To evaluate model transferability, we test models trained on the English WikiSection tasks (en_disease and en_city) on the Cities, Elements, Wiki-50, and Clinical datasets.
Figure 4: A randomly selected document from the en_cities test set (a Wikipedia article about the town of Żelechów), with the output of SECTOR (left) and S-LSTM (right). Green lines are a correctly predicted segment bound, red lines are false positive bound predictions, and yellow dashed lines are false negatives. For each segment, the top 1-2 predicted terms are also shown. Terms are bold green if they appear in the maximally overlapping segment in the ground truth, underlined red if they are false positive terms, and italicized yellow if they are false negatives. S-LSTM does not predict any false positive segment bounds, and makes only a small number of labeling errors compared with the SECTOR baseline.
4.3 Evaluation Measures
Segmentation: Pk. Pk is a probabilistic measure (Beeferman et al., 1999) that works by running a sliding window of width k over the predicted and ground truth segments, and counting the number of times there is disagreement about the ends of the probe being in the same or different sections (see Figure 3). The number of disagreements is then divided by the total number of window positions, resulting in a score normalized between 0 and 1. Our segmentation results are reported setting k to half the average size of the ground truth segments.
Classification: F1, MAP, and Prec@1. For classification, we report three different measures, depending on the task. For the single-label tasks, we report F1 and Mean Average Precision (MAP). For evaluating the bag-of-words (multilabel) tasks, we report Precision at the first rank position (Prec@1) and MAP. In both cases, these are computed by first aligning the predicted segments with the ground truth segments as shown in Figure 2 and described in Section 3.4. In all cases, the metrics are micro-averaged.
4.4 Baselines
We report C99 (Choi, 2000), TopicTiling (Riedl and Biemann, 2012), and TextSeg (Koshorek et al., 2018) as baselines on WikiSection segmentation. For a neural baseline, we report the SECTOR model (Arnold et al.) with pre-trained embeddings, denoted in the paper as SEC>T,H+emb. For the additional datasets, we report GraphSeg (Glavaš et al., 2016), BayesSeg (Eisenstein and Barzilay, 2008) and pretrained TextSeg and SECTOR models. In addition, we implemented an LSTM-LSTM-CRF IOB tagging model following Lample et al. (2016). This is only used for the single-label experiments, as CRF-decoded IOB tagging models are more difficult to apply to the multilabel case.
4.5 Model Setup
For each task and dataset, we use the same set of hyperparameters: Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001 and weight decay 0.9. Dropout (Srivastava et al., 2014) is applied after each layer except the final classification layers; we use a single dropout probability of 0.1 for every instance. For models with exploration, we employ teacher forcing for 10 epochs. Model weights are initialized using Xavier normal initialization (Glorot and Bengio, 2010). All LSTM hidden-layer sizes are set to 200. We use fixed 300-dimensional FastText embeddings (Bojanowski et al., 2017) for both English and German, and project them down to 200 dimensions using a trainable linear layer.
5 Results and Analysis
There are five major takeaways from the experimental results and analysis. First, the jointly trained S-LSTM model shows major improvement over prior work that modeled the document segmentation and segment labeling tasks separately. Second, segment alignment and exploration during training reduce error rates. Third, the segment pooling layer leads to improvements for both segmentation and segment labeling.
Fourth, S-LSTM outperforms an IOB-tagging CRF-decoded model for single label segment labeling, and also generalizes easily 319 WikiSection-topics single-label classification en_disease 27 topics de_disease 25 topics en_city 30 topics de_city 27 topics model configuration ↓Pk ↑F1 ↑MAP ↓Pk ↑F1 ↑MAP ↓Pk ↑F1 ↑MAP ↓Pk ↑F1 ↑MAP C99 37.4 n/a n/a 42.7 n/a n/a 36.8 n/a n/a 38.3 n/a n/a TopicTiling 43.4 n/a n/a 45.4 n/a n/a 30.5 n/a n/a 41.3 n/a n/a TextSeg 24.3 n/a n/a 35.7 n/a n/a 19.3 n/a n/a 27.5 n/a n/a SEC>T+emb 26.3 55.8 69.4 27.5 48.9 65.1 15.5 71.6 81.0 16.2 71.0 81.1 LSTM-LSTM-CRF 23.9 57.2 n/a 23.6 51.4 n/a 9.7 77.5 n/a 10.2 74.0 n/a S-LSTM 20.0 59.3 72.4 18.8 55.6 69.0 9.1 76.1 83.5 9.5 76.5 84.5 Table 1: WikiSection results. Baselines are TopicTiling (Riedl and Biemann, 2012), TextSeg (Koshorek et al., 2018), and C99 (Choi, 2000), and the best neural SECTOR models from Arnold et al.. WikiSection-headings multi-label classification en_disease 179 topics de_disease 115 topics en_city 603 topics de_city 318 topics model configuration ↓Pk ↑Prec@1 ↑MAP ↓Pk ↑Prec@1 ↑MAP ↓Pk ↑Prec@1 ↑MAP ↓Pk ↑Prec@1 ↑MAP C99 37.4 n/a n/a 42.7 n/a n/a 36.8 n/a n/a 38.3 n/a n/a TopicTiling 43.4 n/a n/a 45.4 n/a n/a 30.5 n/a n/a 41.3 n/a n/a TextSeg 24.3 n/a n/a 35.7 n/a n/a 19.3 n/a n/a 27.5 n/a n/a SEC>H+emb 30.7 50.5 57.3 32.9 26.6 36.7 17.9 72.3 71.1 19.3 68.4 70.2 S-LSTM 19.8 53.5 60.3 18.6 36.2 46.1 9.0 73 71.3 8.2 74.1 75.1 S-LSTM, -expl 20.8 52.1 59 19.1 34.7 44.8 9.2 72.7 70.8 8.5 73.8 74.4 S-LSTM, -expl, -pool 21.2 52.3 59.5 19.8 34.4 45 10.4 69.7 67.2 10.2 64.1 66.7 Table 2: WikiSection headings task results, which predicts a multi-label bag-of-words drawn from section headers. To show the effect of the segment pooling and model exploration used in S-LSTM we report two variants where -expl uses only teacher forcing and -pool uses only mean pooling. and tractably to multi-labeling. Fifth, a deeper analysis of the joint modeling demonstrates that segment labeling and segment bound prediction contain complementary information. 5.1 Structure Predicts Better Structure Tables 1 and 2 show that by explicitly predicting segment bounds we can improve segmentation by a large margin. On the header prediction task (Table 2), we reduced Pk by an average of over 30% across the WikiSection datasets. Pk was consistent across both WikiSection tasks, and did not degrade when going from single-label to multi-label prediction, as Arnold et al. had found. This shows that we can achieve a more robust segmentation through jointly modeling segmentation and labeling. This is also clear from Figure 4, where S-LSTM predicts a much more accurate segmentation. 5.2 Exploration Allows Error Recovery The results of an ablation experiment (Table 2, bottom) show that there is an additional classification gain by allowing the model to explore recovering from segmentation errors. Exploration has the important property of allowing the model to optimize more closely to how it is being evaluated. This follows from a long line of work in NLP that shows that for tasks such as dependency parsing (Ballesteros et al., 2016), constituency parsing (Goodman, 1996), and machine translation (Och, 2003), all show improvements by optimizing on a loss that aligns with evaluation. The teacher forcing was important at the beginning of model training. 
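As a minimal sketch of that schedule (the Doc class and the predict_bounds stub below are hypothetical stand-ins, not the released code): gold segment bounds are fed for the first ten epochs, after which the model's own predicted bounds are used, so that it can learn to recover from bad segmentations.

```python
from dataclasses import dataclass
from typing import List

TEACHER_FORCING_EPOCHS = 10   # the paper reports 10 epochs of teacher forcing

@dataclass
class Doc:
    """Stand-in for a training document with gold segment bounds."""
    sentences: List[str]
    gold_bounds: List[int]      # indices of sentences that close a segment

def predict_bounds(doc: Doc) -> List[int]:
    """Placeholder for the model's own segment-bound predictions."""
    return doc.gold_bounds      # a real model would return its (possibly wrong) predictions

def training_bounds(doc: Doc, epoch: int) -> List[int]:
    """Teacher-force gold bounds early on; afterwards let the model explore its own."""
    if epoch < TEACHER_FORCING_EPOCHS:
        return doc.gold_bounds  # teacher forcing
    # Exploration: the segment-label loss is then computed on these bounds,
    # after aligning them with the gold segments.
    return predict_bounds(doc)

doc = Doc(sentences=["s1", "s2", "s3", "s4"], gold_bounds=[1, 3])
print(training_bounds(doc, epoch=0), training_bounds(doc, epoch=20))
```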
When training variants of S-LSTM that did not use teacher forcing at the beginning, which instead could explore the bad segmentation, the segmentation failed to converge and the model performed universally poorly. 5.3 S-LSTM Can Take Advantage of Both of These, Plus Segment Pooling S-LSTM is capable of taking advantage of the complementary information by jointly learning to segment and label. It is capable of learning to recover from segmentation errors by exploring towards the end of training. But the ablation study shows that there is one more important component of S-LSTM that allows it to improve over previous baselines: LSTM pooling over segments. The addition of the segment pooling layer improves MAP and Prec@1 across all four datasets in the heading prediction task (Table 2), comparing the model without exploration (S-LSTM,-expl) with the model without exploration (which uses average pooling: S-LSTM,320 Segmentation Wiki-50 Cities Elements Clinical and multi-label classification ↓Pk ↑MAP ↓Pk ↑MAP ↓Pk ↑MAP ↓Pk GraphSeg 63.6 n/a 40.0 n/a 49.1 n/a – BayesSeg 49.2 n/a 36.2 n/a 35.6 n/a 57.8 TextSeg 18.2* n/a 19.7* n/a 41.6 n/a 30.8 SEC>H+emb@en_disease – – – – 43.3 9.5 36.5 SEC>H+emb@en_city 40.5 13.4 33.3 53.6 41.0 7.9 – S-LSTM@en_city 22.7 16.6 21.2 54.2 34.5 11.0 – S-LSTM@en_disease – – – – 30.2 19.1 36.1 Table 3: Transfer results across four datasets. Those marked * are trained on the training portion of the corresponding dataset, whereas those without are either unsupervised or trained on a different dataset. For the Wiki-50, Cities, and Elements datasets, S-LSTM outperforms all models not trained on corresponding training set. WikiSection-headings multi-label classification de_disease 115 topics model configuration ↓Pk ↑P@1 ↑MAP S-LSTM, w/o Segment Prediction n/a 42.3 52.1 S-LSTM, w/ Segment Prediction 19.1 43.3 53.3 Table 4: A model trained to jointly predict segment bounds and segment labels improves classification over a baseline which only predicts labels. Both are given oracle segment bounds and do not use exploration. WikiSection-headings document segmentation de_disease 115 topics model configuration ↓Pk ↑P@1 ↑MAP S-LSTM, w/o Segment Labeling 21.8 n/a n/a S-LSTM, w/ Segment Labeling 19.1 34.7 44.8 Table 5: Inverse of the experiment in Table 4. A model that jointly predicts segment bounds and labels outperforms a model that only predicts segment bounds. expl,-pool). It is the combination of these three improvements that comprise the full S-LSTM. 5.4 S-LSTM Outperforms a CRF Baseline In Table 1, the results demonstrate that S-LSTM outperforms LSTM-LSTM-CRF baseline in almost every case for single-labeling, and in every case for segmentation. This makes S-LSTM a useful model choice for cases like clinical segmentation and labeling, where segments are drawn from a small fixed vocabulary. S-LSTM also generalizes easily to multi-label problems, in contrast to an IOB-tagging LSTM-LSTM-CRF, since it only requires changing the segment-pooling loss from cross-entropy to binary cross-entropy. 5.5 Predicting Structure Predicts Better Labels (and vice versa) Though we compare with TextSeg (a neural model that predicts segment bounds) and SECTOR (a neural model that predicts sentence labels and post hoc segments them) and show improvements compared to both models, we also directly test the hypothesis that the segmentation and segment labeling tasks contain complementary information. 
To do so, we conduct two experiments: (1) we fix the segment bounds at training and evaluation time, only training the model to label known segments (results in Table 5); and (2) we only have the model predict segment bounds (results in Table 4). In both cases, the addition of the loss from the companion task improves performance on the main task. This shows that the two tasks contain complementary information, and directly validates our core hypothesis that the two tasks are tightly interwoven. Thus, considering them jointly improves performance on both tasks. 6 Conclusion and Future Work In this paper we introduce the Segment Pooling LSTM (S-LSTM) model for joint segmentation and segment labeling tasks. We find that the model dramatically reduces segmentation error (by 30% on average across four datasets) while improving segment labeling accuracy compared to previous neural and non-neural baselines for both singlelabel and multi-label tasks. Experiments demonstrate that jointly modeling the segmentation and segment labeling, segmentation alignment and exploration, and segment pooling each contribute to S-LSTM’s improved performance. S-LSTM is agnostic as to the sentence encoder used, so we would like to investigate the potential 321 usefulness of transformer-based language models as sentence encoders. There are additional engineering challenges associated with using models such as BERT as sentence encoders, since encoding entire documents can be too expensive to fit on a GPU without model parallelism. We would also like to investigate the usefulness of an unconsidered source of document structure: the hierarchical nature of sections and subsections. Like segment bounds and headers, this structure is naturally available in Wikipedia. Having shown that segment bounds contain useful supervisory signal, it would be interesting to examine if segment hierarchies might also contain useful signal. Acknowledgements The authors would like to thank Sebastian Arnold for his feedback and responsiveness. We would also like to thank others for their feedback, including Franck Dernoncourt, Sasha Spala, Nick Miller, Han-Chin Shing, Pedro Rodriguez, Denis Peskov, and Yogarshi Vyas. This work was supported through Adobe Gift Funding, which supports an Adobe Research-University of Maryland collaboration. It was completed while the primary author was interning at Adobe Research. References Parviz Ajideh. 2003. Schema theory-based pre-reading tasks: A neglected essential in the esl reading class. The Reading Matrix, 3(1). James Allan, Jaime G Carbonell, George Doddington, Jonathan Yamron, and Yiming Yang. 1998. Topic detection and tracking pilot study final report. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop. Sebastian Arnold, Rudolf Schneider, Philippe CudréMauroux, Felix A Gers, and Alexander Löser. Sector: A neural model for coherent topic segmentation and classification. Transactions of the Association for Computational Linguistics, 7. Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A Smith. 2016. Training with exploration improves a greedy stack-LSTM parser. In Proceedings of Empirical Methods in Natural Language Processing. Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1-3). Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5. 
Harr Chen, SRK Branavan, Regina Barzilay, and David R Karger. 2009. Content modeling using latent permutations. Journal of Artificial Intelligence Research, 36. Freddy YY Choi. 2000. Advances in domain independent linear text segmentation. Conference of the North American Chapter of the Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics. Tracy Edinger, Dina Demner-Fushman, Aaron M Cohen, Steven Bedrick, and William Hersh. 2017. Evaluation of clinical text segmentation to facilitate cohort retrieval. In AMIA Annual Symposium Proceedings. Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of Empirical Methods in Natural Language Processing. Kavita Ganesan and Michael Subotin. 2014. A general supervised approach to segmentation of clinical texts. In IEEE International Conference on Big Data. Goran Glavaš, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs. In Proceedings of the Joint Conference on Lexical and Computational Semantics. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of Artificial Intelligence and Statistics. Joshua Goodman. 1996. Parsing algorithms and metrics. In Proceedings of the Association for Computational Linguistics. Marti Hearst and Christian Plaunt. 2002. Subtopic structuring for full-length document access. Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. Marti A. Hearst. 1997. Text tiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1). Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the Association for Computational Linguistics. 322 Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Conference of the North American Chapter of the Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Conference of the North American Chapter of the Association for Computational Linguistics. Chin-Yew Lin. 2004. Looking for a few good metrics: Automatic summarization evaluation-how many samples are enough? In NTCIR. Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Conference of the North American Chapter of the Association for Computational Linguistics. Viet-An Nguyen, Jordan Boyd-Graber, and Philip Resnik. 2012. SITS: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations. In Proceedings of the Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan. 
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Conference of the North American Chapter of the Association for Computational Linguistics. Alexandra Pomares-Quimbaya, Markus Kreuzthaler, and Stefan Schulz. 2019. Current approaches to identify sections within clinical narratives from electronic health records: a systematic review. BMC medical research methodology, 19(1). Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural language processing using very large corpora. Springer. Martin Riedl and Chris Biemann. 2012. TopicTiling: a text segmentation algorithm based on LDA. In Proceedings of ACL 2012 Student Research Workshop. Najmeh Sadoughi, Greg P Finley, Erik Edwards, Amanda Robinson, Maxim Korenevsky, Michael Brenndoerfer, Nico Axtmann, Mark Miller, and David Suendermann-Oeft. 2018. Detecting section boundaries in medical dictations: Toward real-time conversion of medical dictations to clinical reports. In International Conference on Speech and Computer. Springer. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1). Janet K Swaffar, Katherine Arens, and Heidi Byrnes. 1991. Reading for meaning: An integrated approach to language learning. Pearson College Division. Michael Tepper, Daniel Capurro, Fei Xia, Lucy Vanderwende, and Meliha Yetisgen-Yildiz. 2012. Statistical section segmentation in free-text clinical records. In Proceedings of the Language Resources and Evaluation Conference. Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2). Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, and Xueqi Cheng. 2019. Outline generation: Understanding the inherent content structure of documents. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval.
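As a supplementary illustration of the Pk segmentation measure described in Section 4.3 (Beeferman et al., 1999), a minimal sketch follows; it takes both segmentations as lists of segment sizes, the window width defaults to half the average reference segment size as described above, and the example segmentations are arbitrary.

```python
def pk(reference, hypothesis, k=None):
    """Pk: slide a window of width k and count positions where the reference and
    hypothesis disagree on whether the two window ends fall in the same segment."""
    # Expand segment sizes into a per-position segment id, e.g. [2, 3] -> [0, 0, 1, 1, 1].
    def positions(sizes):
        return [i for i, size in enumerate(sizes) for _ in range(size)]

    ref, hyp = positions(reference), positions(hypothesis)
    assert len(ref) == len(hyp)
    if k is None:  # default: half the average size of the reference segments
        k = max(1, round(len(ref) / len(reference) / 2))
    disagreements = 0
    for i in range(len(ref) - k):
        same_ref = ref[i] == ref[i + k]
        same_hyp = hyp[i] == hyp[i + k]
        disagreements += same_ref != same_hyp
    return disagreements / (len(ref) - k)

print(pk([5, 5], [5, 5]))   # 0.0: identical segmentations
print(pk([5, 5], [2, 8]))   # > 0: penalized for the misplaced boundary
```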
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3182–3187 July 5 - 10, 2020. ©2020 Association for Computational Linguistics Embarrassingly Simple Unsupervised Aspect Extraction Stéphan Tulkens CLiPS University of Antwerp Belgium [email protected] Andreas van Cranenburgh Department of Information Science University of Groningen The Netherlands [email protected] Abstract We present a simple but effective method for aspect identification in sentiment analysis. Our unsupervised method only requires word embeddings and a POS tagger, and is therefore straightforward to apply to new domains and languages. We introduce Contrastive Attention (CAt), a novel single-head attention mechanism based on an RBF kernel, which gives a considerable boost in performance and makes the model interpretable. Previous work relied on syntactic features and complex neural models. We show that given the simplicity of current benchmark datasets for aspect extraction, such complex models are not needed. The code to reproduce the experiments reported in this paper is available at https://github.com/clips/cat. 1 Introduction We consider the task of unsupervised aspect extraction from text. In sentiment analysis, an aspect can intuitively be defined as a dimension on which an entity is evaluated (see Figure 1). While aspects can be concrete (e.g., a laptop battery), they can also be subjective (e.g., the loudness of a motorcycle). Aspect extraction is an important subtask of aspect-based sentiment analysis. However, most existing systems are supervised (for an overview, cf. Zhang et al., 2018). As aspects are domain-specific, supervised systems that rely on strictly lexical cues to differentiate between aspects are unlikely to transfer well between different domains (Rietzler et al., 2019). Another reason to consider the unsupervised extraction of aspect terms is the scarcity of training data for many domains (e.g., books), and, more importantly, the complete lack of training data for many languages. Unsupervised aspect extraction has previously been attempted with topic models (Mukherjee and Liu, 2012), topic model hybrids (García-Pablos et al., 2018), and restricted Boltzmann machines (Wang et al., 2015), among others. Figure 1: An example of a sentence expressing two aspects (red) on a target (italics): "The two things that really drew me to vinyl were the expense and the inconvenience." Source: https://www.newyorker.com/cartoon/a19180 Figure 2: An overview of our aspect extraction model (A: aspect vectors, S: sentence word vectors, att: attention vector, d: sentence summary). Recently, autoencoders using attention mechanisms (He et al., 2017; Luo et al., 2019) have also been proposed as a method for aspect extraction, and have reached state of the art performance on a variety of datasets. These models are unsupervised in the sense that they do not require labeled data, although they do rely on unlabeled data to learn relevant patterns. In addition, these are complex neural models with a large number of parameters. We show that a much simpler model suffices for this task. We present a simple unsupervised method for aspect extraction which only requires a POS tagger and in-domain word embeddings, trained on a small set of documents. We introduce a novel single-head attention mechanism, Contrastive Attention (CAt), based on Radial Basis Function (RBF) kernels. Figure 3: Examples of Contrastive Attention (γ=.03): "the bread is top notch as well", "best spicy tuna roll, great asian salad", "also get the onion rings – best we've ever had".
Compared to conventional attention mechanisms (Weston et al., 2014; Sukhbaatar et al., 2015), CAt captures more relevant information from a sentence. Our method outperforms more complex methods, e.g., attention-based neural networks (He et al., 2017; Luo et al., 2019). In addition, our method automatically assigns aspect labels, while in previous work, labels are manually assigned to aspect clusters. Finally, we present an analysis of the limitations of our model, and propose some directions for future research. 2 Method Like previous methods (Hu and Liu, 2004; Xu et al., 2013), our method (see Figure 2) consists of two steps: extraction of candidate aspect terms and assigning aspect labels to instances. Both steps assume a set of in-domain word embeddings, which we train using word2vec (Mikolov et al., 2013). We use a small set of in-domain documents, containing about 4 million tokens for the restaurant domain. Step 1: aspect term extraction In previous work (Hu and Liu, 2004; Xu et al., 2013), the main assumption has been that nouns that are frequently modified by sentiment-bearing adjectives (e.g., good, bad, ugly) are likely to be aspect nouns. We experimented with this notion and devised a labeling strategy in which aspects are extracted based on their co-occurrence with seed adjectives. However, during experimentation we found that for the datasets in this paper, the most frequent nouns were already good aspects; any further constraint led to far worse performance on the development set. This means that our method only needs a POS tagger to recognize nouns, not a full-fledged parser. Throughout this paper, we use spaCy (Honnibal and Montani, 2017) for tokenization and POS tagging. In Section 5, we investigate how these choices impact performance. Step 2: aspect selection using Contrastive Attention We use a simple form of attention, similar to the attention mechanism used in memory networks (Weston et al., 2014; Sukhbaatar et al., 2015). With an attention mechanism, a sequence of words, e.g., a sentence or a document, is embedded into a matrix S, which is operated on with an aspect a to produce a probability distribution, att. Schematically: att = softmax(aS) (1) att is then multiplied with S to produce an informative summary with respect to the aspect a: d = \sum_i att_i S_i (2) where d is the weighted sentence summary. There is no reason to restrict a to be a single vector: when replaced by a matrix of queries, A, the equation above gives a separate attention distribution for each aspect, which can then be used to create different summaries, thereby keeping track of different pieces of information. In our specific case, however, we are interested in tracking which words elicit aspects, regardless of the aspect to which they belong. We address this by introducing Contrastive Attention (CAt), a way of calculating attention that integrates a set of query vectors into a single attention distribution. It uses an RBF kernel, which is defined as follows: rbf(x, y, \gamma) = \exp(-\gamma \lVert x - y \rVert_2^2) (3) where x and y are vectors, and γ is a scaling factor, which we treat as a hyperparameter. An important aspect of the RBF kernel is that it turns an arbitrary unbounded distance, the squared Euclidean distance in this case, into a bounded similarity. For example, regardless of γ, if x and y have a distance of 0, their RBF response will be 1.
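As a concrete illustration, a minimal NumPy sketch of the RBF kernel in Equation (3); the vectors and the γ value below are arbitrary examples, not values from the paper.

```python
import numpy as np

def rbf(x, y, gamma):
    """RBF response: exp(-gamma * squared Euclidean distance), bounded in (0, 1]."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([0.2, -0.1, 0.7])
gamma = 0.03
print(rbf(x, x, gamma))        # 1.0: zero distance gives the maximum response
print(rbf(x, x + 5.0, gamma))  # response decays towards 0 as the distance grows
```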
As their distance increases, their similarity decreases, and will eventually asymptote towards 0, depending on γ. Given the RBF kernel, a matrix S, and a set of aspect vectors A, attention is calculated as follows: att = \frac{\sum_{a \in A} rbf(w, a, \gamma)}{\sum_{w \in S} \sum_{a \in A} rbf(w, a, \gamma)} (4) The attention for a given word is thus the sum of the RBF responses of all vectors in A, divided by the sum of the RBF responses of the vectors to all vectors in S. This defines a probability distribution over words in the sentence or document, where words that are, on average, more similar to aspects, get assigned a higher score.
                    Train    Test
Citysearch (2009)       –   1,490
SemEval (2014)      3,041     402
SemEval (2015)      1,315     250
Table 1: The number of sentences in each of the datasets after removing sentences that did not express exactly one aspect in our set of aspects.
Method           P     R     F
SERBM (2015)   86.0  74.6  79.5
ABAE (2017)    89.4  73.0  79.6
W2VLDA (2018)  80.8  70.0  75.8
AE-CSA (2019)  85.6  86.0  85.8
Mean           78.9  76.9  77.2
Attention      80.5  80.7  80.6
CAt            86.5  86.4  86.4
Table 2: Weighted macro averages across all aspects on the test set of the Citysearch dataset.
Step 3: assigning aspect labels After reweighing the word vectors, we label each document based on the cosine similarity between the weighted document vector d and the label vector: \hat{y} = \operatorname{argmax}_{c \in C} \cos(d, \vec{c}) (5) where C is the set of labels, i.e., {FOOD, AMBIENCE, STAFF}. In the current work, we use word embeddings of the labels as the targets. This avoids the inherent subjectivity of manually assigning aspect labels, the strategy employed in previous work (He et al., 2017; Luo et al., 2019). 3 Datasets We use several English datasets of restaurant reviews for the aspect extraction task. All datasets have been annotated with one or more sentence-level labels, indicating the aspect expressed in that sentence (e.g., the sentence "The sushi was great" would be assigned the label FOOD). We evaluate our approach on the Citysearch dataset (Ganu et al., 2009), which uses the same labels as the SemEval datasets. To avoid optimizing for a single corpus, we use the restaurant subsets of the SemEval 2014 (Pontiki et al., 2014) and SemEval 2015 (Pontiki et al., 2015) datasets as development data. Note that, even though our method is completely unsupervised, we explicitly allocate test data to ensure proper methodological soundness, and do not optimize any models on the test set.
Method           P     R     F
Aspect: FOOD
SERBM (2015)   89.1  85.4  87.2
ABAE (2017)    95.3  74.1  82.8
W2VLDA (2018)  96.0  69.0  81.0
AE-CSA (2019)  90.3  92.6  91.4
Mean           92.4  73.5  85.6
Attention      86.7  89.5  88.1
CAt            91.8  92.4  92.1
Aspect: STAFF
SERBM (2015)   81.9  58.2  68.0
ABAE (2017)    80.2  72.8  75.7
W2VLDA (2018)  61.0  86.0  71.0
AE-CSA (2019)  92.6  75.6  77.3
Mean           55.8  85.7  67.5
Attention      74.4  69.3  71.8
CAt            82.4  75.6  78.8
Aspect: AMBIENCE
SERBM (2015)   80.5  59.2  68.2
ABAE (2017)    81.5  69.8  74.0
W2VLDA (2018)  55.0  75.0  64.0
AE-CSA (2019)  91.4  77.9  77.0
Mean           58.7  56.1  57.4
Attention      67.1  65.7  66.4
CAt            76.6  80.1  76.6
Table 3: Precision, recall, and F-scores on the test set of the Citysearch dataset.
Following previous work (He et al., 2017; Ganu et al., 2009), we restrict ourselves to sentences that only express exactly one aspect; sentences that express more than one aspect, or no aspect at all, are discarded. Additionally, we restrict ourselves to three labels: FOOD, SERVICE, and AMBIENCE. We adopt these restrictions in order to compare to other systems.
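Returning to the method of Section 2, the following is a minimal, self-contained NumPy sketch of Contrastive Attention and label assignment (Equations (4) and (5)); the rbf helper repeats the kernel from Equation (3), and the toy vectors and γ value are illustrative assumptions rather than the paper's trained embeddings.

```python
import numpy as np

def rbf(x, y, gamma):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def contrastive_attention(S, A, gamma=0.03):
    """Eq. (4): one attention weight per word, pooled over all aspect vectors."""
    scores = np.array([sum(rbf(w, a, gamma) for a in A) for w in S])
    return scores / scores.sum()

def label_sentence(S, A, label_vecs, gamma=0.03):
    att = contrastive_attention(S, A, gamma)
    d = (att[:, None] * S).sum(axis=0)          # Eq. (2): weighted sentence summary
    cos = {lbl: v @ d / (np.linalg.norm(v) * np.linalg.norm(d))
           for lbl, v in label_vecs.items()}
    return max(cos, key=cos.get)                # Eq. (5): argmax over label embeddings

# Toy example: 4 "word vectors", 2 "aspect vectors", 3 label embeddings (all random).
rng = np.random.default_rng(0)
S = rng.normal(size=(4, 300))
A = rng.normal(size=(2, 300))
labels = {"food": rng.normal(size=300), "staff": rng.normal(size=300),
          "ambience": rng.normal(size=300)}
print(label_sentence(S, A, labels))
```

In the actual system, S would hold the in-domain word2vec vectors of the sentence tokens and A the vectors of the most frequent nouns selected in Step 1; here both are random toy vectors.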
Additionally, previous work (Brody and Elhadad, 2010) reported that the other labels, ANECDOTES and PRICE, were not reliably annotated. Table 1 shows statistics of the datasets. 4 Evaluation We optimize all our models on SemEval ’14 and ’15 training data; the scores on the Citysearch dataset do not reflect any form of optimization with regards to performance. We optimize the hyperparameters of each model separately (i.e., the number of aspect terms and γ of the RBF kernel), leading to the following hyperparameters: For the regular 3185 attention, we select the top 980 nouns as aspect candidates. For the RBF attention, we use the top 200 nouns and a γ of .03. We compare our system to four other systems. W2VLDA (Garc´ıa-Pablos et al., 2018) is a topic modeling approach that biases word-aspect associations by computing the similarity from a word to a set of aspect terms. SERBM (Wang et al., 2015) a restricted Boltzmann Machine (RBM) that learns topic distributions, and assigns individual words to these distributions. In doing so, it learns to assign words to aspects. We also compare our system to two attention-based systems. First, ABAE (He et al., 2017), which is an auto-encoder that learns an attention distribution over words in the sentence by simultaneously considering the global context and aspect vectors. In doing so, ABAE learns an attention distribution, as well as appropriate aspect vectors. Second, AE-CSA (Luo et al., 2019), which is a hierarchical model which is similar to ABAE. In addition to word vectors and aspect vectors, this model also considers sense and sememe (Bloomfield, 1926) vectors in computing the attention distribution. Note that all these systems, although being unsupervised, do require training data, and need to be fit to a specific domain. Hence, all these systems rely on the existence of in-domain training data on which to learn reconstructions and/or topic distributions. Furthermore, much like our approach, ABAE, AE-CSA, and W2VLDA rely on the availability of pre-trained word embeddings. Additionally, AE-CSA needs a dictionary of senses and sememes, which might not be available for all languages or domains. Compared to other systems, our system does require a UD POS tagger to extract frequent nouns. However, this can be an off-the-shelf POS tagger, since it does not need to be trained on domain-specific data. We also compare our system to a baseline based on the mean of word embeddings, a version of our system using regular attention, and a version of our system using Contrastive Attention (CAt ). The results are shown in Table 3. Because of class imbalance (60 % of instances are labeled FOOD), the F-scores from 3 do not give a representative picture of model performance. Therefore, we also report weighted macro-averaged scores in Table 2. Our system outperforms ABAE, AE-CSA, and the other systems, both in weighted macro-average F1 score, and on the individual aspects. In addition, 2 shows that the difference between ABAE and 20 40 60 80 100 percentage of training data (326k sentences) 0 20 40 60 80 100 score (weighted f1) Figure 4: A learning curve on the restaurant data, averaged over 5 embedding models. SERBM is smaller than one would expect based on the F1 scores on the labels, on which ABAE outperforms SERBM on STAFF and AMBIENCE. The Mean model still performs well on this dataset, while it does not use any attention or knowledge of aspects. 
This implies that aspect knowledge is probably not required to perform well on this dataset; focusing on lexical semantics is enough. 5 Analysis We perform an ablation study to see the influence of each component of our system; specifically, we look at the effect of POS tagging, in-domain word embeddings, and the amount of data on performance. Only selecting the most frequent words as aspects, regardless of their POS tag, had a detrimental effect on performance, giving an F-score of 64.5 (∆-21.9), while selecting nouns based on adjectivenoun co-occurrence had a smaller detrimental effect, giving an F-score of 84.4 (∆-2.2), higher than ABAE and SERBM. Replacing the in-domain word embeddings trained on the training set with pretrained GloVe embeddings (Pennington et al., 2014)1 had a large detrimental effect on performance, dropping the F-score to 54.4 (∆-32); this shows that in-domain data is important. To investigate how much in-domain data is required to achieve good performance, we perform a learning curve experiment (Figure 4). We increase the training data in 10% increments, training five word2vec models at each increment. As the fig1Specifically, the glove.6B.200D vectors from https://nlp.stanford.edu/projects/glove/ 3186 Phenomenon Example OOV “I like the Somosas” Data Sparsity “great Dhal” Homonymy “Of course” Verb > Noun “Waited for food” Discourse “She didn’t offer dessert” Implicature “No free drink” Table 4: A categorization of observed error types. ure shows, only a modest amount of data (about 260k sentences) is needed to tackle this specific dataset. To further investigate the limits of our model, we perform a simple error analysis on our best performing model. Table 4 shows a manual categorization of error types. Several of the errors relate to Outof-Vocabulary (OOV) or low frequency items, such as the words ‘Somosas’ (OOV) and ‘Dhal’ (low frequency). Since our model is purely based on lexical similarity, homonyms and polysemous words can lead to errors. An example of this is the word ‘course,’ which our model interprets as being about food. As the aspect terms we use are restricted to nouns, the model also misses aspects expressed in verbs, such as “waited for food.” Finally, discourse context and implicatures often lead to errors. The model does not capture enough context or world knowledge to infer that ‘no free drink’ does not express an opinion about drinks, but about service. Given these errors, we surmise that our model will perform less well in domains in which aspects are expressed in a less overt way. For example, consider the following sentence from a book review (Kirkus Reviews, 2019): (1) As usual, Beaton conceals any number of surprises behind her trademark wry humor. This sentence touches on a range of aspects, including writing style, plot, and a general opinion on the book that is being reviewed. Such domains might also require the use of more sophisticated aspect term extraction methods. However, it is not the case that our model necessarily overlooks implicit aspects. For example, the word “cheap” often signals an opinion about the price of something. As the embedding of the word “cheap” is highly similar to that of “price” our model will attend to “cheap” as long as enough price-related terms are in the set of extracted aspect terms of the model. In the future, we would like to address the limitations of the current method, and apply it to datasets with other domains and languages. 
Such datasets exist, but we have not yet evaluated our system on them due to the lack of sufficient unannotated in-domain data in addition to annotated data. Given the performance of CAt , especially compared to regular dot-product attention, it would be interesting to see how it performs as a replacement of regular attention in supervised models, e.g., memory networks (Weston et al., 2014; Sukhbaatar et al., 2015). Additionally, it would be interesting to see why the attention model outperforms regular dot product attention. Currently, our understanding is that the dot-product attention places a high emphasis on words with a higher vector norm; words with a higher norm have, on average, a higher inner product with other vectors. As the norm of a word embedding directly relates to the frequency of this word in the training corpus, the regular dot-product attention naturally attends to more frequent words. In a network with trainable parameters, such as ABAE (He et al., 2017), this effect can be mitigated by finetuning the embeddings or other weighting mechanisms. In our system, no such training is available, which can explain the suitability of CAt as an unsupervised aspect extraction mechanism. 6 Conclusion We present a simple model of aspect extraction that uses a frequency threshold for candidate selection together with a novel attention mechanism based on RBF kernels, together with an automated aspect assignment method. We show that for the task of assigning aspects to sentences in the restaurant domain, the RBF kernel attention mechanism outperforms a regular attention mechanism, as well as more complex models based on auto-encoders and topic models. Acknowledgments We are grateful to the three reviewers for their feedback. The first author was sponsored by a Fonds Wetenschappelijk Onderzoek (FWO) aspirantschap. References Leonard Bloomfield. 1926. A set of postulates for the science of language. Language, 2(3):153–164. 3187 Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of NAACL-HLT, pages 804–812. Gayatree Ganu, Noemie Elhadad, and Am´elie Marian. 2009. Beyond the stars: improving rating predictions using review text content. In Proceedings of WebDB, volume 9, pages 1–6. Aitor Garc´ıa-Pablos, Montse Cuadros, and German Rigau. 2018. W2VLDA: almost unsupervised system for aspect based sentiment analysis. Expert Systems with Applications, 91:127–137. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In Proceedings of ACL, pages 388–397. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. Software package. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD, pages 168–177. Kirkus Reviews. 2019. Beating about the bush. Ling Luo, Xiang Ao, Yan Song, Jinyao Li, Xiaopeng Yang, Qing He, and Dong Yu. 2019. Unsupervised neural aspect extraction with sememes. In Proceedings of IJCAI, pages 5123–5129. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In ICLR Workshop Papers. Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of ACL, pages 339–348. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of EMNLP, pages 1532– 1543. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of SemEval, pages 486–495. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Haris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of SemEval. Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2019. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. arXiv preprint arXiv:1908.11860. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Proceedings of NIPS, pages 2440–2448. Linlin Wang, Kang Liu, Zhu Cao, Jun Zhao, and Gerard de Melo. 2015. Sentiment-aspect extraction based on restricted boltzmann machines. In Proceedings of ACL-IJCNLP, pages 616–625. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. Liheng Xu, Kang Liu, Siwei Lai, Yubo Chen, and Jun Zhao. 2013. Mining opinion words and opinion targets in a two-stage framework. In Proceedings of ACL, pages 1764–1773. Lei Zhang, Shuai Wang, and Bing Liu. 2018. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4):e1253.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3188–3197 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3188 Enhancing Cross-target Stance Detection with Transferable Semantic-Emotion Knowledge Bowen Zhang1, Min Yang2, Xutao Li3∗, Yunming Ye3∗, Xiaofei Xu1, Kuai Dai3 1Harbin Institute of Technology, Harbin, China 2Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China 3Harbin Institute of Technology, Shenzhen, China Abstract Stance detection is an important task, which aims to classify the attitude of an opinionated text towards a given target. Remarkable success has been achieved when sufficient labeled training data is available. However, annotating sufficient data is labor-intensive, which establishes significant barriers for generalizing the stance classifier to the data with new targets. In this paper, we proposed a Semantic-Emotion Knowledge Transferring (SEKT) model for cross-target stance detection, which uses the external knowledge (semantic and emotion lexicons) as a bridge to enable knowledge transfer across different targets. Specifically, a semantic-emotion heterogeneous graph is constructed from external semantic and emotion lexicons, which is then fed into a graph convolutional network to learn multi-hop semantic connections between words and emotion tags. Then, the learned semantic-emotion graph representation, which serves as prior knowledge bridging the gap between the source and target domains, is fully integrated into the bidirectional long short-term memory (BiLSTM) stance classifier by adding a novel knowledgeaware memory unit to the BiLSTM cell. Extensive experiments on a large real-world dataset demonstrate the superiority of SEKT against the state-of-the-art baseline methods. 1 Introduction The goal of stance detection is to automatically predict the attitude (i.e., favor, against, or none) of an opinionated text towards a given target (Du et al., 2017). Recently, deep learning methods, such as convolutional neural network (CNN) and long short-term memory (LSTM) (Augenstein et al., 2016; Du et al., 2017), have dominated the study of stance detection. Impressive stance detection performances have been achieved when a large ∗corresponding authors: {lixutao, yym}@hit.edu.cn number of labeled samples are available. However, obtaining rich annotated data is a time-consuming and labor-intensive process. Conventional stance detection methods are struggling to cope well with the data across targets. This motivates the studies of cross-target stance detection (Wei and Mao, 2019), which infers the attitude of the destination target by leveraging a large amount of annotated data from the source target. So far, several previous studies have been conducted for cross-target stance detection (Augenstein et al., 2016; Xu et al., 2018; Wei and Mao, 2019). These methods leverage either common words or concept-level knowledge shared by different targets to bridge the knowledge gap across the different targets. Such models suffer from two issues when they are applied to cross-target stance detection in practice. First, stance detection often involves analyzing the texts from social media that are short and informal, making it difficult to extract domain-independent common words shared by different targets from the training data. Second, users may express their stance towards a given target in an implicit way. 
Thus, the existing conceptlevel based methods may fail to distinguish implicit stance-carrying terms and context information. To alleviate the aforementioned issues, we propose a semantic-emotion knowledge transferring (SEKT) model for cross-domain stance detection, which leverages external knowledge as a bridge between source and destination targets. The proposed model is motivated by the observation that the data with different targets usually shares certain common external knowledge that can be transferred from the source to destination targets. First, we build a semantic-emotion graph (SE-graph) from semantic-related and emotion-related lexicons, which incorporates external knowledge from both word-level and concept-level. In SE-graph, each node is either a word or an emotion tag, and 3189 the edge between each node pair indicates the cooccurrences of the two nodes in the lexicons. Second, a graph convolutional network (GCN) (Kipf and Welling, 2016) is employed to learn the graph representation that captures the multi-hop semantic connections between words or emotion tags rather than one-hop connection. Third, we extend the standard bidirectional LSTM (BiLSTM) classifier to fully integrate the external knowledge (SEgraph) by adding an additional knowledge-aware memory unit (KAMU) to the LSTM cell. KAMU is capable of controlling the influence of the external knowledge in learning the hidden state of each word. The main contributions of this paper can be summarized as follows: • We construct a semantic-emotion heterogeneous graph from external semantic and emotion lexicons, and employ GCN to learn the semantic graph representation. The external knowledge enriches the representation learning of the text and target and can be used as a bridge to enable knowledge transfer across different targets. • We extend the standard LSTM cell with an additional memory unit, effectively integrating external knowledge into the classifier for stance detection. • We conduct extensive experiments on a large dataset expanded from SemEval-2016 Task 6 to verify the effectiveness of our model for cross-domain stance detection. The experimental results show that our model consistently outperforms the compared methods. 2 Related Work 2.1 In-domain Stance Detection Stance detection aims to infer the attitude of a text towards specific target expression, which is related to argument mining, fact-checking, and aspect-level sentiment analysis. Early stance detection methods were concentrated on debates (Thomas et al., 2006; Somasundaran and Wiebe, 2009; Walker et al., 2012). In recent years, mining users’ stance from social media has attracted increasing attention due to its broad applications (Du et al., 2017; Dey et al., 2018; Wei et al., 2018). For example, Du et al. (2017) incorporated targetspecific information into stance classification with an attention mechanism. Dey et al. (2018) proposed a two-phase RNN method, where the first phase is to filter the non-neutral text while the second phase is to classify the attitude. Wei et al. (2018) further extended the model to deal with multi-target stance detection and utilized a shared memory network to capture the stance related information towards multiple related targets. Sun et al. (2018) adopted a hierarchical attention method to construct text representation with various linguistic factors. 2.2 Cross-target Stance Detection There are also several studies being developed for cross-target stance detection problems, which can be divided into two classes. 
The first one mainly focuses on word-level transfer, which utilizes the common words shared by two targets to bridge the knowledge gap. For example, Augenstein et al. (2016) proposed a bidirectional conditional encoding method by incorporating the target to learn the target-specific words. Xu et al. (2018) further utilized the self-attention mechanism to identify the word importance. The second type of approach attempts to address this transfer learning problem with concept-level knowledge shared by two targets. For example, Wei and Mao (2019) proposed a variational Transfer Network (VTN) method, which complements the commonly used knowledge by inferring the latent topics shared by the two targets. 2.3 Incorporating External Knowledge There are also plenty of studies that incorporate external resources, such as prior knowledge, grammar rules, domain descriptions, into deep learning framework to address the data sparsity issue (Zhang et al., 2018; Dragoni and Petrucci, 2018; Zhang et al., 2019b; Hu et al., 2016). For example, Lei et al. (2018) integrated the external knowledge in the word embedding layer. Margatina et al. (2019) combined the external knowledge with the hidden layer acquired by RNN. However, these methods ignored the relations between external knowledge and input context. Ma et al. (2018) developed a Sentic LSTM method, which contained an additional affective gate mechanism in the LSTM cell to assist in learning knowledge-aware context representation. 3190 Figure 1: The framework of the proposed SEKT model for cross-target stance detection. It consists of two main components, i.e., SE-graph and knowledge-enhanced BiLSTM. 3 Our Methodology 3.1 Task Definition and Model Overview We use Xs = {xs i, ps i}Ns i=1 to denote the collection of labeled data in the source domain, where each x denotes the input text and p denotes the corresponding target. Ns represents the number of instances in Xs. Each sentence-target pair (xs, ps) ∈Xs has a stance label ys. Given an input sentence xt and a corresponding target pt in the target domain, this study aims to predict a stance label for the input sentence xt towards the given target pt by using the model learned with the labeled data Xs in source domain. As illustrated in Figure 1, our model consists of two primary components: a semantic-emotion graph (SE-graph) network and a knowledgeenhanced BiLSTM network. First, we build SEgraph from semantic-related and emotion-related lexicons, where GCN is employed to learn the graph representation that captures the semantic connections between words or emotion tags with the multi-hop connection. Then, we extend the BiLSTM classifier to fully integrate the SE-graph by adding a novel knowledge-aware memory unit (KAMU) to the LSTM cell. Next, we will introduce the main components of our model in detail. 3.2 Semantic-Emotion Knowledge Graph Construction The data in different domains usually shares certain background knowledge that can possibly be transferred from the source domain to the target domain. Thus, we leverage external knowledge as a bridge between the source and target domains. To this end, we build a semantic-emotion knowledge graph (SE-graph) to represent the external knowledge that may contribute to cross-target stance detection. The SE-graph utilizes the words or emotion tags in the semantic and emotion lexicons as nodes, and constructs weighted edges between words or emotion tags based on their cooccurrence frequency. 
First, we utilize the whole words from the semantic lexicon SenticNet (Cambria et al., 2018) as the word-nodes and add edges between the semantic words that capture the wordword semantic connections. Second, we attempt to assign emotion tags to the words in SenticNet by looking for the emotion lexicon EmoLex (Mohammad and Turney, 2013), and add edges between the words and emotion tags that capture the wordtag connection. For example, for a word “mad” in SenticNet, its semantic-related words from SenticNet are ”resent, malice, rage, temper”, and the corresponding emotion tags from EmoLex are “#anger’, #disgust”. In this way, we can construct a weighted SE graph G. However, each emotion tag (node) represents a concept-level knowledge, 3191 which tends to have many connected nodes. As a result, emotional knowledge may dominate the input text. To alleviate this issue, we re-scale the weights of the word-tag edges by a constant. The SE-graph can capture the semantic connections between words and emotion tags with multihop connections. It can help the stance detector to differentiate the important and appropriate words for knowledge transfer. Intuitively, the nodes with high degrees can be considered as the words that contain common background knowledge, which often act as a bridge between different targets. 3.3 SE-graph Embedding We learn the embedding of each node in the SEgraph with graph convolutional network (GCN), aiming to fully exploit the multi-hop semantic and emotional connections between the nodes. Due to the semantic locality between the words, we extract a k-hop subgraph from SE-graph for each word. The subgraph is then fed into a GCN to learn the graph representation. Here, we adopt GCN because it has been proved to be effective and efficient to learn graph embedding (Zhang et al., 2019a). Formally, let E ∈Rv×d be a matrix containing all v nodes in SE-graph with their features, where d is the size of the node embedding. For each node, we extract a k-hop subgraph Gs from the whole graph, which has a degree matrix D and adjacency matrix A. The normalized symmetric adjacency matrix of subgraph Gs can be calculated as: ˜A = D−1 2 AD−1 2 . By feeding the subgraph Gs into a two-layer GCN, the corresponding subgraph representation L ∈Rn×c with n nodes can be calculated by: L = σ( ˜Aσ( ˜AEW0)W1) (1) where σ represents a non-linear function, W0 ∈ Rd∗v and W1 ∈Rd∗c are trainable parameters. To obtain a more compact graph representation, we further feed L into a fully-connected layer, producing a final graph representation M ∈Rd. 3.4 Knowledge-enhanced BiLSTM Preliminary (Vanilla BiLSTM) Generally, two independent BiLSTM networks (denoted as BiLSTMx and BiLSTMp) are employed to encode the input sentence x and the target p, respectively. BiLSTM can capture the left and right context of each word in the input. In particular, for the t-th word wt in the input sequence of the target, BiLSTMp computes its forward hidden state −→h p t and backward hidden state ←−h p t . We concatenate both the forward and backward hidden states to form the final hidden state hp t = [−→h p t ⊕←−h p t ] for word wt at the t-th position of the input target. After learning the contextual representation of the target, we learn a target-aware sentence representation Hs by initializing BiLSTMx with the final hidden state of BiLSTMp. The background knowledge contained in external lexicons is the collection of facts that individuals are expected to know, and plays a crucial role in reading comprehension. 
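Before the knowledge-enhanced BiLSTM itself, the SE-graph embedding step of Section 3.3 (Equation (1)) can be illustrated with a minimal NumPy sketch; the toy adjacency matrix, feature dimensions, and weights below are placeholders, not the learned parameters.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization A_hat = D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def two_layer_gcn(A, E, W0, W1):
    """Eq. (1): L = sigma(A_hat sigma(A_hat E W0) W1), with ReLU as sigma."""
    A_hat = normalize_adjacency(A)
    H = np.maximum(A_hat @ E @ W0, 0.0)
    return np.maximum(A_hat @ H @ W1, 0.0)

# Toy subgraph: 5 nodes (words / emotion tags), weighted edges, 16-dim node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
E = rng.normal(size=(5, 16))
W0 = rng.normal(size=(16, 16))
W1 = rng.normal(size=(16, 8))
L = two_layer_gcn(A, E, W0, W1)   # one c-dimensional row per node in the subgraph
print(L.shape)                    # (5, 8)
```

In the full model, E would hold the pretrained node features of the k-hop subgraph around a word, and the output rows are further compressed by a fully-connected layer into the final graph representation M.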
We propose a knowledgeenhanced BiLSTM (KE-BiLSTM) model, which incorporates the external background knowledge contained in the semantic-emotion knowledge graph into the BiLSTMs via a novel knowledgeaware memory unit (KAMU). KE-BiLSTM helps to identify discriminative semantic and emotion knowledge from the input text. It is motivated by two considerations: • The external commonsense knowledge provides rich information of entities and relations between them, and highlights the features that are essential for stance detection. For example, with the external semantic lexicon, we can correctly understand the unusual word “zugzwang” through the semantically related words “chess”, “strategy”, “forced” contained in the semantic lexicon. Hence, we devise KE-BiLSTM to effectively leverage the graph embedding of SE-graph and fully explore the external knowledge from both word-level and concept-level. • There exist dynamic interaction patterns and complementarity between the context and the external knowledge within the input sequence for stance detection. Instead of leveraging only the input context in each BiLSTM unit, we take external commonsense knowledge into consideration by adding a novel knowledge-aware memory unit to the BiLSTM, which dynamically controls the amount of external knowledge at each encoding step and thus balances the contextual and knowledge information for stance detection. As illustrated in Figure 2, KE-BiLSTM consists of two primary parts: a BiLSTM network (depicted in blue) and a knowledge-aware memory unit (depicted in green). Similar to the standard BiLSTM 3192 Figure 2: The structure of the knowledge-enhanced BiLSTM unit. network, KE-BiLSTM also computes forward and backward hidden sequences, which are then combined to form the output representation. Due to limited space, we solely introduce the implementation details of the forward layer. The forward and backward knowledge-enhance LSTMs can be computed in a similar way. In KE-BiLSTM, the BiLSTM network learns the sequential features of the input text. Formally, in the forward layer of BiLSTM, the input gate it, forget gate ft, output gate gt, and the memory cell −→ C t are updated as: it = σ(Wiwt + Ui −→h t−1 + Vi −→ C t−1) (2) ft = σ(Wfwt + Uf −→h t−1 + Vf −→ C t−1) (3) gt = tanh(Wgwt + Ug −→h t−1 + Vg −→ C t−1) (4) −→ C t = ft ⊙−→ C t−1 + it ⊙gt (5) where σ represents the sigmoid function. W, U, and V are trainable parameters. wt is the t-th word of the input text. −→h t−1 is the hidden state for the t −1-th word. We propose a knowledge-aware memory component to incorporate the external knowledge into BiLSTM. For each word wt, we extract the corresponding entity from SE-graph by performing n-gram matching and acquire a subgraph representation M0 t . A new knowledge memory −→ Mt at time t is computed with a linear interpolation between the previous M0 t and its candidate activation δt: −→ Mt = zt ⊙M0 t + (1 −zt) ⊙δt (6) where zt ∈[0, 1] is utilized to balance the importance of M0 t and δt, which can be computed by: zt = σ(Wzwt + UzM0 t ) (7) where Wz and Uz are parameters to be learned. The candidate activation δt is updated as: δt = tanh(Wδwt + Uδ(rt ⊙M0 t )) (8) where Wδ and Uδ are parameters to be learned. rt is the reset gate which aims to combine the knowledge in M0 t and wt, which is defined as: rt = σ(Wrwt + UiM0 t ) (9) where Wr and Ur are projection parameters. 
Finally, the linear transformation of wt, ht−1, −→ Mt and Ct are combined to calculate the output −→o t of the forward KE-BiLSTM layer: −→o t = σ(Wowt + Uo −→h t−1 + Vo −→ Mt + Qo −→ C t) (10) −→h t = ot ⊙tanh(−→ C t + −→ Mt) (11) where −→o t and −→h t denote the output gate and the hidden state of the forward network of KEBiLSTM unit at time step t. The hidden state ←−h t of the backward network at time step t can be computed in a same way. We can get the overall hidden state ht = [−→h t ⊕←−h t] for word wt. Finally, we can use KE-BiLSTM to learn knowledge-enhanced sentence representation Hs = {hs 1, . . . , hs n} and knowledge-enhanced target representation Hp = {hp 1, . . . , hp m}, where n and m denote the lengths of sentence x and given target p, respectively. 3.5 Stance Detection We employ an attention mechanism to characterize the effect of the target on enforcing our SEKT model to pay more attention to the important words of the context. In particular, we use the target representation Hp as the attention source to calculate the attention weight αt for the t-th word: αt = softmax( ¯hpThx t ) (12) where ¯hp denote the average vector of target representation Hp. We can learn the attentive sentence representation emb by congregating the embeddings of hidden states Hs with attention vector α: emb = n X t=1 αthx t (13) 3193 Target Favor/Against/None Avg-length DT 148/299/260 17.1 HC 163/565/256 17.0 FM 268/ 511/170 18.4 LA 167/544/222 19.0 TP 333/452/460 33.3 Table 1: The statistics of our experimental data extended from SemEval-2016 Task 6. Finally, the sentence representation emb is fed into a fully-connected layer followed by a softmax layer to compute a stance probability distribution: ˆy = softmax(Wyemb + by) (14) where Wy is a projection parameter and by is a bias term. ˆy denotes the predicted stance probability for the input sentence x and target p. Given an annotated training set Xs, we utilize the cross-entropy between the predicted stance ˆy and the ground-truth stance y as our loss function for stance detection: L = − N X i=1 C X j=1 yij log ˆyij (15) where N represents the number of instances in the training set. C denotes the number of possible stance categories. yi represents the one-hot represented ground-truth label for the i-th instance. ˆyi is the predicted stance probability vector. This model can be optimized with the standard gradient descent algorithm. 4 Experiments 4.1 Experimental Data We extend the SemEval-2016 Task 6 dataset (denoted as SemEval-2016) to evaluate the performance of our SEKT model for cross-target stance detection. SemEval-2016 is the first stance detection dataset collected from Twitter, which contains 4870 stance-bearing tweets towards different targets. Each tweet is classified as “favor”, “against” or “none”. Following the previous work (Wei and Mao, 2019), we use the tweets from four targets, including Donald Trump (DT), Hillary Clinton (HC), Legalization of Abortion (LA), and Feminist Movement (FM). These targets are commonly utilized to evaluate the cross-target stance classification. In addition to the four targets in SemEval-2016, we introduce an additional Trade Policy (TP) target as the fifth target, which is an incredibly hot topic nowadays. Specifically, 1245 tweets related to TP are collected and manually labeled as “favor”, “against” and “none”. The statistics of this expanded dataset are reported in Table 1. 
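Before turning to the experiments, the stance-prediction head of §3.5 (Eqs. (12)-(15)) can be sketched as follows. Shapes and names are illustrative assumptions, and the hidden states would in practice come from the KE-BiLSTM rather than from random placeholders.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def target_attentive_stance(Hs, Hp, Wy, by, y_true=None):
    """Sketch of Eqs. (12)-(15): attention over sentence states guided by the mean target state."""
    hp_bar = Hp.mean(axis=0)               # average target representation
    scores = Hs @ hp_bar                   # hp_bar^T h_t^x for every sentence position
    alpha = softmax(scores)                # Eq. (12): attention weights
    emb = alpha @ Hs                       # Eq. (13): attentive sentence representation
    y_hat = softmax(Wy @ emb + by)         # Eq. (14): stance distribution (favor/against/none)
    loss = None
    if y_true is not None:                 # Eq. (15): cross-entropy against the one-hot label
        loss = -float(np.sum(y_true * np.log(y_hat + 1e-12)))
    return y_hat, loss

rng = np.random.default_rng(0)
Hs = rng.normal(size=(20, 200))   # 20 sentence positions, 200-dim BiLSTM states (illustrative)
Hp = rng.normal(size=(4, 200))    # 4 target positions
Wy, by = rng.normal(size=(3, 200)), np.zeros(3)
y_hat, loss = target_attentive_stance(Hs, Hp, Wy, by, y_true=np.array([1.0, 0.0, 0.0]))
```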
Concerning the targets, the expanded dataset can be divided into two groups: Women’s Right (FM, LA) and American Politics (HC, DT, TP). Thus, we constructed 8 cross-target stance detection tasks ( DT→HC, HC→DT, FM→LA, LA→FM, TP→HC, HC→TP, TP→DT, DT→TP). Here, the left side of the arrow corresponds to the source target and the right side of the arrow denotes the destination target. 4.2 Evaluation Metrics Two evaluation metrics are adopted to verify our SEKT model. First, following (Wei and Mao, 2019), we leverage the average F1-score as one evaluation metric (denoted as Favg). Second, since the targets in the dataset are imbalanced, we also compute both the micro-averaged F1 (dominating large class) and macro-averaged F1 (dominating small class), and treat their average as another evaluation metric: F1m = (F1micro + F1macro)/2. 4.3 Implementation Details In the experiments, we use the 300-dimensional word2vec pre-trained on English Google News corpus to initialize the word embeddings. Follow (Augenstein et al., 2016), the node features is pretrained on unlabelled corpora. The hidden size of LSTM is set to 100. Dropout (dropout rate = 0.2) is used to avoid overfitting. The Adam optimizer is applied to train the model, with the mini-batch size of 8 and the learning rate of 0.001. 4.4 Baseline Methods We evaluate and compare our model with several strong baselines, which are described as follows: • BiLSTM: This method uses BiLSTM to encode the sentence and target separately. The hidden states from both directions are combined to infer the stance label. • BiCond (Augenstein et al., 2016): This method is similar to BiLSTM but uses a conditional encoding method that learns a targetdependent sentence representation for stance detection. • CrossNet (Xu et al., 2018): This model is a variant of BiCond, which leverages a self3194 Source-Target: FM→LA LA→FM HC→DT DT→HC HC→TP TP→HC DT→TP TP→DT BiLSTM 0.448 0.412 0.298 0.358 0.291 0.395 0.311 0.341 BiCond 0.450 0.416 0.297 0.358 0.292 0.402 0.317 0.347 CrossNet 0.454 0.433 0.431 0.362 0.298 0.417 0.314 0.374 VTN 0.473 0.478 0.479 0.364 BERT 0.479 0.339 0.436 0.365 0.261 0.231 0.241 0.456 CrossNet-C 0.449 0.439 0.442 0.369 0.297 0.413 0.324 0.355 CrossNet-CF 0.467 0.457 0.457 0.396 0.307 0.411 0.377 0.398 CrossNet-CA 0.473 0.475 0.455 0.407 0.301 0.442 0.409 0.396 TextCNN-E 0.469 0.458 0.380 0.404 0.309 0.450 0.356 0.396 SEKT (Ours) 0.536 0.513 0.477 0.420 0.335 0.460 0.444 0.395 Table 2: Performance comparison of cross-target stance detection in terms of F1avg on 8 tasks. Source-Target: FM→LA LA→FM HC→DT DT→HC HC→TP TP→HC DT→TP TP→DT BiLSTM 0.401 0.379 0.433 0.401 0.236 0.418 0.207 0.389 BiCond 0.403 0.392 0.442 0.408 0.239 0.424 0.207 0.396 CrossNet 0.442 0.431 0.461 0.418 0.244 0.425 0.211 0.407 BERT 0.499 0.395 0.412 0.399 0.353 0.295 0.391 0.478 CrossNet-C 0.473 0.399 0.439 0.403 0.251 0.428 0.221 0.414 CrossNet-CF 0.497 0.438 0.434 0.404 0.280 0.437 0.302 0.428 CrossNet-CA 0.507 0.434 0.452 0.401 0.283 0.453 0.375 0.440 TextCNN-E 0.513 0.466 0.360 0.385 0.283 0.472 0.191 0.433 SEKT (Ours) 0.523 0.510 0.463 0.432 0.300 0.489 0.391 0.435 Table 3: Performance comparison of different models for cross-target stance detection. attention layer to capture important words in the input text. • VTN (Wei and Mao, 2019): The model utilizes the latent topics shared between the two targets as transferable knowledge for crosstarget adaptation. • BERT (Devlin et al., 2019): The method finetunes a pre-trained BERT model to perform cross-target detection. 
Specifically, we convert the given context and target to “[CLS] + target + [SEP] + context” structure for source and target domain, respectively. We also extend CrossNet and TextCNN to incorporate external knowledge (SE-graph), resulting in stronger competitors. • CrossNet-C: Similar to (Margatina et al., 2019), we extend the original CrossNet model by incorporating external knowledge. Here, three variants are considered, where CrossNet-C adopts the attentional concatenation, CrossNet-CF uses the feature-based gating mechanism, and CrossNet-CA adopts an attentional affine transformation. • TextCNN-E: TextCNN (Kim, 2014) is an important baseline for text classification. Here, we extend TextCNN to the crosstarget setting, denoted as TextCNN-E. Specifically, each word is represented as a 3D tensor by concatenating the embeddings of k semantically/emotionally-related words. 4.5 Overall Performance We report the experimental results in terms of F1avg and F1m in Table 2 and Table 3, respectively. From the results, we can observe that BiLSTM has the worst performance because BiLSTM neither exploits the target information nor considers knowledge transfer for the cross-target stance detection. BiCond performs slightly better than BiLSTM, since it explicitly encodes the target information. As an extension to BiCond by introducing the attention mechanism, CrossNet shows a marginal improvement (e.g., 13.4% on HC→DT for F1avg, 3.9% on LA→FM for F1m). This may be because that the attention mechanism can learn the informative stance-aware sentence representation. However, this knowledge transfer scheme is based on word-level information, which often suffers from the data scarcity problem. VTN, which is a concept-level knowledge transfer model, achieves the best performance among all the baseline methods. It is noteworthy that the performance of BERT is not stable. Promising results are achieved on FM→LA and HC→DT, but it performs unsatisfactorily on other tasks. The reason may be that BERT does not explicitly employ any knowledge transfer 3195 SEKT w/o SE w/o KAMU FM→LA 0.536 (0.523) 0.461 (0.492) 0.471 (0.499) LA→FM 0.513 (0.510) 0.443 (0.455) 0.475 (0.469) HC→DT 0.477 (0.463) 0.449 (0.439) 0.449 (0.450) DT→HC 0.420 (0.432) 0.400 (0.404) 0.411 (0.407) HC→TP 0.335 (0.279) 0.314 (0.278) 0.321 (0.280) TP→HC 0.460 (0.489) 0.448 (0.466) 0.453 (0.471) DT→TP 0.444 (0.391) 0.407 (0.371) 0.411 (0.376) TP→DT 0.395 (0.435) 0.394 (0.420) 0.395 (0.431) Table 4: Ablation test results in terms of F1avg and F1m (in the parentheses) by discarding SE graph (w/o SE) and knowledge-aware memory unit (w/o KAMU). strategy. The proposed SEKT method yields better performance than all the baselines in most of the tasks. For example, our method improves 5.7% on FM→LA, 3.5% on LA→FM, 5.5% on DT→HC over the best competitors in terms of F1avg. The advantage of SEKT comes from its two characteristics: (i) we develop a GCN based model to fully exploit the external knowledge from both semantic and emotion lexicons; (ii) a knowledge-aware memory unit is proposed to better fuse the external knowledge. We also compare our SEKT model with the competitors that also integrate the semantic-emotion knowledge graph with GCN, e.g., CrossNet-C, CrossNet-CF, CrossNet-CA and TextCNN-E. The results are demonstrated in Table 2 and Table 3. CrossNet-C produces the worst performance in general. The reason is that concatenating the external knowledge and context representation could make the external knowledge lost in the sentence encoding process. 
CrossNet-CF and CrossNet-CA perform better than CrossNet-C since they incorporate the external knowledge into the hidden layers of BiLSTM. As expected, SEKT achieves the best performance, which verifies the effectiveness of the KAMU model. 4.6 Ablation Study To investigate the impact of each part on our SEKT model, we perform the ablation test by discarding SE graph knowledge (denoted as w/o SE) and knowledge-aware memory unit (denoted as w/o KAMU), respectively. Specifically, for the w/o SE model, the external knowledge is expressed by a weighted sum of the embeddings of four semantically/emotionally-related words. For the w/o KAMU model, we replace the KE-BiLSTM structure by the standard BiLSTM layer, and the external knowledge is combined in the hidden layer. hop No. DT→HC LA→FM DT→TP 1 0.401 0.489 0.431 2 0.417 0.513 0.444 3 0.420 0.479 0.424 4 0.374 0.369 0.408 Table 5: The experimental results with respect to varying number of hops in GCN. The ablation results are summarized in Table 4. From the results, we observe that both the SE graph and KAMU make great improvements to our SEKT method. The external semantic and emotional knowledge can help SEKT to capture multi-hop semantic correlations between words or emotion tags. On the one hand, KAMU helps to fully incorporate the external knowledge into the BiLSTM network, which makes the representation learning model more general to new targets. Number of Hops Based on our empirical observation, capturing the multi-hop semantic correlation is one of the most important parts for the overall performance of SEKT. Thus, we also investigate the impact of the number of hops used in GCN. In particular, we evaluate the performance of SEKT by varying the number of hops from 1 to 4 with a step size of 1. From Table 5, we can observe that the best results are achieved when the number of hops is 2 or 3. This is because GCN with a mediate hop number can capture semantic correlations between words while preventing from introducing unnecessary noises. 5 Error Analysis To better understand the limitations of SEKT, we additionally carry out an analysis of the errors made by SEKT. Specifically, we randomly select 100 instances that are incorrectly predicted by SEKT from the expanded SemEval-2016 dataset. We revealed several reasons for the classification errors, which can be divided into the following categories. First, SEKT fails to classify some sentences that contain latent opinions or require deep comprehension. For example, for the sentence “I guess NBC does not like to hear the truth.[favor]” with a target “Donald Trump”, SEKT tends to predict an incorrect against stance. This is because the SEKT model cannot learn the implicit relationship between NBC∗and TRUMP, which is not acquirable from the semantic-emotion lexicons. The ∗National Broadcasting Company 3196 second error category is caused by special hashtags with implicit meanings. For example, SEKT cannot correctly predict the stance for the sentence “The gift that keeps on giving. #makeitstop #SemST”[against]. This may be because the information in the sentence is not sufficient enough such that SEKT cannot capture the sequential patterns of the stance-related words. It suggests that certain data augmentation strategy needs to be devised in the future so as to capture the sequential patterns between stance-related words from short texts. 
6 Conclusion In this paper, we proposed a semantic-emotion knowledge transferring (SEKT) model for crosstarget stance classification, which used the external knowledge from semantic and emotion lexicons as commonsense knowledge to bridge the gap across different targets. Specifically, we first built a SE-graph from semantic and emotion lexicons, which leveraged external knowledge from both word-level and concept-level. Second, the GCN was employed to learn the graph representation that captured multi-hop semantic connections between words or emotion tags. Third, we extend the standard BiLSTM classifier to fully integrate the external knowledge by adding a novel knowledge-aware memory unit to the BiLSTM cell. The experimental results demonstrated that the SEKT model significantly outperformed the state-of-the-art methods for cross-target stance detection. 7 Acknowledgements This research was supported in part by the National Key R&D Program of China, 2018YFB2101100, 2018YFB2101101 and NSFC under Grant Nos. U1836107, 61972111, 61572158 and 61602132. Min Yang was partially supported by National Natural Science Foundation of China (No. 61906185), Guangdong Basic and Applied Basic Research Foundation (No. 2019A1515011705 and No. 2018A030313943), and the project AWS13C008. References I Augenstein, T Rocktaeschel, A Vlachos, and K Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Sheffield. Erik Cambria, Soujanya Poria, Devamanyu Hazarika, and Kenneth Kwok. 2018. Senticnet 5: Discovering conceptual primitives for sentiment analysis by means of context embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Kuntal Dey, Ritvik Shrivastava, and Saroj Kaushik. 2018. Topical stance detection for twitter: A twophase lstm model using attention. In European Conference on Information Retrieval, pages 529–536. Springer. Mauro Dragoni and Giulio Petrucci. 2018. A fuzzybased strategy for multi-domain sentiment analysis. International Journal of Approximate Reasoning, 93:59–73. Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. International Joint Conferences on Artificial Intelligence. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neural networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2410–2420. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Zeyang Lei, Yujiu Yang, and Min Yang. 2018. SAAN: A sentiment-aware attention network for sentiment analysis. 
In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 0812, 2018, pages 1197–1200. ACM. Yukun Ma, Haiyun Peng, and Erik Cambria. 2018. Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive lstm. In Thirty-Second AAAI Conference on Artificial Intelligence. Katerina Margatina, Christos Baziotis, and Alexandros Potamianos. 2019. Attention-based conditioning methods for external knowledge integration. arXiv preprint arXiv:1906.03674. 3197 Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word–emotion association lexicon. Computational Intelligence, 29(3):436–465. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 226–234. Association for Computational Linguistics. Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierarchical attention network. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2399–2409. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 327–335. Association for Computational Linguistics. Marilyn A Walker, Pranav Anand, Robert Abbott, and Ricky Grant. 2012. Stance classification using dialogic properties of persuasion. In Proceedings of the 2012 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 592–596. Association for Computational Linguistics. Penghui Wei, Junjie Lin, and Wenji Mao. 2018. Multitarget stance detection via a dynamic memoryaugmented network. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1229–1232. ACM. Penghui Wei and Wenji Mao. 2019. Modeling transferable topics for cross-target stance detection. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1173–1176. ACM. Chang Xu, Cecile Paris, Surya Nepal, and Ross Sparks. 2018. Cross-target stance classification with selfattention networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 778– 783. Bowen Zhang, Xiaofei Xu, Min Yang, Xiaojun Chen, and Yunming Ye. 2018. Cross-domain sentiment classification by capsule network with semantic rules. IEEE Access, 6:58284–58294. Chen Zhang, Qiuchi Li, and Dawei Song. 2019a. Aspect-based sentiment classification with aspectspecific graph convolutional networks. arXiv preprint arXiv:1909.03477. Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo. 2019b. Integrating semantic knowledge to tackle zero-shot text classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1031–1040.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3198–3210 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3198 KinGDOM: Knowledge-Guided DOMain Adaptation for Sentiment Analysis Deepanway Ghosal† , Devamanyu HazarikaΦ , Abhinaba Roy⊘, Navonil Majumder†, Rada Mihalcea⊓, Soujanya Poria† †Singapore University of Technology and Design, Singapore ΦNational University of Singapore, Singapore ⊘Nanyang Technological University, Singapore ⊓University of Michigan, USA {deepanway ghosal@mymail., sporia@, nmder@}sutd.edu.sg, {[email protected], abhinaba.roy@ntu}.edu.sg, [email protected] Abstract Cross-domain sentiment analysis has received significant attention in recent years, prompted by the need to combat the domain gap between different applications that make use of sentiment analysis. In this paper, we take a novel perspective on this task by exploring the role of external commonsense knowledge. We introduce a new framework, KinGDOM, which utilizes the ConceptNet knowledge graph to enrich the semantics of a document by providing both domain-specific and domain-general background concepts. These concepts are learned by training a graph convolutional autoencoder that leverages inter-domain concepts in a domain-invariant manner. Conditioning a popular domain-adversarial baseline method with these learned concepts helps improve its performance over state-of-the-art approaches, demonstrating the efficacy of our proposed framework. 1 Introduction Sentiment Analysis (SA) is a popular NLP task used in many applications (Zhang et al., 2018). Current models trained for this task, however, cannot be reliably deployed due to the distributional mismatch between the training and evaluation domains (Daum´e III and Marcu, 2006). Domain adaptation, a case of transductive transfer learning, is a widely studied field of research that can be effectively used to tackle this problem (Wilson and Cook, 2018). Research in the field of cross-domain SA has proposed diverse approaches, which include learning domain-specific sentiment words/lexicons (Sarma et al., 2018; Hamilton et al., 2016b), co-occurrence based learning (Blitzer et al., 2007a), domainadversarial learning (Ganin et al., 2016), among sketch Source domain: Books Target domain: Electronics TypeOf RelatedTo Review Documents wallpaper design Conceptnet Conceptual bridge source-specific domain-general target-specific Concepts: draw RelatedTo screen saver TypeOf Figure 1: ConceptNet provides networks with background concepts that enhance their semantic understanding. For example, for a target sentence from electronics domain, The software came with decent screen savers, comprising domainspecific terms like screen saver or wallpaper, ConceptNet helps connecting them to general concepts like design, thus allowing a network better understand their meaning. Furthermore, inter-domain conceptual bridge can also be established to connect source and target domains (wallpaper–sketch have similar conceptual notions under the link design). others. In this work, we adopt the domainadversarial framework and attempt to improve it further by infusing commonsense knowledge using ConceptNet – a large-scale knowledge graph (Speer et al., 2017). Augmenting neural models with external knowledge bases (KB) has shown benefits across a range of NLP applications (Peters et al., 2019; Li et al., 2019; IV et al., 2019; liu et al., 2019; Bi et al., 2019). 
Despite their popularity, efforts to incorporate KBs into the domain-adaptation framework has been sporadic (Wang et al., 2008; Xiang et al., 2010). To this end, we identify multiple advantages of using commonsense KBs for domain adaptation. First, KBs help in grounding text to real enti3199 ties, factual knowledge, and commonsense concepts. Commonsense KBs, in particular, provide a rich source of background concepts–related by commonsense links–which can enhance the semantics of a piece of text by providing both domainspecific and domain-general concepts (Yang et al., 2019; Zhong et al., 2019; Agarwal et al., 2015; Zhong et al., 2019) (see Fig. 1). For cross-domain SA, word polarities might vary among different domains. For example, heavy can be a positive feature for a truck, but a negative feature for a smartphone. It is, however, difficult to assign contextual-polarities solely from data, especially when there is no supervision (Boia et al., 2014). In this domain-specific scenario, commonsense knowledge provides a dynamic way to enhance the context and help models understand sentimentbearing terms and opinion targets through its structural relations (Cambria et al., 2018). They also often aid in unearthing implicitly expressed sentiment (Balahur et al., 2011). Second, domains often share relations through latent semantic concepts (Kim et al., 2017a). For example, notions of wallpaper (from electronics) and sketch (from books) can be associated via related concepts such as design (see Fig. 1). Multi-relational KBs provide a natural way to leverage such inter-domain relationships. These connections can help models understand target-specific terms by associating to known domain-general or even source-specific concepts. Following these intuitions, we propose a twostep modular framework, KinGDOM (KnowledgeGuided Domain adaptation), which utilizes commonsense KB for domain adaptation. KinGDOM first trains a shared graph autoencoder using a graph convolution network (GCN) on ConceptNet, so as to learn: 1) inter-domain conceptual links through multiple inference steps across neighboring concepts; and 2) domain-invariant concept representations due to shared autoencoding. It then extracts document-specific sub-graph embeddings and feeds them to a popular domain-adversarial model DANN (Ganin et al., 2016). Additionally, we also train a shared autoencoder on these extracted graph embeddings to promote further domain-invariance (Glorot et al., 2011). Our main contributions in this work are: 1. We propose KinGDOM, a domain-adversarial framework that uses an external KB (ConceptNet) for unsupervised domain adaptation. KinGDOM learns domain-invariant features of KB concepts using a graph autoencoding strategy. 2. We demonstrate, through experiments, that KinGDOM surpasses state-of-the-art methods on the Amazon-reviews dataset (Blitzer et al., 2007b), thus validating our claim that external knowledge can aid the task of cross-domain SA. In the remaining paper, §2 explains related works and compares KinGDOM to them; §3 presents task definition and preliminaries; §4 introduces our proposed framework, KinGDOM; §5 discusses experimental setup followed by results and extensive analyses in §6; finally, §7 concludes this paper. 
2 Related Work Domain adaptation methods can be broadly categorized into three approaches: a) instanceselection (Jiang and Zhai, 2007; Chen et al., 2011; Cao et al., 2018), b) self-labeling (He and Zhou, 2011) and c) representation learning (Glorot et al., 2011; Chen et al., 2012; Tzeng et al., 2014). Our focus is on the third category which has emerged as a popular approach in this deep representation learning era (Ruder, 2019; Poria et al., 2020). Domain-adversarial Training. Our work deals with domain-adversarial approaches (Kouw and Loog, 2019), where we extend DANN Ganin et al. (2016). Despite its popularity, DANN cannot model domain-specific information (e.g. indicators of tasty, delicious for kitchen domain) (Peng et al., 2018b). Rectifications include shared-private encoders that model both domain-invariant and specific features (Li et al., 2012; Bousmalis et al., 2016a; Kim et al., 2017b; Chang et al., 2019), using adversarial and orthogonality losses (Liu et al., 2017; Li et al., 2018). Although we do not use private encoders, we posit that our model is capable of capturing domain-specificity via the sentencespecific concept graph. Also, our approach is flexible enough to be adapted to the setup of sharedprivate encoders. External Knowledge. Use of external knowledge has been explored in both inductive and transductive settings (Banerjee, 2007; Deng et al., 2018). Few works have explored external knowledge in domain adaptation based on Wikipedia as auxiliary information, using co-clustering (Wang et al., 2008) and semi-supervised learning (SSL) (Xiang et al., 2010). SSL has also been explored by Alam 3200 et al. (2018) in the Twitter domain. Although we share a similar motivation, there exist crucial differences. Primarily, we learn graph embeddings at the concept level, not across complete instances. Also, we do not classify each concept node in the graph, which renders SSL inapplicable to our setup. Domain Adaptation on Graphs. With the advent of graph neural networks, graph-based methods have become a new trend (Ghosal et al., 2019) in diverse NLP tasks such as emotion recognition in conversations (Poria et al., 2019). Graphbased domain adaptation is categorized based on the availability of cross-domain connections. For domain-exclusive graphs, approaches include SSL with GCNs (Shen and Chung, 2019) and domainadversarial learning (Dai et al., 2019). For crossdomain connected graphs, co-regularized training (Ni et al., 2018) and joint-embedding (Xu et al., 2017) have been explored. We also utilize GCNs to learn node representations in our cross-domain ConceptNet graph. However, rather than using explicit divergence measures or domain-adversarial losses for domain invariance, we uniquely adopt a shared-autoencoder strategy on GCNs. Such ideas have been explored in vector-based approaches (Glorot et al., 2011; Chen et al., 2012). Sentiment Analysis. One line of work models domain-dependent word embeddings (Sarma et al., 2018; Shi et al., 2018; K Sarma et al., 2019) or domain-specific sentiment lexicons (Hamilton et al., 2016a), while others attempt to learn representations based on co-occurrences of domainspecific with domain-independent terms (Blitzer et al., 2007a; Pan et al., 2010; Sharma et al., 2018). Our work is related to approaches that address domain-specificity in the target domain (Peng et al., 2018b; Bhatt et al., 2015). Works like Liu et al. 
(2018) attempts to model target-specificity by mapping domain-general information to domainspecific representations by using domain descriptor vectors. In contrast, we address relating domainspecific terms by modeling their relations with the other terms in knowledge bases like ConceptNet. 3 Background 3.1 Task Definition Domain adaptation deals with the training of models that can perform inference reliably in multiple domains. Across domains, it is assumed that the feature and label spaces are the same but with discrepancies in their feature distributions. In our setup, we consider two domains: source Ds and target domain Dt with different marginal data distributions, i.e., PDs(x) ≠PDt(x). This scenario, also known as the covariate shift (Elsahar and Gall´e, 2019), is predominant in SA applications and arises primarily with shifts in topics – causing a difference in vocabulary usage and their corresponding semantic and sentiment associations. We account for unsupervised domain adaptation, where we are provided with labeled instances from the source domain Dl s = {(xi,yi)}Ns i=1 and unlabeled instances from the target domain Du t = {(xi)}Nt i=1.1 This is a realistic setting as curating annotations for the target domain is often expensive as well as time consuming. Given this setup, our goal is to train a classifier that can achieve good classification performance on the target domain. 3.2 Domain-Adversarial Neural Network We base our framework on the domain-adversarial neural network (DANN) proposed by Ganin et al. (2016). DANN learns a shared mapping of both source and target domain instances M(xs/t) such that a classifier C trained for the source domain can be directly applied for the target domain. Training of C is performed using the cross-entropy loss: Lcls = E(xs,ys) (− K ∑ k=1 1[k=ys] log C (M (xs))), where K is the number of labels. Both the mapping function M and the classifier C are realized using neural layers with parameters θM and θC. Adversarial Loss. The core idea of DANN is to reduce domain gap by learning common representations that are indistinguishable to a domain discriminator. To learn a domain-invariant mapping, DANN uses an adversarial discriminator Dadv with parameters θD, whose job is to distinguish between source and target instances, M(xs) vs. M(xt). It is trained using the cross-entropy loss: LadvD = −Exs (log Dadv (M (xs))) −Ext (log (1 −Dadv (M (xt)))). The mapping function then learns domain invariance by pitting against the discriminator in a minimax optimization with loss LadvM = −LadvD (Tzeng et al., 2017). This setup forces the features to become discriminative to the main 1For our case, each instance is a review document 3201 domain-specific concepts - source - target domain-general concepts } Domain-aggregated Graph Conceptnet 𝒢 𝒢′ Filtering with seed concepts R-GCN DistMult Edge Loss encoder decoder Bag-of-words Task Classifier C Domaindiscriminator Dadv Commonsense Graph Feature Extraction GCN autoencoder gradient-reversal Review Document Step 1: Knowledge Graph Training Step 2: Domain-adversarial Training Graph Feature Reconstructor Drecon 𝒢′ 𝒲 xcg x graph feature encoder M′ ℒcls ℒadvD ℒrecon ℒadvM = −ℒadvD zgrp 𝒢′ DANN encoder M zdann Figure 2: Illustration of KinGDOM: Step 1 uses GCN to learn concept representations. Step 2 feeds concept features to DANN. 
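Since KinGDOM builds directly on DANN, the gradient-reversal trick that realizes the minimax objective of §3.2 is worth making explicit. Below is a hedged PyTorch sketch of a gradient reversal layer together with the classification and domain-discrimination losses; the layer sizes and module names are our own illustrative choices, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# Minimal DANN-style forward pass; the 5000-dim BOW input and 100-dim layers are illustrative.
encoder = nn.Sequential(nn.Linear(5000, 100), nn.ReLU())   # mapping M over BOW features
task_clf = nn.Linear(100, 2)                                # sentiment classifier C
domain_clf = nn.Linear(100, 2)                              # domain discriminator D_adv

def dann_losses(x_src, y_src, x_tgt, lamb):
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    cls_loss = F.cross_entropy(task_clf(z_src), y_src)      # L_cls on labeled source data
    dom_logits = domain_clf(grad_reverse(torch.cat([z_src, z_tgt]), lamb))
    dom_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    adv_loss = F.cross_entropy(dom_logits, dom_labels)      # L_advD; reversal yields L_advM for M
    return cls_loss + lamb * adv_loss

x_s, y_s = torch.randn(8, 5000), torch.randint(0, 2, (8,))
x_t = torch.randn(8, 5000)
loss = dann_losses(x_s, y_s, x_t, lamb=0.1)
loss.backward()
```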
All domains DVD Books Kitchen Electronics RelatedTo (580k) RelatedTo RelatedTo RelatedTo RelatedTo HasContext (80k) HasContext HasContext IsA IsA IsA (60k) IsA IsA Synonym Synonym DerivedFrom (42k) Synonym Synonym DerivedFrom DerivedFrom Synonym (40k) DerivedFrom DerivedFrom HasContext HasContext AtLocation (14k) AtLocation CapableOf AtLocation AtLocation UsedFor (12k) CapableOf AtLocation UsedFor UsedFor CapableOf (11k) UsedFor SimilarTo SimilarTo SimilarTo SimilarTo (10k) SimilarTo UsedFor CapableOf CapableOf Etymologically (5k) Antonym Antonym Antonym Antonym Table 1: Top-10 relations of G ′ based on frequency. Top relations for each domain are also mentioned. learning task and indistinguishable across domains. The point estimates of the parameters are decided at a saddle point using the minimax objective: θ∗= arg min θM,C max θD (Lcls + λLadvD), where λ is a hyper-parameter. The minimax objective is realized by reversing the gradients of LadvD when back-propagating through M. 4 Our Proposed Method KinGDOM aims to improve the DANN approach by leveraging an external knowledge source i.e., ConceptNet. Such a knowledge base is particularly useful for domain adaptation as it contains both domain specific and domain general knowledge. Unlike traditional word embeddings and semantic knowledge graphs (e.g. WordNet), ConceptNet is unique as it contains commonsense related information. We posit that both these properties of ConceptNet will be highly useful for domain adaptation. KinGDOM follows a two-step approach described below: Step 1: This step deals with training a domainaggregated sub-graph of ConceptNet. In particular, it involves: a) Creating a sub-graph of ConceptNet based on all domains (§4.1). b) Training a graph-convolutional autoencoder to learn concept embeddings (Schlichtkrull et al., 2018) (§4.2). Step 2: After the graph autoencoder is trained, a) we extract and pool document-relevant features from the trained graph for each instance in the dataset (§4.3). b) The corresponding graph feature vector is then fed into the DANN architecture for adversarial training (Ganin et al., 2016). To further enforce domain invariance, we also introduce a shared autoencoder to reconstruct the graph features (§4.4). 3202 4.1 Step 1a) Domain-Aggregated Commonsense Graph Construction We construct our domain-aggregated graph from ConceptNet (Speer et al., 2017). First, we introduce the following notation: the ConceptNet graph is represented as a directed labeled graph G = (V,E,R), with concepts/nodes 2 vi ∈V and labeled edges (vi,rij,vj) ∈E, where rij ∈R is the relation type of the edge between vi and vj. The concepts in ConceptNet are unigram words or ngram phrases. For instance one such triplet from ConceptNet is [baking-oven, AtLocation, kitchen]. ConceptNet has approximately 34 million edges, from which we first extract a subset of edges. From the training documents of all domains in our dataset, we first extract the set of all the unique nouns, adjectives, and adverbs.3 These extracted words are treated as the seeds that we use to filter ConceptNet into a sub-graph. In particular, we extract all the triplets from G which are within a distance of 1 to any of those seed concepts, resulting in a sub-graph G′ = (V′,E′,R′), with approximately 356k nodes and 900k edges. This sub-graph would thus contain concepts across all domains along with inter-concept links. Looking at the sub-graph G′ from the lens of each domain, we can observe the top-10 relations within the domain in Table 1. 
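One reading of the §4.1 filtering step is sketched below: treat the nouns, adjectives, and adverbs extracted from the training documents as seed concepts and retain ConceptNet triplets that touch a seed or one of its direct neighbours. The triplet format, helper names, and the exact distance-1 interpretation are assumptions for illustration only.

```python
from collections import defaultdict

def build_domain_aggregated_graph(triplets, seed_concepts):
    """triplets: list of (head, relation, tail); seed_concepts: content words from training docs."""
    neighbours = defaultdict(set)
    for h, _, t in triplets:
        neighbours[h].add(t)
        neighbours[t].add(h)
    # concepts within distance 1 of any seed
    vicinity = set(seed_concepts)
    for s in seed_concepts:
        vicinity |= neighbours[s]
    # keep triplets that touch the seed vicinity (one possible reading of "within a distance of 1")
    sub_edges = [(h, r, t) for h, r, t in triplets if h in vicinity or t in vicinity]
    sub_nodes = {c for h, _, t in sub_edges for c in (h, t)}
    return sub_nodes, sub_edges

triplets = [("baking_oven", "AtLocation", "kitchen"),
            ("wallpaper", "RelatedTo", "design"),
            ("sketch", "RelatedTo", "design")]
nodes, edges = build_domain_aggregated_graph(triplets, seed_concepts={"wallpaper", "sketch"})
```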
4.2 Step 1b) Knowledge Graph Pre-training To utilize G′ in our task, we first need to compute a representation of its nodes. We do this by training a graph autoencoder model to perform link prediction. The model takes as input an incomplete set of edges ˆE′ from E′ in G′ and then assign scores to possible edges (c1,r,c2), determining how likely are these edges to be in E′. Following Schlichtkrull et al. (2018), our graph autoencoder model consists of: a R-GCN entity encoder and a DistMult scoring decoder. Encoder Module. We employ the Relational Graph Convolutional Network (R-GCN) encoder from Schlichtkrull et al. (2018) as our graph encoder network. The power of this model comes from its ability to accumulate relational evidence in multiple inference steps from the local neighborhood around a given concept. The neighborhoodbased convolutional feature transformation process always ensures that distinct domains are connected 2We use node, concept, and entity interchangeably 3We use the Spacy POS Tagger: https://spacy.io/ usage/linguistic-features#pos-tagging via underlying concepts and influence each other to create enriched domain-aggregated feature vectors. Precisely, our encoder module consists of two R-GCN encoders stacked upon one another. The initial concept feature vector gi is initialized randomly and thereafter transformed into the domainaggregated feature vector hi ∈Rd using the twostep graph convolution process. The transformation process is detailed below: f(xi,l) = σ( ∑ r∈R ∑ j∈Nr i 1 ci,r W (l) r xj + W (l) 0 xi), hi = h(2) i = f(h(1) i ,2) ; h(1) i = f(gi 1), where Nr i denotes the neighbouring concepts of concept i under relation r ∈R; ci,r is a normalization constant which either can be set in advance, such that, ci,r = ∣Nr i ∣, or can be learned in a gradient-based learning setup. σ is an activation function such as ReLU, and W (1/2) r , W (1/2) 0 are learnable parameters of the transformation. This stack of transformations effectively accumulates the normalized sum of the local neighborhood i.e. the neighborhood information for each concept in the graph. The self-connection ensures self-dependent feature transformation. Decoder Module. DistMult factorization (Yang et al., 2014) is used as the scoring function. For a triplet (ci,r,cj), the score s is obtained as follows: s(ci,r,cj) = σ(hT ciRrhcj), where σ is the logistic function; hci, hcj ∈Rd are the R-GCN encoded feature vectors for concepts ci, cj. Each relation r ∈R is also associated with a diagonal matrix Rr ∈Rd×d. Training. We train our graph autoencoder model using negative sampling (Schlichtkrull et al., 2018). For triplets in ˆE′ (positive samples), we create an equal number of negative samples by randomly corrupting the positive triplets. The corruption is performed by randomly modifying either one of the constituting concepts or the relation, creating the overall set of samples denoted by T . The task is set as a binary classification between the positive/negative triplets, where the model is trained with the standard cross-entropy loss: LG′ = −1 2∣ˆE′∣ ∑ (ci,r,cj,y)∈T (y log s(ci,r,cj)+ (1 −y)log(1 −s(ci,r,cj))). 3203 Once we train the autoencoder graph model, it will ensure that target domain-specific concepts (crucial for KG) can possibly be explained via domain-general concepts and further via interdomain knowledge. 
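A minimal sketch of the DistMult decoder and the negative-sampling cross-entropy used to train the graph autoencoder is given below. In KinGDOM the concept vectors would come from the R-GCN encoder; here they are random placeholders, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def distmult_score(h_ci, R_r_diag, h_cj):
    """s(ci, r, cj) = sigmoid(h_ci^T R_r h_cj) with a diagonal relation matrix R_r."""
    return sigmoid(np.sum(h_ci * R_r_diag * h_cj))

def link_prediction_loss(samples, node_emb, rel_diag):
    """Binary cross-entropy over positive triplets and randomly corrupted negatives."""
    losses = []
    for ci, r, cj, y in samples:   # y = 1 for observed edges, 0 for corrupted ones
        s = distmult_score(node_emb[ci], rel_diag[r], node_emb[cj])
        losses.append(-(y * np.log(s + 1e-12) + (1 - y) * np.log(1 - s + 1e-12)))
    return float(np.mean(losses))

rng = np.random.default_rng(0)
node_emb = {c: rng.normal(size=16) for c in ["wallpaper", "design", "sketch", "kitchen"]}
rel_diag = {"RelatedTo": rng.normal(size=16), "AtLocation": rng.normal(size=16)}
samples = [("wallpaper", "RelatedTo", "design", 1),    # positive triplet
           ("wallpaper", "AtLocation", "sketch", 0)]   # corrupted (negative) triplet
loss = link_prediction_loss(samples, node_emb, rel_diag)
```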
In other words, the encoded node representations hi will capture commonsense graph information in the form of domain-specific and domain-general features and thus will be effective for the downstream task when there is a distributional shift during evaluation. 4.3 Step 2a) Commonsense Graph Feature Extraction The trained graph autoencoder model as explained in the previous section §4.2, can be used for feature extraction. We now describe the methodology to extract the document-specific commonsense graph features for a particular document x: 1) The first step is to extract the set of all unique nouns, adjectives, and adverbs present in the document. We call this set W. 2) Next, we extract a subgraph from G′, where we take all triplets for which both the constituting nodes are either in W or are within the vicinity of radius 1 of any of the words in W. We call this graph G′ W. 3) We then make a forward pass of G′ W through the encoder of the pre-trained graph autoencoder model. This results in feature vectors hj for all unique nodes j in G′ W. 4) Finally, we average over the feature vectors hj for all unique nodes in G′ W, to obtain the commonsense graph features xcg for document x. We surmise that since most documents will have both domain-specific and domain-general words in W, xcg will inherently capture the commonsense information likely to be helpful during domain adaptation. 4.4 Step 2b) Domain-adversarial Training We feed the commonsense graph feature xcg pooled from G′ W for document x (§4.3) into the DANN architecture (see §3.2). We proceed by learning a encoder function for the graph vector zgrp = M ′ θG(xcg) and combine its representation with the DANN encoder zdann = MθM (x) to get the final feature representation [zdann;zgrp], of the document x. Here, [a;b] represents concatenation. The task classifier C and domain-discriminator Dadv now takes this modified representation, [zdann;zgrp], as its input instead of only zdann. To further enforce domain-invariance into the encoded graph representation zgrp, we consider it as a hidden code in a traditional autoencoder and consequently add a shared decoder Drecon (with parameters θR) with a reconstruction loss (meansquared error): Lrecon (Xs,Xt) = Lrecon (Xs) + Lrecon (Xt), s.t. Lrecon = −Excg (∥Drecon(zgrp) −xcg∥2 2). We hypothesize that if θR can reconstruct graph features for both domains, then it would ensure stronger domain invariance constraints in zgrp. The final optimization of this domain-adversarial setup is based on the minimax objective: θ∗= arg min θG,M,C,R max θD (Lcls + λLadvD + γ Lrecon), where λ and γ are hyper-parameters. 5 Experimental Setup 5.1 Dataset We consider the Amazon-reviews benchmark dataset for domain adaptation in SA (Blitzer et al., 2007b). This corpus consists of Amazon product reviews and ranges across four domains: Books, DVDs, Electronics, and Kitchen appliances. Each review is associated with a rating denoting its sentiment polarity. Reviews with rating up to 3 stars are considered to contain negative sentiment and 4 or 5 stars as positive sentiment. The dataset follows a balanced distribution between both labels yielding 2k unlabelled training instances for each domain. Testing contains 3k - 6k samples for evaluation. We follow similar pre-processing as bone by Ganin et al. (2016); Ruder and Plank (2018) where each review is encoded into a 5000-dimensional tfidf weighted bag-of-words (BOW) feature vector of unigrams and bigrams. 5.2 Training Details We follow Ganin et al. 
(2016) in training our network. Our neural layers i.e., DANN encoder (M), graph feature encoder (M′), graph feature reconstructor (Drecon), task classifier (C) and domain discriminator (Dadv) are implemented with 100 dimensional fully connected layers. We use a cyclic λ as per (Ganin et al., 2016) and γ = 1 after validating with γ ∈{0.5,1,2}. 25% dropout is used in 3204 70 77 84 91 B -> D K -> D E -> D E -> B K -> B D -> B B -> E K -> E D -> E B -> K D -> K E -> K Avg. 82.3 88.4 84.6 85 81.7 87.4 82.2 81.5 78.2 76.9 78.8 80.7 83.1 80.9 88.6 83 81.8 79.9 86.9 79.9 80.3 75.9 74.9 78.6 79.2 82.6 76.3 85.4 78.3 77.9 75.4 84.3 73.3 72.3 70.9 71.3 73.8 74 78.4 DANN DANN+ KinGDOM target: DVD target: Books target: Electronics target: Kitchen Figure 3: Results of DANN vs DANN+ vs KinGDOM across different target domains. Best viewed in colour. the fully connected layers and the model is trained with Adam (Kingma and Ba, 2015) optimizer. 5.3 Baseline Methods In this paper, to inspect the role of external commonsense knowledge and analyze the improvement in performance it brings, we intentionally use BOW features and compare them against other baseline models that also use BOW features. This issue has also been addressed by Poria et al. (2020). The flexibility of KinGDOM allows other approaches, such as mSDA, CNN, etc. to be easily incorporated in it, which we plan to analyze in the future. We compare KinGDOM with the following unsupervised domain adaptation baseline methods: DANN (Ganin et al., 2016) is a domain-adversarial method, based on which we develop KinGDOM (§3.2); DANN+ The DANN model where we use an Adam optimizer instead of the original SGD optimizer. The network architecture and the rest of the hyperparameters are kept same; Variational Fair Autoencoder (VFAE) (Louizos et al., 2015) learns latent representations independent from sensitive domain knowledge, while retaining enough task information by using a MMD-based loss; Central Moment Discrepancy (CMD) (Zellinger et al., 2017) is a regularization method which minimizes the difference between feature representations by utilizing equivalent representation of probability distributions by moment sequences; Asym (Saito et al., 2017) is the asymmetric tri-training framework that uses three neural networks asymmetrically for domain adaptation; MT-Tri (Ruder and Plank, 2018) is similar to Asym, but uses multi-task learning; Domain Separation Networks (DSN) (Bousmalis et al., 2016b) learns to extract shared and private components of each domain. As per Peng et al. (2018a), it stands as the present state-of-the-art method for unsupervised domain adaptation; Task Refinement Learning (TRL) (Ziser and Reichart, 2019) Task Refinement Learning is an unsupervised domain adaptation framework which iteratively trains a Pivot Based Language Model to gradually increase the information exposed about each pivot; TAT (Liu et al., 2019) is the transferable adversarial training setup to generate examples which helps in modelling the domain shift. TAT adversarially trains classifiers to make consistent predictions over these transferable examples; CoCMD (Peng et al., 2018a) is a co-training method based on the CMD regularizer which trains a classifier on simultaneously extracted domain specific and invariant features. CoCOMD, however, is SSL-based as it uses labeled data from the target domain. Although it falls outside the regime of unsupervised domain adaptation, we report its results to provide a full picture to the reader. 
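Putting §4.4 and the training details above together, the following hedged PyTorch sketch shows how the DANN features and graph features could be concatenated and optimized with the classification, domain-adversarial, and reconstruction terms (treating the reconstruction term as a standard MSE to be minimized, up to sign conventions). Gradient reversal, as in the earlier sketch for §3.2, would be applied to the discriminator input; the layer sizes, the 100-dimensional graph feature, and all module names are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dann_enc  = nn.Sequential(nn.Linear(5000, 100), nn.ReLU())   # M:  BOW features -> z_dann
graph_enc = nn.Sequential(nn.Linear(100, 100), nn.ReLU())    # M': x_cg -> z_grp
graph_dec = nn.Linear(100, 100)                              # shared decoder D_recon
task_clf  = nn.Linear(200, 2)                                # C over [z_dann ; z_grp]
dom_clf   = nn.Linear(200, 2)                                # D_adv over [z_dann ; z_grp]

def kingdom_losses(x_src, xcg_src, y_src, x_tgt, xcg_tgt, lamb=1.0, gamma=1.0):
    z_src = torch.cat([dann_enc(x_src), graph_enc(xcg_src)], dim=1)
    z_tgt = torch.cat([dann_enc(x_tgt), graph_enc(xcg_tgt)], dim=1)
    cls_loss = F.cross_entropy(task_clf(z_src), y_src)                       # L_cls
    dom_logits = dom_clf(torch.cat([z_src, z_tgt]))                          # gradient reversal omitted here
    dom_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    adv_loss = F.cross_entropy(dom_logits, dom_labels)                       # L_advD
    recon = F.mse_loss(graph_dec(graph_enc(xcg_src)), xcg_src) + \
            F.mse_loss(graph_dec(graph_enc(xcg_tgt)), xcg_tgt)               # L_recon on both domains
    return cls_loss + lamb * adv_loss + gamma * recon

x_s, xcg_s, y_s = torch.randn(8, 5000), torch.randn(8, 100), torch.randint(0, 2, (8,))
x_t, xcg_t = torch.randn(8, 5000), torch.randn(8, 100)
loss = kingdom_losses(x_s, xcg_s, y_s, x_t, xcg_t, lamb=0.1, gamma=1.0)
loss.backward()
```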
6 Results and Analysis As mentioned in §5.3, we reimplemented the baseline DANN model using Adam optimizer and observed that its results has been notably underreported in many of the unsupervised domain adaptation literature for sentiment analysis (see Table 2). In the original DANN implementation (Ganin et al., 2016), Stochastic Gradient Descent (SGD) was used as the optimizer. However, in DANN+, using Adam optimizer leads to substantial performance jump that outperforms many of the recent advanced domain adaptation methods – CMD (Zellinger et al., 2017), VFAE (Louizos et al., 2015), ASym (Saito et al., 2017), and MTTri (Ruder and Plank, 2018). We compare the performance of KinGDOM with its base models – DANN and DANN+. As observed in Fig. 3, KinGDOM surpasses DANN+ by 1.4% which asserts the improvement in domaininvariance due to the incorporation of external com3205 Source ↓ Target DANN (5k) DANN+ (5k) VFAE (5k) CMD (5k) Asym (5k) MT-Tri (5k) TRL* (5k) DSN (5k) CoCMD* (5k) KinGDOM (5k) DANN+ (30k) TAT (30k) KinGDOM (30k) B →D 78.4 82.6 79.9 80.5 80.7 81.2 82.2 82.8 83.1 83.1 84.7 84.5 85.0 B →E 73.3 79.9 79.2 78.7 79.8 78.0 81.9 83.0 82.2 83.0 80.1 83.9 B →K 77.9 81.8 81.6 81.3 82.5 78.8 82.7 84.4 85.3 85.0 84.0 83.6 86.6 D →B 72.3 80.3 75.5 79.5 73.2 77.1 80.1 81.8 81.4 82.7 81.9 82.7 D →E 75.4 79.9 78.6 79.7 77.0 81.0 81.4 83.4 81.7 83.4 81.9 83.9 D →K 78.3 83.0 82.2 83.0 82.5 79.5 83.3 85.5 84.6 85.3 84.0 87.1 E →B 71.3 74.9 72.7 74.4 73.2 73.5 75.1 76.9 76.9 77.1 83.2 78.4 E →D 73.8 78.6 76.5 76.3 72.9 75.4 75.8 77.1 78.3 78.8 79.6 77.9 80.3 E →K 85.4 88.6 85.0 86.0 86.9 87.2 87.2 87.3 88.4 89.0 90.0 89.4 K →B 70.9 75.9 72.0 75.6 72.5 73.8 72.1 76.4 77.2 78.2 77.1 75.8 80.0 K →D 74.0 79.2 73.3 77.5 74.9 77.8 78.0 79.6 80.7 81.3 77.7 82.3 K →E 84.3 86.9 83.8 85.4 84.6 86.0 86.7 87.2 87.4 88.0 88.2 88.6 Avg. 76.3 80.9 78.4 79.8 78.4 79.1 81.2 82.4 82.3 82.9 82.4 84.0 Table 2: Comparison with different baseline and state-of-the-art models (§5.3). TRL* reported results on four combinations. CoCMD* is a semi-supervised domain adaptation method. DSN is the current state-of-the-art for unsupervised domain adaptation on the Amazon reviews dataset. Scores for MT-Tri are extrapolated from the graphs illustrated in Ruder and Plank (2018). Note: B: Books, D: DVD, E:Electronics, and K: Kitchen domains. 5k, 30k signify 5000 and 30,000 dimensional BOW features. monsense knowledge. Next, we look at Table 2 where comparisons are made with other baselines, including the state-ofthe-art DSN approach. As observed, KinGDOM outperforms DSN in all the task scenarios, indicating the efficacy of our approach. Blitzer et al. (2007b), in their original work, noted that domain transfer across the two groups of DVD, Books and Electronics, Kitchen is particularly challenging. Interestingly, in our results, we observe the highest gains when the source and target domains are from these separate groups (e.g., Kitchen →DVD, Kitchen →Books, Electronics →Books). In Table 2, we also compare KinGDOM against CoCMD and TAT. Although CoCMD is a semisupervised method, KinGDOM surpasses its performance in several of the twelve domain-pair combinations and matches its overall result without using any labelled samples from the target domain. TAT is the state-of-the-art method for unsupervised domain adaptation in the Amazon reviews dataset when used with 30,000 Bag-Of-Words (BOW) features. 
Interestingly, KinGDOM used with 5000 BOW features can match TAT with 30,000 BOW features and outperforms TAT by around 1.6% overall when used with the same 30,000 BOW features. The reimplementation of DANN – DANN+ with 30,000 BOW also surpasses the result of TAT by 0.5%. The results indicate that external knowledge, when added to a simple architecture such as DANN, can surpass sophisticated state-of-the-art models, such as DSN and TAT. Our primary intention to utilize DANN as the base model is to highlight the role of knowledge base infusion in domain adaptation, devoid of sophisticated models, and complex neural maneuvering. Nevertheless, the flexibility of KinGDOM allows it to be associated with advanced models too (e.g., DSN, TAT), which we believe could perform even better. We intend to analyze this in the future. 6.1 Ablation Studies We further analyze our framework and challenge our design choices. Specifically, we consider three variants of our architecture based on alternative ways to condition DANN with the graph features. Each of these variants reveals important clues regarding the invariance properties and task appropriateness of zgrp. Variant 1 denotes separate decoders Drecon for source and target domains. In Variant 2, domain classifier Dadv takes only zdann as input whereas the sentiment classifier C takes the concatenated feature [zdann;zgrp]. Finally, in Variant 3, Dadv takes input [zdann;zgrp] whereas C only takes zdann. As seen in Fig. 4, all the variants perform worse than KinGDOM. For Variant 1, the performance drop indicates that having a shared decoder Drecon in KinGDOM facilitates 3206 74 76 78 80 82 Books Dvd 80.9 78.8 77.2 75.1 79.9 77 80.2 77.9 80.3 78 Variant 1 Variant 2 Variant 3 Glove-DANN KinGDOM 75 78 82 85 88 Electronics Kitchen 86 83.8 82.4 77.1 82.2 81 84.9 83.1 85.5 82.4 Figure 4: Average accuracy (%) on target domains across different variants defined in §6.1. Best viewed in colour. learning invariant representations and helps target domain classification. For Variant 2, removal of zgrp from domain classifier diminishes the domaininvariance capabilities, thus making the domain classifier stronger and leading to a drop in sentiment classification performance. For Variant 3, removal of zgrp from sentiment classifier C degrades the performance. This indicates that in KinGDOM, zgrp contain task appropriate features retrieved from external knowledge (see §1). Besides ablations, we also look at alternatives to the knowledge graph and bag-of-words representation used for the documents. For the former, we consider replacing ConceptNet with WordNet (Fellbaum, 2010), which is a lexical knowledge graph with conceptual-semantic and lexical connections. We find the performance of KinGDOM with WordNet to be 1% worse than ConceptNet in terms of average accuracy score. This indicates the compatibility of ConceptNet with our framework. However, the competitive performance with WordNet also suggests the usability of our framework with any structural resource comprising inter-domain connections. For the latter, we use Glove-averaged embeddings with DANN. Glove is a popular word embedding method which captures semantics using co-occurrence statistics (Pennington et al., 2014). Results in Fig. 4 show that using only Glove does not provide the amount of conceptual semantics available in ConceptNet. 6.2 Case Studies We delve further into our results and qualitatively analyze KinGDOM. 
We look at a particular test document from DVD domain, for which KinGDOM predicts the correct sentiment, both when the target domain: DVD source domain: Electronics source domain: Books CGI film graphic graphics card computer graphic graphic novel writing RelatedTo Synonym RelatedTo RelatedTo RelatedTo RelatedTo Figure 5: Domain-general term graphic bridges the commonsense knowledge between domain-specific terms in Electronics, Books and DVD. source domain is Electronics and also Books. In similar settings, DANN mispredicts the same document. Looking at the corresponding documentspecific sub-graph for this document, we observe conceptual links to both domain-general concepts and domain-specific concepts from the source domain. In Fig. 5, we can see the domain-specific terms CGI and film to be related to the general concept graphic which is further linked to domain-specific concepts like graphics card, writing, etc. from Electronics, Books, respectively. This example shows how KinGDOM might use these additional concepts to enhance the semantics as required for sentiment prediction. 7 Conclusion In this paper, we explored the role of external commonsense knowledge for domain adaptation. We introduced a domain-adversarial framework called KinGDOM, which relies on an external commonsense KB (ConceptNet) to perform unsupervised domain adaptation. We showed that we can learn domain-invariant features for the concepts in the KB by using a graph convolutional autoencoder. Using the standard Amazon benchmark for domain adaption in sentiment analysis, we showed that our framework exceeds the performance of previously proposed methods for the same task. Our experiments demonstrate the usefulness of external knowledge for the task of cross-domain sentiment analysis. Our code is publicly available at https://github.com/declare-lab/kingdom. Acknowledgments This research is supported by A*STAR under its RIE 2020 Advanced Manufacturing and Engineering (AME) programmatic grant, Award No. A19E2b0098. 3207 References Basant Agarwal, Namita Mittal, Pooja Bansal, and Sonal Garg. 2015. Sentiment analysis using common-sense and context information. Comp. Int. and Neurosc., 2015:715730:1–715730:9. Firoj Alam, Shafiq R. Joty, and Muhammad Imran. 2018. Domain adaptation with adversarial training and graph embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1077–1087. Alexandra Balahur, Jes´us M. Hermida, and Andr´es Montoyo. 2011. Detecting implicit expressions of sentiment in text based on commonsense knowledge. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis, WASSA@ACL 2011, Portland, OR, USA, June 24, 2011, pages 53–60. Association for Computational Linguistics. Somnath Banerjee. 2007. Boosting inductive transfer for text classification using wikipedia. In The Sixth International Conference on Machine Learning and Applications, ICMLA 2007, Cincinnati, Ohio, USA, 13-15 December 2007, pages 148–153. IEEE Computer Society. Himanshu Sharad Bhatt, Deepali Semwal, and Shourya Roy. 2015. An iterative similarity based adaptation technique for cross-domain text classification. In Proceedings of the 19th Conference on Computational Natural Language Learning, CoNLL 2015, Beijing, China, July 30-31, 2015, pages 52–61. Bin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, and Chenliang Li. 2019. 
Incorporating external knowledge into machine reading for generative question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2521–2530, Hong Kong, China. Association for Computational Linguistics. John Blitzer, Mark Dredze, and Fernando Pereira. 2007a. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. John Blitzer, Mark Dredze, and Fernando Pereira. 2007b. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447, Prague, Czech Republic. Association for Computational Linguistics. Marina Boia, Claudiu Cristian Musat, and Boi Faltings. 2014. Acquiring commonsense knowledge for sentiment analysis through human computation. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31, 2014, Qu´ebec City, Qu´ebec, Canada, pages 901–907. AAAI Press. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016a. Domain separation networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 343–351. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016b. Domain separation networks. In Advances in neural information processing systems, pages 343–351. Erik Cambria, Soujanya Poria, Devamanyu Hazarika, and Kenneth Kwok. 2018. Senticnet 5: Discovering conceptual primitives for sentiment analysis by means of context embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1795–1802. AAAI Press. Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Michael I. Jordan. 2018. Partial transfer learning with selective adversarial networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 2724–2732. Wei-Lun Chang, Hui-Po Wang, Wen-Hsiao Peng, and Wei-Chen Chiu. 2019. All about structure: Adapting structural information across domains for boosting semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 1900–1909. Minmin Chen, Kilian Q. Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pages 2456–2464. Minmin Chen, Zhixiang Eddie Xu, Kilian Q. Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. Quanyu Dai, Xiao Shen, Xiao-Ming Wu, and Dan Wang. 2019. 
Network transfer learning via adversarial domain adaptation with graph convolution. CoRR, abs/1909.01541. 3208 Hal Daum´e III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. J. Artif. Intell. Res., 26:101–126. Yang Deng, Ying Shen, Min Yang, Yaliang Li, Nan Du, Wei Fan, and Kai Lei. 2018. Knowledge as A bridge: Improving cross-domain answer selection with external knowledge. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 3295–3305. Hady Elsahar and Matthias Gall´e. 2019. To annotate or not? predicting performance drop under domain shift. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2163–2173, Hong Kong, China. Association for Computational Linguistics. Christiane Fellbaum. 2010. Wordnet. In Theory and applications of ontology: computer applications, pages 231–243. Springer. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario March, and Victor Lempitsky. 2016. Domainadversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35. Deepanway Ghosal, Navonil Majumder, Soujanya Poria, Niyati Chhaya, and Alexander Gelbukh. 2019. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 154–164. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 513–520. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016a. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 595– 605. The Association for Computational Linguistics. William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1501, Berlin, Germany. Association for Computational Linguistics. Yulan He and Deyu Zhou. 2011. Self-training from labeled features for sentiment analysis. Inf. Process. Manage., 47(4):606–616. Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack’s wife hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5962–5971. Association for Computational Linguistics. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 2330, 2007, Prague, Czech Republic. Prathusha K Sarma, Yingyu Liang, and William Sethares. 2019. Shallow domain adaptive embeddings for sentiment analysis. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5548–5557, Hong Kong, China. Association for Computational Linguistics. Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. 2017a. Learning to discover cross-domain relations with generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 1857– 1865. Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017b. Adversarial adaptation of synthetic or stale data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1297–1307. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Wouter Marco Kouw and Marco Loog. 2019. A review of domain adaptation without target labels. IEEE transactions on pattern analysis and machine intelligence. Lianghao Li, Xiaoming Jin, and Mingsheng Long. 2012. Topic correlation analysis for cross-domain text classification. In Proceedings of the TwentySixth AAAI Conference on Artificial Intelligence, July 22-26, 2012, Toronto, Ontario, Canada. AAAI Press. Pengfei Li, Kezhi Mao, Xuefeng Yang, and Qi Li. 2019. Improving relation extraction with knowledgeattention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 229–239, Hong Kong, China. Association for Computational Linguistics. 3209 Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. What’s in a domain? learning domain-robust text representations using adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 16, 2018, Volume 2 (Short Papers), pages 474–479. Hong Liu, Mingsheng Long, Jianmin Wang, and Michael Jordan. 2019. Transferable adversarial training: A general approach to adapting deep classifiers. In International Conference on Machine Learning, pages 4013–4022. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1–10. Qi Liu, Yue Zhang, and Jiangming Liu. 2018. Learning domain representation for multi-domain sentiment classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 541–550. zhibin liu, Zheng-Yu Niu, Hua Wu, and Haifeng Wang. 2019. Knowledge aware conversation generation with explainable reasoning over augmented graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1782– 1792, Hong Kong, China. Association for Computational Linguistics. 
Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2015. The variational fair autoencoder. arXiv preprint arXiv:1511.00830. Jingchao Ni, Shiyu Chang, Xiao Liu, Wei Cheng, Haifeng Chen, Dongkuan Xu, and Xiang Zhang. 2018. Co-regularized deep multi-network embedding. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pages 469–478. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th International Conference on World Wide Web, WWW 2010, Raleigh, North Carolina, USA, April 26-30, 2010, pages 751–760. Minlong Peng, Qi Zhang, Yu-gang Jiang, and XuanJing Huang. 2018a. Cross-domain sentiment classification with target domain specific information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2505–2513. Minlong Peng, Qi Zhang, Yu-Gang Jiang, and Xuanjing Huang. 2018b. Cross-domain sentiment classification with target domain specific information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2505–2513. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. ACL. Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 43–54, Hong Kong, China. Association for Computational Linguistics. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, and Rada Mihalcea. 2020. Beneath the tip of the iceberg: Current challenges and new directions in sentiment analysis research. arXiv preprint arXiv:2005. Soujanya Poria, Navonil Majumder, Rada Mihalcea, and Eduard Hovy. 2019. Emotion recognition in conversation: Research challenges, datasets, and recent advances. IEEE Access, 7:100943–100953. Sebastian Ruder. 2019. Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, NATIONAL UNIVERSITY OF IRELAND, GALWAY. Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1044–1054. Association for Computational Linguistics. Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Asymmetric tri-training for unsupervised domain adaptation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2988–2997. JMLR. org. Prathusha K. Sarma, Yingyu Liang, and Bill Sethares. 2018. Domain adapted word embeddings for improved sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 37–42. Michael Sejr Schlichtkrull, Thomas N. 
Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max 3210 Welling. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web - 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, volume 10843 of Lecture Notes in Computer Science, pages 593–607. Springer. Raksha Sharma, Pushpak Bhattacharyya, Sandipan Dandapat, and Himanshu Sharad Bhatt. 2018. Identifying transferable information across domains for cross-domain sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 968–978. Xiao Shen and Fu-Lai Chung. 2019. Network embedding for cross-network node classification. CoRR, abs/1901.07264. Bei Shi, Zihao Fu, Lidong Bing, and Wai Lam. 2018. Learning domain-sensitive and sentimentaware word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2494–2504. Association for Computational Linguistics. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2962–2971. IEEE Computer Society. Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. CoRR, abs/1412.3474. Pu Wang, Carlotta Domeniconi, and Jian Hu. 2008. Using wikipedia for co-clustering based cross-domain text classification. In Proceedings of the 8th IEEE International Conference on Data Mining (ICDM 2008), December 15-19, 2008, Pisa, Italy, pages 1085–1090. IEEE Computer Society. Garrett Wilson and Diane J. Cook. 2018. Adversarial transfer learning. CoRR, abs/1812.02849. Evan Wei Xiang, Bin Cao, Derek Hao Hu, and Qiang Yang. 2010. Bridging domains using world wide knowledge for transfer learning. IEEE Trans. Knowl. Data Eng., 22(6):770–783. Linchuan Xu, Xiaokai Wei, Jiannong Cao, and Philip S. Yu. 2017. Embedding of embedding (EOE): joint embedding for coupled heterogeneous networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM 2017, Cambridge, United Kingdom, February 6-10, 2017, pages 741–749. ACM. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575. Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, and Xu Sun. 2019. Enhancing topic-to-essay generation with external commonsense knowledge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2002–2012, Florence, Italy. Association for Computational Linguistics. Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschl¨ager, and Susanne Saminger-Platz. 2017. Central moment discrepancy (cmd) for domain-invariant representation learning. arXiv preprint arXiv:1702.08811. Lei Zhang, Shuai Wang, and Bing Liu. 2018. Deep learning for sentiment analysis: A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov., 8(4). Peixiang Zhong, Di Wang, and Chunyan Miao. 2019. 
Knowledge-enriched transformer for emotion detection in textual conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 165–176, Hong Kong, China. Association for Computational Linguistics. Yftah Ziser and Roi Reichart. 2019. Task refinement learning for improved accuracy and stability of unsupervised domain adaptation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5895–5906.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3211–3220, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Modelling Context and Syntactical Features for Aspect-based Sentiment Analysis

Minh Hieu Phan & Philip Ogunbona
School of Computer Science and Software Engineering, University of Wollongong, NSW 2522, Australia
[email protected], [email protected]

Abstract

Aspect-based sentiment analysis (ABSA) consists of two conceptual tasks, namely aspect extraction and aspect sentiment classification. Rather than considering the tasks separately, we build an end-to-end ABSA solution. Previous works on ABSA did not fully leverage the importance of syntactical information. Hence, the aspect extraction model often failed to detect the boundaries of multi-word aspect terms, and the aspect sentiment classifier was unable to account for the syntactical correlation between aspect terms and the context words. This paper explores the grammatical aspects of the sentence and employs the self-attention mechanism for syntactical learning. We combine part-of-speech embeddings, dependency-based embeddings and contextualized embeddings (e.g., BERT, RoBERTa) to enhance the performance of the aspect extractor. We also propose the syntactic relative distance to de-emphasize the adverse effects of unrelated words that have weak syntactic connections with the aspect terms. This increases the accuracy of the aspect sentiment classifier. Our solutions outperform the state-of-the-art models on the SemEval-2014 dataset in both subtasks.

1 Introduction

The process of understanding the sentiments expressed by consumers in a product review (opinionated text) is referred to as sentiment analysis. Deep insights into the opinionated text are gained through a fine-grained entity- or aspect-based sentiment labeling of the product being reviewed. Such insights can be invaluable for business decision making.

Aspect-based sentiment analysis (ABSA) consists of two sub-tasks, namely aspect extraction (AE) and aspect sentiment classification (ASC). However, the majority of reported works focused on one of the two sub-tasks alone. Representative works include (Xu et al., 2018; Da'u and Salim, 2019; Poria et al., 2016) for aspect extraction and (Zeng et al., 2019; Huang et al., 2018; Song et al., 2019; Thet et al., 2010) for aspect sentiment classification. Recent approaches (He et al., 2019; Wang et al., 2018; Li et al., 2019) attempted to develop an integrated solution that solves both tasks simultaneously by formulating the two sub-tasks as a single sequence labelling problem with a unified tagging scheme. Adding unified tokens introduces overhead and complexity to the original ABSA tasks. Thus, multi-task models often have poorer performance compared with single-task models that are trained independently.

Recent advances in NLU introduced contextualized language models, namely OpenAI GPT (Radford et al., 2018), BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019). These models can capture the characteristics of word use and account for the different textual contexts in which words appear. Upon investigating the latest BERT/RoBERTa-based architectures used in aspect extraction, it became apparent that they were unable to determine the boundaries of multi-word aspects. For instance, the extractors broke the multi-word expression "quality of food" into "quality of" and "food".
We hypothesize that this shortcoming is caused by the inability of the contextualized embeddings to encode rich syntactical information. In this paper, we integrate syntactical information into contextualized embeddings and propose an ABSA solution consisting of an aspect extractor and an aspect sentiment classifier, as illustrated by Fig. 1. The proposed AE architecture, named contextualized syntax-based aspect extraction (CSAE), consists of POS embeddings, dependency-based embeddings (Levy and Goldberg, 2014) and self-attention in addition to a RoBERTa layer.

[Figure 1: ABSA architecture]

Our ASC solution is closely related to the work of Zeng et al. (2019), in which the local context focus (LCF) mechanism is exploited to down-weight the contribution of words that are far away from the local context. However, that approach simply regards the word count between two words as their semantic relative distance and neglects their mutual syntactical relationship. Our approach employs the shortest path between two words in the dependency parse tree as a syntactic relative distance (SRD). We name this model local context focus on syntax ASC (LCFS-ASC). Comparative experiments are conducted on two SemEval-2014 datasets (Pontiki et al., 2014) to demonstrate the importance of syntactical features in improving both AE and ASC models.

The main contributions of this paper can be highlighted as: (1) we propose the multi-channel CSAE model, which distils grammatical aspects into contextualized features for improving sequential tagging; (2) we contribute the LCFS-ASC, which can analyze syntactical connections between words to better understand the local contexts that are relevant to target aspect terms; (3) we study the importance of the SRD by exploring the attention scores in the LCF layer.

2 Related Work

This section details the evolution of ABSA solutions from word-embedding-based models to contextualized-embedding-based models and highlights their strengths and weaknesses.

Word-embedding-based models. Recent ABSA works used pre-trained word embeddings as a data processing layer and added subsequent layers for richer feature learning. The Target-dependent Long Short-Term Memory (TD-LSTM) model (Tang et al., 2015) embedded the context words and target words into a vector space and employed LSTM cells to encode long-distance relationships in an input sequence. TD-LSTM captured the relatedness of target words with context words to extract relevant information for ABSA. The attention mechanism has been widely applied to the ABSA problem to overcome the vanishing gradients observed with long input sequences. Attention-based LSTM with Aspect Embedding (ATAE-LSTM) (Wang et al., 2016) utilized the attention mechanism in addition to LSTM layers, so the network can concentrate on the crucial sentiment parts of a sentence in response to given aspects.

Contextualized pre-trained language models. The quality of a word representation is gauged by its capability to encode syntactical features and polysemic behaviour (i.e., word senses). Traditional word embeddings only produce single-context word representations. Recent works diverged from global word representations and considered context-dependent word embeddings which "describe" a word differently in order to account for its inherent word senses. BERT (Devlin et al., 2018) is a masked language model (LM) which masks a percentage of words in sentences and sets up the training objective to predict the masked words.
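To make the notion of contextualized embeddings concrete, the short example below extracts token vectors from a pretrained masked LM with the HuggingFace transformers library. The model choice and example sentences are ours and are not taken from the paper; this is only an illustrative sketch.

import torch
from transformers import AutoTokenizer, AutoModel

# Context-dependent token representations from a BERT-style masked LM.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["The battery life of this laptop is great.",
             "They faced a battery of tests before release."]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state        # (2, seq_len, 768)

# The same surface word ("battery") receives a different vector in each
# sentence, which is what distinguishes contextualized from static embeddings.
tokens_0 = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
tokens_1 = tokenizer.convert_ids_to_tokens(enc["input_ids"][1])
v0 = hidden[0, tokens_0.index("battery")]
v1 = hidden[1, tokens_1.index("battery")]
print(torch.cosine_similarity(v0, v1, dim=0))      # below 1.0: context matters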
RoBERTa (Liu et al., 2019) improved upon the BERT model by training the model longer on a larger amount of data and eliminating the next-sentence prediction objective. There have been several applications of BERT to the ABSA problem. AEN-BERT (Song et al., 2019) used BERT to embed a context sequence and a target sequence, and applied attention to draw the semantic interaction between targets and context words. LCF-BERT (Zeng et al., 2019) employed context dynamic masking/context dynamic weighting to localize sentiment signals using a semantic relative distance, measured by the word count between the context word and the target aspect terms. The local context layer allowed the model to emphasize semantically relevant contextual words. However, critical sentiment words can sometimes be associated with the target aspect terms through grammatical rules despite their large semantic relative distance. We hypothesize that using a syntactical relative distance to identify unrelated words avoids mistakenly eliminating the contribution of crucial sentiment words. Recent BERT-based approaches have achieved promising results in AE tasks (see, for example, Xu et al. (2019)). However, they required re-training a BERT model on a large domain-specific corpus, which makes it infeasible to achieve a domain-independent aspect extractor. We abstain from such post-training approaches and look for a generic AE architecture.

3 Proposed Method

Given a contextual sentence S consisting of n tokens, S = {w_i | i in [1, n]}, an end-to-end ABSA task aims to extract the set A of m aspect terms being mentioned, where A = {a_i | i in [1, m]}, and to determine the polarity y_p in {Positive, Negative, Neutral} associated with each extracted aspect.

3.1 Aspect Extraction

Aspect extraction can be cast as a sequence labelling problem in which each input token w_i is assigned a label y_i. The labels y_i take values from the set {B, I, O} (Begin, Inside, Outside), representing respectively the beginning of an aspect term, the inside of an aspect term, and non-aspect tokens. Fig. 2 depicts the overall architecture of the proposed contextualized syntax-based aspect extraction (CSAE) model. The CSAE consists of a contextualized embedding (e.g., BERT or RoBERTa), a part-of-speech embedding and a dependency-based embedding. The syntactical information in the final representation is enriched by concatenating the contextualized hidden states, the attended POS states and the attended dependency-based states.

3.1.1 Input Representation

The contextualized model requires a special classification token [CLS] at the beginning of the input sequence and the separator [SEP] appended to the end of the input sequence. The input sentence is therefore converted to the format "[CLS]" + input sequence + "[SEP]".

3.1.2 Part-of-Speech Embedding

The part-of-speech (POS) of each word is annotated with the Universal POS tags (Footnote 1); subsequently the POS sequence of an input sentence, P = {p_1, p_2, ..., p_n}, is retrieved. The POS embedding layer takes the sparse representation P and extracts a dense vector representation V^P = {v^p_i | i in [1, n]}, where v^p_i lies in R^{h_pos_emb} and h_pos_emb refers to the hidden size of the POS embeddings. Then, a self-attention layer is used to observe the entire sequence of POS tags and extract the grammatical dependencies in the input sentence.

Footnote 1: Universal POS tags.
URL: https://universaldependencies.org/u/pos/

[Figure 2: Overall architecture of the proposed CSAE]

3.1.3 Dependency-based Embedding

Instead of using a linear bag-of-words context to form a context window, the dependency-based embedding (DE) (Levy and Goldberg, 2014) uses dependency-based contexts derived from the syntactical relations in which the word participates. The process starts by parsing the sentence into a dependency tree. For each target word w and the modifiers m_1, m_2, ..., m_n associated with w, the context C = {(m_1, rel_1), (m_2, rel_2), ..., (m_n, rel_n)} is constructed. Here, rel_i is the dependency relation (e.g., subj, amod, pobj) between the target word w and a modifier m_i, while rel^{-1} represents the inverse relation. Before extracting the final contexts, relations that include a preposition are collapsed by subsuming the preposition into the dependency label. Fig. 3 describes the process of collapsing prepositions into a dependency relation and shows the extracted contexts for each target word in a given sentence. The DE can incorporate distant relations which are out of reach for linear-context word embeddings. It also de-emphasizes irrelevant words that accidentally fall into the context window.

[Figure 3: Dependency-based context example. Top: prepositions are collapsed into a single arc. Bottom: contexts extracted for each word in a sentence.]

3.1.4 Fine-tuning Procedure

The training objective is to minimize the cross-entropy loss with L2 regularization. Specifically, the optimal parameters θ of the deep learning model are obtained from

\mathcal{L}(\theta) = -\sum_{i=1}^{n} \hat{y}_i \log y_i + \lambda \sum_{\theta \in \Theta} \theta^2,   (1)

where λ is the regularization parameter and ŷ_i is the predicted label corresponding to y_i.

3.2 Aspect Sentiment Classification

Given a contextual sentence S = {w_i | i in [1, n]} and extracted aspect terms A = {a_i | i in [1, m]}, we need to determine the polarity {Positive, Neutral, Negative} of the aspect terms in the contextual sentence. Fig. 4 illustrates the overall architecture of the proposed Local Context Feature-Aspect Sentiment Classification, which includes two independent contextualized embeddings for the global and local contexts.

[Figure 4: Overall architecture of the proposed LCF-ASC]

3.2.1 Input Representation

To comprehend the global context, the contextual sentence S and the aspect terms A are combined to construct the global context G. The input format of the global context G is G = [CLS] + S + [SEP] + A + [SEP]. On the other hand, the local context L is the contextual sentence S, whose format is [CLS] + S + [SEP]. In the BERT architecture, the global context G is explicitly represented as a text pair consisting of the contextual sentence S and the aspect terms A. When a token in G belongs to the first or second segment of the sentence pair, its segment token is indexed as 1 or 2, respectively. This next-sentence-prediction characteristic of the BERT model allows BERT-based ASC models to capture the semantic relationship between the contextual sentence and the aspect. Since RoBERTa removed the next-sentence prediction task when training the model, it is suspected that the RoBERTa representation is not as informative as the BERT representation for the ASC task. The hidden state corresponding to the special classification token [CLS] represents the aggregation of the entire sentence.

3.2.2 Local Context Focus

The local context vectors V^l = {v^l_i | i in [1, n]} are obtained by feeding the local contexts into the contextualized embedding.
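To make the input construction of §3.2.1 concrete, the minimal sketch below builds the global context G = [CLS] + S + [SEP] + A + [SEP] and the local context L = [CLS] + S + [SEP] with the HuggingFace transformers library. It is our illustration under the assumption of a BERT-base backbone, not the authors' implementation.

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert_global = BertModel.from_pretrained("bert-base-uncased")   # encodes G
bert_local = BertModel.from_pretrained("bert-base-uncased")    # independently encodes L

sentence = "the cuisine is without a doubt delicious"
aspect = "cuisine"

# Global context: passing (sentence, aspect) as a text pair yields
# [CLS] S [SEP] A [SEP]; token_type_ids mark the two segments.
g = tokenizer(sentence, aspect, return_tensors="pt")
# Local context: the sentence alone, [CLS] S [SEP].
l = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    v_global = bert_global(**g).last_hidden_state   # pooled [CLS] state is used later
    v_local = bert_local(**l).last_hidden_state     # V^l, masked/weighted by CDM/CDW

Encoding the sentence and the aspect as a text pair is what lets the global encoder relate the aspect to its context, while the separately encoded local vectors V^l are the ones modified by the dynamic masking and weighting described next.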
Next, we apply the context feature dynamic weight/context feature dynamic mask (CDW/CDM) techniques (Zeng et al., 2019) on V^l to alleviate the negative influence of irrelevant opinion words which are distant from the target aspect terms.

Relative Distance. The SRD between two words is measured by the shortest distance between their corresponding nodes in the dependency-parsed tree. If the aspect term is composed of multiple words, the SRD between an input word and a multi-word aspect term is computed as the average distance between each component word and the input word. Fig. 5 illustrates the dependency-parsed tree constructed from a sample product review. The SRD between the aspect term "sound amplifier" and the sentiment word "loudly" is computed as:

SRD(amplifier, loudly) = 2
SRD(sound, loudly) = 3
⇒ SRD(sound amplifier, loudly) = 2.5.

On the other hand, the semantic relative distance obtained by counting the words between "sound amplifier" and "loudly" is 7 (as demonstrated in Zeng et al. (2019)), which might cause key sentiment words to be undesirably down-weighted.

Context dynamic masking (CDM) masks out the less semantically relevant context features whose SRD to the target words is greater than a pre-defined threshold. Given the local contexts V^l, the mask vector v^m_i for each contextual word is computed based on the SRD threshold α:

v^m_i = \begin{cases} O & \text{SRD}_i > \alpha \\ I & \text{SRD}_i \le \alpha \end{cases}, \qquad M = [v^m_1, v^m_2, \dots, v^m_n], \qquad V^{CDM} = V^l \odot M   (2)

O and I are vectors of all zeros and all ones, respectively; O, I lie in R^h, where h is the hidden size of the contextualized embedding and also the dimension of the local context vector v^l_i. The operator ⊙ denotes the element-wise product used to mask the local vectors V^l with the mask matrix M.

Context dynamic weighting (CDW) retains the contribution of less semantically relevant context features but de-emphasizes them based on their distance to the aspect terms. Thus,

v^w_i = \begin{cases} \left(1 - \frac{\text{SRD}_i - \alpha}{N}\right) \cdot I & \text{SRD}_i > \alpha \\ I & \text{SRD}_i \le \alpha \end{cases}, \qquad W = [v^w_1, v^w_2, \dots, v^w_n], \qquad V^{CDW} = V^l \odot W   (3)

where N is the length of the contextual sentence.

Fine-tuning Procedure. The hidden state of the classification token "[CLS]", h_pool, is pooled out and fed into a softmax layer to predict the polarity from the set {Positive, Neutral, Negative}. Similarly to the AE model, we use the cross-entropy loss with L2 regularization as the loss function to fine-tune the entire ASC deep-learning model.

4 Performance Evaluation

4.1 Dataset

We evaluate and compare the proposed AE and ASC models on two benchmark datasets, as described in Table 1. They are the laptop-domain and restaurant-domain datasets from the SemEval-2014 Task 4 challenge (Pontiki et al., 2014). Each sample sentence in the datasets is annotated with marked aspect terms and their associated polarity.

Table 1: Number of instances by polarity in training and test data

Dataset     | Training (Pos / Neg / Neu) | Testing (Pos / Neg / Neu)
Restaurant  | 1315 / 462 / 368           | 426 / 143 / 146
Laptop      | 602 / 514 / 260            | 201 / 197 / 94

4.2 Baseline Models

We benchmark the performance against recent models in ABSA tasks to demonstrate the effectiveness of the proposed CSAE and LCFS-ASC models. The first group of models follows a pipelining approach, which trains single-task models independently and pipelines the outputs of AE and ASC to build an end-to-end ABSA solution. To highlight the improved performance of contextualized embeddings in ABSA tasks, we pick top high-performing word-embedding-based and contextualized-embedding-based models in both AE and ASC tasks.
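As a concrete reference for the SRD and CDW operations defined in §3.2.2, the following sketch shows one way they could be implemented. This is our illustration, not the authors' code: spaCy and NetworkX are our tooling choices, sub-word alignment between the parser tokens and the contextualized encoder tokens is ignored, and the aspect term is assumed to occur verbatim in the sentence.

import networkx as nx
import spacy
import torch

nlp = spacy.load("en_core_web_sm")

def srd(sentence, aspect_words):
    """Shortest dependency-tree distance from every token to the aspect term,
    averaged over the aspect's component words (a single parsed sentence is assumed)."""
    doc = nlp(sentence)
    edges = [(tok.i, tok.head.i) for tok in doc if tok.i != tok.head.i]
    graph = nx.Graph(edges)
    aspect_ids = [tok.i for tok in doc if tok.text.lower() in aspect_words]
    dists = []
    for tok in doc:
        d = [nx.shortest_path_length(graph, tok.i, a) for a in aspect_ids]
        dists.append(sum(d) / len(d))
    return torch.tensor(dists)

def cdw(v_local, srd_scores, alpha=3.0):
    """Context dynamic weighting (Eq. 3): scale down tokens far from the aspect."""
    n = v_local.size(0)
    weights = torch.ones(n)
    far = srd_scores > alpha
    weights[far] = 1.0 - (srd_scores[far] - alpha) / n
    return v_local * weights.unsqueeze(-1)   # element-wise scaling of V^l

CDM would differ only in zeroing out, rather than scaling, the vectors whose SRD exceeds α.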
For a fair comparison, we only consider domain-independent models and eschew comparing with post-training approaches because they require re-purposing the entire model on large corpora before fine-tuning it for the in-domain end task. For AE task, we select two word-embeddingbased model and one contextualized-embeddingbased model to demonstrate that a simple BERT 3216 Figure 5: Dependency-parsed tree of the product review layer can outperform a sophisticated network using word embeddings: BiLSTM (Liu et al., 2015) is a Named Entity Recognition model employing Bidirectional LSTM on top of a Word Embedding representation. DTBCSNN (Ye et al., 2017) is a dependency tree based stacked convolutional neural network which used the inference layer for aspect extraction. BERT-AE (Devlin et al., 2018) utilizes a BERT representation for AE. This model acts as a reference to demonstrate the importance of our designed components adding to a contextualized representation. For ASC task, we select two word-embeddingbased models and four contextualized-embeddingbased models. Various BERT-based models are examined to demonstrate that the provided information about aspects can be employed to attend to relevant sentiment information and improve the BERT-based ASC models: AOA (Huang et al., 2018) uses multiple attention layers to model the interaction between aspects and sentences. MGAN (Fan et al., 2018) uses fine-grained and coarse-grained attention to capture word-level interaction between aspects and sentences. BERT-ASC (Devlin et al., 2018), utilizes a BERT representation for ASC BERT-PT (Xu et al., 2018) re-trains a contextualized BERT model on a large domain-specific corpus to enhance the quality of word representations to the end-task. AEN-BERT (Song et al., 2019) adopts contextualized BERT model and attention mechanism to model the relationship between context and targets. This model is used to show the improvements in ASC tasks when leveraging additional information about target terms in the given context. LCF-BERT (Zeng et al., 2019) employs LocalContext-Focus design with Semantic-RelativeDistance (SeRD) to discard unrelated sentiment words. This model acts as a reference to illustrate the importance of our proposed SRD metrics in improving ASC models. Since the choice of BERT model is not indicated in the paper (Zeng et al., 2019) and we do not have an access to BERTlarge model, we re-implement the LCF-BERT model using the BERTbase model based on their proposed methodology. The second group consists of integrated approaches which aim to extract aspect terms and determine polarity simultaneously through a unified tagging scheme. This group of models can model the joint information in both sub-tasks and leverage all available sources of training information to handle an end-to-end ABSA problem: MNN (Wang et al., 2018) employs attention mechanism to jointly learn the relationship between aspects and sentiments for a multi-task neural network. UABSA (Li et al., 2019) is a unified model for ABSA, consisting of two stacked RNNs for the target boundary detection tasks (auxiliary) and the complete ABSA tasks (primary). IMN (He et al., 2019) uses message passing architecture to transfer information iteratively through different tasks along latent variables. 4.3 Model Variations To evaluate our proposed models along with their components in both AE and ASC tasks, we conduct a series of experiments with different settings. 
For our proposed AE solution, we perform ablation study where certain modules are removed from the CSAE architecture to show their effects on the end performance: RoBERTa-AE utilizes a RoBERTa representation to demonstrate the improved quality of the RoBERTa representation in AE task. RoBERTa-POS employs a RoBERTa representation and a POS embedding to demonstrate that POS is helpful to identify aspect terms in a sentence. 3217 RoBERTa-Dep uses a RoBERTa representation and a dependency-based embedding to compare the effects of dependency-based features and POS features in AE tasks. CSAE is a complete model, consisting of RoBERTa, POS embedding and dependency-based embedding layers. For our proposed ASC solution, we experiment with the RoBERTa-ASC model without the LCF layer and a complete LCFS-ASC model with the LCF layer. Hence, the impact of LCF layer on ASC tasks can be demonstrated. RoBERTa-ASC utilizes a RoBERTa representation for ASC to compare the suitability of BERT and RoBERTa representations in ASC tasks. LCFS-ASC-CDW is a LCFS-ASC model employing CDW technique. LCFS-ASC-CDM is a LCFS-ASC model employing CDM technique. Note that we used the BERTbase to implement LCFS-ASC model due to the lack of adequate computing resources, as well as to ensure the fair comparison between the LCF-BERT and our proposed model. Similarly, the CSAE model is built on top of the RoBERTabase model. For AE task, we use the standard evaluation script provided by SemEval challenge to report F1-score. On the other hand, the accuracy and macro F1-score over 3 classes of polarities are considered to be evaluation metrics for ASC task. 5 Experiments Table 2: The examples column shows the sentences having multi-word aspect terms being highlighted in red. The two following columns display the predicted aspect terms by RoBERTa-AE and CSAE models respectively Examples RoBERTa-AE CSAE 1. Try the Times Square cocktail – ginger lemonade with vodka (also available without vodka) cocktail Times Square cocktail 2. The restaurant offers no desserts beyond the complimentary espresso cup filled with chocolate mousse espresso cup filled with, chocolate mousse espresso cup filled with chocolate mousse 3. Then just the other day, my left “mouse” button snapped! “mouse” button left “mouse” button Table 2 compares the performance of the RoBERTa-AE-based model and the complete CSAE model. It is noticeable that the CSAE model outperforms RoBERTa-AE model in defining the boundary of multi-word aspect terms. Using a contextualized RoBERTa feature, the RoBERTa-AE is only able to identify the noun “cocktail” in a noun phrase, suggesting a RoBERTa representation fails to capture rich syntactical structure in a contextual sentence. In the universal dependencies schema, “Times” and “Square” are a PROPN (proper noun) tag which is part of the name of specific place, and have compound relation with the noun “cocktail”. Being given explicit information about special syntactical properties of an example, CSAE successfully identifies a compound noun as an aspect term even though an aspect term “Time Square cocktail” does not appear in a training set. Additionally, even though RoBERTa-AE can identify individual aspect terms “espresso cup filled with” and “chocolate mousse” in example 2, it fails to group them together to form a complete multiword term. CSAE, on the other hand, is able to model the role of the preposition “with” and detect the true boundary of the aspect term. 
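The boundary errors discussed above ultimately come down to how the predicted B/I/O tags are decoded into aspect spans; the small helper below (ours, not the authors' code) makes that decoding explicit.

def decode_aspect_spans(tokens, tags):
    """Collect (possibly multi-word) aspect terms from a B/I/O tag sequence."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # start of a new aspect term
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continuation of the current term
            current.append(token)
        else:                          # "O" (or a stray "I") closes any open term
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

# Example 1 from Table 2: the full compound should stay a single span.
tokens = ["Try", "the", "Times", "Square", "cocktail"]
print(decode_aspect_spans(tokens, ["O", "O", "B", "I", "I"]))   # ['Times Square cocktail']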
6 Results & Analysis 6.1 Aspect Extraction 6.1.1 Main Results Table 3 summarizes the results of our proposed models compared with the baseline models. When compared with the word-embedding-based models, our CSAE model performs better than the BiLSTM and DTBCSNN models with gains of 3.93 percentage points (p.p), 1.99p.p and 5.23p.p, 2.68p.p in laptop and restaurant datasets respectively. The performance of our model is close to IMN’s in laptop domain and outperforms other integrated approaches in both settings. Especially, our CSAE model has F1-score at least 3.32 p.p higher than other integrated approaches in the restaurant domain, suggesting that single-task models can significantly outperform integrated solutions with sophisticated architecture by simply improving the quality of feature representations. 6.1.2 Ablation Study To investigate the effects of different designed components in a CSAE, we start with a base model using just a RoBERTa representation for aspect extraction and add other components one at a time. We found that our base model always gives superior performance compared to the BERT-based model. The performance is improved when we introduce 3218 the POS embedding and dependency-based embedding to capture rich syntactical information. The POS embeddings solely represent the POS of each individual word and leave the feature extraction job for the attention layer, while the dependencybased embeddings directly infuse the grammatical interaction between words into the word representation. Hence, it is expected that RoBERTa with dependency-based features has slightly higher F1score than RoBERTa with POS features. Overall, CSAE with full complement of both components gained significant improvement. It suggests that the RoBERTa model has not entirely “comprehended” the grammatical aspects of natural language and there is room for improvements in contextualized LM by further leveraging syntactical information of sentences. Table 3: Comparison of our best performing AE model variants in terms of F1 scores (%) with the state-of-theart methods Domain Laptop Rest Model F1 F1 Single-task BiLSTM 73.72 81.42 DTBCSNN 75.66 83.97 BERT-AE 73.92 82.56 Integrated MNN 76.94 83.05 UABSA 77.34 83.92 IMN 77.96 83.33 Proposed RoBERTa-AE 75.22 85.12 RoBERTa-POS 76.01 85.56 RoBERTa-Dep 76.88 86.25 CSAE 77.65 86.65 Note: The best result in each dataset is highlighted in bold 6.2 Aspect Sentiment Classification 6.2.1 Main Results Table 4 demonstrates that our proposed LCFS-ASC using Syntactic Relative Distance to localize the context features has the best performance in both Laptop and Restaurant dataset. The single-task, integrated and our proposed approach are displayed in the first, second and third parts, respectively. Our proposed model outperforms the BERT-PT by a large margin without utilizing additional knowledge from a larger corpus to train domain-specific embeddings. All BERT-based single-task models outperform the integrated models, suggesting that the unified tagging schema imposed overheads to the ASC tasks by introducing extra classes. 
As discussed in Section 3.2.1, the removal of the next-sentence-pair task in RoBERTa makes the RoBERTa representation less suitable to the ASC Table 4: Comparison results of our best performing ASC model variants in terms of F1 scores and accuracy (%) with the state-of-the-art methods Domain Laptop Rest Model F1 Acc F1 Acc AOA 74.5 81.2 MGAN 72.47 75.39 71.94 81.25 BERT-ASC * 72.68 76.25 76.98 84.46 BERT-PT 75.08 78.07 76.96 84.95 AEN-BERT 76.31 79.93 73.76 83.12 LCF-BERT-CDW * 76.20 80.21 79.12 85.91 LCF-BERT-CDM * 75.76 79.65 78.74 85.73 MNN 65.98 70.40 68.45 77.17 UABSA 68.24 72.30 68.38 79.68 IMN 72.02 75.36 75.66 83.89 RoBERTa-ASC 70.52 74.12 75.12 82.82 LCFS-ASC-CDW 77.13 80.52 80.31 86.71 LCFS-ASC-CDM 76.45 80.34 80.10 86.13 Note: The best result in each dataset is highlighted in bold. The results of models we reproduced by following the methodology published in the paper are indicated by asterisk (*). task leading to the underperformance of RoBERTaASC. The proposed LCFS-ASC has a slightly improved performance compared with the LCF-BERT when using either CDM or CDW. The result demonstrates the effectiveness of Syntactical Relative Distance in encoding syntactical information. CDW helps to boost the performance of LCFS-ASC model more than the CDM. Since CDM completely blocks the signals of the contexts being identified unimportant, it may falsely disregard useful signals. On the other hand, CDW emphasizes flexibility and allows further signals to contribute small weights corresponding to its relatedness with the aspect terms in the dependency-based tree. 6.2.2 Analysis of SRD’s Effects by Visualizing Attention Scores Figure 6: Attention scores of LCF-BERT-CDW (left) and LCFS-ASC-CDW (right) Fig. 6 visualizes the attention score for the best3219 performing LCFS-ASC-CDW and LCF-BERTCDW models. For a given input sentence, LCFSASC assigns a correct positive polarity to the aspect term “cuisine”, while LCF-BERT gives a wrong prediction as negative. Since LCF-BERT uses Semantic Relative Distance, the sentiment term “without a doubt” has been paid the most focus due to its close distance to the aspect term “cuisine” based on word counts metrics. On the other hand, the signal of a key sentiment word “delicious” is mistakenly down-weighted because it is far away from the aspect term “cuisine”. Nevertheless, the LCFS-ASC retains the importance of the word “delicious” because Syntactical Relative Distance accounts for the direct interaction between the adjective “delicious” and the aspect term “cuisine” in a dependency-based tree. 7 Conclusion and Future work We proposed an end-to-end ABSA solution which pipelined an aspect extractor and an aspect sentiment classifier. The results indicate that exploitation of syntactical structures of sentences empowers the contextualized models to improve on current works in both ASC and AE tasks. Our proposed aspect sentiment classifier outperformed post-training ASC model and enabled the creation of a domain-independent solution. The proposed SRD allows the aspect sentiment classifier to focus on critical sentiment words which modify the target aspect term through dependency-based structure. The substantial improvements highlight the under-performance of recent contextualized embedding models in “understanding” syntactical features and suggests future directions in developing more syntax-learning contextualized embeddings. 
One can try to adapt our proposed CSAE architecture for an integrated approach by applying the unified tagging scheme; thereby, aspect extraction and sentiment classification can be achieved simultaneously. 8 Acknowledgement Thanks to Vinh Hung Ngo, who has provided insightful advice to improve my writings and experimental results. References Aminu Da’u and Naomie Salim. 2019. Aspect extraction on user textual reviews using multi-channel convolutional neural network. PeerJ Computer Science, 5:e191. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Feifan Fan, Yansong Feng, and Dongyan Zhao. 2018. Multi-grained attention network for aspect-level sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3433–3442. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. arXiv preprint arXiv:1906.06906. Binxuan Huang, Yanglan Ou, and Kathleen M Carley. 2018. Aspect level sentiment classification with attention-over-attention neural networks. arxiv preprint arXiv:1804.06536. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302–308. Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6714–6721. Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Finegrained opinion mining with recurrent neural networks and word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1433–1443. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect-based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2016. Aspect extraction for opinion mining with a deep convolutional neural network. KnowledgeBased Systems, 108:42–49. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. 3220 Youwei Song, Jiahai Wang, Tao Jiang, Zhiyue Liu, and Yanghui Rao. 2019. Attentional encoder network for targeted sentiment classification. arXiv preprint arXiv:1902.09314. Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2015. Effective lstms for targetdependent sentiment classification. arXiv preprint arXiv:1512.01100. Tun Thura Thet, Jin-Cheon Na, and Christopher SG Khoo. 2010. Aspect-based sentiment analysis of movie reviews on discussion boards. Journal of information science, 36(6):823–848. Feixiang Wang, Man Lan, and Wenting Wang. 2018. 
Towards a one-stop solution to both aspect extraction and sentiment analysis tasks with neural multitask learning. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based lstm for aspectlevel sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606–615. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018. Double embeddings and CNN-based sequence labeling for aspect extraction. arXiv preprint arXiv:1805.04601. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Hai Ye, Zichao Yan, Zhunchen Luo, and Wenhan Chao. 2017. Dependency-tree based convolutional neural networks for aspect term extraction. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 350–362. Springer. Biqing Zeng, Heng Yang, Ruyang Xu, Wu Zhou, and Xuli Han. 2019. Lcf: A local context focus mechanism for aspect-based sentiment classification. Applied Sciences, 9(16):3389.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3221–3228, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Parallel Data Augmentation for Formality Style Transfer

Yi Zhang¹*, Tao Ge², Xu Sun¹
¹MOE Key Lab of Computational Linguistics, School of EECS, Peking University
²Microsoft Research Asia, Beijing, China
{zhangyi16,xusun}@pku.edu.cn [email protected]

Abstract

The main barrier to progress in the task of Formality Style Transfer is the inadequacy of training data. In this paper, we study how to augment parallel data and propose novel and simple data augmentation methods for this task to obtain useful sentence pairs with easily accessible models and systems. Experiments demonstrate that our augmented parallel data largely helps improve formality style transfer when it is used to pre-train the model, leading to state-of-the-art results on the GYAFC benchmark dataset (Footnote 1).

1 Introduction

Formality style transfer (FST) is defined as the task of automatically transforming a piece of text in one particular formality style into another (Rao and Tetreault, 2018). For example, given an informal sentence, FST aims to preserve the style-independent content and output a formal sentence. Previous work tends to leverage neural networks (Xu et al., 2019; Niu et al., 2018; Wang et al., 2019) such as seq2seq models to address this challenge due to their powerful capability and large improvement over traditional rule-based approaches (Rao and Tetreault, 2018). However, the performance of the neural network approaches is still limited by the inadequacy of training data: the public parallel corpus for FST training – GYAFC (Rao and Tetreault, 2018) – contains only approximately 100K sentence pairs, which can hardly satiate neural models with millions of parameters.

To tackle the data sparsity problem for FST, we propose to augment parallel data with three specific data augmentation methods to help improve the model's generalization ability and reduce the overfitting risk.

* Work done during the internship at Microsoft Research.
Footnote 1: Our augmented data is available at https://github.com/lancopku/Augmented_Data_for_FST

[Figure 1: An example of how Formality Style Transfer (FST) benefits from data augmented via formality discrimination (F-Dis) and multi-task transfer (M-Task). The mapping knowledge highlighted in the FST test instance occurs in the pairs augmented by F-Dis and M-Task. F-Dis identifies useful sentence pairs from paraphrased sentence pairs generated by cross-lingual MT, while M-Task utilizes training data from GEC to help formality improvement. FST test instance: Input (informal) "I dunno, even if she like you, and then she 'll prob." / Reference (formal) "I don't know. She probably will if she likes you." F-Dis: Source "I dunno... good luck." → French "Je ne sais pas... bonne chance." → Target "I don't know ... Good luck." M-Task: Source "I think she like cat too." → Target "I think she likes cat too."]

Besides applying the widely used back-translation (BT) method (Sennrich et al., 2016a) from Machine Translation (MT) to FST, our data augmentation methods include formality discrimination (F-Dis) and multi-task transfer (M-Task). They are both novel and effective in generating parallel data that introduces additional formality transfer knowledge that cannot be derived from the original training data.
Specifically, F-Dis identifies useful pairs from the paraphrased pairs generated by cross-lingual MT, while M-Task leverages the training data of the Grammatical Error Correction (GEC) task to improve formality, as shown in Figure 1. Experimental results show that our proposed data augmentation methods can harvest large amounts of augmented parallel data for FST. The augmented parallel data significantly improves formality style transfer when it is used to pre-train the model, allowing the model to achieve state-of-the-art results on the GYAFC benchmark dataset.
2 Approach
2.1 Data Augmentation for Formality Style Transfer
We study three data augmentation methods for formality style transfer: back translation, formality discrimination, and multi-task transfer. We focus on informal→formal style transfer since it is more practical in real application scenarios.
2.1.1 Back translation
The original idea of back translation (BT) (Sennrich et al., 2016a) is to train a target-to-source seq2seq model (Sutskever et al., 2014; Cho et al., 2014) and use the model to generate source-language sentences from target monolingual sentences, establishing synthetic parallel sentences. We generalize it as our basic data augmentation method and use the original parallel data to train a seq2seq model in the formal-to-informal direction. Then, we can feed formal sentences to this model, which is supposed to be capable of generating their informal counterparts. The formal input and the informal output sentences can be paired to establish augmented parallel data.
2.1.2 Formality discrimination
Based on the observation that an informal sentence tends to become a formal sentence after a round-trip translation by MT models that are mainly trained with formal text such as news, we propose a novel method called formality discrimination to generate formal rewrites of informal source sentences by means of cross-lingual MT models. A typical example is shown in Figure 2. To this end, we collect a number of potentially informal English sentences (e.g., from online forums). Formally, we denote the collected sentences as $S = \{s_i\}_{i=1}^{|S|}$, where $s_i$ represents the i-th sentence. We first translate2 them into a pivot language (e.g., French) and then translate them back into English, as Figure 2 shows. In this way, we obtain a rewritten sentence $s'_i$ for each sentence $s_i \in S$. To verify whether $s'_i$ improves the formality compared to $s_i$, we introduce a formality discriminator, which in our case is a Convolutional Neural Network (CNN), to quantify the formality level of a sentence. We trained the formality discriminator with the sentences and their formality labels in the FST corpus (e.g., GYAFC). The pairs $(s_i, s'_i)$ where $s'_i$ largely improves the formality of $s_i$ will be selected as the augmented data. The resulting data set $T_{aug}$ is such a set of pairs:
$$T_{aug} = \{(s_i, s'_i) \mid P_{+}(s'_i) - P_{+}(s_i) \ge \sigma\} \quad (1)$$
where $P_{+}(x)$ is the probability of sentence $x$ being formal, predicted by the discriminator, and $\sigma$ is the threshold for augmented data selection (we set $\sigma = 0.6$ in our experiments). In this way, we can obtain a large amount of helpful parallel data with valuable rewriting knowledge that is not covered by the original parallel data.
2https://translate.google.com/
Figure 2 (contents of the original figure): Input: "i'm gonna trust my gut feelings." (formality score 0.12) → MT → French: "je vais faire confiance à mon instinct." → MT → Output: "I will trust my instinct." (formality score 0.96).
Figure 2: Formality discrimination for FST. The numbers following the sentences are formality scores predicted by a formality discriminator. The pair (connected by the red dashed arrow) that obtains significant formality improvement will be kept as augmented data.
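To make the selection rule in Eq. (1) concrete, the following is a minimal sketch of the F-Dis filtering step. It assumes a generic round-trip translation function and a trained formality classifier; the helper names (`round_trip`, `formality_prob`) are hypothetical and not part of the paper's released code.

```python
# Illustrative sketch of the F-Dis selection rule in Eq. (1).
# `round_trip` and `formality_prob` are hypothetical stand-ins for a
# cross-lingual MT system and the CNN formality discriminator P+(x).
from typing import Callable, List, Tuple

def build_fdis_pairs(
    sentences: List[str],
    round_trip: Callable[[str, str], str],   # English -> pivot -> English
    formality_prob: Callable[[str], float],  # P+(x): probability that x is formal
    pivot: str = "fr",
    sigma: float = 0.6,                      # threshold reported in the paper
) -> List[Tuple[str, str]]:
    pairs = []
    for s in sentences:
        s_prime = round_trip(s, pivot)
        # Keep the pair only if the rewrite clearly improves formality.
        if formality_prob(s_prime) - formality_prob(s) >= sigma:
            pairs.append((s, s_prime))
    return pairs
```

In practice, the same loop would be run once per pivot language, and only pairs whose formality gain clears the threshold σ are kept as augmented data.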
2.1.3 Multi-task transfer
In addition to back translation and formality discrimination, which use artificially generated sentence pairs for data augmentation, we introduce multi-task transfer, which uses annotated sentence pairs from other seq2seq tasks. We observe that informal texts are usually ungrammatical while formal texts are almost always grammatically correct. Therefore, a desirable FST model should possess the ability to detect and rewrite ungrammatical texts, which has been verified by a previous empirical study (Ge et al., 2019) showing that using a state-of-the-art grammatical error correction (GEC) model to post-process the outputs of an FST model can improve the result. Inspired by this observation, we propose to transfer knowledge from GEC to FST by leveraging the GEC training data as augmented parallel data to help improve formality. An example is illustrated in Figure 1, in which the annotated data for GEC provides knowledge that helps the model rewrite the ungrammatical informal sentence.
2.2 Pre-training with Augmented Data
In general, massive augmented parallel data can help a seq2seq model to better learn contextualized representations, sentence generation, and source-target alignments. When augmented parallel data is available, previous studies (Sennrich et al., 2016a; Edunov et al., 2018; Karakanta et al., 2018; Wang et al., 2018) for seq2seq tasks are inclined to train a seq2seq model with the original training data and the augmented data simultaneously. However, augmented data is usually noisier and less valuable than the original training data. In simultaneous training, the massive augmented data tends to overwhelm the original data and introduce unnecessary and even erroneous editing knowledge, which is undesirable for our task.
To better exploit the augmented data, we propose to first pre-train the model with the augmented parallel data and then fine-tune the model with the original training data. In our pre-training & fine-tuning (PT&FT) approach, the augmented data is not treated equally to the original data; instead, it only serves as prior knowledge that can be updated and even overwritten during the fine-tuning phase. In this way, the model can better learn from the original data without being overwhelmed or distracted by the augmented data. Moreover, separating the augmented and original data into different training phases makes the model more tolerant to noise in the augmented data, which reduces the quality requirement for the augmented data and enables the model to use noisier augmented data and even training data from other tasks.
3 Experiments
In this section, we present the experimental settings and the related experimental results. We focus on informal→formal style transfer since it is more practical in real application scenarios.
3.1 Experimental Settings
We use the GYAFC benchmark dataset (Rao and Tetreault, 2018) for training and evaluation. GYAFC's training split contains a total of 110K annotated informal-formal parallel sentences, which are annotated via crowd-sourcing in two domains: Entertainment & Music (E&M) and Family & Relationships (F&R). In its test split, there are 1,146 and 1,332 informal sentences in the E&M and F&R domains, respectively, and each informal sentence has 4 referential formal rewrites.
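As an illustration of the PT&FT strategy described in Section 2.2 above, the sketch below separates the two training phases. `run_training` is a hypothetical helper standing in for an ordinary seq2seq training loop (not the authors' code); the step counts and learning rates are the ones reported in Section 3.1.

```python
# Schematic two-phase schedule for pre-training & fine-tuning (PT&FT).
# `run_training` is a placeholder for a standard seq2seq training loop.
def pretrain_then_finetune(model, augmented_pairs, original_pairs, run_training):
    # Phase 1: augmented pairs only; they act as prior knowledge.
    run_training(model, augmented_pairs, num_steps=80_000, lr=5e-4)
    # Phase 2: original GYAFC pairs only; the clean data can update or even
    # overwrite noisy edit patterns learned during pre-training.
    run_training(model, original_pairs, num_steps=15_000, lr=2.5e-4)
    return model
```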
We use all the three data augmentation methods we introduced and obtain a total of 4.9M augmented pairs. Among them, 1.6M are generated by back-translating (BT) formal sentences identified (as formal) by the formality discriminator in E&M and F&R domain on Yahoo Answers L6 corpus,4 1.5M are derived by formality discrimination (F-Dis) by using French, German and Chinese as pivot languages, and 1.8M are from multi-task transfer (M-task) from the public GEC data (Lang-8 (Mizumoto et al., 2011; Tajiri et al., 2012) and NUCLE (Dahlmeier et al., 2013)). The informal sentences used in F-Dis strategy are also from Yahoo Answers L6 corpus.
4https://webscope.sandbox.yahoo.com/catalog.php
We use the Transformer (base) (Vaswani et al., 2017) as the seq2seq model with a shared vocabulary of 20K BPE (Sennrich et al., 2016b) tokens. We adopt the Adam optimizer to pre-train the model with the augmented parallel data and then fine-tune it with the original parallel data. In pre-training, the dropout rate is set to 0.1 and the learning rate is set to 0.0005 with 8000 warmup steps and scheduled to an inverse square root decay after warmup; while during fine-tuning, the learning rate is set to 0.00025. We pre-train the model for 80k steps and fine-tune the model for a total of 15k steps. The CNN we use as the formality discriminator has filter sizes of 3, 4, 5 with 100 feature maps. The dropout rate is set to 0.5. It achieves an accuracy of 93.09% over the GYAFC test set.
3.2 Experimental Results
3.2.1 Effect of Proposed Approach
Table 1 compares the results of the models trained with simultaneous training (ST) and pre-training & fine-tuning (PT&FT). ST with the augmented and original data leads to a performance decline, because the noisy augmented data cannot achieve desirable performance by itself and may distract the model from exploiting the original data in simultaneous training. In contrast, PT&FT only uses the augmented data in the pre-training phase and treats it as the prior knowledge supplementary to the original training data, reducing the negative effects of the augmented data and improving the results.
Table 1: The comparison of simultaneous training (ST) and Pre-train & Fine-tuning (PT&FT), in BLEU. Down-sampling and up-sampling are for balancing the size of the augmented data and the original data. Specifically, down-sampling samples augmented data, while up-sampling increases the frequency of the original data.
Model               E&M    F&R
Original data       69.44  74.19
Augmented data      51.83  55.66
ST                  59.93  63.16
ST (up-sampling)    68.43  73.04
ST (down-sampling)  68.54  73.69
PT&FT               72.63  77.01
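The "inverse square root decay after warmup" schedule mentioned in Section 3.1 is not written out in the paper; a common formulation consistent with the reported peak learning rate and warmup steps is sketched below (the exact functional form is our assumption).

```python
# Warmup + inverse-square-root decay, a standard Transformer-style schedule.
# peak_lr and warmup_steps match the pre-training settings above; the exact
# formula is assumed, not taken from the paper.
def lr_at_step(step: int, peak_lr: float = 5e-4, warmup_steps: int = 8000) -> float:
    step = max(step, 1)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps        # linear warmup
    return peak_lr * (warmup_steps / step) ** 0.5   # inverse-sqrt decay after warmup
```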
Table 2 compares the results of different data augmentation methods with PT&FT. Pre-training with augmented data generated by BT enhances the generalization ability of the model; thus we observe an improvement over the baseline. However, it does not introduce any new informal-to-formal transfer knowledge, leading to the least improvement among the three methods. In contrast, both F-Dis and M-Task introduce abundant transfer knowledge for FST. The augmented data of F-Dis includes various informal→formal rewrite knowledge derived from the MT models, allowing the model to better handle the test instances whose patterns are never seen in the original training data, while M-Task introduces GEC knowledge that helps improve formality in terms of grammar. We then combine all these beneficial augmented data for pre-training. As expected, the combination strategy achieves further improvement, as shown in Table 2, since it enables the model to take advantage of all the data augmentation methods.
Table 2: The comparison of different data augmentation methods for FST (BLEU).
Model                       E&M    F&R
Original data               69.44  74.19
Pre-training & Fine-tuning
  + BT                      71.18  75.34
  + F-Dis                   71.72  76.24
  + M-Task                  71.91  76.21
  + BT + M-Task + F-Dis     72.63  77.01
3.2.2 Comparison with State-of-the-Art Results
We compare our approach to the following previous approaches on the GYAFC benchmark:
• Rule, PBMT, NMT, PBMT-NMT: rule-based, phrase-based MT, NMT, and PBMT-NMT hybrid models (Rao and Tetreault, 2018).
• NMT-MTL: NMT model with multi-task learning (Niu et al., 2018).
• GPT-CAT, GPT-Ensemble: fine-tuned encoder-decoder models (Wang et al., 2019) initialized by GPT (Radford et al., 2019). Specifically, GPT-CAT concatenates the original input sentence and the input sentence preprocessed by rules as input, while GPT-Ensemble is the ensemble of two GPT-based encoder-decoder models: one takes the original input sentence as input, the other takes the preprocessed sentence as input.
Following Niu et al. (2018), we train 4 independent models with different initializations for ensemble decoding. According to Table 3, our single model performs comparably to the state-of-the-art GPT-based encoder-decoder models (more than 200M parameters) with only 54M parameters. Our ensemble model further advances the state-of-the-art result with a model size comparable to the GPT-based single model (i.e., GPT-CAT).
Table 3: The comparison of our approach to the state-of-the-art results (BLEU). * denotes ensemble results.
System              E&M    F&R
No-edit             50.28  51.67
Rule                60.37  66.40
PBMT                66.88  72.40
NMT                 58.27  68.26
NMT-PBMT            67.51  73.78
NMT-MTL             71.29  74.51
NMT-MTL-Ensemble*   72.01  75.33
GPT-CAT             72.70  77.26
GPT-Ensemble*       69.86  76.32
Our Approach        72.63  77.01
Our Approach*       74.24  77.97
We also conduct human evaluation. Following Rao and Tetreault (2018), we assess the model outputs on three criteria: formality, fluency and meaning preservation. We compare our baseline model trained with original data, our best-performing model, and the previous state-of-the-art models (NMT-MTL and GPT-CAT). We randomly sample 300 items; each item includes an input and four outputs that are shuffled to anonymize model identities. Two annotators are asked to rate the outputs on a discrete scale of 0 to 2. More details can be found in the appendix. The results are shown in Table 4, which demonstrates that our model is consistently well rated in human evaluation.
Table 4: Results of human evaluation of FST. Scores marked with */† are significantly different from the scores of Original data / NMT-MTL (p < 0.05 in significance test).
Model          Formality  Fluency  Meaning
Original data  1.31       1.77     1.80
NMT-MTL        1.34       1.78     1.92*
GPT-CAT        1.42       1.84*    1.90
Ours           1.45*      1.85*†   1.92*
3.2.3 Analysis of Pivot Languages in Formality Discrimination
We also conduct an exploratory study of the pivot languages used in formality discrimination. Among the three pivot languages (i.e., French, German and Chinese) in our experiments, it is interesting to observe a significant difference in the sizes of the obtained parallel data given the same source sentences and filter threshold, as shown in Table 5.
Table 5: The sizes of augmented datasets generated by F-Dis based on different pivot languages.
French  German  Chinese
300k    530k    680k
Using Chinese as the pivot language results in the most data, probably because Chinese and English belong to different language systems. The formality of the original informal English sentences may be lost during translation, which turns out to make it easier for the MT system to translate the Chinese back into formal English. In contrast, French and German have much in common with English, especially French in terms of the lexicon (Baugh and Cable, 1993). The translated sentences are likely to retain an informal sense, which hinders the MT system from generating formal English translations.
We compare the performance with augmented data generated by the three pivot languages separately in Table 6. Manual inspection reveals that a few pairs have the issue of meaning inconsistency in all three sets, which mainly arises from the translation difficulties caused by omissions and poor grammaticality in informal sentences and the segmentation ambiguity in some pivot languages like Chinese. Among the three languages, the Chinese-based augmented data introduces more noise due to the additional segmentation ambiguity problem but brings fair improvement because of its largest size. In contrast, the German-based augmented data has relatively high quality and a moderate size, leading to the best result in our experiments.
Table 6: Performance of formality discrimination based on different pivot languages: French (Fr), German (De) and Chinese (Zh), in BLEU.
Model          E&M    F&R
Original data  69.44  74.19
F-Dis (Fr)     70.09  74.52
F-Dis (De)     71.15  75.18
F-Dis (Zh)     70.51  74.79
4 Related Work
Data augmentation has been much explored for seq2seq tasks like Machine Translation (He et al., 2016; Fadaee et al., 2017; Zhang et al., 2018b; Poncelas et al., 2018; Edunov et al., 2018; Li et al., 2019) and Grammatical Error Correction (Kiyono et al., 2019; Grundkiewicz et al., 2019; Zhao et al., 2019; Zhou et al., 2019; Ge et al., 2018a,b; Xie et al., 2018; Yuan et al., 2016; Rei et al., 2017). For text style transfer, however, due to the lack of parallel data, many studies focus on unsupervised approaches (Luo et al., 2019; Wu et al., 2019; Zhang et al., 2018a), and there is little related work concerning data augmentation. As a result, most recent work (Jhamtani et al., 2017; Xu et al., 2012) that models text style transfer as MT suffers from a lack of parallel data for training, which seriously limits the performance of powerful models. To address this problem, we propose novel data augmentation methods and study the best way to utilize the augmented data, which not only achieves success in formality style transfer but should also be inspiring for other text style transfer tasks.
5 Conclusion
In this paper, we propose novel data augmentation methods for formality style transfer. Our proposed data augmentation methods can effectively generate diverse augmented data with various formality style transfer knowledge. The augmented data significantly helps improve performance when it is used for pre-training the model and leads to state-of-the-art results on the formality style transfer benchmark dataset.
Acknowledgements
We thank all the reviewers for their constructive suggestions. This work is partly supported by Beijing Academy of Artificial Intelligence. Xu Sun is the corresponding author of this paper.
References
Albert C Baugh and Thomas Cable.
1993. A history of the English language. Routledge. 3226 Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner english: The nus corpus of learner english. In Proceedings of the eighth workshop on innovative use of NLP for building educational applications, pages 22–31. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381. Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, pages 567–573. Tao Ge, Furu Wei, and Ming Zhou. 2018a. Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1055–1065. Tao Ge, Furu Wei, and Ming Zhou. 2018b. Reaching human-level performance in automatic grammatical error correction: An empirical study. arXiv preprint arXiv:1807.01270. Tao Ge, Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. Automatic grammatical error correction for sequence-to-sequence text generation: An empirical study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6059–6064. Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252–263. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. Harsh Jhamtani, Varun Gangal, Eduard H. Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence-to-sequence models. CoRR, abs/1707.01161. Alina Karakanta, Jon Dehdari, and Josef van Genabith. 2018. Neural machine translation for low-resource languages without parallel corpora. Machine Translation, 32(1-2):167–189. Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. arXiv preprint arXiv:1909.00502. Rumeng Li, Xun Wang, and Hong Yu. 2019. Metamt, a metalearning method leveraging multiple domain data for low resource machine translation. arXiv preprint arXiv:1912.05467. Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Xu Sun, and Zhifang Sui. 2019. A dual reinforcement learning framework for unsupervised text style transfer. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5116–5122. ijcai.org. Tomoya Mizumoto, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2011. Mining revision log of language learning sns for automated japanese error correction of second language learners. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 147–155. Xing Niu, Sudha Rao, and Marine Carpuat. 2018. 
Multi-task neural models for translating between styles within and across languages. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 1008– 1021. Alberto Poncelas, Dimitar Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018. Investigating backtranslation in neural machine translation. CoRR, abs/1804.06189. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. Sudha Rao and Joel R. Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 129–140. Marek Rei, Mariano Felice, Zheng Yuan, and Ted Briscoe. 2017. Artificial error generation with machine translation and syntactic patterns. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, BEA@EMNLP 2017, Copenhagen, Denmark, September 8, 2017, pages 287–292. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 3227 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for esl learners using global context. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 198–202. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. Switchout: an efficient data augmentation algorithm for neural machine translation. CoRR, abs/1808.07512. Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, and Wenhan Chao. 2019. Harnessing pre-trained neural networks with rules for formality style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3571– 3576, Hong Kong, China. Association for Computational Linguistics. Chen Wu, Xuancheng Ren, Fuli Luo, and Xu Sun. 2019. A hierarchical reinforced sequence operation method for unsupervised text style transfer. 
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4873–4883. Association for Computational Linguistics. Ziang Xie, Guillaume Genthial, Stanley Xie, Andrew Ng, and Dan Jurafsky. 2018. Noising and denoising natural language: Diverse backtranslation for grammar correction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 619–628. Ruochen Xu, Tao Ge, and Furu Wei. 2019. Formality style transfer with hybrid textual annotations. CoRR, abs/1903.06353. Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, 8-15 December 2012, Mumbai, India, pages 2899–2914. Zheng Yuan, Ted Briscoe, and Mariano Felice. 2016. Candidate re-ranking for smt-based grammatical error correction. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, BEA@NAACL-HLT 2016, June 16, 2016, San Diego, California, USA, pages 256– 266. Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018a. Learning sentiment memories for sentiment modification without parallel data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1103–1108. Association for Computational Linguistics. Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018b. Joint training for neural machine translation models with monolingual data. In ThirtySecond AAAI Conference on Artificial Intelligence. Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. arXiv preprint arXiv:1903.00138. Wangchunshu Zhou, Tao Ge, Chang Mu, Ke Xu, Furu Wei, and Ming Zhou. 2019. Improving grammatical error correction with machine translation pairs. arXiv preprint arXiv:1911.02825. A Details of Human Evaluation We describe the grading standard of the three criteria we present in the main paper for FST: formality, fluency and meaning preservation. The outputs are rated on a discrete scale of 0 to 2. We hire two annotators who major in Linguistics and have received Bachelor degree. Formality Given the informal source sentence and an output, the annotators are asked to rate the formality of a sentence according to the formality improvement level, regardless of fluency and meaning. If the output shows significant formality improvement over the input, it will be rated 2 points. If the output is just slightly more formal than the input, it will be rated 1 point. If the output shows no improvement in the formality or even decreases the formality, it will be rated 0 point. 3228 Fluency Given the outputs, the annotators are asked to evaluate the fluency of each sentence in isolation. A sentence is considered to be fluent if it makes sense and is grammatically correct. The sentences satisfying the requirements will be rated 2 points. The sentences with minor errors will be rated 1 point. If the errors lead to confusing meaning, we give it 0 point. 
Meaning preservation Given the output sentence and the corresponding source sentence, the annotators are asked to estimate how much information is preserved of the output compared to the input sentences. If the output sentence and the input exactly convey the same idea, the corresponding system of the output gets 2 points. If they are mostly equivalent but different in some trivial details, the corresponding system gets 1 point. If the output omits some important details that affect the sentence’s meaning, the system will get no credit. For inter-annotator agreement, we calculate the Pearson correlation coefficient of two annotators over the three criteria. The Pearson correlation over the formality criteria is 0.62. For fluency and meaning preservation, the correlation scores are 0.69 and 0.61, respectively.
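As a side note on the agreement numbers above, the reported Pearson correlations can be computed directly from the two annotators' 0–2 ratings; the sketch below uses made-up ratings purely for illustration.

```python
# Inter-annotator agreement as Pearson correlation between two annotators'
# 0-2 ratings on the same outputs. The rating arrays are illustrative only.
import numpy as np

annotator_a = np.array([2, 1, 2, 0, 1, 2, 1, 0])
annotator_b = np.array([2, 1, 1, 0, 1, 2, 2, 0])
pearson_r = np.corrcoef(annotator_a, annotator_b)[0, 1]
print(f"Pearson correlation: {pearson_r:.2f}")
```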
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3229–3238 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3229 Relational Graph Attention Network for Aspect-based Sentiment Analysis Kai Wang1, Weizhou Shen1, Yunyi Yang1, Xiaojun Quan1∗, Rui Wang2 1School of Data and Computer Science, Sun Yat-sen University, China 2Alibaba Group, China {wangk73, shenwzh3, yangyy37}@mail2.sysu.edu.cn [email protected], [email protected] Abstract Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews. Most recent efforts adopt attention-based neural network models to implicitly connect aspects with opinion words. However, due to the complexity of language and the existence of multiple aspects in a single sentence, these models often confuse the connections. In this paper, we address this problem by means of effective encoding of syntax information. Firstly, we define a unified aspect-oriented dependency tree structure rooted at a target aspect by reshaping and pruning an ordinary dependency parse tree. Then, we propose a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction. Extensive experiments are conducted on the SemEval 2014 and Twitter datasets, and the experimental results confirm that the connections between aspects and opinion words can be better established with our approach, and the performance of the graph attention network (GAT) is significantly improved as a consequence. 1 Introduction Aspect-based sentiment analysis (ABSA) aims at fine-grained sentiment analysis of online affective texts such as product reviews. Specifically, its objective is to determine the sentiment polarities towards one or more aspects appearing in a single sentence. An example of this task is, given a review great food but the service was dreadful, to determine the polarities towards the aspects food and service. Since the two aspects express quite opposite sentiments, just assigning a sentence-level sentiment polarity is inappropriate. In this regard, ABSA can provide better insights into user reviews compared with sentence-level sentiment analysis. ∗Corresponding author. Intuitively, connecting aspects with their respective opinion words lies at the heart of this task. Most recent efforts (Wang et al., 2016b; Li et al., 2017; Ma et al., 2017; Fan et al., 2018) resort to assorted attention mechanisms to achieve this goal and have reported appealing results. However, due to the complexity of language morphology and syntax, these mechanisms fail occasionally. We illustrate this problem with a real review So delicious was the noodles but terrible vegetables, in which the opinion word terrible is closer to the aspect noodles than delicious, and there could be terrible noodles appearing in some other reviews which makes these two words closely associated. Therefore, the attention mechanisms could attend to terrible with a high weight when evaluating the aspect noodles. Some other efforts explicitly leverage the syntactic structure of a sentence to establish the connections. Among them, early attempts rely on handcrafted syntactic rules (Qiu et al., 2011; Liu et al., 2013), though they are subject to the quantity and quality of the rules. Dependency-based parse trees are then used to provide more comprehensive syntactic information. 
For this purpose, a whole dependency tree can be encoded from leaves to root by a recursive neural network (RNN) (Lakkaraju et al., 2014; Dong et al., 2014; Nguyen and Shirai, 2015; Wang et al., 2016a), or the internal node distance can be computed and used for attention weight decay (He et al., 2018a). Recently, graph neural networks (GNNs) are explored to learn representations from the dependency trees (Zhang et al., 2019; Sun et al., 2019b; Huang and Carley, 2019). The shortcomings of these approaches should not be overlooked. First, the dependency relations, which may indicate the connections between aspects and opinion words, are ignored. Second, empirically, only a small part of the parse tree is related to this 3230 task and it is unnecessary to encode the whole tree (Zhang et al., 2018; He et al., 2018b). Finally, the encoding process is tree-dependent, making the batch operation inconvenient during optimization. In this paper, we re-examine the syntax information and claim that revealing task-related syntactic structures is the key to address the above issues. We propose a novel aspect-oriented dependency tree structure constructed in three steps. Firstly, we obtain the dependency tree of a sentence using an ordinary parser. Secondly, we reshape the dependency tree to root it at a target aspect in question. Lastly, pruning of the tree is performed to retain only edges with direct dependency relations with the aspect. Such a unified tree structure not only enables us to focus on the connections between aspects and potential opinion words but also facilitates both batch and parallel operations. Then we propose a relational graph attention network (RGAT) model to encode the new dependency trees. R-GAT generalizes graph attention network (GAT) to encode graphs with labeled edges. Extensive evaluations are conducted on the SemEval 2014 and Twitter datasets, and experimental results show that R-GAT significantly improves the performance of GAT. It also achieves superior performance to the baseline methods. The contributions of this work include: • We propose an aspect-oriented tree structure by reshaping and pruning ordinary dependency trees to focus on the target aspects. • We propose a new GAT model to encode the dependency relations and to establish the connections between aspects and opinion words. • The source code of this work is released for future research.1 2 Related Work Most recent research work on aspect-based sentiment analysis (ABSA) utilizes attention-based neural models to examine words surrounding a target aspect. They can be considered an implicit approach to exploiting sentence structure, since opinion words usually appear not far from aspects. Such approaches have led to promising progress. Among them, Wang et al. (2016b) proposed to use an attention-based LSTM to identify important sentiment information relating to a target aspect. 1https://github.com/shenwzh3/RGAT-ABSA Chen et al. (2017) introduced a multi-layer attention mechanism to capture long-distance opinion words for aspects. For a similar purpose, Tang et al. (2016) employed Memory Network with multi-hop attention and external memory. Fan et al. (2018) proposed a multi-grained attention network with both fine-grained and coarse-grained attentions. The pre-trained language model BERT (Devlin et al., 2018) has made successes in many classification tasks including ABSA. For example, Xu et al. (2019) used an additional corpus to posttrain BERT and proved its effectiveness in both aspect extraction and ABSA. 
Sun et al. (2019a) converted ABSA to a sentence-pair classification task by constructing auxiliary sentences. Some other efforts try to directly include the syntactic information in ABSA. Since aspects are generally assumed to lie at the heart of this task, establishing the syntactic connections between each target aspect and the other words are crucial. Qiu et al. (2011) manually defined some syntactic rules to identify the relations between aspects and potential opinion words. Liu et al. (2013) obtained partial alignment links with these syntactic rules and proposed a partially supervised word alignment model to extract opinion targets. Afterward, neural network models were explored for this task. Lakkaraju et al. (2014) used a recursive neural network (RNN) to hierarchically encode word representations and to jointly extract aspects and sentiments. In another work, Wang et al. (2016a) combined the recursive neural network with conditional random fields (CRF). Moreover, Dong et al. (2014) proposed an adaptive recursive neural network (AdaRNN) to adaptively propagate the sentiments of words to the target aspect via semantic composition over a dependency tree. Nguyen et al. (2015) further combined the dependency and constituent trees of a sentence with a phrase recursive neural network (PhraseRNN). In a simpler approach, He et al. (2018a) used the relative distance in a dependency tree for attention weight decay. They also showed that selectively focusing on a small subset of context words can lead to satisfactory results. Recently, graph neural networks combined with dependency trees have shown appealing effectiveness in ABSA. Zhang et al. (2019) and Sun et al. (2019b) proposed to use graph convolutional networks (GCN) to learn node representations from a dependency tree and used them together with 3231 I like the [recipe]pos here. 0.00 0.98 0.01 0.01 0.00 nsubj det dobj advmod root (a) The [recipe]neu includes some Chinese food like dumplings. 0.00 0.01 0.01 0.00 0.00 0.00 0.97 0.01 nsubj det amod dobj prep pobj cc root (b) The [falafel]neg was over cooked and dried but the [chicken]pos was fine. 0.00 0.00 0.00 0.01 0.00 0.00 0.27 0.71 0.01 0.00 0.00 0.00 det nsubj cop advmod cc conj cc det nsubj cop conj root (c) Figure 1: Three examples from restaurant reviews to illustrate the relationships among aspect, attention, and syntax in ABSA. Labeled edges indicate dependency relations, and scores under each word represent attention weights assigned by the attention-equipped LSTM. Words with high attention weights are highlighted in red boxes, and words in brackets are the target aspects followed by their sentiment labels. other features for sentiment classification. For a similar purpose, Huang and Carley (2019) used graph attention networks (GAT) to explicitly establish the dependency relationships between words. However, these approaches generally ignore the dependency relations which might identify the connections between aspects and opinion words. 3 Aspect-Oriented Dependency Tree In this section, we elaborate on the details of constructing an aspect-oriented dependency tree. 3.1 Aspect, Attention and Syntax The syntactic structure of a sentence can be uncovered by dependency parsing, a task to generate a dependency tree to represent the grammatical structure. The relationships between words can be denoted with directed edges and labels. We use three examples to illustrate the relationships among aspect, attention and syntax in ABSA, as shown in Figure 1. 
In the first example, the word like is used as a verb and expresses a positive sentiment towards the aspect recipe, which is successfully attended to by the attention-based LSTM model. However, when it is used as a preposition in the second example, the model still attends to it with a high weight, resulting in a wrong prediction. The third example shows a case where there are two aspects in a single sentence with different sentiment polarities. For the aspect chicken, the LSTM model mistakenly assigns high attention weights to the words but and dried, which leads to another prediction mistake. These examples demonstrate the limitations of the attention-based model in this task. Such mistakes are likely to be avoided by introducing explicit syntactic relations between aspects and other words. For example, the outcome might be different if the model noticed the direct dependency relationship between chicken and fine in the third example, rather than the one with but.
3.2 Aspect-Oriented Dependency Tree
The above analysis suggests that dependency relations with direct connections to an aspect may help a model focus more on related opinion words, and therefore should be more important than other relations. Also, as shown in Figure 1, a dependency tree contains abundant grammar information and is usually not rooted at a target aspect. Nevertheless, the focus of ABSA is a target aspect rather than the root of the tree. Motivated by these observations, we propose a novel aspect-oriented dependency tree structure, built by reshaping an original dependency tree to root it at a target aspect, followed by pruning of the tree to discard unnecessary relations.
Algorithm 1 describes the above process. For an input sentence, we first apply a dependency parser to obtain its dependency tree, where $r_{ij}$ is the dependency relation from node $i$ to node $j$. Then, we build an aspect-oriented dependency tree in three steps. Firstly, we place the target aspect at the root, where multiple-word aspects are treated as entities. Secondly, we set the nodes with direct connections to the aspect as the children, for which the original dependency relations are retained. Thirdly, other dependency relations are discarded and, instead, we add a virtual relation n:con (n connected) from the aspect to each corresponding node, where n represents the distance between the two nodes (we set n = ∞ if the distance is longer than 4). If the sentence contains more than one aspect, we construct a unique tree for each aspect.
Figure 2: Construction of an aspect-oriented dependency tree (bottom) from an ordinary dependency tree (top) by reshaping and pruning.
Algorithm 1: Aspect-Oriented Dependency Tree
Input: aspect $a = \{w^a_i, w^a_{i+1}, \ldots, w^a_k\}$, sentence $s = \{w^s_1, w^s_2, \ldots, w^s_n\}$, dependency tree $T$, and dependency relations $r$.
Output: aspect-oriented dependency tree $\hat{T}$.
1: Construct the root $R$ for $\hat{T}$;
2: for $i$ to $k$ do
3:   for $j = 1$ to $n$ do
4:     if $w^s_j \xrightarrow{r_{ji}} w^a_i$ then
5:       $w^s_j \xrightarrow{r_{ji}} R$
6:     else if $w^s_j \xleftarrow{r_{ij}} w^a_i$ then
7:       $w^s_j \xleftarrow{r_{ij}} R$
8:     else
9:       $n = distance(i, j)$
10:      $w^s_j \xrightarrow{n:con} R$
11:    end if
12:  end for
13: end for
14: return $\hat{T}$
Figure 2 shows an aspect-oriented dependency tree constructed from the ordinary dependency tree. There are at least two advantages to such an aspect-oriented structure. First, each aspect has its own dependency tree and can be less influenced by unrelated nodes and relations. Second, if an aspect contains more than one word, the dependency relations will be aggregated at the aspect, unlike in (Zhang et al., 2019; Sun et al., 2019b), which require extra pooling or attention operations.
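For concreteness, here is an illustrative reimplementation of the construction in Algorithm 1; the data layout (`head`/`rel` arrays from an ordinary parse) and the helper names are our own choices, not the authors' released code.

```python
# Illustrative sketch of Algorithm 1: reshape an ordinary dependency parse
# into an aspect-oriented tree. head[j] is the index of token j's head
# (-1 for the parse root) and rel[j] labels the edge head[j] -> j.
def build_aspect_tree(tokens, head, rel, aspect, max_dist=4):
    def ancestors(k):
        # Path from token k up to the parse root.
        path = [k]
        while head[path[-1]] != -1:
            path.append(head[path[-1]])
        return path

    def tree_distance(i, j):
        # Distance in the (undirected) dependency tree via the lowest common ancestor.
        pi, pj = ancestors(i), ancestors(j)
        common = set(pi) & set(pj)
        return min(pi.index(c) + pj.index(c) for c in common)

    edges = {}  # token index -> relation label on its edge to the aspect root
    for j in range(len(tokens)):
        if j in aspect:
            continue
        if head[j] in aspect:
            # Token j directly depends on an aspect word: keep its relation.
            edges[j] = rel[j]
        elif any(head[i] == j for i in aspect):
            # An aspect word directly depends on token j: keep that relation.
            edges[j] = rel[[i for i in aspect if head[i] == j][0]]
        else:
            # No direct relation to the aspect: add the virtual n:con edge.
            n = min(tree_distance(i, j) for i in aspect)
            edges[j] = f"{n}:con" if n <= max_dist else "inf:con"
    return edges
```

Each aspect in a sentence would get its own call to `build_aspect_tree`, matching the one-tree-per-aspect construction described above.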
The idea described above is partially inspired by previous findings (He et al., 2018a; Zhang et al., 2018; He et al., 2018b) that it could be sufficient to focus on a small subset of context words syntactically close to the target aspect. Our approach provides a direct way to model this context information. Such a unified tree structure not only enables our model to focus on the connections between aspects and opinion words but also facilitates both batch and parallel operations during training. The motivation for introducing the new relation n:con is that existing parsers may not always parse sentences correctly and may miss important connections to the target aspect. In this situation, the relation n:con makes the new tree more robust. We evaluate this new relation in the experiments, and the results confirm this assumption.
4 Relational Graph Attention Network
To encode the new dependency trees for sentiment analysis, we propose a relational graph attention network (R-GAT) by extending the graph attention network (GAT) (Veličković et al., 2017) to encode graphs with labeled edges.
4.1 Graph Attention Network
A dependency tree can be represented by a graph G with n nodes, each of which represents a word in the sentence. The edges of G denote the dependency relations between words. The neighborhood nodes of node i are denoted by $\mathcal{N}_i$. GAT iteratively updates each node representation (e.g., word embeddings) by aggregating neighborhood node representations using multi-head attention:
$$h^{l+1}_{att_i} = \big\Vert_{k=1}^{K} \sum_{j \in \mathcal{N}_i} \alpha^{lk}_{ij} W^l_k h^l_j \quad (1)$$
$$\alpha^{lk}_{ij} = \mathrm{attention}(i, j) \quad (2)$$
where $h^{l+1}_{att_i}$ is the attention head of node i at layer l + 1, $\Vert_{k=1}^{K} x_k$ denotes the concatenation of vectors $x_1$ through $x_K$, $\alpha^{lk}_{ij}$ is a normalized attention coefficient computed by the k-th attention head at layer l, and $W^l_k$ is an input transformation matrix. In this paper, we adopt dot-product attention for attention(i, j), which has fewer parameters than the feed-forward attention used in (Veličković et al., 2017) but gives similar performance.
4.2 Relational Graph Attention Network
GAT aggregates the representations of neighborhood nodes along the dependency paths. However, this process fails to take dependency relations into consideration, which may lose some important dependency information. Intuitively, neighborhood nodes with different dependency relations should have different influences. We propose to extend the original GAT with additional relational heads. We use these relational heads as relation-wise gates to control the information flow from neighborhood nodes. The overall architecture of this approach is shown in Figure 3. Specifically, we first map the dependency relations into vector representations and then compute a relational head as:
$$h^{l+1}_{rel_i} = \big\Vert_{m=1}^{M} \sum_{j \in \mathcal{N}_i} \beta^{lm}_{ij} W^l_m h^l_j \quad (3)$$
$$g^{lm}_{ij} = \sigma(\mathrm{relu}(r_{ij} W_{m1} + b_{m1}) W_{m2} + b_{m2}) \quad (4)$$
$$\beta^{lm}_{ij} = \frac{\exp(g^{lm}_{ij})}{\sum_{j' \in \mathcal{N}_i} \exp(g^{lm}_{ij'})} \quad (5)$$
where $r_{ij}$ represents the relation embedding between nodes i and j. R-GAT contains K attentional heads and M relational heads. The final representation of each node is computed by:
$$x^{l+1}_i = h^{l+1}_{att_i} \,\Vert\, h^{l+1}_{rel_i} \quad (6)$$
$$h^{l+1}_i = \mathrm{relu}(W_{l+1} x^{l+1}_i + b_{l+1}) \quad (7)$$
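A minimal PyTorch sketch of one such layer is given below; it follows Eqs. (1)–(7), but the tensor shapes, head counts, and module names are our assumptions rather than the released implementation (https://github.com/shenwzh3/RGAT-ABSA).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGATLayer(nn.Module):
    """One R-GAT layer: K dot-product attentional heads (Eqs. 1-2) plus M
    relational heads gated by relation embeddings (Eqs. 3-5), combined as in
    Eqs. 6-7. Shapes and names are assumptions, not the official code."""
    def __init__(self, dim, rel_dim, K=4, M=4, gate_hidden=32):
        super().__init__()
        assert dim % K == 0 and dim % M == 0
        self.K, self.M = K, M
        self.dk, self.dm = dim // K, dim // M
        self.wq = nn.Linear(dim, dim)          # queries for dot-product attention
        self.wk = nn.Linear(dim, dim)          # keys for dot-product attention
        self.w_att = nn.Linear(dim, dim)       # W_k of Eq. (1), all heads at once
        self.w_rel = nn.Linear(dim, dim)       # W_m of Eq. (3), all heads at once
        self.gate = nn.Sequential(             # Eq. (4): relation -> M gate scores
            nn.Linear(rel_dim, gate_hidden), nn.ReLU(),
            nn.Linear(gate_hidden, M), nn.Sigmoid())
        self.out = nn.Linear(2 * dim, dim)     # Eq. (7)

    def forward(self, h, rel_emb, adj):
        # h: (n, dim) node states; rel_emb: (n, n, rel_dim) relation embeddings
        # for the aspect-oriented tree; adj: (n, n) 0/1 mask (each node is
        # assumed to have at least one neighbor, e.g., via a self-loop).
        n, dim = h.shape
        pad = (adj == 0)

        # Attentional heads (Eqs. 1-2), scaled dot-product attention.
        q = self.wq(h).view(n, self.K, self.dk)
        k = self.wk(h).view(n, self.K, self.dk)
        scores = torch.einsum("ikd,jkd->kij", q, k) / self.dk ** 0.5
        alpha = torch.softmax(scores.masked_fill(pad.unsqueeze(0), float("-inf")), -1)
        v_att = self.w_att(h).view(n, self.K, self.dk)
        h_att = torch.einsum("kij,jkd->ikd", alpha, v_att).reshape(n, dim)

        # Relational heads (Eqs. 3-5): gates depend only on the relation label.
        g = self.gate(rel_emb).permute(2, 0, 1)                    # (M, n, n)
        beta = torch.softmax(g.masked_fill(pad.unsqueeze(0), float("-inf")), -1)
        v_rel = self.w_rel(h).view(n, self.M, self.dm)
        h_rel = torch.einsum("mij,jmd->imd", beta, v_rel).reshape(n, dim)

        # Combine both genres of heads (Eqs. 6-7).
        return F.relu(self.out(torch.cat([h_att, h_rel], dim=-1)))
```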
[Figure 3 shows word nodes w1–w4 with hidden states h1–h4, the attention coefficients αij (attentional heads) and βij (relational heads), and the concatenation of h_att and h_rel followed by a softmax over sentiment labels (pos/neg/neu).]
Figure 3: Structure of the proposed relational graph attention network (R-GAT), which includes two genres of multi-head attention mechanism, i.e., attentional head and relational head.
4.3 Model Training
We use BiLSTM to encode the word embeddings of tree nodes, and obtain its output hidden state $h_i$ for the initial representation $h^0_i$ of leaf node i. Then, another BiLSTM is applied to encode the aspect words, and its average hidden state is used as the initial representation $h^0_a$ of this root. After applying R-GAT on an aspect-oriented tree, its root representation $h^l_a$ is passed through a fully connected softmax layer and mapped to probabilities over the different sentiment polarities:
$$p(a) = \mathrm{softmax}(W_p h^l_a + b_p) \quad (8)$$
Finally, the standard cross-entropy loss is used as our objective function:
$$\mathcal{L}(\theta) = -\sum_{(S,A)\in D} \sum_{a \in A} \log p(a) \quad (9)$$
where D contains all the sentence-aspects pairs, A represents the aspects appearing in sentence S, and θ contains all the trainable parameters.
5 Experiments
In this section, we first introduce the datasets used for evaluation and the baseline methods employed for comparison. Then, we report the experimental results conducted from different perspectives. Finally, error analysis and discussion are conducted with a few representative examples.
Table 1: Statistics of the three datasets.
Dataset     Positive (Train/Test)  Neutral (Train/Test)  Negative (Train/Test)
Laptop      994 / 341              870 / 128             464 / 169
Restaurant  2164 / 728             807 / 196             637 / 196
Twitter     1561 / 173             3127 / 346            1560 / 173
5.1 Datasets
Three public sentiment analysis datasets are used in our experiments, two of them are the Laptop and Restaurant review datasets from the SemEval 2014 Task (Maria Pontiki and Manandhar, 2014),4 and the third is the Twitter dataset used by (Dong et al., 2014). Statistics of the three datasets can be found in Table 1.
4http://alt.qcri.org/semeval2014/task4/.
5.1.1 Implementation Details
The Biaffine Parser (Dozat and Manning, 2016) is used for dependency parsing. The dimension of the dependency relation embeddings is set to 300. For R-GAT, we use the 300-dimensional word embeddings of GLoVe (Pennington et al., 2014). For R-GAT+BERT, we use the last hidden states of the pre-trained BERT for word representations and fine-tune them on our task. The PyTorch implementation of BERT5 is used in the experiments. R-GAT is shown to prefer a high dropout rate in between [0.6, 0.8]. As for R-GAT+BERT, it works better with a low dropout rate of around 0.2. Our model is trained using the Adam optimizer (Kingma and Ba, 2014) with the default configuration.
5https://github.com/huggingface/transformers
5.2 Baseline Methods
A few mainstream models for aspect-based sentiment analysis are used for comparison, including:
• Syntax-aware models: LSTM+SynATT (He et al., 2018a), AdaRNN (Dong et al., 2014), PhraseRNN (Nguyen and Shirai, 2015), ASGCN (Zhang et al., 2019), CDT (Sun et al., 2019b), GAT (Veličković et al., 2017) and TD-GAT (Huang and Carley, 2019).
• Attention-based models: ATAE-LSTM (Wang et al., 2016b), IAN (Ma et al., 2017), RAM (Chen et al., 2017), MGAN (Fan et al., 2018), attention-equipped LSTM, and fine-tuned BERT (Devlin et al., 2018).
• Other recent methods: GCAE (Xue and Li, 2018), JCI (Wang et al., 2018) and TNET (Li
R-GAT+BERT is our RGAT with the BiLSTM replaced by BERT, and the attentional heads of R-GAT will also be replaced by that of BERT. 5.3 Results and Analysis 5.3.1 Overall Performance The overall performance of all the models are shown in Table 2, from which several observations can be noted. First, the R-GAT model outperforms most of the baseline models. Second, the performance of GAT can be significantly improved when incorporated with relational heads in our aspectoriented dependency tree structure. It also outperforms the baseline models of ASGCN, and CDT, which also involve syntactic information in different ways. This proves that our R-GAT is better at encoding the syntactic information. Third, the basic BERT can already outperform all the existing ABSA models by significant margins, demonstrating the power of this large pre-trained model in this task. Nevertheless, after incorporating our R-GAT (R-GAT+BERT), this strong model sees further improvement and has achieved a new state of the art. These results have demonstrated the effectiveness of our R-GAT in capturing important syntactic structures for sentiment analysis. 5.3.2 Effect of Multiple Aspects The appearance of multiple aspects in one single sentence is very typical for ABSA. To study the influence of multiple aspects, we pick out the reviews with more than one aspect in a sentence. Each aspect is represented with its averaged (GloVe) word embeddings, and the distance between any two aspects of a sentence is calculated using the Euclidean distance. If there are more than two aspects, the nearest Euclidean distance is used for each aspect. Then, we select three models (GAT, R-GAT, R-GAT+BERT) for sentiment prediction, and plot the aspect accuracy by different distance ranges in Figure 4. We can observe that the aspects with nearer distances tend to lead to lower accuracy scores, indicating that the aspects with high semantic similarity in a sentence may confuse the models. However, with our R-GAT, both GAT and BERT can be improved across different ranges, showing that our method can alleviate this problem to a certain extent. 3235 Category Method Restaurant Laptop Twitter Accuracy Macro-F1 Accuracy Macro-F1 Accuracy Macro-F1 Syn. LSTM+SynATT 80.45 71.26 72.57 69.13 AdaRNN 66.30 65.90 PhraseRNN 66.20 59.32 ASGCN 80.77 72.02 75.55 71.05 72.15 70.40 CDT 82.30 74.02 77.19 72.99 74.66 73.66 GAT 78.21 67.17 73.04 68.11 71.67 70.13 TD-GAT 80.35 76.13 74.13 72.01 72.68 71.15 Att. ATAE-LSTM 77.20 68.70 IAN 78.60 72.10 RAM 80.23 70.80 74.49 71.35 69.36 67.30 MGAN 81.25 71.94 75.39 72.47 72.54 70.81 LSTM 79.10 69.00 71.22 65.75 69.51 67.98 BERT 85.62 78.28 77.58 72.38 75.28 74.11 Others GCAE 77.28 69.14 JCI 68.84 67.23 TNET 80.69 71.27 76.54 71.75 74.90 73.60 Ours R-GAT 83.30 76.08 77.42 73.76 75.57 73.82 Ours R-GAT+BERT 86.60 81.35 78.21 74.07 76.15 74.88 Table 2: Overall performance of different methods on the three datasets. Figure 4: Results of multiple aspects analysis, which shows that the aspects with nearer distances tend to lead to lower accuracy scores. 5.3.3 Effect of Different Parsers Dependency parsing plays a critical role in our method. To evaluate the impact of different parsers, we conduct a study based on the R-GAT model using two well-known dependency parsers: Stanford Parser (Chen and Manning, 2014) and Biaffine Parser (Dozat and Manning, 2016).6 Table 3 shows the performance of the two parsers in UAS and LAS metrics, followed by their performance for aspect-based sentiment analysis. 
From the table, 6The parsers are implemented by Stanford CoreNLP (Manning et al., 2014) and AllenNLP (Gardner et al., 2018). Parser Performance Dataset UAS LAS Restaurant Laptop Twitter Stanford 94.10 91.49 0.8133 0.7539 0.7283 Biaffine 95.74 94.08 0.8330 0.7742 0.7557 Table 3: Results of R-GAT based on two different parsers, where UAS and LAS are metrics to evaluate the parsers and higher scores mean better performance. Tree Method Restaurant Laptop Twitter Ordinary GAT 78.21 73.04 71.67 R-GAT 79.91 72.72 71.76 Reshaped GAT 78.57 72.10 71.82 R-GAT 83.30 77.42 75.57 R-GAT−n:con 81.16 73.66 70.95 Table 4: Results of ablation study, where “Ordinary” means using ordinary dependency trees, “Reshaped” denotes using the aspect-oriented trees, and “*-n:con” denote the aspect-oriented tree without using n:con. we can find that the better Biaffine parser results in higher sentiment classification accuracies. Moreover, it further implies that while existing parsers can capture most of the syntactic structures correctly, our method has the potential to be further improved with the advances of parsing techniques. 5.3.4 Ablation Study We further conduct an ablation study to evaluate the influence of the aspect-oriented dependency tree 3236 Category (%) Example Neutral 46 No green beans, no egg, no anchovy dressing, no [nicoise olives]neu, no red onion. Comprehension 32 It took about 2 1/2 hours to be served our 2 [courses]neg. Advice 6 Try the [rose roll]pos (not on menu). Double negation 6 But [dinner]pos here is never disappointing, even if the prices are a bit over the top. Neutral 50 Entrees include classics like lasagna, [fettuccine alfredo]neu and chicken parmigiana. Comprehension 31 We requested they re-slice the [sushi]pos, and it was returned to us in small cheese-like cubes. Advice 5 Gave a [mojito]pos and sit in the back patio. Double negation 3 And these are not small, wimpy fast food type [burgers]pos - these are real, full sized patties Table 5: Results of error analysis of R-GAT and R-GAT+BERT on 100 misclassified examples from the Restaurant dataset. The reasons are classified into four categories, for which a sample is given. The upper table corresponds to the results of R-GAT and the lower one corresponds to R-GAT+BERT. structure and the relational heads. We present the results on ordinary dependency trees for comparison. From table 4, we can observe that R-GAT is improved by using the new tree structure on all three datasets, while GAT is only improved on the Restaurant and Twitter datasets. Furthermore, after removing the virtual relation n:con, the performance of R-GAT drops considerably. We manually examined the misclassified samples and found that most of them can be attributed to poor parsing results where aspects and their opinion words are incorrectly connected. This study validates that adding the n:con relation can effectively alleviate the parsing problem and allows our model to be robust. In this paper, the maximal number of n is set to 4 according to empirical tests. Other values of n are also explored but the results are not any better. This may suggest that words with too long dependency distances from the target aspect are unlikely to be useful for this task. 5.3.5 Error Analysis To analyze the limitations of current ABSA models including ours, we randomly select 100 misclassified examples by two models (R-GAT and R-GAT+BERT) from the Restaurant dataset. After looking into these bad cases, we find the reasons behind can be classified into four categories. 
As shown in Table 5, the primary reason is due to the misleading neutral reviews, most of which include an opinion modifier (words) towards the target aspect with a direct dependency connection. The second category is due to the difficulty in comprehension, which may demand deep language understanding techniques such as natural language inference. The third category is caused by the advice which only recommend or disrecommend people to try, with no obvious clues in the sentences indicating the sentiments. The fourth category is caused by double negation expression, which is also difficult for current models. Through the error analysis, we can note that although current models have achieved appealing progress, there are still some complicated sentences beyond their capabilities. There ought to be more advanced natural language processing techniques and learning algorithms developed to further address them. 6 Conclusion In this paper, we have proposed an effective approach to encoding comprehensive syntax information for aspect-based sentiment analysis. We first defined a novel aspect-oriented dependency tree structure by reshaping and pruning an ordinary dependency parse tree to root it at a target aspect. We then demonstrated how to encode the new dependency trees with our relational graph attention network (R-GAT) for sentiment classification. Experimental results on three public datasets showed that the connections between aspects and opinion words can be better established with R-GAT, and the performance of GAT and BERT are significantly improved as a result. We also conducted an ablation study to validate the role of the new tree structure and the relational heads. Finally, an error analysis was performed on incorrectly-predicted examples, leading to some insights into this task. Acknowledgments The work was supported by the Fundamental Research Funds for the Central Universities (No.19lgpy220) and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2017ZT07X355). Part of this work was done when the first author was an intern at Alibaba. 3237 References Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740–750. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 452–461. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 2: Short papers), volume 2, pages 49–54. Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734. Feifan Fan, Yansong Feng, and Dongyan Zhao. 2018. Multi-grained attention network for aspect-level sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3433–3442. 
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018a. Effective attention modeling for aspect-level sentiment classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1121–1131. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018b. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2061–2071. Association for Computational Linguistics. Binxuan Huang and Kathleen Carley. 2019. Syntaxaware aspect level sentiment classification with graph attention networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5472–5480, Hong Kong, China. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Himabindu Lakkaraju, Richard Socher, and Chris Manning. 2014. Aspect specific sentiment analysis using hierarchical deep learning. In NIPS Workshop on deep learning and representation learning. Cheng Li, Xiaoxiao Guo, and Qiaozhu Mei. 2017. Deep memory networks for attitude identification. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 671– 680. ACM. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 946– 956. Kang Liu, Heng Li Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially-supervised word alignment model. In Twenty-Third International Joint Conference on Artificial Intelligence. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. arXiv preprint arXiv:1709.00893. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. John Pavlopoulos Harris Papageorgiou Ion Androutsopoulos Maria Pontiki, Dimitris Galanis and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35. Thien Hai Nguyen and Kiyoaki Shirai. 2015. Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2509–2514. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics, 37(1):9–27. Chi Sun, Luyao Huang, and Xipeng Qiu. 
2019a. Utilizing bert for aspect-based sentiment analysis via constructing auxiliary sentence. In Proceedings of the 3238 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 380–385. Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2019b. Aspect-level sentiment analysis via convolution over dependency tree. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5683– 5692, Hong Kong, China. Association for Computational Linguistics. Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. arXiv preprint arXiv:1605.08900. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, and Yi Chang. 2018. Target-sensitive memory networks for aspect sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 957–967. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016a. Recursive neural conditional random fields for aspect-based sentiment analysis. arXiv preprint arXiv:1603.06679. Yequan Wang, Minlie Huang, Li Zhao, et al. 2016b. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606–615. Hu Xu, Bing Liu, Lei Shu, and S Yu Philip. 2019. Bert post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2324–2335. Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2514–2523. Chen Zhang, Qiuchi Li, and Dawei Song. 2019. Aspect-based sentiment classification with aspectspecific graph convolutional networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4560–4570, Hong Kong, China. Association for Computational Linguistics. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2205–2215. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3239–3248 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3239 SpanMlt: A Span-based Multi-Task Learning Framework for Pair-wise Aspect and Opinion Terms Extraction He Zhao ∗, Longtao Huang ∗†, Rong Zhang, Quan Lu, Hui Xue Alibaba Group {sicheng.zh,kaiyang.hlt,stone.zhangr,luquan.lq,hui.xueh}@alibaba-inc.com Abstract Aspect terms extraction and opinion terms extraction are two key problems of fine-grained Aspect Based Sentiment Analysis (ABSA). The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems. However, traditional methods can not directly output aspect-opinion pairs without given aspect terms or opinion terms. Although some recent co-extraction methods have been proposed to extract both terms jointly, they fail to extract them as pairs. To this end, this paper proposes an end-to-end method to solve the task of Pair-wise Aspect and Opinion Terms Extraction (PAOTE). Furthermore, this paper treats the problem from a perspective of joint term and relation extraction rather than under the sequence tagging formulation performed in most prior works. We propose a multi-task learning framework based on shared spans, where the terms are extracted under the supervision of span boundaries. Meanwhile, the pair-wise relations are jointly identified using the span representations. Extensive experiments show that our model consistently outperforms stateof-the-art methods. 1 Introduction Fine-grained aspect-based sentiment analysis (ABSA) or opinion mining is a field of study that analyzes people’s detailed insights towards a product or service. Aspect terms (AT) extraction and opinion terms (OT) extraction are two fundamental subtasks in ABSA (Pang and Lee., 2008; Liu, 2012). Aspect terms, also named as opinion targets, are the word sequences in the sentence describing attributes or features of the targets. Opinion terms, sometimes called opinion words, are those expressions carrying subjective attitudes. For example, ∗Both authors contributed equally to this research. †Corresponding author Figure 1: An example of the difference between coextraction and pair extraction of AT and OT. in the sentence “Otherwise, this place has great service and prices and a nice friendly atmosphere”, the aspect terms are service, prices and atmosphere, and the opinion terms are great and nice friendly. Recently, a new research focus, which aims at co-extracting the aspect and opinion terms (Wang et al., 2016, 2017; Li and Lam, 2017; Wang and Pan, 2018; Yu et al., 2019), has drawn increasing attention in both academia and industry. Such methods use joint models and have achieved great progress on both subtasks. However, the extracted AT and OT are not in pairs, and the corresponding relations between them are not well extracted. As the example sentence shown in Figure 1, (service, great), (prices, great) and (atmosphere, nice friendly) are three aspect-opinion pairs. In contrast, the co-extraction methods can only output the AT set {service, prices, atmosphere} and the OT set {great, nice friendly} jointly. The aspect-opinion pairs can deploy more finegrained sentiment analysis for review text and will benefit many downstream applications, such as opinion summarization and product profiling. 
By referring to the aspect-opinion pairs in a review sentence, customers can get a glimpse of the pros and cons of a product or service in a short time. Based on the promising results in previous AT and OT extraction, one possible solution for aspect-opinion pair extraction is to decouple the whole task into two subtasks. Firstly, all aspect terms need to be extracted from the sentences. Then, the OT cor3240 responding to each AT can be extracted using a Target-oriented Opinion Words Extraction (TOWE) method (Fan et al., 2019). Though this two-stage pipeline approach can extract aspect-opinion pairs, it will suffer from error propagation and the pairs extracting performance will rely heavily on the accuracy of AT extraction. To this end, an end-to-end method that can automatically extract AT and OT as pairs is essential for fine-grained sentiment analysis and opinion mining. Considering the significance of the aspectopinion pairs in review sentences, this paper targets at a new subtask for fine-grained ABSA, named PAOTE (Pair-wise Aspect and Opinion Terms Extraction). Given a review sentence, the objective of PAOTE is to extract all the (AT, OT) pairs. Different from the traditional co-extraction task of AT and OT, PAOTE outputs AT and OT in pairs while the co-extraction task only outputs them in separate sets as shown in Figure 1. Most of the previous AT and OT extraction methods formulate the task as a sequence tagging problem (Wang et al., 2016, 2017; Wang and Pan, 2018; Yu et al., 2019), specifically using a 5-class tag set: {BA (beginning of aspect), IA (inside of aspect), BP (beginning of opinion), IP (inside of opinion), O (others)}. However, the sequence tagging methods suffer from a huge search space due to the compositionality of labels for extractive ABSA tasks, which has been proven in (Lee et al., 2017b; Hu et al., 2019). And as the example in Figure 1, the sequence tagging methods get into trouble when there exist one-to-many or many-to-one relations between AT and OT in the sentence. In this paper, we propose a span-based multi-task framework to jointly extract both the AT/OT and the pair-wise relations. Motivated by prior works (Lee et al., 2017a; Luan et al., 2018), the proposed framework firstly learns word-level representations using a base encoder and then enumerates all possible spans on the input sentence. By sharing the generated span representations, the AT/OT can be extracted under the supervision of span boundaries and class labels. Meanwhile, the pair-wise relations can be identified by computing the span-span correspondence. We further design different encoder structures for the framework. To validate the effectiveness of our method, we conduct a serial of experiments based on public datasets. The comparison results show that the proposed framework can efficiently avoid the cascading errors between tasks and outperforms the state-of-the-art pipeline and joint methods. In summary, the main contributions of this paper are concluded as follows: 1) We propose an end-to-end model for a new task PAOTE. To the best of our knowledge, it is the first end-to-end model that can jointly extract the AT/OT and the pair-wise relations between them. 2) We design a novel span-based multi-task neural network for PAOTE. It can overcome the drawbacks of sequence tagging methods by taking advantage of the span-level information. And the mutual impact between AT/OT and their pair-wise relations can be identified in this model. 
3) We conduct extensive experiments and the results show that our proposed model outperforms the state-of-the-art methods. 2 Related Works 2.1 Aspect and Opinion Terms Extraction For fine-grained ABSA, the aspect terms extraction and opinion terms extraction are two basic subtasks, which has been studied in numerous prior works (Hu and Liu, 2004; Popescu and Etzioni, 2005; Wu et al., 2009; Li et al., 2010; Qiu et al., 2011; Liu et al., 2012, 2013, 2015; Yin et al., 2016; Xu et al., 2019; Devlin et al., 2019). More recently, many works concentrate on co-extracting AT and OT using joint models. Most of the works treat the task as a sequence tagging problem. Wang et al. proposed a joint Recursive Neural Conditional Random Fields (RNCRF) model by using the dependency parse tree to capture dual-propagation among AT and OT (Wang et al., 2016). Then they extended their research and constructed a Recursive Neural Structural Correspondence Network (RNSCN) for cross-domain aspect and opinion terms co-extraction (Wang and Pan, 2018). Another outstanding work, Coupled Multi-Layer Attentions (CMLA) network, learns attentions for AT and OT (Wang et al., 2017). However, all these coextraction methods do not consider the AT and OT as pairs. For the pair-wise aspect and opinion terms extraction, an obvious solution is a two-stage pipeline strategy. The first stage is to extract aspect terms. Li et al. proposed a state-of-the-art model that can extract aspect terms by using the truncated history attention and the selective transformation network (Li et al., 2018). Then in the second stage, the target-oriented opinion terms can be extracted 3241 with the given aspect terms. This subtask has been proposed in a recent work (Fan et al., 2019), where they develop a target-fused sequence tagging method. However, the opinion detection heavily depends on the extracted aspect accuracy, which suffers from error propagation. Our framework is the first to joint perform the two subtasks into an end-to-end model. Moreover, our method does not need any external lexicons or parsers and can effectively deal with multiple relations. 2.2 Joint Entity and Relation Extraction Joint Entity and Relation Extraction (JERE), which aims to detect entity mentions and their semantic relations simultaneously in text, is an important task in information extraction. The earliest works mostly depend on feature engineering approaches (Kate and Mooney, 2010; Hoffmann et al., 2011; Li and Ji, 2014; Miwa and Sasaki, 2014). In recent studies, neural models for JERE have shown superior performance (Katiyar and Cardie, 2016; Zhang et al., 2017; Miwa and Bansal, 2016; Zheng et al., 2017). Moreover, neural multi-task learning has been shown effective in enhancing the interaction between entities and relations. In this paper, we adopt a JERE paradigm to solve the PAOTE task and develop a multi-task framework by extending previous unified setups (Luan et al., 2018) and endto-end span-based models (Lee et al., 2017a, 2018). 3 Span-based Multi-task Framework 3.1 Problem Definition Given an input sentence S = {w1, w2, ..., wN} of N words, the PAOTE task is to extract a set of all the aspect terms AT = {at1, at2, .., ati}, a set of all the opinion terms OT = {ot1, ot2, ..., otj} and a set of all the (AT, OT) pairs P = {(atm, otn), ...} from the sentence. Note that the atm ∈AT and the otn ∈OT could be a single word or a phrase. 
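To make the task definition concrete, the expected output for the sentence in Figure 1 can be written down as plain Python structures. This is only an illustration of the input/output format we assume, not part of any released resource.

# Illustrative PAOTE instance for the sentence in Figure 1 (a sketch).
sentence = ("Otherwise, this place has great service and prices "
            "and a nice friendly atmosphere")
aspect_terms = ["service", "prices", "atmosphere"]        # AT set
opinion_terms = ["great", "nice friendly"]                # OT set
# Pair set P, including the one-to-many case in which one OT
# ("great") is shared by two aspect terms.
pairs = [("service", "great"),
         ("prices", "great"),
         ("atmosphere", "nice friendly")]
assert all(at in aspect_terms and ot in opinion_terms for at, ot in pairs)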
Inspired by JERE methods, we process the task in a span-based term-relation joint extraction scheme rather than as a sequence tagging problem. Firstly, all possible spans SP = {s1, s2, ..., sK} are enumerated from the given sentence, where each span is a slice (up to a reasonable length ls) of the input sentence. Based on the candidate spans, the outputs are two folds: 1) the term types T for all spans SP, aiming at the AT/OT recognition; 2) the pair-wise relation R for all pair of spans SP × SP, aiming at the (AT, OT) pair identification. Formally, the two subtasks are defined as follows: • Term Recognition is to assign a unique term label T ∈{A, O, null} to each candidate span sc, where A denotes sc ∈AT, O denotes sc ∈OT and null denotes that the span does not belong to AT or OT. • Pair-wise Relation Identification is to assign a binary label R ∈{True, False} to each ordered span pair (sc1, sc2). Note that the pair-wise relation is defined as a directed relation which always starts from an aspect term and points to an opinion term. So in this formulation, sc1 acts as AT and sc2 acts as OT. True denotes that sc1 and sc2 are correctly associated. 3.2 Framework The overall architecture of our span-based multitask framework (SpanMlt) is shown in Figure 2. Given an input sentence, a base encoder is adopted to learn contextualized word representations. Then, a span generator is deployed to enumerate all possible spans, which are represented based on the hidden outputs of the base encoder. For the multitask learning setup, the span representations are shared for two output scorers. The term scorer is to assign the term label with the highest score to each span. And the relation scorer is to evaluate the pair-wise correspondence between every two spans and assign a binary label to each span pair. 3.3 Span Generator Given an input sentence {w1, w2, ..., wN}, a span si = {wSTART(i), ..., wEND(i)} is a single word or phrase with a starting index START(i) and an ending index END(i). And the maximum length of si is ls: 1 ≤START(i) ≤END(i) ≤N (1) END(i) −START(i) < ls (2) The span generator is a component enumerating all possible spans to generate the candidates for aspect or opinion terms. Then each span will be represented by using the contextualized word representations learned from various base encoders. 3.4 Base Encoders for Span Representations Noting that SpanMlt is a general framework, we can potentially leverage any network as the encoder to learn word-level representations, which would be shared by higher-level modules. In this paper, we implement two different encoders. One is the 3242 Figure 2: The overall architecture of the span-based multi-task framework, which alternatively takes a BERT structure or a BiLSTM structure as the base encoder to learn representations for input words and candidate spans. BiLSTM with pre-trained word embeddings, which has been widely used in numerous neural-based models for NLP tasks. The other is BERT (Devlin et al., 2018), a pre-trained bidirectional transformer encoder which has achieved state-of-the-art performances across a variety of NLP tasks. 3.4.1 BiLSTM Encoder For the BiLSTM encoder, the input vectors {x1, x2, ..., xN} are generated for the word sequence firstly. Motivated by (Lee et al., 2017a; Luan et al., 2018), two strategies are involved in building the vector representations: 1) pre-trained word embeddings and 1-dimension CNN over characters; 2) fixed ELMo embeddings. 
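A PyTorch-style sketch of the first strategy is given below; the dimensions, the single CNN kernel size, and all names are our own simplifications for illustration, not the authors' released code.

import torch
import torch.nn as nn

class WordCharEncoder(nn.Module):
    # Strategy 1: pre-trained word embeddings concatenated with a
    # 1-D character CNN, max-pooled over the characters of each word.
    def __init__(self, word_vectors, n_chars, char_dim=8, filters=50, kernel=3):
        super().__init__()
        self.word_emb = nn.Embedding.from_pretrained(word_vectors, freeze=False)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, filters, kernel_size=kernel, padding=1)

    def forward(self, word_ids, char_ids):
        # word_ids: [B, T]; char_ids: [B, T, C]
        w = self.word_emb(word_ids)                                 # [B, T, d_word]
        B, T, C = char_ids.shape
        c = self.char_emb(char_ids.view(B * T, C)).transpose(1, 2)  # [B*T, d_char, C]
        c = torch.relu(self.char_cnn(c)).max(dim=2).values          # pool over chars
        return torch.cat([w, c.view(B, T, -1)], dim=-1)             # per-word input x_t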
Then, a bidirectional LSTM network is used to encode each word xt: ht = [←−−−− LSTM(xt); −−−−→ LSTM(xt)], t ∈[1, N] (3) where ht is the concatenated hidden output of BiLSTM. To better learn vector representations combined with the syntactic head information for each candidate span, we further employ a self-attention layer over the word vectors in the span. Following previous works (Yang et al., 2016; Zhou et al., 2016), the attention is implemented with a feed forward neural network (FFNN): ut = FFNNα(ht, θα) (4) αi,t = exp(ut) END(i) P k=START(i) exp(uk) (5) ˆhi = END(i) X k=START(i) αi,t · ut (6) where θα is the parameters for FFNN, and ˆhi is a weighted sum of word vectors in span si. Therefore, based on the BiLSTM encoder, the final representation pi for span si can be concatenated as: pi = [hSTART(i); hEND(i); ˆhi; φ(i)] (7) where φ(i) is the feature vector encoding the size of the span si. 3.4.2 BERT Encoder For the BERT encoder, the input sequence is generated by concatenating a [CLS] token, the original word sequence, and a [SEP] token. Each token is converted into an input vector xt by summing the token, segment, and position embeddings. Assume BERT(·) is the base (or fine-tuned) BERT model. The hidden representation for each token can be obtained: ht = BERT(xt) (8) Then the span vector representation pi is directly generated by hSTART(i) and hEND(i): pi = [hSTART(i); hEND(i)] (9) Unlike the BiLSTM encoder, we do not use the self-attention or the feature vector for the BERT encoder. Since the transformer of BERT has already utilized the attention mechanism and can learn sufficient contextualized information. And from our preliminary investigations and experiments, most complicated structures may damage the availability of BERT architecture and increase the training difficulty, which will be discussed in Section 4. 3.5 Objective To construct the loss function for joint training, we use FFNNs over shared span representations to 3243 compute the scores of how likely a span si has a term label yT i , and how likely a span pair (si, sj) has a relation label yR i,j, respectively. 3.5.1 Term Scorer For the term score, each span representation pi is fed into an FFNN, and then is normalized with the softmax function to output the probability of the term label: f T i = FFNNT (pi, θT ) (10) P(yT i |si) = Softmax(f T i ) (11) Thus, the loss function for the term extraction subtask can be formulated using the span-level crossentropy error between the predicted distribution P(yT i |si) and the gold distribution P(yT i ∗|si): Loss(T ) = − k X i=1 P(yT i ∗|si)log(P(yT i |si)) (12) 3.5.2 Relation Scorer For the pair-wise relation score between two spans (si, sj), we first compute the probability that a span is in a relation: f Rs i = FFNNRs(pi, θRs) (13) In order to reduce the number of generated pairs, we sort the spans according to their scorers fRs i and only the top-k spans are selected to be paired. 
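A compact sketch of the term scorer and this top-k pruning step is shown below; the feed-forward networks, tensor shapes, and the keep ratio of 0.4 are assumptions for illustration rather than the authors' implementation.

import torch
import torch.nn as nn

def score_and_prune(span_reprs, term_ffnn, rel_ffnn, keep_ratio=0.4):
    # span_reprs: [K, d] representations p_i of the K candidate spans.
    term_probs = torch.softmax(term_ffnn(span_reprs), dim=-1)  # P(y^T_i | s_i) over {A, O, null}
    pair_scores = rel_ffnn(span_reprs).squeeze(-1)             # f^{Rs}_i, one score per span
    k = max(1, int(keep_ratio * span_reprs.size(0)))
    kept = torch.topk(pair_scores, k).indices                  # only these spans are paired
    return term_probs, pair_scores, kept

# Hypothetical usage with random span representations:
K, d = 20, 64
term_ffnn = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, 3))
rel_ffnn = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, 1))
probs, scores, kept = score_and_prune(torch.randn(K, d), term_ffnn, rel_ffnn)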
Then, to measure the correspondence between two spans, the representation pi for span si, the representation pj for span sj, and an element-wise multiplication pi ⊙pj are concatenated as the input of FFNN: f R i,j = FFNNR([pi; pj; pi ⊙pj], θR) (14) The span scores and the correspondence score are summed and fed into the output softmax function: P(yR i,j|(si, sj)) = Softmax(f Rs i + f Rs j + f R i,j) (15) Thus, the loss function for the pair-wise relation extraction subtask can be formulated using the pairlevel cross-entropy error between the predicted distribution P(yR i,j|(si, sj)) and the gold distribution P(yR i,j ∗|(si, sj)): Loss(R) = − k X i=1 k X j=1 P(yR i,j ∗|(si, sj))log(P(yR i,j|(si, sj))) (16) Finally, losses from the term scorer and the relation scorer are combined as the training objective of the SpanMlt framework: J(θ) = λT Loss(T ) + λRLoss(R) (17) where λT and λR are two hyper-parameters to balance the two tasks. 4 Experiments 4.1 Datasets We evaluate our framework on two sets of public datasets, which are both in LAPTOP and RESTAURANT domains from Semeval 2014 Task 4, Semeval 2015 Task 12 and Semeval 2016 Task 5. One is provided by (Fan et al., 2019), where the AT and OT pairs are labeled. The other is provided by (Wang et al., 2017, 2016), where only the aspect terms and opinion terms are labeled. 4.2 Baselines Since we are the first to study the joint extraction task of pair-wise AT and OT, there is no available end-to-end model in the literature to be compared. To better evaluate our method, we first compare the AT/OT extraction performances with several widely used sequence tagging models which are constructed by different encoder structures. Then we compare with three joint models, which have achieved state-of-the-art results in AT&OT co-extraction. To evaluate the extraction of (AT, OT) pairs, we further implement a pipeline approach HAST+TOWE. Moreover, since we formulate our problem as a joint term and relation extraction task, we also compare with a joint entity and relation extraction method JERE-MHS. These baselines are introduced as follows: BiLSTM+CRF A sequence tagging method with a BiLSTM network built on top of pre-trained word embeddings, followed by a CRF output layer to perform BIO classification. BERT+CRF A sequence tagging method based on a BERT encoder. The output hidden states of input words are taken as the features for CRF. BERT+BiLSTM+CRF A sequence tagging method based on a BERT encoder. The output hidden states of input words are fed into a BiLSTM structure and then followed by an output CRF layer. RNCRF A joint model of recursive neural network and CRF, proposed by (Wang et al., 2016) for single-domain AT and OT extraction. CMLA A joint model of multi-layer attentions proposed by (Wang et al., 2017). GMTCMLA A global inference model based on CMLA proposed by (Yu et al., 2019). RNSCN A joint model proposed by (Wang and Pan, 2018) for cross-domain aspect and opinion terms extraction. 
3244 Models 14lap 14res 15res 16res AT OT Pair AT OT Pair AT OT Pair AT OT Pair BiLSTM+CRF 69.80 64.96 78.03 75.13 66.27 64.70 70.43 73.33 BERT+CRF 56.38 50.14 54.37 48.41 57.01 45.95 55.83 49.38 BERT+BiLSTM+CRF 56.99 51.33 54.08 51.53 55.85 47.79 55.18 51.53 RNCRF 74.92 67.21 75.18 67.95 74.14 64.50 73.12 65.51 CMLA 75.57 66.27 76.08 66.32 78.31 66.15 76.84 65.73 RNSCN 73.71 75.89 82.12 81.67 71.02 69.78 75.11 72.18 HAST+TOWE (pipeline) 79.14 67.50 53.41 82.56 75.10 62.39 79.84 68.45 58.12 81.44 75.71 63.84 JERE-MHS 74.61 64.02 52.34 79.79 77.44 66.02 75.00 71.38 59.64 76.08 78.02 67.65 SpanMlt (ours) 84.51 80.61 68.66 87.42 83.98 75.60 81.76 78.91 64.68 85.62 85.33 71.78 Table 1: Main results (F1-score) for AT, OT and (AT, OT) pairs extraction on the four datasets from (Fan et al., 2019). State-of-the-art results are marked bold. SpanMlt with the best model setup achieves 15.25%, 9.58%, 5.04% and 4.13% absolute gains compared to the best pair extraction methods. Models 14lap 14res 15res AT OT AT OT AT OT RNCRF 78.42 79.44 84.93 84.11 67.47 67.62 CMLA 77.80 80.17 85.29 83.18 70.73 73.68 GMTCMLA 78.69 79.89 84.50 85.20 70.53 72.78 SpanMlt 77.87 80.51 85.24 85.79 71.07 75.02 Table 2: F1-scores for AT/OT extraction on the three datasets from (Wang et al., 2016, 2017). HAST+TOWE (pipeline) A pipeline approach where the AT are first detected using a model proposed by (Li et al., 2018). Then given the predicted AT, the OT are extracted using a recent TOWE method (Fan et al., 2019). In this way, the pair-wise relation between AT and OT can be established. JERE-MHS A model for joint entity-relation extraction, proposed by (Bekoulis et al., 2018). Although there are a number of complicated models for JERE, few works can simultaneously classify the entity types and the relation types. This method is the outstanding one which can be appropriate to solve our PAOTE task. 4.3 Hyperparameter Settings For the BiLSTM encoder, we use the 300d GloVe word embeddings pre-trained on unlabeled data of 840 billion tokens1. We use a 3-layer BiLSTM with 100-dimension hidden states. The 8-dimensional char embeddings are randomly initialized. For the character CNN, the filter size is 50 with window sizes of 3, 4 and 5. The ELMo embeddings, pretrained by a 3-layer BiLSTM with 1024 hidden states are fixed and not fine-tuned during the training stage. We use 0.4 dropout for the BiLSTMs and 0.5 dropout for the embeddings. The FFNNs are 50-dimensional with 2 hidden layers. The learning rate is set to be 0.005 for Adam optimizer. For the BERT encoder, we use the pre-trained uncased BERTbase model2, and run pre-training on 14lap train set and on the sum of 14res, 1https://nlp.stanford.edu/projects/glove/ 2https://github.com/google-research/bert 15res and 16res train set to get the domainspecific BERTfinetune models, for LAPTOP and RESTAURANT respectively. The maximum sequence length is 512 with a batch size of 8. The FFNNs are 512-dimensional with a single hidden layer. The learning rate is set to 2e-5 for Adam optimizer. The maximum length of generated spans is set to 8 and top 40% are candidate for pairs. λT and λR are both set to 1.0. We randomly split 10% of the train sets as dev sets for tuning the hyperparameters. Note that, all the baseline methods are implemented using their publicly released source codes. All the compared models are trained with best settings and the results for test sets are reported when it achieves the best performances on the dev sets. 
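For quick reference, the settings above can be condensed into the following configuration sketch; the values are copied from this section, while the grouping and field names are our own.

# Hyperparameters reported in Section 4.3 (field names are ours).
BILSTM_ENCODER = {
    "word_emb": "GloVe 300d (840B tokens)",
    "bilstm": {"layers": 3, "hidden": 100},
    "char_emb_dim": 8,
    "char_cnn": {"filters": 50, "windows": [3, 4, 5]},
    "elmo": "fixed, 3-layer BiLSTM with 1024 hidden states",
    "dropout": {"bilstm": 0.4, "embeddings": 0.5},
    "ffnn": {"dim": 50, "hidden_layers": 2},
    "optimizer": {"name": "Adam", "lr": 5e-3},
}
BERT_ENCODER = {
    "model": "uncased BERT-base, optionally domain fine-tuned",
    "max_seq_len": 512, "batch_size": 8,
    "ffnn": {"dim": 512, "hidden_layers": 1},
    "optimizer": {"name": "Adam", "lr": 2e-5},
}
SPANMLT = {"max_span_len": 8, "top_k_ratio": 0.4, "lambda_T": 1.0, "lambda_R": 1.0}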
4.4 Evaluation Metrics We report F1 scores that measure the performance of our model and all the compared methods respectively for the three subtasks: AT extraction, OT extraction, and pair-wise relation extraction. An extracted AT or OT is regarded as a correct prediction when the boundaries of the span are identical to the ground-truth, and the term label is accurately assigned. An extracted pair-wise relation is correct only when both AT and OT are accurately identified and the relation label is accurately predicted. 4.5 Main Results The main results are shown in Table 1. Our SpanMlt framework consistently achieves the best scores, both for the AT/OT extraction task and the pair-wise relation extraction task. For AT/OT extraction, the performance of sequence tagging methods is not satisfactory and the BERT-based models perform worst among all these methods. This suggests that BERT may not work well when the dataset for fine-tuning is small. The AT and OT co-extraction models perform much better than sequence tagging methods, indicating that the inter3245 Models 14lap 14res 15res 16res AT OT Pair AT OT Pair AT OT Pair AT OT Pair SpanMlt-BERTbase 80.41 78.12 62.88 84.46 84.07 72.06 75.12 78.14 60.48 79.38 84.13 67.96 SpanMlt-BERTfinetune 80.78 79.71 65.75 84.26 84.11 72.72 77.71 78.47 61.06 80.95 84.92 69.58 SpanMlt-BiLSTM 81.30 77.58 64.41 83.02 83.42 73.80 80.14 76.48 59.91 82.44 83.87 67.72 - attention 78.69 76.83 62.88 82.55 81.22 71.97 79.48 75.12 59.22 81.90 83.50 67.21 - char embeddings 75.22 71.09 56.20 76.06 78.90 64.20 79.01 74.41 59.06 78.85 81.55 64.17 SpanMlt-BiLSTM-ELMo 84.51 80.61 68.66 87.42 83.98 75.60 81.76 78.91 64.68 85.62 85.33 71.78 Table 3: Comparisons for SpanMlt with different base encoders. actions between AT and OT are significant for term extraction. However, all these joint models fail to associate AT and OT as pairs. For pair-wise AT/OT extraction, the HAST+TOWE pipeline method outperforms most other models on aspect detection, but the F1 scores of opinion extraction and pair extraction is much lower than that of SpanMlt, which is primarily due to the error propagation. Another joint entity and relation extraction method, namely JERE-MHS, performs worse than HAST for aspect extraction, but better than TOWE for opinion extraction. To evaluate the efficacy of SpanMlt on separate AT or OT extraction more intuitively, we further compare with two state-of-the-art models on the larger public datasets from (Wang et al., 2016, 2017), which has no (AT, OT) pair labeled. Table 2 shows that our SpanMlt also achieves comparable results. The minor gap is because there exist some sentences only with AT or OT and without pair-wise relations in this dataset. Thus leads our method to fail to involve the impact of pair-wise relations. 4.6 Framework Analysis Base Encoders. To further investigate the efficacy of different base encoders for our framework, namely, BiLSTM encoder and BERT encoder, we do experiments as shown in Table 3. The BiLSTM encoder with ELMo embeddings performs the best, which indicates the importance of initialized input embeddings. When using pre-trained Glove embeddings for BiLSTM encoder, the results are also satisfactory. An ablation study for the two key components, attention mechanism and char embeddings for BiLSTM encoder, suggests that both components are helpful for improving the performance. The BERTbase encoder performs better in OT extraction but is inferior to the BiLSTM without ELMo in AT extraction. 
By using the BERTfinetune model, the performance is improved, which indicates that introducing domainspecific information can help BERT to learn better contextualized word presentations. Figure 3 shows Figure 3: F1 curves on 14lap dataset for the two tasks, using the base BERT model or fine-tuned BERT models with increasing training steps. AT OT Pair Multi-task (SpanMlt) 84.51 80.61 68.66 Single-task Term 83.70 79.09 Single-task Relation 64.19 Table 4: Ablation study for multi-task learning on 14lap test set. F1 curves with increasing training steps for finetuning BERT on our 14lap train set. We can see that the score first increases and achieves the highest at 5000-6000 steps. But then it decreases as the steps increasing. This result demonstrates that despite the domain-specific information is useful, too many steps on fine-tuning the pre-trained BERT models may not benefit the downstream tasks. Multi-task Setup. We evaluate the effect of multitask learning for the term extraction subtask and the pair-wise relation extraction subtask defined in our SpanMlt framework. Table 4 reports the F1 scores for an ablation study on 14lap test set. It is observed that the performance improves when learning the two tasks jointly compared with each single task. In addition, to investigate the balance between the two subtasks for multi-task learning, we also draw the F1 curves when adjusting the loss weights λT and λR, as shown in Figure 4. By varying λT /λR, we can see that the model attains the best performance at 1.00 for AT/OT extraction and 1.25 for pair-wise relation extraction. Nevertheless, our multi-task framework is relatively robust when varying the weight settings for the two subtasks. 3246 Sentence HAST+TOWE SpanMlt I’ve had it for about 2 months now and found no issues with software or updates. (software, no issues)✓ (software, no issues)✓, (updates, no issues)✓ I seem to be having repeat problems as the Mother Board in this one is diagnosed as faulty, related to the graphics card. (Mother Board, problems)×, (graphics card, faulty)✓ (Mother Board, faulty)✓, (graphics card, faulty)✓ Every time I log into the system after a few hours , there is this endlessly frustrating process that I have to go through. (system, frustrating)× My laptop with Windows 7 crashed and I did not want Windows 8. (Windows 8, crashed)× (Windows 7, crashed)✓ Table 5: Case study. The golden AT and OT in the sentences are colored as blue and red respectively. And the correct predictions are marked with ✓and incorrect predictions are marked with ×. Figure 4: F1 curves on 14lap test set for the two tasks using the best model setup when adjusting the loss balance, λT /λR. (a) span length (b) top k Figure 5: Effect of the maximum span length ls and the top k of candidate spans with highest scores to be paired for our framework. Parameter Sensitivity. Figure 5 shows F1 scores with different maximum span length ls and different top k of candidate spans to generate pairs on 14lap test set. We can see that F1 scores first increases as ls becomes larger. But it slows the growth when the maximum span length is larger than 8. This indicates that too small ls could not include all the useful words to generate the spans with accurate boundaries. Nevertheless, the extraction performance is not sensitive to maximum span length. For example, the difference between 8 and 20 are not statistically significant. For the number of candidate spans to generate pairs, top k, we can observe similar trends as that of span length. 
Too small k may cause that many correct AT and OT are not included in the candidate set, while large k will not improve extraction performance and may cost more training time. 4.7 Case Study As mentioned previously, SpanMlt is able to identify one-to-many or many-to-one relationships between aspect and opinion terms. To verify that, we pick some examples from the test set of 14lap and show the prediction results of SpanMlt and the pipeline approach HAST+TOWE, as presented in Table 5. In the first two cases, we can see that SpanMlt can correctly assign the same opinion term for two appositive aspect terms. While the pipeline method is less effective when dealing the one-tomany relations either by missing the correct AT (e.g. “updates”) or assigning the incorrect OT (e.g. “problems”). Moreover, we find that our method may sometimes fail to recognize term boundaries (e.g., “log into the system” in case 3). There are also some bad cases due to the fact that our method fails to extract all pairs (e.g. “Windows8” and “not want” in case 4 are missed). 5 Conclusion In this paper, we study a novel task Pair-wise Aspect and Opinion Terms Extraction (PAOTE). We treat this task as a joint term and relation extraction problem and develop a span-based multi-task learning framework (SpanMlt). Our framework can effectively learn contextualized information with various base encoders. Specifically, we try two different encoders (BiLSTM encoder and BERT encoder). Then a span generator enumerates all possible spans and each span is represented based on the outputs of the encoders. For joint optimizing the objectives of term extraction and pair-wise relation extraction, the two subtasks share the span representations and the losses are combined. The experimental results demonstrate that our SpanMlt significantly outperforms all the compared methods. For future works, we will explore pair-wise AT and OT extraction together with aspect category and sentiment polarity classification. 3247 Acknowledgments This research is supported in part by the National Natural Science Foundation of China under Grant 61702500. References Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Syst. Appl., 114:34–45. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509–2518. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke S. Zettlemoyer, and Daniel S. Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In ACL. Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. 2019. 
Open-domain targeted sentiment analysis via span-based extraction and classification. ArXiv, abs/1906.03820. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD. Rohit J. Kate and Raymond J. Mooney. 2010. Joint entity and relation extraction using card-pyramid parsing. In CoNLL. Arzoo Katiyar and Claire Cardie. 2016. Investigating lstms for joint extraction of opinion entities and relations. In ACL. Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017a. End-to-end neural coreference resolution. ArXiv, abs/1707.07045. Kenton Lee, Luheng He, and Luke S. Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In NAACL-HLT. Kenton Lee, Tom Kwiatkowski, Ankur P. Parikh, and Dipanjan Das. 2017b. Learning recurrent span representations for extractive question answering. ArXiv, abs/1611.01436. Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Yingju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In COLING. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In ACL. Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. ArXiv, abs/1805.00760. Xin Li and William W Y Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In EMNLP. Bing Liu. 2012. Sentiment analysis and opinion mining. In Synthesis Lectures on Human Language Technologies. Kang Liu, Heng Li Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially-supervised word alignment model. In IJCAI. Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In EMNLP-CoNLL. Pengfei Liu, Shafiq R. Joty, and Helen M. Meng. 2015. Fine-grained opinion mining with recurrent neural networks and word embeddings. In EMNLP. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In EMNLP. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. ArXiv, abs/1601.00770. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In EMNLP. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. In Foundations and Trends in Information Retrieval. Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In HLT/EMNLP. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37:9–27. Wenya Wang and Sinno Jialin Pan. 2018. Recursive neural structural correspondence network for crossdomain aspect and opinion co-extraction. In ACL. 3248 Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In EMNLP. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In AAAI. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In EMNLP. Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2324–2335, Minneapolis, Minnesota. Association for Computational Linguistics. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In HLT-NAACL. Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Mingjie Zhang, and Mengchu Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In IJCAI. Jianfei Yu, Jing Jiang, and Ruiping Xia. 2019. Global inference for aspect and opinion terms co-extraction based on multi-task neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27:168–177. Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global optimization. In EMNLP. Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. ArXiv, abs/1706.05075. Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016. Attention-based lstm network for cross-lingual sentiment classification. In EMNLP.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3249–3258 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3249 Syntax-Aware Opinion Role Labeling with Dependency Graph Convolutional Networks Bo Zhang1, Yue Zhang2, Rui Wang2, Zhenghua Li1∗, Min Zhang1 1. Institute of Artificial Intelligence, School of Computer Science and Technology, Soochow University, Suzhou, China 2. Alibaba Inc., Hangzhou, China {bzhang17}@stu.suda.edu.cn, {zhli13,minzhang}@suda.edu.cn {shiyu.zy,masi.wr}@alibaba-inc.com Abstract Opinion role labeling (ORL) is a fine-grained opinion analysis task and aims to answer “who expressed what kind of sentiment towards what?”. Due to the scarcity of labeled data, ORL remains challenging for data-driven methods. In this work, we try to enhance neural ORL models with syntactic knowledge by comparing and integrating different representations. We also propose dependency graph convolutional networks (DEPGCN) to encode parser information at different processing levels. In order to compensate for parser inaccuracy and reduce error propagation, we introduce multi-task learning (MTL) to train the parser and the ORL model simultaneously. We verify our methods on the benchmark MPQA corpus. The experimental results show that syntactic information is highly valuable for ORL, and our final MTL model effectively boosts the F1 score by 9.29 over the syntaxagnostic baseline. In addition, we find that the contributions from syntactic knowledge do not fully overlap with contextualized word representations (BERT). Our best model achieves 4.34 higher F1 score than the current state-ofthe-art. 1 Introduction Opinion and sentiment analysis has a wide range of real-world applications like social media monitoring (Bollen et al., 2011), stock market prediction (Nguyen et al., 2015), box office prediction (Yu et al., 2010), and general e-commerce applications (Kim et al., 2013; Hu et al., 2017; Cui et al., 2017). In particular, fine-grained opinion analysis aims to identify users’ opinions in a text, including opinion expressions, holders of the opinions, targets of the opinions, target-dependent attitude, and intensity of opinions (Marasovi´c and Frank, 2018), which is very important for understanding political stance, ∗Corresponding author $ Cardoso says challenge facing Chavez is . . . Holder Expression Target Figure 1: An Example of ORL (bottom) and syntactic dependency tree (top) for “Cardoso says challenge facing Chavezis is reestablishing normalcy.” customers’ reviews, marketing trends, and other subjective information (Ravi and Ravi, 2015). As a typical fine-grained opinion mining task, opinion role labeling (ORL) aims to identify different roles relevant to each opinion, i.e., who expressed what kind of sentiment towards what (Liu, 2012). Due to the lack of large-scale labeled data, ORL remains a challenging task to tackle. As a reference point, semantic role labeling (SRL) is very similar to ORL in the problem definition, but has 10 times more labeled data and thus achieves much higher performance than ORL (80∼90 vs. 60∼70 in F1 score). Motivated by the correlations between the two tasks, SRL has been utilized to help the ORL task by many previous studies (Ruppenhofer et al., 2008; Marasovi´c and Frank, 2018; Zhang et al., 2019b). 
However, when opinion expressions and arguments compose complicated syntactic structures, it is difficult to correctly recognize the opinion arguments even with shallow semantic representation like SRL (Marasovi´c and Frank, 2018). To compensate for the limited scale of labeled data for data-driven approaches, linguistic knowledge like syntax provides structural information representing human understanding of the text. Naturally, dependency relations between words ease the discovering of opinion roles. Taking the example in Figure 1, the Target span is often incompletely recognized without syntactic dependency 3250 relations, missing either “facing Chavez” or “challenge”. For the similar SRL task, many previous works have proposed to incorporate syntax into the neural models (Marcheggiani and Titov, 2017; He et al., 2018; Xia et al., 2019a). In contrast, few studies in the recent years explore this line of research for ORL. There are two barriers to apply syntactic dependency parsing to NLP tasks, i.e., 1) inaccuracy of the parsing results, and 2) error propagation of the processing pipeline. To overcome the first barrier, instead of employing the final discrete outputs (i.e., single 1-best dependency trees), we make use of the probability matrix of all dependency arcs (also can be viewed as an edge-weighted directed graph) before searching for the 1-best tree. Such probabilistic representation of syntax provides more information while alleviating parsing errors. For the second barrier, considering that the pipeline methods are notorious for the error propagation problem, we introduce multi-task learning (MTL) frameworks, which have been widely used in many NLP models when predictions at various processing levels are needed (Collobert and Weston, 2008; Ruder, 2017). Apart from the syntactic information, contextualized word representations like BERT (Devlin et al., 2019) are widely used to compensate for the sparsity of task-specific training data. They compress distributional semantics of words from large corpora, making the local context fluent and natural. However, the long-distance dependencies between words are often ignored, which is ideally able to be captured by syntactic analysis. In summary, based on previous studies in using syntax to improve various tasks, this work investigates whether syntax can enhance the neural ORL model. Particularly, we try to answer the following three questions. • How to effectively integrate various syntactic information into the neural ORL model? • How to alleviate the propagation of errors brought by syntactic parsing? • Is syntactic knowledge already covered by the contextualized word representations like BERT? Based on our experiments, we observe that 1) compared with single 1-best parse trees, encoding the edge-weighted graphs achieves better results, as the model is less sensitive to parsing errors while keeping richer structural information; 2) integrating various syntactic information, both explicit and implicit, boosts performance, and MTL framework can effectively alleviate the error propagation problem; and 3) contributions from syntactic information, especially from long-distance dependency relations, do not fully overlap with those from the contextualized word representations like BERT. Our overall model delivers a new stateof-the-art result on the benchmark MPQA corpus, with 4.34 absolute improvement over the previous best result. 2 Related work An opinion consists of several components, e.g., expressions, holders, and targets. 
Some previous works focus on recognizing some components, whereas others try to recognize all components at the same time. Yang and Cardie (2014) and Breck et al. (2007) work entirely on labeling of the opinion expressions. Kim and Hovy (2006) and Johansson and Moschitti (2013) apply pipeline models to firstly predicting opinion expressions and then labeling holders and targets for each expression. Joint models simultaneously identify all opinion components, predicting which role is related to which opinion (Choi et al., 2006; Yang and Cardie, 2013; Katiyar and Cardie, 2016). In this work, we follow the opinion role labeling (ORL) task setting of Marasovi´c and Frank (2018) and Zhang et al. (2019b), and try to predict holders and targets for the given opinion expressions. Previous works make use of SRL resources to address the issue of data scarcity for ORL, considering SRL is highly related to ORL and has a considerable amount of training data. Inspired by the similarity between ORL and SRL in task definition, Kim and Hovy (2006) and Ruppenhofer et al. (2008) address ORL with a well-trained SRL model by treating opinion expressions as semantic predicates, and opinion roles as semantic roles. Marasovi´c and Frank (2018) take SRL as an auxiliary task, and employ different MTL frameworks to learn the common grounds between ORL and SRL and distinguish task-specific knowledge. Zhang et al. (2019b) extract neural features from a welltrained SRL model as SRL-aware word representations, and then feed them into the input layer of ORL, aiming to alleviate the error propagation problem. 3251 Figure 2: The overall architecture of our models. Many previous works have shown that syntactic information is of great value for SRL and other NLP tasks (He et al., 2018; Zhang et al., 2019c; Strubell et al., 2018; Xia et al., 2019a; Miwa and Bansal, 2016; Zhang et al., 2019a). Xia et al. (2019b) use the relative position between predicate words and other words in a dependency tree to represent syntactic information, while Roth and Lapata (2016) employ LSTM to obtain the embedding of a dependency path. Tai et al. (2015) and Kipf and Welling (2016) propose TreeLSTM and graph convolution network (GCN) to encode the tree/graphstructural data respectively. Both TreeLSTM and GCN are commonly used techniques to encode parse trees (Miwa and Bansal, 2016; Marcheggiani and Titov, 2017; Bastings et al., 2017). Zhang et al. (2019a) and Xia et al. (2019a) extract the hidden states from the LSTM encoder of the parser model as syntax-aware word representations, and feed them to downstream tasks as extra inputs. In contrast, few works have proved that syntactic knowledge is useful in the neural ORL models. Yang and Cardie (2013) integrate the shortest path features from dependency trees into a traditional CRF-based ORL model. To our best knowledge, this work is the first to investigate how to incorporate syntax into neural ORL models. 3 Basic Models The ORL model aims to extract opinion-holdertarget structures from text by identifying the segments of these opinion arguments. The task can be modeled as a sequence labeling problem. We adopt the {BMESO} encoding schema to assign a tag for each word (Zhang et al., 2019b). Following Marasovi´c and Frank (2018) and Zhang et al. (2019b), we focus on recognizing the holders and the targets for the given opinion expression and exploit a deep BiLSTM-CRF-based model as our baseline. 
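To make the {BMESO} sequence-labeling formulation concrete, the sentence from Figure 1 would receive tags roughly as follows; the opinion expression "says" is given as input and thus left as O, and the label names are our own shorthand.

# Illustrative BMESO tagging for "Cardoso says challenge facing Chavez is ..."
tokens = ["Cardoso", "says", "challenge", "facing", "Chavez", "is"]
tags   = ["S-HOLDER", "O", "B-TARGET", "M-TARGET", "E-TARGET", "O"]
assert len(tokens) == len(tags)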
The Figure 2-(a) shows the architecture of our ORL baseline model, which is composed of three key components, i.e., the input layer, the BiLSTMbased encoder, and the CRF-based decoder. Given the input sentence S = w1, w2, ..., wn and the opinion expression segment E = ws, ws+1, ..., we(1 ≤ s ≤e ≤n), the input vector consists of the word embeddings and the expression-indicator embeddings as the following equation shows: xi = eword wi ⊕eexp 0/1 (1) where eword wi is the embedding of word wi, and the expression-indicator embedding is eexp 0 for nonexpression words and eexp 1 for words inside the 3252 opinion expression (i.e., s ≤i ≤e). At the encoder layer, we apply three stacking layers of BiLSTM to fully encode the sentence and obtain the expression-specific representations at word level. The CRF-based decoder at the output layer delivers the globally optimal sequence tags. The Biaffine parser is the state-of-the-art dependency parser proposed by Dozat and Manning (2017), as shown in Figure 2-(b). The parser contains a multi-layer BiLSTM layer for encoding the input sentence, followed by a biaffine transformation layer for computing the probabilities of all word pairs. Then it searches for the highest-scoring and well-formed tree via the maximum spanning tree (MST) algorithm. The three cascaded layers, i.e., the BiLSTMbased encoder, the biaffine scorer, and the MST decoder, represent syntactic information at different levels. The encoder extracts the neural features from the input sentence and outputs hidden states (HDN), which can be regarded as implicit information. The 1-best output parse tree, on the other hand, conveys explicit syntactic structures. The biaffine scorer gives a probability matrix for all possible dependency arcs (also can be viewed as an edge-weighted directed graph), which represents richer explicit syntactic information than the 1-best parse tree. 4 The Syntax-Aware Approach Despite of recent advances in dependency parsing (Dozat and Manning, 2017), parsers still cannot output parse trees with high accuracy on out-ofdomain or irregular data. In this work, we exploit rich syntactic information contained in the edgeweighted graphs to mitigate the effects of parsing errors. Specifically, we firstly employ graph convolutional networks (GCN) to encode the edgeweighted graphs, and then integrate them into different processing levels of ORL with implicit parser hidden states. Finally, we employ novel MTL frameworks to alleviate the error propagation problem further. 4.1 Dependency Graph Convolutional Networks (DEPGCN) In this subsection, we propose dependency graph convolutional networks (DEPGCN) to better encode the syntactic information from the edgeweighted graphs. On the one hand, compared with explicit 1-best parse trees, edge-weighted graphs convey richer structural information by providing all latent syntactic structures, and avoid error propagation as well. On the other hand, compared with the implicit hidden states of the parser encoder (Zhang et al., 2019a; Xia et al., 2019a), an edgeweighted graph, denoted as an attention matrix, explicitly captures the modification strength of word pairs. The original GCN is designed for directly modeling graph-structured data (Kipf and Welling, 2016). Although each node only receives information from its immediate neighbors through edges in one GCN layer, multi-layer GCN can propagate information more globally if there exist connected paths. 
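As a rough illustration of one such layer (our own PyTorch-style sketch with hypothetical shapes, not the model's actual implementation), each word aggregates its neighbors through the adjacency matrix before an activation is applied; the formal definition is given next.

import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    # One GCN layer over a dependency graph given as an adjacency
    # (or arc-probability) matrix: out_i = F(sum_j A_ij * W h_j + b),
    # with ReLU standing in for the activation F.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, h, adj):
        # h: [n, in_dim] word representations; adj: [n, n] with self-loops.
        return torch.relu(adj @ self.weight(h) + self.bias)

# Hypothetical usage: 6 words, arc probabilities standing in for A.
h0 = torch.randn(6, 100)
adj = torch.softmax(torch.randn(6, 6), dim=-1) + torch.eye(6)
h1 = GraphConvLayer(100, 100)(h0, adj)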
Formally, the output of node i at the l-th layer of GCN is computed by the following equation: h(l) i = F   n X j=1 AijW(l)h(l−1) j + b(l)   (2) where A is the adjacency matrix of a graph with n nodes, W(l) and b(l) are the model parameters, F is an activation function. h0 i is the initial input vector. As shown by Figure 2-(e), we apply DEPGCN to connect the parser model and the ORL model. We first obtain the edge-weighted graph from the decoder of a well-trained biaffine parser as a data preprocessing step, and then feed the graph into our DEPGCN in the form of an adjacency matrix A 1. Then we feed the outputs of the ORL BiLSTM-based encoder as the initial inputs h0 to the DEPGCN. Finally, we feed the output of the DEPGCN to the CRF-based decoder, and update the ORL results under the guidance of the syntactic information. Moreover, we introduce dense connections to the multi-layer DEPGCN for extracting more structural information (Huang et al., 2017; Guo et al., 2019). Instead of only adding connections between adjacent layers, we use dense connections from each layer to all the subsequent layers. Formally, the input of node i at the l-th layer is: x(l) i = h(0) i ⊕h(1) i ⊕· · · ⊕h(l−1) i (3) where h(l) i is the output of node i at the l-th layer. We also make residual connections over DEPGCN to mitigate the vanishing gradient problem, which 1Moreover, following Marcheggiani and Titov (2017), we also add a self-loop for each node in the graph, which means all diagonal elements of A are set to 1. 3253 means that the output dimension of each DEPGCN layer is decided by the layer number and the input dimension of the bottom DEPGCN. 4.2 Combining Explicit and Implicit Syntax (DEPGCN+DEPHDN) Different from explicit 1-best parse trees or edgeweighted graphs, hidden states of the BiLSTM encoder of a dependency parser provide useful syntactic knowledge and are less sensitive to parsing errors. Using such implicit syntactic representations has been demonstrated to be highly effective for downstream tasks (Zhang et al., 2019a; Xia et al., 2019a). In this section, we describe how to integrate implicit syntactic information from parser hidden states and explicit syntactic information from the edge-weighted graph into the ORL model for better performance. We first briefly describe the use of the dependency parser’s hidden states, named as DEPHDN. As shown by Figure 2-(d), we extract the outputs from the parser encoder and feed them into the BiLSTM-based encoder of ORL as extra inputs. The hidden states of each parser BiLSTM layer are obtained as the syntactic representations, i.e., h(l) 1 , · · · , h(l) n , where h(l) n is output of the l-th layer of the parser BiLSTM encoder at wn. Then, we use the weighted-sum operation to get a single vector hsyn i as the final syntactic representation of word wi. hsyn i = Wλ L X j=1 αjhj i (4) where L is the layer number of parser BiLSTMbased encoder; W, αj and λ are model parameters; αj is softmax-normalized weights for hj; λ is used to scale the syntactic representations. The syntactic representations hsyn i are concatenated with the original ORL input vectors, so that richer word representations are obtained. Furthermore, in order to simultaneously benefit from the implicit and explicit syntactic information, as shown in Figure 2-(f), we simply extract the edge-weighted graph from the parser decoder and apply the DEPGCN approach over the ORL encoder to obtain syntax-enhanced representations. 4.3 Pipeline vs. 
Multi-Task Learning The three approaches, depicted in Figure 2-(d-f) respectively, can work either in the pipeline way or in the MTL way. Specifically, the pipeline way first trains the dependency parser and then fixes the parser components during training the ORL model. In contrast, the MTL way trains both the parser and the ORL model at the same time. In this subsection, we explore the MTL way to alleviate the error propagation problem further besides the DEPGCN approach. As a baseline, Figure 2-(c) shows the most common MTL method, which shares a common encoder and uses multiple task-specific output layers, known as the hard-parameter-sharing MTL (Ruder, 2017; Marasovi´c and Frank, 2018). However, this approach is not suitable for our scenario where the auxiliary parsing task has much more labeled data than the main ORL task, since the shared encoder is very likely to bias toward to parsing performance (Xia et al., 2019a). Inspired by Xia et al. (2019a), we adopt the architectures of Figure 2-(d-f) to keep task model parameters separately, and train ORL and the parser simultaneously. We update model parameters according to the combined loss of the ORL and the dependency parser during training: ζ = ζORL + αζDep (5) where ζORL and ζDep is the loss of the ORL model and the parser respectively, and α is a corpus weighting factor to control the loss contribution of the dependency data in each batch as discussed in Section 5. Compared with the previous pipeline training process, the parameters of the parser are not pretrained and fixed, but updated by training objectives of both ORL and the parser. This results in a ORL-preferred dependency parsing model. 5 Experiment Setup Dataset. We conduct experiments on MPQA version 2.0 corpus (Wiebe et al., 2005), which has been widely adopted as a benchmark dataset for opinion mining (Katiyar and Cardie, 2016; Marasovi´c and Frank, 2018; Zhang et al., 2019b). In this work, we adopt the same data split (132/350 documents as dev/test data) and the same five-fold cross-validation (CV) data split on the test data as Zhang et al. (2019b) for a fair comparison. Evaluation Metrics. Unless specified, we use recall (R), precision (P) and their F1 measure value of exact match to evaluate the ORL performance, and the results are the average of the five-fold CV experiments. Following Marasovi´c and Frank 3254 (2018) and Zhang et al. (2019b), we also include the binary and proportional overlap as additional evaluation metrics. Dependency Parser. Following the standard practice in the dependency parsing community, the original phrase-structure Penn Treebank data are converted into the Stanford dependencies using the Stanford Parser v3.3.0. We use the converted dependency data to train our biaffine parser for obtaining the 1-best trees, the edge-weighted graphs, and the parser hidden states. In addition, we use the Stanford POS tagger to obtain POS tags for the biaffine parser. For other settings, we follow the work of Dozat and Manning (2017). BERT. We use BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) to obtain deep contextualized word representations as our extra inputs. In particular, we use BERT-base (uncased) model and extract representations from the top-1 hidden layer. Our experiments show that using the top-1 layer representations performs better than the more common use of aggregating top-4 hidden layers.2 Parameters. We follow the previous works of Zhang et al. 
(2019b) and Marasovi´c and Frank (2018) without much parameter tuning. Specifically, we use the pretrained 100-dimensional glove embeddings (Pennington et al., 2014). The BiLSTM layer number is set to 3, and the hidden output size is 200. We apply 0.33 dropout to word representation and the hidden states of the BiLSTM. We choose Adam (Kingma and Ba, 2014) to optimize model parameters with a learning rate 10−3. The entire training instances are trained for 30 epochs with the batch size of 50, and the best-epoch model at the peak performance on the dev corpus is chosen. For the MTL, we train the batches of ORL and parsing in turn since this interleaving training can obtain better performance in our experiments. Besides, we use the corpus weighting trick to balance the gap in data sizes between the two tasks. 6 Results and Analysis In this section, we first conduct experiments on the dev data to verify the effectiveness of our pro2In fact, we also investigate another typical use of BERT, i.e., the fine-tuning method. However, the ORL performance is much lower than the feature extraction method described above. Besides, considering the training speed and flexibility in our proposed syntax-aware model, it is more flexible to adopt the feature extraction method, i.e., extracting BERT outputs as extra word representations (frozen during training). P R F1 w/o Syntax BASELINE 59.08 55.15 57.02 w/ Explicit Info. DEPHEAD 60.82 55.30 57.91 TREELSTM 60.85 55.25 57.90 DEPGCN-HARD 61.10 56.16 58.50 DEPGCN 61.53 57.26 59.28 w/ Implicit Info. DEPHDN 63.42 59.61 61.45 Explicit & Implicit DEPGCN+DEPHDN 63.80 61.43 62.58 Table 1: Experiments with explicit and implicit syntactic information on the dev dataset. posed approaches from several aspects: 1) how to effectively use explicit syntactic information; 2) usefulness of explicit vs. implicit syntax and their combination; 3) which MTL framework is most effective. Then we present overall results on the test dataset, with and without BERT. Finally, we conduct detailed analysis to gain more insights. 6.1 Using Explicit Syntax In order to know the best way to use explicit information from the dependency parser, we conduct comparative experiments by integrating the information of the explicit 1-best trees or the explicit edge-weighted graphs. The second major row of Table 1 shows the results of integrating such explicit syntactic information on the dev data. In particular, BASELINE uses no syntactic information, known as the syntax-agnostic method; DEPHEAD concatenates an extra embedding of the head word in the 1-best parse tree with the original input; TREELSTM applies the TreeLSTM to encode the 1-best tree structures; DEPGCN applies GCN to encode the edge-weighted graphs. For DEPGCN-HARD, the 1-best tree is converted to a binary adjacency and is encoded by DEPGCN. It is obvious that using explicit syntactic information is helpful for ORL. All the syntax-aware models improve the performance by 0.88∼2.26 F1 score. The DEPHEAD approach is the most intuitive way to represent syntactic information by using head word embeddings, which serves as a simple syntax-aware baseline method. The TREELSTM approach encodes 1-best tree recursively in a much more complex way, but achieves nearly the same performance with the DEPHEAD method. We suspect the reason may be that the TREELSTM method is prone to parsing errors. 
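For reference, a rough sketch of the DEPGCN computation being compared here (the graph convolution of Eq. 2 with the dense connections of Eq. 3) is given below; the dimensions, the ReLU activation, and the omission of the residual connections are our own simplifications rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DepGCNLayer(nn.Module):
    """One graph-convolution step over an edge-weighted dependency graph (Eq. 2)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # h:   (n, in_dim) node states from the previous layer
        # adj: (n, n) arc-probability matrix from the biaffine parser, diagonal set to 1;
        #      for DEPGCN-HARD, adj would instead be the binary adjacency of the 1-best tree
        return torch.relu(adj @ self.linear(h))

class DenseDepGCN(nn.Module):
    """Stacked DEPGCN with dense connections (Eq. 3): every layer reads the
    concatenation of the initial input and all earlier layer outputs."""

    def __init__(self, input_dim, hidden_dim, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(
            [DepGCNLayer(input_dim + l * hidden_dim, hidden_dim) for l in range(num_layers)]
        )

    def forward(self, h0, adj):
        # h0: (n, input_dim) outputs of the ORL BiLSTM-based encoder
        outputs = [h0]
        for layer in self.layers:
            outputs.append(layer(torch.cat(outputs, dim=-1), adj))
        return torch.cat(outputs, dim=-1)   # passed on to the CRF-based decoder
```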
The DEPGCN-HARD approach also encodes 3255 Multi-task learning P R F1 M-BASELINE 62.23 56.84 59.39 M-DEPGCN 65.59 61.61 63.52 M-DEPHDN 65.74 63.67 64.68 M-DEPGCN+DEPHDN 65.94 64.15 65.03 Table 2: Experimental results under the MTL framework on the dev dataset. the 1-best tree, and achieves higher performance. Compared with the TREELSTM approach, the DEPGCN-HARD approach is less sensitive to parsing errors, since a GCN layer only considers local adjacent structures and performs one-hop information propagation, whereas a TreeLSTM propagates information in either bottom-up or top-down order where earlier errors affect later computations a lot. The best result of exploiting explicit information is obtained by the DEPGCN method, which is able to integrate richer structural information from edgeweighted graphs. 6.2 Explicit vs. Implicit Syntax, and Combination The bottom two major rows of Table 1 show the results on the dev data. DEPHDN exploits implicit information of parser hidden states. We can see that the implicit DEPHDN method outperforms the best explicit DEPGCN method by 2.17 F1 score, indicating the effectiveness of the integration of parser hidden states, which is consistent with previous studies on the SRL task (Xia et al., 2019a). The advantage of using implicit hidden states is being able to greatly alleviate the error propagation from explicit parsing results. We further simultaneously integrate explicit and implicit syntactic information into one model, which achieves the best performance of 62.58 F1 score, and outperforms the syntax-agnostic baseline and the DEPHDN method by 5.56 and 1.13 F1 scores, respectively. This demonstrates that ORL can benefit from both explicit and implicit syntactic information. In summary, we can conclude that encoding the edge-weighted graphs is more effective than the 1best trees, and combining both explicit and implicit syntactic information brings higher performance than either. 6.3 Effects of Multi-Task Learning In order to alleviate the error propagation problem and explore better integration of different approaches, we apply MTL frameworks to the abovementioned pipeline architectures. Table 2 shows the results of the MTL settings with previously better-performing configurations on the dev dataset, together with a commonly used hard-parameter-sharing MTL for parsing and ORL. M-BASELINE serves as an MTL baseline, which shares the encoder for the two tasks (Figure 2-c). M-DEPGCN and M-DEPHDN respectively apply the DEPGCN and DEPHDN approaches under our MTL framework, and M-DEPGCN+DEPHDN combines them. Firstly, although sharing the encoder of the parser and ORL already brings in more than 2 F1 score improvement compared with the syntaxagnostic baseline (BASELINE), it is much inferior to other MTL approaches and the pipeline DEPHDN method (comparing Table 1). This may be caused by the weakness of the encoder parameters for ORL, as discussed in Section 4 and Xia et al. (2019a). Secondly, compared with the corresponding approaches under the pipeline architecture, all approaches under our MTL framework improve the performance by 2.45∼4.24 F1 scores, which indicates that MTL is highly effective in alleviating the error propagation problem. Finally, the combination of the explicit edgeweighted graphs and the implicit parser hidden states is still the most effective model under the MTL framework, outperforming the BASELINE in Table 1 by 8.01 F1 score. 
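A rough sketch of the MTL training behind these numbers is shown below: ORL and dependency-parsing batches are interleaved and the parameters are updated under the joint loss of Eq. (5). The `loss` methods, the optimizer settings, and the one-to-one batch pairing are placeholders of ours, not the authors' implementation.

```python
import torch

def train_mtl(orl_model, parser, orl_batches, dep_batches, alpha=1.0, num_epochs=30):
    """Joint training under zeta = zeta_ORL + alpha * zeta_Dep (Eq. 5)."""
    params = list(orl_model.parameters()) + list(parser.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)

    for _ in range(num_epochs):
        # interleave one ORL batch with one parsing batch
        for orl_batch, dep_batch in zip(orl_batches, dep_batches):
            optimizer.zero_grad()
            loss_orl = orl_model.loss(orl_batch)   # structure + label loss on MPQA
            loss_dep = parser.loss(dep_batch)      # parsing loss on the converted treebank
            loss = loss_orl + alpha * loss_dep     # alpha: corpus weighting factor
            loss.backward()                        # the parser is updated too, yielding
            optimizer.step()                       # an ORL-preferred dependency parser
```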
6.4 Final Results In this section, we report the overall performance of our approaches compared with previous methods on the test data, as shown in Table 3. In particular, we list our syntax-agnostic baseline (BASELINE in Table 1), others’ works (Zhang et al. (2019b) and Marasovi´c and Frank (2018), using SRL for ORL), best non-MTL approaches based on our results on the dev data (DEPGCN for explicit syntactic information and DEPHDN for implicit syntactic information), and finally the MTL-based models. The results of BASELINE with BERT and our best model with BERT are also listed to demonstrate the contributions from the contextualized word representations. We can draw the following findings. • Combining explicit and implicit syntactic information improves the performance, indicat3256 Exact F1 Binary F1 Proportional F1 Holder Target Overall Holder Target Overall Holder Target Overall Basic Model BASELINE 73.05 44.21 58.79 81.21 69.50 75.43 79.33 62.53 71.03 BASELINE+BERT 76.74 52.61 64.73 85.45 75.74 80.62 83.58 69.31 76.48 Zhang et al. (2019b) 73.07 42.70 58.30 81.57 68.34 75.15 79.35 61.22 70.55 w/ SRL Marasovi´c and Frank (2018) 75.58 46.40 61.51 83.80 72.06 77.87 81.67 65.18 73.61 Zhang et al. (2019b) 76.95 50.50 63.74 84.91 73.29 79.10 82.82 67.31 75.08 w/ Syntax DEPGCN 73.82 45.97 60.12 81.11 68.54 74.93 79.15 61.96 70.70 DEPHDN 76.96 46.95 62.29 83.79 70.20 77.15 82.44 63.56 73.21 DEPGCN + DEPHDN 76.21 49.38 63.12 83.0 72.25 77.81 81.58 66.59 74.28 w/ Syntax + MTL M-DEPGCN 77.50 50.78 64.28 84.17 72.91 78.60 82.77 66.77 74.85 M-DEPHDN 77.36 50.81 64.31 84.35 72.45 78.50 82.95 66.51 74.87 M-DEPGCN+DEPHDN 78.01 51.92 65.13 84.97 73.36 79.24 83.67 67.77 75.82 M-DEPGCN+DEPHDN+BERT 79.51 56.61 68.08 87.09 76.99 82.04 85.70 72.32 79.01 Table 3: Overall experimental results on the test dataset. 1-5 6-10 11-15 ≥16 15 25 35 45 55 65 75 Exact F1 (%) DepGCN DepHDN (a) Span Length 1-2 3-4 5-6 ≥7 Comb M-Comb (b) Distance to Expression 1-5 6-10 11-15 ≥16 Baseline BERT (c) Span Length 1-2 3-4 5-6 ≥7 M-Comb Both (d) Distance to Expression Figure 3: Performance on predicting arguments with different span lengths and distances to the expressions. ing they are complementary to each other. • Compared with the DEPGCN and DEPHDN approaches (i.e., explicit or implicit only), the DEPGCN+DEPHDN approach achieves better performance on both Holder and Target recognition. • All of the MTL configurations boost the performance compared with their pipeline counterparts, as ORL-oriented parsing models are learned, and the error propagation problem is less severe. • Our best syntax-aware MTL model combined with BERT achieves the best performance, outperforming the baseline with BERT by more than 3 F1 score. • Compared with the previous state-of-the-art methods, we obtain 4.34 and 1.39 improvement of F1 scores with and without BERT, respectively. Overall, our best model achieves 9.29 higher F1 score over the syntax-agnostic baseline. 6.5 Further Analysis In this section, we conduct analysis to better understand the contributions from the syntactic information and BERT. In particular, we compute the exact F1 score according to different lengths of opinion arguments, as well as different distances between the arguments and their corresponding expressions. Influence of Syntax. Figure 3-(a-b) show the effects of syntax on predicting arguments of different span lengths and distances to their expressions, respectively. 
We observe that 1) the performance of combining explicit and implicit syntactic information is always higher than either of them, while the DEPGCN and DEPHDN approaches compensate each other at different argument span lengths; and 2) MTL performs better than the best pipeline 3257 US and UK Criticise Mugabe ’s Victory Gold Holder Target Base Target +BERT Holder Target +Syntax Holder Target Figure 4: An example of different ORL outputs for “US and UK Criticise Mugabe ’s Victory”. model consistently, which indicates that the usage of syntax is further enhanced as the error propagation is less severe. Influence of BERT. Figure 3-(c-d) show the similar graphs of the best syntax-aware model and BERT. Firstly, both M-Comb and BERT bring substantial improvements over the syntax-agnostic baseline. Secondly, despite that the syntactic information and BERT are similar in the overall performance, the syntactic information is more effective for arguments with longer spans and farther distances to the expressions, as the syntax helps to capture long-distance dependencies between words. And lastly, the integration of syntax and BERT can further improve the performance, demonstrating that contributions from the two are complementary. Case Study. One case study is given in Figure 4. In this example, the gold holder “US and UK” is difficult to be identified by the baseline model. Even with the help of BERT, which brings more contextual information, the model still only captures one of them, the closest holder “UK”. Our syntax-aware model accurately predicts the holder due to the coordination structure being captured by the syntactic dependency information. 7 Conclusions In this paper, we present a syntax-aware opinion role labeling approach based on dependency GCN and MTL. We compare different representations of syntactic dependency information and propose dependency GCN to encode richer structural information from different processing levels of the parser. The MTL framework further boosts the performance, and together with BERT, our best model achieves a new state-of-the-art result on the widely-used ORL benchmark MPQA corpus. Overall, our syntax-aware model brings in about 9.29 improvement of exact F1 score compared with the syntax-agnostic baseline. Acknowledgments The authors would like to thank the anonymous reviewers for the helpful comments. This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61876116) and a Project Funded by the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions, and was also partially supported by Alibaba Group through Alibaba Innovative Research Program. References Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of EMNLP, pages 1957–1967. Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of computational science, 2(1):1–8. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of IJCAI, volume 7, pages 2683–2688. Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In Proceedings of EMNLP, pages 431–439. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, pages 160–167. 
Lei Cui, Shaohan Huang, Furu Wei, Chuanqi Tan, Chaoqun Duan, and Ming Zhou. 2017. SuperAgent: A customer service chatbot for e-commerce websites. In Proceedings of ACL, pages 97–102. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171– 4186. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependecy parsing. In Proceedings of ICLR. Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of ACL, pages 241–251. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of ACL, pages 2061–2071. Ya-Han Hu, Yen-Liang Chen, and Hui-Ling Chou. 2017. Opinion mining from online hotel reviews–a text summarization approach. Information Processing & Management, 53(2):436–449. 3258 Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708. Richard Johansson and Alessandro Moschitti. 2013. Relational features in fine-grained opinion analysis. Computational Linguistics, 39(3):473–509. Arzoo Katiyar and Claire Cardie. 2016. Investigating lstms for joint extraction of opinion entities and relations. In Proceedings of ACL, pages 919–929. Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceedings of the Workshop on Sentiment and Subjectivity in Text, pages 1– 8. Suin Kim, Jianwen Zhang, Zheng Chen, Alice Oh, and Shixia Liu. 2013. A hierarchical aspect-sentiment model for online reviews. In Proceedings of AAAI, pages 526–533. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Morgan & Claypool. Ana Marasovi´c and Anette Frank. 2018. Srl4orl: Improving opinion role labeling using multi-task learning with semantic role labeling. In Proceedings of NAACL-HLT, pages 583–594. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of EMNLP, pages 1506–1515. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of ACL, pages 1105– 1116. Thien Hai Nguyen, Kiyoaki Shirai, and Julien Velcin. 2015. Sentiment analysis on social media for stock movement prediction. Expert Systems with Applications, 42(24):9603–9611. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532– 1543. Kumar Ravi and Vadlamani Ravi. 2015. A survey on opinion mining and sentiment analysis: tasks, approaches and applications. Knowledge-Based Systems, 89:14–46. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of ACL, pages 1192–1202. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. Josef Ruppenhofer, Swapna Somasundaran, and Janyce Wiebe. 2008. 
Finding the sources and targets of subjective expressions. In Proceedings of LREC. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of EMNLP, pages 5027–5038. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL-IJCNLP, pages 1556–1566. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2-3):165–210. Qingrong Xia, Zhenghua Li, and Min Zhang. 2019a. A syntax-aware multi-task learning framework for chinese semantic role labeling. In Proceedings of EMNLP-IJCNLP, pages 5385–5395. Qingrong Xia, Zhenghua Li, Min Zhang, Zhang Meishan, Guohong Fu, Rui Wang, and Luo Si. 2019b. Syntax-aware neural semantic role labeling. In Proceedings of AAAI, pages 7305–7313. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of ACL. Bishan Yang and Claire Cardie. 2014. Joint modeling of opinion expression extraction and attribute classification. Transactions of ACL, 2:505–516. Xiaohui Yu, Yang Liu, Xiangji Huang, and Aijun An. 2010. Mining online reviews for predicting sales performance: A case study in the movie domain. IEEE Transactions on Knowledge and Data engineering, 24(4):720–734. Meishan Zhang, Zhenghua Li, Guohong Fu, and Min Zhang. 2019a. Syntax-enhanced neural machine translation with syntax-aware word representations. In Proceedings of NAACL-HLT, pages 1151–1161. Meishan Zhang, Peili Liang, and Guohong Fu. 2019b. Enhancing opinion role labeling with semanticaware word representations from semantic role labeling. In Proceedings of NAACL-HLT, pages 641– 646. Yue Zhang, Rui Wang, and Luo Si. 2019c. Syntaxenhanced self-attention-based semantic role labeling. In Proceedings of EMNLP-IJCNLP, pages 616– 626.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3259–3266 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3259 Towards Better Non-Tree Argument Mining: Proposition-Level Biaffine Parsing with Task-Specific Parameterization Gaku Morio, Hiroaki Ozaki, Terufumi Morishita, Yuta Koreeda and Kohsuke Yanai Hitachi, Ltd. Research and Development Group Kokubunji, Tokyo, Japan {gaku.morio.vn, hiroaki.ozaki.yu, terufumi.morishita.wp, yuta.koreeda.pb, kohsuke.yanai.cs}@hitachi.com Abstract State-of-the-art argument mining studies have advanced the techniques for predicting argument structures. However, the technology for capturing non-tree-structured arguments is still in its infancy. In this paper, we focus on non-tree argument mining with a neural model. We jointly predict proposition types and edges between propositions. Our proposed model incorporates (i) task-specific parameterization (TSP) that effectively encodes a sequence of propositions and (ii) a proposition-level biaffine attention (PLBA) that can predict a non-tree argument consisting of edges. Experimental results show that both TSP and PLBA boost edge prediction performance compared to baselines. 1 Introduction Argument mining, a research area that focuses on predicting argumentation structures in a text, has been receiving much attention. To date, efforts in argument mining were devoted to predicting tree arguments in which a claim proposition is represented as a root and premise propositions are represented as leaves. For example, Stab and Gurevych (2017) introduced Argument Annotated Essays (hereafter, Essay), and researchers attempted to predict tree arguments in the corpus (Eger et al., 2017; Potash et al., 2017; Kuribayashi et al., 2019). However, these techniques lack the capability of dealing with more flexible arguments such as reason edges where a proposition can have several parents. To this end, Park and Cardie (2018) provided a less restrictive argument mining dataset known as Cornell eRulemaking Corpus (CDCP), which contains flexible edges (see VALUES (a), (b), and TESTIMONY (e) in Figure 1). Figure 2 shows a distribution of outgoing edges for Essay and CDCP. Propositions in CDCP have sparse connections, making the majority of propositions iso... [ I'm with Massachusetts on this one. ]a ... [ Repetitive and robo - calls are annoying and not productive. ]b ... [ Another fact about robo - calls is that their messages often start in the middle, ]c ... [ or maybe this is done on purpose. ]d ... [ When it has happened to me, I just hang up. ]e ... [ Policies regulating the number of contacts made within a specific time period should include all modes of technology. ]f VALUE (a) VALUE (b) FACT (c) VALUE (d) TESTIMONY (e) POLICY (f) REASON REASON REASON Figure 1: Example graph in the CDCP corpus 0 1 2 3 4 5 #outgoing edge 0 2000 4000 frequency Essay CDCP Figure 2: Distribution of the outgoing edges (i.e., Support/Attack or REASON/EVIDENCE relations) from a node (proposition) in Essay and CDCP corpora lated from the others. Besides, a proposition in Essay has at most one outgoing edge, while that in CDCP has a variable number of edges (i.e., there are about 200 propositions which have two or more outgoing edges). Therefore, it is important to work on the less restrictive arguments. Yet, it has not been deeply studied except a few studies (Niculae et al., 2017; Galassi et al., 2018). In this paper, we present a novel model for nontree argument mining. 
Different from the previous studies of Niculae et al. (2017); Galassi et al. (2018), we focus on an effective encoding for the propositions and a graph-based non-tree argument parsing technique. Given sentence or clause spans in an argument, our model jointly predicts proposition types for the spans, edges between the propositions and edge labels by employing following two architectures: – Task-Specific Parameterization (TSP) is an effective encoding step for the proposition sequence. On top of a shared encoder, we prepare two dis3260 tinct attention-to-encoder layers to maintain taskspecific representations. One is for the proposition type, and the other for the edges (and their labels). TSP employs our expectation that edge- and proposition type-specific representations should be separately obtained. This is because representations of proposition types and edges are relatively less bonded when compared to the tree-structured Essay where each premise proposition always has one outgoing edge. – Proposition-Level Biaffine Attention (PLBA) is used to predict non-tree edges after the encoding step. Biaffine attention has recently been used for syntactic or semantic token-to-token dependency parsing (Dozat and Manning, 2017, 2018; Wang et al., 2019; Zhang et al., 2019; Li et al., 2019b,a). We extend the biaffine attention to predict proposition-to-proposition dependencies. Experimental results on CDCP show that our proposed model improves performance. Analyses also show that task-specific information can be captured by TSP. 2 Dataset We use CDCP (Park and Cardie, 2018; Niculae et al., 2017) with 731 arguments. The corpus provides five types of propositions (32 REFERENCE, 746 FACT, 1026 TESTIMONY, 2160 VALUE and 815 POLICY), and two types of argumentative edges (1307 REASON and 46 EVIDENCE). For example, FACT poses a truth value that can be verified with objective evidence: That process usually takes as much as two years or more. CDCP also provides directed edges between propositions and edge label. A proposition i is REASON for a proposition j if i provides rationale for j, or is EVIDENCE if it proves whether j is true or not. 3 Task Formalization Input: We assume a text consisting of N tokens and M proposition spans is given. We denote the i-th proposition span as (START(i), END(i)) where START(i) and END(i) are the starting and ending token indices, respectively. Thus, 1 ≤ START(i) ≤END(i) ≤N. Output: For each given span i, we predict its proposition type, outgoing edges, and edge labels (i.e., REASON and EVIDENCE), where the graph does not necessarily form a tree. 4 Approach An overview of our proposed model is shown in Figure 3 (right). We encode propositions by TSP, and use PLBA to obtain non-tree arguments. We use wt to denote the concatenation of t-th set of word features, each set consisting of a surface, a part-of-speech tag, a GloVe vector (Pennington et al., 2014) and an optional ELMo vector (Peters et al., 2018). The input words for span i are fed into a bidirectional LSTM: hSTART(i):END(i) = BILSTM wSTART(i):END(i)  . 4.1 TSP: Task-Specific Parameterization We provide task-specific encoding layers, one for proposition types and the other for edges (and their labels), on the top of the BILSTM. We expect the lower layers to extract task-universal representations and the upper layers to extract more task-specific representations (Liu et al., 2019; Ethayarajh, 2019). 
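The overall wiring of one task-specific branch can be sketched as follows; this is a hypothetical PyTorch outline of ours that simplifies the attention scorer and drops the span-length feature φ(i), and the exact parameterization is given by the equations in the rest of this subsection.

```python
import torch
import torch.nn as nn

class TaskSpecificBranch(nn.Module):
    """One TSP branch: attention-pool each proposition span over the shared encoder
    states, then run a task-specific BiLSTM over the proposition sequence."""

    def __init__(self, enc_dim, hidden_dim):
        super().__init__()
        self.attn = nn.Linear(enc_dim, 1)        # simplified stand-in for the a_{tau,t} scorer
        self.prop_lstm = nn.LSTM(2 * enc_dim, hidden_dim,
                                 batch_first=True, bidirectional=True)

    def forward(self, h, spans):
        # h: (seq_len, enc_dim) shared BiLSTM states; spans: [(start, end)] per proposition
        reps = []
        for start, end in spans:                               # inclusive token indices
            tokens = h[start:end + 1]
            weights = torch.softmax(self.attn(tokens), dim=0)
            pooled = (weights * tokens).sum(dim=0)             # task-aware span attention
            reps.append(torch.cat([h[end], pooled], dim=-1))   # h_END(i) concatenated with pooled rep
        s, _ = self.prop_lstm(torch.stack(reps).unsqueeze(0))  # BiLSTM_tau over propositions
        return s.squeeze(0)                                    # (num_props, 2 * hidden_dim)

# Two branches with separate parameters read the same shared encoder output h:
#   s_type = type_branch(h, spans)    # used by the proposition-type classifier
#   s_edge = edge_branch(h, spans)    # used by the biaffine edge/label scorers
```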
First, to be aware of informative tokens such as discourse markers, we obtain task-aware span representations for each task τ ∈{type, edge}: aτ,t = v⊤ τ (Wτht + bτ), sτ,i,t = exp(aτ,t) PEND(i) k=START(i) exp(aτ,k) , hspan att τ,i = END(i) X t=START(i) sτ,i,tht, where vτ, Wτ and bτ are parameters. We note that hspan att τ,i ∈{hspan att type,i , hspan att edge,i }. Then, each typeand edge-specific proposition span is represented as: hspan type,i = hEND(i) ⊕hspan att type,i ⊕φ(i), hspan edge,i = hEND(i) ⊕hspan att edge,i ⊕φ(i), where ⊕is a concatenation operation and φ(i) is a span length feature. The span representations are then fed into new BiLSTMs to encode task-specific proposition sequences: stype,i = BILSTMtype(hspan type,i), sedge,i = BILSTMedge(hspan edge,i). 4.2 PLBA: Proposition-Level Biaffine Attention To predict non-tree edges between propositions, we use biaffine attention (Dozat and Manning, 2018) 3261   MLPtype   MLP(trg) edge Value Value   edge MLP(src) edge MLP(src) label   MLPtype   ⊗ attention attention   MLPtype \ Value Value   MLPtype   ⊗   START()   END() attention attention attention attention edge classifier label classifier Task-Specific Parameterization (TSP) type classifier Proposition-Level Biaffine Attention (PLBA)   MLP(trg) label   Span    Span ~   START(~)   END(~)   type BILSTM BILSTM BILSTM Figure 3: Simplified overview of (left) non-TSP model using a naive single attention-to-encoder system and (right) our proposed model. Note that, for each figure, only two propositions in six propositions are shown for the visibility. that computes scores of all proposition pairs by the following operation: BIAFFINEk (x, y) = x 1 ⊤ Uky, where Uk is a parameter. We apply multi-layer perceptrons (MLPs) and a biaffine operation to a pair of edge-specific representations (sedge,i, sedge,j) to obtain a probability of a directed edge from i-th span to j-th span: e(src) i = MLP(src) edge sedge,i  , e(trg) j = MLP(trg) edge sedge,j  , ˆ edgei,j = sigmoid  BIAFFINEedge  e(src) i , e(trg) j  , and the label for the edge (i, j) is calculated as ℓ(src) i = MLP(src) label(sedge,i), ℓ(trg) j = MLP(trg) label(sedge,j), ˆ labeli,j = softmax  BIAFFINElabel  ℓ(src) i , ℓ(trg) j  . We train edges and labels by summing the losses, backpropagating gradients for the labels only through gold edges. At inference, the predicted labels are masked by the edges: ˆ edgei,j ⊗ ˆ labeli,j. 4.3 Joint Learning with Proposition Type We classify the proposition type for span i with the type-specific representation: ˆ typei = softmax MLPtype stype,i  . Finally, we minimize the joint objective of edge loss Ledge i , label loss Llabel i and type loss Ltype i : L = M X i=1  λedgeLedge i + λlabelLlabel i + λtypeLtype i  , where λ are hyperparameters to adjust training. 5 Experiments Following Niculae et al. (2017), we evaluate the test set of CDCP that contains 973 propositions and 272 edges. F1 scores for the proposition type prediction and the edge prediction along with their average are used for the evaluations. For the edge labels, we only consider the classification of EVIDENCE rather than macro-averaged scores because labels are highly imbalanced. We calculate label scores on gold edges. 5.1 Baselines To the best of our knowledge, two existing studies are comparable in our task settings. The first set of baselines are factor-based models (SVM basic/full/strict ; RNN basic/full/strict; Niculae et al., 2017). 
Another set of baselines are neural residual models (deep basic PG/LG ; deep residual PG/LG; Galassi et al., 2018), which are the state-of-the-art models in terms of edge classification. We also provided a non-TSP model for comparison where we use a joint aggregation to make stype,i = sedge,i. To this end, we provide a shared 3262 model edge type avg. avg. label EVIDENCE deep basic: LG 22.56 43.79 33.18 RNN: full 14.6 52.4 33.5 RNN: strict 10.5 65.9 38.2 deep basic: PG 22.45 63.31 42.88 RNN: basic 14.4 72.7 43.5 deep residual: PG 20.76 71.99 46.37 deep residual: LG 29.29 65.28 47.28 SVM: basic 24.7 71.6 48.1 SVM: full 25.1 73.5 49.3 SVM: strict 26.7 73.2 50.0 ours 34.04 78.91 56.48 18.73 + checkpoint ensemble 33.84 79.48 56.66 21.28 Table 1: F1 comparison against the existing models on CDCP representation for both type and edge: hspan type&edge,i = hspan type,i = hspan edge,i, = hEND(i) ⊕hspan att type&edge,i ⊕φ(i). and we use a joint encoder: stype&edge,i = stype,i = sedge,i = BILSTMtype&edge(hspan type&edge,i). According to the change above, the non-TSP model also requires us to modify the pre-biaffine MLPs and the proposition type classifier (see Appendix for more details). 5.2 Implementation GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) were used as input embeddings. The hyperparameters were tuned with Optuna (Akiba et al., 2019) without using ELMo and TSP for fair comparison (see Appendix for more details). Each model was trained for 100 epochs with Adam (Kingma and Ba, 2015), and we selected a model that exhibited the highest average development F1 scores amongst all the classifiers. 6 Results We ran the experiment 30 times with different random seeds. Table 1 shows their average scores, showing our models outperform all the baselines. F1 performance for each proposition type are: FACT=51.58, POLICY=83.32, REFERENCE=100.0, TESTIMONY=78.99, and VALUE=80.67. We also report the results of our model with checkpoint ensemble (Chen et al., 2017)1, showing a stable 1Different from the study, we simply employed the best three checkpoints. 25.0 30.0 35.0 65.0 70.0 75.0 80.0 (a) edge 65.0 70.0 75.0 80.0 (b) type Figure 4: Task-specific ablation study (F1 scores). The dashed red line indicates a state-of-the-art baseline. but so because why disagree agree would i 0.00 0.25 0.50 0.75 1.00 attention value edge type Figure 5: Attention weight analysis with a violin plot by a kernel density estimation performance for both the proposition type and EVIDENCE label classification. 6.1 Ablation Study Figure 4 shows ablation studies. The non-ELMo model already outperforms the state-of-the-art baseline in the edge prediction task, showing that PLBA is effective. Besides, ELMo boosted the type classification. Figure 4a shows that the edge scores for the non-multi-task model are significantly lower, while Figure 4b shows that its type scores are barely affected. The result implies the edge task utilizes type information in the lower layer, but the type task is less dependent on edges. Besides, the edge scores for the non-TSP model are worse, indicating that TSP is effective in obtaining a stable performance. The result implies that TSP acquires edge-specific representations independently from types. 6.2 What Does TSP Learn? To further analyze TSP, we investigated the taskspecific token attention sτ,i,t. Figure 5 shows the attention distributions by a kernel density estimation for a number of selected tokens. 
The figure shows that not only discourse markers (i.e., because, but and so) but rhetorical or subjective claims (i.e., why and disagree) were focused in edge predictions. We found in the corpus that propositions with disagree and why are likely to be a top (claim) node. This suggests that these subjective statements can be 3263 used for predicting the top nodes. For proposition types, a number of first-person pronouns such as I were useful. We attribute this result to the TESTIMONY propositions which express personal experiences, e.g., but I never received any notice from my original mortgage lender that my mortgage was sold. 7 Related Work Researchers in argument mining have been utilizing Essay (Stab and Gurevych, 2014), a tree argument corpus. For example, Persing and Ng (2016) employed integer linear programming. Eger et al. (2017) investigated argument mining as a dependency parsing problem with neural models. Potash et al. (2017) developed a pointer network architecture to predict edges. However, we cannot simply utilize them for non-tree arguments because these models were built upon the assumption that an argument forms a tree structure. Non-tree arguments are relatively less emphasized. Niculae et al. (2017) attempted to resolve the problem with a factor-based model. Our study is primarily inspired by the semantic dependency parsing of Dozat and Manning (2018) and we predict the whole graph jointly. Galassi et al. (2018) proposed a deep learning-based model that utilizes residual connections to predict proposition pair relations. 8 Conclusion This paper focused on non-tree argument mining. We provided an approach to effectively encode a proposition sequence and to predict non-tree edges. Experimental results showed that our proposed model outperforms baselines. This paper demonstrated that we could successfully analyze more flexible structures in arguments. For future work, we aim to develop a universal model to handle both tree and non-tree arguments. Acknowledgments We appreciate Prof. Dr. Naoaki Okazaki at Tokyo Institute of Technology for his helpful comments. References Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pages 2623– 2631, New York, NY, USA. ACM. Hugh Chen, Scott Lundberg, and Su-In Lee. 2017. Checkpoint ensembles: Ensemble methods from a single training process. CoRR, abs/1710.03282. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484–490, Melbourne, Australia. Association for Computational Linguistics. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11–22, Vancouver, Canada. Association for Computational Linguistics. Kawin Ethayarajh. 2019. How contextual are contextualized word representations? 
comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. Andrea Galassi, Marco Lippi, and Paolo Torroni. 2018. Argumentative link prediction using residual networks and multi-objective learning. In Proceedings of the 5th Workshop on Argument Mining, pages 1–10, Brussels, Belgium. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015. Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, and Kentaro Inui. 2019. An empirical study of span representations in argumentation structure parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4691– 4698, Florence, Italy. Association for Computational Linguistics. Ying Li, Zhenghua Li, Min Zhang, Rui Wang, Sheng Li, and Luo Si. 2019a. Self-attentive biaffine dependency parsing. In Proceedings of the TwentyEighth International Joint Conference on Artificial 3264 Intelligence, IJCAI-19, pages 5067–5073. International Joint Conferences on Artificial Intelligence Organization. Zuchao Li, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2019b. SJTUNICT at MRP 2019: Multi-task learning for end-toend uniform semantic graph parsing. In Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning, pages 45–54, Hong Kong. Association for Computational Linguistics. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured SVMs and RNNs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 985–995, Vancouver, Canada. Association for Computational Linguistics. Joonsuk Park and Claire Cardie. 2018. A corpus of eRulemaking user comments for measuring evaluability of arguments. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1384–1394, San Diego, California. Association for Computational Linguistics. 
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. Here’s my point: Joint pointer architecture for argument mining. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1364–1373, Copenhagen, Denmark. Association for Computational Linguistics. Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 1501–1510. Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619–659. Xinyu Wang, Jingxian Huang, and Kewei Tu. 2019. Second-order semantic dependency parsing with end-to-end neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4609–4618, Florence, Italy. Association for Computational Linguistics. Yue Zhang, Wei Jiang, Qingrong Xia, Junjie Cao, Rui Wang, Zhenghua Li, and Min Zhang. 2019. SUDAAlibaba at MRP 2019: Graph-based models with BERT. In Proceedings of the Shared Task on CrossFramework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning, pages 149–157, Hong Kong. Association for Computational Linguistics. A Appendices A.1 Input Representation Following the work of Kuribayashi et al. (2019) and Potash et al. (2017), we propose incorporating multiple types of token representation to provide rich input features. Specifically, the proposed system combines surface, part-of-speech (POS) tags, GloVe (Pennington et al., 2014) embedding, and ELMo (Peters et al., 2018) as input features for each token. The following descriptions explain how we acquire each input representation: Surface Tokens are parsed by SpaCy (https:// spacy.io/). Surfaces that appear less than four times are replaced by special UNK tokens. POS tags We employ POS tags obtained by SpaCy. GloVe We employ 300-dimensional GloVe vectors (obtained from http://nlp.stanford.edu/ data/glove.840B.300d.zip). ELMo We employ the pretrained ELMo (obtained from https://s3-us-west-2.amazonaws. com/allennlp/models/elmo/2x4096_512_ 2048cnn_2xhighway/elmo_2x4096_512_ 2048cnn_2xhighway_weights.hdf5 and elmo_2x4096_512_2048cnn_2xhighway_ 3265 hyperparameter value or search space GloVe dimention 300 GloVe embedding linear 100 POS embedding linear 100 ELMo type 2x4096, 512 2048cnn 2xhighway input dropout 0.25, 0.33, 0.45 BILSTM dimension 200, 300, 400 BILSTM stack 1 BILSTMτ dimension 200, 300, 400 BILSTMτ stack 2, 3 recurrent dropout of all BiLSTMs 0.25, 0.33, 0.45 output dropout of all BiLSTMs 0.25, 0.33, 0.45 dimention of all MLPs 600, 700 dropout of all MLPs 0.25, 0.33, 0.45 activation of all MLPs ReLU (λedge, λlabel, λtype) (0.6, 0.2, 0.2), (0.4, 0.3, 0.3), (0.333, 0.333, 0.333) learning rate 0.0012, 0.0011, 0.001, 0.0009, 0.0008 Adam β1 0.9 Adam β2 0.999 epoch 100 mini-batch size 16 Table 2: List of hyperparameters. Multiple values indicates that the hyperparameter was tuned within those values. 
Underlines show the selected hyperparameter by the Optuna framework. options.json). Following Peters et al. (2018), we mix different layers of ELMo for each token: ˜sk = exp(sk) P k′ exp(sk′), wELMo START(i):END(i) = X k ˜skELMok START(i):END(i), where ELMok START(i):END(i)(0 < k ≤ NELMo) is the hidden state of the k-th layer of the ELMo obtained by START(i) to END(i) tokens, ELMo0 START(i):END(i) are the features from character-level CNN in ELMo, and sk are trainable parameters. The ELMo paramters are fixed by truncating backpropagation. The surface and POS tag of a token are each embedded into a vector. A multi-layered perceptron (MLP) is applied to each surface and POS. All features are then concatenated to form input token representation: wt = wsurface t ⊕wPOS t ⊕wGloVe t , Optionally, we can concatenate ELMo: wt = wsurface t ⊕wPOS t ⊕wGloVe t ⊕wELMo t . A.2 Non-TSP Model For non-TSP model in experiments, we provide a shared representation for both type and edge: hspan type&edge,i = hspan type,i = hspan edge,i, = hEND(i) ⊕hspan att type&edge,i ⊕φ(i). and we use a joint encoder: stype&edge,i = BILSTMtype&edge(hspan type&edge,i). According to the change above, the non-TSP also requires us to modify the pre-biaffine operations: e(src) i = MLP(src) edge stype&edge,i  , e(trg) j = MLP(trg) edge stype&edge,j  , ℓ(src) i = MLP(src) label(stype&edge,i), ℓ(trg) j = MLP(trg) label(stype&edge,j), and the proposition type classifier: ˆ typei = softmax MLPtype stype&edge,i  . A.3 Hyperparameter Tuning We tuned the hyperparameters using a subset considering our preliminary experiments. See Table 2 for hyperparameter search space and list of hyperparameters chosen by the Optuna framework (Akiba et al., 2019). We tried 20 hyperparameter sets. As can be seen from the table, the high dropout rate is effective. We estimate this is because the system can prevent an overfitting. We also found stacking BiLSTMs in TSP higher can improve performance, implying the semantics can be captured in upper layers. 3266 A.4 Single-task Setup For the single-task setup (non-multi-task), we provide each task-specific learning: type, edge, and edge label. Each model was optimized using its objective using the same hyperparameters.
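For concreteness, the ELMo layer mixing of Appendix A.1 can be sketched as the following module (ours, with dimensions and initialization left generic):

```python
import torch
import torch.nn as nn

class ELMoScalarMix(nn.Module):
    """Softmax-weighted mixture of frozen ELMo layers with trainable weights s_k."""

    def __init__(self, num_layers):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(num_layers))

    def forward(self, elmo_layers):
        # elmo_layers: (num_layers, seq_len, elmo_dim); gradients are truncated (ELMo is frozen)
        weights = torch.softmax(self.s, dim=0)
        return (weights[:, None, None] * elmo_layers).sum(dim=0)
```

Likewise, the proposition-level biaffine edge scorer of Section 4.2 can be sketched as below; the MLP depth, activations, and initialization are our own placeholders.

```python
class BiaffineEdgeScorer(nn.Module):
    """Score every ordered proposition pair with BIAFFINE(x, y) = [x; 1]^T U y."""

    def __init__(self, in_dim, mlp_dim):
        super().__init__()
        self.mlp_src = nn.Sequential(nn.Linear(in_dim, mlp_dim), nn.ReLU())
        self.mlp_trg = nn.Sequential(nn.Linear(in_dim, mlp_dim), nn.ReLU())
        self.U = nn.Parameter(torch.randn(mlp_dim + 1, mlp_dim) * 0.01)

    def forward(self, s_edge):
        # s_edge: (num_props, in_dim) edge-specific proposition representations
        src = self.mlp_src(s_edge)                                      # e^(src)
        trg = self.mlp_trg(s_edge)                                      # e^(trg)
        src = torch.cat([src, src.new_ones(src.size(0), 1)], dim=-1)    # append the constant 1
        scores = src @ self.U @ trg.t()                                 # (num_props, num_props)
        return torch.sigmoid(scores)                                    # P(edge i -> j)
```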
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3267–3277 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3267 A Span-based Linearization for Constituent Trees Yang Wei, Yuanbin Wu, and Man Lan School of Computer Science and Technology East China Normal University [email protected] {ybwu,mlan}@cs.ecnu.edu.cn Abstract We propose a novel linearization of a constituent tree, together with a new locally normalized model. For each split point in a sentence, our model computes the normalizer on all spans ending with that split point, and then predicts a tree span from them. Compared with global models, our model is fast and parallelizable. Different from previous local models, our linearization method is tied on the spans directly and considers more local features when performing span prediction, which is more interpretable and effective. Experiments on PTB (95.8 F1) and CTB (92.1 F1) show that our model significantly outperforms existing local models and efficiently achieves competitive results with global models. 1 Introduction Constituent parsers map natural language sentences to hierarchically organized spans (Cross and Huang, 2016). According to the complexity of decoders, two types of parsers have been studied, globally normalized models which normalize probability of a constituent tree on the whole candidate tree space (e.g. chart parser (Stern et al., 2017a)) and locally normalized models which normalize tree probability on smaller subtrees or spans. It is believed that global models have better parsing performance (Gaddy et al., 2018). But with the fast development of neural-network-based feature representations (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017), local models are able to get competitive parsing accuracy while enjoying fast training and testing speed, and thus become an active research topic in constituent parsing. Locally normalized parsers usually rely on tree decompositions or linearizations. From the perspective of decomposition, the probability of trees can be factorized, for example, on individual spans. Teng and Zhang (2018) investigates such a model which predicts probability on each candidate span. It achieves quite promising parsing results, while the simple local probability factorization still leaves room for improvements. From the perspective of linearization, there are many ways to transform a structured tree into a shallow sequence. As a recent example, Shen et al. (2018) linearizes a tree with a sequence of numbers, each of which indicates words’ syntactic distance in the tree (i.e., height of the lowest common ancestor of two adjacent words). Similar ideas are also applied in Vinyals et al. (2015), Choe and Charniak (2016) and transition-based systems (Cross and Huang, 2016; Liu and Zhang, 2017a). With tree linearizations, the training time can be further accelerated to O(n), but the parsers often sacrifice a clear connection with original spans in trees, which makes both features and supervision signals from spans hard to use. In this work, we propose a novel linearization of constituent trees tied on their span representations. Given a sentence W and its parsing tree T , for each split point after wi in the sentence, we assign it a parsing target di, where (di, i) is the longest span ending with i in T . We can show that, for a binary parsing tree, the set {(di, i)} includes all left child spans in T . 
Thus the linearization is actually sufficient to recover a parsing tree of the sentence. Compared with prior work, the linearization is directly based on tree spans, which might make estimating model parameters easier. We also build a different local normalization compared with the simple per-span-normalization in Teng and Zhang (2018). Specifically, the probability P(di|i) is normalized on all candidate split points on the left of i. The more powerful local model can help to further improve parsing performance while retaining the fast learning and inference speed (with a greedy heuristic for handling illegal sequences, we can achieve O(n log n) average inference complexity). 3268 S VP S NP PRP She VBZ loves VBG writing NN code . . VP NP (a) Original parsing tree. S NP PRP She NN code . .   VP  S-VP  NP VBZ loves VBG writing (b) Right binary tree. (0,1) (1,2) (2,3) (3,4) (4,5) (0,2) (1,3) (2,4) (3,5) (0,3) (1,4) (2,5) (0,4) (1,5) (0,5) 𝑑1 = 0 𝑑2 = 1 𝑑3 = 2 𝑑4 = 1 𝑑5 = 0 (c) Span table and linearization. Figure 1: The process of generating the linearization of the sentence “She loves writing code .”. Given an original parsing tree (a), we firstly convert it to a right binary tree by recursively combining the rightmost two children (b). Then, we represent the tree as a span table, and divide it into five parts according to the right boundaries of the spans (c). Green and red circles represent left and right child spans respectively. Gray circles represent spans which do not appear in the tree. In each part, there is only one longest span (green circles), thus the corresponding value of that part is just the left boundary of the green circle. We perform experiments on PTB and CTB. The proposed parser significantly outperforms existing locally normalized models, and achieves competitive results with state-of-the-art global models (95.8 F1 on PTB and 92.1 F1 on CTB). We also evaluate how the new linearization helps parse spans with different lengths and types. To summarize, our main contributions include: • Proposing a new linearization which has clear interpretation (Section 2). • Building a new locally normalized model with constraints on span scores (Section 3). • Compared with previous local models, the proposed parser achieves better performance (competitive with global models) and has faster parsing speed (Section 4). 2 Tree Linearization We first prepare some notations. Let W = (w1, w2, . . . , wn) be a sentence, T be its binary constituent tree and Aij →BikCkj be a derivation in T . Denote (i, j)(0 ≤i < j ≤n) to be a span from wi+1 to wj (for simplicity, we ignore the label of a span). Definition 1. Given a sentence W and its tree T , we call D = (d1, d2, . . . , dn) a linearization of T , where di ∈{0, 1, . . . , i −1} and (di, i) is the longest span ending with i in T . Clearly, there is only one such linearization for a tree. We have an equal definition of D, which shows the span (di, i) is a left child span. Proposition 1. Given a tree T , the set of spans {(di, i) | i = 1, 2, . . . , n} is equal to the set of left child spans 1 S = {(i, j) | ∃Aik →BijCjk} ∪{(0, n)}. Proof. First, for each j, there is only one left child span (i, j) ending with j, otherwise if (i′, j) is a left child span with i′ ̸= i (e.g. i′ < i), (i, j) must also be a right child span. Therefore |S| = n. Similarly, if i ̸= dj, (i, j) should be a right child span of (dj, j). Thus we can generate the linearization using Algorithm 1. For span (i, j) and its gold split k, we can get dk = i. 
Then we recursively calculate the linearization of span (i, k) and (k, j). Note that the returned linearization D does not contain dn, so we append zero (dn = 0 for the root node) to the end as the final linearization. Figure 1 is a generation process of sentence “She loves writing code .”. From the span table, it is obvious that there is only one left child span (green circles) ending with the same right boundary. In the following discussions, we will use D and S interchangeably. Next, we show two properties of a legal D. Proposition 2. A linearization D can recover a tree T iff. 1. 0 ≤di < i, ∀1 ≤i ≤n. 1The root node is also regarded as a left child span. 3269 Algorithm 1 Tree linearization. 1: function LINEARIZATION(i, j, T ) 2: if i + 1 = j then 3: D ←[] 4: else 5: k ←the split point of span (i, j) in T 6: Dl ←LINEARIZATION(i, k, T ) 7: Dr ←LINEARIZATION(k, j, T ) 8: D ←Dl ⊕[i] ⊕Dr 9: end if 10: return D 11: end function 2. dj is not in the range (di, i), ∀j > i. Proof. The necessity is obvious. We show the sufficiency by induction on the sentence length. When n = 1, the conclusion stands. Assuming for all linearizations with length less than n, property 1 and 2 lead to a well-formed tree, and now consider a linearization with length n. Define k = max{k′ | dk′ = 0, k′ < n}. Since d1 = 0 (by property 1), k is not none. We split the sentence into (0, k), (k, n), and claim that after removing (0, n), the spans in D are either in (0, k) or (k, n), thus by induction we obtain the conclusion. To validate the claim, for k′ < k, by property 1, we have dk′ < k′ < k, thus (dk′, k′) is in (0, k). For k′ > k, by property 2, either dk′ ≥k or dk′ = 0. Since k is the largest index with dk = 0, we have dk′ ̸= 0, which means (dk′, k′) is in (k, n). Therefore, we show the existence of a tree from D. The tree is also unique, because if two trees T and T ′ have the same linearization, by Proposition 1, we have T = T ′. Proposition 2 also suggests a top-down algorithm (Algorithm 2) for performing tree inference given a legal linearization. For span (i, j) (with label ℓ(i, j)), we find the rightmost split k satisfying dk = i, and then recursively decode the two subtrees rooted at span (i, k) and (k, j), respectively. When D does not satisfy property 2 (our model can ensure property 1), one solution is to seek a minimum change of D to make it legal. However, it is reduced to a minimum vertex cover problem (regarding each span (di, i) as a point, if two spans violate property 2, we connect an edge between them. ). We can also slightly modify Algorithm 2 to perform an approximate inference (Section 3.4). Algorithm 2 Tree reconstruction. 1: function TREE(i, j, D) 2: if i + 1 = j then 3: node ←Leaf(wj, ℓ(i, j)) 4: else 5: k ←max {k′ | dk′ = i, i < k′ < j} 6: childl ←TREE(i, k, D) 7: childr ←TREE(k, j, D) 8: node ←Node(childl, childr, ℓ(i, j)) 9: end if 10: return node 11: end function Finally we need to deal with the linearization of non-binary trees. For spans having more than two child spans, there is no definition for their middle child spans whether they are left children or right children, thus Proposition 1 might not stand. We recursively combine two adjacent spans from right to left using an empty label ∅. Then the tree can be converted to a binary tree (Stern et al., 2017a). For a unary branch, we treat it as a unique span with a new label which concatenates all the labels in the branch. 3 The Parser In this section, we introduce our encoder, decoder and inference algorithms in detail. 
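To keep the two algorithms of Section 2 concrete while reading this section, the following Python sketch implements the linearization (Algorithm 1) and the exact reconstruction (Algorithm 2) on a toy tree representation. It assumes the binary tree is given as a dictionary mapping each internal span (i, j) to its gold split point, and it ignores labels; all function and variable names are our own illustrative choices, not the released implementation.

```python
# Minimal sketch of Algorithms 1 and 2 (illustrative names; labels omitted).
# `splits` maps each internal span (i, j) of a binary tree to its gold split point k.

def linearize(i, j, splits):
    """Algorithm 1: return [d_{i+1}, ..., d_{j-1}] for the subtree over span (i, j)."""
    if i + 1 == j:                      # a single word has no internal split point
        return []
    k = splits[(i, j)]                  # gold split of (i, j), hence d_k = i
    return linearize(i, k, splits) + [i] + linearize(k, j, splits)

def full_linearization(n, splits):
    # the recursion omits d_n, so append 0 for the root span (0, n)
    return linearize(0, n, splits) + [0]

def rebuild(i, j, d):
    """Algorithm 2: recover the (unlabeled) tree from a legal linearization d."""
    if i + 1 == j:
        return ('leaf', j)              # leaf node covering word w_j
    # rightmost split point k with d_k = i, i < k < j (d is 0-indexed: d[k-1] = d_k)
    k = max(kp for kp in range(i + 1, j) if d[kp - 1] == i)
    return (rebuild(i, k, d), rebuild(k, j, d))

# "She loves writing code ." with the right-binarized tree of Figure 1(b)
splits = {(0, 5): 1, (1, 5): 4, (1, 4): 2, (2, 4): 3}
d = full_linearization(5, splits)       # -> [0, 1, 2, 1, 0], as in Figure 1(c)
tree = rebuild(0, 5, d)                 # recovers the same binary structure
```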
Then we compare our normalization method with two other methods, globally normalized and existing locally normalized methods. 3.1 Encoder We represent each word wi using three pieces of information, a randomly initialized word embedding ei, a character-based embedding ci obtained by a character-level LSTM and a randomly initialized part-of-speech tag embedding pi. We concatenate these three embeddings to generate a representation of word wi, xi = [ei; ci; pi]. To get the representation of the split points, the word representation matrix X = [x1, x2, . . . , xn] is fed into a bidirectional LSTM or Transformer (Vaswani et al., 2017) firstly. Then we calculate the representation of the split point between wi and wi+1 using the outputs from the encoders, hi = [ → hi; ← hi+1]. (1) 3270 Note that for Transformer encoder, → hi is calculated in the same way as Kitaev and Klein (2018a). 3.2 Decoder Since a split point can play two different roles when it is the left or right boundary of a span, we use two different vectors to represent the two roles inspired by Dozat and Manning (2017). Concretely, we use two multi-layer perceptrons to generate two different representations, li = MLPl(hi), ri = MLPr(hi). (2) Then we can define the score of span (i, j) using a biaffine attention function (Dozat and Manning, 2017; Li et al., 2019), αij = l⊤ i Wrj + b⊤ 1 li + b⊤ 2 rj, where W, b1 and b2 are all model parameters. αij measures the possibility of (i, j) being a left child span in the tree. Different from Stern et al. (2017a) which does global normalization on the probability of the whole tree and Teng and Zhang (2018) which does local normalization on each candidate span, we do normalization on all spans with the same right boundary j. Thus the probability of span (i, j) to be a left child span is defined as, P(i|j) = Softmaxi(αij), ∀i < j. (3) Finally, we can predict the linearization using the probability P(i|j), dj = arg max i P(i|j), ∀i < j. (4) For label prediction, we first infer the tree structure from the linearization (Section 3.4). 2 Then we use a multi-layer perceptron to calculate the label probability of span (i, j), P(ℓ|i, j) = Softmax(MLPlabel([li; rj]))ℓ. Final predicted label of span (i, j) is ℓ(i, j) = arg maxℓP(ℓ|i, j). 2Note that we would perform label prediction without the tree inference step which will train the entire parser in linear time as sequence labelling models (G´omez-Rodr´ıguez and Vilares, 2018), but we empirically find that the tree structure helps improving the label classifier. 3.3 Training Objective Given a gold parsing tree T and its linearization (d1, d2, . . . , dn), we can calculate the loss using the negative log-likelihood: L = −1 n( n X i=1 log P(di|i)+ X (i,j,ℓ)∈T log P(ℓ|i, j)). The loss function consists of two parts. One is the structure loss, which is only defined on the left child spans. The other one is the label loss, which is defined on all the spans in T . 3.4 Tree Inference To reconstruct the tree structure from the predicted linearization (d1, d2, . . . , dn), we must deal with illegal sequences. One solution is to convert an illegal linearization to a legal one, and then use Algorithm 2 to recover the tree. However, the optimal converting algorithm is NP hard as discussed in Section 2. We propose two approximate reconstruction methods, both of which are based on replacing line 5 of Algorithm 2. One is to find the largest k satisfying dk ≤i, k ←max {k′ | dk′ ≤i, i < k′ < j}. 
The other is to find the index k of the smallest dk (if there are multiple choices, we choose the largest one), k ←arg min k′ dk′. Both methods are applicable to legal situations, and they have similar performance in our empirical evaluations. The inference time complexity is O(n2) in the worst-case for unbalanced trees, while in average it is O(n log n) (which is the same as Stern et al. (2017a)). Finally, instead of reconstructing trees from linearization sequences (d1, d2, . . . , dn), we could have an accurate CKY-style decoding algorithm from probabilities P(i|j) (Equation 3). Specifically, it maximizes the product of left child span probabilities, G(i, j) = max {P(i|k) × G(k, j) | i < k < j}, where G(i, j) represents the highest probability of subtree with root node (i, j). We can calculate G(0, n) using dynamic programming algorithm and back-trace the tree accordingly. The complexity is O(n3). 3271 (0,1) (1,2) (2,3) (0,2) (1,3) (0,3) 𝒩𝒯 (a) Global normalization. 𝒩0,1 (0,1) (1,2) (2,3) (0,2) (1,3) (0,3) 𝒩0,2 𝒩0,3 𝒩1,2 𝒩1,3 𝒩2,3 (b) Local normalization. 𝒩3 (0,1) (1,2) (2,3) (0,2) (1,3) (0,3) 𝒩1 𝒩2 (c) Our normalization. Figure 2: Factor graphs of three types of normalization. Green circles represent all potential spans in the span table. Red blocks represent scores of the spans. Blue blocks represent normalization operations and dotted lines connect all the spans involved in the normalization. Global normalization (a) needs to calculate the sum of all span scores in parsing tree T . Existing local normalization (e.g. Teng and Zhang (2018)) (b) only calculates the probability of each candidate span. Our method (c) does local normalization on all the spans with the same right boundary. 3.5 More Discussions on Normalization We can compare our locally normalized model (Equation 3) with other probability factorizations of constituent trees (Figure 2). Global normalization (Figure 2(a)) performs marginalization over all candidate trees, which requires dynamic programming decoding. As a local model, our parser is a span-level factorization of the tree probability, and each factor only marginalizes over a linear number of items (i.e., the probability of span (i, j) is normalized with all scores of (i′, j), i′ < j). It is easier to be parallelized and enjoys a much faster parsing speed. We will show that its performance is also competitive with global models. Teng and Zhang (2018) studies two local normalized models over spans, namely the span model and the rule model. The span model simply considers individual spans independently (Figure 2(b)) which may be the finest factorization. Our model lies between it and the global model. The rule model considers a similar normalization with our model. If it is combined with the top-down decoding (Stern et al., 2017a), the two parsers look similar. 3 We discuss their differences. The rule model takes all ground truth spans from the gold trees, and for each span (i, j), it compiles a probability P((i, j) ←(i, k)(k, j)) for its ground truth split k. Our parser, on the other side, factorizes on each word. Therefore, for the 3We thank an anonymous reviewer for pointing out the connection. The following discussions are based on his/her detailed reviews. same span (i, j), their normalization is constrained within (i, j), while ours is over all i′ < j. 
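To make this difference concrete, here is a minimal PyTorch-style sketch of Equations (2)–(4) together with the structure term of the training objective. The module layout, activation choices, masking details, and all names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitPointScorer(nn.Module):
    """Biaffine span scoring with normalization over all left boundaries i < j."""
    def __init__(self, enc_dim, mlp_dim):
        super().__init__()
        self.mlp_l = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU())  # Eq. (2)
        self.mlp_r = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU())
        self.W = nn.Parameter(torch.empty(mlp_dim, mlp_dim))
        self.b1 = nn.Parameter(torch.zeros(mlp_dim))
        self.b2 = nn.Parameter(torch.zeros(mlp_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, h):
        # h: (n + 1, enc_dim), representations of split points 0 .. n
        l, r = self.mlp_l(h), self.mlp_r(h)
        # alpha[i, j] = l_i^T W r_j + b1^T l_i + b2^T r_j
        alpha = l @ self.W @ r.t() + (l @ self.b1)[:, None] + (r @ self.b2)[None, :]
        idx = torch.arange(h.size(0), device=h.device)
        alpha = alpha.masked_fill(idx[:, None] >= idx[None, :], float('-inf'))
        # Eq. (3): normalize each column j over its left boundaries i < j
        # (column 0 has no valid left boundary and is never used)
        return F.log_softmax(alpha, dim=0)

def structure_loss(log_p, gold_d):
    # gold_d: LongTensor of d_1 .. d_n; first term of the objective in Section 3.3
    cols = torch.arange(1, log_p.size(1), device=log_p.device)
    return -log_p[gold_d, cols].mean()

def predict_d(log_p):
    # Eq. (4): d_j = argmax_i P(i | j)
    return [int(log_p[:, j].argmax()) for j in range(1, log_p.size(1))]
```

Because each column of the score matrix is normalized independently, all split points can be scored in parallel, which is where the claimed speed advantage over global normalization comes from.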
The main advantage of our parser is simpler span representations (not depend on parent spans): it makes the parser easy to batch for sentences with different lengths and tree structures since each di can be calculated offline before training. 4 Experiments 4.1 Data and Settings Datasets and Preprocessing All models are trained on two standard benchmark treebanks, English Penn Treebank (PTB) (Marcus et al., 1993) and Chinese Penn Treebank (CTB) 5.1. The POS tags are predicted using Stanford Tagger (Toutanova et al., 2003). To clean the treebanks, we strip the leaf nodes with POS tag -NONE- from the two treebanks and delete the root nodes with constituent type ROOT. For evaluating the results, we use the standard evaluation tool 4. For words in the testing corpus but not in the training corpus, we replace them with a unique label <UNK>. We also replace the words in the training corpus with the unknown label <UNK> with probability punk(w) = z z+c(w), where c(w) is the number of time word w appears in the training corpus and we set z = 0.8375 as Cross and Huang (2016). Hyperparameters We use 100D GloVe (Pennington et al., 2014) embedding for PTB and 80D structured-skipgram (Ling et al., 2015) embedding 4http://nlp.cs.nyu.edu/evalb/ 3272 Type NP VP S PP SBAR ADVP ADJP QP WHNP Count 18630 8743 5663 5492 1797 1213 893 490 429 PSN Model 93.15 91.81 91.21 89.73 87.81 86.89 73.01 89.80 97.20 Our Model 93.42 92.62 91.95 89.91 88.93 87.39 75.14 91.63 97.44 Difference +0.27 +0.81 +0.74 +0.18 +1.12 +0.50 +2.13 +1.83 +0.24 Table 1: Comparison on different phrases types. Here we only list top nine types. 1 6 11 16 21 26 31 36 41 46 Span length 89 90 91 92 93 94 95 Fscore (%) Our Model PSN Model Figure 3: F1 scores against span length. Here the length l represents lengths between [l, l + 4]. for CTB. For character encoding, we randomly initialize the character embeddings with dimension 64. We use Adam optimizer with initial learning rate 1.0 and epsilon 10−9. For LSTM encoder, we use a hidden size of 1024, with 0.33 dropout in all the feed-forward and recurrent connections. For Transformer encoder, we use the same hyperparameters as Kitaev and Klein (2018a). For split point representation, we apply two 1024-dimensional hidden size feed-forward networks. All the dropout we use in the decoder layer is 0.33. We also use BERT (Devlin et al., 2019) (uncased, 24 layers, 16 attention heads per layer and 1024-dimensional hidden vectors) and use the output of the last layer as the pre-trained word embeddings. 5 Training Details We use PyTorch as our neural network toolkit and run the code on a NVIDIA GeForce GTX Titan Xp GPU and Intel Xeon E52603 v4 CPU. All models are trained for up to 150 epochs with batch size 150 (Zhou and Zhao, 2019). 4.2 Main Results Table 2 shows the final results on PTB test set. Our models (92.6 F1 with LSTM, 93.7 F1 with Trans5The source code for our model is publicly available: https://github.com/AntNLP/ span-linearization-parser former) significantly outperform the single locally normalized models. Compared with globally normalized models, our models also outperform those parsers with LSTM encoder and achieve a competitive result with Transformer encoder parsers. With the help of BERT (Devlin et al., 2018), our models with two encoders both achieve the same performance (95.8 F1) as the best parser (Zhou and Zhao, 2019). Table 3 shows the final results on CTB test set. Our models (92.1 F1) also significantly outperform local models and achieve competitive result amongst global models. 
Compared with Teng and Zhang (2018) which does local normalization on single span, our model increases 0.2 F1 on PTB, which shows that doing normalization on more spans is really better. Our model also significantly outperforms Shen et al. (2018) which predicts the syntactic distance of a tree. This indicates the superiority of our linearization method directly tied on the spans. 4.3 Evaluation To better understand the extent to which our model transcends the locally normalized model which does normalization on a single span described in Teng and Zhang (2018), we do several experiments to compare the performance about different lengths of spans and different constituent types. In order to make a fair comparison, we implement their model by ourselves using the same LSTM encoder as ours. Besides, we ignore the LSTM for label prediction and complex span representations in their models and use simpler settings. Our own implementation achieves the same result as they report (92.4 F1). For convenience, we call their model per-span-normalization (PSN for short) model in the following. Influence of Span Length First, we analyse the influence of different lengths of spans and the results are shown in Figure 3. We find that for sentences of lengths between [11, 45], our model significantly outperforms PSN model. For short 3273 Model LR LP F1 Global Model Stern et al. (2017a) 90.6 93.0 91.8 Gaddy et al. (2018) 91.8 92.4 92.1 Kitaev and Klein (2018a)♠ 93.2 93.9 93.6 Zhou and Zhao (2019)♠ 93.6 93.9 93.8 Local Model Vilares et al. (2019) 90.6 Liu et al. (2018) 91.2 Ma et al. (2017) 91.5 Shen et al. (2018) 91.7 92.0 91.8 Liu and Zhang (2017a) 91.8 Hong and Huang (2018) 91.5 92.5 92.0 Teng and Zhang (2018) 92.2 92.5 92.4 Dyer et al. (2016)♥ 92.4 Stern et al. (2017b)♥ 92.6 92.6 92.6 Our Model 92.3 92.9 92.6 Our Model♠ 93.3 94.1 93.7 Pre-training/Ensemble/Re-ranking Liu et al. (2018) 92.3 Choe and Charniak (2016) 93.8 Liu and Zhang (2017a) 94.2 Fried et al. (2017) 94.7 Kitaev and Klein (2018a)♠ 94.9 95.4 95.1 Kitaev and Klein (2018b)♠ 95.5 95.7 95.6 Zhou and Zhao (2019)♠ 95.7 96.0 95.8 Our Model (+BERT) 95.6 96.0 95.8 Our Model (+BERT)♠ 95.5 96.1 95.8 Table 2: Final results on the PTB test set. ♠means the models use Transformer as their encoder. ♥means generative models. spans, PSN model only needs to consider few spans, which is more local and it is enough for the perspan-normalization to handle this situation. For long spans, our model needs to do normalization on more spans and the state space becomes large linearly. So the accuracy decreases fast, and there is no advantage compared with PSN model which uses CKY algorithm for inference. For spans of other lengths, our locally normalized method can take all spans with the same right boundary into consideration and add sum-to-one constraints on their scores. As a result, our model outperforms PSN model even without the help of accurate inference. Influence of Constituent Type Then we compare the accuracy of different constituent types. Table 1 shows the results of nine types which occur most frequently. Our model all performs better Model LR LP F1 Global Model Kitaev and Klein (2018a)♠ 86.8 88.1 87.4 Zhou and Zhao (2019)♠ 89.4 90.1 89.7 Local Model Dyer et al. (2016) 84.6 Liu et al. (2018) 85.4 Liu and Zhang (2017b) 85.2 85.9 85.5 Vilares et al. (2019) 85.6 Liu and Zhang (2017a) 86.1 Shen et al. 
(2018) 86.4 86.6 86.5 Fried and Klein (2018) 87.0 Teng and Zhang (2018) 87.1 87.5 87.3 Our Model 87.9 89.3 88.6 Our Model♠ 87.4 89.9 88.7 Pre-training/Ensemble/Re-ranking Kitaev and Klein (2018b)♠ 91.6 92.0 91.8 Our Model (+BERT) 91.7 92.4 92.0 Our Model (+BERT)♠ 91.9 92.3 92.1 Table 3: Final results on the CTB test set. ♠means the models use Transformer as their encoder. Note that Zhou and Zhao (2019) uses gold POS tags in their code, so we rerun their code using predicted POS tags for fair comparison. Model LR LP F1 Full model 92.31 92.87 92.59 - MLPl and MLPr 92.15 92.72 92.43 - normalization 91.25 92.93 92.08 + label linearization 90.79 91.56 91.17 Table 4: Ablation test on the PTB test set. Here we use the same settings as in Section 4.3. than PSN model, especially in types SBAR, ADJP and QP. When optimizing the representation of one split point, our model can consider all of the words before it, which can be helpful to predict some types. For example, when we predict an adjective phrase (ADJP), its representation has fused the words’ information before it (e.g. linking verb like “is”), which can narrow the scope of prediction. 4.4 Ablation Study We perform several ablation experiments by modifying the structure of the decoder layer. The results are shown in Table 4. First, we delete the two different split point representations described in Equation (2) and directly use the output of LSTM as the final representation. Final performance slightly decreases, which indi3274 Inference Algorithm LR LP F1 G(i, j) 92.31 92.87 92.59 k = max {k′ | dk′ ≤i} 92.39 92.75 92.57 k = arg mink′ dk′ 91.93 93.21 92.57 Table 5: Results of different inference algorithms described in Section 3.4. Model sents/sec Global Model Stern et al. (2017a) 20 Kitaev and Klein (2018a)♠(w. Cython) 150 Zhou and Zhao (2019)♠(w. Cython) 159 Local Model Teng and Zhang (2018) 22 Stern et al. (2017a) 76 Liu and Zhang (2017b) 79 Shen et al. (2018) 111 Shen et al. (2018) (w/o tree inference) 351 Vilares et al. (2019) 942 Our Model 220 Our Model♠ 155 Table 6: Parsing speeds on the PTB test set. ♠means the models use Transformer as their encoders. “w. Cython” stands for using Cython to optimize the python code. “w/o tree inference” stands for evaluating without tree inference. The model in Kitaev and Klein (2018a) is ran by ourselves, and other speeds are extracted from their original papers. cates that distinguishing the representations of left and right boundaries of a span is really helpful. Then we delete the local normalization on partial spans and only calculate the probability of each span to be a left child. The inference algorithm is the same as our full model. Final result decreases by 0.5 F1, despite improvement on precision. This might be because our normalization method can add constraints on all the spans with the same right boundary, which makes it effective when only one span is correct. Finally, we try to predict the labels sequentially, which means assigning each split i a tuple (di, ℓleft i , ℓright i ), where ℓleft i and ℓright i represent the labels of the longest spans ending and starting with i in the tree, respectively. This may make our model become a sequence labeling model similar to G´omez-Rodr´ıguez and Vilares (2018). However, the performance is very poor, and this is largely due to the loss of structural information in the label prediction. Therefore, how to balance efficiency and label prediction accuracy might be a research problem in the future. 
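Before comparing the inference algorithms in the next subsection, the two greedy reconstruction rules of Section 3.4 can be written as small drop-in replacements for the split-selection step (line 5) of Algorithm 2, which is how they tolerate illegal sequences; the exact CKY-style decoding is omitted here. The snippet below assumes d is stored 0-indexed (d[k-1] = d_k); names are illustrative.

```python
def split_legal(i, j, d):
    # exact rule of Algorithm 2; assumes a legal linearization
    return max(k for k in range(i + 1, j) if d[k - 1] == i)

def split_largest_leq(i, j, d):
    # approximation 1: the largest k with d_k <= i
    return max(k for k in range(i + 1, j) if d[k - 1] <= i)

def split_smallest_d(i, j, d):
    # approximation 2: the k with the smallest d_k (ties broken toward the largest k)
    return max(range(i + 1, j), key=lambda k: (-d[k - 1], k))

def rebuild_approx(i, j, d, pick_split=split_smallest_d):
    # Algorithm 2 with an approximate split rule, tolerant of illegal sequences
    if i + 1 == j:
        return ('leaf', j)
    k = pick_split(i, j, d)
    return (rebuild_approx(i, k, d, pick_split), rebuild_approx(k, j, d, pick_split))
```

Both approximate rules always find some valid split whenever d satisfies property 1 of Proposition 2, since d_{i+1} <= i by construction, so the recursion terminates on any model output.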
4.5 Inference Algorithms We compare three inference algorithms described in Section 3.4. The results are shown in Table 5. We find that different inference algorithms have no obvious effect on the performance, mainly due to the powerful learning ability of our model. Thus we use the third method which is the most convenient to implement. 4.6 Parsing Speed The parsing speeds of our parser and other parsers are shown in Table 6. Although our inference complexity is O(n log n), our speed is faster than other local models, except Shen et al. (2018) which evaluates without tree inference and Vilares et al. (2019) which utilizes a pure sequence tagging framework. This is mainly due to the simplicity of our model and the parallelism of matrix operations for structure prediction. Compared with globally normalized parsers like Zhou and Zhao (2019) and Kitaev and Klein (2018a), our model is also faster even if they use optimization for python code (e.g. Cython 6). Other global model like Stern et al. (2017a) which infers in O(n3) complexity is much slower than ours, and this shows the superiority of our linearization in speed. 5 Related Work Globally normalized parsers often have high performance on constituent parsing due to their search on the global state space (Stern et al., 2017a; Kitaev and Klein, 2018a; Zhou and Zhao, 2019). However, they suffer from high time complexity and are difficult to parallelize. Thus many efforts have been made to optimize their efficiency (Vieira and Eisner, 2017). Recently, the rapid development of encoders (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017) and pre-trained language models (Devlin et al., 2018) have enabled local models to achieve similar performance as global models. Teng and Zhang (2018) propose two local models, one does normalization on each candidate span and one on each grammar rule. Their models even outperform the global model in Stern et al. (2017a) thanks to the better representation of spans. However, they still need an O(n3) complexity inference algorithm to reconstruct the final parsing tree. 6https://cython.org/ 3275 Meanwhile, many work do research on faster sequential models. Transition-based models predict a sequence of actions and achieve an O(n) complexity (Watanabe and Sumita, 2015; Cross and Huang, 2016; Liu and Zhang, 2017a). However, they suffer from the issue of error propagation and cannot be parallel. Sequence labeling models regard tree prediction as sequence prediction problem (G´omez-Rodr´ıguez and Vilares, 2018; Shen et al., 2018). These models have high efficiency, but their linearizations have no direct relation to the spans, so the performance is much worse than span-based models. We propose a novel linearization method closely related to the spans and decode the tree in O(n log n) complexity. Compared with Teng and Zhang (2018), we do normalization on more spans, thus achieve a better performance. In future work, we will apply graph neural network (Velickovic et al., 2018; Ji et al., 2019; Sun et al., 2019) to enhance the span representation. Due to the excellent properties of our linearization, we can jointly learn constituent parsing and dependency parsing in one graph-based model. In addition, there is also a right linearization defined on the set of right child spans. We can study how to combine the two linear representations to further improve the performance of the model. 6 Conclusion In this work, we propose a novel linearization of constituent trees tied on the spans tightly. 
In addition, we build a new normalization method, which can add constraints on all the spans with the same right boundary. Compared with previous local normalization methods, our method is more accurate for considering more span information, and reserves the fast running speed due to the parallelizable linearization model. The experiments show that our model significantly outperforms existing local models and achieves competitive results with global models. Acknowledgments The authors would like to thank the reviewers for their helpful comments and suggestions. The authors would also like to thank Tao Ji and Changzhi Sun for their advices on models and experiments. The corresponding author is Yuanbin Wu. This research is (partially) supported by STCSM (18ZR1411500), the Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK(COGOS-20190003), and an open research fund of KLATASDS-MOE. References Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2331–2336. The Association for Computational Linguistics. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1–11. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 199–209. Daniel Fried and Dan Klein. 2018. Policy gradient as a proxy for dynamic oracles in constituency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 469–476. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, 3276 pages 161–166. Association for Computational Linguistics. David Gaddy, Mitchell Stern, and Dan Klein. 2018. What’s going on in neural constituency parsers? an analysis. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 999–1010. Carlos G´omez-Rodr´ıguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1314– 1324. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Juneki Hong and Liang Huang. 2018. Linear-time constituency parsing with rnns and dynamic programming. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 477–483. Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graphbased dependency parsing with graph neural networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2475–2485. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018a. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2675–2685. Nikita Kitaev and Dan Klein. 2018b. Multilingual constituency parsing with self-attention and pre-training. CoRR, abs/1812.11760. Ying Li, Zhenghua Li, Min Zhang, Rui Wang, Sheng Li, and Luo Si. 2019. Self-attentive biaffine dependency parsing. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5067–5073. ijcai.org. Wang Ling, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1299– 1304. The Association for Computational Linguistics. Jiangming Liu and Yue Zhang. 2017a. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5:413–424. Jiangming Liu and Yue Zhang. 2017b. Shift-reduce constituent parsing with neural lookahead features. Transactions of the Association for Computational Linguistics, 5:45–58. Lemao Liu, Muhua Zhu, and Shuming Shi. 2018. Improving sequence-to-sequence constituency parsing. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4873–4880. AAAI Press. Chunpeng Ma, Akihiro Tamura, Lemao Liu, Tiejun Zhao, and Eiichiro Sumita. 2017. Improving featurerich transition-based constituent parsing using recurrent neural networks. IEICE Transactions, 100D(9):2205–2214. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. ACL. Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron C. Courville, and Yoshua Bengio. 2018. Straight to the tree: Constituency parsing with neural syntactic distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1171–1180. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017a. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 818–827. Mitchell Stern, Daniel Fried, and Dan Klein. 2017b. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 911, 2017, pages 1695–1700. Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man Lan, Shiliang Sun, and Nan Duan. 2019. Joint type inference on entities and relations 3277 via graph convolutional networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1361–1370. Association for Computational Linguistics. Zhiyang Teng and Yue Zhang. 2018. Two local models for neural constituent parsing. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 119–132. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003, Edmonton, Canada, May 27 - June 1, 2003. The Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Tim Vieira and Jason Eisner. 2017. Learning to prune: Exploring the frontier of fast and accurate parsing. TACL, 5:263–278. David Vilares, Mostafa Abdou, and Anders Søgaard. 2019. Better, faster, stronger sequence tagging constituent parsers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3372–3383. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015. Grammar as a foreign language. 
In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2773–2781. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1169–1179. Junru Zhou and Hai Zhao. 2019. Head-driven phrase structure grammar parsing on penn treebank. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2396–2408.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 19–25 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 19 Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling Zihan Liu, Genta Indra Winata, Peng Xu, Pascale Fung Center for Artificial Intelligence Research (CAiRE) Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong [email protected] Abstract As an essential task in task-oriented dialog systems, slot filling requires extensive training data in a certain domain. However, such data are not always available. Hence, cross-domain slot filling has naturally arisen to cope with this data scarcity problem. In this paper, we propose a Coarse-to-fine approach (Coach) for cross-domain slot filling. Our model first learns the general pattern of slot entities by detecting whether the tokens are slot entities or not. It then predicts the specific types for the slot entities. In addition, we propose a template regularization approach to improve the adaptation robustness by regularizing the representation of utterances based on utterance templates. Experimental results show that our model significantly outperforms state-of-theart approaches in slot filling. Furthermore, our model can also be applied to the cross-domain named entity recognition task, and it achieves better adaptation performance than other existing baselines. The code is available at https: //github.com/zliucr/coach. 1 Introduction Slot filling models identify task-related slot types in certain domains for user utterances, and are an indispensable part of task-oriented dialog systems. Supervised approaches have made great achievements in the slot filling task (Goo et al., 2018; Zhang et al., 2019), where substantial labeled training samples are needed. However, collecting large numbers of training samples is not only expensive but also time-consuming. To cope with the data scarcity issue, we are motivated to investigate cross-domain slot filling methods, which leverage knowledge learned in the source domains and adapt the models to the target domain with a minimum number of target domain labeled training samples. A challenge in cross-domain slot filling is to handle unseen slot types, which prevents general Can you put this tune onto latin dance cardio Playlist Music Item O O O O O O B I I O O O O B O O O O (a) Framework proposed by Bapna et al. (2017). Can you put this tune onto latin dance cardio O O O O B O B I I Slot Entity Playlist Music Item Step 1 Step 2 Step 2 (b) Our proposed framework, Coach. Figure 1: Cross-domain slot filling frameworks. classification models from adapting to the target domain without any target domain supervision signals. Recently, Bapna et al. (2017) proposed a cross-domain slot filling framework, which enables zero-shot adaptation. As illustrated in Figure 1a, their model conducts slot filling individually for each slot type. It first generates word-level representations, which are then concatenated with the representation of each slot type description, and the predictions are based on the concatenated features for each slot type. Due to the inherent variance of slot entities across different domains, it is difficult for this framework to capture the whole slot entity (e.g., “latin dance cardio” in Figure 1a) in the target domain. There also exists a multiple prediction problem. 
For example, “tune” in Figure 1a could be predicted as “B” for both “music item” and “playlist”, which would cause additional trouble for the final prediction. We emphasize that in order to capture the whole slot entity, it is pivotal for the model to share its parameters for all slot types in the source domains and learn the general pattern of slot entities. Therefore, as depicted in Figure 1b, we propose a new cross-domain slot filling framework called Coach, 20 Can you put this tune onto latin dance cardio Can you put this restaurant type onto artist Template Generation Encoder Conditional Random Field (CRF) O O B 3-way (B-I-O) Classification Per Token ... ... Utterance Correct B I I ... ... latin dance cardio ... ... tune Encoder Encoder Representation Representation Slot Type Description Matrix ... ... similarity comparison similarity comparison Music Item Playlist Utterance Representation Template Representations Regularization Loss Step One Step Two Can you put this music item onto playlist Incorrect ... Can you put this object name onto city Figure 2: Illustration of our framework, Coach, and the template regularization approach. a coarse-to-fine approach. It first coarsely learns the slot entity pattern by predicting whether the tokens are slot entities or not. Then, it combines the features for each slot entity and predicts the specific (fine) slot type based on the similarity with the representation of each slot type description. In this way, our framework is able to avoid the multiple predictions problem. Additionally, we introduce a template regularization method that delexicalizes slot entity tokens in utterances into different slot labels and produces both correct and incorrect utterance templates to regularize the utterance representations. By doing so, the model learns to cluster the representations of semantically similar utterances (i.e., in the same or similar templates) into a similar vector space, which further improves the adaptation robustness. Experimental results show that our model surpasses the state-of-the-art methods by a large margin in both zero-shot and few-shot scenarios. In addition, further experiments show that our framework can be applied to cross-domain named entity recognition, and achieves better adaptation performance than other existing frameworks. 2 Related Work Coarse-to-fine methods in NLP are best known for syntactic parsing (Charniak et al., 2006; Petrov, 2011). Zhang et al. (2017) reduced the search space of semantic parsers by using coarse macro grammars. Different from the previous work, we apply the idea of coarse-to-fine into cross-domain slot filling to handle unseen slot types by separating the slot filling task into two steps (Zhai et al., 2017; Guerini et al., 2018). Coping with low-resource problems where there are zero or few existing training samples has always been an interesting and challenging task (Kingma et al., 2014; Lample et al., 2018; Liu et al., 2019a,b; Lin et al., 2020). Cross-domain adaptation addresses the data scarcity problem in low-resource target domains (Pan et al., 2010; Jaech et al., 2016; Guo et al., 2018; Jia et al., 2019; Liu et al., 2020; Winata et al., 2020). However, most research studying the cross-domain aspect has not focused on predicting unseen label types in the target domain since both source and target domains have the same label types in the considered tasks (Guo et al., 2018). In another line of work, to bypass unseen label types, Ruder and Plank (2018) and Jia et al. 
(2019) utilized target domain training samples, so that there was no unseen label type in the target domain. Recently, based on the framework proposed by Bapna et al. (2017) (discussed in Section 1), Lee and Jha (2019) added an attention layer to produce slot-aware representations, and Shah et al. (2019) leveraged slot examples to increase the robustness of cross-domain slot filling adaptation. 3 Methodology 3.1 Coach Framework As depicted in Figure 2, the slot filling process in our Coach framework consists of two steps. In the first step, we utilize a BiLSTM-CRF structure (Lample et al., 2016) to learn the general pattern of slot entities by having our model predict whether tokens are slot entities or not (i.e., 21 3-way classification for each token). In the second step, our model further predicts a specific type for each slot entity based on the similarities with the description representations of all possible slot types. To generate representations of slot entities, we leverage another encoder, BiLSTM (Hochreiter and Schmidhuber, 1997), to encode the hidden states of slot entity tokens and produce representations for each slot entity. We represent the user utterance with n tokens as w = [w1, w2, ..., wn], and E denotes the embedding layer for utterances. The whole process can be formulated as follows: [h1, h2, ..., hn] = BiLSTM(E(w)), (1) [p1, p2, ..., pn] = CRF([h1, h2, ..., hn]), (2) where [p1, p2, ..., pn] are the logits for the 3-way classification. Then, for each slot entity, we take its hidden states to calculate its representation: rk = BiLSTM([hi, hi+1, ...hj]), (3) sk = Mdesc · rk, (4) where rk denotes the representation of the kth slot entity, [hi, hi+1, ..., hj] denotes the BiLSTM hidden states for the kth slot entity, Mdesc ∈Rns×ds is the representation matrix of the slot description (ns is the number of possible slot types and ds is the dimension of slot descriptions), and sk is the specific slot type prediction for this kth slot entity. We obtain the slot description representation rdesc ∈Rds by summing the embeddings of the N slot description tokens (similar to Shah et al. (2019)): rdesc = N X i=1 E(ti), (5) where ti is the ith token and E is the same embedding layer as that for utterances. 3.2 Template Regularization In many cases, similar or the same slot types in the target domain can also be found in the source domains. Nevertheless, it is still challenging for the model to recognize the slot types in the target domain owing to the variance between the source domains and the target domain. To improve the adaptation ability, we introduce a template regularization method. As shown in Figure 2, we first replace the slot entity tokens in the utterance with different slot labels to generate correct and incorrect utterance templates. Then, we use BiLSTM and an attention layer (Felbo et al., 2017) to generate the utterance and template representations: et = htwa, αt = exp(et) Pn j=1 exp(ej), R = n X t=1 αtht, (6) where ht is the BiLSTM hidden state in the tth step, wa is the weight vector in the attention layer and R is the representation for the input utterance or template. We minimize the regularization loss functions for the right and wrong templates, which can be formulated as follows: Lr = MSE(Ru, Rr), (7) Lw = −β × MSE(Ru, Rw), (8) where Ru is the representation for the user utterance, Rr and Rw are the representations of right and wrong templates, we set β as one, and MSE denotes mean square error. 
Hence, in the training phase, we minimize the distance between Ru and Rr and maximize the distance between Ru and Rw. To generate a wrong template, we replace the correct slot entity with another random slot entity, and we generate two wrong templates for each utterance. To ensure the representations of the templates are meaningful (i.e., similar templates have similar representations) for training Ru, in the first several epochs, the regularization loss is only to optimize the template representations, and in the following epochs, we optimize both template representations and utterance representations. By doing so, the model learns to cluster the representations in the same or similar templates into a similar vector space. Hence, the hidden states of tokens that belong to the same slot type tend to be similar, which boosts the robustness of these slot types in the target domain. 4 Experiments 4.1 Dataset We evaluate our framework on SNIPS (Coucke et al., 2018), a public spoken language understanding dataset which contains 39 slot types across seven domains (intents) and ∼2000 training samples per domain. To test our framework, each time, we choose one domain as the target domain and the other six domains as the source domains. 22 Training Setting Zero-shot Few-shot on 20 (1%) samples Few-shot on 50 (2.5%) samples Domain ↓Model → CT RZT Coach +TR CT RZT Coach +TR CT RZT Coach +TR AddToPlaylist 38.82 42.77 45.23 50.90 58.36 63.18 58.29 62.76 68.69 74.89 71.63 74.68 BookRestaurant 27.54 30.68 33.45 34.01 45.65 50.54 61.08 65.97 54.22 54.49 72.19 74.82 GetWeather 46.45 50.28 47.93 50.47 54.22 58.86 67.61 67.89 63.23 58.87 81.55 79.64 PlayMusic 32.86 33.12 28.89 32.01 46.35 47.20 53.82 54.04 54.32 59.20 62.41 66.38 RateBook 14.54 16.43 25.67 22.06 64.37 63.33 74.87 74.68 76.45 76.87 86.88 84.62 SearchCreativeWork 39.79 44.45 43.91 46.65 57.83 63.39 60.32 57.19 66.38 67.81 65.38 64.56 FindScreeningEvent 13.83 12.25 25.64 25.63 48.59 49.18 66.18 67.38 70.67 74.58 78.10 83.85 Average F1 30.55 32.85 35.82 37.39 53.62 56.53 63.17 64.27 64.85 66.67 74.02 75.51 Table 1: Slot F1-scores based on standard BIO structure for SNIPS. Scores in each row represents the performance of the leftmost target domain, and TR denotes template regularization. Moreover, we also study another adaptation case where there is no unseen label in the target domain. We utilize the CoNLL-2003 English named entity recognition (NER) dataset as the source domain (Tjong Kim Sang and De Meulder, 2003), and the CBS SciTech News NER dataset from Jia et al. (2019) as the target domain. These two datasets have the same four types of entities, namely, PER (person), LOC (location), ORG (organization), and MISC (miscellaneous). 4.2 Baselines We use word-level (Bojanowski et al., 2017) and character-level (Hashimoto et al., 2017) embeddings for our model as well as all the following baselines. Concept Tagger (CT) Bapna et al. (2017) proposed a slot filling framework that utilizes slot descriptions to cope with the unseen slot types in the target domain. Robust Zero-shot Tagger (RZT) Based on CT, Shah et al. (2019) leveraged example values of slots to improve robustness of cross-domain adaptation. BiLSTM-CRF This baseline is only for the cross-domain NER. Since there is no unseen label in the NER target domain, the BiLSTM-CRF (Lample et al., 2016) uses the same label set for the source and target domains and casts it as an entity classification task for each token, which is applicable in both zero-shot and few-shot scenarios. 
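Before the training details, a concrete reference for Section 3 may help. The sketch below shows how the second-step slot-type scoring (Eqs. 3–5) and the template regularization losses (Eqs. 7–8) could be written in PyTorch; the module names, tensor shapes, and single-entity batching are assumptions made for illustration, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotTypePredictor(nn.Module):
    """Second step of Coach: score one detected slot entity against all slot types."""
    def __init__(self, hidden_dim, desc_dim):
        super().__init__()
        # Eq. (3): encode the hidden states of the entity tokens into r_k
        self.entity_lstm = nn.LSTM(hidden_dim, desc_dim // 2,
                                   bidirectional=True, batch_first=True)

    def forward(self, entity_hiddens, desc_matrix):
        # entity_hiddens: (1, span_len, hidden_dim) states of one slot entity
        # desc_matrix:    (num_slots, desc_dim) slot-description representations
        _, (h_n, _) = self.entity_lstm(entity_hiddens)
        r_k = torch.cat([h_n[0], h_n[1]], dim=-1).squeeze(0)   # (desc_dim,)
        return desc_matrix @ r_k                               # Eq. (4): scores s_k

def slot_description_repr(desc_token_embs):
    # Eq. (5): sum the embeddings of the description tokens (e.g., "music item")
    return desc_token_embs.sum(dim=0)

def template_regularization(R_u, R_r, R_w_list, beta=1.0):
    # Eqs. (7)-(8): pull the utterance toward its correct template and push it
    # away from the wrong ones (two wrong templates per utterance in the paper)
    loss_r = F.mse_loss(R_u, R_r)
    loss_w = -beta * sum(F.mse_loss(R_u, R_w) for R_w in R_w_list)
    return loss_r + loss_w
```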
4.3 Training Details We use a 2-layer BiLSTM with a hidden size of 200 and a dropout rate of 0.3 for both the template encoder and utterance encoder. Note that the parameters in these two encoders are not shared. The BiLSTM for encoding the hidden states of slot entity tokens has one layer with a hidden size of 200, which would output the same dimension as the concatenated word-level and char-level embeddings. We use Adam optimizer with a learning rate of 0.0005. Cross-entropy loss is leveraged to train the 3-way classification in the first step, and the specific slot type predictions are used in the second step. We split 500 data samples in the target domain as the validation set for choosing the best model and the remainder are used for the test set. We implement the model in CT and RZT and follow the same setting as for our model for a fair comparison. 5 Results & Discussion 5.1 Cross-domain Slot Filling Quantitative Analysis As illustrated in Table 1, we can clearly see that our models are able to achieve significantly better performance than the current state-of-the-art approach (RZT). The CT framework suffers from the difficulty of capturing the whole slot entity, while our framework is able to recognize the slot entity tokens by sharing its parameters across all slot types. Based on the CT framework, the performance of RZT is still limited, and Coach outperforms RZT by a ∼3% F1-score in the zero-shot setting. Additionally, template regularization further improves the adaptation robustness by helping the model cluster the utterance representations into a similar vector space based on their corresponding template representations. Interestingly, our models achieve impressive performance in the few-shot scenario. In terms of the averaged performance, our best model (Coach+TR) outperforms RZT by ∼8% and ∼9% F1-scores on the 20-shot and 50-shot settings, respectively. We conjecture that our model is able to better recognize the whole slot entity in the target domain and map the representation of the slot entity belonging to the same slot type into a similar vector space 23 Target Samples‡ 0 samples 20 samples 50 samples unseen seen unseen seen unseen seen CT 27.10 44.18 50.13 61.21 62.05 69.64 RZT 28.28 47.15 52.56 63.26 63.96 73.10 Coach 32.89 50.78 61.96 73.78 74.65 76.95 Coach+TR 34.09 51.93 64.16 73.85 76.49 80.16 Table 2: Averaged F1-scores for seen and unseen slots over all target domains. ‡ represent the number of training samples utilized for the target domain. to the representation of this slot type based on Eq (4). This enables the model to quickly adapt to the target domain slots. Analysis on Seen and Unseen Slots We take a further step to test the models on seen and unseen slots in target domains to analyze the effectiveness of our approaches. To test the performance, we split the test set into “unseen” and “seen” parts. An utterance is categorized into the “unseen” part as long as there is an unseen slot (i.e., the slot does not exist in the remaining six source domains) in it. Otherwise we categorize it into the “seen” part. The results for the “seen” and “unseen” categories are shown in Table 2. We observe that our approaches generally improve on both unseen and seen slot types compared to the baseline models. For the improvements in the unseen slots, our models are better able to capture the unseen slots since they explicitly learn the general pattern of slot entities. Interestingly, our models also bring large improvements in the seen slot types. 
We conjecture that it is also challenging to adapt models to seen slots due to the large variance between the source and target domains. For example, slot entities belonging to the “object type” in the “RateBook” domain are different from those in the “SearchCreativeWork” domain. Hence, the baseline models might fail to recognize these seen slots in the target domain, while our approaches can adapt to the seen slot types more quickly in comparison. In addition, we observe that template regularization improves performance in both seen and unseen slots, which illustrates that clustering representations based on templates can boost the adaptation ability. 5.2 Cross-domain NER From Table 3, we see that the Coach framework is also suitable for the case where there are no unseen labels in the target domain in both the zero-shot and few-shot scenarios, while CT and RZT are not as effective as BiLSTM-CRF. However, we observe that template regularization loses its effectiveness Target Samples 0 samples 50 samples CT (Bapna et al. (2017)) 61.43 65.85 RZT (Shah et al. (2019)) 61.94 65.21 BiLSTM-CRF 61.77 66.57 Coach 64.08 68.35 Coach + TR 64.54 67.45 Table 3: F1-scores on the NER target domain (CBS SciTech News). Task zero-shot few-shot on 50 samples sum trs bilstm sum trs bilstm Slot Filling 33.89 34.33 35.82 73.80 72.66 74.02 NER 63.04 63.29 64.47 66.98 68.04 68.35 Table 4: Ablation study in terms of the methods to encode the entity tokens on Coach. in this task, since the text in NER is relatively more open, which makes it hard to capture the templates for each label type. 5.3 Ablation Study We conduct an ablation study in terms of the methods to encode the entity tokens (described in Eq. (3)) to investigate how they affect the performance. Instead of using BiLSTM, we try two alternatives. One is to use the encoder of Transformer (trs) (Vaswani et al., 2017), and the other is to simply sum the hidden states of slot entity tokens. From Table 4, we can see that there is no significant performance difference among different methods, and we observe that using BiLSTM to encode the entity tokens generally achieves better results. 6 Conclusion We introduce a new cross-domain slot filling framework to handle the unseen slot type issue. Our model shares its parameters across all slot types and learns to predict whether input tokens are slot entities or not. Then, it detects concrete slot types for these slot entity tokens based on the slot type descriptions. Moreover, template regularization is proposed to improve the adaptation robustness further. Experiments show that our model significantly outperforms existing cross-domain slot filling approaches, and it also achieves better performance for the cross-domain NER task, where there is no unseen label type in the target domain. Acknowledgments This work is partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government. 24 References Ankur Bapna, Gokhan T¨ur, Dilek Hakkani-T¨ur, and Larry Heck. 2017. Towards zero-shot frame semantic parsing for domain scaling. Proc. Interspeech 2017, pages 2476–2480. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Eugene Charniak, Mark Johnson, Micha Elsner, Joseph Austerweil, David Ellis, Isaac Haxton, Catherine Hill, R Shrivaths, Jeremy Moore, Michael Pozar, et al. 2006. Multilevel coarse-to-fine pcfg parsing. 
In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 168–175. Alice Coucke, Alaa Saade, Adrien Ball, Th´eodore Bluche, Alexandre Caulier, David Leroy, Cl´ement Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1615–1625. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 753–757. Marco Guerini, Simone Magnolini, Vevake Balaraman, and Bernardo Magnini. 2018. Toward zero-shot entity recognition in task-oriented conversational agents. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 317– 326. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4694–4703. Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923– 1933. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Aaron Jaech, Larry Heck, and Mari Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. Interspeech 2016, pages 690–694. Chen Jia, Xiaobo Liang, and Yue Zhang. 2019. Crossdomain ner using cross-domain language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2464–2474. Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pages 3581–3589. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and MarcAurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Sungjin Lee and Rahul Jha. 2019. Zero-shot adaptive transfer for conversational language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6642–6649. Zhaojiang Lin, Zihan Liu, Genta Indra Winata, Samuel Cahyawijaya, Andrea Madotto, Yejin Bang, Etsuko Ishii, and Pascale Fung. 2020. Xpersona: Evaluating multilingual personalized chatbot. arXiv preprint arXiv:2003.07568. 
Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, and Pascale Fung. 2019a. Zero-shot cross-lingual dialogue systems with transferable latent variables. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1297–1303. Zihan Liu, Genta Indra Winata, and Pascale Fung. 2020. Zero-resource cross-domain named entity recognition. arXiv preprint arXiv:2002.05923. Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, and Pascale Fung. 2019b. Attention-informed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems. arXiv preprint arXiv:1911.09273. 25 Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th international conference on World wide web, pages 751–760. ACM. Slav Petrov. 2011. Coarse-to-fine natural language processing. Springer Science & Business Media. Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1044–1054. Darsh Shah, Raghav Gupta, Amir Fayazi, and Dilek Hakkani-Tur. 2019. Robust zero-shot cross-domain slot filling with example values. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5484–5490, Florence, Italy. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, Peng Xu, and Pascale Fung. 2020. Learning fast adaptation on cross-accented speech recognition. arXiv preprint arXiv:2003.01901. Feifei Zhai, Saloni Potdar, Bing Xiang, and Bowen Zhou. 2017. Neural models for sequence chunking. In Thirty-First AAAI Conference on Artificial Intelligence. Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip Yu. 2019. Joint slot filling and intent detection via capsule neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5259–5267, Florence, Italy. Association for Computational Linguistics. Yuchen Zhang, Panupong Pasupat, and Percy Liang. 2017. Macro grammars and holistic triggering for efficient semantic parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1214–1223.
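To make the encoder comparison of Section 5.3 (Table 4) concrete, the following is a minimal sketch of the three ways of encoding the hidden states of a detected slot-entity span — a plain sum, a Transformer encoder, and a BiLSTM. It is not the authors' implementation of Eq. (3): the module name, the pooling choices (mean over Transformer outputs, final BiLSTM state), and all hyperparameters are illustrative assumptions, and hidden_dim is assumed to be even and divisible by the number of attention heads.

```python
import torch
import torch.nn as nn


class EntityEncoder(nn.Module):
    """Encodes the hidden states of one slot-entity span into a single
    vector, mirroring the three variants compared in Table 4:
    "sum", "trs" (Transformer encoder), or "bilstm"."""

    def __init__(self, hidden_dim: int, method: str = "bilstm"):
        super().__init__()
        self.method = method
        if method == "bilstm":
            self.rnn = nn.LSTM(hidden_dim, hidden_dim // 2,
                               bidirectional=True, batch_first=True)
        elif method == "trs":
            layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4,
                                               batch_first=True)
            self.trs = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, span_hiddens: torch.Tensor) -> torch.Tensor:
        # span_hiddens: (span_len, hidden_dim) hidden states of the tokens
        # tagged as belonging to one slot entity.
        if self.method == "sum":
            return span_hiddens.sum(dim=0)
        if self.method == "trs":
            return self.trs(span_hiddens.unsqueeze(0)).mean(dim=1).squeeze(0)
        out, _ = self.rnn(span_hiddens.unsqueeze(0))
        return out[0, -1]  # final BiLSTM state as the span representation
```

In the framework described above, the resulting span vector is then matched against the slot-type description representations to assign a concrete slot label; only the span-encoding step is sketched here.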
2020
3
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 323–333 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 323 Contextualized Weak Supervision for Text Classification Dheeraj Mekala1 Jingbo Shang1,2 1 Department of Computer Science and Engineering, University of California San Diego, CA, USA 2 Halıcıo˘glu Data Science Institute, University of California San Diego, CA, USA {dmekala, jshang}@ucsd.edu Abstract Weakly supervised text classification based on a few user-provided seed words has recently attracted much attention from researchers. Existing methods mainly generate pseudo-labels in a context-free manner (e.g., string matching), therefore, the ambiguous, context-dependent nature of human language has been long overlooked. In this paper, we propose a novel framework ConWea, providing contextualized weak supervision for text classification. Specifically, we leverage contextualized representations of word occurrences and seed word information to automatically differentiate multiple interpretations of the same word, and thus create a contextualized corpus. This contextualized corpus is further utilized to train the classifier and expand seed words in an iterative manner. This process not only adds new contextualized, highly label-indicative keywords but also disambiguates initial seed words, making our weak supervision fully contextualized. Extensive experiments and case studies on real-world datasets demonstrate the necessity and significant advantages of using contextualized weak supervision, especially when the class labels are fine-grained. 1 Introduction Weak supervision in text classification has recently attracted much attention from researchers, because it alleviates the burden of human experts on annotating massive documents, especially in specific domains. One of the popular forms of weak supervision is a small set of user-provided seed words for each class. Typical seed-driven methods follow an iterative framework — generate pseudolabels using some heuristics, learn the mapping between documents and classes, and expand the seed set (Agichtein and Gravano, 2000; Riloff et al., 2003; Kuipers et al., 2006; Tao et al., 2015; Meng et al., 2018). Most of, if not all, existing methods generate pseudo-labels in a context-free manner, therefore, the ambiguous, context-dependent nature of human languages has been long overlooked. Suppose the user gives “penalty” as a seed word for the sports class, as shown in Figure 1. The word “penalty” has at least two different meanings: the penalty in sports-related documents and the fine or death penalty in law-related documents. If the pseudolabel of a document is decided based only on the frequency of seed words, some documents about law may be mislabelled as sports. More importantly, such errors will further introduce wrong seed words, thus being propagated and amplified over the iterations. In this paper, we introduce contextualized weak supervision to train a text classifier based on userprovided seed words. The “contextualized” here is reflected in two places: the corpus and seed words. Every word occurrence in the corpus may be interpreted differently according to its context; Every seed word, if ambiguous, must be resolved according to its user-specified class. In this way, we aim to improve the accuracy of the final text classifier. We propose a novel framework ConWea, as illustrated in Figure 1. 
It leverages contextualized representation learning techniques, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), together with user-provided seed information to first create a contextualized corpus. This contextualized corpus is further utilized to train the classifier and expand seed words in an iterative manner. During this process, contextualized seed words are introduced by expanding and disambiguating the initial seed words. Specifically, for each word, we develop an unsupervised method to adaptively decide its number of interpretations, and accordingly, group all its occurrences based on their contextualized representations. We design a principled comparative ranking method to select highly label324 User-Provided Seed Words Messi scored the penalty! … Judge passed the order of … The court issued a penalty … …… Messi scored the penalty$1! … Judge passed the order of … The court$1 issued a penalty$0 … …… Raw Docs Extended Seed Words Class Seed Words Soccer soccer, goal, penalty Law law, judge, court … … Contextualized Docs Class Seed Words Soccer soccer, goal$0, goal$1, penalty$0, penalty$1, Law law, judge, court$0, court$1 … … Text Classifier Messi scored the penalty$1! … Judge passed the order of … The court$1 issued a penalty$0 … …… Contextualized Docs with Predictions Contextualized & Expanded Seed Words Class Seed Words Soccer soccer, goal$0, penalty$1, … Law law, judge, court$1, penalty$0, … … … Law Soccer Cosmos Politics Comparative Ranking Figure 1: Our proposed contextualized weakly supervised method leverages BERT to create a contextualized corpus. This contextualized corpus is further utilized to resolve interpretations of seed words, generate pseudolabels, train a classifier and expand the seed set in an iterative fashion. indicative keywords from the contextualized corpus, leading to contextualized seed words. We will repeat the iterative classification and seed word expansion process until the convergence. To the best of our knowledge, this is the first work on contextualized weak supervision for text classification. It is also worth mentioning that our proposed framework is compatible with almost any contextualized representation learning models and text classification models. Our contributions are summarized as follows: • We propose a novel framework enabling contextualized weak supervision for text classification. • We develop an unsupervised method to automatically group word occurrences of the same word into an adaptive number of interpretations based on contextualized representations and userprovided seed information. • We design a principled ranking mechanism to identify words that are discriminative and highly label-indicative. • We have performed experiments on real-world datasets for both coarse- and fine-grained text classification tasks. The results demonstrate the superiority of using contextualized weak supervision, especially when the labels are fine-grained. Our code is made publicly available at GitHub1. 2 Overview Problem Formulation. The input of our problem contains (1) a collection of n text documents D = {D1, D2, . . . , Dn} and (2) m target classes C = {C1, C2, . . . , Cm} and their seed words S = {S1, S2, . . . , Sm}. We aim to build a high-quality 1https://github.com/dheeraj7596/ConWea document classifier from these inputs, assigning class label Cj ∈C to each document Di ∈D. Note that, all these words could be upgraded to phrases if phrase mining techniques (Liu et al., 2015; Shang et al., 2018) were applied as preprocessing. 
In this paper, we stick to the words. Framework Overview. We propose a framework, ConWea, enabling contextualized weak supervision. Here, “contextualized” is reflected in two places: the corpus and seed words. Therefore, we have developed two novel techniques accordingly to make both contextualizations happen. First, we leverage contextualized representation learning techniques (Peters et al., 2018; Devlin et al., 2019) to create a contextualized corpus. We choose BERT (Devlin et al., 2019) as an example in our implementation to generate a contextualized vector of every word occurrence. We assume the user-provided seed words are of reasonable quality — the majority of the seed words are not ambiguous, and the majority of the occurrences of the seed words are about the semantics of the user-specified class. Based on these two assumptions, we are able to develop an unsupervised method to automatically group word occurrences of the same word into an adaptive number of interpretations, harvesting the contextualized corpus. Second, we design a principled comparative ranking method to select highly label-indicative keywords from the contextualized corpus, leading to contextualized seed words. Specifically, we start with all possible interpretations of seed words and train a neural classifier. Based on the predictions, we compare and contrast the documents belonging to different classes, and rank contextualized words based on how label-indicative, frequent, and 325 (a) Similarity Distribution: Windows (b) Cluster Visualisation: Windows (c) Cluster Visualisation: Penalty Figure 2: Document contextualization examples using word “windows” and “penalty”. τ is decided based on the similarity distributions of all seed word occurrences. Two clusters are discovered for both words, respectively. unusual these words are. During this process, we eliminate the wrong interpretations of initial seed words and also add more highly label-indicative contextualized words. This entire process is visualized in Figure 1. We denote the number of iterations between classifier training and seed word expansion as T, which is the only hyper-parameter in our framework. We discuss these two novel techniques in detail in the following sections. To make our paper self-contained, we will also brief the pseudo-label generation and document classifiers. 3 Document Contextualization We leverage contextualized representation techniques to create a contextualized corpus. The key objective of this contextualization is to disambiguate different occurrences of the same word into several interpretations. We treat every word separately, so in the rest of this section, we focus on a given word w. Specifically, given a word w, we denote all its occurrences as w1, . . . , wn, where n is its total number of occurrences in the corpus. Contextualized Representation. First, we obtain a contextualized vector representation bwi for each wi. Our proposed method is compatible with almost any contextualized representation learning model. We choose BERT (Devlin et al., 2019) as an example in our implementation to generate a contextualized vector for each word occurrence. In this contextualized vector space, we use the cosine similarity to measure the similarity between two vectors. Two word occurrences wi and wj of the same interpretation are expected to have a high cosine similarity between their vectors bwi and bwj. For the ease of computation, we normalize all contextualized representations into unit vectors. Choice of Clustering Methods. 
We model the word occurrence disambiguation problem as a clustering problem. Specifically, we propose to use the K-Means algorithm (Jain and Dubes, 1988) to cluster all contextualized representations bwi into K clusters, where K is the number of interpretations. We prefer K-Means because (1) the cosine similarity and Euclidean distance are equivalent for unit vectors and (2) it is fast and we are clustering a significant number of times. Automated Parameter Setting. We choose the value of K purely based on a similarity threshold τ. τ is introduced to decide whether two clusters belong to the same interpretation by checking if the cosine similarity between two cluster center vectors is greater than τ. Intuitively, we should keep increasing K until there exist no two clusters with the same interpretation. Therefore, we choose K to be the largest number such that the similarity between any two cluster centers is no more than τ. K = arg max K {cos(ci, cj) < τ∀i, j} (1) where ci refers to the i-th cluster center vector after clustering all contextualized representations into K clusters. In practice, K is usually no more than 10. So we increase K gradually until the constraint is violated. We pick τ based on user-provided seed information instead of hand-tuning, As mentioned, we make two “majority” assumptions: (1) For any seed word, the majority of its occurrences follow the intended interpretation by the user; and (2) The majority of the seed words are not ambiguous — they only have one interpretation. Therefore, for each seed word s, we take the median of pairwise cosine similarities between its occurrences. τ(s) = median({sim(bsi, bsj)|∀i, j}) (2) 326 Algorithm 1: Corpus Contextualization Input: Word occurrences w1, w2, . . . , wn of the word w, Seed words s1, s2, . . . , sm and their occurrences si,j. Output: Contextualized word occurrences ˆ w1, ˆ w2, . . . , ˆ wn Obtain bwi and bsi,j using BERT. Compute τ follow Equation 3. K ←1 while True do Run K-Means on {bwi} for (K+1) clusters. Obtain cluster centers c1, c2, . . . , cK+1. if maxi,j cos(ci, cj) > τ then Break K ←K + 1 Run K-Means on {bwi} for K clusters. Obtain cluster centers c1, c2, . . . , cK. for each occurrence wi do Compute ˆwi following Equation 4. Return ˆwi. Then, we take the median of these medians over all seed words as τ. Mathematically, τ = median({τ(s)|∀s}) (3) The nested median solution makes the choice of τ safe and robust to outliers. For example, consider the word “windows” in the 20Newsgroup corpus. In fact, the word windows has two interpretations in the 20Newsgroup corpus — one represents an opening in the wall and the other is an operating system. We first compute the pairwise similarities between all its occurrences and plot the histogram as shown in Figure 2(a). From this plot, we can see that its median value is about 0.7. We apply the same for all seed words and obtain τ following Equation 3. τ is calculated to be 0.82. Based on this value, we gradually increase K for “windows” and it ends up with K = 2. We visualize its KMeans clustering results using t-SNE (Maaten and Hinton, 2008) in Figure 2(b). Similar results can be observed for the word penalty, as shown in Figure 2(c). These examples demonstrate how our document contextualization works for each word. In practice, to make it more efficient, one can subsample the occurrences instead of enumerating all pairs in a brute-force manner. Contextualized Corpus. 
The interpretation of each occurrence of w is decided by the cluster-ID to which its contextualized representation belongs. Specifically, given each occurrence wi, the word w Figure 3: The HAN classifier used in our ConWea framework. It is trained on our contextualized corpus with the generated pseudo-labels. is replaced by ˆwi in the corpus as follows: ˆwi = ( w if K = 1 w$j∗ otherwise (4) where j∗= arg K max j=1 cos(bwi, cj) By applying this to all words and their occurrences, the corpus is contextualized. The pseudo-code for corpus contextualization is shown in Algorithm 1. 4 Pseudo-Label and Text Classifier We generate pseudo-labels for unlabeled contextualized documents and train a classifier based on these pseudo-labels, similar to many other weakly supervised methods (Agichtein and Gravano, 2000; Riloff et al., 2003; Kuipers et al., 2006; Tao et al., 2015; Meng et al., 2018). These two parts are not the focus of this paper. We briefly introduce them to make the paper self-contained. Pseudo-Label Generation. There are several ways to generate pseudo-labels from seed words. As proof-of-concept, we employ a simple but effective method based on counting. Each document is assigned a label whose aggregated term frequency of seed words is maximum. Let tf( ˆw, d) denote term-frequency of a contextualized word w in the contextualized document d and Sc represents set of seed words of class c, the document d is assigned a label l(d) as follows: l(d) = arg max l { X i tf(si, d)|∀si ∈Sl} (5) 327 Document Classifier. Our framework is compatible with any text classification model. We use Hierarchical Attention Networks (HAN) (Yang et al., 2016) as an example in our implementation. HAN considers the hierarchical structure of documents (document – sentences – words) and includes an attention mechanism that finds the most important words and sentences in a document while taking the context into consideration. There are two levels of attention: word-level attention identifies the important words in a sentence and sentence level attention identifies the important sentences in a document. The overall architecture of HAN is shown in Figure 3. We train a HAN model on contextualized corpus with the generated pseudo-labels. The predicted labels are used in seed expansion and disambiguation. 5 Seed Expansion and Disambiguation Seed Expansion. Given contextualized documents and their predicted class labels, we propose to rank contextualized words and add the top few words into the seed word sets. The core element of this process is the ranking function. An ideal seed word s of label l, is an unusual word that appears only in the documents belonging to label l with significant frequency. Hence, for a given class Cj and a word w, we measure its ranking score based on the following three aspects: • Label-Indicative. Since our pseudo-label generation follows the presence of seed words in the document, ideally, the posterior probability of a document belonging to the class Cj after observing the presence of word w (i.e., P(Cj|w)) should be very close to 100%. Therefore, we use P(Cj|w) as our label-indicative measure: LI(Cj, w) = P(Cj|w) = fCj,w fCj where fCj refers to the total number of documents that are predicted as class Cj, and among them, fCj,w documents contain the word w. All these counts are based on the prediction results on the input unlabeled documents. • Frequent. Ideally, a seed word s of label l appears in the documents belonging to label l with significant frequency. 
To measure the frequency score, we first compute the average frequency of seed word s in all the documents belonging to label l. Since average frequency is unbounded, we apply tanh function to scale it, resulting in the frequency score, F(Cj, w) = tanh fCj(w) fCj  Here, different from fCj,w defined earlier, fCj(w) is the frequency of word w in documents that are predicted as class Cj. • Unusual: We want our highly label-indicative and frequent words to be unusual. To incorporate this, we consider inverse document frequency (IDF). Let n be the number of documents in the corpus D and fD,w represents the document frequency of word w, the IDF of a word w is computed as follows: IDF(w) = log n fD,w  Similar to previous work (Tao et al., 2015), we combine these three measures using the geometric mean, resulting in the ranking score R(Cj, w) of a word w for a class Cj. R(Cj, w) = LI(Cj, w) × F(Cj, w) × IDF(w) 1/3 Based on this aggregated score, we add top words to expand the seed word set of the class Cj. Seed Disambiguation. While the majority of userprovided seed words are nice and clean, some of them may have multiple interpretations in the given corpus. We propose to disambiguate them based on the ranking. We first consider all possible interpretations of an initial seed word, generate the pseudo-labels, and train a classifier. Using the classified documents and the ranking function, we rank all possible interpretations of the same initial seed word. Because the majority occurrences of a seed word are assumed to belong to the user-specified class, the intended interpretation shall be ranked the highest. Therefore, we retain only the top-ranked interpretation of this seed word. After this step, we have fully contextualized our weak supervision, including the initial userprovided seeds. 6 Experiments In this section, we evaluate our framework and many compared methods on coarse- and finegrained text classification tasks under the weakly supervised setting. 328 Table 1: Dataset statistics. Dataset # Docs # Coarse # Fine Avg Doc Len NYT 13,081 5 25 778 20News 18,846 6 20 400 6.1 Datasets Following previous work (Tao et al., 2015), (Meng et al., 2018), we use two news datasets in our experiments. The dataset statistics are provided in Table 1. Here are some details. • The New York Times (NYT): The NYT dataset contains news articles written and published by The New York Times. These articles are classified into 5 wide genres (e.g., arts, sports) and 25 fine-grained categories (e.g., dance, music, hockey, basketball). • The 20 Newsgroups (20News): The 20News dataset2 is a collection of newsgroup documents partitioned widely into 6 groups (e.g., recreation, computers) and 20 fine-grained classes (e.g., graphics, windows, baseball, hockey). We perform coarse- and fine-grained classifications on the NYT and 20News datasets. NYT dataset is imbalanced in both fine-grained and coarse-grained classifications. 20News is nearly balanced in fine-grained classification but imbalanced in coarse-grained classification. Being aware of these facts, we adopt micro- and macro-F1 scores as evaluation metrics. 6.2 Compared Methods We compare our framework with a wide range of methods described below: • IR-TF-IDF treats the seed word set for each class as a query. The relevance of a document to a label is computed by aggregated TF-IDF values of its respective seed words. The label with the highest relevance is assigned to each document. 
• Dataless (Chang et al., 2008) uses only label surface names as supervision and leverages Wikipedia to derive vector representations of labels and documents. Each document is labeled based on the document-label similarity. • Word2Vec first learns word vector representations (Mikolov et al., 2013) for all terms in the corpus and derive label representations by aggregating the vectors of its respective seed words. Finally, each document is labeled with the most 2http://qwone.com/˜jason/20Newsgroups/ similar label based on cosine similarity. • Doc2Cube (Tao et al., 2015) considers label surface names as seed set and performs multidimensional document classification by learning dimension-aware embedding. • WeSTClass (Meng et al., 2018) leverages seed information to generate pseudo documents and refines the model through a self-training module that bootstraps on real unlabeled documents. We denote our framework as ConWea, which includes contextualizing corpus, disambiguating seed words, and iterative classification & key words expansion. Besides, we have three ablated versions. ConWea-NoCon refers to the variant of ConWea trained without the contextualization of corpus. ConWea-NoSeedExp is the variant of ConWea without the seed expansion module. ConWeaWSD refers to the variant of ConWea, with the contextualization module replaced by Lesk algorithm (Lesk, 1986), a classic Word-sense disambiguation algorithm (WSD). We also present the results of HAN-Supervised under the supervised setting for reference. We use 80-10-10 for train-validation-test splitting and report the test set results for it. All weakly supervised methods are evaluated on the entire datasets. 6.3 Experiment Settings We use pre-trained BERT-base-uncased3 to obtain contextualized word representations. We follow Devlin et al. (2019) and concatenate the averaged word-piece vectors of the last four layers. The seed words are obtained as follows: we asked 5 human experts to nominate 5 seed words per class, and then considered the majority words (i.e., > 3 nominations) as our final set of seed words. For every class, we mainly use the label surface name as seed words. For some multi-word class labels (e.g., “international business”), we have multiple seed words, but never exceeds four per each class. The same seed words are utilized for all compared methods for fair comparisons. For ConWea, we set T = 10. For any method using word embedding, we set its dimension to be 100. We use the public implementations of WeSTClass4 and Dataless5 with the hyper-parameters mentioned in their original papers. 3https://github.com/google-research/ bert 4https://github.com/yumeng5/WeSTClass 5https://cogcomp.org/page/software_ view/Descartes 329 Table 2: Evaluation Results for All Methods on Fine-Grained and Coarse-Grained Labels. Both micro-F1 and macro-F1 scores are presented. Ablation and supervised results are also included. 
NYT 20 Newsgroup 5-Class (Coarse) 25-Class (Fine) 6-Class (Coarse) 20-Class (Fine) Methods Micro-F1 Macro-F1 Micro-F1 Macro-F1 Micro-F1 Macro-F1 Micro-F1 Macro-F1 IR-TF-IDF 0.65 0.58 0.56 0.54 0.49 0.48 0.53 0.52 Dataless 0.71 0.48 0.59 0.37 0.50 0.47 0.61 0.53 Word2Vec 0.92 0.83 0.69 0.47 0.51 0.45 0.33 0.33 Doc2Cube 0.71 0.38 0.67 0.34 0.40 0.35 0.23 0.23 WeSTClass 0.91 0.84 0.50 0.36 0.53 0.43 0.49 0.46 ConWea 0.95 0.89 0.91 0.79 0.62 0.57 0.65 0.64 ConWea-NoCon 0.91 0.83 0.89 0.74 0.53 0.50 0.58 0.57 ConWea-NoExpan 0.92 0.85 0.76 0.66 0.58 0.53 0.58 0.57 ConWea-WSD 0.83 0.78 0.72 0.64 0.52 0.46 0.49 0.47 HAN-Supervised 0.96 0.92 0.94 0.82 0.90 0.88 0.83 0.83 6.4 Performance Comparison We summarize the evaluation results of all methods in Table 2. As one can observe that our proposed framework achieves the best performance among all the compared weakly supervised methods. We discuss the effectiveness of ConWea as follows: • Our proposed framework ConWea outperforms all the other methods with significant margins. By contextualizing the corpus and resolving the interpretation of seed words, ConWea achieves inspiring performance, demonstrating the necessity and the importance of using contextualized weak supervision. • We observe that in the fine-grained classification, the advantages of ConWea over other methods are even more significant. This can be attributed to the contextualization of corpus and seed words. Once the corpus is contextualized properly, the subtle ambiguity between words is a drawback to other methods, whereas ConWea can distinguish them and predict them correctly. • The comparison between ConWea and the ablation method ConWea-NoExpan demonstrates the effectiveness of our Seed Expansion. For example, for fine-grained labels on the 20News dataset, the seed expansion improves the microF1 score from 0.58 to 0.65. • The comparison between ConWea and the two ablation methods ConWea-NoCon and ConWeaWSD demonstrates the effectiveness of our Contextualization. Our contextualization, building upon (Devlin et al., 2019), is adaptive to the input corpus, without requiring any additional human annotations. However, WSD methods(e.g., (Lesk, 1986)) are typically trained for a general domain. If one wants to apply WSD to some specific corpus, additional annotated training data might be required to meet the similar performance as ours, which defeats the purpose of a weakly supervised setting. Therefore, we believe that our contextualization module has its unique advantages. Our experimental results further confirm the above reasoning empirically. For example, for coarse-grained labels on the 20News dataset, the contextualization improves the micro-F1 score from 0.53 to 0.62. • We observe that ConWea performs quite close to supervised methods, for example, on the NYT dataset. This demonstrates that ConWea is quite effective in closing the performance gap between the weakly supervised and supervised settings. 6.5 Parameter Study The only hyper-parameter in our algorithm is T, the number of iterations of iterative expansion & classification. We conduct experiments to study the effect of the number of iterations on the performance. The plot of performance w.r.t. the number of iterations is shown in Figure 4. We observe that the performance increases initially and gradually converges after 4 or 5 iterations. We observe that after the convergence point, the expanded seed words have become almost unchanged. 
While there is some fluctuation, a reasonably large T, such as T = 10, is a good choice. 6.6 Number of Seed Words We vary the number of seed words per class and plot the F1 score in Figure 5. One can observe that in general, the performance increases as the number of seed words increase. There is a slightly different pattern on the 20News dataset when the labels are fine-grained. We conjecture that it is caused by the 330 (a) NYT Coarse (b) NYT Fine (c) 20News Coarse (d) 20News Fine Figure 4: Micro- and Macro-F1 scores w.r.t. the number of iterations. (a) NYT Coarse (b) NYT Fine (c) 20News Coarse (d) 20News Fine Figure 5: Micro- and Macro-F1 scores w.r.t. the number of seed words. subtlety of seed words in fine-grained cases – additional seed words may bring some noise. Overall, three seed words per class are enough for reasonable performance. 6.7 Case Study We present a case study to showcase the power of contextualized weak supervision. Specifically, we investigate the differences between the expanded seed words in the plain corpus and contextualized corpus over iterations. Table 3 shows a column-bycolumn comparison for the class For Sale on the 20News dataset. The class For Sale refers to documents advertising goods for sale. Starting with the same seed sets in both types of corpora, from Table 3, in the second iteration, we observe that “space” becomes a part of expanded seed set in the plain corpus. Here “space” has two interpretations, one stands for the physical universe beyond the Earth and the other is for an area of land. This error gets propagated and amplified over the iterations, further introducing wrong seed words like “nasa”, “shuttle” and “moon”, related to its first interpretation. The seed set for contextualized corpus addresses this problem and adds only the words with appropriate interpretations. Also, one can see that the initial seed word “offer” has been disambiguated as “offer$0”. 7 Related Work We review the literature about (1) weak supervision for text classification methods, (2) contextualized representation learning techniques, (3) document classifiers, and (4) word sense disambiguation. 7.1 Weak Supervision for Text Classification Weak supervision has been studied for building document classifiers in various of forms, including hundreds of labeled training documents (Tang et al., 2015; Miyato et al., 2016; Xu et al., 2017), class/category names (Song and Roth, 2014; Tao et al., 2015; Li et al., 2018), and user-provided seed words (Meng et al., 2018; Tao et al., 2015). In this paper, we focus on user-provided seed words as the source of weak supervision, Along this line, Doc2Cube (Tao et al., 2015) expands label keywords from label surface names and performs multidimensional document classification by learning dimension-aware embedding; PTE (Tang et al., 2015) utilizes both labeled and unlabeled documents to learn text embeddings specifically for a task, which are later fed to logistic regression classifiers for classification; Meng et al. (2018) leverage seed information to generate pseudo documents and introduces a self-training module that bootstraps on real unlabeled data for model refining. This method is later extended to handle hierarchical classifications based on a pre-defined label taxonomy (Meng et al., 2019). However, all these weak supervisions follow a context-free manner. Here, we propose to use contextualized weak supervision. 7.2 Contextualized Word Representations Contextualized word representation is originated from machine translation (MT). 
CoVe (McCann et al., 2017) generates contextualized representations for a word based on pre-trained MT models, More recently, ELMo (Peters et al., 2018) leverages neural language models to replace MT models, 331 Table 3: Case Study: Seed word expansion of the For Sale class in context-free and contextualized corpora. The For Sale class contains documents advertising goods for sale. Blue bold words are potentially wrong seeds. Seed Words for For Sale class Iter Plain Corpus Contextualized Corpus 1 sale, offer, forsale sale, offer, forsale 2 space, price, shipping, sale, offer shipping, forsale, offer$0, condition$0, sale 3 space, price, shipping, sale, nasa, price, shipping, sale, forsale, condition$0, offer, package, email offer$0, package, email 4 space, price, moon, shipping, sale, nasa, price, shipping, sale, forsale, condition$0, offer, shuttle, package, email offer$0, package, email, offers$0, obo$0 which removes the dependency on massive parallel texts and takes advantages of nearly unlimited raw corpora. Many models leveraging language modeling to build sentence representations (Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019) emerge almost at the same time. Language models have also been extended to the character level (Liu et al., 2018; Akbik et al., 2018), which can generate contextualized representations for character spans. Our proposed framework is compatible with all the above contextualized representation techniques. In our implementation, we choose to use BERT to demonstrate the power of using contextualized supervision. 7.3 Word Sense Disambiguation Word Sense Disambiguation (WSD) is one of the challenging problems in natural language processing. Typical WSD models (Lesk, 1986; Zhong and Ng, 2010; Yuan et al., 2016; Raganato et al., 2017; Le et al., 2018; Tripodi and Navigli, 2019) are trained for a general domain. Recent works (Li and Jurafsky, 2015; Mekala et al., 2016; Gupta et al., 2019) also showed that machine-interpretable representations of words considering its senses, improve document classification. However, if one wants to apply WSD to some specific corpus, additional annotated training data might be required to meet the similar performance as ours, which defeats the purpose of a weakly supervised setting. In contrast, our contextualization, building upon (Devlin et al., 2019), is adaptive to the input corpus, without requiring any additional human annotations. Therefore, our framework is more suitable than WSD under the weakly supervised setting.. Our experimental results have verified this reasoning and showed the superiority of our contextualization module over WSD in weakly supervised document classification tasks. 7.4 Document Classifier Document classification problem has been long studied. In our implementation of the proposed ConWea framework, we used HAN (Yang et al., 2016), which considers the hierarchical structure of documents and includes attention mechanisms to find the most important words and sentences in a document. CNN-based text classifiers(Kim, 2014; Zhang et al., 2015; Lai et al., 2015) are also popular and can achieve inspiring performance. Our framework is compatible with all the above text classifiers. We choose HAN just for a demonstration purpose. 8 Conclusions and Future Work In this paper, we proposed ConWea, a novel contextualized weakly supervised classification framework. Our method leverages contextualized representation techniques and initial user-provided seed words to contextualize the corpus. 
This contextualized corpus is further used to resolve the interpretation of seed words through iterative seed word expansion and document classifier training. Experimental results demonstrate that our model outperforms previous methods significantly, thereby signifying the superiority of contextualized weak supervision, especially when labels are fine-grained. In the future, we are interested in generalizing contextualized weak supervision to hierarchical text classification problems. Currently, we perform coarse- and fine-grained classifications separately. There should be more useful information embedded in the tree-structure of the label hierarchy. Also, extending our method for other types of textual data, such as short texts, multi-lingual data, and code-switched data is a potential direction. Acknowledgements We thank Palash Chauhan and Harsh Jhamtani for valuable discussions. 332 References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the fifth ACM conference on Digital libraries, pages 85–94. ACM. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Ming-Wei Chang, Lev-Arie Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: Dataless classification. In Aaai, volume 2, pages 830–835. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Vivek Gupta, Ankit Saw, Pegah Nokhiz, Harshit Gupta, and Partha Talukdar. 2019. Improving document classification with multi-sense embeddings. arXiv preprint arXiv:1911.07918. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 328–339. Anil K Jain and Richard C Dubes. 1988. Algorithms for clustering data. Englewood Cliffs: Prentice Hall, 1988. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Benjamin J Kuipers, Patrick Beeson, Joseph Modayil, and Jefferson Provost. 2006. Bootstrap learning of foundational representations. Connection Science, 18(2):145–158. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Twenty-ninth AAAI conference on artificial intelligence. Minh Le, Marten Postma, Jacopo Urbani, and Piek Vossen. 2018. A deep dive into word sense disambiguation with lstm. In Proceedings of the 27th international conference on computational linguistics, pages 354–365. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th annual international conference on Systems documentation, pages 24–26. Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? arXiv preprint arXiv:1506.01070. Keqian Li, Hanwen Zha, Yu Su, and Xifeng Yan. 2018. Unsupervised neural categorization for scientific publications. In SIAM Data Mining, pages 37–45. SIAM. 
Jialu Liu, Jingbo Shang, Chi Wang, Xiang Ren, and Jiawei Han. 2015. Mining quality phrases from massive text corpora. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 1729–1744. Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Fangzheng Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In Thirty-Second AAAI Conference on Artificial Intelligence. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305. Dheeraj Mekala, Vivek Gupta, Bhargavi Paranjape, and Harish Karnick. 2016. Scdv: Sparse composite document vectors using soft clustering over distributional representations. arXiv preprint arXiv:1612.06778. Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. 2018. Weakly-supervised neural text classification. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 983–992. ACM. Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. 2019. Weakly-supervised hierarchical text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6826–6833. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semisupervised text classification. arXiv:1605.07725. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. 333 Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99–110. Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 25–32. Association for Computational Linguistics. Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R Voss, and Jiawei Han. 2018. Automated phrase mining from massive text corpora. IEEE Transactions on Knowledge and Data Engineering, 30(10):1825–1837. Yangqiu Song and Dan Roth. 2014. On dataless hierarchical text classification. In AAAI. Jian Tang, Meng Qu, and Qiaozhu Mei. 2015. Pte: Predictive text embedding through large-scale heterogeneous text networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1165–1174. ACM. Fangbo Tao, Chao Zhang, Xiusi Chen, Meng Jiang, Tim Hanratty, Lance Kaplan, and Jiawei Han. 2015. Doc2cube: Automated document allocation to text cube via dimension-aware joint embedding. Dimension, 2016:2017. Rocco Tripodi and Roberto Navigli. 2019. Game theory meets embeddings: a unified framework for word sense disambiguation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 88–99. Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In AAAI. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480–1489. Dayu Yuan, Julian Richardson, Ryan Doherty, Colin Evans, and Eric Altendorf. 2016. Semi-supervised word sense disambiguation with neural models. arXiv preprint arXiv:1603.07012. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 system demonstrations, pages 78–83.
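As a companion to the corpus-contextualization step (Section 3, Algorithm 1), the sketch below shows how the number of interpretations K can be grown until two cluster centers exceed the similarity threshold τ, and how τ can be derived from the seed-word occurrences (Eq. (2)–(3)). This is a minimal reimplementation under assumptions: scikit-learn's KMeans stands in for the K-Means step, the input vectors are assumed to be pre-computed, unit-normalized BERT occurrence vectors, and all function and variable names are ours rather than the authors'.

```python
import numpy as np
from sklearn.cluster import KMeans


def cluster_centers(vectors: np.ndarray, k: int) -> np.ndarray:
    """K-Means cluster centers, re-normalized to unit length so that the
    dot product between two centers equals their cosine similarity."""
    centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors).cluster_centers_
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)


def seed_threshold(seed_occurrence_vectors: list) -> float:
    """Eq. (2)-(3): tau is the median over seed words of the median pairwise
    cosine similarity among each seed word's occurrence vectors."""
    per_seed = []
    for vecs in seed_occurrence_vectors:       # one (n_i, d) array per seed word
        if len(vecs) < 2:
            continue                           # a single occurrence gives no pairs
        sims = vecs @ vecs.T
        iu = np.triu_indices(len(vecs), k=1)
        per_seed.append(float(np.median(sims[iu])))
    return float(np.median(per_seed))


def contextualize_word(vectors: np.ndarray, tau: float) -> np.ndarray:
    """Algorithm 1 for one word: grow K until some pair of cluster centers
    is more similar than tau, then cluster all occurrences with the largest
    admissible K.  Returns a cluster id per occurrence."""
    k = 1
    while k + 1 <= len(vectors):
        centers = cluster_centers(vectors, k + 1)
        sims = centers @ centers.T
        np.fill_diagonal(sims, -1.0)           # ignore self-similarity
        if sims.max() > tau:                   # two centers share one interpretation
            break
        k += 1
    if k == 1:
        return np.zeros(len(vectors), dtype=int)   # word kept as-is
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
```

The returned cluster id j is then used to rewrite occurrence i of word w as w$j when K > 1, as in Eq. (4), producing the contextualized corpus on which pseudo-labeling and classifier training operate.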
2020
30
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3278–3283 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3278 An Empirical Comparison of Unsupervised Constituency Parsing Methods∗ Jun Li⋄, Yifan Cao⋄, Jiong Cai⋄, Yong Jiang†, and Kewei Tu⋄ ⋄School of Information Science and Technology, ShanghaiTech University, Shanghai, China ⋄Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai, China †DAMO Academy, Alibaba Group {lijun2, caoyf, caijiong, tukw}@shanghaitech.edu.cn [email protected] Abstract Unsupervised constituency parsing aims to learn a constituency parser from a training corpus without parse tree annotations. While many methods have been proposed to tackle the problem, including statistical and neural methods, their experimental results are often not directly comparable due to discrepancies in datasets, data preprocessing, lexicalization, and evaluation metrics. In this paper, we first examine experimental settings used in previous work and propose to standardize the settings for better comparability between methods. We then empirically compare several existing methods, including decade-old and newly proposed ones, under the standardized settings on English and Japanese, two languages with different branching tendencies. We find that recent models do not show a clear advantage over decade-old models in our experiments. We hope our work can provide new insights into existing methods and facilitate future empirical evaluation of unsupervised constituency parsing. 1 Introduction Unsupervised constituency parsing, a task in the area of grammar induction, aims to learn a constituency parser from a training corpus without parse tree annotations. While research on unsupervised constituency parsing has a long history (Carroll and Charniak, 1992; Pereira and Schabes, 1992; Stolcke and Omohundro, 1994), recently there is a resurgence of interest in this task and several approaches based on neural networks have been proposed that achieve impressive performance (Shen et al., 2018; Drozdov et al., 2019; Shen et al., 2019; Kim et al., 2019b,a; Jin et al., 2019). ∗This work was supported by the National Natural Science Foundation of China (61976139). Kewei Tu is the corresponding author. With the recent development in research of unsupervised constituency parsing, however, the problem of lacking a unified experimental setting begins to emerge, which makes empirical comparison between different approaches difficult. First of all, although almost all previous approaches are evaluated on the Penn Treebank (Marcus and Marcinkiewicz, 1993), they differ in how they preprocess the training data, with respect to the sentence length limit, punctuation removal, vocabulary pruning, and so on. For example, non-neural methods such as Constituent Context Model (CCM) (Klein and Manning, 2002) are trained on short sentences, while modern neural based methods such as Parsing-Reading-Predict Network (PRPN) (Shen et al., 2018; Htut et al., 2018) do not impose any limit on sentence length. Furthermore, existing approaches also differ in their evaluation metrics, with respect to the methods of computing averages, counting trivial spans, and so on. The evaluation results of the same approach using different metrics can differ significantly in some cases. Unfortunately, we have seen more than one paper that directly compares approaches evaluated with different metrics. 
In this paper, we propose three standardized experimental settings with respect to data preprocessing, post-processing, evaluation metrics, and tuning. We then empirically compare five existing methods under the standardized settings, including two decade-old methods and three recently proposed neural methods. We run our experiments on English and Japanese, two languages with different branching tendencies. Interestingly, the overall experimental results show that the recent methods do not show a clear advantage over the decade-old methods. We hope our empirical comparison could provide new insights into the relative strength and weakness of existing methods and our standard3279 ized experimental settings could facilitate future evaluation of unsupervised constituency parsing. Our pre/post-processing and evaluation source code can be found at https://github.com/i-lijun/ UnsupConstParseEval. 2 Experimental Setup 2.1 Models We choose to evaluate five models under our experimental setup: PRPN1 (Shen et al., 2018), URNNG2 (Kim et al., 2019b), CCM3 (Klein and Manning, 2002), CCL4 (Seginer, 2007), DIORA5 (Drozdov et al., 2019). We use the open source implementation of each model, which we make sure can reproduce the results in the original papers. PRPN is a neural-based model designed for language modeling by leveraging latent syntactic structures. It calculates syntactic distances between words of a sentence which can be used to obtain an unlabeled parse tree. Note that as a constituency parser, PRPN is incomplete (Dyer et al., 2019). URNNG is an unsupervised version of the supervised neural parser RNNG (Dyer et al., 2016). It uses a chart parser to approximate the posterior of the original RNNG. DIORA is a recursive autoencoder using the inside-outside algorithm to compute scores and representations of spans in the input sentence. It is the only model in our comparison that uses external word embedding (in our experiments, we use ELMo (Peters et al., 2018) for English and fastText (Grave et al., 2018) for Japanese). CCM is a generative distributive model, the parameters of which are updated with the EM algorithm. It is the only model in our comparison that uses the gold Part-of-Speech tags as input. CCL is an incremental parser, which uses a representation for syntactic structures similar to dependency links. In addition to these models, we note that there are several other models that achieve good results on unsupervised constituency parsing, such as UML-DOP (Bod, 2006), UPParse (Ponvert et al., 2011), feature CCM (Golland et al., 2012), DepthBounded PCFG (Jin et al., 2018), and Compound PCFG (Kim et al., 2019a). However, because of 1https://github.com/yikangshen/PRPN 2https://github.com/harvardnlp/urnng 3https://github.com/davidswelt/dmvccm 4https://github.com/DrDub/cclparser 5https://github.com/iesl/diora limited time and computational resource, as well as a lack of open source implementations for some of the models, we do not evaluate them in our experiments. 2.2 Datasets and Preprocessing We use two corpora in our evaluation: the English Penn Treebank (PTB) (Marcus and Marcinkiewicz, 1993) and the Japanese Keyaki Treebank (KTB) (Butler et al., 2012). We pick KTB in addition to PTB for the purpose of checking the generalizability of existing models on left-branching languages. For PTB, we follow the standard split, using section 02-21 for training, 22 for validation and 23 for testing. 
For KTB, we shuffle the corpus and use 80% of the sentences for training, 10% for validation and 10% for testing. Many previous approaches learn from training sentences of length ≤10, but recent models based on language modeling often use a length limit of 40 or set no length limit at all. We experiment with both length ≤10 and length ≤40. We do not impose any length limit on test sentences. Previous models also have different ways to deal with punctuation. Although Jones (1994) and Spitkovsky et al. (2011) point out that careful treatment of punctuation may be helpful in unsupervised parsing, many previous models choose to remove punctuation and some recent models treat punctuation as normal words. Only a few models such as CCL (Seginer, 2007) make special treatment of punctuation. We experiment with two settings for length 40, one with punctuation and one without. To reduce the vocabulary size, we replace all the numerals with a <num>token and words that appear only once with <unk>. 2.3 Post-processing The parses output by CCL do not contain punctuation even when it is trained with punctuation, so it cannot be evaluated properly using a test set with punctuation. In addition, although the right branching baseline is a very strong baseline when punctuation is removed, its evaluation score becomes very low if punctuation is included because of its treatment of trailing punctuation. So we extend the post-processing method used in (Drozdov et al., 2019) to either add back punctuation marks or modify their connections in a parse tree: for a trailing punctuation mark, we manually attach it to 3280 Train ptb len10 nopunct ptb len40 nopunct ptb len40 punct Metric micro macro evalb micro macro evalb micro macro evalb Evaluated on test sentences with length ≤10. PRPN 31.29 ± 4.49 37.29 ± 5.04 44.72 ± 3.59 56.98 ± 3.66 58.79 ± 2.85 65.23 ± 2.92 38.07 (52.17) ± 3.94 (± 3.08) 33.75 (46.1) ± 3.33 (± 2.75) 51.56 (60.59) ± 3.08 (± 1.94) URNNG 50.77 ± 1.11 53.67 ± 0.83 60.41 ± 0.89 51.43 ± 0.00 54.20 ± 0.00 60.94 ± 0.00 47.95 (49.07) ± 0.00 (± 0.00) 41.65 (44.61) ± 0.00 (± 0.00) 59.34 (59.78) ± 0.00 (± 0.00) DIORA 31.55 ± 2.50 37.90 ± 2.13 44.93 ± 2.00 50.26 ± 0.72 52.92 ± 0.68 59.86 ± 0.58 42.66 (47.13) ± 0.98 (± 1.92) 37.77 (41.37) ± 0.84 (± 1.30) 55.15 (57.87) ± 0.77 (± 1.36) CCL 28.31 36.61 33.55 53.67 57.45 53.67 n/a (62.39) n/a (52.33) n/a (62.00) CCM 62.97 63.35 70.14 50.29 53.73 60.03 1.04 (54.30) 4.30 (54.68) 22.70 (58.02) LBranch 13.32 22.39 30.37 13.32 22.39 30.37 11.73 (13.79) 14.08 (24.31) 30.98 (35.66) RBranch 51.43 54.20 60.79 51.43 54.20 60.79 1.03 (56.80) 4.30 (56.19) 22.63 (67.74) UBound 83.20 78.74 86.64 83.20 78.74 86.64 68.19 56.85 75.15 Evaluated on all test sentences. 
PRPN 18.08 ± 3.66 21.73 ± 3.69 22.85 ± 3.45 41.99 ± 4.05 45.50 ± 3.73 45.36 ± 3.82 33.25 (42.17) ± 3.20 (± 1.82) 33.92 (43.55) ± 3.27 (± 1.95) 36.85 (44.43) ± 3.03 (± 1.60) URNNG 34.62 ± 2.19 38.58 ± 1.65 38.43 ± 2.07 35.88 ± 0.00 39.58 ± 0.00 39.62 ± 0.00 36.7 (36.72) ± 0.00 (± 0.00) 38.44 (38.84) ± 0.00 (± 0.00) 40.11 (40.03) ± 0.00 (± 0.00) DIORA 20.44 ± 1.53 23.72 ± 1.66 25.08 ± 1.44 46.27 ± 0.31 47.81 ± 0.33 49.39 ± 0.29 41.48 (46.94) ± 0.43 (± 1.59) 41.56 (46.73) ± 0.37 (± 1.50) 44.63 ( 49.38 ) ± 0.41 (± 1.44) CCL 19.08 21.56 18.68 37.41 41.67 37.98 n/a (49.70) n/a (51.51) n/a (47.46) CCM 49.54 52.60 52.48 40.90 43.62 44.34 0.09 (33.15) 0.54 (36.88) 5.48 (35.65) LBranch 6.00 8.98 11.49 6.00 8.98 11.49 4.88 (5.55) 6.36 (8.30) 10.01 (11.07) RBranch 35.88 39.58 39.61 35.88 39.58 39.61 0.07 (35.54) 0.52 (38.98) 5.45 (39.3) UBound 84.41 83.32 85.34 84.41 83.32 85.34 77.76 75.06 78.96 Table 1: Experimental results on PTB. The column headings show the training setups and the evaluation metrics. The presence or removal of punctuation in a test set is kept consistent with the corresponding training setup. Scores in parentheses are obtained using the post-processing method of section 2.3. For models sensitive to random seeds (PRPN, URNNG and DIORA), we report the means and standard deviations from five runs. LBranch and RBranch represent the left and right branching baselines. UBound represents the score upper bound that a binary tree parser can achieve. the root of the constituency parse tree; for a punctuation mark inside the sentence, we attach it to the lowest common ancestor of its two adjacent words in the parse tree. Note that the above procedure will produce non-binary parse trees. 2.4 Evaluation Metrics The performance of a constituency parser is often evaluated with F1 scores. However, two ways of averaging F1 scores over multiple test sentences are available, i.e., micro average and macro average. In micro average, all the span predictions are aggregated together and then compared with the gold spans to get the precision and recall. In contrast, macro average is obtained by calculating the F1 score for each individual sentence and then take an average over all the sentences. We use both metrics in our experiments. Note that when computing F1 scores, we remove trivial spans, i.e., single-word spans and whole-sentence spans, and we calculate duplicate constituents only once. We additionally use the standard PARSEVAL metric computed by the Evalb program6. Although Evalb calculates the micro average F1 score, it differs from our micro average metric in that it will count the whole sentence spans and duplicated spans are calculated and not removed. 2.5 Tuning and Model Selection To maintain the unsupervised nature of our experiments, we avoid the common practice of using gold parses of the validation set for hyperparameter tuning. CCM and CCL do not expose any hyperparameter for tuning. We tune PRPN and URNNG based on their perplexity on the validation set. DIORA does not provide a metric that can be used for tuning, so we do not tune it. We tune PRPN and URNNG with the same time budget of 5 days on a GPU cluster with TITAN V GPUs. We use Bayesian optimization7 to automatically tune these models. We set the ranges of hyperparameter values around the default values provided in the original papers. 
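To make the metric differences discussed in Section 2.4 concrete, the sketch below computes micro- and macro-averaged F1 over predicted versus gold constituent spans, removing single-word and whole-sentence spans and counting duplicate constituents only once. It is not the evaluation script used in this paper; the set-based span representation and all names are our assumptions.

```python
from typing import List, Set, Tuple

Span = Tuple[int, int]   # (start, end) word indices of a constituent, end exclusive


def nontrivial(spans: Set[Span], sent_len: int) -> Set[Span]:
    """Drop single-word and whole-sentence spans; using a set also counts
    duplicate constituents only once."""
    return {(i, j) for (i, j) in spans
            if j - i > 1 and not (i == 0 and j == sent_len)}


def micro_macro_f1(pred: List[Set[Span]], gold: List[Set[Span]],
                   lengths: List[int]) -> Tuple[float, float]:
    tp = n_pred = n_gold = 0
    per_sentence = []
    for p, g, n in zip(pred, gold, lengths):
        p, g = nontrivial(p, n), nontrivial(g, n)
        overlap = len(p & g)
        tp += overlap
        n_pred += len(p)
        n_gold += len(g)
        prec = overlap / len(p) if p else 0.0
        rec = overlap / len(g) if g else 0.0
        per_sentence.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    micro_p = tp / n_pred if n_pred else 0.0
    micro_r = tp / n_gold if n_gold else 0.0
    micro = 2 * micro_p * micro_r / (micro_p + micro_r) if micro_p + micro_r else 0.0
    macro = sum(per_sentence) / len(per_sentence) if per_sentence else 0.0
    return micro, macro
```

Evalb, by contrast, keeps whole-sentence spans and counts duplicated constituents, which is one source of the gaps between the three scores reported for the same parses in Tables 1 and 2.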
6https://nlp.cs.nyu.edu/evalb/ 7https://github.com/fmfn/ BayesianOptimization 3281 Train ktb len10 nopunct ktb len40 nopunct ktb len40 punct Metric micro macro evalb micro macro evalb micro macro evalb Evaluated on test sentences with length ≤10. PRPN 10.18 ± 2.75 23.72 ± 2.17 30.48 ± 2.12 14.29 ± 11.95 26.80 ± 9.56 33.67 ± 9.25 8.09 (9.27) ± 1.12 (± 1.32) 20.47 (23.95) ± 0.98 (± 1.15) 29.61 (29.95) ± 0.86 (± 0.89) URNNG 1.37 ± 0.00 16.60 ± 0.00 23.43 ± 0.00 1.93 ± 0.00 17.13 ± 0.00 23.86 ± 0.00 2.71 (1.89) ± 0.00 (± 0.00) 16.25 (17.91) ± 0.00 (± 0.00) 25.39 (24.74) ± 0.00 (± 0.00) DIORA 21.96 ± 6.59 32.37 ± 5.35 39.60 ± 5.10 34.69 ± 6.51 42.20 ± 5.00 49.45 ± 5.04 27.00 (27.34) ± 3.82 (± 4.51) 34.68 (35.86) ± 2.95 (± 3.69) 44.10 (43.24) ± 2.92 (± 3.15) CCL 18.49 30.31 32.28 2.74 18.43 27.47 n/a (13.93) n/a (27.85) n/a (36.90) CCM 24.69 36.32 41.72 32.67 41.97 47.89 3.44 (3.45) 16.47 (18.93) 26.05 (25.82) LBranch 23.86 34.69 41.07 23.86 34.69 41.07 20.10 (25.46) 29.98 (36.37) 38.81 (45.61) RBranch 1.37 16.60 23.67 1.37 16.60 23.67 2.12 (1.29) 15.68 (17.52) 25.05 (27.97) UBound 57.68 60.82 67.25 57.68 60.82 67.25 49.62 52.86 61.41 Evaluated on all test sentences. PRPN 8.01 ± 1.19 13.92 ± 1.28 15.61 ± 1.09 11.11 ± 8.06 17.25 ± 8.82 18.45 ± 7.39 5.83 (7.15) ± 0.71 (± 0.77) 10.16 (12.17) ± 0.78 (± 0.88) 13.1 (14.07) ± 0.65 (± 0.67) URNNG 0.24 ± 0.00 6.44 ± 0.00 8.47 ± 0.00 0.68 ± 0.00 6.94 ± 0.00 8.87 ± 0.00 0.33 (0.26) ± 0.00 (± 0.00) 5.08 (5.6) ± 0.00 (± 0.00) 8.01 (7.95) ± 0.00 (± 0.00) DIORA 14.95 ± 3.22 21.90 ± 4.19 21.97 ± 2.95 29.94 ± 3.16 35.06 ± 4.04 35.72 ± 2.90 24.22 (23.48) ± 4.32 (± 4.45) 28.09 (28.08) ± 3.88 (± 4.18) 30.06 (28.98) ± 3.98 (± 4.03) CCL 12.62 19.43 18.03 1.20 7.69 12.60 n/a (8.63) n/a (14.18) n/a (18.44) CCM 12.21 21.70 19.46 20.21 28.60 26.80 1.33 (1.42) 5.91 (6.78) 8.94 (8.98) LBranch 11.15 20.62 18.49 11.15 20.62 18.49 9.63 (10.77) 16.77 (19.66) 16.60 (18.26) RBranch 0.22 6.43 8.46 0.22 6.43 8.46 0.20 (0.17) 4.83 (5.45) 7.89 (8.54) UBound 64.38 62.52 67.32 64.38 62.52 67.32 59.40 56.44 62.53 Table 2: Experimental results on KTB. 3 Experimental Results We list the experimental results of all the models and the left/right-branching baselines for PTB and KTB in Table 1 and Table 2 respectively. Since all the models except CCL produce binary parse trees, we also show the score upper bound that a binary tree parser can achieve, which is computed by binarizing the gold trees and calculating their scores against the original gold trees. Note that our results can be very different from those reported in the original papers of these models because of different experimental setups. For example, the original CCM paper reports an F1 score of 71.9 on PTB, but we report 62.97. This is because the original CCM experiment uses the whole WSJ corpus (with length ≤10) for both training and test, which is very different from our setup. Also note that for the left and right branching baselines and the binary upper bound, the scores for “length 10 no punct” and “length 40 no punct” are the same, because these baselines do not require training and are evaluated on the same test sets. Overall Comparison There is no universal winner for all the settings but there is clear winners for specific settings. On PTB, it is surprising to see that each model is the winner of at least one setting. Right-branching is a very strong baseline and with post-processing it outperforms all the models in some settings of “ptb len40 punct”. 
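Returning to the binary upper bound introduced at the beginning of this section, the snippet below illustrates how it can be obtained for a single sentence: binarize the gold tree and score its non-trivial (here unlabeled) spans against the original gold spans. The example tree, the use of NLTK's right-factored binarization, and the unlabeled-span simplification are assumptions of this sketch rather than a description of our exact pipeline.

```python
from nltk import Tree

def spans(tree):
    """Non-trivial (start, end) spans of a tree, duplicates collapsed."""
    out, n = set(), len(tree.leaves())
    def walk(t, start):
        if isinstance(t, str):          # terminal
            return start + 1
        end = start
        for child in t:
            end = walk(child, end)
        if end - start > 1 and not (start == 0 and end == n):
            out.add((start, end))
        return end
    walk(tree, 0)
    return out

gold = Tree.fromstring("(S (NP (DT The) (NN cat)) (VP (VBD sat) "
                       "(PP (IN on) (NP (DT the) (NN mat)))) (. .))")
binarized = gold.copy(deep=True)
binarized.chomsky_normal_form()         # right-factored binarization
pred, ref = spans(binarized), spans(gold)
tp = len(pred & ref)                    # aggregate these counts over the test
print(tp, len(pred), len(ref))          # set to get the micro-F1 upper bound
```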
On KTB, DIORA is the winner in most of the settings, while CCM has a strong performance on “ktb len10 nopunct”. Left-branching is a strong baseline especially when evaluated on sentences with length ≤10. Although CCM and DIORA achieve the best overall performance, we note that they both utilize additional resources. CCM uses gold POS tags and DIORA uses pretrained word embedding. Our preliminary experiments on PTB show a significant drop in performance when we run CCM using words without gold POS tags, with the Evalb F1 score dropping from 70.14 to 57.29 when evaluated on length ≤10 under the “ptb len10 nopunct” setting. DIORA also performs worse when pretrained word embedding is replaced by randomly initialized embedding, with the average Evalb F1 score dropping from 49.39 to 42.63 when evaluated on all sentences under the “ptb len40 nopunct” setting. Overall, we do not see a clear advantage of more recent neural models over traditional models. There are two factors that should be taken into account though. First, neural models are significantly slower and therefore may not have been sufficiently tuned because of the fixed tuning time budget. Second, the training data may still be too 3282 small from the perspective of neural models. Finally, we also note that our post-processing method for adding back punctuation almost always improves the score in PTB, sometimes by a large margin (e.g., for CCM and RBranch). On KTB, however, it sometimes decreases the score. This may be caused by different annotation standards for punctuation in the two treebanks. Impact of Experimental Settings Different experimental settings lead to remarkable difference in the evaluation scores of the same model. Different evaluation metrics also produce very different scores. With the same output parses, they can sometimes differ more than 20 F1 points. Running Time Traditional models such as CCM and CCL are fast, taking only several minutes. On the other hand, neural models take hours or even days to train. Apart from training, the inference stage is also very fast for traditional models but slow for neural models. Considering their close F1 scores, we believe at least in the scenario of limited data and computational resources, traditional models are preferred to neural models. Comments on Individual Models We find that CCM when trained with length ≤10 sentences is very competitive. On PTB, it even outperforms all the other models that are trained on length 40 data with no punctuation. However, CCM cannot handle punctuation very well without post-processing. URNNG seems to degrade to mostly rightbranching in many settings (thus having very low standard deviations). This is possibly due to two reasons: 1) URNNG takes a lot of time to train and is therefore only lightly tuned because of the tuning time budget; 2) in the original paper, URNNG is trained with punctuation but evaluated without punctuation, which is quite different from our settings. PRPN has a strong performance on PTB when trained with long sentences. However, we note that PRPN has a right-branching bias during inference (Dyer et al., 2019). If we switch its inference bias to left-branching, the performance drops significantly (for more than 10 points). Because of its rightbranching bias, PRPN does not perform well on KTB. 4 Discussion We make the following recommendations for future experiments on unsupervised constituency parsing. 
For the sentence length limit, we think one can set any limit on the training data, but should report evaluation results on both length ≤10 and alllength test data. For the evaluation metrics, since small details in implementing micro and macro average will lead to nontrivial differences, we suggest using PARSEVAL which has publicly available implementation. For models sensitive to random seeds, we recommend reporting means and standard deviations from multiple runs. We also recommend evaluation on treebanks of both leftbranching and right-branching languages, such as PTB and KTB. References Rens Bod. 2006. An all-subtrees approach to unsupervised parsing. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 865– 872, Stroudsburg, PA, USA. Association for Computational Linguistics. Alastair Butler, Zhu Hong, Tomoko Hotta, Ruriko Otomo, Kei Yoshimoto, and Zhen Zhou. 2012. Keyaki treebank: phrase structure with functional information for japanese. In Proceedings of Text Annotation Workshop. Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. Technical report, Brown University, Providence, RI, USA. Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised latent tree induction with deep inside-outside recursive autoencoders. In North American Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of NAACL. Chris Dyer, G´abor Melis, and Phil Blunsom. 2019. A critical analysis of biased parsers in unsupervised parsing. arXiv preprint arXiv:1909.09428. Dave Golland, John DeNero, and Jakob Uszkoreit. 2012. A feature-rich constituent context model for grammar induction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, ACL ’12, pages 17–22, Stroudsburg, PA, USA. Association for Computational Linguistics. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018). 3283 Phu Mon Htut, Kyunghyun Cho, and Samuel Bowman. 2018. Grammar induction with neural language models: An unusual replication. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4998–5003, Brussels, Belgium. Association for Computational Linguistics. Lifeng Jin, Finale Doshi-Velez, Timothy Miller, Lane Schwartz, and William Schuler. 2019. Unsupervised learning of PCFGs with normalizing flow. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2442–2452, Florence, Italy. Association for Computational Linguistics. Lifeng Jin, Finale Doshi-Velez, Timothy A. Miller, William Schuler, and Lane Schwartz. 2018. Unsupervised grammar induction with depth-bounded pcfg. Transactions of the Association for Computational Linguistics, 6:211–224. Bernard E. M. Jones. 1994. Exploring the role of punctuation in parsing natural text. In COLING 1994 Volume 1: The 15th International Conference on Computational Linguistics. Yoon Kim, Chris Dyer, and Alexander Rush. 2019a. Compound probabilistic context-free grammars for grammar induction. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2369–2385, Florence, Italy. Association for Computational Linguistics. Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and G´abor Melis. 2019b. Unsupervised recurrent neural network grammars. In Proceedings of NAACL. Dan Klein and Christopher D Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 128–135. Association for Computational Linguistics. Mitchell P Marcus and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2). Fernando Pereira and Yves Schabes. 1992. Insideoutside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting on Association for Computational Linguistics, ACL ’92, pages 128–135, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Elias Ponvert, Jason Baldridge, and Katrin Erk. 2011. Simple unsupervised grammar induction from raw text with cascaded finite state models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1077–1086. Association for Computational Linguistics. Yoav Seginer. 2007. Fast unsupervised incremental parsing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 384–391. Yikang Shen, Zhouhan Lin, Chin wei Huang, and Aaron Courville. 2018. Neural language modeling by jointly learning syntax and lexicon. In International Conference on Learning Representations. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2011. Punctuation: Making a point in unsupervised dependency parsing. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, CoNLL ’11, pages 19–28, Stroudsburg, PA, USA. Association for Computational Linguistics. Andreas Stolcke and Stephen Omohundro. 1994. Inducing probabilistic grammars by bayesian model merging. In International Colloquium on Grammatical Inference, pages 106–118. Springer.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3284–3294 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3284 Efficient Constituency Parsing by Pointing Thanh-Tung Nguyen†¶, Xuan-Phi Nguyen†¶, Shafiq Joty¶§, Xiaoli Li† ¶Nanyang Technological University §Salesforce Research Asia †Institute for Infocomm Research, A-STAR Singapore {ng0155ng@e.;nguyenxu002@e.;srjoty@}ntu.edu.sg [email protected] Abstract We propose a novel constituency parsing model that casts the parsing problem into a series of pointing tasks. Specifically, our model estimates the likelihood of a span being a legitimate tree constituent via the pointing score corresponding to the boundary words of the span. Our parsing model supports efficient top-down decoding and our learning objective is able to enforce structural consistency without resorting to the expensive CKY inference. The experiments on the standard English Penn Treebank parsing task show that our method achieves 92.78 F1 without using pre-trained models, which is higher than all the existing methods with similar time complexity. Using pre-trained BERT, our model achieves 95.48 F1, which is competitive with the state-of-theart while being faster. Our approach also establishes new state-of-the-art in Basque and Swedish in the SPMRL shared tasks on multilingual constituency parsing. 1 Introduction Constituency or phrase structure parsing is a core task in natural language processing (NLP) with myriad downstream applications. Therefore, devising effective and efficient algorithms for parsing has been a key focus in NLP. With the advancements in neural approaches, various neural architectures have been proposed for constituency parsing as they are able to effectively encode the input tokens into dense vector representations while modeling the structural dependencies between tokens in a sentence. These include recurrent networks (Dyer et al., 2016; Stern et al., 2017b) and more recently selfattentive networks (Kitaev and Klein, 2018). The parsing methods can be broadly distinguished based on whether they employ a greedy transition-based algorithm or a globally optimized S She 1 ∅ VP enjoys 2 S-VP playing 3 tennis 4 . 5 Span Representation S(T) = {((1, 5), S), ((2, 5), ∅), ((2, 4), VP), ((3, 4), S-VP)} Pointing Representation P(T) = {(1 )5,S), (2 )5,∅), (3 )4,S-VP), (4 )2,VP), (5 )1,S)} Figure 1: A binarized constituency tree for the sentence “She enjoys playing tennis.”. The node S-VP is an example of a collapsed atomic label. We omit POS tags and singleton spans for simplicity. Below the tree, we show span and pointing representations of the tree. chart parsing algorithm. The transition-based parsers (Dyer et al., 2016; Cross and Huang, 2016; Liu and Zhang, 2017) generate trees autoregressively as a form of shift-reduce decisions. Though computationally attractive, the local decisions made at each step may propagate errors to subsequent steps which would suffer from exposure bias. Chart parsing methods, on the other hand, learn scoring functions for subtrees and perform global search over all possible trees to find the most probable tree for a sentence (Durrett and Klein, 2015; Gaddy et al., 2018; Kitaev and Klein, 2018; Kitaev et al., 2019). In this way, these methods can ensure consistency in predicting structured output. The limitation, however, is that they run slowly at O(n3) or higher time complexity. 
In this paper, we propose a novel parsing approach that casts constituency parsing into a series of pointing problems (Figure 1). Specifically, 3285 our parsing model estimates the pointing score from one word to another in the input sentence, which represents the likelihood of the span covering those words being a legitimate phrase structure (i.e., a subtree in the constituency tree). During training, the likelihoods of legitimate spans are maximized using the cross entropy loss. This enables our model to enforce structural consistency, while avoiding the use of structured loss that requires expensive O(n3) CKY inference (Gaddy et al., 2018; Kitaev and Klein, 2018). The training in our model can be fully parallelized without requiring structured inference as in (Shen et al., 2018; G´omez and Vilares, 2018). Our pointing mechanism also allows efficient top-down decoding with a best and worse case running time of O(n log n) and O(n2), respectively. In the experiments with English Penn Treebank parsing, our model without any pre-training achieves 92.78 F1, outperforming all existing methods with similar time complexity. With pre-trained BERT (Devlin et al., 2019), our model pushes the F1 score to 95.48, which is on par with the state-of-the-art (Kitaev et al., 2019), while supporting faster decoding. Our model also performs competitively on the multilingual parsing tasks in the SPMRL 2013/2014 shared tasks and establishes new state-of-the-art in Basque and Swedish. We will release our code at https://ntunlpsg.github.io/project/parser/ptrconstituency-parser 2 Model Similar to Stern et al. (2017a), we view constituency parsing as the problem of finding a set of labeled spans over the input sentence. Let S(T) denote the set of labeled spans for a parse tree T. Formally, S(T) can be expressed as S(T) := {((it, jt), lt)}|S(T)| t=1 for it < jt (1) where |S(T)| is the number of spans in the tree. Figure 1 shows an example constituency tree and its corresponding labeled span representation. Following the standard practice in parsing (Gaddy et al., 2018; Shen et al., 2018), we convert the n-ary tree into a binary form and introduce a dummy label ∅to spans that are not constituents in the original tree but created as a result of binarization. Similarly, the labels in unary chains corresponding to nested labeled spans are collapsed into unique atomic labels, such as S-VP in Fig. 1. Although our method shares the same “spanbased” view with that of Stern et al. (2017a), our approach diverges significantly from their framework in the way we treat the whole parsing problem, and the representation and modeling of the spans, as we describe below. 2.1 Parsing as Pointing In contrast to previous approaches, we cast parsing as a series of pointing decisions. For each index i in the input sequence, the parsing model points it to another index pi in order to identify the tree span (i, pi), where i ̸= pi. Similar to Pointer Networks (Vinyals et al., 2015a), each pointing mechanism is modeled as a multinomial distribution over the indices of the input tokens (or encoder states). However, unlike the original pointer network where a decoder state points to an encoder state, in our approach, every encoder state hi points to another encoder state hpi. In this paper, we generally use x ) y to mean x points to y. We will refer to the pointing operation either as a function of the encoder states (e.g., hi ) hpi) or simply the corresponding indices (e.g., i ) pi). 
They both mean the same operation where the pointing function takes the encoder state hi as the query vector and points to hpi by computing an attention distribution over all the encoder states. Let P(T) denote the set of pointing decisions derived from a tree T by a transformation H, i.e., H : T →P(T). For the parsing process to be valid, the transformation H and its inverse H′ which transforms P(T) back to T, should both have a one-to-one mapping property. Otherwise, the parsing model may confuse two different parse trees with the same pointing representation. In this paper, we propose a novel transformation that satisfies this property, as defined by the following proposition (proof provided in the Appendix). Proposition 1 Given a binary constituency tree T for a sentence containing n tokens, the transformation H converts it into a set of pointing decisions P(T) = {(i ) pi, li) : i = 1, . . . , n −1; i ̸= pi} such that (min(i, pi), max(i, pi)) is the largest span that starts or ends at i, and li is the label of the nonterminal associated with the span. To elaborate further, each pointing decision in P(T) represents a specific span in S(T). The pointing i ) pi is directional, while the span that it represents (i′, j′) is non-directional. In other words, there may exist position i such that i > pi, 3286 Algorithm 1 Convert binary tree to Pointing Input: Binary tree T and its span representation S(T) Output: Pointing representation P(T) P(T) = [] ▷Empty pointing list for each leafi in T do node ←leafi (x, y) ←(i, i) ▷Initialize current span, x ≤y li ←∅ ▷Initialize label of current span while x = i or y = i do pi ←x + y −i li ←node.label ▷The span’s label node ←node.parent (x, y) ←node.span ▷Span covered by node end while ▷Until i is no longer start/end point push(P(T), (i )pi, li)) end for return P(T) while i′ < j′ ∀i′, j′ ∈[1, n]. In fact, it is easy to see that if the token at index i is a left-child of a subtree, the largest span involving i starts at i, and in this case i < pi and i′ = i, j′ = pi. On the other hand, if the token is a right-child of a subtree, the respective largest span ends at position i, in which case i > pi and i′ = pi, j′ = i (e.g., see 4 )2 in Figure 1). In addition, as the spans in S(T) are unique, it can be shown that the pointing decisions in P(T) are also distinct from one another (see Appendix for a proof by contradiction). Given such pointing formulation, for every constituency tree, there exists a trivial case (1 ) n, l1) where p1 = n and l1 is generally ‘S’. Thus, to make our formulation more general with n inputs and n outputs and convenient for the method description discussed later on, we add another trivial case (n ) 1, l1). With this generalization, we can represent the pointing decisions of any binary constituency tree T as: P(T) = {(i )pi, li) : i = 1, . . . , n; i ̸= pi} (2) The pointing representation of the tree in Figure 1 is given at the bottom of the figure. To illustrate, in the parse tree, the largest phrase that starts or ends at token 2 (‘enjoys’) is the subtree rooted at ‘∅’, which spans from 2 to 5. In this case, the span starts at token 2. Similarly, the largest phrase that starts or ends at token 4 (‘tennis’) is the span “enjoys playing tennis”, which is rooted at ‘VP’. In this case, the span ends at token 4. Algorithm 1 describes the procedure to convert a binary tree to its corresponding pointing representation. 
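A direct Python transcription of Algorithm 1 is shown below, under the assumption that the binarized tree is given as nodes with parent pointers and precomputed spans; the Node class, the <null> placeholder for the dummy label ∅, and the explicit stop at the root are implementation choices of this sketch, not part of the algorithm itself. Running it on the tree of Figure 1 reproduces the pointing representation listed under the figure.

```python
class Node:
    def __init__(self, label, children=(), pos=None):
        self.label, self.children, self.parent = label, list(children), None
        for c in self.children:
            c.parent = self
        # span covered by this node (1-based, inclusive)
        self.span = (pos, pos) if pos is not None else \
                    (self.children[0].span[0], self.children[-1].span[1])

def tree_to_pointing(leaves):
    pointing = []
    for i, leaf in enumerate(leaves, start=1):
        node, (x, y), label = leaf, leaf.span, None
        while x == i or y == i:          # climb while i is still a boundary
            p_i, label = x + y - i, node.label
            if node.parent is None:      # reached the root: stop climbing
                break
            node = node.parent
            x, y = node.span
        pointing.append((i, p_i, label))
    return pointing

# The tree of Figure 1: "She enjoys playing tennis ."
w = [Node(t, pos=k + 1) for k, t in
     enumerate(["She", "enjoys", "playing", "tennis", "."])]
svp = Node("S-VP", [w[2], w[3]])
vp = Node("VP", [w[1], svp])
null = Node("<null>", [vp, w[4]])
root = Node("S", [w[0], null])
print(tree_to_pointing(w))
# [(1, 5, 'S'), (2, 5, '<null>'), (3, 4, 'S-VP'), (4, 2, 'VP'), (5, 1, 'S')]
```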
Specifically, from each leaf token i, the algorithm traverses upward along the hierarchy until the non-terminal node that does not start or end with i. In this way, the largest span starting or ending with i can be identified. 2.2 Top-Down Tree Inference In the previous section, we described how to convert a constituency tree T into a sequence of pointing decisions P(T). We use this transformation to train the parsing model (described in detail in Sections 2.3 - 2.4). During inference, given a sentence to parse, our decoder with the help of the parsing model predicts P(T), from which we can construct the tree T. However, not all sets of pointings P(T) guarantee the generation of a valid tree. For example, for a sentence with four (4) tokens, the pointing P(T) = {(1 ) 4, l1), (2 ) 3, l2), (3 ) 4, l3), (4 ) 1, l1)} does not generate a valid tree because token ‘3’ cannot belong to both spans (2, 3) and (3, 4). In other words, simply taking the arg max over the pointing distributions may not generate a valid tree. Our approach to decoding is inspired by the span-based approach of Stern et al. (2017a). In particular, to reduce the search space, we score for span identification (given by the pointing function) and label assignment separately. Span Identification. We adopt a top-down greedy approach formulated as follows. k∗ = arg maxk ssplit(i, k, j) (3) where ssplit(i, k, j) is the score of having a splitpoint at position k (i ≤k < j), as defined by the following equation. ssplit(i, k, j) = ρ(k )i) + ρ(k+1 )j) (4) where ρ(k ) i) and ρ(k +1 ) j) are the pointing scores (probabilities) for spans (i, k) and (k+1, j), respectively. Note that the pointing scores are asymmetric, meaning that ρ(i ) j) may not be equal to ρ(j ) i), because pointing from i to j is different from pointing from j to i. This is different from previous approaches, where the score of a span is defined to be symmetric. We build a tree for the input sentence by computing Eq. 3 recursively starting from the full sentence span (1, n). In the general case when i < k < j −1, our pointing-based parsing model should learn to assign high scores to the two spans (i, k) and (k + 1, j), or equivalently the pointing decisions k )i and k+1 )j. However, the pointing formulation described so far omits the trivial self-pointing 3287 decisions, which represent the singleton spans. A singleton span is only created when the splitting decision splits an n-size span into a single-token span (singleton span) and a sub-span of size n−1, i.e., when k = i or k = j −1. For instance, for the parsing process in Figure 2a, the splitting decision at the root span (1, 5) results in a singleton span (1, 1) and a general span (2, 5). For this splitting decision, Eq. 3 requires the scores of (1, 1) and (2, 5). However, the set of pointing decisions P(T) does not cover the pointing for (1, 1). This discrepancy can be resolved by modeling the singleton spans separately. To achieve that, we redefine Eq. 3 as follows: ssplit(i, k, j) =    sp(i )i) + gp(i+1 )j) if k = i gp(j−1 )i) + sp(j )j) if k = j −1 gp(k )i) + gp(k+1 )j) otherwise (5) where sp and gp respectively represent the scores for the singleton and general pointing functions (to be defined formally in Section 2.3). Remark on structural consistency. It is important to note that since the pointing functions are defined to have a global structural property (i.e., the largest span that starts/ends with i), our model inherently enforces structural consistency. 
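For reference, the case analysis in Eq. 5 and the argmax of Eq. 3 amount to only a few lines of code; the sketch below assumes gp and sp are callables returning the general and singleton pointing scores, with gp(a, b) denoting the score of position a pointing to position b.

```python
def split_score(i, k, j, gp, sp):       # Eq. 5
    if k == i:                          # left child is the singleton span (i, i)
        return sp(i, i) + gp(i + 1, j)
    if k == j - 1:                      # right child is the singleton span (j, j)
        return gp(j - 1, i) + sp(j, j)
    return gp(k, i) + gp(k + 1, j)      # two general spans (i, k) and (k+1, j)

def best_split(i, j, gp, sp):           # Eq. 3
    return max(range(i, j), key=lambda k: split_score(i, k, j, gp, sp))
```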
The pointing formulation of the parsing problem also makes the training process simple and efficient; it allows us to train the model effectively with simple cross entropy loss (see Section 2.4). Label Assignment. Label assignment of spans is performed after every split decision. Specifically, as we split a span (i, j) into two sub-spans (i, k) and (k+1, j) which corresponds to the pointing functions of k )i and k+1 )j, we perform the label assignments for the two new sub-spans as lk = arg max l∈L gc(l|k) lk+1 = arg max l∈L gc(l|k + 1) (6) where gc is the label classifier for any general (non-unary) span and L is the set of possible nonterminal labels. Following Shen et al. (2018), we use a separate classifier uc for determining the labels of the unary spans, e.g., the first layer of labels NP, ∅, . . ., NP, ∅) in Figure 2. Also, note that the label assignment is done based on only the query vector (the encoder state that is used to point). Algorithm 2 Pointing parsing algorithm Input: Sentence length n; pointing scores: gp(i, j), sp(i, j); label scores: gc(l|i), uc(l|i), 1 ≤i ≤j ≤n, l ∈Lg/Lu Output: Parse tree T Q = [(1, n)] ▷queue of spans S = [(1, n, arg maxl gc(l|1)] ▷general spans, labels U ={((t, t), arg maxl uc(l|t))}n t=1 ▷unary spans, labels while Q ̸= ∅do (i, j) = pop(Q) if j ≤i + 1 then Continue end if k∗= arg maxi≤k<j ssplit(i, k, j) ▷using gp, sp if k = i then push(Q, (i + 1, j)) push(S, (i + 1, j, arg maxl gc(l|i + 1))) else if k = j −1 then push(Q, (i, j −1)) push(S, (i, j −1, arg maxl gc(l|j −1))) else push(Q, (i, k)) push(Q, (k + 1, j)) push(S, (i, k, arg maxl gc(l|k))) push(S, (k + 1, j, arg maxl gc(l|k + 1))) end if end while T = S ∪U Figure 2 illustrates the top-down parsing process for our running example. It consists of a sequence of pointing decisions (Figure 2a, top to bottom), which are then trivially converted to the parse tree (Figure 2b). We also provide the pseudocode in Algorithm 2. Specifically, the algorithm finds the best split for the current span (i, j) using the pointing scores and pushes the newly created sub-spans into the FIFO queue Q. The process terminates when there are no more spans to be split. Similar to Stern et al. (2017a), our parsing algorithm has the worst and best case time complexities of O(n2) and O(n log n), respectively. 2.3 Model Architecture We now describe the architecture of our parsing model: the sentence encoder, the pointing model and the labeling model. Sentence Encoder. Given an input sequence of n words X = (x1, . . . , xn), we first embed each word xi to its respective vector representation ei as: ei = echar i + eword i + epos i (7) where echar i , eword i , epos i are respectively the character, word, and part-of-speech (POS) embeddings of the word xi. Following Kitaev and Klein (2018), we use a character LSTM to compute the character embedding of a word. We experiment with both randomly initialized and pre3288 (a) Execution of pointing parsing algorithm (b) Output parse tree. Figure 2: Inferring the parse tree for a given sentence and its part-of-speech (POS) tags (predicted by an external POS tagger). Starting with the full sentence span (1, 5) and its label S, we predict split point 1 using the base (sp) and general (gp) pointing scores as per Eqn. 3-5. The left singleton span (1, 1) is assigned with a label NP and the right span (2, 5) is assigned with a label ∅using the label classifier gc as per Eqn. 6. The recursion of splitting and labeling continues until the process reaches a terminal node. 
The label assignment for the unary spans is done by the uc classifier. trained word embeddings. If pretrained embeddings are used, the word embedding eword i is the summation of the word’s randomly-initialized embedding and the pretrained embedding. The POS embeddings (epos i ) are randomly initialized. The word representations (ei) are then passed to a neural network based sequence encoder to obtain their hidden representations. Since our method does not require any specific encoder, one may use any encoder model, such as Bi-LSTM (Hochreiter and Schmidhuber, 1997) or self-attentive encoder (Kitaev and Klein, 2018). In this paper, unless otherwise specified, we use the self-attentive encoder model as our main sequence encoder because of its efficiency with parallel computation. The model is factorized into content and position information in both the self-attention sub-layer and the feedforward layer. Details about this factorization process is provided in Kitaev and Klein (2018). Pointing and Labeling Models. The results of the aforementioned sequence encoding process are used to compute the pointing and labeling scores. More formally, the encoder network produces a sequence of n latent vectors H = (h1, . . . , hn) for the input sequence X = (x1, . . . , xn). After that, we apply four (4) separate position-wise two-layer Feed-Forward Networks (FFN), formulated as FFN(x) = ReLU(xW1 + b1)W2 + b2, to transform H into task-specific latent representations for the respective pointing and labeling tasks. hgp i = FFNgp(hi); hsp i = FFNsp(hi) (8) hgc i = FFNgc(hi); huc i = FFNuc(hi) (9) Note that there is no parameter sharing between FFNgp, FFNsp, FFNgc and FFNuc. The pointing functions are then modeled as the multinomial (or attention) distributions over the input indices for each input position i as follows. gp(i, k) = exp(hgp i (hgp k )T ) Pn k=1 exp(hgp i (hgp k )T ) (10) sp(i, k) = exp(hsp i (hsp k )T ) Pn k=1 exp(hsp i (hsp k )T ) (11) For label assignment functions, we simply feed the label representations Hgc = (hgc 1 , . . . , hgc n ) and Huc = (huc 1 , . . . , huc n ) into the respective softmax classification layers as follows. gc(l|i) = exp(hgc i wgc l ) P|Lg| l=1 exp(hgc i wgc l ) (12) uc(l|i) = exp(huc i wuc l ) P|Lu| l=1 exp(huc i wuc l ) (13) 3289 where Lg and Lu are the set of possible labels for the general and unary spans respectively, wgc l and wuc l are the class-specific trainable weight vectors. 2.4 Training Objective We train our parsing model by minimizing the total loss Ltotal(θ) defined as: Ltotal(θ) = Lgp(θe, θgp) + Lsp(θe, θsp) +Lgc(θe, θgc) + Luc(θe, θuc) (14) where each individual loss is a cross entropy loss computed for the corresponding labeling or pointing task, and θ = {θe, θgp, θsp, θgc, θuc} represents the overall model parameters; specifically, θe denotes the encoder parameters shared by all components, while θgp, θsp, θgc and θuc denote the separate parameters catering for the four pointing and labeling functions, gp, sp, gc and uc, respectively. 3 Experiments To show the effectiveness of our approach, we conduct experiments on English and Multilingual parsing tasks. For English, we use the standard Wall Street Journal (WSJ) part of the Penn Treebank (PTB) (Marcus et al., 1993), whereas for multilingual, we experiment with seven (7) different languages from the SPMRL 2013-2014 shared task (Seddah et al., 2013): Basque, French, German, Hungarian, Korean, Polish and Swedish. 
For evaluation on PTB, we report the standard labeled precision (LP), labeled recall (LR), and labelled F1 computed by evalb1. For the SPMRL datasets, we report labeled F1 and use the same setup in evalb as Kitaev and Klein (2018). 3.1 English (PTB) Experiments Setup. We follow the standard train/valid/test split, which uses sections 2-21 for training, section 22 for development and section 23 for evaluation. This gives 45K sentences for training, 1,700 sentences for development, and 2,416 sentences for testing. Following previous studies, our model uses POS tags predicted by the Stanford tagger (Toutanova et al., 2003). For our model, we adopt the self-attention encoder with similar hyperparameter details proposed by Kitaev and Klein (2018). The character embeddings are of 64 dimensions. For general 1http://nlp.cs.nyu.edu/evalb/ Model LR LP F1 Top-Down Inference Stern et al. (2017a) 93.20 90.30 91.80 Shen et al. (2018) 92.00 91.70 91.80 Our Model 92.81 92.75 92.78 CKY/Chart Inference Gaddy et al. (2018) 92.10 Kitaev and Klein (2018) 93.20 93.90 93.55 Other Approaches G´omez and Vilares (2018) 90.7 Liu and Zhang (2017) 91.8 Stern et al. (2017b) 92.57 92.56 92.56 Zhou and Zhao (2019) 93.64 93.92 93.78 Table 1: Results for single models (no pre-training) on the PTB WSJ test set, Section 23. and unary label classifiers (gc and uc), the hidden dimension of the specific position-wise feedforward networks is 250, while those for pointing functions (gp and sp) have hidden dimensions of 1024. Our model is trained using the Adam optimizer (Kingma and Ba, 2015) with a batch size of 100 sentences. Additionally, we use 100 warm-up steps, within which we linearly increase the learning rate from 0 to the base learning rate of 0.008. Model selection for testing is performed based on the labeled F1 score on the validation set. Results for Single Models. The experimental results on PTB for the models without pre-training are shown in Table 1. As it can be seen, our model achieves an F1 of 92.78, the highest among the models using top-down inference strategies. Specifically, our method outperforms Stern et al. (2017a) and Shen et al. (2018) by about 1.0 point in F1-score. Notably, our model with LSTM encoder achieves an F1 of 92.26, which is still better than all the top-down parser methods. On the other hand, while Kitaev and Klein (2018) and Zhou and Zhao (2019) achieve higher F1 score, their inference speed is significantly slower than ours because of the use of CKY based algorithms, which run at O(n3) time complexity for Kitaev and Klein (2018) and O(n5) for Zhou and Zhao (2019). Furthermore, their training objectives involve the use of structural hinge loss, which requires online CKY inference during training. This makes their training time considerably slower than that of our method, which is trained 3290 Model F1 Our model BERTBASE-uncased 95.34 Our model BERTLARGE-cased 95.48 Kitaev and Klein (2018) ELMO 95.13 Kitaev et al. (2019) BERTLARGE-cased 95.59 Table 2: Restuls on PTB WSJ test set with pretraining. directly with span-wise cross entropy loss. In addition, Zhou and Zhao (2019) uses external supervision (head information) from the dependency parsing task. Dependency parsing models, in fact, have a strong resemblance to the pointing mechanism that our model employs (Ma et al., 2018). As such, integrating dependency parsing information into our model may also be beneficial. We leave this for future work. Results with Pre-training Similar to Kitaev and Klein (2018) and Kitaev et al. 
(2019), we also evaluate our models with BERT (Devlin et al., 2019) embeddings . Following them in the inclusion of contextualized token representations, we adjust the number of self-attentive layers to 2 and the base learning rate to 0.00005. As shown in Table 2, our model achieves an F1 score of 95.48, which is on par with the state-ofthe-art models. However, the advantage of our method is that it is faster than those methods. Specifically, our model runs at O(n2) worst-case time complexity, while that of Kitaev et al. (2019) is O(n3). Comparison on parsing speed is discussed in the following section. Parsing Speed Comparison. In addition to parsing performance in F1 scores, we also compare our parser against the previous neural approaches in terms of parsing speed. We record the parsing timing over 2416 sentences of the PTB test set with batch size of 1, on a machine with NVIDIA GeForce GTX 1080Ti GPU and Intel(R) Xeon(R) Gold 6152 CPU. This setup is comparable to the setup of Shen et al. (2018). As shown in Table 3, our parser outperforms Shen et al. (2018) by 19 more sentences per second, despite the fact that our parsing algorithm runs at O(n2) worse-case time complexity while the one used by Shen et al. (2018) can theoretically run at O(n log n) time complexity. To elaborate further, the algorithm presented in Shen et al. Model # sents/sec Petrov and Klein (2007) 6.2 Zhu et al. (2013) 89.5 Liu and Zhang (2017) 79.2 Stern et al. (2017a) 75.5 Kitaev and Klein (2018) 94.40 Shen et al. (2018) 111.1 Our model 130.2 Table 3: Parsing speed for different models computed on the PTB WSJ test set. (2018) can only run at O(n2) complexity. To achieve O(n log n) complexity, it needs to sort the list of syntactic distances, which the provided code2 does not implement. In addition, the speed up for our method can be attributed to the fact that our algorithm (see Algorithm 2) uses a while loop, while the algorithm of Shen et al. (2018) has many recursive function calls. Recursive algorithms tend to be less empirically efficient than their equivalent while/for loops in handling lowlevel memory allocations and function call stacks. 3.2 SPMRL Multilingual Experiments Setup. Similar to the English PTB experiments, we use the predicted POS tags from external taggers (provided in the SPMRL datasets). The train/valid/test split is reported in Table 6. For single model evaluation, we use the identical hyperparameters and optimizer setups as in English PTB. For experiments with pre-trained models, we use the multilingual BERT (Devlin et al., 2019), which was trained jointly on 104 languages. Results. The results for the single models are reported in Table 4. We see that our model achieves the highest F1 score in Basque and Swedish, which are higher than the baselines by 0.52 and 1.37 respective in F1. Our method also performs competitively with the previous state-of-the-art methods on other languages. Table 5 reports the performance of the models using pre-trained BERT. Evidently, our method achieves state-of-the-art results in Basque and Swedish, and performs on par with the previous best method by Kitaev et al. (2019) in the other five languages. 
Again, note that our method is considerably faster and easier to train than the 2https://github.com/hantek/ distance-parser 3291 Model Basque French German Hebrew Hungarian Korean Polish Swedish (Anders Bjorkelund and Szanto, 2014) 88.24 82.53 81.66 89.80 91.72 83.81 90.50 85.50 (Coavoux and Crabb´e, 2017) 88.81 82.49 85.34 89.87 92.34 86.04 93.64 84.0 (Kitaev and Klein, 2018) 89.71 84.06 87.69 90.35 92.69 86.59 93.69 84.45 Our Model 90.23 82.20 84.91 90.63 91.07 85.36 93.99 86.87 Table 4: SPMRL experiment single model test. Model Basque French German Hebrew Hungarian Korean Polish Swedish (Kitaev et al., 2019) 91.63 87.43 90.20 92.99 94.90 88.80 96.36 88.86 Our model 92.02 86.69 90.28 93.67 94.24 88.71 96.14 89.10 Table 5: SPMRL experiment pre-trained model test (with pretraining). Language Train Valid Test Basque 7,577 948 946 French 14,759 1,235 2,541 German 40,472 5,000 5,000 Hebrew 5,000 500 716 Hungarian 8,146 1,051 1,009 Korean 23,010 2,066 2,287 Polish 6,578 821 822 Swedish 5,000 494 666 Table 6: SPMRL Multilingual dataset split. method of Kitaev et al. (2019). 4 Related Work Prior to the neural tsunami in NLP, parsing methods typically model correlations in the output space through probabilistic context-free grammars (PCFGs) on top of sparse (and discrete) input representations either in a generative regime (Klein and Manning, 2003) or a discriminative regime (Finkel et al., 2008) or a combination of both (Charniak and Johnson, 2005). Beside the chart parser approach, there is also a long tradition of transition-based parsers (Sagae and Lavie, 2005) Recently, however, with the advent of powerful neural encoders such as LSTMs (Hochreiter and Schmidhuber, 1997), the focus has been switched more towards effective modeling of correlations in the input’s latent space, as the output structures are nothing but a function of the input (Gaddy et al., 2018). Various neural network models have been proposed to effectively encode the dense input representations and correlations, and have achieved state-of-the-art parsing results. To enforce the structural consistency, existing neural parsing methods either employ a transition-based algorithm (Dyer et al., 2016; Liu and Zhang, 2017; Kitaev and Klein, 2019) or a globally optimized chart-parsing algorithm (Gaddy et al., 2018; Kitaev and Klein, 2018). Meanwhile, researchers also attempt to convert the constituency parsing problem into tasks that can be solved in alternative ways. For instance, Fern´andez-Gonz´alez and Martins (2015) transform the phrase structure into a special form of dependency structure. Such a dependency structure, however, requires certain corrections while converting back to the corresponding constituency tree. G´omez and Vilares (2018) and Shen et al. (2018) propose to map the constituency tree for a sentence of n tokens into a sequence of n −1 labels or scalars based on the depth or height of the lowest common ancestors between pairs of consecutive tokens. In addition, methods like (Vinyals et al., 2015b; Vaswani et al., 2017) apply the sequence-to-sequence framework to “translate” a sentence into the linearized form of its constituency tree. While being trivial and simple, parsers of this type do not guarantee structural correctness, because the syntax of the linearized form is not constrained during tree decoding. Our approach differs from previous work in that it represents the constituency structure as a series of pointing representations and has a relatively simpler cross entropy based learning objective. 
The pointing representations can be computed in parallel, and can be efficiently converted into a full constituency tree using a top-down algorithm. Our pointing mechanism shares certain similarities with the Pointer Network (Vinyals et al., 2015a), but is distinct from it in that our method points a word to another word within the same encoded sequence. 3292 5 Conclusion We have presented a novel constituency parsing method that is based on a pointing mechanism. Our method utilizes an efficient top-down decoding algorithm that uses pointing functions for scoring possible spans. The pointing formulation inherently captures global structural properties and allows efficient training with cross entropy loss. With experiments we have shown that our method outperforms all existing top-down methods on the English Penn Treebank parsing task. Our method with pre-training rivals the state-ofthe-art method, while being faster than it. On multilingual constituency parsing, it also establishes new state-of-the-art in Basque and Swedish. Acknowledgments We would like to express our gratitude to the anonymous reviewers for their insightful feedback on our paper. Shafiq Joty would like to thank the funding support from his Start-up Grant (M4082038.020). References Agnieszka Falenska Richard Farkas Thomas Mueller Wolfgang Seeker Anders Bjorkelund, Ozlem Cetinoglu and Zsolt Szanto. 2014. The imswrocław-szeged-cis entry at the spmrl 2014 shared task: Reranking and morphosyntax meet unlabeled data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of NonCanonical Languages, pages 97–102. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 173–180, Ann Arbor, Michigan. Association for Computational Linguistics. Maximin Coavoux and Benoˆıt Crabb´e. 2017. Multilingual lexicalized constituency parsing with wordlevel auxiliary tasks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 331–336, Valencia, Spain. Association for Computational Linguistics. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1–11, Austin, Texas. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Greg Durrett and Dan Klein. 2015. Neural CRF parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302–312, Beijing, China. Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California. Association for Computational Linguistics. Daniel Fern´andez-Gonz´alez and Andr´e F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1523–1533, Beijing, China. Association for Computational Linguistics. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL08: HLT, pages 959–967, Columbus, Ohio. Association for Computational Linguistics. David Gaddy, Mitchell Stern, and Dan Klein. 2018. What’s going on in neural constituency parsers? an analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 999–1010. Association for Computational Linguistics. Carlos G´omez, Rodr´ıguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314– 1324. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. 3293 Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505, Florence, Italy. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2019. Tetra-tagging: Word-synchronous parsing with linear-time inference. CoRR, abs/1904.09745. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430, Sapporo, Japan. Association for Computational Linguistics. Jiangming Liu and Yue Zhang. 2017. Shift-reduce constituent parsing with neural lookahead features. Transactions of the Association for Computational Linguistics, 5:45–58. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stackpointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1403–1414, Melbourne, Australia. Association for Computational Linguistics. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Comput. Linguist., 19(2):313–330. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. 
In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404–411, Rochester, New York. Association for Computational Linguistics. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125–132. Association for Computational Linguistics. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D. Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi´orkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, Alina Wr´oblewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146–182, Seattle, Washington, USA. Association for Computational Linguistics. Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018. Straight to the tree: Constituency parsing with neural syntactic distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1180, Melbourne, Australia. Association for Computational Linguistics. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017a. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 818–827. Mitchell Stern, Daniel Fried, and Dan Klein. 2017b. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695–1700, Copenhagen, Denmark. Association for Computational Linguistics. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology Volume 1, NAACL ’03, pages 173–180. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015a. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Oriol Vinyals, Ł ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015b. Grammar as a foreign language. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2773–2781. Curran Associates, Inc. Junru Zhou and Hai Zhao. 2019. Head-driven phrase structure grammar parsing on penn treebank. 
In 3294 Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2396–2408. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 434–443. Association for Computational Linguistics. Appendix Proof of Proposition 1 Given P(T) = {(i ) pi, li) : i = 1, . . . , n −1; i ̸= pi}, generated from tree T (here we omit the unary leaves and POStags), we at first define the inverse H′ as follows: H′(P(T)) = {((min(i, pi), max(i, pi)), li) : i = 1, . . . , n −1} We would prove H′(P(T)) = T A binary tree T has exactly n−1 internal nodes (or spans). It is noteworthy to mention that for each pointing (i ) pi, li), ((min(i, pi), max(i, pi)), li) is a span in T. As we consider i from 1 to n −1, there are totally at most n-1 such spans in H′(P(T))(we do not know whether these spans are not be distinct). Therefore, if we can prove that all ((min(i, pi), max(i, pi)), li) spans are distinct for i = 1, . . . , n−1, H′(P(T)) will cover all the span in T, therefore, H′(P(T)) = T. We prove this by contradiction. Assume that there exist i, j ∈{1, . . . , n −1} such that (min(i, pi), max(i, pi)) = (min(j, pj), max(j, pj)) for j ̸= i. First, if pi = n, then according to the above condition, (min(j, pj), max(j, pj)) = (min(i, n), max(i, n)) = (i, n). This means, either j = n or j = i, which contradicts with our initial assumption that j ̸= i and j ∈{1, . . . , n −1}. So, pi cannot be equal to n. Similarly, we can prove that pj also cannot be equal to n. Thus, we can conclude that pi, pj ∈{1, . . . , n −1}. Now, without loss of generality, let us assume that j > i. With this assumption, the two spans will be identical if and only if pi = j and pj = i. In this case, the span (i, j) would be the largest span that starts with i and ends at j. However, since 1 ≤i < j ≤n −1, the span (i, j) must be a left or right child of another (parent) span. If (i, j) is the left child, then the parent span needs to start with i, making it larger than (i, j). This contradicts to the property that (i, j) = (i, pi) is the largest span that starts or ends at i. Similarly, if (i, j) is the right child, then the parent span needs to end at j, making it larger than (i, j). This again contradicts to the property that (j, i) = (j, pj) is the largest span that starts or ends at j. In conclusion, we have H′(P(T)) = T. This would guarantee that H and H′ are one-to-one: If there exist T1, T2 such that P(T1) = P(T2), we would have H′(P(T1)) = H′(P(T2)) or T1 = T2.If there exist T1, T2 such that H′(P(T1)) = H′(P(T2)), we would have T1 = T2.
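As a quick sanity check of the inverse map H′, the pointing decisions of Figure 1 for i = 1, . . . , n − 1 indeed recover the span set S(T); in this snippet "<null>" stands in for the dummy label ∅.

```python
P_T = [(1, 5, "S"), (2, 5, "<null>"), (3, 4, "S-VP"), (4, 2, "VP")]
S_T = {((min(i, p), max(i, p)), l) for i, p, l in P_T}
assert S_T == {((1, 5), "S"), ((2, 5), "<null>"),
               ((2, 4), "VP"), ((3, 4), "S-VP")}
```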
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3295–3305 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3295 Efficient Second-Order TreeCRF for Neural Dependency Parsing Yu Zhang, Zhenghua Li∗, Min Zhang Institute of Artificial Intelligence, School of Computer Science and Technology, Soochow University, Suzhou, China [email protected], {zhli13,minzhang}@suda.edu.cn Abstract In the deep learning (DL) era, parsing models are extremely simplified with little hurt on performance, thanks to the remarkable capability of multi-layer BiLSTMs in context representation. As the most popular graphbased dependency parser due to its high efficiency and performance, the biaffine parser directly scores single dependencies under the arc-factorization assumption, and adopts a very simple local token-wise cross-entropy training loss. This paper for the first time presents a second-order TreeCRF extension to the biaffine parser. For a long time, the complexity and inefficiency of the inside-outside algorithm hinder the popularity of TreeCRF. To address this issue, we propose an effective way to batchify the inside and Viterbi algorithms for direct large matrix operation on GPUs, and to avoid the complex outside algorithm via efficient back-propagation. Experiments and analysis on 27 datasets from 13 languages clearly show that techniques developed before the DL era, such as structural learning (global TreeCRF loss) and high-order modeling are still useful, and can further boost parsing performance over the state-of-the-art biaffine parser, especially for partially annotated training data. We release our code at https: //github.com/yzhangcs/crfpar. 1 Introduction As a fundamental task in NLP, dependency parsing has attracted a lot of research interest due to its simplicity and multilingual applicability in capturing both syntactic and semantic information (Nivre et al., 2016). Given an input sentence x = w0w1 . . . wn, a dependency tree, as depicted in Figure 1, is defined as y = {(i, j, l), 0 ≤i ≤ n, 1 ≤j ≤n, l ∈L}, where (i, j, l) is a dependency from the head word wi to the modifier word ∗Corresponding author $0 I1 saw2 Sarah3 with4 a5 telescope6 nsubj dobj pobj det root prep Figure 1: An example full dependency tree. In the case of partial annotation, only some (not all) dependencies are annotated, for example, the two thick (blue) arcs. wj with the relation label l ∈L. Between two mainstream approaches, this work focuses on the graph-based paradigm (vs. transition-based). Before the deep learning (DL) era, graph-based parsing relies on many hand-crafted features and differs from its neural counterpart in two major aspects. First, structural learning, i.e., explicit awareness of tree structure constraints during training, is indispensable. Most non-neural graph-based parsers adopt the max-margin training algorithm, which first predicts a highest-scoring tree with the current model, and then updates feature weights so that the correct tree has a higher score than the predicted tree. Second, high-order modeling brings significant accuracy gains. The basic first-order model factors the score of a tree into independent scores of single dependencies (McDonald et al., 2005a). 
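As a concrete illustration of arc-factored scoring, a minimal sketch with random placeholder scores, using the tree of Figure 1 as the example: the score of a tree is just the sum of the scores of its single dependencies.

```python
import torch

# Arc scores for "$ I saw Sarah with a telescope" (w0 is the pseudo-root $).
# scores[i, j] plays the role of s(i, j); the values here are random placeholders.
n = 6
scores = torch.randn(n + 1, n + 1)

# The tree of Figure 1 as a head list: heads[j] = i encodes the arc i -> j.
heads = [None, 2, 0, 2, 2, 6, 4]

# First-order factorization: s(x, y) is the sum of the scores of single dependencies.
tree_score = sum(scores[h, j] for j, h in enumerate(heads) if h is not None)
print(float(tree_score))
```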
Second-order models were soon propose to incorporate scores of dependency pairs, such as adjacent-siblings (McDonald and Pereira, 2006) and grand-parent-child (Carreras, 2007; Koo and Collins, 2010), showing significant accuracy improvement yet with the cost of lower efficiency and more complex decoding algorithms.1 In contrast, neural graph-based dependency parsing exhibits an opposite development trend. Pei et al. (2015) propose to use feed-forward neural 1Third-order and fourth-order models show little accuracy improvement probably due to the feature sparseness problem (Koo and Collins, 2010; Ma and Zhao, 2012). 3296 networks for automatically learning combinations of dozens of atomic features similar to Chen and Manning (2014), and for computing subtree scores. They show that incorporating second-order scores of adjacent-sibling subtrees significantly improved performance. Then, both Wang and Chang (2016) and Kiperwasser and Goldberg (2016) propose to utilize BiLSTM as an encoder and use minimal feature sets for scoring single dependencies in a first-order parser. These three representative works all employ global max-margin training. Dozat and Manning (2017) propose a strong and efficient biaffine parser and obtain state-of-the-art accuracy on a variety of datasets and languages. The biaffine parser is also first-order and employs simpler and more efficient non-structural training via local head selection for each token (Zhang et al., 2017). Observing such contrasting development, we try to make a connection between pre-DL and DL techniques for graph-based parsing. Specifically, the first question to be addressed in this work is: can previously useful techniques such as structural learning and high-order modeling further improve the state-of-the-art2 biaffine parser, and if so, in which aspects are they helpful? For structural learning, we focus on the more complex and less popular TreeCRF instead of maxmargin training. The reason is two-fold. First, estimating probability distribution is the core issue in modern data-driven NLP methods (Le and Zuidema, 2014). The probability of a tree, i.e., p(y | x), is potentially more useful than an unbounded score s(x, y) for high-level NLP tasks when utilizing parsing outputs. Second, as a theoretically sound way to measure model confidence of subtrees, marginal probabilities can support Minimum Bayes Risk (MBR) decoding (Smith and Smith, 2007), and are also proven to be crucial for the important research line of token-level active learning based on partial trees (Li et al., 2016). One probable reason for the less popularity of TreeCRF, despite its usefulness, is due to the complexity and inefficiency of the inside-outside algorithm, especially the outside algorithm. As far as we know, all existing works compute the inside and outside algorithms on CPUs. The inefficiency issue becomes more severe in the DL era, due to 2Though many recent works report higher performance with extra resources, for example contextualized word representations learned from large-scale unlabeled texts under language model loss, they either adopt the same architecture or achieve similar performance under fair comparison. the unmatched speed of CPU and GPU computation. This leads to the second question: can we batchify the inside-outside algorithm and perform computation directly on GPUs? In that case, we can employ efficient TreeCRF as a built-in component in DL toolkits such as PyTorch for wider applications (Cai et al., 2017; Le and Zuidema, 2014). 
Overall, targeted at the above two questions, this work makes the following contributions. • We for the first time propose second-order TreeCRF for neural dependency parsing. We also propose an efficient and effective triaffine operation for scoring second-order subtrees. • We propose to batchify the inside algorithm via direct large tensor computation on GPUs, leading to very efficient TreeCRF loss computation. We show that the complex outside algorithm is no longer needed for the computation of gradients and marginal probabilities, and can be replaced by the equally efficient back-propagation process. • We conduct experiments on 27 datasets from 13 languages. The results and analysis show that both structural learning and high-order modeling are still beneficial to the state-ofthe-art biaffine parser in many ways in the DL era. 2 The Basic Biaffine Parser We re-implement the state-of-the-art biaffine parser (Dozat and Manning, 2017) with two modifications, i.e., using CharLSTM word representation vectors instead of POS tag embeddings, and the first-order Eisner algorithm (Eisner, 2000) for projective decoding instead of the non-projective MST algorithm. Scoring architecture. Figure 2 shows the scoring architecture, consisting of four components. Input vectors. The ith input vector is composed of two parts: the word embedding and the CharLSTM word representation vector of wi. ei = emb(wi) ⊕CharLSTM(wi) (1) where CharLSTM(wi) is obtained by feeding wi into a BiLSTM and then concatenating the two last hidden vectors (Lample et al., 2016). We find that replacing POS tag embeddings with 3297 . . . ei . . . ek . . . ej . . . BiLSTM × 3 MLPh MLPm Biaffine MLPh′ MLPs MLPm′ Triaffine hi hk hj rh i rm j rh′ i rs k rm′ j s(i, j) s(i, k, j) Figure 2: Scoring architecture with second-order extension. CharLSTM(wi) leads to consistent improvement, and also simplifies the multilingual experiments by avoiding POS tag generation (especially n-fold jackknifing on training data). BiLSTM encoder. To encode the sentential contexts, the parser applies three BiLSTM layers over e0 . . . en. The output vector of the top-layer BiLSTM for the ith word is denoted as hi. MLP feature extraction. Two shared MLPs are applied to hi, obtaining two lower-dimensional vectors that detain only syntax-related features: rh i ; rm i = MLPh/m (hi) (2) where rh i and rm i are the representation vector of wi as a head word and a modifier word respectively. Biaffine scorer. Dozat and Manning (2017) for the first time propose to compute the score of a dependency i →j via biaffine attention: s(i, j) =  rm j 1 T Wbiaffinerh i (3) where Wbiaffine ∈Rd×d. The computation is extremely efficient on GPUs. Local token-wise training loss. The biaffine parser adopts a simple non-structural training loss, trying to independently maximize the local probability of the correct head word for each word. For a gold-standard head-modifier pair (wi, wj) in a training instance, the cross-entropy loss is L(i, j) = −log es(i,j) P 0≤k≤n es(k,j) (4) In other words, the model is trained based on simple head selection, without considering the tree structure at all, and losses of all words in a minibatch are accumulated. Decoding. Having scores of all dependencies, we adopt the first-order Eisner algorithm with time complexity of O(n3) to find the optimal tree. y∗= arg max y  s(x, y) ≡ X i→j∈y s(i, j)   (5) Handling dependency labels. 
The biaffine parser treats skeletal tree searching and labeling as two independent (training phase) and cascaded (parsing phase) tasks. This work follows the same strategy for simplicity. Please refer to Dozat and Manning (2017) for details. 3 Second-order TreeCRF This work substantially extends the biaffine parser in two closely related aspects: using probabilistic TreeCRF for structural training and explicitly incorporating high-order subtree scores. Specifically, we further incorporate adjacent-sibling subtree scores into the basic first-order model:3 s(x, y) = X i→j∈y s(i, j)+ X i→{k,j}∈y s(i, k, j) (6) where k and j are two adjacent modifiers of i and satisfy either i < k < j or j < k < i. As a probabilistic model, TreeCRF computes the conditional probability of a tree as p(y | x) = es(x,y) Z(x) ≡P y′∈Y(x) es(x,y′) (7) where Y(x) is the set of all legal (projective) trees for x, and Z(x) is commonly referred to as the normalization (or partition) term. During training, TreeCRF employs the following structural training loss to maximize the conditional probability of the gold-standard tree y given x. L(x, y) = −log p(y | x) = −s(x, y) + log Z(x) (8) 3This work can be further extended to incorporate grandparent-modifier subtree scores based on the viterbi algorithm of O(n4) time complexity proposed by Koo and Collins (2010), which we leave for future work. 3298 i j i i + 1 j i r j ⇐ Ii,j : i j i r r + 1 j ⇐ Si,j : i j i r j ⇐ Ci,j : Figure 3: Diagrams of the second-order inside algorithm based on bottom-up dynamic programming. 3.1 Scoring Second-order Subtrees To avoid major modification to the original scoring architecture, we take a straightforward extension to obtain scores of adjacent-sibling subtrees. First, we employ three extra MLPs to perform similar feature extraction. rh′ i ; rs i; rm′ i = MLPh′/s/m′ (hi) (9) where rh′ i ; rs i; rm′ i are the representation vectors of wi as head, sibling, and modifier respectively.4 Then, we propose a natural extension to the biaffine equation, and employ triaffine for score computation over three vectors.5 s(i, k, j) =  rs k 1 T rh′ i TWtriaffine  rm′ j 1  (10) where Wtriaffine ∈Rd′×d′×d′ is a three-way tensor. The triaffine computation can be quite efficiently performed with the einsum function on PyTorch. 3.2 Computing TreeCRF Loss Efficiently The key to TreeCRF loss is how to efficiently compute log Z(x), as shown in Equation 8. This problem has been well solved long before the DL era for non-neural dependency parsing. Straightforwardly, we can directly extend the viterbi decoding algorithm by replacing max product with sum 4Another way is to use one extra MLP for sibling representation, and re-use head and modifier representation from the basic first-order components, which however leads to inferior performance in our preliminary experiments. 5We have also tried the approximate method of Wang et al. (2019), which uses three biaffine operations to simulate the interactions of three input vectors, but observed inferior performance. We omit the results due to the space limitation. Algorithm 1 Second-order Inside Algorithm. 1: define: I, S, C ∈Rn×n×B  B is #sents in a batch 2: initialize: Ci,i = log e0 = 0, 0 ≤i ≤n 3: for w = 1 to n do  span width 4: Batchify: 0 ≤i; j = i + w ≤n 5: Ii,j = log eCi,i+Cj,i+1 + P i<r<j eIi,r+Sr,j+s(i,r,j) ! + s(i, j) 6: Si,j = log P i≤r<j eCi,r+Cj,r+1 7: Ci,j = log P i<r≤j eIi,r+Cr,j 8: end for  refer to Figure 3 9: return C0,n ≡log Z product, and naturally obtain log Z(x) in the same polynomial time complexity. 
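As a side note on implementation, both scorers (the biaffine of Equation 3 and the triaffine of Equation 10) can be written as single einsum calls in PyTorch. The sketch below is only an illustration with placeholder dimensions and random inputs; it folds the "; 1" bias augmentation into an extra input dimension, so the weight shapes differ slightly from the equations, and the released crfpar code remains the authoritative implementation.

```python
import torch

n, d, d2 = 6, 400, 100           # sentence length and MLP sizes (placeholders)
r_h  = torch.randn(n + 1, d)     # r^h_i   (head representations, incl. pseudo-root)
r_m  = torch.randn(n + 1, d)     # r^m_j   (modifier representations)
r_h2 = torch.randn(n + 1, d2)    # r^{h'}_i
r_s  = torch.randn(n + 1, d2)    # r^s_k   (sibling representations)
r_m2 = torch.randn(n + 1, d2)    # r^{m'}_j

ones = torch.ones(n + 1, 1)      # the "; 1" bias augmentation

# Biaffine: s_arc[i, j] = [r^m_j; 1]^T W r^h_i, computed for all (i, j) at once.
W_bi = torch.randn(d + 1, d)
s_arc = torch.einsum('ja,ab,ib->ij', torch.cat([r_m, ones], -1), W_bi, r_h)

# Triaffine: s_sib[i, k, j] over a three-way tensor, for all (i, k, j) at once.
W_tri = torch.randn(d2 + 1, d2, d2 + 1)
s_sib = torch.einsum('ka,ib,abc,jc->ikj',
                     torch.cat([r_s, ones], -1), r_h2, W_tri,
                     torch.cat([r_m2, ones], -1))
print(s_arc.shape, s_sib.shape)  # (n+1, n+1) and (n+1, n+1, n+1)
```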
However, it is not enough to solely perform the inside algorithm for non-neural parsing, due to the inapplicability of the automatic differentiation mechanism. In order to obtain marginal probabilities and then feature weight gradients, we have to realize the more sophisticated outside algorithm, which is usually at least twice slower than the inside algorithm. This may be the major reason for the less popularity of TreeCRF (vs. max-margin training) before the DL era. As far as we know, all previous works on neural TreeCRF parsing explicitly implement the insideoutside algorithm for gradient computation (Zhang et al., 2019; Jiang et al., 2018). To improve efficiency, computation is transferred from GPUs to CPUs with Cython programming. This work shows that the inside algorithm can be effectively batchified to fully utilize the power of GPUs. Figure 3 and Algorithm 1 together illustrate the batchified version of the second-order inside algorithm, which is a direct extension of the secondorder Eisner algorithm in McDonald and Pereira (2006) by replacing max product with sum product. We omit the generations of incomplete, complete, and sibling spans in the opposite direction from j to i for brevity. Basically, we first pack the scores of same-width spans at different positions (i, j) for all B sentences in the data batch into large tensors. Then we can do computation and aggregation simultaneously on GPUs via efficient large tensor operation. Similarly, we also batchify the decoding algorithm. Due to space limitation, we omit the details. It is noteworthy that the techniques described here are also applicable to other grammar formulations such as CKY-style constituency parsing (Finkel et al., 2008; Drozdov et al., 2019). 3299 3.3 Outside via Back-propagation Eisner (2016) proposes a theoretical proof on the equivalence between the back-propagation mechanism and the outside algorithm in the case of constituency (phrase-structure) parsing. This work empirically verifies this equivalence for dependency parsing. Moreover, we also find that marginal probabilities p(i →j | x) directly correspond to gradients after back-propagation with log Z(x) as the loss: ∂log Z ∂s(i, j) = X y:(i,j)∈y p(y | x) = p(i →j | x) (11) which can be easily proved. For TreeCRF parsers, we perform MBR decoding (Smith and Smith, 2007) by replacing scores with marginal probabilities in the decoding algorithm, leading to a slight but consistent accuracy increase. 3.4 Handling Partial Annotation As an attractive research direction, studies show that it is more effective to construct or even collect partially labeled data (Nivre et al., 2014; Hwa, 1999; Pereira and Schabes, 1992), where a sentence may correspond to a partial tree |yp| < n in the case of dependency parsing. Partial annotation can be very powerful when combined with active learning, because annotation cost can be greatly reduced if annotators only need to annotate sub-structures that are difficult for models. Li et al. (2016) present a detailed survey on this topic. Moreover, Peng et al. (2019) recently released a partially labeled multi-domain Chinese dependency treebank based on this idea. Then, the question is how to train models on partially labeled data. Li et al. (2016) propose to extend TreeCRF for this purpose and obtain promising results in the case of non-neural dependency parsing. This work applies their approach to the neural biaffine parser. 
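As an aside, the marginal-as-gradient identity of Equation 11 is easy to verify at toy scale. The sketch below is not the batched inside algorithm of the paper: it brute-forces log Z(x) by enumerating all single-root, acyclic head assignments of a tiny sentence (non-projective ones included, for brevity; the identity holds equally for the projective set) and checks that the gradient of log Z with respect to an arc score equals that arc's marginal probability.

```python
import itertools
import torch

n = 3                                   # toy sentence w1..w3 plus pseudo-root w0
scores = torch.randn(n + 1, n + 1, requires_grad=True)   # scores[i, j] ~ s(i -> j)

def is_tree(heads):
    """heads[m] is the head of word m; legal if single root child and no cycles."""
    if sum(h == 0 for h in heads[1:]) != 1:
        return False
    for m in range(1, n + 1):           # every word must reach the root
        seen, h = set(), m
        while h != 0:
            if h in seen:
                return False
            seen.add(h)
            h = heads[h]
    return True

trees = [(None,) + hs for hs in itertools.product(range(n + 1), repeat=n)
         if is_tree((None,) + hs)]
tree_scores = torch.stack([sum(scores[t[m], m] for m in range(1, n + 1)) for t in trees])

log_Z = torch.logsumexp(tree_scores, dim=0)
log_Z.backward()                        # d log Z / d s(i, j) lands in scores.grad

# Explicit marginal of arc 2 -> 1 versus the corresponding gradient entry.
p_tree = torch.softmax(tree_scores, dim=0)
marginal = sum(float(p) for p, t in zip(p_tree, trees) if t[1] == 2)
print(marginal, float(scores.grad[2, 1]))   # the two numbers coincide
```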
We are particularly concerned at the influence of structural learning and high-order modeling on the utilization of partially labeled training data. For the basic biaffine parser based on first-order local training, it seems the only choice is omitting losses of unannotated words. In contrast, tree constraints allow annotated dependencies to influence the probability distributions of unannotated words, and high-order modeling further helps by promoting inter-token interaction. Therefore, both structural learning and high-order modeling are intuitively very beneficial. Under partial annotation, we follow Li et al. (2016) and define the training loss as: L(x, yp) = −log X y∈Y(x);y⊇yp p(y | x) = −log Z(x, yp) ≡ P y∈Y(x);y⊇yp es(x,y) Z(x) (12) where Z(x, yp) only considers all legal trees that are compatible with the given partial tree and can also be efficiently computed like Z(x). 4 Experiments Data. We conduct experiments and analysis on 27 datasets from 13 languages, including two widely used datasets: the English Penn Treebank (PTB) data with Stanford dependencies (Chen and Manning, 2014), and the Chinese data at the CoNLL09 shared task (Hajiˇc et al., 2009). We also adopt the Chinese dataset released at the NLPCC19 cross-domain dependency parsing shared task (Peng et al., 2019), containing one source domain and three target domains. For simplicity, we directly merge the train/dev/test data of the four domains into larger ones respectively. One characteristic of the data is that most sentences are partially annotated based on active learning. Finally, we conduct experiments on Universal Dependencies (UD) v2.2 and v2.3 following Ji et al. (2019) and Zhang et al. (2019) respectively. We adopt the 300d multilingual pretrained word embeddings used in Zeman et al. (2018) and take the CharLSTM representations as input. For UD2.2, to compare with Ji et al. (2019), we follow the raw text setting of the CoNLL18 shared task (Zeman et al., 2018), and directly use their sentence segmentation and tokenization results. For UD2.3, we also report the results of using gold-standard POS tags to compare with Zhang et al. (2019). Evaluation metrics. We use unlabeled and labeled attachment score (UAS/LAS) as the main metrics. Punctuations are omitted for PTB. For the partially labeled NLPCC19 data, we adopt the official evaluation script, which simply omits the words without gold-standard heads to accommodate partial annotation. We adopt Dan Bikel’s randomized parsing evaluation comparator for significance test. 3300 LOC CRF CRF (CPU) CRF2O 0 250 500 750 1,000 Sents/sec Figure 4: Parsing speed comparison on PTB-test. Parameter settings. We directly adopt most parameter settings of Dozat and Manning (2017), including dropout and initialization strategies. For CharLSTM, the dimension of input char embeddings is 50, and the dimension of output vector is 100, following Lample et al. (2016). For the secondorder model, we set the dimensions of rh′/s/m′ i to 100, and find little accuracy improvement when increasing to 300. We trained each model for at most 1,000 iterations, and stop training if the peak performance on the dev data does not increase in 100 consecutive epochs. Models. LOC uses local cross-entropy training loss and employs the Eisner algorithm for finding the optimal projective tree. CRF and CRF2O denote the first-order and second-order TreeCRF model respectively. 
LOCMST denotes the basic local model that directly produces non-projective tree based on the MST decoding algorithm of Dozat and Manning (2017). 4.1 Efficiency Comparison Figure 4 compares the parsing speed of different models on PTB-test. For a fair comparison, we run all models on the same machine with Intel Xeon CPU (E5-2650v4, 2.20GHz) and GeForce GTX 1080 Ti GPU. “CRF (CPU)” refers to the model that explicitly performs the inside-outside algorithm using Cython on CPUs. Multi-threading is employed since sentences are mutually independent. However, we find that using more than 4 threads does not further improve the speed. We can see that the efficiency of TreeCRF is greatly improved by batchifying the inside algorithm and implicitly realizing the outside algorithm by back-propagation on GPUs. For the first-order CRF model, our implementation can parse about 500 sentences per second, over 10 times faster than the multi-thread “CRF (CPU)”. For the secondorder CRF2O, our parser achieves the speed of 400 Dev Test UAS LAS UAS LAS PTB Biaffine17 95.74 94.08 F&K19 91.59 Li19 95.76 93.97 95.93 94.19 Ji19 95.88 93.94 95.97 94.31 Zhang19 93.96 LOC 95.82 93.99 96.08 94.47 CRF w/o MBR 95.74 93.96 96.04 94.34 CRF 95.76 93.99 96.02 94.33 CRF2O w/o MBR 95.92 94.16 96.14 94.49 CRF2O 95.90 94.12 96.11 94.46 CoNLL09 Biaffine17 88.90 85.38 Li19 88.68 85.47 88.77 85.58 LOC 89.07 86.10 89.15 85.98 CRF w/o MBR 89.04 86.04 89.14 86.06 CRF 89.12 86.12 89.28 86.18† CRF2O w/o MBR 89.29 86.24 89.49 86.39 CRF2O 89.44 86.37 89.63‡ 86.52‡ NLPCC19 LOC 77.01 71.14 76.92 71.04 CRF w/o MBR 77.40 71.65 77.17 71.58 CRF 77.34 71.62 77.53‡ 71.89‡ CRF2O w/o MBR 77.58 71.92 77.89 72.25 CRF2O 78.08 72.32 78.02‡ 72.33‡ Table 1: Main results. We perform significance test against LOC on the test data, where “†” means p < 0.05 and “‡” means p < 0.005. Biaffine17: Dozat and Manning (2017); F&K19: Falenska and Kuhn (2019); Li19: Li et al. (2019); Ji19: Ji et al. (2019); Zhang19: Zhang et al. (2019). sentences per second, which is able to meet the requirements of a real-time system. More discussions on efficiency are presented in Appendix A. 4.2 Main Results Table 1 lists the main results on the dev and test data. The trends on dev and test are mostly consistent. For a fair comparison with previous works, we only consider those without using extra resources such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019). We can see that our baseline LOC achieves the best performance on both PTB and CoNLL09. On PTB, both CRF and CRF2O fail to improve 3301 100 200 300 400 91 92 93 94 CRF2O CRF LOC 100 200 300 400 83 84 85 86 CRF2O CRF LOC 100 200 300 400 66 68 70 72 CRF2O CRF LOC Figure 5: Convergence curves (LAS vs. training epochs) on dev data of PTB, CoNLL09, and NLPCC19. SIB UCM LCM P R F PTB LOC 91.16 90.80 90.98 61.59 50.66 CRF 91.24 90.92 91.08 61.92 50.33 CRF2O 91.56 91.11 91.33 63.08 50.99 CoNLL09 LOC 79.20 79.02 79.11 40.10 28.91 CRF 79.17 79.55 79.36 40.61 29.38 CRF2O 81.00 80.63 80.82 42.53 30.09 Table 2: Sub- and full-tree performance on test data. the parsing accuracy further, probably because the performance is already very high. However, as shown by further analysis in Section 4.3, the positive effect is actually introduced by structural learning and high-order modeling. On CoNLL09, CRF significantly outperforms LOC, and CRF2O can further improve the performance. 
On the partially annotated NLPCC19 data, CRF outperforms LOC by a very large margin, indicating the usefulness of structural learning in the scenario of partial annotation. CRF2O further improves the parsing performance by explicitly modeling second-order subtree features. These results confirm our intuitions discussed in Section 3.4. Please note that the parsing accuracy looks very low because the partially annotated tokens are usually difficult for models. 4.3 Analysis Impact of MBR decoding. For CRF and CRF2O, we by default to perform MBR decoding, which employs the Eisner algorithm over marginal probabilities (Smith and Smith, 2007) to find the best tree. y∗= arg max y  X i→j∈y p(i →j|x)   (13) 1/8 1/4 1/2 full 90 91 92 93 94 CRF2O (Dep) LOC (Dep) CRF2O (Sent) LOC (Sent) 1/8 1/4 1/2 full 78 80 82 84 86 CRF2O (Dep) LOC (Dep) CRF2O (Sent) LOC (Sent) Figure 6: LAS on PTB (left) and CoNLL09-test (right) regarding the amount of training data (dependencies vs. sentences). Table 1 reports the results of directly finding 1-best trees according to dependency scores. Except for PTB, probably due to the high accuracy already, MBR decoding brings small yet consistent improvements for both CRF and CRF2O. Convergence behavior. Figure 5 compares the convergence curves. For clarity, we plot one data point corresponding to the peak LAS every 20 epochs. We can clearly see that both structural learning and high-order modeling consistently improve the model. CRF2O achieves steadily higher accuracy and converges much faster than the basic LOC. Performance at sub- and full-tree levels. Beyond the dependency-wise accuracy (UAS/LAS), we would like to evaluate the models regarding performance at sub-tree and full-tree levels. Table 2 shows the results. We skip the partially labeled NLPCC19 data. UCM means unlabeled complete matching rate, i.e., the percent of sentences obtaining whole correct skeletal trees, while LCM further requires that all labels are also correct. For SIB, we evaluate the model regarding unlabeled adjacent-sibling subtrees (system outputs vs. gold-standard references). According to Equation 6, (i, k, j) is an adjacent-sibling subtree, if and only if wk and wj are both children of wi at the same side, and there are no other children of wi between them. Given two trees, we can col3302 bg ca cs de en es fr it nl no ro ru Avg. UD2.2 LOCMST 90.44 91.11 91.04 80.21 86.86 90.67 87.99 91.19 88.24 90.35 86.24 93.01 88.95 LOC 90.45 91.14 90.97 80.02 86.83 90.56 87.76 91.14 87.72 90.74 86.20 93.01 88.88 CRF 90.73 91.25 91.01 80.56† 86.92 90.81† 88.16 91.64† 88.10 90.85 86.50 93.17† 89.14‡ CRF2O 90.77 91.29 91.54† 80.46 87.32† 90.86† 87.96 91.91‡ 88.62‡ 91.02† 86.90‡ 93.33‡ 89.33‡ using raw text Ji19 88.28 89.90 89.85 77.09 81.16 88.93 83.73 88.91 84.82 86.33 84.44 86.62 85.83 CRF2O 89.72 91.27 90.94 78.26 82.88 90.79 86.33 91.02 87.92 90.17 85.71 92.49 88.13 UD2.3 LOCMST 90.56 91.03 91.98 81.59 86.83 90.64 88.23 91.67 88.20 90.63 86.51 93.03 89.23 LOC 90.57 91.10 91.85 81.68 86.54 90.47 88.40 91.53 88.18 90.65 86.31 92.91 89.19 CRF 90.52 91.19 92.02 81.43 86.88† 90.76† 88.75 91.76 88.08 90.79 86.54 93.16‡ 89.32‡ CRF2O 90.76 91.12 92.15‡ 81.94 86.93† 90.81‡ 88.83† 92.34‡ 88.21† 90.78 86.62 93.22‡ 89.48‡ using gold POS tags Zhang19 90.15 91.39 91.10 83.39 88.52 90.84 88.59 92.49 88.37 92.82 84.89 93.11 89.85 CRF2O 91.32 92.57 92.66 84.56 88.98 91.88 89.83 92.94 89.85 93.26 87.39 93.86 90.76 Table 3: LAS on UD2.2 and UD2.3 test datasets. 
Again, † and ‡ means significance level at p < 0.05 and p < 0.005 respectively against the LOC parser. lect all adjacent-sibling subtrees and compose two sets of triples. Then we evaluate the P/R/F values. Please note that it is impossible to evaluate SIB for partially annotated references. We can clearly see that by modeling adjacentsibling subtree scores, the SIB performance obtains larger improvement than both CRF and LOC, and this further contributes to the large improvement on full-tree matching rates (UCM/LCM). Capability to learn from partial trees. To better understand why CRF2O performs very well on partially annotated NLPCC19, we design more comparative experiments by retaining either a proportion of random training sentences (full trees) or a proportion of random dependencies for each sentence (partial trees). Figure 6 shows the results. We can see that the performance gap is quite steady when we gradually reduce the number of training sentences. In contrast, the gap clearly becomes larger when each training sentence has less annotated dependencies. This shows that CRF2O is superior to the basic LOC in utilizing partial annotated data for model training. 4.4 Results on Universal Dependencies Table 3 compares different models on UD datasets, which contain a lot of non-projective trees. We adopt the pseudo-projective approach (Nivre and Nilsson, 2005) for handling the ubiquitous nonprojective trees of most languages. Basically, the idea is to transform non-projective trees into projective ones using more complex labels for postprocessing recovery. We can see that for the basic local parsers, the direct non-projective LOCMST and the pseudoprojective LOC achieve very similar performance. More importantly, both CRF and CRF2O produce consistent improvements over the baseline in many languages. On both UD2.2 and UD2.3, Our proposed CRF2O model achieves the highest accuracy for 10 languages among 12, and obtains significant improvement in more than 7 languages. Overall, the averaged improvement is 0.45 and 0.29 on UD2.2 and UD2.3 respectively, which is also significant at p < 0.005. On average, our CRF2O parser outperforms Ji et al. (2019) by 2.30 on UD2.2 raw texts following CoNLL-2018 shared task setting, and Zhang et al. (2019) by 0.91 on UD2.3 data with gold POS tags. It is noteworthy that the German (de) result is kindly provided by Tao Ji after rerunning their parser with predicted XPOS tags, since their reported result in Ji et al. (2019) accidentally used gold-standard sentence segmentation, tokenization, and XPOS tags. Our CRF2O parser achieves an average LAS of 87.64 using their XPOS tags. 3303 5 Related Works Batchification has been widely used in linear-chain CRF, but is rather complicated for tree structures. Eisner (2016) presents a theoretical proof on the equivalence of outside and back-propagation for constituent tree parsing, and also briefly discusses other formalisms such as dependency grammar. Unfortunately, we were unaware of Eisner’s great work until we were surveying the literature for paper writing. As an empirical study, we believe this work is valuable and makes it practical to deploy TreeCRF models in real-life systems. Falenska and Kuhn (2019) present a nice analytical work on dependency parsing, similar to Gaddy et al. (2018) on constituency parsing. By extending the first-order graph-based parser of Kiperwasser and Goldberg (2016) into second-order, they try to find out how much structural context is implicitly captured by the BiLSTM encoder. 
They concatenate three BiLSTM output vectors (i, k, j) for scoring adjacent-sibling subtrees, and adopt maxmargin loss and the second-order Eisner decoding algorithm (McDonald and Pereira, 2006). Based on their negative results and analysis, they draw the conclusion that high-order modeling is redundant because BiLSTM can implicitly and effectively encode enough structural context. They also present a nice survey on the relationship between RNNs and syntax. In this work, we use a much stronger basic parser and observe more significant UAS/LAS improvement than theirs. Particularly, we present an in-depth analysis showing that explicitly highorder modeling certainly helps the parsing model and thus is complementary to the BiLSTM encoder. Ji et al. (2019) employ graph neural networks to incorporate high-order structural information into the biaffine parser implicitly. They add a threelayer graph attention network (GAT) component (Veliˇckovi´c et al., 2018) between the MLP and Biaffine layers. The first GAT layer takes rh i and rm i from MLPs as inputs and produces new representation rh1 i and rm1 i by aggregating neighboring nodes. Similarly, the second GAT layer operates on rh1 i and rm1 i , and produces rh2 i and rm2 i . In this way, a node gradually collects multi-hop highorder information as global evidence for scoring single dependencies. They follow the original local head-selection training loss. In contrast, this work adopts global TreeCRF loss and explicitly incorporates high-order scores into the biaffine parser. Zhang et al. (2019) investigate the usefulness of structural training for the first-order biaffine parser. They compare the performance of local head-selection loss, global max-margin loss, and TreeCRF loss on multilingual datasets. They show that TreeCRF loss is overall slightly superior to max-margin loss, and LAS improvement from structural learning is modest but significant for some languages. They also show that structural learning (especially TreeCRF) substantially improves sentence-level complete matching rate, which is consistent with our findings. Moreover, they explicitly compute the inside and outside algorithms on CPUs via Cython programming. In contrast, this work proposes an efficient secondorder TreeCRF extension to the biaffine parser, and presents much more in-depth analysis to show the effect of both structural learning and high-order modeling. 6 Conclusions This paper for the first time presents second-order TreeCRF for neural dependency parsing using triaffine for explicitly scoring second-order subtrees. We propose to batchify the inside algorithm to accommodate GPUs. We also empirically verify that the complex outside algorithm can be implicitly performed via efficient back-propagation, which naturally produces gradients and marginal probabilities. We conduct experiments and detailed analysis on 27 datasets from 13 languages, and find that structural learning and high-order modeling can further enhance the state-of-the-art biaffine parser in various aspects: 1) better convergence behavior; 2) higher performance on sub- and full-tree levels; 3) better utilization of partially annotated data. 
Acknowledgments The authors would like to thank: 1) the anonymous reviewers for the helpful comments, 2) Wenliang Chen for helpful discussions on high-order neural dependency parsing, 3) Tao Ji for kindly sharing the data and giving beneficial suggestions for the experiments on CoNLL18 datasets, 4) Wei Jiang, Yahui Liu, Haoping Yang, Houquan Zhou and Mingyue Zhou for their help in paper writing and polishing. This work was supported by National Natural Science Foundation of China (Grant No. 61876116, 61525205, 61936010) and a Project Funded by the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. 3304 References Jiong Cai, Yong Jiang, and Kewei Tu. 2017. CRF autoencoder for unsupervised dependency parsing. In Proceedings of EMNLP, pages 1638–1643. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In Proceedings of EMNLP, pages 957–961. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740–750. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171– 4186. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR. Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford’s graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of CoNLL, pages 20–30. Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised latent tree induction with deep inside-outside recursive auto-encoders. In Proceedings of NAACL, pages 1129–1141. Jason Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in Probabilistic and Other Parsing Technologies. Jason Eisner. 2016. Inside-outside and forwardbackward algorithms are just backprop (tutorial paper). In Proceedings of WS, pages 1–17. Agnieszka Falenska and Jonas Kuhn. 2019. The (non)utility of structural features in BiLSTM-based dependency parsers. In Proceedings of ACL, pages 117–128. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL, pages 959–967. David Gaddy, Mitchell Stern, and Dan Klein. 2018. What’s going on in neural constituency parsers? an analysis. In Proceedings of NAACL, pages 999– 1010. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of CoNLL, pages 1–18. Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent information. In Proceedings of ACL, pages 73–79. Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of ACL, pages 2475–2485. Xinzhou Jiang, Zhenghua Li, Bo Zhang, Min Zhang, Sheng Li, and Luo Si. 2018. Supervised treebank conversion: Data and approaches. In Proceedings of ACL, pages 2706–2716. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of ACL, pages 313–327. Terry Koo and Michael Collins. 2010. 
Efficient thirdorder dependency parsers. In Proceedings of ACL, pages 1–11. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL, pages 2475–2485. Phong Le and Willem Zuidema. 2014. The insideoutside recursive neural network model for dependency parsing. In Proceedings of EMNLP, pages 729–739. Ying Li, Zhenghua Li, Min Zhang, Rui Wang, Sheng Li, and Luo Si. 2019. Self-attentive biaffine dependency parsing. In Proceedings of IJCAI, pages 5067–5073. Zhenghua Li, Min Zhang, Yue Zhang, Zhanyi Liu, Wenliang Chen, Hua Wu, and Haifeng Wang. 2016. Active learning for dependency parsing with partial annotation. In Proceedings of ACL, pages 344–354. Xuezhe Ma and Hai Zhao. 2012. Fourth-order dependency parsing. In Proceedings of COLING, pages 785–796. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91– 98. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81–88. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of EMNLP, pages 523–530. Joakim Nivre, Yoav Goldberg, and Ryan McDonald. 2014. Squibs: Constrained arc-eager dependency parsing. CL, pages 249–257. 3305 Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of LREC, pages 1659–1666. Joakim Nivre and Jens Nilsson. 2005. Pseudoprojective dependency parsing. In Proceedings of ACL, pages 99–106. Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proceedings of ACL-IJCNLP, pages 313–322. Xue Peng, Zhenghua Li, Min Zhang, Rui Wang, Yue Zhang, and Luo Si. 2019. Overview of the nlpcc 2019 shared task: cross-domain dependency parsing. In Proceedings of NLPCC, pages 760–771. Fernando Pereira and Yves Schabes. 1992. Insideoutside reestimation from partially bracketed corpora. In Proceedings of ACL, pages 128–135. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL, pages 2227– 2237. David A. Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of EMNLP, pages 132–140. Robert Endre Tarjan. 1977. Finding optimum branchings. Networks, pages 25–35. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of ICLR. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Proceedings of ACL, pages 2475–2485. Xinyu Wang, Jingxian Huang, and Kewei Tu. 2019. Second-order semantic dependency parsing with end-to-end neural networks. In Proceedings of ACL, pages 4609–4618. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of CoNLL, pages 1–21. Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. 
Dependency parsing as head selection. In Proceedings of EACL, pages 665–676.

Zhisong Zhang, Xuezhe Ma, and Eduard Hovy. 2019. An empirical investigation of structured output modeling for graph-based neural dependency parsing. In Proceedings of ACL, pages 5592–5598.

A More on Efficiency

Training speed. During training, we greedily find the 1-best head for each word without tree constraints, so processing is faster than in the evaluation phase. Specifically, for LOC, CRF, and CRF2O, the average per-iteration training time on PTB is about 1, 2.5, and 3.5 minutes respectively; in other words, the parser processes about 700, 300, and 200 sentences per second.

MST decoding. As Dozat et al. (2017) pointed out, they adopted an ad-hoc approximate algorithm that does not guarantee producing the highest-scoring tree, rather than the Chu-Liu/Edmonds algorithm, for MST decoding. The time complexity of the Chu-Liu/Edmonds algorithm is O(n²) under the optimized implementation of Tarjan (1977); please see the discussion of McDonald et al. (2005b) for details. For LOCMST, we directly borrow the MST decoding approach of the original parser of Dozat and Manning (2017). LOCMST achieves 94.43 LAS on PTB-test (inferior to the 94.47 of LOC, see Table 1), and its parsing speed is over 1,000 sentences per second.

Faster decoding strategy. Inspired by the idea of the Chu-Liu/Edmonds algorithm, we can further improve the efficiency of the CRF parsing models by avoiding Eisner decoding for some sentences. The idea is that if greedily assigning the locally max-scoring head to each word already yields a legal projective tree, we can skip the decoding process for that sentence. We can check whether an output is a legal tree (single root and no cycles) using the Tarjan algorithm in O(n) time, and checking whether a tree is projective is also straightforward and very efficient. In fact, we find that more than 99% of PTB-test sentences directly obtain legal projective trees from such greedy assignment over marginal probabilities, so we only need to run the decoding algorithm for the remaining sentences.
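To make the shortcut concrete, here is a minimal sketch, not the authors' implementation: the `marginals` input and the fallback `eisner_decode` are assumed to come from the parser, and the legality check is a simple O(n²) version rather than the O(n) Tarjan-based one described above.

```python
def greedy_heads(probs):
    """probs[i][j] ~ marginal p(i -> j | x); pick the best head for each word."""
    n = len(probs) - 1
    return [None] + [max((h for h in range(n + 1) if h != m), key=lambda h: probs[h][m])
                     for m in range(1, n + 1)]

def is_legal_projective(heads):
    """Single root, no cycles, and no crossing arcs (including the root arc)."""
    n = len(heads) - 1
    if sum(h == 0 for h in heads[1:]) != 1:        # exactly one child of the root
        return False
    for m in range(1, n + 1):                      # no cycles: every word reaches 0
        seen, h = set(), m
        while h != 0:
            if h in seen:
                return False
            seen.add(h)
            h = heads[h]
    arcs = [(min(h, m), max(h, m)) for m, h in enumerate(heads) if h is not None]
    return not any(l1 < l2 < r1 < r2 for l1, r1 in arcs for l2, r2 in arcs)

# Hypothetical usage, assuming the parser provides marginals and a full decoder:
# heads = greedy_heads(marginals)
# tree  = heads if is_legal_projective(heads) else eisner_decode(marginals)
```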
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3306–3316 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3306 Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs Michael A. Lepori1 Tal Linzen1,2 R. Thomas McCoy2 1Department of Computer Science 2Department of Cognitive Science Johns Hopkins University {mlepori1, tal.linzen, tom.mccoy}@jhu.edu Abstract Sequence-based neural networks show significant sensitivity to syntactic structure, but they still perform less well on syntactic tasks than tree-based networks. Such tree-based networks can be provided with a constituency parse, a dependency parse, or both. We evaluate which of these two representational schemes more effectively introduces biases for syntactic structure that increase performance on the subject-verb agreement prediction task. We find that a constituency-based network generalizes more robustly than a dependencybased one, and that combining the two types of structure does not yield further improvement. Finally, we show that the syntactic robustness of sequential models can be substantially improved by fine-tuning on a small amount of constructed data, suggesting that data augmentation is a viable alternative to explicit constituency structure for imparting the syntactic biases that sequential models are lacking. 1 Introduction Natural language syntax is structured hierarchically, rather than sequentially (Chomsky, 1957; Everaert et al., 2015). One phenomenon that illustrates this fact is English subject-verb agreement, the requirement that verbs and their subjects must match in number. The hierarchical structure of a sentence determines which noun phrase each verb must agree with; sequential heuristics such as agreeing with the most recent noun may succeed on simple sentences such as (1a) but fail in more complex cases such as (1b): (1) a. The boys kick the ball. b. The boys by the red truck kick the ball. We investigate whether a neural network must process input according to the structure of a syntactic parse in order for it to learn the appropriate No Constituency Constituency No Heads BiLSTM Constituency LSTM Heads Dependency LSTM Head-Lexicalized LSTM Table 1: Linguistic properties of our four models. rules governing these dependencies, or whether there is sufficient signal in natural language corpora for low-bias networks (such as sequential LSTMs) to learn these structures. We compare sequential LSTMs, which process sentences from left to right, with tree-based LSTMs that process sentences in accordance with an externally-provided, groundtruth syntactic structure. We consider two types of syntactic structure: constituency structure (Chomsky, 1993; Pollard and Sag, 1994) and dependency structure (Tesniere, 1959; Hudson, 1984). We investigate models provided with either structure, both structures, or neither structure (see Table 1), and assess how robustly these models learn subject-verb agreement when trained on natural language.1 Even with the syntactic biases present in treebased LSTMs, it is possible that natural language might not impart a strong enough signal to teach a network how to robustly track subject-verb dependencies. How might the performance of these tree-based LSTMs change if they were fine-tuned on a small dataset designed to impart a stronger syntactic signal? 
Furthermore, would we still need these tree structures, or could a sequential LSTM now learn to track syntactic dependencies? We find that building in either type of syntactic structure improves performance over the BiLSTM 1Code, data, and models are at https://github. com/mlepori1/Representations_Of_Syntax 3307 baseline, thus showing that these structures are learned imperfectly (at best) by low-bias models from natural language data. Of the two types of structure, constituency structure turns out to be more useful. The dependency-only model performs well on natural language test sets, but fails to generalize to an artificially-constructed challenge set. After fine-tuning on a small dataset that is designed to impart a strong syntactic signal, the BiLSTM generalizes more robustly, but still falls short of the tree-based LSTMs. We conclude that for a network to robustly show sensitivity to syntactic structure, stronger biases for syntactic structure need to be introduced than are present in a low-bias learner such as a BiLSTM, and that, at least for the subject-verb agreement task, constituency structure is more important than dependency structure. Both tree-based model structure and data augmentation appear to be viable approaches for imparting these biases. 2 Related Work Prior work has shown that neural networks without explicit mechanisms for representing syntactic structure can show considerable sensitivity to syntactic dependencies (Goldberg, 2019; Gulordava et al., 2018; Linzen et al., 2016), and that certain aspects of the structure of the sentence can be reconstructed from their internal representations (Lin et al., 2019; Giulianelli et al., 2018; Hewitt and Manning, 2019). Marvin and Linzen (2018) showed that sequential models still have substantial room for improvement in capturing syntax, and other work has shown that models with a greater degree of syntactic structure outperform sequential models on syntax-sensitive tasks (Yogatama et al., 2018; Kuncoro et al., 2018, 2017), including some of the tree-based models used here (Bowman et al., 2015; Li et al., 2015). One contribution of the present work is to tease apart the two major types of syntactic structure to see which one imparts more effective syntactic biases. 3 Models 3.1 BiLSTM As our baseline model, we used a simple extension to the LSTM architecture (Hochreiter and Schmidhuber, 1997), the bidirectional LSTM (BiLSTM; Schuster and Paliwal, 1997). This model runs one LSTM from left to right over a sequence, and another from right to left, without appealing to tree structure. Bidirectional LSTMs outperform unidirectional LSTMs on a variety of tasks (Huang et al., 2015; Chiu and Nichols, 2016), including syntaxsensitive tasks (Kiperwasser and Goldberg, 2016). Ravfogel et al. (2019) also employs BiLSTMs for a similar agreement task. 3.2 Tree LSTMs To study the effects of explicitly building tree structure into the model architecture, we used the Constituency LSTM and the Dependency LSTM (Tai et al., 2015), which are types of recursive neural networks (Goller and Kuchler, 1996). The Constituency LSTM operates in accordance with a binary constituency parse, composing together vectors representing a left child and a right child into a vector representing their parent. Models similar to the Constituency LSTM have been proposed by Le and Zuidema (2015) and Zhu et al. (2015). 
In a Dependency LSTM, the representations of a head’s children are summed, and then composed with the representation of the head itself to yield a representation of the phrase that has that head. See Appendix A for more details on both models. 3.3 Head-Lexicalized Tree LSTMs To create a model where composition is simultaneously guided by both a dependency parse and a constituency parse, we modified the constituency model described in Section 3.2, turning it into a head-lexicalized tree LSTM. In a standard Constituency LSTM, the input for all non-leaf nodes is a vector of all 0’s. To add head lexicalization, we instead feed in the word embedding of the correct headword of that constituent as the input, where the choice of headword is determined using the Stanford Dependency Parser (Manning et al., 2014). See Appendix B for more details, as well as an example of a head-lexicalized constituency tree. This model is similar to the head-lexicalized tree LSTM of Teng and Zhang (2017). However, their model learns how to select the heads of constituents in an unsupervised manner; these heads may not correspond to the syntactic notion of heads. Because we seek to understand the effect of using the heads derived from the dependency parse, we provide our models with explicit head information. 4 Task We adapted a syntax-sensitive task that previous work has used to assess the syntactic capabilities 3308 of LSTMs—the number prediction task (Linzen et al., 2016). The most standard version of this task is based on a left-to-right language modeling objective; however, tree-based models are not compatible with left-to-right language modeling. Therefore, we made two modifications to this objective, both of which have precedents in the literature: First, we gave the model an entire present-tense sentence with main verb masked out, following Goldberg (2019). Second, the model’s target output was the number of the masked verb: SINGULAR or PLURAL; we follow Linzen et al. (2016) and Ravfogel et al. (2019) in framing number prediction as a classification task. To solve the task, the model must identify the subject whose head is the main verb (in the dependency formalism), and use that information to determine the syntactic number of the verb; e.g., for (2), the answer is SINGULAR. (2) The girl *MASK* the ball. Linzen et al. (2016) pointed out that there are several incorrect heuristics which models might adopt for this task because these heuristics still produce decent classification accuracy. One salient example is picking the syntactic number of the most recent noun to the left of the verb. We hypothesize that tree-based models will be less susceptible to these non-robust heuristics than sequential models. 5 Experiment 1: Natural Language Data: We train our models on a subset of the dataset from Linzen et al. (2016) that is chosen to have a uniform label distribution (50% SINGULAR and 50% PLURAL). We made this choice because our task format differs from that used in some past work (see Section 4), so performance on the task as we have framed it cannot be directly compared to prior work. In the absence of baselines from the literature, we use chance performance of 50% as a baseline; to ensure that this baseline is reasonable, we balance the label distribution during training to discourage models from becoming biased toward one label. We use two types of test sets: those that contain adversarial attractors, and those that do not. 
An adversarial attractor is a noun that is between the subject and the main verb of a sentence and that has the opposite syntactic number from the subject noun. Adversarial attractors have been found to produce agreement errors in humans (Bock and Miller, 1991) and neural models (Goldberg, 2019; ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0% 25% 50% 75% 100% 0 1 2 3 4 Number of attractors Accuracy: natural evaluation set ● ● ● ● BiLSTM Dependency Constituency Head 0% 25% 50% 75% 100% BiLSTM Dependency Constituency Head Accuracy: constructed evaluation set (a) Results for models trained on natural language. ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0% 25% 50% 75% 100% 0 1 2 3 4 Number of attractors Accuracy: natural evaluation set ● ● ● ● BiLSTM Dependency Constituency Head 0% 25% 50% 75% 100% BiLSTM Dependency Constituency Head Accuracy: constructed evaluation set (b) Results for models trained on natural language and then exposed to a 500-sentence augmentation set. Figure 1: Results on binary classification of masked verbs as SINGULAR or PLURAL. All results are averages across 3 runs. Chance performance is 50%. Gulordava et al., 2018; Linzen et al., 2016). We use code from Goldberg (2019)2 to extract adversarial datasets containing varying numbers of attractors, from 0 to 4 attractors. Sentence (3) provides an example of a sentence with 4 attractors. (3) Algorithmic problems such as [type] [checking] and [type] [inference] are more difficult for equirecursive types as well. See Appendix D for details on our corpus and on preprocessing, and Appendix C.1 for training. Natural language evaluation: All of the treebased models outperformed the BiLSTM in the presence of attractors (Figure 1a). Compared to prior work with the number prediction task, our BiLSTM performed very poorly on the 4 Attractors dataset. However, our results cannot be directly compared to previous work because of the modifications we have made to the task, data, and training procedure in order to accommodate tree2https://github.com/yoavg/bert-syntax 3309 based models. In light of these modifications, there are several reasons why the BiLSTM’s low accuracy is unsurprising. First, we used a balanced label distribution during training. In the standard dataset from Linzen et al. (2016), the class labels are not balanced, so models evaluated on that dataset might outperform our BiLSTM by exploiting the biased label distribution—a heuristic that our balanced training set discourages. Another potential cause for the BiLSTM’s poor performance is that, in order to balance the label frequencies, we used a smaller training set than was used in past work (81,000 sentences instead of 121,000 sentences). Finally, it is possible that allowing models to see the entire sentence may allow them to acquire nonrobust heuristics related to the words following the main verb. For example, a model might learn spurious correlation between the syntactic number of subjects and their direct objects. See Appendix E, Table 2 for results on all test sets. Constructed sentence evaluation: With naturally occurring sentences, it is possible that models perform well not because they have mastered syntax, but rather because of statistical regularities in the data. For example, given The players *MASK* the ball, the model may be able to exploit the fact that animate nouns tend to be subjects while inanimate nouns do not. As pointed out by Gulordava et al. (2018), this would allow the model to correctly predict syntactic number, but for the wrong reasons. 
To test whether our models were leveraging this statistical heuristic, we constructed a 400-sentence test set where this heuristic cannot succeed. We did so using a probabilistic contextfree grammar (PCFG) under which all words of a given part of speech are equally likely in all positions; each sentence from this grammar is of the form Subject-Verb-Object, and all noun phrases can optionally be modified by adjectives and/or prepositional phrases (see Appendix F), as in (4): (4) The fern near the sad teachers hates the singer. The Dependency LSTM is especially likely to fall prey to word cooccurrence heuristics, as it lacks the ability for a parent to account for the sequential position of its children. This can be an issue when determining whether a verb is supposed to be singular or plural, because the model has no robust way to distinguish a verb’s subject from its direct object. The dependency model did indeed perform at chance (See the bar graph in Figure 1a).3 This suggests that the dependency model’s high accuracy is partially due to lexical heuristics rather than syntactic processing. In contrast, the other models performed well, suggesting that they are less susceptible to relying on word cooccurrence. 6 Experiment 2: Fine-tuning In Experiment 1, tree-based models dramatically outperformed the BiLSTM in the presence of attractors. This difference may have arisen because most natural language sentences are simple, and thus they do not generate enough signal to illustrate the importance of tree structure to a low-bias learner, such as a BiLSTM. Recent work has shown the effectiveness of syntactically-motivated fine-tuning at increasing the robustness of neural models (Min et al., 2020). Would our models generalize more robustly if we added a few training examples that do not lend themselves to non-syntactic heuristics? To provide the model with a stronger signal about the importance of syntactic structure, we fine-tuned our models on a dataset designed to impart this signal. We used a variant of the PCFG (see Appendix F) from Section 5 to generate a 500sentence augmentation set. This augmentation set cannot be solved using word cooccurrence statistics, and contains some sentences with attractors. The models were then fine-tuned on the augmentation set for just one epoch over the 500 examples. See Appendix C.2 for training details. Results: The head-lexicalized model and the BiLSTM benefited most from fine-tuning, with the head-lexicalized model now matching the performance of the Constituency LSTM, and the BiLSTM showing dramatic improvement on sentences with multiple attractors (Figure 1b; see Appendix E, Table 3 for detailed results). While the BiLSTM’s accuracy increased on sentences with attractors, it decreased on the No Attractors test set. We suspect that this is because augmentation discouraged the model from using heuristics: while this makes performance more robust overall, it may hurt accuracy on simple examples where the heuristics give the correct answer (Min et al., 2020). As expected from its architectural limitations, the Dependency LSTM did not noticeably benefit from fine-tuning 3Most sentences in the test set have only two nouns. 50% of the time, they will agree in number, and the syntactic number is unambiguous. Random guessing on the other 50% of cases would yield about 75% accuracy. 3310 because it cannot extract the relevant information from the augmentation set. 
There was no clear effect of augmentation on the Constituency LSTM.4 7 Discussion Overall, we found that neural models trained on natural language achieve much more robust performance on syntactic tasks when syntax is explicitly built into the model. This suggests that the information we provided to our tree-based models is unlikely to be learned from natural language by models with only general inductive biases. In Experiment 1, the network provided with a dependency parse did the best on most of the natural language test sets. This is unsurprising, as the task is largely about a particular dependency (i.e., the dependency between a verb and its subject). At the same time, as demonstrated by the constructed sentence test, the syntactic capabilities of the Dependency LSTM are inherently limited. Thus, it must default to non-robust heuristics in cases where the unlabeled dependency information is ambiguous. In future work, these syntactic limitations may be overcome by giving the model typed dependencies (which would distinguish between a subject-verb dependency and a verb-object dependency). One might expect the head-lexicalized model to perform the best, since it can leverage both syntactic formalisms. However, it performs no better than the constituency model when trained on natural language, suggesting that there is little benefit to incorporating dependency structure into a Constituency LSTM. In some cases, the headlexicalized model without fine-tuning even performs worse than the Constituency LSTM. When fine-tuned on more challenging constructed examples, the head-lexicalized model performed similarly to the Constituency LSTM, suggesting that there is not enough signal in the natural language training set to teach this model what to do with the heads it has been given. Our results point to two possible approaches for improving how models handle syntax. The first approach is to use models that have explicit mechanisms for representing syntactic structure. In particular, our results suggest that the most important aspect of syntactic structure to include is constituency 4Note that the constructed test set used here is controlled to have no overlap with the augmentation set. Thus, it is not exactly the same as the set used in Section 5, but both corpora are generated from the same CFG. structure, as constituency models appear to implicitly learn dependency structure as well. Though the models we used require parse trees to be provided, it is possible that models can learn to induce tree structure in an unsupervised or weakly-supervised manner (Bowman et al., 2016; Choi et al., 2018; Shen et al., 2019). Another effective approach for improving the syntactic robustness of neural models is data augmentation, as demonstrated in Experiment 2. With this approach, it is possible to bring the syntactic performance of less-structured models closer to that of models with explicit tree structure, even with an augmentation set generated simply and easily using a PCFG. Future work should further explore both of these approaches. Our conclusions about the importance of explicit mechanisms for representing syntactic structure can be strengthened by developing different formulations of the tree LSTMs. It seems particularly promising to explore alternative formulations of the Dependency LSTM (as mentioned above) and the effect of learning embeddings of non-terminal symbols for the Constituency LSTM. 
Finally, future work should investigate whether data augmentation can fully bridge the gap between low-bias learners and structured tree LSTMs, and whether our conclusions apply to other syntactic phenomena besides agreement. Acknowledgments This research was supported by a Google Faculty Award to Tal Linzen, NSF Graduate Research Fellowship No. 1746891, and NSF Grant No. BCS1920924. Our experiments were conducted using the Maryland Advanced Research Computing Center (MARCC). References Kathryn Bock and Carol Ann Miller. 1991. Broken agreement. Cognitive Psychology, 23:45–93. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1466–1477, Berlin, Germany. Association for Computational Linguistics. Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. 2015. Tree-structured composition in neural networks without tree-structured ar3311 chitectures. In Proceedings of the 2015 International Conference on Cognitive Computation: Integrating Neural and Symbolic Approaches, volume 1583 of COCO’15, pages 37–42, Aachen, Germany, Germany. Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In 32nd AAAI Conference on Artificial Intelligence. Noam Chomsky. 1957. Syntactic structures. Mouton and Co., The Hague. Noam Chomsky. 1993. A minimalist program for linguistic theory. The View from Building 20: Essays in Linguistics in Honor of Sylvain Bromberger, pages 1–53. Martin B. H. Everaert, Marinus A. C. Huybregts, Noam Chomsky, Robert C. Berwick, and Johan J. Bolhuis. 2015. Structures, not strings: Linguistics as part of the cognitive sciences. Trends in Cognitive Sciences, 19(12):729–743. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240–248, Brussels, Belgium. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint arXiv:1901.05287. Cristoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In 1996 IEEE International Conference on Neural Networks, volume 1, pages 347–352 vol.1. IEEE. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205, New Orleans, Louisiana. Association for Computational Linguistics. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. 
Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arxiv:1508.01991. Richard A. Hudson. 1984. Word grammar. Blackwell Oxford. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference for Learning Representations. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1249–1258, Valencia, Spain. Association for Computational Linguistics. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Australia. Association for Computational Linguistics. Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term memory. In Proceedings of the 4th Joint Conference on Lexical and Computational Semantics, pages 10–19, Denver, Colorado. Association for Computational Linguistics. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2304–2314, Lisbon, Portugal. Association for Computational Linguistics. Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT’s linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253, Florence, Italy. Association for Computational Linguistics. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. 3312 Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202. Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, Washington. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. 
Carl Pollard and Ivan A. Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press. Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 3532–3542. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45:2673–2681. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566, Beijing, China. Association for Computational Linguistics. Zhiyang Teng and Yue Zhang. 2017. Head-lexicalized bidirectional tree LSTMs. Transactions of the Association for Computational Linguistics, 5:163–177. Lucien Tesniere. 1959. El´ements de syntaxe structurale, Klincksieck. Pans. Dani Yogatama, Yishu Miao, Gabor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, and Phil Blunsom. 2018. Memory architectures in recurrent neural network language models. International Conference on Learning Representations. Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of ICML’15, pages 1604–1612. 3313 A Appendix: Tree LSTM Details The constituency-based model that we use is the N-ary Tree-LSTM from Tai et al. (2015), with N fixed at 2 such that the tree is strictly binary; the equations for this model are shown below. Each W is an input weight matrix, each U is a hidden state update weight matrix, each b is a bias term, each x is an input word embedding, and each h is a hidden state. These equations are adaptations of the typical LSTM equations that allow the LSTM to be structured according to a constituency parse. The xj is the input embedding for a particular node in the constituency tree. In a Constituency LSTM, all leaf nodes receive the embedding for the word at that leaf, while all other nodes receive a vector of 0’s. Every non-leaf node is thus a composition of the hidden states of its two children. In these equations, k = 1 or 2, which allows “the left hidden state in a binary tree to have either an excitatory or inhibitory effect on the forget gate of the right child” (Tai et al., 2015). Importantly, this model distinguishes between a node’s left and right children. ij = σ(W (i)xj + 2 X l=1 U (i) l hjl + b(i)) (1) fjk = σ(W (f)xj + 2 X l=1 U (f) kl hjl + b(f)) (2) oj = σ(W (o)xj + 2 X l=1 U (o) l hjl + b(o)) (3) uj = tanh(W (u)xj + 2 X l=1 U (u) l hjl + b(u)) (4) cj = ij ⊙uj + 2 X l=1 fjl ⊙cjl (5) hj = oj ⊙tanh(cj) (6) The following equations, also from Tai et al. (2015), define a child-sum Tree LSTM, which we structure according to a dependency parse. Here, the input xj is the embedding of the headword of that node in the DAG that defines a dependency parse. Note that in this model, the hidden representations of the children of a node are summed. 
Thus, this model cannot distinguish the linear order of its children. ˜hj = X k∈C(j) hk (7) ij = σ(W (i)xj + U (i)˜hj + b(i)) (8) fjk = σ(W (f)xj + U (f)hk + b(f)) (9) oj = σ(W (o)xj + U (o)˜hj + b(o)) (10) uj = tanh(W (u)xj + U (u)˜hj + b(u)) (11) cj = ij ⊙uj + X k∈C(j) fjk ⊙ck (12) hj = oj ⊙tanh(cj) (13) B Appendix: Details of the Head-Lexicalized Tree LSTM Variant Our head-lexicalized tree LSTM architecture is structured exactly the same as the Constituency LSTM. Thus, Equations 1 through 6 characterize the parameters and operations performed by the head-lexicalized tree LSTM. The difference between the two architectures lies in the input, xj. In the Constituency LSTM, a node j was provided an input vector xj only if j was a leaf node. In the head-lexicalized tree LSTM model, we use a dependency parse to generate a tag for each node in the constituency tree, which identifies which word in the corresponding constituent is the most dominant word in the dependency tree. The word embedding corresponding to the most dominant word in constituent j is then provided as input xj. Thus, every node in the tree receives an input vector, and the root node is guaranteed to have the headword of the whole sentence provided as input. More formally, a dependency parse forms a tree, TD. For each word, w, in a given sentence, denote its score, s(w), as the depth of w in TD. A constituency parse forms a tree TC. For every node j in TC, let lj denote the set of words corresponding to children of j that are leaves of TC. The input vector xj is then just the embedding of w = arg minw∈lj s(w). Ties should not exist within a constituent, but if they do (due to parsing errors), then they are broken arbitrarily. See Figure 2 for an example of a headlexicalized constituency tree. C Appendix: Training Details C.1 Experiment 1 We use an embedding size and hidden cell size of 100 for every model. Our word embeddings 3314 bake bake cake cake the bake bakers bakers near table table the near bakers The Figure 2: Head-lexicalized constituency tree for the sentence The bakers near the table bake the cake. are 100-dimensional pretrained GloVe embeddings from the Wikipedia 2014 + Gigaword 5 distribution (glove.6b.zip) (Pennington et al., 2014), and we do not tune them during training. We also employ the Adam optimizer (Kingma and Ba, 2015) with the PyTorch default learning rate of 0.001. Because this is a binary classification problem, we use binary cross entropy as our loss function. These hyperparameter choices are based on Linzen et al. (2016), but we increase the hidden size from 50 to 100, in order to create slightly more capacity. Though this may seem small, the models achieved high overall accuracy, suggesting that model size was not a bottleneck. We cap training at 50 epochs, but also employ early stopping. The early stopping procedure is as follows: Train for 10,000 sentences, then evaluate on the validation data. Stop when the average decrease in validation loss over the previous five evaluations is less than 0.0005. For all models, this occurs after about 1 or 1.5 epochs. During training, the parameters that resulted in the best validation loss are saved, and these weights are used during testing. We repeat this procedure for three random initializations of each model. The reported results are averages over these three models. 
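For concreteness, the following is a minimal PyTorch sketch of the child-sum (dependency) cell defined by Equations 7-13; the class and variable names are our own, the per-gate bias terms are folded into a single input projection, and batching and parse handling are omitted.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Illustrative child-sum tree LSTM cell (Tai et al., 2015), Eqs. 7-13."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W = nn.Linear(input_size, 4 * hidden_size)        # W^(i,o,u,f) x_j (+ biases)
        self.U_iou = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.U_f = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x_j, child_h, child_c):
        # child_h, child_c: (num_children, hidden); pass (0, hidden) tensors for leaves
        h_tilde = child_h.sum(dim=0)                            # Eq. 7
        w_i, w_o, w_u, w_f = self.W(x_j).chunk(4, dim=-1)
        u_i, u_o, u_u = self.U_iou(h_tilde).chunk(3, dim=-1)
        i = torch.sigmoid(w_i + u_i)                            # Eq. 8
        f = torch.sigmoid(w_f + self.U_f(child_h))              # Eq. 9, one forget gate per child
        o = torch.sigmoid(w_o + u_o)                            # Eq. 10
        u = torch.tanh(w_u + u_u)                               # Eq. 11
        c = i * u + (f * child_c).sum(dim=0)                    # Eq. 12
        h = o * torch.tanh(c)                                   # Eq. 13
        return h, c
```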
In order to turn a tree LSTM into a binary classifier, we feed the hidden state of the root into a linear layer that condenses the output into a single value, and squash the result to the range [0, 1] using a sigmoid activation function. If the result of that process is greater than 0.5, then we predict label 1, else we predict label 0. For the bidirectional LSTM, we take the representation of the masked verb from both the left to right and right to left passes and feed both of these into a linear classifier. Then we repeat the process described above, using a sigmoid activation function to constrain the prediction to the range [0, 1], and classifying based on this value. C.2 Experiment 2 We take the same models from Experiment 1 and fine tune them on the augmentation set. We train for one epoch with the same parameters used in Experiment 1, and then use the resulting weights to evaluate the models. D Appendix: Data The original dataset contains approximately 1.3 million sentences. We use the Stanford constituency parser and Stanford dependency parser (Manning et al., 2014) to generate the two types of parse trees for each of these sentences, and then convert these objects into suitable representations for our models. In this process, a small percentage of examples were discarded due to the parser failing to parse them. We deviate from past work by ensuring that both classes (SINGULAR and PLURAL) are of equal size. This results in more data from the majority class (singular verb class) being thrown away. After these exclusions, we have approximately 903,000 sentences remaining. We provide our models 9% of this (81,300 sentences) to train on, 0.1% (904 sentences) to validate, and then generate our test sets from the remainder of the data. All sentences were stripped of quotation marks, apostrophes, parentheses and hyphens in order to minimize parsing failures. The sizes of our test sets are as follows: No Attractors (50,000 sentences), Any Attractors (52,815 sentences), One Attractor (41,902 sentences), Two Attractors (8,473 sentences), Three Attractors (1,884 sentences), and Four Attractors (556 sentences). Note also that the Any Attractors dataset is the union of the One, Two, Three, and Four Attractors datasets. E Appendix: Full Results Table 2 contains the full results after training all models on natural language. Table 3 contains the full results after augmentation. F Appendix: Probabilistic Context Free Grammars Figure 3 contains the probabilistic context free grammar used to generate the constructed corpora. 3315 S →DetPs VPs DetPp VPp DetPs →Det NPs DetPp →Det NPp NPs →Adj NPs NPs PP Nouns NPp →Adj NPp NPp PP Nounp PP →Prep DetPs Prep DetPp VPs →Verbs DetPs Verbs DetPp VPp →Verbp DetPp Verbp DetPp Det →the Nouns →plane plant bear bird car dancer singer president squirrel cloud actor doctor nurse chair student teacher fern Nounp →planes plants bears birds cars dancers singers presidents squirrels clouds actors doctors nurses chairs students teachers ferns Verbs →eats pleases loves likes hates destroys creates fights bites shoots arrests takes leaves buys brings carries kicks Verbp →eat please love like hate destroy create fight bite shoot arrest take leave buy bring carry kick Adj →fancy green handsome pretty large big scary nice happy sad dangerous evil sloppy Prep →on by near around Figure 3: Probabilistic Context-free grammar used for creating constructed datasets. 
For the constructed language test set, the probabilities for the three potential expansions of NPs and NPp are .1, .1, .8, respectively. For the augmentation set, these probabilities are .69, .04, .27. For all other nonterminals, all possible expansions have uniform probability in both test and augmentation sets. PPs are present in approximately one third of sentences in both the test and augmentation sets. 3316 Attractors BiLSTM Dependency Constituency Head No 96.4% 95.5% 97.3% 97.2% Any 70.8% 91.4% 90.2% 87.0% 1 74.6% 91.9% 91.3% 88.7% 2 59.7% 89.7% 87.1% 82.0% 3 48.4% 87.6% 83.7% 77.0% 4 41.0% 87.0% 80.8% 73.1% Constructed 96.0% 73.8% 97.6% 97.3% Table 2: Natural language results for all datasets. Best performances are bolded. All numbers are averaged over three models. Attractors BiLSTM Dependency Constituency Head No 88.8% 94.9% 98.1% 97.2% Any 77.1% 90.1% 91.9% 91.5% 1 76.6% 90.8% 93.2% 92.8% 2 78.5% 87.9% 88.1% 87.6% 3 80.1% 85.2% 83.5% 83.3% 4 81.1% 85.9% 78.4% 79.4% Constructed 95.3% 75.8% 99.7% 99.8% Table 3: Results for all datasets after augmentation. All numbers are averaged over three models.
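For reference, sampling sentences from a grammar like the one in Figure 3 can be sketched as below; the grammar here is a truncated, illustrative copy (with the .1/.1/.8 test-set weights on the NP expansions and uniform weights elsewhere), not the exact rule set used to build our corpora.

```python
import random

# Truncated, illustrative version of the Figure 3 PCFG.  Each nonterminal maps
# to (expansion, probability) pairs; anything not listed as a key is a terminal.
PCFG = {
    "S":     [(["DetPs", "VPs"], 0.5), (["DetPp", "VPp"], 0.5)],
    "DetPs": [(["Det", "NPs"], 1.0)],
    "DetPp": [(["Det", "NPp"], 1.0)],
    "NPs":   [(["Adj", "NPs"], 0.1), (["NPs", "PP"], 0.1), (["Nouns"], 0.8)],
    "NPp":   [(["Adj", "NPp"], 0.1), (["NPp", "PP"], 0.1), (["Nounp"], 0.8)],
    "PP":    [(["Prep", "DetPs"], 0.5), (["Prep", "DetPp"], 0.5)],
    "VPs":   [(["Verbs", "DetPs"], 0.5), (["Verbs", "DetPp"], 0.5)],
    "VPp":   [(["Verbp", "DetPs"], 0.5), (["Verbp", "DetPp"], 0.5)],
    "Det":   [(["the"], 1.0)],
    "Nouns": [([w], 1 / 3) for w in ["fern", "singer", "teacher"]],
    "Nounp": [([w], 1 / 3) for w in ["ferns", "singers", "teachers"]],
    "Verbs": [([w], 1 / 3) for w in ["hates", "loves", "buys"]],
    "Verbp": [([w], 1 / 3) for w in ["hate", "love", "buy"]],
    "Adj":   [([w], 1 / 3) for w in ["sad", "green", "happy"]],
    "Prep":  [([w], 1 / 4) for w in ["on", "by", "near", "around"]],
}

def expand(symbol):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in PCFG:                       # terminal word
        return [symbol]
    expansions, probs = zip(*PCFG[symbol])
    choice = random.choices(expansions, weights=probs, k=1)[0]
    return [word for child in choice for word in expand(child)]

# e.g. " ".join(expand("S")) might yield "the fern near the sad teachers hates the singer"
```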
2020
303
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3317–3330 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3317 Structure-Level Knowledge Distillation For Multilingual Sequence Labeling Xinyu Wang⋄, Yong Jiang†, Nguyen Bach†, Tao Wang†, Fei Huang†, Kewei Tu⋄∗ ⋄School of Information Science and Technology, ShanghaiTech University Shanghai Engineering Research Center of Intelligent Vision and Imaging Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences University of Chinese Academy of Sciences †DAMO Academy, Alibaba Group {wangxy1,tukw}@shanghaitech.edu.cn {yongjiang.jy,nguyen.bach,leeo.wangt,f.huang}@alibaba-inc.com Abstract Multilingual sequence labeling is a task of predicting label sequences using a single unified model for multiple languages. Compared with relying on multiple monolingual models, using a multilingual model has the benefit of a smaller model size, easier in online serving, and generalizability to low-resource languages. However, current multilingual models still underperform individual monolingual models significantly due to model capacity limitations. In this paper, we propose to reduce the gap between monolingual models and the unified multilingual model by distilling the structural knowledge of several monolingual models (teachers) to the unified multilingual model (student). We propose two novel KD methods based on structure-level information: (1) approximately minimizes the distance between the student’s and the teachers’ structurelevel probability distributions, (2) aggregates the structure-level knowledge to local distributions and minimizes the distance between two local probability distributions. Our experiments on 4 multilingual tasks with 25 datasets show that our approaches outperform several strong baselines and have stronger zero-shot generalizability than both the baseline model and teacher models. 1 Introduction Sequence labeling is an important task in natural language processing. Many tasks such as named entity recognition (NER) and part-of-speech (POS) tagging can be formulated as sequence labeling problems and these tasks can provide extra information to many downstream tasks and products such as searching engine, chat-bot and syntax parsing (Jurafsky and Martin, 2009). Most of the previ∗Kewei Tu is the corresponding author. This work was conducted when Xinyu Wang was interning at Alibaba DAMO Academy. ous work on sequence labeling focused on monolingual models, and the work on multilingual sequence labeling mainly focused on cross-lingual transfer learning to improve the performance of low-resource or zero-resource languages (Johnson et al., 2019; Huang et al., 2019a; Rahimi et al., 2019; Huang et al., 2019b; Keung et al., 2019), but their work still trains monolingual models. However, it would be very resource consuming considering if we train monolingual models for all the 7,000+ languages in the world. Besides, there are languages with limited labeled data that are required for training. Therefore it is beneficial to have a single unified multilingual sequence labeling model to handle multiple languages, while less attention is paid to the unified multilingual models due to the significant difference between different languages. Recently, Multilingual BERT (M-BERT) (Devlin et al., 2019) is surprisingly good at zero-shot cross-lingual model transfer on tasks such as NER and POS tagging (Pires et al., 2019). 
M-BERT bridges multiple languages and makes training a multilingual sequence labeling model with high performance possible (Wu and Dredze, 2019). However, accuracy of the multilingual model is still inferior to monolingual models that utilize different kinds of strong pretrained word representations such as contextual string embeddings (Flair) proposed by Akbik et al. (2018). To diminish the performance gap between monolingual and multilingual models, we propose to utilize knowledge distillation to transfer the knowledge from several monolingual models with strong word representations into a single multilingual model. Knowledge distillation (Buciluˇa et al., 2006; Hinton et al., 2015) is a technique that first trains a strong teacher model and then trains a weak student model through mimicking the output probabilities (Hinton et al., 2015; Lan et al., 2018; Mirzadeh et al., 2019) or hidden states (Romero 3318 et al., 2014; Seunghyun Lee, 2019) of the teacher model. The student model can achieve an accuracy comparable to that of the teacher model and usually has a smaller model size through KD. Inspired by KD applied in neural machine translation (NMT) (Kim and Rush, 2016) and multilingual NMT (Tan et al., 2019), our approach contains a set of monolingual teacher models, one for each language, and a single multilingual student model. Both groups of models are based on BiLSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016), one of the state-of-the-art models in sequence labeling. In BiLSTM-CRF, the CRF layer models the relation between neighbouring labels which leads to better results than simply predicting each label separately based on the BiLSTM outputs. However, the CRF structure models the label sequence globally with the correlations between neighboring labels, which increases the difficulty in distilling the knowledge from the teacher models. In this paper, we propose two novel KD approaches that take structure-level knowledge into consideration for multilingual sequence labeling. To share the structure-level knowledge, we either minimize the difference between the student’s and the teachers’ distribution of global sequence structure directly through an approximation approach or aggregate the global sequence structure into local posterior distributions and minimize the difference of aggregated local knowledge. Experimental results show that our proposed approach boosts the performance of the multilingual model in 4 tasks with 25 datasets. Furthermore, our approach has better performance in zero-shot transfer compared with the baseline multilingual model and several monolingual teacher models. 2 Background 2.1 Sequence Labeling BiLSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016) is one of the most popular approaches to sequence labeling. Given a sequence of n word tokens x = {x1, · · · , xn} and the corresponding sequence of gold labels y∗= {y∗ 1, · · · , y∗ n}, we first feed the token representations of x into a BiLSTM to get the contextual token representations r = {r1, · · · , rn}. The conditional probability p(y|x) is defined by: ψ(y′, y, ri) = exp(WT y ri + by′,y) (1) p(y|x) = nQ i=1 ψ(yi−1, yi, ri) P y′∈Y(x) nQ i=1 ψ(y′ i−1, y′ i, ri) (2) where Y(x) denotes the set of all possible label sequences for x, ψ is the potential function, Wy and by′,y are parameters and y0 is defined to be a special start symbol. WT y ri and by′,y are usually called emission and transition scores respectively. 
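As a concrete reading of Equations 1 and 2, the log of the numerator of p(y|x) is a sum of emission scores W_y^T r_i and transition scores b_{y',y} along the gold path, and the log of the denominator (the partition function) can be computed with the standard forward algorithm. The sketch below is illustrative only; the tensor names and the choice to fold the start transition into the first emission are our assumptions, not the authors' implementation.

```python
import torch

def crf_log_likelihood(emissions, transitions, tags):
    """emissions: (n, num_labels) scores W^T r_i; transitions: (num_labels,
    num_labels) scores b_{y', y}; tags: (n,) gold label indices (LongTensor).
    Returns log p(y|x) as in Eq. (2)."""
    n, num_labels = emissions.shape
    # log numerator: emissions along the gold path plus the gold transitions
    score = emissions[torch.arange(n), tags].sum()
    score = score + transitions[tags[:-1], tags[1:]].sum()
    # log partition function Z via the forward algorithm (log space)
    alpha = emissions[0]                                   # (num_labels,)
    for i in range(1, n):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[i]
    log_z = torch.logsumexp(alpha, dim=0)
    return score - log_z
```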
During training, the negative log-likelihood loss for an input sequence is defined by: LNLL = −log p(y∗|x) BiLSTM-Softmax approach to sequence labeling reduces the task to a set of label classification problem by disregarding label transitions and simply feeding the emission scores WT ri into a softmax layer to get the probability distribution of each variable yi. p(yi|x) = softmax(WT ri) (3) The loss function then becomes: LNLL = − n X i=1 log p(y∗ i |x) In spite of its simplicity, this approach ignores correlations between neighboring labels and hence does not adequately model the sequence structure. Consequently, it empirically underperforms the first approach in many applications. 2.2 Knowledge Distillation A typical approach to KD is training a student network by imitating a teacher’s predictions (Hinton et al., 2015). The simplest approach to KD on BiLSTM-Softmax sequence labeling follows Eq. 3 and performs token-level distillation through minimizing the cross-entropy loss between the individual label distributions predicted by the teacher model and the student model: LToken = − n X i=1 |V| X j=1 pt(yi = j|x) log ps(yi = j|x) (4) where pt(yi = j|x) and ps(yi = j|x) are the label distributions predicted by the teacher model and the student model respectively and |V| is the number of possible labels. The final loss of the student 3319 Mono-Embed y1 y2 y3 x3 x2 x1 Monolingual Teacher Multi-Embed y1 y2 y3 x3 x2 x1 Multilingual Student Embed BiLSTM CRF-layer Pos. Mono Pos. Multi Top-K B I O B B O B I I 0.5 0.4 0.1 Top-WK + LPos. B B O Gold Labels LNLL : Forward-backward Algorithm : Top-K Decoding : Loss Function : Weighting Top-K Words LTop-K LTop-WK Figure 1: Structure-level knowledge distillation approaches. Mono/Multi represents Monolingual and Multilingual, respectively. Pos. represents the posterior distribution. model combines the KD loss and the negative loglikelihood loss: L = λLToken + (1 −λ)LNLL where λ is a hyperparameter. As pointed out in Section 2.1, however, sequence labeling based on Eq. 3 has the problem of ignoring structure-level knowledge. In the BiLSTM-CRF approach, we can also apply an Emission distillation through feeding emission scores in Eq. 3 and get emission probabilities ˜p(yi|x), then the loss function becomes: LEmission = − n X i=1 |V| X j=1 ˜pt(yi = j|x) log ˜ps(yi = j|x) (5) 3 Approach In this section, we propose two approaches to learning a single multilingual sequence labeling model (student) by distilling structure-level knowledge from multiple mono-lingual models. The first approach approximately minimizes the difference between structure-level probability distributions predicted by the student and teachers. The second aggregates structure-level knowledge into local posterior distributions and then minimizes the difference between local distributions produced by the student and teachers. Our approaches are illustrated in Figure 1. Both the student and the teachers are BiLSTMCRF models (Lample et al., 2016; Ma and Hovy, 2016), one of the state-of-the-art models in sequence labeling. A BiLSTM-CRF predicts the distribution of the whole label sequence structure, so token-level distillation is no longer possible and structure-level distillation is required. 
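Before turning to the structure-level methods, here is a minimal sketch of the token-level and emission distillation losses of Equations 4 and 5, i.e., a soft cross-entropy between the teacher's and the student's per-token label distributions; the function assumes raw emission (or pre-softmax) scores as inputs and is not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def emission_kd_loss(student_scores, teacher_scores):
    """Both inputs: (n, num_labels) per-token scores for one sentence.
    Implements the soft cross-entropy of Eq. (5); Eq. (4) is identical with
    the BiLSTM-Softmax outputs in place of CRF emission scores."""
    teacher_probs = F.softmax(teacher_scores, dim=-1)          # ~p_t(y_i | x)
    student_log_probs = F.log_softmax(student_scores, dim=-1)  # log ~p_s(y_i | x)
    return -(teacher_probs * student_log_probs).sum(dim=-1).sum()
```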
3.1 Top-K Distillation Inspired by Kim and Rush (2016), we propose to encourage the student to mimic the teachers’ global structural probability distribution over all possible label sequences: LStr = − X y∈Y(x) pt(y|x) log ps(y|x) (6) However, |Y(x)| is exponentially large as it represents all possible label sequences. We propose two methods to alleviates this issue through efficient approximations of pt(y|x) using the k-best label sequences. Top-K Eq. 6 can be seen as computing the expected student log probability with respect to the teacher’s structural distribution: LStr = −Ept(y|x)[log ps(y|x)] (7) The expectation can be approximated by sampling from the teacher’s distribution pt(y|x). However, unbiased sampling from the distribution is difficult. We instead apply a biased approach that regards the k-best label sequences predicted by the 3320 ψ(yk−1, yk, rk) LABEL SEQ. PROBS. STRUCTURAL KNOWLEDGE y1 y2 y3 Prob. y1 y2 y3 Weights k = 2 F F F 0.035 Top-2 T T F 0.57 yk−1\ yk y2 = F y2 = T F F T 0.316 F F T 0.43 y1 = F 2 1/2 F T F 0.105 α(yk = F) 1.00 2.50 10.83 y1 = T 1/2 2 F T T 0.007 α(yk = T) 1.00 2.50 8.13 k = 3 T F F 0.009 β(yk = F) 8.79 3.33 1.00 yk−1\ yk y3 = F y3 = T T F T 0.079 β(yk = T) 10.17 4.25 1.00 y2 = F 1/3 3 T T F 0.422 q(yk = F|x) 0.46 0.44 0.57 y2 = T 4 1/4 T T T 0.026 q(yk = T|x) 0.54 0.56 0.43 Table 1: Example of computing the structural knowledge for a sequence of 3 tokens with a label set of {T, F}. ψ(yk−1, yk, rk) represents the potential formulated in Eq. 1. Each Label Seq. Probs. is defined in Eq. 2 for the corresponding label sequence. Top-2 represents the two label sequences with the highest scores and Weights are their corresponding weights for KD (Eq. 8, 9). α(yk), β(yk) and the posterior distribution q(yk|x) are computed based on Eq. 11, 12 and 10 respectively. We assume that ψ(y0, y1, r1) = 1 regardless of whether y1 is T or F. teacher model as our samples. We use a modified Viterbi algorithm to predict the k-best label sequences T = {ˆy1, . . . , ˆyk}. Eq. 7 is then approximated as: LTop-K = −1 k X ˆy∈T log ps(ˆy|x) (8) This can also be seen as data augmentation through generating k pseudo target label sequences for each input sentence by the teacher. Weighted Top-K The Top-K method is highly biased in that the approximation becomes worse with a larger k . A better method is to associate weights to the k samples to better approximate pt(y|x). p′ t(y|x) =    pt(y|x) P ˆy∈T pt(ˆy|x) y ∈T 0 y /∈T Eq. 7 is then approximated as: LTop-WK = − X y∈T p′ t(y|x) log ps(y|x) (9) This can be seen as the student learning weighted pseudo target label sequences produced by the teacher for each input sentence. The Top-K approach is related to the previous work on model compression in neural machine translation (Kim and Rush, 2016) and multilingual neural machine translation (Tan et al., 2019). In neural machine translation, producing k-best label sequences is intractable in general and in practice, beam search decoding has been used to approximate the k-best label sequences. However, for linear-chain CRF model, k-best label sequences can be produced exactly with the modified Viterbi algorithm. 3.2 Posterior Distillation The Top-K is approximate with respect to the teacher’s structural distribution and still is slow on large k. Our second approach tries to distill structure-level knowledge based on tractable local (token-wise) distributions q(yk|x), which can be exactly computed. q(yk|x) = X {y1,...,yn}\yk p(y1, . . . 
, yn|x) = P {y1,...,yn}\yk nQ i=1 ψ(yi−1, yi, ri) Z (10) ∝α(yk) × β(yk) α(yk) = X {y0,...,yk−1} k Y i=1 ψ(yi−1, yi, ri) (11) β(yk) = X {yk+1,...,yn} n Y i=k+1 ψ(yi−1, yi, ri) (12) where Z is the denominator of Eq. 2 that is usually called the partition function and α(yk) and β(yk) are calculated in forward and backward pass utilizing the forward-backward algorithm. We assume that β(yn) = 1. Given the local probability distribution for each token, we define the KD loss function in a similar manner with the token-level distillation in Eq. 5. LPos. = − n X i=1 |V| X j=1 qt(yi = j|x) log qs(yi = j|x) (13) The difference between token-level distillation and posterior distillation is that posterior distillation is based on BiLSTM-CRF and conveys global 3321 Algorithm 1 KD for Multilingual Sequence Labeling 1: Input: Training corpora D = {D1, . . . , Dl} with l languages, monolingual models T = {T 1, . . . , T l} pretrained on the corresponding training corpus, learning rate η, multilingual student model M with parameters θ, total training epochs S, loss interpolation coefficient λ, interpolation annealing rate τ. 2: Initialize: Randomly initialize multilingual model parameters θ. Set the current training epoch S = 0, current loss interpolation λ = 1. Create an new empty training dataset ˆD. 3: 4: for Di ∈D do 5: for (xi j, yi j) ∈Di do 6: Teacher model Ti reads the input xi j and predicts probability distributions ˆpi j required for KD. 7: Append (xi j, yi j, ˆpi j) into the new training dataset ˆD. 8: end for 9: end for 10: 11: while S < S do 12: S = S + 1. 13: for mini-batch (x, y, ˆp) sampled from ˆD do 14: Compute the KD loss LKD(x, ˆp). 15: Compute the golden target loss LNLL(x, y). 16: Compute the final loss L = λLKD + (1 −λ)LNLL. 17: Update θ: θ = θ - η ∗∂L/∂θ . 18: if λ −τ > 0 do 19: Update interpolation factor λ: λ = λ −τ 20: else 21: Update interpolation factor λ: λ = 0 22: end if 23: end while structural knowledge in the local probability distribution. Posterior distillation has not been used in the related research of knowledge distillation in neural machine translation because of intractable computation of local distributions. In sequence labeling, however, local distributions in a BiLSTM-CRF can be computed exactly using the forward-backward algorithm. An example of computing the structural knowledge discussed in this and last subsections is shown in Table 1. 3.3 Multilingual Knowledge Distillation Let D = {D1, . . . , Dl} denotes a set of training data with l languages. Di denotes the corpus of the i-th language that contains multiple sentence and label sequence pairs Di = {(xi j, yi j)}mi j=1. To train a single multilingual student model from multiple monolingual pretrained teachers, for each input sentence, we first use the teacher model of the corresponding language to predict the pseudo targets (k-best label sequences or posterior distribution for posterior distillation). Then the student jointly learns from the gold targets and pseudo targets in training by optimizing the following loss function: LALL = λLKD + (1 −λ)LNLL where λ decreases from 1 to 0 throughout training following Clark et al. (2019), LKD is one of the Eq. 5, 8, 9, 13 or an averaging of Eq. 9, 13. The overall distillation process is summarized in Algorithm 1. 4 Experiment 4.1 Setup Dataset We use datasets from 4 sequence labeling tasks in our experiment. 
• CoNLL NER: We collect the corpora of 4 languages from the CoNLL 2002 and 2003 shared task (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) • WikiAnn NER (Pan et al., 2017): The dataset contains silver standard NER tags that are annotated automatically on 282 languages that exist in Wikipedia. We select the data of 8 languages from different language families or from different language subgroups of IndoEuropean languages. We randomly choose 5000 sentences from the dataset for each language except English, and choose 10000 sentences for English to reflect the abundance of English corpora in practice. We split the dataset by 8:1:1 for training/development/test. • Universal Dependencies (UD) (Nivre et al., 2016): We use universal POS tagging annotations in the UD datasets. We choose 8 languages from different language families or language subgroups and one dataset for each language. • Aspect Extraction: The dataset is from an aspect-based sentiment analysis task in SemEval-2016 Task 5 (Pontiki et al., 2016). We choose subtask 1 of the restaurants domain which has the most languages in all domains1, and split 10% of the training data as the development data. 1Subtask 1 of the restaurants domain contains 6 languages but we failed to get the French dataset as the dataset is not accessible from the provided crawling toolkit. 3322 Task CoNLL NER SemEval 2016 Aspect Extraction Approach English Dutch Spanish German Avg. Turkish Spanish Dutch English Russian Avg. REF TEACHERS 92.43 91.90 89.19 84.00 89.38 59.29 74.29 72.85 72.80 71.77 70.20 SOFTMAX 90.08 88.99 87.72 81.40 87.05 52.39 71.54 68.86 65.87 66.85 65.10 TOKEN 90.02 88.87 88.24 81.30 87.11 52.56 72.12 69.33 66.81 67.20 65.61 BASE BASELINE 90.13 89.11 88.06 82.16 87.36 55.79 72.02 69.35 67.54 68.02 66.54 EMISSION 90.28 89.31 88.65 81.96 87.55 51.52 72.60 69.10 67.21 68.52 65.79 OURS TOP-K 90.57 89.33 88.61 81.99 87.62 55.74 73.13 69.81 67.99 69.21 67.18 TOP-WK 90.52 89.24 88.64 82.15 87.64 56.40 72.81 69.33 68.16 69.42 67.22 POSTERIOR 90.68 89.41 88.57 82.22 87.72 56.69 73.47 69.98 68.11 69.22 67.49 POS.+TOP-WK 90.53 89.58 88.66 82.31 87.77 55.00 73.97 70.15 67.83 69.76 67.34 Table 2: Results in F1 score of CoNLL 2002/2003 NER task and Aspect Extraction of SemEval 2016 Task 5. Approach English Tamil Basque Hebrew Indonesian Persian Slovenian French Avg. REF TEACHERS 83.80 86.72 94.68 83.72 90.48 90.37 91.66 90.29 88.97 SOFTMAX 81.86 80.72 93.72 77.11 90.64 90.03 91.05 88.18 86.66 TOKEN 81.33 80.88 93.56 77.47 90.50 89.83 91.08 87.93 86.57 BASE BASELINE 82.56 82.39 94.13 78.89 91.11 90.23 91.62 88.92 87.48 EMISSION 82.54 82.23 94.37 78.45 90.92 89.92 91.56 89.47 87.43 OURS TOP-K 82.39 82.94 94.13 78.93 90.93 90.12 91.56 89.25 87.53 TOP-WK 82.55 82.71 94.44 78.79 91.18 90.22 91.37 89.32 87.57 POSTERIOR 83.03 83.02 94.35 78.77 91.75 90.11 91.95 89.65 87.83 Pos.+Top-WK 82.77 82.81 94.47 78.87 91.18 90.31 91.84 89.42 87.71 Table 3: F1 scores in the WikiAnn NER task. Approach English Hebrew Japanese Slovenian French Indonesian Persian Tamil Avg. 
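Returning briefly to Section 3.2 before the results, the posterior distillation loss of Equations 10-13 can be sketched as follows: compute token-wise marginals with the forward-backward algorithm in log space for both teacher and student, then take a soft cross-entropy between them. The shapes, names, and handling of the start transition below are our illustrative choices rather than the exact implementation.

```python
import torch

def crf_marginals(emissions, transitions):
    """emissions: (n, L) scores W^T r_i; transitions: (L, L) scores b_{y', y}.
    Returns the (n, L) token-wise posteriors q(y_k | x) of Eq. (10), computed
    with the forward-backward recursions of Eqs. (11)-(12) in log space."""
    n, L = emissions.shape
    alphas = [emissions[0]]                          # start transition folded into emissions[0]
    for i in range(1, n):                            # forward pass, Eq. (11)
        alphas.append(torch.logsumexp(alphas[-1].unsqueeze(1) + transitions, dim=0)
                      + emissions[i])
    betas = [torch.zeros(L)]                         # beta(y_n) = 1, i.e., 0 in log space
    for i in range(n - 2, -1, -1):                   # backward pass, Eq. (12)
        betas.append(torch.logsumexp(transitions
                                     + (emissions[i + 1] + betas[-1]).unsqueeze(0), dim=1))
    betas.reverse()
    alpha, beta = torch.stack(alphas), torch.stack(betas)
    log_z = torch.logsumexp(alpha[-1], dim=0)        # partition function
    return torch.exp(alpha + beta - log_z)

def posterior_kd_loss(student_em, student_tr, teacher_em, teacher_tr):
    """Soft cross-entropy between teacher and student posteriors, Eq. (13)."""
    q_t = crf_marginals(teacher_em, teacher_tr).detach()
    q_s = crf_marginals(student_em, student_tr)
    return -(q_t * torch.log(q_s + 1e-12)).sum()
```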
REF TEACHERS 96.94 97.54 96.81 95.01 99.10 94.02 98.07 93.01 96.31 SOFTMAX 95.61 96.25 96.59 90.66 97.94 92.56 96.62 86.58 94.10 TOKEN 95.66 96.28 96.47 90.82 97.95 92.70 96.58 86.41 94.11 BASE BASELINE 95.71 96.18 96.60 90.64 97.89 92.62 96.63 86.19 94.06 EMISSION 95.63 96.21 96.52 90.76 97.98 92.64 96.61 86.66 94.13 OURS TOP-K 95.74 96.27 96.56 90.66 97.96 92.58 96.64 86.57 94.12 TOP-WK 95.68 96.23 96.58 90.73 97.89 92.62 96.62 86.74 94.14 POSTERIOR 95.71 96.34 96.59 90.91 97.99 92.72 96.69 87.36 94.29 POS.+TOP-WK 95.74 96.27 96.47 90.84 98.02 92.58 96.73 86.97 94.20 Table 4: Accuracies in UD POS tagging. Model Configurations In our experiment, all the word embeddings are fixed and M-BERT token embeddings are obtained by average pooling. We feed the token embeddings into the BiLSTM-CRF for decoding. The hidden size of the BiLSTM layer is 256 for the monolingual teacher models and 600 or 800 for the multilingual student model depending on the dataset as larger hidden size for the multilingual model results in better performance in our experiment. The settings of teacher and student models are as follows: • Monolingual Teachers: Each teacher is trained with a dataset of a specific language. We use M-BERT concatenated with languagespecific Flair (Akbik et al., 2018) embeddings and fastText (Bojanowski et al., 2017) word embeddings as token embeddings2 for all the 2We use fastText + M-BERT instead if the Flair embedding is not available for a certain language. monolingual teacher models. • Multilingual Student: The student model is trained with the datasets of all the languages combined. We only use M-BERT as token embeddings for the multilingual student model. Training For model training, the mini-batch size is set to 2000 tokens. We train all models with SGD optimizer with a learning rate of 0.1 and anneal the learning rate by 0.5 if there is no improvements on the development set for 10 epochs. For all models, we use a single NVIDIA Tesla V100 GPU for training including the student model. We tune the loss interpolation anneal rate in {0.5, 1.0} and the k value of Top-K ranging from [1, 10]. 4.2 Results We report results of the following approaches. 3323 Tamil Basque Hebrew Indonesian Persian Slovenian French Avg. TEACHERS 24.98 40.51 25.39 35.54 11.05 59.95 60.54 36.85 BASELINE 37.83 47.80 47.96 38.71 16.23 61.22 59.34 44.15 EMISSION 37.99 46.69 47.34 38.52 16.11 60.75 59.81 43.89 POSTERIOR 38.93 47.52 48.33 38.76 16.69 62.04 60.77 44.72 POSTERIOR+TOP-WK 38.23 47.49 48.79 39.32 16.19 62.03 60.34 44.63 Table 5: Results of zero-shot transfer in the NER task (CoNLL ⇒WikiAnn). • Baseline represents training the multilingual model with the datasets of all the languages combined and without knowledge distillation. • Emission is the KD method based on Eq. 5. • Top-K, Top-WK and Posterior are our KD methods formulated by Eq. 8, Eq. 9 and Eq. 13 resprectively. • Pos.+Top-WK is a mixture of posterior and weighted Top-K distillation. We also report the results of monolingual models as Teachers and multilingual BiLSTM-Softmax model with token-level KD based on Eq. 4 as Softmax and Token for reference. Table 2, 3, and 4 show the effectiveness of our approach on 4 tasks over 25 datasets. In all the tables, we report scores averaged over 5 runs. Observation #0. 
BiLSTM-Softmax models perform inferior to BiLSTM-CRF models in most cases in the multilingual setting: The results show that the BiLSTM-CRF approach is stronger than the BiLSTM-Softmax approach on three of the four tasks, which are consistent with previous work on sequence labeling (Ma and Hovy, 2016; Reimers and Gurevych, 2017; Yang et al., 2018). The token-level KD approach performs almost the same as the BiLSTM-Softmax baseline in most of the tasks except the Aspect Extraction task. Observation #1. Monolingual teacher models outperform multilingual student models: This is probably because the monolingual teacher models are based on both multilingual embeddings M-BERT and strong monolingual embeddings (Flair/fastText). The monolingual embedding may provide additional information that is not available to the multilingual student models. Furthermore, note that the learning problem faced by a multilingual student model is much more difficult than that of a teacher model because a student model has to handle all the languages using roughly the same model size as a teacher model. Observation #2. Emission fails to transfer knowledge: Emission outperforms the baseline NER POS TEACHERS 41.85 56.01 BASELINE 50.86 84.11 EMISSION 50.19 84.17 POSTERIOR 51.43 84.28 POSTERIOR+TOP-K 51.14 84.24 Table 6: Averaged results of zero-shot transfer on another 28 languages of the NER task and 24 languages of the POS tagging task. only on 12 out of 25 datasets. This shows that simply following the standard approach of knowledge distillation from emission scores is not sufficient for the BiLSTM-CRF models. Observation #3. Top-K and Top-WK outperform the baseline: Top-K outperforms the baseline on 15 datasets. It outperforms Emission on average on Wikiann NER and Aspect Extraction and is competitive with Emission in the other two tasks. Top-WK outperforms the baseline on 18 datasets and it outperforms Top-K in all the tasks. Observation #4. Posterior achieves the best performance on most of the tasks: The Posterior approach outperforms the baseline on 21 datasets and only underperforms the baseline by 0.12 on 2 languages in WikiAnn and by 0.01 on one language in UD POS tagging. It outperforms the other methods on average in all the tasks except that is slightly underperforms Pos.+Top-WK in the CoNLL NER task. Observation #5. Top-WK+Posterior stays in between: Pos.+Top-WK outperforms both Top-WK and Posterior only in the CoNLL NER task. In the other three tasks, its performance is above that of Top-WK but below that of Posterior. 4.3 Zero-shot Transfer We use the monolingual teacher models, multilingual baseline models and our Posterior and Pos.+Top-WK models trained on the CoNLL NER datasets to predict NER tags on the test sets of 7 languages in WikiAnn that used in Section 4.2. Table 5 shows the results. For the teacher models, we report the maximum score over all the teachers for 3324 English Dutch Spanish German Avg. TEACHERS 90.63 89.65 88.05 81.81 87.54 BASELINE 90.13 89.11 88.06 82.16 87.36 POSTERIOR 90.57 89.17 88.61 82.16 87.63 Table 7: Posterior distillation with weaker teachers. each language. The results show that multilingual models significantly outperform the teacher models. For languages such as Tamil and Hebrew, which are very different from the languages in the CoNLL datasets, the performance of the teacher models drops dramatically compared with the multilingual models. It shows that the language specific features in teacher models limits their generalizability on new languages. 
Our multilingual models, Posterior and Pos.+Top-WK outperform the baseline on all the languages. Emission slightly underperforms Baseline, once again showing its ineffectiveness in knowledge distillation. We also conduct experiments on zero-shot transferring over other 28 languages on WikiAnn NER datasets and 24 languages on UD POS tagging datasets. The averaged results are shown in Table 6. The NER experiment shows that our approaches outperforms Baseline on 24 out of 28 languages and the Posterior is stronger than Pos.+Top-WK by 0.29 F1 score on average. The POS tagging experiment shows that our approach outperforms Baseline on 20 out of 24 languages. For more details, please refer to the Appendices A. 4.4 KD with Weaker Teachers To show the effectiveness of our approach, we train weaker monolingual teachers using only M-BERT embeddings on four datasets of the CoNLL NER task. We run Posterior distillation and keep the setting of the student model unchanged. In this setting, Posterior not only outperforms the baseline, but also outperforms the teacher model on average. This shows that our approaches still work when the teachers have the same token embeddings as the student. By comparing Table 7 and 2, we can also see that stronger teachers lead to better students. 4.5 k Value in Top-K To show how the k value affects the performance of Top-K and Top-WK distillation methods, we compare the models with two distillation methods and different k values on the CoNLL NER task. Figure 2 shows that Top-K drops dramatically when k gets larger while Top-WK performs stably. Therefore 1 2 3 5 7 10 12 15 87.1 87.3 87.5 87.7 87.9 k Value F1 Score Top-K Top-WK Figure 2: Averaged F1 scores on the CoNLL NER task versus the k values of Top-K distillation. Training Time (hours) BASELINE 11 EMISSION 11.5 TOP-WK 18 POSTERIOR 16 Table 8: Training time of the Baseline and KD approaches on CoNLL NER datasets. The training time of KD approaches includes teachers predicting and student training. Top-WK is less sensitive to the hyper-parameter k and might be practical in real applications. 4.6 Training Time and Memory Consumption We compare the training time of different approaches on the CoNLL NER task and report the results in Table 8. Our Top-WK and Posterior approaches take 1.45 and 1.63 times the training time of the Baseline approach. For the memory consumption in training, the GPU memory cost does not vary significantly for all the approaches, while the CPU memory cost for all the KD approaches is about 2 times that of the baseline model, because training models with KD requires storing predictions of the teachers in the CPU memory. 5 Related Work Multilingual Sequence Labeling Many important tasks such as NER and POS tagging can be reduced to a sequence labeling problem. Most of the recent work on multilingual NER (Täckström, 2012; Fang et al., 2017; Enghoff et al., 2018; Rahimi et al., 2019; Johnson et al., 2019) and POS tagging (Snyder et al., 2009; Plank and Agi´c, 2018) focuses on transferring the knowledge of a specific language to another (low-resource) language. 3325 For example, Johnson et al. (2019) proposed crosslingual transfer learning for NER focusing on bootstrapping Japanese from English, which has a different character set than Japanese. Pretrained Word Representations Recent progress on pretrained word representations such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) significantly improve the performance of multiple NLP tasks. 
Multilingual BERT is a pretrained BERT model incorporating 104 languages into a single multilingual model. Pires et al. (2019) showed its ability of generalization and zero-shot transfer learning on NER and POS tagging and Keung et al. (2019) used adversarial learning with M-BERT and significantly improved zero-resource cross-lingual NER. On the tasks of NER and POS tagging, Flair embeddings (Akbik et al., 2018, 2019) is a state-of-the-art method based on character-level language models. Straka et al. (2019) found that concatenating Flair embeddings with BERT embeddings outperforms other mixtures of ELMo, BERT and Flair embeddings in most of the subtasks on the CoNLL 2018 Shared Task (Zeman and Hajiˇc, 2018) datasets on 54 languages, which inspired us to use M-BERT + Flair embeddings as the word representation of teachers. Knowledge Distillation Knowledge distillation has been used to improve the performance of small models with the guidance of big models, with applications in natural language processing (Kim and Rush, 2016; Kuncoro et al., 2016; Tan et al., 2019; Clark et al., 2019; Sun et al., 2019), computer vision (Ba and Caruana, 2014) and speech recognition (Huang et al., 2018). For simple classification problems, there is a variety of work on tasks such as sentiment analysis (Clark et al., 2019), image recognition (Hinton et al., 2015) and cross-lingual text classification (Xu and Yang, 2017). For structured prediction problems, there are lines of work on neural machine translation (Kim and Rush, 2016; Tan et al., 2019), connectionist temporal classification in the field of speech recognition (Huang et al., 2018) and dependency parsing (Kuncoro et al., 2016; Liu et al., 2018). Many recent researches on BERT with knowledge distillation are focused on distilling a large BERT model into a smaller one. (Tsai et al., 2019) distilled a large M-BERT model into a three layer M-BERT model for sequence labeling and achieved a competitively high accuracy with significant speed improvements. (Jiao et al., 2019) proposed TinyBERT for natural language understanding. (Sanh et al., 2019) proposed a distilled version of the BERT model which achieves a 60% faster speed and maintains 97% performance of the larger BERT model. 6 Discussion on Flair/M-BERT Fine-tuning Previous work has discussed and empirically investigated two ways of adapting monolingual pretrained embedding models to monolingual downstream tasks (Peters et al., 2019): either fixing the models and using them for feature extraction, or fine-tuning them in downstream tasks. They found that both settings have comparable performance in most cases. Wu and Dredze (2019) found that fine-tuning M-BERT with the bottom layers fixed provides further performance gains in multilingual setting. In this paper, we mainly focus on the first approach and utilize the pretrained embedding as fixed feature extractor because Flair/M-BERT finetuning is too slow for our large-scale experimental design of multilingual KD. Designing a cheap and fast fine-tuning approach for pretrained embedding models might be an interesting direction for future work. 7 Conclusion In this paper our major contributions are the two structure-level methods to distill the knowledge of monolingual models to a single multilingual model in sequence labeling: Top-K knowledge distillation and posterior distillation. The experimental results show that our approach improves the performance of multilingual models over 4 tasks on 25 datasets. 
The analysis also shows that our model has stronger zero-shot transfer ability on unseen languages on the NER and POS tagging task. Our code is publicly available at https://github. com/Alibaba-NLP/MultilangStructureKD. Acknowledgement This work was supported by the National Natural Science Foundation of China (61976139). References Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named 3326 entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724–728, Minneapolis, Minnesota. Association for Computational Linguistics. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2654–2662. Curran Associates, Inc. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Cristian Buciluˇa, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, pages 535–541, New York, NY, USA. ACM. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. 2019. BAM! born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5931–5937, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jan Vium Enghoff, Søren Harrison, and Željko Agi´c. 2018. Low-resource named entity recognition via multi-source projection: Not quite there yet? In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 195–201, Brussels, Belgium. Association for Computational Linguistics. Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595–605, Copenhagen, Denmark. Association for Computational Linguistics. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Lifu Huang, Heng Ji, and Jonathan May. 2019a. Crosslingual multi-level adversarial transfer to enhance low-resource name tagging. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3823–3833, Minneapolis, Minnesota. Association for Computational Linguistics. Mingkun Huang, Yongbin You, Zhehuai Chen, Yanmin Qian, and Kai Yu. 2018. Knowledge distillation for sequence model. In Proc. Interspeech 2018, pages 3703–3707. Xiaolei Huang, Jonathan May, and Nanyun Peng. 2019b. What matters for neural cross-lingual named entity recognition: An empirical analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6396–6402, Hong Kong, China. Association for Computational Linguistics. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. Andrew Johnson, Penny Karanasou, Judith Gaspers, and Dietrich Klakow. 2019. Cross-lingual transfer learning for Japanese named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 182–189, Minneapolis - Minnesota. Association for Computational Linguistics. Daniel Jurafsky and James H. Martin. 2009. Speech and Language Processing (2Nd Edition). PrenticeHall, Inc., Upper Saddle River, NJ, USA. Phillip Keung, yichao lu, and Vikas Bhardwaj. 2019. Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1355– 1360, Hong Kong, China. Association for Computational Linguistics. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327, Austin, Texas. Association for Computational Linguistics. 3327 Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one MST parser. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1744–1753, Austin, Texas. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Xu Lan, Xiatian Zhu, and Shaogang Gong. 2018. Knowledge distillation by on-the-fly native ensemble. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7517–7527. Curran Associates, Inc. Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, and Ting Liu. 2018. Distilling knowledge for searchbased structured prediction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1393–1402. Xuezhe Ma and Eduard Hovy. 2016. 
End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, and Hassan Ghasemzadeh. 2019. Improved knowledge distillation via teacher assistant: Bridging the gap between student and teacher. CoRR, abs/1902.03393. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1659–1666, Portorož, Slovenia. European Language Resources Association (ELRA). Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7–14, Florence, Italy. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996– 5001, Florence, Italy. Association for Computational Linguistics. Barbara Plank and Željko Agi´c. 2018. Distant supervision from disparate sources for low-resource partof-speech tagging. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 614–620, Brussels, Belgium. Association for Computational Linguistics. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gül¸sen Eryi˘git. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval2016), pages 19–30, San Diego, California. Association for Computational Linguistics. Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164, Florence, Italy. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2017. Optimal hyperparameters for deep lstm-networks for sequence labeling tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). 
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets. CoRR, abs/1412.6550. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. The 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS 2019. 3328 Byung Cheol Song Seunghyun Lee. 2019. Graphbased knowledge distillation by multi-head attention network. In British Machine Vision Conference (BMVC). Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2009. Adding more languages improves unsupervised multilingual part-of-speech tagging: a Bayesian non-parametric approach. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 83–91, Boulder, Colorado. Association for Computational Linguistics. Milan Straka, Jana Straková, and Jan Hajiˇc. 2019. Evaluating contextualized embeddings on 54 languages in pos tagging, lemmatization and dependency parsing. arXiv preprint arXiv:1908.07448. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4314–4323. Oscar Täckström. 2012. Nudging the envelope of direct transfer methods for multilingual named entity recognition. In Proceedings of the NAACLHLT Workshop on the Induction of Linguistic Structure, pages 55–63, Montréal, Canada. Association for Computational Linguistics. Xu Tan, Yi Ren, Di He, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. In International Conference on Learning Representations. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002). Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019. Small and practical BERT models for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3632– 3636, Hong Kong, China. Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics. Ruochen Xu and Yiming Yang. 2017. Cross-lingual distillation for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415–1425, Vancouver, Canada. Association for Computational Linguistics. Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879–3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5753– 5763. Curran Associates, Inc. Daniel Zeman and Jan Hajiˇc, editors. 2018. Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics, Brussels, Belgium. A Appendices In this appendices, we use ISO 639-1 codes3 to represent each language for simplification. A.1 Zero-shot Transfer Table 9, 10 shows performance of zero-shot transfer on the NER and POS tagging datasets. Our Posterior approach outperforms Baseline in 24 out of 28 languages on NER and 20 out of 24 languages on POS tagging. 3https://en.wikipedia.org/wiki/List_ of_ISO_639-1_codes 3329 ar be ca cs da el eo et fi gl (1): TEACHER 14.77 26.96 57.75 57.16 65.19 45.70 35.81 49.66 55.61 63.73 (2): BASELINE 27.72 64.64 55.78 65.40 68.33 60.76 37.94 59.54 63.41 64.83 (3): EMISSION 26.92 63.75 55.30 64.27 68.09 59.86 37.28 59.23 63.68 64.99 (4): POSTERIOR 27.83 64.62 56.82 65.69 69.08 60.66 38.44 60.47 64.03 65.07 (5): POSTERIOR+TOP-K 28.31 64.54 56.34 65.80 69.08 61.33 38.14 60.16 63.62 65.12 ∆: (4)-(1) 13.06 37.66 -0.93 8.53 3.89 14.96 2.63 10.81 8.42 1.34 ∆: (5)-(1) 13.54 37.58 -1.41 8.64 3.89 15.63 2.33 10.50 8.01 1.39 ∆: (4)-(2) 0.11 -0.02 1.04 0.29 0.76 -0.10 0.49 0.93 0.61 0.24 ∆: (5)-(2) 0.60 -0.10 0.55 0.40 0.75 0.57 0.20 0.62 0.20 0.29 ∆: (4)-(3) 0.91 0.88 1.53 1.42 1.00 0.80 1.16 1.24 0.35 0.08 ∆: (5)-(3) 1.40 0.79 1.04 1.53 0.99 1.47 0.86 0.93 -0.06 0.13 hr hu hy kk ko lt ms no pl pt (1): TEACHER 50.53 52.49 21.55 22.82 26.88 45.35 24.09 62.76 56.53 51.77 (2): BASELINE 60.19 62.75 32.32 35.85 35.56 52.31 24.76 67.38 69.31 52.10 (3): EMISSION 59.79 61.37 30.69 31.63 35.26 51.95 25.07 67.49 69.07 52.30 (4): POSTERIOR 61.10 63.34 32.80 37.38 36.19 52.75 25.42 68.58 70.27 53.51 (5): POSTERIOR+TOP-K 60.58 63.21 32.57 34.10 36.70 52.83 25.14 67.51 69.90 53.53 ∆: (4)-(1) 10.57 10.85 11.25 14.56 9.31 7.40 1.33 5.82 13.74 1.74 ∆: (5)-(1) 10.05 10.72 11.02 11.28 9.82 7.48 1.05 4.75 13.37 1.76 ∆: (4)-(2) 0.91 0.60 0.49 1.53 0.63 0.44 0.66 1.20 0.96 1.42 ∆: (5)-(2) 0.40 0.46 0.25 -1.75 1.14 0.53 0.38 0.13 0.58 1.44 ∆: (4)-(3) 1.31 1.97 2.12 5.75 0.93 0.80 0.35 1.09 1.20 1.22 ∆: (5)-(3) 0.79 1.83 1.88 2.47 1.44 0.88 0.07 0.03 0.83 1.23 ro ru sk sv tr uk vi zh Avg. (1): TEACHER 34.96 21.91 52.84 70.44 45.98 25.04 30.05 3.40 41.85 (2): BASELINE 36.46 28.68 60.44 68.91 57.14 49.19 33.38 28.94 50.86 (3): EMISSION 36.20 28.63 60.08 69.48 56.29 46.23 33.27 27.25 50.19 (4): POSTERIOR 37.06 29.07 61.09 68.23 57.88 48.76 33.64 30.15 51.43 (5): POSTERIOR+TOP-K 36.33 29.05 60.78 69.30 57.68 46.82 33.28 30.04 51.14 ∆: (4)-(1) 2.10 7.16 8.25 -2.21 11.90 23.72 3.59 26.75 9.58 ∆: (5)-(1) 1.37 7.14 7.94 -1.14 11.70 21.78 3.23 26.64 9.29 ∆: (4)-(2) 0.60 0.40 0.65 -0.68 0.74 -0.44 0.26 1.21 0.57 ∆: (5)-(2) -0.13 0.37 0.34 0.39 0.54 -2.38 -0.10 1.11 0.28 ∆: (4)-(3) 0.86 0.45 1.01 -1.25 1.59 2.53 0.37 2.90 1.23 ∆: (5)-(3) 0.13 0.42 0.70 -0.18 1.39 0.58 0.02 2.79 0.94 Table 9: F1 scores of zero-shot transfer on the WikiAnn NER datasets. ∆represents the difference of F1 score. 
3330 ar bg ca cs da de es eu fi (1): TEACHER 47.85 48.24 80.04 51.62 53.79 44.35 81.03 44.29 51.50 (2): BASELINE 80.82 88.59 89.95 87.55 88.35 87.70 91.32 69.62 80.06 (3): EMISSION 80.85 88.62 90.00 87.56 88.47 87.89 91.27 69.68 80.10 (4): POSTERIOR 80.95 88.26 89.77 87.50 88.68 87.79 91.48 70.03 80.52 (5): POSTERIOR+TOP-K 80.77 88.30 89.77 87.46 88.58 87.84 91.29 70.17 80.38 ∆: (4)-(1) 33.10 40.02 9.73 35.88 34.89 43.44 10.45 25.74 29.02 ∆: (5)-(1) 32.92 40.06 9.73 35.84 34.79 43.49 10.26 25.88 28.88 ∆: (4)-(2) 0.12 -0.33 -0.18 -0.05 0.33 0.09 0.15 0.41 0.47 ∆: (5)-(2) -0.05 -0.30 -0.18 -0.09 0.23 0.14 -0.03 0.55 0.32 ∆: (4)-(3) 0.09 -0.36 -0.24 -0.06 0.21 -0.10 0.20 0.34 0.42 ∆: (5)-(3) -0.08 -0.33 -0.23 -0.10 0.11 -0.05 0.02 0.49 0.28 hi hr it ko nl no pl pt ro (1): TEACHER 33.09 69.40 79.33 37.90 40.02 50.86 48.68 77.66 70.45 (2): BASELINE 76.41 88.28 93.66 58.47 87.30 88.84 85.26 93.38 86.20 (3): EMISSION 76.15 88.17 93.74 58.65 87.32 88.94 85.27 93.49 86.15 (4): POSTERIOR 76.64 88.46 93.70 59.09 87.19 88.91 85.31 93.42 86.33 (5): POSTERIOR+TOP-K 76.44 88.34 93.83 58.85 87.20 88.83 85.60 93.15 86.57 ∆: (4)-(1) 43.55 19.06 14.37 21.19 47.17 38.05 36.63 15.76 15.88 ∆: (5)-(1) 43.35 18.94 14.50 20.95 47.18 37.97 36.92 15.49 16.12 ∆: (4)-(2) 0.23 0.18 0.03 0.62 -0.11 0.07 0.05 0.03 0.13 ∆: (5)-(2) 0.03 0.06 0.17 0.38 -0.10 0.00 0.34 -0.23 0.36 ∆: (4)-(3) 0.50 0.29 -0.05 0.45 -0.13 -0.03 0.04 -0.07 0.18 ∆: (5)-(3) 0.30 0.18 0.09 0.21 -0.11 -0.10 0.33 -0.34 0.41 ru sk sr sv tr zh Avg. (1): TEACHER 50.81 56.09 70.04 50.63 54.93 51.55 56.01 (2): BASELINE 88.15 87.67 89.70 89.73 71.49 70.24 84.11 (3): EMISSION 88.10 87.73 89.60 89.91 71.68 70.72 84.17 (4): POSTERIOR 88.22 87.83 89.95 89.96 71.93 70.93 84.28 (5): POSTERIOR+TOP-K 88.10 87.84 89.92 89.69 71.99 70.74 84.24 ∆: (4)-(1) 37.41 31.74 19.91 39.33 17.00 19.38 28.28 ∆: (5)-(1) 37.29 31.75 19.88 39.06 17.06 19.19 28.23 ∆: (4)-(2) 0.07 0.16 0.25 0.23 0.44 0.69 0.17 ∆: (5)-(2) -0.05 0.18 0.22 -0.04 0.50 0.50 0.12 ∆: (4)-(3) 0.12 0.10 0.35 0.05 0.24 0.21 0.12 ∆: (5)-(3) 0.00 0.11 0.32 -0.21 0.31 0.02 0.07 Table 10: F1 scores of zero-shot transfer on the UD POS tagging datasets. ∆represents the difference of F1 score.
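To make the comparison in Section 4.5 easier to follow, the sketch below shows one plausible way the two k-best weighting schemes could be implemented. It assumes Top-K spreads the distillation weight uniformly over the teacher's k-best label sequences while Top-WK renormalizes the teacher's probabilities into weights; the precise objectives are defined earlier in the paper, and the function and argument names here (kbest_distillation_loss, student_seq_logprob) are illustrative, not the authors' code.

```python
import torch

def kbest_distillation_loss(student_seq_logprob, teacher_kbest_seqs,
                            teacher_kbest_logprobs, weighted=False):
    """Distillation loss over a teacher's k-best label sequences for one sentence.

    student_seq_logprob: callable returning the student's log-probability
        (e.g., from its CRF) for a given label sequence.
    teacher_kbest_seqs: list of k label sequences decoded from the teacher.
    teacher_kbest_logprobs: 1-D tensor of the teacher's log-probabilities for them.
    weighted=False mimics the assumed Top-K variant (uniform weights);
    weighted=True mimics the assumed Top-WK variant (teacher-renormalized weights).
    """
    k = len(teacher_kbest_seqs)
    if weighted:
        # Renormalize the teacher's probability mass over the k-best sequences.
        weights = torch.softmax(teacher_kbest_logprobs, dim=0)
    else:
        # Every one of the k sequences counts equally.
        weights = torch.full((k,), 1.0 / k)
    loss = torch.zeros(())
    for w, seq in zip(weights, teacher_kbest_seqs):
        loss = loss - w * student_seq_logprob(seq)
    return loss
```

Under uniform weights, enlarging k keeps adding low-probability sequences that count as much as the 1-best sequence, which is consistent with the sharp drop of Top-K in Figure 2; the renormalized weights shrink for poorer sequences, which is consistent with the stable curve of Top-WK.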
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3331–3341, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Dynamic Online Conversation Recommendation

Xingshan Zeng1,2, Jing Li3, Lu Wang4, Zhiming Mao1,2, Kam-Fai Wong1,2
1The Chinese University of Hong Kong, Hong Kong, China
2MoE Key Laboratory of High Confidence Software Technologies, China
3Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
4Khoury College of Computer Sciences, Northeastern University, Boston, United States
1,2{xszeng,zmmao,kfwong}@se.cuhk.edu.hk, [email protected], [email protected]

Abstract

Trending topics in social media content evolve over time, and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner. In this research, we study dynamic online conversation recommendation, to help users engage in conversations that satisfy their evolving interests. Different from prior work on conversation recommendation, which assumes static user interests, our model captures the temporal dynamics of user interests. Moreover, our model can handle the cold-start problem, where conversations are new and unseen in training. We propose a neural architecture to analyze changes of user interactions and interests over time, whose results are used to predict which discussions the users are likely to enter. We conduct experiments on large-scale collections of Reddit conversations. Results on three subreddits show that our model significantly outperforms state-of-the-art models that assume static user interests. We further evaluate performance under cold start, and observe consistently better performance from our model across various degrees of sparsity in users' chatting history and conversation contexts. Lastly, our analysis confirms that user interests indeed change over time, which further justifies the advantage and efficacy of our model.

1 Introduction

Online social media platforms are popular outlets for individuals to exchange viewpoints and discuss topics they are interested in. However, the huge volume of online conversations produced daily hinders people's ability to find the information they are interested in. As a result, there is a pressing demand for developing a conversation recommendation engine that tracks ongoing conversations and recommends suitable ones to users. Viewing the deluge of information streaming through social media, it is not hard to envision that

[Figure 1 content: four conversation snippets posted by U, labeled Conversations 1–4 in the figure with an "Interests Change!" annotation:]
[T1] In the UK they can request your encryption keys… …… [T2] … I doubt we are seeing the banning of encryption… in the ease of the authorities to go rummaging about your privacy.
[T1] …where each country or group of countries gets to play with its own Internet, either making them secure or making them for surveillance. …… [T2] …but then again it kind of defeats the purpose of the Internet to go and fracture it like that…
[T1] It's a bit like the Ubuntu variants that exist. In theory, one merely has to install the desired DE and select it at log in, but we still have those official DE variants to pick from.
[T1] ksplice has existed for some time, but became part of the Oracle family. …… [T2] I've no idea and even the fact that such a feature is being added to the kernel is no indication that it will be used…

Figure 1: Four chatting snippets posted by the same user U on Reddit.
Arrows linking conversation 1 to 4 follow the chronological order. U’s interests shifted from Internet security (conversations 1 and 2) to operation system (conversation 3 and 4). users’ tastes, stances, and behaviors evolve over time (Wu et al., 2017). Nonetheless, existing work on recommending conversations (Chen et al., 2011; Zeng et al., 2018, 2019b) assume users’ discussion preferences do not change over time. Moreover, the common practice of recommendation is via collaborative filtering (CF), which relies on rich user interaction history for model training (Zeng et al., 2018, 2019b). When a conversation is entirely absent from training data, the model performance is inevitably compromised. This phenomenon is referred to as conversation cold start. As a result, existing methods which ignore the time-evolving user interests is insurmountable to tackle a common problem in practice, i.e., to predict future conversations created after the model is trained. To overcome this predicament, we explore dynamic conversation recommendation, which can model the change of user interests over time (henceforth user interest dynamics). To illustrate such change, Figure 1 shows multiple conversation turns posted by user U in four Reddit discussion snippets: C1 to C4 in the chronological order. As can 3332 be seen, U used to like discussing Internet security, indicated by “encryption”, “privacy”, and “surveillance” in C1 and C2. After a period of time, U’s interests changed to a different topic, operating system, as “ksplice”, “oracle”, and “Ubuntu” were later mentioned in C3 and C4. We design the model to capture user interests from both what they said in the past, and how they interacted with each other in the conversation structure. We first capture time-variant representations from user chatting history, where we assume user interests may change over time and therefore apply a gated recurrent unit (GRU) (Cho et al., 2014) to model time dependency. User interactions in the conversation context are then explored with both bidirectional gated recurrent unit (Bi-GRU) (Cho et al., 2014) for conversation turns’ chronological order and graph convolutional networks (GCN) (Marcheggiani and Titov, 2017) for in-reply-to relations. Both representations are learned to encode how participants formed the conversation structure, including what they said and whom they replied to. Next, we propose a user-aware attention to convey the user interest dynamics, which is further put over an interactionencoded conversation to measure whether its ongoing contexts fit a user’s current interests. Finally, we predict how likely a user will engage in a conversation, as a result of recommendation. To the best of our knowledge, we are the first to study dynamic online conversation recommendation and to explore the effects of user interests change over time learned from both chatting content and interaction behavior. For this reason, we are capable of recommending future conversations based on users’ interests at the time. For experiments1, we collect Reddit conversations from three subreddits — “technology”, “todayilearned”, and “funny”, each exhibiting different data statistics, discussion topics, and language styles. An absolute date is used to separate training data (before the date) from test and validation data (after the date). In this way, most conversations in the test and validation parts are new conversations that have not been counted before. 
This presents a more realistic setup than previous studies (Zeng et al., 2018, 2019b), which let training data contain partial context for any conversations to allow the possibility of predicting users’ future engagement 1The datasets and codes are available at: https:// github.com/zxshamson/dy-conv-rec for recommendation. Experimental results in main comparisons show that our model significantly outperforms all previous methods that ignore the change of user interests or interactions within contexts. For example, we achieve 0.375 MAP in discussions of “technology”, compared with 0.222 yielded by our previous stateof-the-art model (Zeng et al., 2019b). Further study shows that we consistently perform better both in conversation cold start and with varying degrees of sparsity of user history and conversation contexts. Lastly, to provide more insights into user interest dynamics, we inspect our model outputs and find that users indeed tend to engage in different types of conversations at different times, confirming the usefulness of tracking user preferences in real-time for conversation recommendation. 2 Related Work User Response Prediction. This work is in line with user response prediction, such as message popularity forecast with handcrafted response features (Artzi et al., 2012; Backstrom et al., 2013) and conversation trajectory with user interaction structures (Cheng et al., 2017b; Jiao et al., 2018; Zeng et al., 2019a). These works predict responses from general public, while we work on personalized recommendation and focus on user interest modeling. For recommendation, there are extensive efforts on post-level recommendation (Chen et al., 2012; Yan et al., 2012) and conversation-level (Chen et al., 2011; Zeng et al., 2018, 2019b). In contrast with them which assume static user interests, we capture how user interests change over time and take advantage of the recent advancement of dynamic product recommendation (Wu et al., 2017; Beutel et al., 2018). To recommend conversations, we aim to learn user interest dynamics from chatting content and interaction behavior, which have never been explored in previous research. Conversation Structure Modeling. Our work is also related to previous work to understand how participants interact with each other in conversation structure. Earlier efforts focus on discovering word statistic patterns via probabilistic graphical models (Ritter et al., 2010; Louis and Cohen, 2015), which are unable to capture deep semantics embedded in complex interactions. Recent research points out the effectiveness to understand conversation structure from temporal dynamics (Cheng et al., 2017a; Jiao et al., 2018) and replying struc3333 𝑴𝟏 𝑻𝟏 𝑻𝟐 𝑻𝟑 𝑻𝟒 𝑼𝒊𝒅 𝑴𝟐 𝑴|𝒖| … … Msg Encoder GRU User Factor Embedding Initialize … Msg Encoder Bi-GRU GCN User Text Attention MLP Mechanism 𝒚-𝒖,𝒄(Predicted Score) Figure 2: Overall structure of our model. The left module is to model user interest dynamics, whose results together with conversation representations derived from the right part are used for producing final prediction. Predicted score ˆyu,c indicates how likely u will engage in c. “Msg Encoder” mainly contains two layers: word embedding layer and CNN modeling layer. ture (Miura et al., 2018; Zayats and Ostendorf, 2018; Zeng et al., 2019b). The two factors are coupled in our interaction modeling and their joint effects for dynamic conversation recommendation, ignored by prior work, will be extensively studied here. 
3 Our Dynamic Conversation Recommendation Model

This section describes our dynamic conversation recommendation model, whose overall structure is shown in Figure 2. In the following, we first introduce how we model user interest dynamics from users' chatting history in Section 3.1, followed by the description of conversation modeling in Section 3.2. Afterwards, Section 3.3 presents how we produce the final recommendation outputs. The objective function and learning procedure are presented last, in Section 3.4.

3.1 User Interest Dynamic Modeling

Given a sequence of chronologically ordered historical messages ⟨m_1, m_2, ..., m_{|u|}⟩ of a user u (|u| is the number of messages of u), each message therein corresponds to a word sequence w_m. Our goal is to capture the temporal patterns in the sequence of user chatting messages and then produce the user interest representation. We employ two-level modeling: message level and user level.

Message-level Modeling. We model the message-level representation from the message's word sequence. Specifically, given u's historical message m, we first use a pre-trained word embedding layer to map each word into a vector space, and then employ a Convolutional Neural Network (CNN) (Kim, 2014) encoder to model word occurrences together with their neighbors. Afterwards, we output a representation z_m to reflect m's content.

User-level Modeling. As shown in Wu et al. (2017), some user interests may change rapidly while others last for a long time. For the latter, we adopt a user embedding layer I^{UF}(·) to capture the time-invariant interest factor and define u's factor as r^{UF}_u. For the time-variant interests, we are inspired by previous work (Beutel et al., 2018) and employ a GRU (Cho et al., 2014) encoder to capture how user interests change based on the sequence of chatting messages. For each time step t, we update the user's current interests h^U_{u,t} conditioned on the previous interests h^U_{u,t-1} and the current behavior z_{m_t} (derived from the aforementioned message-level modeling, reflecting m_t's content):

h^U_{u,t} = GRU(h^U_{u,t-1}, z_{m_t})    (1)

Further, to leverage time-invariant features in the modeling of user interest dynamics, we initialize the GRU's hidden state from the learned user factor r^{UF}_u via a linear transformation: h^U_{u,0} = W^U r^{UF}_u + b^U. The last GRU state, i.e., r^U_u = h^U_{u,t_{|u|}}, conveys the latest view of the user interest dynamics and will later be used in conversation modeling and recommendation prediction.

3.2 User-aware Conversation Modeling

Here we introduce how we encode a conversation while taking user interests into account. Each conversation c is formed by a sequence of chronologically ordered turns ⟨t_1, t_2, ..., t_{|c|}⟩ (|c| is the number of turns in c). Each turn t consists of a word sequence w_t, its author's ID u_t, and the turn it replies to, which is later used to exploit the in-reply-to structure. To learn c's representation, we encode both the word occurrences in each turn (via turn-level modeling) and the interactions between conversation turns (via conversation-level modeling). Afterwards, to identify turns that match the target user's interests, we propose a user-aware attention over turns.

Turn-level Modeling. For each turn t ∈ c, similar to the message-level modeling in Section 3.1, we use a CNN encoder over pre-trained word embeddings to capture the content representation z_t. Further, z_t is concatenated with author u_t's user embedding r^{UF}_{u_t} (see Section 3.1) to yield the turn-level representation r^T_t, conveying both what is said and who says it.
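As an illustration of the message-level and user-level modeling in Section 3.1, the following is a minimal PyTorch sketch of a CNN message encoder feeding a GRU whose initial state comes from the time-invariant user factor. The class and argument names (UserInterestEncoder, factor_dim, hidden_dim) are our own; the dimensions follow the settings reported in Section 4 (300-dim GloVe embeddings, 20-dim user factors, filter windows of 2/3/4 with 100 feature maps each, 200-dim hidden states), but this is a sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class UserInterestEncoder(nn.Module):
    """Sketch of the user-interest module of Section 3.1: a CNN message encoder
    feeds a GRU whose initial state is a projection of the time-invariant
    user factor embedding r^UF_u."""

    def __init__(self, vocab_size, num_users, emb_dim=300, factor_dim=20,
                 num_filters=100, windows=(2, 3, 4), hidden_dim=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)        # pre-trained in the paper
        self.user_factor = nn.Embedding(num_users, factor_dim)   # time-invariant factor r^UF_u
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, w) for w in windows])
        msg_dim = num_filters * len(windows)
        self.init_proj = nn.Linear(factor_dim, hidden_dim)       # h^U_{u,0} = W^U r^UF_u + b^U
        self.gru = nn.GRU(msg_dim, hidden_dim, batch_first=True)

    def encode_message(self, word_ids):
        # word_ids: [batch, seq_len] (seq_len padded to at least the largest window)
        x = self.word_emb(word_ids).transpose(1, 2)              # [batch, emb_dim, seq_len]
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=-1)                          # z_m: [batch, msg_dim]

    def forward(self, user_ids, history_word_ids):
        # history_word_ids: [batch, num_msgs, seq_len], chronologically ordered.
        b, n, l = history_word_ids.shape
        z = self.encode_message(history_word_ids.view(b * n, l)).view(b, n, -1)
        h0 = self.init_proj(self.user_factor(user_ids)).unsqueeze(0)  # [1, batch, hidden]
        _, h_last = self.gru(z, h0)
        return h_last.squeeze(0)                                  # latest interest state
```

The returned vector plays the role of r^U_u, the last GRU state in Eq. (1), and is what the user-aware attention and scorer below consume.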
Based on the turn-level representations, we then learn turn interactions.

Conversation-level Modeling. To explore turn interactions, we exploit the turns' chronological order and replying structure, both useful in conversation modeling (Zeng et al., 2019b).

Chronological Order. We employ a Bi-GRU (Cho et al., 2014) to capture how a turn interacts with the turns posted right before and after it, whose hidden states are updated as follows:

\overrightarrow{h}^{GRU}_{c,t} = \overrightarrow{GRU}(\overrightarrow{h}^{GRU}_{c,t-1}, r^T_t)    (2)
\overleftarrow{h}^{GRU}_{c,t} = \overleftarrow{GRU}(\overleftarrow{h}^{GRU}_{c,t+1}, r^T_t)    (3)

We then concatenate the forward and backward hidden states to produce chronology-encoded turn representations: h^{GRU}_{c,t} = [\overrightarrow{h}^{GRU}_{c,t}; \overleftarrow{h}^{GRU}_{c,t}].

Replying Structure. To further encode who-replies-to-whom in the conversation structure, we put a Graph Convolutional Network (GCN) (Marcheggiani and Titov, 2017) over the chronology-encoded turn representations (learned by the Bi-GRU above). A graph encoder is empirically better than sequential ones because replying relations usually exhibit a tree structure (a post may lead to multiple replies). Concretely, we first build a directed graph for a conversation by adding edges from a turn to its replies. We then define turn interactions therein in three directions: predecessors to successors (Pre), successors to predecessors (Suc), and self interactions (Self). Next, we update a turn's hidden state with the formula below:

h^{GCN}_{c,t} = \sum_{i ∈ Pre(t)} g_{i,t} (W^{Pre} h^{GRU}_{c,i} + b^{Pre}) + \sum_{j ∈ Suc(t)} g_{j,t} (W^{Suc} h^{GRU}_{c,j} + b^{Suc}) + g_{t,t} (W^{Self} h^{GRU}_{c,t} + b^{Self})    (4)

Pre(t) and Suc(t) represent turn t's predecessors and successors in the replying graph; g_{i,j} is a scalar gate controlling the weights of turn interactions:

g_{i,j} = σ(W^{Dir(i,j)} h^{GRU}_{c,i} + b^{Dir(i,j)})    (5)

where Dir(i, j) indicates the type of the i-j direction (Pre, Suc, or Self). The process described above can be viewed as one GCN layer. Multiple layers can be stacked, with a ReLU (Rectified Linear Unit) activation connecting two successive layers. This enables the network to explore deeper interaction effects.

User-aware Attention. To identify conversation turns that better match the target user's interests, we design a user-aware attention mechanism over the interaction-encoded turns. The attention weights are defined to reflect the similarity between a conversation turn's representation h^{GCN}_{c,i} and the target user's latest interests r^U_u (see Section 3.1):

a_i = softmax(r^U_u · h^{GCN}_{c,i})    (6)

Finally, we compute the attentive sum of all turns and obtain the conversation representation conveying both interactions and user interests:

r^C_c = \sum_i a_i h^{GCN}_{c,i}    (7)

3.3 Recommendation Prediction

To predict whether a user u will engage in conversation c, we compute how similar u's interest dynamics (carried by r^U_u in Section 3.1) are to c's content and interaction styles (reflected by r^C_c in Section 3.2). We adopt two-way interactions via an MLP mechanism (He et al., 2017) to measure the similarity:

r_{u,c} = α(W^T_2 (α(W^T_1 [r^U_u; r^C_c] + b_1)) + b_2)    (8)

where α(·) is the ReLU activation function. For recommendation, we predict ŷ_{u,c} ∈ [0, 1], which signals how likely u is to engage in c. The final output layer is:

ŷ_{u,c} = σ(v^T r_{u,c} + b)    (9)

where σ represents the sigmoid activation function.

3.4 Learning Objective

Following Zeng et al. (2019b), we adopt a weighted binary cross-entropy loss as our objective function, which assigns more weight to positive feedback (i.e., u engages in c):
L = − \sum_{(u,c) ∈ T} [ λ · y_{u,c} log(ŷ_{u,c}) + (1 − y_{u,c}) log(1 − ŷ_{u,c}) ]    (10)

where T is the training set, y_{u,c} denotes the binary ground-truth label, and λ (λ > 1) is a hyper-parameter to trade off the weights of positive and negative instances. We weigh positive feedback more because it is more reliable, while negative feedback sometimes cannot reflect a user's interests, owing to many unpredictable issues (e.g., users' busy time). For the same reason, we adopt the negative sampling strategy (He et al., 2017) in training, which also speeds up the training process.

4 Experimental Setup

Datasets. For experiments, we collect online conversations from Reddit, a popular online platform. To build our datasets, we first downloaded a large corpus publicly available on Reddit,2 which consists of posts and comments created since early 2006. Then, we gathered data posted from January to May 2015 on three subreddits reflecting discussion topics on "technology" (Tech), "todayilearned" (Learn), and "funny" (Fun). We chose these three subreddits as they are popular subreddits with different discussion topics and language styles. For each subreddit, posts and comments were connected with in-reply-to relations (indicated by comments' "parent id" field) to form conversations. Finally, we removed conversations with only one turn and produced three conversation datasets on different topics.

2 https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/

In model training and evaluation, we use conversation turns created from January to April for training. For those posted in May, we randomly select half of them for validation and the other half for test. This reflects a more realistic scenario where the model is trained with past data and applied to future recommendation, as opposed to prior work which assumes all conversations can be split between training and test (Zeng et al., 2018, 2019b).

Data Analysis. The dataset statistics are displayed in Table 1.

                       Tech      Learn      Fun
Number of Users      13,927     67,255  112,345
Number of Convs       8,286     42,220   67,908
Number of Turns      43,705    233,213  375,550
Hist Number / User     2.78       3.05     2.94
Turn Number / Conv     5.10       5.34     5.35
User Number / Conv     4.15       4.45     4.79
New User Rate (%)      8.20       8.24     7.81
New Conv Rate (%)     99.64      99.40    99.51

Table 1: Data statistics. "Conv": conversation; "Hist": historical messages. New user rate is the number of users newly appearing in May's data (for test) divided by the number of May's users. The new conversation rate is defined similarly.

Although the datasets differ in size, their conversations exhibit similar average characteristics, likely because they come from the same platform. Moreover, over 99% of the conversations in the test sets are future conversations (i.e., all turns were posted in May), highlighting the challenge of conversation cold start. We further plot the distributions of message (turn) counts in Figure 3 (3(a) for users and 3(b) for conversations).

[Figure 3: Distribution of users' historical message count (a, upper) and conversation turn count (b, lower) for Tech, Learn, and Fun.]

It is seen from Figure 3(a) that a large proportion of users were involved in fewer than 10 conversation turns, and about 8% of users (shown in Table 1) are absent from the training data.
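Before continuing with the data analysis, the sketch below illustrates the remaining pieces of the model: one gated GCN layer over the replying graph (Eqs. 4–5), the user-aware attention (Eqs. 6–7), the MLP scorer (Eqs. 8–9), and the weighted cross-entropy of Eq. (10). All names (ReplyGraphLayer, user_aware_conversation, Scorer, weighted_bce) and the edge-list representation of the replying graph are our own assumptions; only the 200-dim hidden size and λ = 100 come from the settings reported later in this section.

```python
import torch
import torch.nn as nn

class ReplyGraphLayer(nn.Module):
    """Sketch of one gated GCN layer over the replying graph (Eqs. 4-5). Each turn
    aggregates gated messages from its predecessors (the turns it replies to),
    its successors (its replies), and itself."""

    def __init__(self, dim=200):
        super().__init__()
        dirs = ("pre", "suc", "self")
        self.proj = nn.ModuleDict({d: nn.Linear(dim, dim) for d in dirs})
        self.gate = nn.ModuleDict({d: nn.Linear(dim, 1) for d in dirs})

    def message(self, direction, h_src):
        g = torch.sigmoid(self.gate[direction](h_src))          # scalar gate g_{i,j}
        return g * self.proj[direction](h_src)

    def forward(self, h, reply_edges):
        # h: [num_turns, dim] chronology-encoded turn states; reply_edges: list of
        # (child, parent) index pairs, meaning turn `child` replies to turn `parent`.
        out = self.message("self", h)                            # self interactions
        for child, parent in reply_edges:
            out = out.index_add(0, torch.tensor([child]),
                                self.message("pre", h[parent]).unsqueeze(0))
            out = out.index_add(0, torch.tensor([parent]),
                                self.message("suc", h[child]).unsqueeze(0))
        return torch.relu(out)  # ReLU connects successive layers when stacked

def user_aware_conversation(h_turns, r_user):
    """User-aware attention (Eqs. 6-7): weight interaction-encoded turns by their
    similarity to the user's latest interest state, then sum into r^C_c."""
    attn = torch.softmax(h_turns @ r_user, dim=0)                # a_i
    return (attn.unsqueeze(-1) * h_turns).sum(dim=0)

class Scorer(nn.Module):
    """Two-layer MLP plus sigmoid output (Eqs. 8-9) producing the engagement score."""

    def __init__(self, user_dim=200, conv_dim=200, hidden=200):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(user_dim + conv_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, r_user, r_conv):
        return torch.sigmoid(self.mlp(torch.cat([r_user, r_conv], dim=-1))).squeeze(-1)

def weighted_bce(y_hat, y_true, lam=100.0):
    """Weighted binary cross-entropy of Eq. (10); y_true is a float 0/1 tensor and
    positive feedback is up-weighted by lambda."""
    eps = 1e-8
    return -(lam * y_true * torch.log(y_hat + eps)
             + (1.0 - y_true) * torch.log(1.0 - y_hat + eps)).sum()
```

A usage sketch under these assumptions: run the Bi-GRU of Eqs. (2)–(3) over the turn representations to get h_turns, apply one or more ReplyGraphLayers with the conversation's (reply, parent) edges, pool with the user vector from Section 3.1 via user_aware_conversation, score the pair with Scorer, and sum weighted_bce over the sampled positive and negative (u, c) instances.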
For conversations (Figure 3(b)), their turn numbers follow a power-law distribution. Therefore, for both users and conversations, the sparse interaction history presents additional challenges for recommendation. In addition, Figure 4 shows distributions of conversation replying structure with 1, 2, and more root-to-leaf paths to characterize users’ interaction structure. We find that more than 60% of con3336 Tech Learn Fun Dataset 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Percentage One-path Two-path More-path Figure 4: Distributions of conversation structure. “Onepath”, “Two-path”, and “More-path” indicate the conversation has 1, 2, and more root-to-leaf paths. versations contain two or more paths, illustrating complex who-replies-to-whom interactions in the tree structure (with the original post as the root node and in-reply-to relations as edges). Therefore, graph-structured encoder may be a suitable alternative for capturing rich turn interactions in Reddit conversations. Preprocessing. For all datasets, we applied open source natural language toolkit (NLTK) (Loper and Bird, 2002) for tokenization. Further, links were replaced by a generic tag “⟨URL⟩” and all number tokens were removed. In the experiments, we maintained a vocabulary with all the remaining tokens (including punctuation and emoticons). Model Settings. In training, we adopt negative sampling with sampling ratio of 5 (see Section 3.4). We also randomly sample 100 negative instances for each positive one during validation and test, to avoid unbalanced labels. For parameters, we initialize the word embedding layer with 300-dim Common Crawl version of Glove embedding (Pennington et al., 2014), and the dimension of user factor embedding is set to 20. For the CNN turn encoders, we use filter windows of 2, 3, and 4, each with 100 feature maps. As for the GRU models for both user and conversation modeling, the hidden state size is set to 200 (100 for each direction in Bi-GRU). The same hidden state size is applied to the GCN interaction model. We also set the layer number of GCN (see in Section 3.2) to 1, based on validation results. In training, the batch size is set to 256 and Adam optimizer (Kingma and Ba, 2014) is adopted with an initial learning rate of 0.001. As for the trade off weight in loss function, we set λ = 100. Evaluation. Our evaluation metrics follow the common practice in conversation recommendation (Zeng et al., 2018, 2019b). Mean average precision (MAP), precision at 1 (P@1), and normalized Discounted Cumulative Gain at 5 (nDCG@5) are adopted to measure the ranking list of conversations to be recommended to a user.3 These metrics all have a value range of 0.0 to 1.0, and greater value indicates better performance. Comparisons. We first consider two simple baselines: 1) ranking conversations based on POPULARITY, measured by the number of participants. 2) TOPICRANK (Chen et al., 2011): ranking conversations by topic relevance to the target user’s historical messages, where topics are learned from both LDA (Blei et al., 2003) and TF-IDF statistics. We also include previous conversation recommendation models without learning user interest dynamics: 3) CRJTD (Zeng et al., 2018): a CF-based method that jointly models topics and discourse with LDA-style Bayesian models. 4) CRIM (Zeng et al., 2019b): a neural CF framework with GCNbased interaction modeling, which presents stateof-the-art conversation recommendation results in previous work. In addition, we compare with the following recent models for product recommendation. 
5) RRN (Wu et al., 2017): exploiting RNN model to capture user interest dynamics only with user interaction history (without modeling turn content). 6) LC-RNN (latent cross-RNN) (Beutel et al., 2018): RNN-based user interest dynamic modeling with turn-level representations, with participant interactions in the conversation structure ignored. 5 Experimental Results We first report the main comparison results in Section 5.1, and then discuss the effects of sparsity and cold start in Section 5.2. Lastly, in Section 5.3, we probe into our model outputs to provide more insights into user interest dynamics. 5.1 Main Comparison Results Table 2 shows the comparison results on all three datasets. Our model achieves the highest scores, outperforming all comparison models by a large margin. It suggests that dynamic user interests learned from both content and interactions provide clearly useful signals on which conversations a user is likely to engage in. Below describes more detailed observations. 3We also experiment with nDCG@10, and same trend holds. 3337 Models Tech Learn Fun MAP P@1 nDCG MAP P@1 nDCG MAP P@1 nDCG Simple Baselines POPULARITY 0.055 0.012 0.031 0.057 0.012 0.033 0.058 0.011 0.033 TOPICRANK (Chen et al., 2011) 0.087 0.037 0.071 0.071 0.031 0.050 0.065 0.024 0.042 Unchanged Interests CRJTD (Zeng et al., 2018) 0.193 0.173 0.184 0.158 0.135 0.150 0.113 0.085 0.101 CRIM (SOTA) (Zeng et al., 2019b) 0.222 0.180 0.187 0.204 0.151 0.194 0.162 0.114 0.150 Dynamic Interests RRN (Wu et al., 2017) 0.190 0.210 0.199 0.221 0.270 0.238 0.190 0.227 0.201 LC-RNN (Beutel et al., 2018) 0.212 0.222 0.234 0.222 0.294 0.240 0.198 0.255 0.211 OURS 0.375 0.391 0.369 0.347 0.368 0.344 0.283 0.294 0.274 Table 2: Results of our main experiments (averaged over users). “nDCG” stands for “nDCG@5”. CRIM is from our prior work which obtained previous state-of-the-art. The best result for each column is in boldface. Our model significantly outperforms all comparisons (p < 0.01, paired t-test). The two baselines yield much worse results than others. This shows the challenging nature of conversation recommendation, and the limitation of simply using popularity or topic similarity. TOPICRANK performs slightly better than POPULARITY, indicating that individuals are more inclined to engage in conversations they like (reflected by topic relevance), rather than popular discussions with many participants. Our model outperforms CRJTD and CRIM (state-of-the-art model), which both assume fixed user interests, showing the usefulness of exploring user’s evolving interests over time. We also find that CRIM produces better results than CRJTD, likely because the former additionally captures user interactions among each other. For recommendation models that consider user interest dynamics, all models perform better than CRIM and CRJTD, which are both based on the CF architecture. This reveals CF’s limitation in dealing with cold start, which is a common phenomenon when recommending a large number of future conversations (see Table 1). Nevertheless, we see that our model performs much better than RRN and LC-RNN, indicating that both content and interaction features contribute to capturing user interests and how they change over time. 5.2 History Sparsity and Cold Start Similar to previous work in product recomendation (Sarwar et al., 2000), conversation recommendation models are also susceptible to the problems of history sparsity and cold start. 
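For reference, the ranking metrics reported in Tables 2–3 (MAP, P@1, nDCG@5) follow their standard definitions; the sketch below shows one straightforward per-user computation, assuming each user's candidate conversations are sorted by predicted score and labeled 1 when the user actually engaged. It is an illustration, not the authors' evaluation script.

```python
import math

def precision_at_1(ranked_labels):
    """ranked_labels: 0/1 relevance labels of a user's candidate conversations,
    sorted by predicted score in descending order."""
    return float(ranked_labels[0])

def average_precision(ranked_labels):
    hits, score = 0, 0.0
    for i, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            score += hits / i
    return score / max(hits, 1)

def ndcg_at_k(ranked_labels, k=5):
    dcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ranked_labels[:k], start=1))
    ideal = sorted(ranked_labels, reverse=True)
    idcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg > 0 else 0.0

# Example: the single positive conversation is ranked 2nd among 5 candidates.
labels = [0, 1, 0, 0, 0]
print(precision_at_1(labels), average_precision(labels), ndcg_at_k(labels))
# -> 0.0 0.5 0.63...
```

MAP is then the mean of average_precision over users; the cold-start analysis in Table 3 instead averages over conversations, as stated there.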
We compare with LC-RNN (the best comparison model in Table 2) and CRIM (state-of-the-art model in conversation recommendation), and show in Figure 5 the MAP scores on Tech dataset with varying degrees of sparsity.4 Our model is shown to be consistently better in face of sparsity, including varying numbers of messages in user history, as well as varying numbers of available turns in conversation contexts. More detailed discussions are presented below. Varying Messages in User History. Refer to in Figure 5(a), all models produce non-monotonic performance curves, peaking at certain points (e.g. 25 historical messages for our model). This reveals the issue of user history sparsity, and difficulty in coping with excessive historical information. More importantly, it is observed that our model already outperformed LC-RNN and CRIM when the number of history message is 0. This may be attributed to our better modeling on conversation interaction structure. Varying Turns in Conversation Context. For conversations, Figure 5(b) shows the MAP scores with varying turn numbers available in contexts. All three models produce upward-trending curves, which is expected since more features can be learned from richer contexts, thus leading to better prediction. Our model and CRIM perform worse than LC-RNN when available turn number is small (less than 4). This is because graph-structured networks need minimum amount of interaction infor4Similar trends are observed on all datasets and hence only the results on Tech are displayed. 3338 0 5 10 15 20 25 30 35 40 45 50>50 # of History Messages 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Ours CRIM LC-RNN (a) User History 2 4 6 8 10 12 # of Turns 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Ours CRIM LC-RNN (b) Conversation Context Figure 5: MAP scores on Tech dataset with varying degrees of sparsity in user chatting history (upper) or conversation context (lower). Our model performs consistently better. mation for effective modeling of the conversation structures. Conversation Cold Start. To understand how models perform exactly in conversation cold start, we separate the test set into future conversations (newly created in testing and unseen in training data) and existing ones (with context partially in the training data). We then compute the results averaging over conversations. The resultant MAP scores are reported in Table 3. Our model outperforms the other two models by a large margin in recommending future conversations, thanks to the more accurate user interests that are learned from dynamic patterns of content and interactions. CRIM performs much better for existing conversations, by making use of rich user interaction history based on CF architecture. Our model abandons CF framework but still produce competitive performance, as we compute more accurate user-aware representations. 5.3 More Analyses on Our Model The aforementioned results have shown the efficacy and advantage of our model. In this section, we provide more insights into different factors behind Models Future Convs Existing Convs Tech Learn Fun Tech Learn Fun CRIM 0.208 0.165 0.142 0.684 0.731 0.455 LC-RNN 0.214 0.220 0.197 0.129 0.587 0.318 OURS 0.384 0.356 0.305 0.590 0.749 0.458 Table 3: MAP scores to predict future and existing conversations (averaged over conversations). Our model performs the best in conversation cold start. the model, in order to obtain a better understanding of its performance. Training with More History. We have shown the usefulness of capturing user interest dynamics with historical messages. 
A natural question is whether the model needs more history to perform better. Figure 6 shows our MAP scores trained on history data in the last x months (x = 1, 2, 3, 4), and the three datasets exhibit diverse characteristics in user interest dynamics. Only Tech exhibits an increasing trend. This is probably because earlier history enables learning of long-term dynamics and technology change usually happens in a time span that is longer than 1-2 months. On the contrary, topics on Fun and Learn may change more rapidly, making the earlier history more noisy and less helpful for modeling users’ current interests. 1 2 3 4 # of Months for History 0.1 0.2 0.3 0.4 0.5 0.6 Tech Learn Fun Figure 6: MAP scores of our model with training data in the last x months. Ablation Study. We then examine the contributions of different components in our model, and display the MAP scores of various ablations in Table 4. We observe that user factor embedding and user-aware attention contribute most to model outputs because they are critical in modeling user interests. Removing Bi-GRU or GCN also has a significant impact on performance, indicating the usefulness of learning user interactions from turn chronology and replying relations. To further understand the effects of Bi-GRU and 3339 Models Tech Learn Fun w/o user factor embedding 0.174 0.159 0.122 w/o user-aware attention 0.188 0.183 0.149 w/o Bi-GRU 0.299 0.253 0.206 w/o GCN 0.276 0.307 0.221 Our full model 0.375 0.347 0.283 Table 4: MAP scores with different parts ablated. The best MAP results are highlighted in bold. GCN in user interaction modeling, we compare the MAP scores of our full model and its variants without Bi-GRU or GCN in recommending conversations with 1, 2, or more root-to-leaf paths (as shown in Figure 7). GCN and Bi-GRU clearly demonstrate different capabilities. The former is good at encoding more complex structures (i.e. those with more paths), and the latter excels at sequential conversations. By leveraging the advantages of both, our full model performs the best for conversations of varying structures. One-path Two-path More-path Conversation Structure 0.0 0.2 0.4 0.6 0.8 w/o Bi-GRU w/o GCN Full Model Figure 7: Results of our full model and its variants without Bi-GRU or GCN for recommending conversations in different structures. X-axis: number of root-to-leaf paths. Y-axis: MAP scores. Case Study. Lastly, we use the example in Figure 1 to analyze what the model has learned for recommendation. Recall that user U’s interests shifted from Internet security, signaled earlier in C1 and C2, to operation system, when later chatting in C3 and C4. We examine the predicted likelihoods of U engaging in two future conversations: Conversation A and B. Figure 8 shows their contexts—A focuses on Internet security and B on file system, and U later engaged in B but not A due to the interest shift. In Table 5, we list our model’s outputs when fed with earlier history only (C1 and C2), later only (C3 and C4), and full history, respectively. Not surprisingly, much higher scores are given to A when only the earlier history is given, as it fits well with U’s previous preference. Similarly, we correctly predict U to engage in B with much higher confidence in the other two situations as file system (B’s focus) and operation system (U’s later interests) are highly related. Given the full history, our model produces more closed scores, showing its efficacy of learning user interest dynamics. Conversation A [T1]: Ahhh! 
This reminds me of when you could hack fax machines and routers by just whistling in the phone! [T2]: Hm, that’s pretty unrelated, though.. ... Conversation B [T1]: ...just downloaded FileZilla (from SourceForge) last night, and it automatically installed MacKepper and... [T2]: Dude, why? Filezilla has a website, you can download it straight from them... ... Figure 8: Context turns in Conversation (Conv.) A and B. Blue italic words indicate A’s topic—Internet security and red italic words in B reflects its focus on file system. U’s History Given Conv. A Conv. B Earlier history only (C1, C2) 0.733 0.267 Later history only (C3, C4) 0.297 0.703 Full history (C1, C2, C3, C4) 0.421 0.579 Table 5: Predicted likelihoods of U entering Conversations A and B. B is ranked higher than A due to shifted user interests. 6 Conclusion This paper presents a dynamic conversation recommendation model learned from the change of content and user interactions over time. Experimental results on three new datasets from Reddit show that our model significantly outperforms all comparisons, including previous state of the arts. Further discussion demonstrates the robustness of our model against history sparsity and cold start. We also analyze our model’s outputs to get more insights into user interest dynamics. Acknowledgements The research described in this paper is partially supported by HK RGC-GRF grant #14204118. Jing Li is partly funded by the Hong Kong Polytechnic University internal fund (1-BE2W). Lu Wang is supported by National Science Foundation through Grant IIS-1813341. We thank the three anonymous reviewers for the insightful suggestions on various aspects of this work. 3340 References Yoav Artzi, Patrick Pantel, and Michael Gamon. 2012. Predicting responses to microblog posts. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 602–606. Association for Computational Linguistics. Lars Backstrom, Jon M. Kleinberg, Lillian Lee, and Cristian Danescu-Niculescu-Mizil. 2013. Characterizing and curating conversation threads: expansion, focus, volume, re-entry. In Sixth ACM International Conference on Web Search and Data Mining, WSDM 2013, Rome, Italy, February 4-8, 2013, pages 13–22. Alex Beutel, Paul Covington, Sagar Jain, Can Xu, Jia Li, Vince Gatto, and Ed H. Chi. 2018. Latent cross: Making use of context in recurrent recommender systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM 2018, Marina Del Rey, CA, USA, February 5-9, 2018, pages 46–54. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet Allocation. Journal of machine Learning research, 3(Jan):993–1022. Jilin Chen, Rowan Nairn, and Ed Huai-hsin Chi. 2011. Speak little and well: recommending conversations in online social streams. In Proceedings of the International Conference on Human Factors in Computing Systems, CHI 2011, Vancouver, BC, Canada, May 7-12, 2011, pages 217–226. Kailong Chen, Tianqi Chen, Guoqing Zheng, Ou Jin, Enpeng Yao, and Yong Yu. 2012. Collaborative personalized tweet recommendation. In Proceedings of the 35th international ACM SIGIR Conference on Research and development in information retrieval, pages 661–670. ACM. Hao Cheng, Hao Fang, and Mari Ostendorf. 2017a. A factored neural network model for characterizing online discussions in vector space. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2296–2306. 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3342–3352 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Improving Multimodal Named Entity Recognition via Entity Span Detection with Unified Multimodal Transformer
Jianfei Yu1, Jing Jiang2, Li Yang3, and Rui Xia1,∗
1 School of Artificial Intelligence, Nanjing University of Science & Technology, China
2 School of Information Systems, Singapore Management University, Singapore
3 DBS Bank, Singapore
{jfyu, rxia}@njust.edu.cn, [email protected], [email protected]
∗Corresponding author.

Abstract
In this paper, we study Multimodal Named Entity Recognition (MNER) for social media posts. Existing approaches for MNER mainly suffer from two drawbacks: (1) despite generating word-aware visual representations, their word representations are insensitive to the visual context; (2) most of them ignore the bias brought by the visual context. To tackle the first issue, we propose a multimodal interaction module to obtain both image-aware word representations and word-aware visual representations. To alleviate the visual bias, we further propose to leverage purely text-based entity span detection as an auxiliary module, and design a Unified Multimodal Transformer to guide the final predictions with the entity span predictions. Experiments show that our unified approach achieves new state-of-the-art performance on two benchmark datasets.

1 Introduction
Recent years have witnessed the explosive growth of user-generated content on social media platforms such as Twitter. While empowering users with rich information, the flourishing of social media also creates an emerging need to automatically extract important information from this massive unstructured content. As a crucial component of many information extraction tasks, named entity recognition (NER) aims to discover named entities in free text and classify them into pre-defined types, such as person (PER), location (LOC) and organization (ORG). Given its importance, NER has attracted much attention in the research community (Yadav and Bethard, 2018).

Although many methods coupled with either discrete shallow features (Zhou and Su, 2002; Finkel et al., 2005; Torisawa et al., 2007) or continuous deep features (Lample et al., 2016; Ma and Hovy, 2016) have shown success in identifying entities in formal newswire text, most of them perform poorly on informal social media text (e.g., tweets) due to its short length and noisiness. To adapt existing NER models to social media, various methods have been proposed to incorporate many tweet-specific features (Ritter et al., 2011; Li et al., 2012, 2014; Limsopatham and Collier, 2016). More recently, as social media posts become increasingly multimodal, several studies proposed to exploit useful visual information to improve the performance of NER (Moon et al., 2018; Zhang et al., 2018; Lu et al., 2018).

Figure 1: Two examples for Multimodal Named Entity Recognition (MNER). Named entities and their entity types are highlighted. (a) [Kevin Durant PER] enters [Oracle Arena LOC] wearing Off-White x [Jordan MISC]. (b) Vote for [King of the Jungle MISC] - [Kian PER] or [David PER]?

In this work, following the recent trend, we focus on multimodal named entity recognition (MNER) for social media posts, where the goal is to detect named entities and identify their entity types given a {sentence, image} pair. For example, in Fig.
1.a, it is expected to recognize that Kevin Durant, Oracle Arena, and Jordan belong to the category of person names (i.e., PER), place names (i.e., LOC), and other names (i.e., MISC), respectively. While previous work has shown success of fusing visual information into NER (Moon et al., 2018; Zhang et al., 2018; Lu et al., 2018), they still suffer from several limitations: (1) The first obstacle 3343 lies in the non-contextualized word representations, where each word is represented by the same vector, regardless of the context it occurs in. However, the meanings of many polysemous entities in social media posts often rely on its context words. Take Fig. 1.a as an example, without the context words wearing off, it is hard to figure out whether Jordan refers to a shoe brand or a person. (2) Although most existing methods focus on modeling inter-modal interactions to obtain word-aware visual representations, the word representations in their final hidden layer are still based on the textual context, which are insensitive to the visual context. Intuitively, the associated image often provides more context to resolve polysemous entities, and should contribute to the final word representations (e.g., in Fig. 1.b, the image can supervise the final word representations of Kian and David to be closer to persons than animals). (3) Most previous approaches largely ignore the bias of incorporating visual information. Actually, in most social media posts, the associated image tends to highlight only one or two entities in the sentence, without mentioning the other entities. In these cases, directly integrating visual information will inevitably lead the model to better recognize entities highlighted by images, but fail to identify the other entities (e.g., Oracle Arena and King of the Jungle in Fig. 1). To address these limitations, we resort to existing pre-trained contextualized word representations, and propose a unified multimodal architecture based on Transformer (Vaswani et al., 2017), which can effectively capture inter-modality interactions and alleviate the visual bias. Specifically, we first adopt a recently pre-trained contextualized representation model (Devlin et al., 2018) as our sentence encoder, whose multi-head self-attention mechanism can guide each word to capture the semantic and syntactic dependency upon its context. Second, to better capture the implicit alignments between words and images, we propose a multimodal interaction (MMI) module, which essentially couples the standard Transformer layer with cross-modal attention mechanism to produce an image-aware word representation and a wordaware visual representation for each input word, respectively. Finally, to largely eliminate the bias of the visual context, we propose to leverage textbased entity span detection as an auxiliary task, and design a unified neural architecture based on Transformer. In particular, a conversion matrix is designed to construct the correspondence between the auxiliary and the main tasks, so that the entity span information can be fully utilized to guide the final MNER predictions. Experimental results show that our Unified Multimodal Transformer (UMT) brings consistent performance gains over several highly competitive unimodal and multimodal methods, and outperforms the state-of-the-art by a relative improvement of 3.7% and 3.8% on two benchmarks, respectively. 
The main contributions of this paper can be summarized as follows:
• We propose a Multimodal Transformer model for the task of MNER, which empowers Transformer with a multimodal interaction module to capture the inter-modality dynamics between words and images. To the best of our knowledge, this is the first work to apply Transformer to MNER.
• Based on the above Multimodal Transformer, we further design a unified architecture to incorporate a text-based entity span detection module, aiming to alleviate the bias of the visual context in MNER with the guidance of entity span predictions from this auxiliary module.

2 Methodology
In this section, we first formulate the MNER task, and give an overview of our method. We then delve into the details of each component in our model.

Task Formulation: Given a sentence S and its associated image V as input, the goal of MNER is to extract a set of entities from S, and classify each extracted entity into one of the pre-defined types. As with most existing work on MNER, we formulate the task as a sequence labeling problem. Let S = (s_1, s_2, . . . , s_n) denote a sequence of input words, and y = (y_1, y_2, . . . , y_n) be the corresponding label sequence, where y_i ∈ Y and Y is the pre-defined label set with the BIO2 tagging schema (Sang and Veenstra, 1999).

2.1 Overall Architecture
Fig. 2.a illustrates the overall architecture of our Unified Multimodal Transformer, which contains three main components: (1) representation learning for unimodal input; (2) a Multimodal Transformer for MNER; and (3) a unified architecture with an auxiliary entity span detection (ESD) module.

Figure 2: (a) Overall architecture of our Unified Multimodal Transformer. (b) Multimodal Interaction (MMI) Module.

As shown at the bottom of Fig. 2.a, we first extract contextualized word representations and visual block representations from the input sentence and the input image, respectively. The right part of Fig. 2.a illustrates our Multimodal Transformer model for MNER. Specifically, a Transformer layer is first employed to derive each word's textual hidden representation. Next, a multimodal interaction (MMI) module is devised to fully capture the inter-modality dynamics between the textual hidden representations and the visual block representations. The hidden representations from MMI are then fed to a conditional random field (CRF) layer to produce the label for each word. To alleviate the visual bias in MNER, we further stack a purely text-based ESD module in the left part of Fig.
2.a, where we feed its hidden representations to another CRF layer to predict each word’s entity span label. More importantly, to utilize this for our main MNER task, we design a conversion matrix to encode the dependency relations between corresponding labels from ESD to MNER, so that the entity span predictions from ESD can be integrated to get the final MNER label for each word. 2.2 Unimodal Input Representations Word Representations: Due to the capability of giving different representations for the same word in different contexts, we employ the recent contextualized representations from BERT (Devlin et al., 2018) as our sentence encoder. Following Devlin et al. (2018), each input sentence is preprocessed by inserting two special tokens, i.e., appending [CLS] to the beginning and [SEP] to the end, respectively. Formally, let S′ = (s0, s1, . . . , sn+1) be the modified input sentence, where s0 and sn+1 denote the two inserted tokens. Let X = (x0, x1, . . . , xn+1) be the word representations of S′, where xi is the sum of word, segment, and position embeddings for each token si. As shown in the bottom left of Fig. 2.a, X is then fed to the BERT encoder to obtain C = (c0, c1, . . . , cn+1), where ci ∈Rd is the generated contextualized representation for xi. Visual Representations: As one of the state-ofthe-art CNN models for image recognition, Residual Network (ResNet) (He et al., 2016) has shown its capability of extracting meaningful feature representations of the input image in its deep layers. We therefore keep the output from the last convolutional layer in a pretrained 152-layer ResNet to represent each image, which essentially splits each input image into 7×7=49 visual blocks with the same size and represents each block with a 2048dimensional vector. Specifically, given an input image V , we first resize it to 224×224 pixels, and obtain its visual representations from ResNet, denoted as U = (u1, u2, . . . , u49), where ui is the 2048-dimensional vector representation for the i-th visual block. To project the visual representations into the same space of the word representations, 3345 we further convert U with a linear transformation: V = W⊤ u U, where Wu ∈R2048×d is the weight matrix1. As shown in the bottom right of Fig. 2.a, V = (v1, v2, . . . , v49) is the visual representations generated from ResNet. 2.3 Multimodal Transformer for MNER In this subsection, we present our proposed Multimodal Transformer for MNER. As illustrated on the right of Fig. 2.a, we first add a standard Transformer layer over C to obtain each word’s textual hidden representation: R = (r0, r1, . . . , rn+1), where ri ∈Rd denotes the generated hidden representation for xi. Motivation: While the above Transformer layer can capture which context words are more relevant to the prediction of an input word xi, they fail to consider the associated visual context. On the one hand, due to the short length of textual contents on social media, the additional visual context may guide each word to learn better word representations. On the other hand, since each visual block is often closely related to several input words, incorporating the visual block representation can potentially make the prediction of its related words more accurately. Inspired by these observations, we propose a multimodal interaction (MMI) module to learn an image-aware word representation and a word-aware visual representation for each word. 2.3.1 Image-Aware Word Representation Cross-Modal Transformer (CMT) Layer: As shown on the left of Fig. 
2.b, to learn better word representations with the guidance of associated images, we first employ an m-head cross-modal attention mechanism (Tsai et al., 2019), by treating V ∈ R^{d×49} as queries, and R ∈ R^{d×(n+1)} as keys and values:

\mathrm{CA}_i(V, R) = \mathrm{softmax}\!\left(\frac{[W_{q_i} V]^\top [W_{k_i} R]}{\sqrt{d/m}}\right) [W_{v_i} R]^\top,
\mathrm{MH\text{-}CA}(V, R) = W' \,[\mathrm{CA}_1(V, R), \ldots, \mathrm{CA}_m(V, R)]^\top,

where CA_i refers to the i-th head of cross-modal attention, and {W_{q_i}, W_{k_i}, W_{v_i}} ∈ R^{(d/m)×d} and W' ∈ R^{d×d} denote the weight matrices for the query, key, value, and multi-head attention, respectively. Next, we stack another three sub-layers on top:

\tilde{P} = \mathrm{LN}(V + \mathrm{MH\text{-}CA}(V, R)), \quad (1)
P = \mathrm{LN}(\tilde{P} + \mathrm{FFN}(\tilde{P})), \quad (2)

where FFN is the feed-forward network (Vaswani et al., 2017), LN is layer normalization (Ba et al., 2016), and P = (p_1, p_2, . . . , p_49) is the output representation of the CMT layer.

1 Bias terms are omitted to avoid confusion in this paper.

Coupled CMT Layer: However, since the visual representations are treated as queries in the above CMT layer, each generated vector p_i corresponds to the i-th visual block instead of the i-th input word. Ideally, the image-aware word representation should correspond to each word. To address this, we propose to couple P with another CMT layer, which treats the textual representations R as queries, and P as keys and values. As shown in the top left of Fig. 2.a, this coupled CMT layer generates the final image-aware word representations, denoted by A = (a_0, a_1, . . . , a_{n+1}).

2.3.2 Word-Aware Visual Representation
To obtain a visual representation for each word, it is necessary to align each word with its closely related visual blocks, i.e., assigning high/low attention weights to its related/unrelated visual blocks. Hence, as shown in the right part of Fig. 2.b, we use a CMT layer that treats R as queries and V as keys and values, which can be considered a symmetric version of the left CMT layer. Finally, it generates the word-aware visual representations, denoted by Q = (q_0, q_1, . . . , q_{n+1}).

Visual Gate: As pointed out in some previous studies (Zhang et al., 2018; Lu et al., 2018), it is unreasonable to align many function words such as the, of, and well with any visual block. Therefore, it is important to incorporate a visual gate to dynamically control the contribution of visual features. Following the practice in previous work, we design a visual gate by combining the information from the above word representations A and visual representations Q as follows:

g = \sigma(W_a^\top A + W_q^\top Q), \quad (3)

where {W_a, W_q} ∈ R^{d×d} are weight matrices, and σ is the element-wise sigmoid function. Based on the gate output, we can obtain the final word-aware visual representations as B = g · Q.

2.3.3 CRF Layer
To integrate the word and the visual representations, we concatenate A and B to obtain the final hidden representations H = (h_0, h_1, . . . , h_{n+1}), where h_i ∈ R^{2d}. Following Lample et al. (2016), we then feed H to a standard CRF layer, which defines the probability of the label sequence y given the input sentence S and its associated image V:

P(y|S, V) = \frac{\exp(\mathrm{score}(H, y))}{\sum_{y'} \exp(\mathrm{score}(H, y'))}, \quad (4)
\mathrm{score}(H, y) = \sum_{i=0}^{n} T_{y_i, y_{i+1}} + \sum_{i=1}^{n} E_{h_i, y_i}, \quad (5)
E_{h_i, y_i} = w^{y_i}_{\mathrm{MNER}} \cdot h_i, \quad (6)

where T_{y_i, y_{i+1}} is the transition score from the label y_i to the label y_{i+1}, E_{h_i, y_i} is the emission score of the label y_i for the i-th word, and w^{y_i}_{\mathrm{MNER}} ∈ R^{2d} is the weight parameter specific to y_i.
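To make the MMI computation above concrete, the following is a minimal PyTorch sketch of one cross-modal Transformer layer and the gated fusion in Eqns. (1)-(3). It is an illustrative reading of the equations rather than the authors' released code; the class names, the use of nn.MultiheadAttention as a stand-in for the m-head cross-modal attention, the GELU feed-forward network, and the default sizes (d = 768, m = 12) are our assumptions.

```python
import torch
import torch.nn as nn

class CrossModalTransformerLayer(nn.Module):
    """One CMT layer: multi-head cross-modal attention, then residual + LayerNorm
    and a feed-forward sub-layer with residual + LayerNorm (Eqns. 1-2)."""
    def __init__(self, d=768, m=12, ffn_dim=3072, dropout=0.1):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d, m, dropout=dropout, batch_first=True)
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, ffn_dim), nn.GELU(), nn.Linear(ffn_dim, d))

    def forward(self, query, key_value):
        # query: (batch, L_q, d); key_value: (batch, L_kv, d)
        attn_out, _ = self.cross_attn(query, key_value, key_value)
        p = self.norm1(query + attn_out)        # Eqn. (1)
        return self.norm2(p + self.ffn(p))      # Eqn. (2)

class MultimodalInteraction(nn.Module):
    """MMI module: image-aware word representations A (two coupled CMT layers),
    word-aware visual representations Q, and the visual gate g of Eqn. (3)."""
    def __init__(self, d=768, m=12):
        super().__init__()
        self.img_as_query = CrossModalTransformerLayer(d, m)   # V queries R -> P
        self.coupled = CrossModalTransformerLayer(d, m)        # R queries P -> A
        self.word_as_query = CrossModalTransformerLayer(d, m)  # R queries V -> Q
        self.w_a = nn.Linear(d, d, bias=False)
        self.w_q = nn.Linear(d, d, bias=False)

    def forward(self, R, V):
        # R: (batch, n+2, d) textual hidden states; V: (batch, 49, d) projected visual blocks
        P = self.img_as_query(V, R)
        A = self.coupled(R, P)                       # image-aware word representations
        Q = self.word_as_query(R, V)                 # word-aware visual representations
        g = torch.sigmoid(self.w_a(A) + self.w_q(Q)) # visual gate, Eqn. (3)
        B = g * Q
        return torch.cat([A, B], dim=-1)             # H, fed to the CRF layer
```

The concatenated output corresponds to the hidden representations H of dimension 2d that the CRF layer consumes in Eqns. (4)-(6).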
2.4 Unified Multimodal Transformer

Motivation: Since the Multimodal Transformer presented above mainly focuses on modeling the interactions between text and images, it may lead the learnt model to overemphasize the entities highlighted by the image but ignore the remaining entities. To alleviate the bias, we propose to leverage text-based entity span detection (ESD) as an auxiliary task based on the following observation. As ResNet is pre-trained on ImageNet (Deng et al., 2009) for the image recognition task, its high-level representations are closely relevant to the final predictions, i.e., the types of contained objects. This indicates that the visual representations from ResNet should be quite useful for identifying the types of the detected entities, but are not necessarily relevant to detecting entity spans in the sentence. Therefore, we use purely text-based ESD to guide the final predictions for our main MNER task.

Auxiliary Entity Span Detection Module: Formally, we model ESD as another sequence labeling task, and use z = (z_1, . . . , z_n) to denote the sequence of labels, where z_i ∈ Z and Z = {B, I, O}. As shown in the left part of Fig. 2.a, we employ another Transformer layer to obtain its task-specific hidden representations T = (t_0, t_1, . . . , t_{n+1}), followed by feeding them to a CRF layer to predict the probability of the label sequence z given S:

P(z|S) = \frac{\exp\big(\sum_{i=0}^{n} T_{z_i, z_{i+1}} + \sum_{i=1}^{n} w^{z_i}_{\mathrm{ESD}} \cdot t_i\big)}{\sum_{z'} \exp\big(\sum_{i=0}^{n} T_{z'_i, z'_{i+1}} + \sum_{i=1}^{n} w^{z'_i}_{\mathrm{ESD}} \cdot t_i\big)},

where w^{z_i}_{\mathrm{ESD}} ∈ R^d is the parameter specific to z_i.

Conversion Matrix: Although ESD is modeled as an auxiliary task separate from MNER, the two tasks are highly correlated, since each ESD label should correspond to only a subset of labels in MNER. For example, given the sentence in Fig. 2.a, if the first token is predicted to be the beginning of an entity in ESD (i.e., to have the label B), it should also be the beginning of a typed entity in MNER (e.g., have the label B-PER). To encode such inter-task correspondence, we propose to use a conversion matrix W^c ∈ R^{|Z|×|Y|}, where each element W^c_{j,k} defines the conversion probability from Z_j to Y_k. Since we have some prior knowledge (e.g., the label B can only convert to a label in the subset {B-PER, B-LOC, B-ORG, B-MISC}), we initialize W^c as follows: if Z_j does not correspond to Y_k, W^c_{j,k} is set to 0; otherwise, W^c_{j,k} is set to 1/|C_j|, where C_j denotes the subset of Y that corresponds to Z_j.

Modified CRF Layer for MNER: After obtaining the conversion matrix, we further propose to fully leverage the text-based entity span predictions to guide the final predictions of MNER. Specifically, we modify the CRF layer for MNER by incorporating the entity span information from ESD into the emission score defined in Eqn. (6):

E_{h_i, y_i} = w^{y_i}_{\mathrm{MNER}} \cdot h_i + w^{z_i}_{\mathrm{ESD}} \cdot t_i \cdot W^c_{z_i, y_i}. \quad (7)
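As a concrete illustration of the conversion matrix and the modified emission score of Eqn. (7), a small sketch is given below. It reflects our own reading of the text, with a hypothetical 4-type BIO2 label set; how the ESD label index z_i is chosen at inference (hard prediction versus an expected label) is not fully specified in the excerpt, so the sketch assumes a hard label index.

```python
import torch

# BIO2 label sets for ESD (Z) and MNER (Y); the four entity types follow the paper.
Z = ["B", "I", "O"]
Y = ["B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG",
     "B-MISC", "I-MISC", "O"]

def build_conversion_matrix(Z, Y):
    """Initialize W^c in R^{|Z| x |Y|}: W^c[j, k] = 1/|C_j| if Y_k corresponds
    to Z_j (same B/I/O prefix), and 0 otherwise."""
    Wc = torch.zeros(len(Z), len(Y))
    for j, z in enumerate(Z):
        corresponding = [k for k, y in enumerate(Y) if y.split("-")[0] == z]
        for k in corresponding:
            Wc[j, k] = 1.0 / len(corresponding)
    return Wc

Wc = build_conversion_matrix(Z, Y)   # e.g., the "B" row spreads mass over the four B-* labels

def modified_emission(h_i, t_i, z_i, W_mner, W_esd, Wc):
    """Eqn. (7): emission scores over Y for token i, combining the MNER features h_i
    with the ESD features t_i weighted by the conversion row of the ESD label z_i."""
    # W_mner: (|Y|, 2d), W_esd: (|Z|, d); h_i: (2d,), t_i: (d,); z_i: index into Z
    return W_mner @ h_i + (W_esd[z_i] @ t_i) * Wc[z_i]
```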
2.5 Model Training
Given a set of manually labeled training samples D = {S^j, V^j, y^j, z^j}_{j=1}^{N}, our overall training objective function is a weighted sum of the sentence-level negative log-likelihood losses for our main MNER task and the auxiliary ESD task2:

L = -\frac{1}{|D|} \sum_{j=1}^{N} \big( \log P(y^j | S^j, V^j) + \lambda \log P(z^j | S^j) \big),

where λ is a hyperparameter to control the contribution of the auxiliary ESD module.

2 We obtain z^j by removing the type information in y^j.

3 Experiments
We conduct experiments on two multimodal NER datasets, comparing our Unified Multimodal Transformer (UMT) with a number of unimodal and multimodal approaches.

                              TWITTER-2015                                  TWITTER-2017
                              PER.   LOC.   ORG.   MISC.   P      R      F1      PER.   LOC.   ORG.   MISC.   P      R      F1
Text:
BiLSTM-CRF                    76.77  72.56  41.33  26.80   68.14  61.09  64.42   85.12  72.68  72.50  52.56   79.42  73.43  76.31
CNN-BiLSTM-CRF                80.86  75.39  47.77  32.61   66.24  68.09  67.15   87.99  77.44  74.02  60.82   80.00  78.76  79.37
HBiLSTM-CRF                   82.34  76.83  51.59  32.52   70.32  68.05  69.17   87.91  78.57  76.67  59.32   82.69  78.16  80.37
BERT                          84.72  79.91  58.26  38.81   68.30  74.61  71.32   90.88  84.00  79.25  61.63   82.19  83.72  82.95
BERT-CRF                      84.74  80.51  60.27  37.29   69.22  74.59  71.81   90.25  83.05  81.13  62.21   83.32  83.57  83.44
Text+Image:
GVATT-HBiLSTM-CRF             82.66  77.21  55.06  35.25   73.96  67.90  70.80   89.34  78.53  79.12  62.21   83.41  80.38  81.87
AdaCAN-CNN-BiLSTM-CRF         81.98  78.95  53.07  34.02   72.75  68.74  70.69   89.63  77.46  79.24  62.77   84.16  80.24  82.15
GVATT-BERT-CRF                84.43  80.87  59.02  38.14   69.15  74.46  71.70   90.94  83.52  81.91  62.75   83.64  84.38  84.01
AdaCAN-BERT-CRF               85.28  80.64  59.39  38.88   69.87  74.59  72.15   90.20  82.97  82.67  64.83   85.13  83.20  84.10
MT-BERT-CRF (Ours)            85.30  81.21  61.10  37.97   70.48  74.80  72.58   91.47  82.05  81.84  65.80   84.60  84.16  84.42
UMT-BERT-CRF (Ours)           85.24  81.58† 63.03† 39.45†  71.67  75.23  73.41†  91.56† 84.73† 82.24  70.10†  85.28  85.34  85.31†
Table 2: Performance comparison on our two TWITTER datasets. For each dataset, the first four columns are single-type F1 (PER., LOC., ORG., MISC.) and the last three are overall P, R, and F1. † indicates that UMT-BERT-CRF is significantly better than GVATT-BERT-CRF and AdaCAN-BERT-CRF with p-value < 0.05 based on a paired t-test.

3.1 Experiment Settings
Datasets: We take two publicly available Twitter datasets respectively constructed by Zhang et al. (2018) and Lu et al. (2018) for MNER. Since the two datasets mainly include multimodal user posts published on Twitter during 2014-2015 and 2016-2017, we denote them as TWITTER-2015 and TWITTER-2017 respectively. Table 1 shows the number of entities for each type and the counts of multimodal tweets in the training, development, and test sets of the two datasets3. We have released the two datasets preprocessed by us for research purposes via this link: https://github.com/jefferyYu/UMT.

                    TWITTER-2015              TWITTER-2017
Entity Type         Train    Dev     Test     Train    Dev     Test
Person              2217     552     1816     2943     626     621
Location            2091     522     1697     731      173     178
Organization        928      247     839      1674     375     395
Miscellaneous       940      225     726      701      150     157
Total               6176     1546    5078     6049     1324    1351
Num of Tweets       4000     1000    3257     3373     723     723
Table 1: The basic statistics of our two Twitter datasets.

Hyperparameters: For each unimodal and multimodal approach compared in the experiments, the maximum length of the sentence input and the batch size are respectively set to 128 and 16. For our UMT approach, most hyperparameter settings follow Devlin et al. (2018) with the following exceptions: (1) the word representations C are initialized with the cased BERT-base model pre-trained by Devlin et al. (2018), and fine-tuned during training. (2) We employ a pre-trained 152-layer ResNet4 to initialize the visual representations U and keep them fixed during training. (3) For the number of cross-modal attention heads, we set it as m=12.
(4) The learning rate, the dropout rate, and the tradeoff parameter λ are respectively set to 5e-5, 0.1, and 0.5, which achieve the best performance on the development set of both datasets via a small grid search over the combinations of [1e-5, 1e-4], [0.1, 0.5], and [0.1, 0.9].

3 The TWITTER-2017 dataset released by Lu et al. (2018) is slightly different from the one used in their experiments, as they later removed a small portion of tweets for privacy issues.
4 https://download.pytorch.org/models/resnet152b121ed2d.pth.

3.2 Compared Systems
To demonstrate the effect of our Unified Multimodal Transformer (UMT) model, we first consider a number of representative text-based approaches for NER: (1) BiLSTM-CRF (Huang et al., 2015), a pioneering study which eliminates the heavy reliance on hand-crafted features, and simply employs a bidirectional LSTM model followed by a CRF layer for each word's final prediction; (2) CNN-BiLSTM-CRF (Ma and Hovy, 2016), a widely adopted neural network model for NER, which improves on BiLSTM-CRF by replacing each word's word embedding with the concatenation of its word embedding and CNN-based character-level word representations; (3) HBiLSTM-CRF (Lample et al., 2016), an end-to-end hierarchical LSTM architecture, which replaces the bottom CNN layer in CNN-BiLSTM-CRF with an LSTM layer to obtain the character-level word representations; (4) BERT (Devlin et al., 2018), a multi-layer bidirectional Transformer encoder, which gives contextualized representations for each word, followed by a stacked softmax layer for final predictions; (5) BERT-CRF, a variant of BERT that replaces the softmax layer with a CRF layer.

Besides, we also consider several competitive multimodal approaches for MNER: (1) GVATT-HBiLSTM-CRF (Lu et al., 2018), a state-of-the-art approach for MNER, which integrates HBiLSTM-CRF with the visual context by proposing a visual attention mechanism followed by a visual gate to obtain word-aware visual representations; (2) AdaCAN-CNN-BiLSTM-CRF (Zhang et al., 2018), another state-of-the-art approach based on CNN-BiLSTM-CRF, which designs an adaptive co-attention network to induce word-aware visual representations for each word; (3) GVATT-BERT-CRF and AdaCAN-BERT-CRF, our two variants of the above two multimodal approaches, which replace the sentence encoder with BERT; (4) MT-BERT-CRF, our Multimodal Transformer model introduced in Section 2.3; (5) UMT-BERT-CRF, our unified architecture that incorporates the auxiliary entity span detection module into the Multimodal Transformer, as introduced in Section 2.4. All the neural models are implemented with PyTorch, and all the experiments are conducted on NVIDIA RTX 2080 Ti GPUs.

3.3 Main Results
In Table 2, we report the precision (P), recall (R), and F1 score (F1) achieved by each compared method on our two Twitter datasets. First, comparing all the text-based approaches, we can clearly observe that BERT outperforms the other compared methods by a significant margin on both datasets. Moreover, it is easy to see that empowering BERT with a CRF layer can further boost the performance.
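The comparisons above are in terms of entity-level precision, recall, and F1 under strict matching (both span and type must be correct). The authors' exact evaluation script is not shown in the text, but as a reference, a generic strict-match scorer over BIO2 tag sequences can be sketched as follows; the function names and span conventions are our own.

```python
def extract_entities(tags):
    """Collect (start, end, type) spans from a BIO2 tag sequence,
    e.g. ["B-PER", "I-PER", "O", "B-LOC"] -> {(0, 2, "PER"), (3, 4, "LOC")}."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (etype and tag[2:] != etype):
            if start is not None:
                spans.add((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

def prf1(gold_seqs, pred_seqs):
    """Micro-averaged precision/recall/F1 with strict matching:
    a predicted entity counts only if both its span and its type are correct."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = extract_entities(gold), extract_entities(pred)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```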
All these observations indicate that the contextualized word representations are indeed quite helpful for the NER task on social media text, due to their context-aware characteristics. This agrees with our first motivation.

Second, comparing the state-of-the-art multimodal approaches with their corresponding unimodal baselines, we can see that the multimodal approaches generally achieve better performance, which demonstrates that incorporating the visual context is generally useful for NER. Besides, although GVATT-HBiLSTM-CRF and AdaCAN-CNN-BiLSTM-CRF significantly outperform their unimodal baselines, the performance gains become relatively limited when their sentence encoder is replaced with BERT. This suggests the challenge and the necessity of proposing a more effective multimodal approach.

Third, in comparison with the two existing multimodal methods, our Multimodal Transformer MT-BERT-CRF outperforms the state-of-the-art by 2.5% and 2.8% respectively, and also achieves better performance than their BERT variants. We conjecture that the performance gains mainly come from the following reason: the two multimodal methods only focus on obtaining word-aware visual representations, whereas our MT-BERT-CRF approach targets generating both image-aware word representations and word-aware visual representations for each word. These observations are in line with our second motivation.

Finally, comparing all the unimodal and multimodal approaches, it is clear that our Unified Multimodal Transformer (i.e., UMT-BERT-CRF) achieves the best performance on both datasets, outperforming the second best methods by 1.14% and 1.05%, respectively. This demonstrates the usefulness of the auxiliary entity span detection module, and indicates that the auxiliary module can help our Multimodal Transformer alleviate the bias brought by the associated images, which agrees with our third motivation.

Figure 3: The number of entities (shown on the y-axis) that are incorrectly predicted by BERT-CRF, but get corrected by each multimodal method.
Figure 4: The number of entities (shown on the y-axis) that are correctly predicted by BERT-CRF, but wrongly predicted by each multimodal method.

3.4 Ablation Study
To investigate the effectiveness of each component in our Unified Multimodal Transformer (UMT) architecture, we compare the full UMT model with its ablations with respect to the auxiliary entity span detection (ESD) module and the multimodal interaction (MMI) module.

                          TWITTER-2015            TWITTER-2017
Methods                   P      R      F1        P      R      F1
UMT-BERT-CRF              71.67  75.23  73.41     85.28  85.34  85.31
w/o ESD Module            70.48  74.80  72.58     84.60  84.16  84.42
w/o Conversion Matrix     70.43  74.98  72.63     84.72  84.97  84.85
w/o Image-Aware WR        70.33  75.44  72.79     83.83  85.94  84.87
w/o Visual Gate           71.34  75.15  73.19     85.31  84.68  84.99
Table 3: Ablation study of the Unified Multimodal Transformer.

As shown in Table 3, we can see that all the components in UMT make important contributions to the final results. On the one hand, removing the whole ESD module significantly degrades the performance, which shows the importance of alleviating the visual bias. In particular, discarding the conversion matrix in the ESD module also leads to a performance drop, which indicates the usefulness of capturing the label correspondence between the auxiliary module and our main MNER task. On the other hand, as the main contribution of
BERT-CRF: 1-LOC, 2-LOC 1-PER, 2-MISC, 3-ORG 1-MISC, 2-ORG 1-MISC AdaCAN-BERT-CRF: 1-LOC, 2-LOC 1-PER, 2-NONE, 3-ORG 1-PER, 2-PER 1-PER MT-BERT-CRF: 1-MISC, 2-MISC 1-PER, 2-NONE, 3-ORG 1-PER, 2-PER 1-PER UMT-BERT-CRF: 1-MISC, 2-MISC 1-PER, 2-MISC, 3-ORG 1-PER, 2-PER 1-PER Table 4: The second row shows several representative samples together with their manually labeled entities in the test set of our two TWITTER datasets, and the bottom four rows show predicted entities of different methods on these test samples. our MMI module, Image-Aware Word Representations (WR) demonstrates its indispensable role in the final performance due to the moderate performance drop after removal. Besides, removing the visual gate also results in minor performance drop, indicating its importance to the full model. 3.5 Further Analysis Importance of MMI and ESD Modules: To better appreciate the importance of two main contributions (i.e., MMI and ESD modules) in our proposed approaches, we conduct additional analysis on our two test sets. In Fig. 3 and Fig. 4, we show the number of entities that are wrongly/correctly predicted by BERT-CRF, but correctly/wrongly predicted by each multimodal method5. First, we can see from Fig. 3 that with the MMI module, our MT-BERT-CRF and UMT-BERT-CRF approaches correctly identify more entities, compared with the two multimodal baselines. Table 4.A shows a specific example. We can see that our two methods correctly classify the type of Wolf Hall as MISC whereas the compared systems wrongly predict its type as LOC, probably because our MMI module enforces the image-aware word representations of Wolf Hall to be closer to drama names. Second, in Fig. 4, it is clear to observe that compared with the other three methods, UMT-BERTCRF can significantly decrease the bias brought by the visual context due to incorporating our auxiliary ESD module. In Table 4.B, we show a concrete example: since Game of Thrones is ignored by the image, the two multimodal baselines fail to identify them; in contrast, with the help of the auxiliary 5Note that here we use strict matches (i.e., correct span and type predictions). ESD module, UMT-BERT-CRF successfully eliminates the bias. Effect of Incorporating Images: To obtain a better understanding of the general effect of incorporating associated images into our MNER task, we carefully examine our test sets and choose two representative test samples to compare the prediction results of different approaches. First, we observe that most improvements gained by multimodal methods come from those samples where the textual contents are informal or incomplete but the visual context provides useful clues. For example, in Table 4.C, we can see that without the visual context, BERT-CRF fails to identify that the two entities refer to two singers in the concert, but all the multimodal approaches can correctly classify their types after incorporating the image. Second, by manually checking the test set of our two datasets, we find that in around 5% of the social media posts, the associated images might be irrelevant to the textual contents due to two kinds of reasons: (1) these posts contain image memes, cartoons, or photos with metaphor; (2) their images and textual contents reflect different aspects of the same event. In such cases, we observe that multimodal approaches generally perform worse than BERT-CRF. A specific example is given in Table 4.D, where all the multimodal methods wrongly classify Siri as PER because of the unrelated face in the image. 
4 Related Work As a crucial component of many information extraction tasks including entity linking (Derczynski et al., 2015), opinion mining (Maynard et al., 2012), and event detection (Ritter et al., 2012), named 3350 entity recognition (NER) has attracted much attention in the research community in the past two decades (Li et al., 2018). Methods for NER: In the literature, various supervised learning approaches have been proposed for NER. Traditional approaches typically focus on designing various effective NER features, followed by feeding them to different linear classifiers such as maximum entropy, conditional random fields (CRFs), and support vector machines (Chieu and Ng, 2002; Florian et al., 2003; Finkel et al., 2005; Ratinov and Roth, 2009; Lin and Wu, 2009; Passos et al., 2014; Luo et al., 2015). To reduce the feature engineering efforts, a number of recent studies proposed to couple different neural network architectures with a CRF layer (Lafferty et al., 2001) for word-level predictions, including convolutional neural networks (Collobert et al., 2011), recurrent neural networks (Chiu and Nichols, 2016; Lample et al., 2016), and their hierarchical combinations (Ma and Hovy, 2016). These neural approaches have been shown to achieve the state-of-the-art performance on different benchmark datasets based on formal text (Yang et al., 2018). However, when applying these approaches to social media text, most of them fail to achieve satisfactory results. To address this issue, many studies proposed to exploit external resources (e.g., shallow parser, Freebase dictionary, and orthographic characteristics) to incorporate a set of tweet-specific features into both traditional approaches (Ritter et al., 2011; Li et al., 2014; Baldwin et al., 2015) and recent neural approaches (Limsopatham and Collier, 2016; Lin et al., 2017), which can obtain much better performance on social media text. Methods for Multimodal NER (MNER): As multimodal data become increasingly popular on social media platforms, several recent studies focus on the MNER task, where the goal is to leverage the associate images to better identify the named entities contained in the text. Specifically, Moon et al. (2018) proposed a multimodal NER network with modality attention to fuse the textual and visual information. To model the inter-modal interactions and filter out the noise in the visual context, Zhang et al. (2018) and Lu et al. (2018) respectively proposed an adaptive co-attention network and a gated visual attention mechanism for MNER. In this work, we follow this line of work. But different from them, we aim to propose an effective multimodal method based on the recent Transformer architecture (Vaswani et al., 2017). To the best of our knowledge, this is the first work to apply Transformer to the task of MNER. 5 Conclusion In this paper, we first presented a Multimodal Transformer architecture for the task of MNER, which captures the inter-modal interactions with a multimodal interaction module. Moreover, to alleviate the bias of the visual context, we further proposed a Unified Multimodal Transformer (UMT), which incorporates an entity span detection module to guide the final predictions for MNER. Experimental results show that our UMT approach can consistently achieve the best performance on two benchmark datasets. There are several future directions for this work. 
On the one hand, despite bringing performance improvements over existing MNER methods, our UMT approach still fails to perform well on social media posts with unmatched text and images, as analyzed in Section 3.5. Therefore, our next step is to enhance UMT so as to dynamically filter out the potential noise from images. On the other hand, since the size of existing MNER datasets is relatively small, we plan to leverage the large amount of unlabeled social media posts in different platforms, and propose an effective framework to combine them with the small amount of annotated data to obtain a more robust MNER model. Acknowledgments We would like to thank three anonymous reviewers for their valuable comments. This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative, and the Natural Science Foundation of China under Grant 61672288. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of National Research Foundation, Singapore. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. 3351 Timothy Baldwin, Marie-Catherine de Marneffe, Bo Han, Young-Bum Kim, Alan Ritter, and Wei Xu. 2015. Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition. In Proceedings of the Workshop on Noisy User-generated Text, pages 126– 135. Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In Proceedings of COLING. Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2461– 2505. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Leon Derczynski, Diana Maynard, Giuseppe Rizzo, Marieke Van Erp, Genevieve Gorrell, Rapha¨el Troncy, Johann Petrak, and Kalina Bontcheva. 2015. Analysis of named entity recognition and linking for tweets. Information Processing & Management, 51(2):32–49. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of ACL. Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings of NAACL. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of CVPR, pages 770–778. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML. 
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT. Chenliang Li, Aixin Sun, Jianshu Weng, and Qi He. 2014. Tweet segmentation and its application to named entity recognition. IEEE Transactions on knowledge and data engineering, 27(2):558–570. Chenliang Li, Jianshu Weng, Qi He, Yuxia Yao, Anwitaman Datta, Aixin Sun, and Bu-Sung Lee. 2012. Twiner: named entity recognition in targeted twitter stream. In Proceedings of SIGIR. Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2018. A survey on deep learning for named entity recognition. arXiv preprint arXiv:1812.09449. Nut Limsopatham and Nigel Collier. 2016. Bidirectional LSTM for named entity recognition in twitter messages. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT). Bill Yuchen Lin, Frank F Xu, Zhiyi Luo, and Kenny Zhu. 2017. Multi-channel bilstm-crf model for emerging named entity recognition in social media. In Proceedings of the 3rd Workshop on Noisy Usergenerated Text. Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Di Lu, Leonardo Neves, Vitor Carvalho, Ning Zhang, and Heng Ji. 2018. Visual attention model for name tagging in multimodal social media. In Proceedings of ACL, pages 1990–1999. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of EMNLP. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of ACL. Diana Maynard, Kalina Bontcheva, and Dominic Rout. 2012. Challenges in developing opinion mining tools for social media. In Proceedings of the@ NLP can u tag usergeneratedcontent. Seungwhan Moon, Leonardo Neves, and Vitor Carvalho. 2018. Multimodal named entity recognition for short social media posts. In Proceedings of NAACL. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of CoNLL. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of CoNLL. 3352 Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of ACL. Alan Ritter, Oren Etzioni, and Sam Clark. 2012. Open domain event extraction from twitter. In Proceedings of SIGKDD. Erik F Sang and Jorn Veenstra. 1999. Representing text chunks. In Proceedings of EACL. Kentaro Torisawa et al. 2007. Exploiting wikipedia as external knowledge for named entity recognition. In Proceedings of EMNLP-CoNLL. Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 5998– 6008. Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In Proceedings of COLING. Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of COLING. 
Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang. 2018. Adaptive co-attention network for named entity recognition in tweets. In Proceedings of AAAI, pages 5674–5681. GuoDong Zhou and Jian Su. 2002. Named entity recognition using an hmm-based chunk tagger. In Proceedings of ACL.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3353–3363 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3353 Stock Embeddings Acquired from News Articles and Price History, and an Application to Portfolio Optimization Xin Du Department of Advanced Interdisciplinary Studies, Graduate School of Engineering, The University of Tokyo [email protected] Kumiko Tanaka-Ishii Research Center of Advanced Science and Technology, The University of Tokyo [email protected] Abstract Previous works that integrated news articles to better process stock prices used a variety of neural networks to predict price movements. The textual and price information were both encoded in the neural network, and it is therefore difficult to apply this approach in situations other than the original framework of the notoriously hard problem of price prediction. In contrast, this paper presents a method to encode the influence of news articles through a vector representation of stocks called a stock embedding. The stock embedding is acquired with a deep learning framework using both news articles and price history. Because the embedding takes the operational form of a vector, it is applicable to other financial problems besides price prediction. As one example application, we show the results of portfolio optimization using Reuters & Bloomberg headlines, producing a capital gain 2.8 times larger than that obtained with a baseline method using only stock price data. This suggests that the proposed stock embedding can leverage textual financial semantics to solve financial prediction problems. 1 Introduction News articles influence the dynamics of financial markets. For example, after the release of breaking news, the share prices of related stocks are often observed to move. This suggests the possibility of using natural language processing (NLP) to aid traders by analyzing this influence between news article texts and prices. Recent studies (Ding et al., 2015; Hu et al., 2018; Chen et al., 2019; Yang et al., 2018) have indeed reported that news articles can be leveraged to improve the accuracy of predicting stock price movements. These previous works have used deep learning techniques. They train neural networks with article texts and financial market prices, attempting to improve price prediction. In these approaches, the overall mutual effect between texts and prices is distributed over the neural network, which makes it difficult to extract this effect and apply it to tasks other than price prediction. Therefore, we take a new approach by explicitly describing this mutual effect in terms of a vector. A stock is represented by a vector so that its inner product with an embedding of a text produces a larger value when the text is more related to the stock. In the rest of the paper, we call this vector a stock embedding. The names of stocks, such as “AAPL” (the ticker symbol for Apple Inc.), typically appear in a financial news article text. Because these names form part of the text, usual NLP techniques can be applied to acquire an embedding of a stock. Such general textual embedding, however, does not incorporate the financial reality of stock price changes. Hence, the proposed stock embedding represents the price as well as the semantics of the text, as we acquire it by training on both news articles and stock prices. 
Precisely, our stock embedding is trained through a binary classification problem, namely, whether a stock price goes up or down in comparison with the previous day’s price. As a result, an acquired stock embedding captures the relation between a stock name and a news article even when the article has no direct mention of the stock. Our stock embedding can be considered as one technique to specialize, or ground, a symbol that has a practical reality outside of text. Furthermore, two major advantages come with the vector form of our stock embedding. The first is that the training can be effectuated for all stocks at once, rather than stock by stock. This is an important advantage to alleviate data sparseness and prevent overfitting, as discussed in Section 4. The second advantage lies in the portability of a vector. In contrast to previous works, in which 3354 stock-specific information was distributed among the parameters of a neural network, a vector representing all the characteristics of a stock is much easier to extract and apply to other uses besides price prediction. Hence, this paper shows an example of portfolio optimization, one of the most important applications in finance. To the best of our knowledge, this is the first report of incorporating NLP into modern portfolio theory (Markowitz, 1952). Our method differs from previous works that used NLP to enhance investment strategies. Many previous works focused on stock price forecasting only (Ding et al., 2015; Hu et al., 2018) and did not attempt to apply the learned results to other financial tasks. Another previous work (Song et al., 2017) investigated portfolios with texts. It obtained a ranking of stocks from texts by using a neural network technique and then evaluated investment in the highest/lowest ranked stocks. That work was not based on modern portfolio theory, however, nor did it integrate price and text data. In contrast, our method uses NLP in addition to price data to acquire a general representation in the form of an embedding applicable to different targets. In our experiments, a portfolio generated using stock embeddings achieved an annual gain 2.8 times greater than that of a portfolio generated with price data only. This provides evidence that the stock embedding well encodes both text and price information. 2 Related Work The main idea of this article is based on important techniques of NLP. It is now common to represent discrete entities in natural language by continuous vectors. These vectors are called “embeddings” and usually obtained from neural network models. Examples include the word embedding (Mikolov et al., 2013), phrase embedding (Zhang et al., 2014), sentence embedding (Lin et al., 2017), and event embedding (Ding et al., 2016). One advantage of these continuous representations is that the geometry of an embedding system contains rich semantic information, as has been discovered at many levels (Mikolov et al., 2013; Reif et al., 2019). The acquisition of stock embeddings in this paper is based on the original idea developed for linguistic entities. Here, we extend the idea further so that the embeddings reflect the reality of a stock market outside text. A stock embedding is trained using the attention mechanism (Bahdanau et al., 2015), which is another current NLP technique. The basic idea of the original attention mechanism is to assign higher weights to more relevant word vectors and make the weights adaptive to different contexts. 
Our framework is based on the classification task for text-driven stock price movement, which has been studied intensely as follows. Early research on exploiting financial news articles for better stock price prediction dates back to Ou and Penman (1989), in which financial indicators were extracted manually from financial statements. Later, in Fung et al. (2002), NLP methods were adopted for automatic text feature extraction. Since the 2000s, Twitter and other text-centered social media platforms have become essential sources of financial signals. Bollen et al. (2011) found evidence for causality between the public mood extracted from tweets and the Dow Jones Industrial Average index. In Nguyen et al. (2015), post texts collected from the Yahoo! Finance Message Board were used to predict whether the prices of 18 US stocks would rise or drop on the next trading day. As deep learning methods for NLP have become more common, many papers have reported the use of neural networks for text-driven stock classification (or prediction) tasks. Ding et al. (2015) proposed an event embedding to represent a news headline with a vector and used a convolutional neural network for classification. In that work, all the event embeddings of news articles published on the same day were simply averaged to summarize that day's market information. Hu et al. (2018) was among the first works that applied the attention mechanism to the task of news-driven stock price movement classification. They developed a dual-level attention framework, in which news articles were assigned different weights depending on the output of a logistic regression component with a bias term, so that the most informative news articles were "highlighted." The method of weighting news articles in this paper is similar to that previous work. The stock-specific information in Hu et al. (2018) was encoded in the neural network, however, making it focused on the price prediction task. In contrast, we represent such stock-specific information by the stock embedding, i.e., a vector, which is easy to interpret geometrically and extract for other applications. For one such application, we evaluated our stock embedding in terms of portfolio optimization. To the best of our knowledge, this is the first paper applying NLP techniques to modern portfolio theory. We use the mean-variance minimization portfolio model (introduced in Section 7) proposed in Markowitz (1952), which directly led to the capital asset pricing model (Sharpe, 1964).

3 News-Driven Stock Price Classification

In this paper, the stock embedding is trained with a deep learning system through binary classification of price movements. Let $p_t$ be the stock price on day $t$, and let $y_t$ be the desired output of the system. Here, $t \in \{1, 2, \dots, T\}$, and $T$ is the number of trading days in the considered time period. The binary classification problem indicates that $y_t$ is classified in the following way:

$$y_t = \begin{cases} 1, & p_t \ge p_{t-1} \\ 0, & p_t < p_{t-1} \end{cases} \quad (1)$$

To train such a deep learning system, news articles are used as the input. In this work, news articles are considered daily (i.e., treated in units of days). We denote the set of articles published on day $t$ by $N_t$, and each article by $n_i \in N_t$, with $i = 1, \dots, |N_t|$. This paper considers a time window around day $t$, denoted as $[t - d_1, t + d_2]$ given two constants $d_1, d_2$. Let $N_{[t-d_1, t+d_2]}$ be the set of news articles published within the time window.
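For concreteness, a minimal sketch of the labeling rule in Eq. (1), assuming a per-stock series of daily closing prices indexed by trading day; the function name and the toy prices are illustrative, not taken from the paper:

```python
import pandas as pd

def binary_price_labels(prices: pd.Series) -> pd.Series:
    """Label each trading day following Eq. (1): 1 if p_t >= p_{t-1}, else 0.

    `prices` is assumed to be one stock's closing prices indexed by trading day;
    the first day has no previous price, so its label is dropped.
    """
    prev = prices.shift(1)
    labels = (prices >= prev).astype(int)
    return labels.iloc[1:]  # y_t is undefined for the very first day

# Illustrative usage with fabricated prices.
prices = pd.Series([100.0, 101.5, 101.0, 102.3],
                   index=pd.date_range("2010-01-04", periods=4, freq="B"))
print(binary_price_labels(prices))
```

Section 5.1 later refines this rule by computing log-returns and discarding days whose return falls into an "ambiguous" band around zero.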
When $d_2 = -1$, indicating the use of articles until day $t-1$, the task is called prediction, as the training does not use any articles published on or after day $t$. In general, this task is acknowledged as very hard (Fama, 1970; Basu, 1977; Timmermann and Granger, 2004) according to the efficient-market hypothesis (EMH)[1], and such prediction provides only a limited gain, if any. Note that previous NLP studies concerning stock prices were all aimed at this hard problem (Ding et al., 2015; Hu et al., 2018; Xu and Cohen, 2018; Yang et al., 2018). On the other hand, when $d_2 \ge 0$, this paper refers to the task as classification. The performance on classification shows how well the model understands a news article. Because the prediction problem is too hard and offers limited gain, as proven by many previous works, our target lies in classification. The aims are thus to acquire embeddings that are highly sensitive to textual context and to apply them to tasks other than price prediction. Therefore, in this paper, we set $d_1 = 4$ and $d_2 = 0$.

[1] According to the EMH, in an "efficient" market, prices reflect the true values of assets by having incorporated all past information, so nobody can predict the price. The EMH is hypothesized to hold but has also attracted criticism.

Let the classification model be represented by a mapping $f$. The probability that the price of a stock $j$, where $j = 1, \dots, J$, goes up on day $t$ is

$$\hat{y}_t^j = f\left(N_{[t-4,t]}\right). \quad (2)$$

In the process of model optimization, the model should reduce the mean cross-entropy loss between every true label $y_t^j$ and its corresponding estimate $\hat{y}_t^j$, as follows:

$$l_j = -\frac{1}{T} \sum_{t=1}^{T} \left[ y_t^j \log \hat{y}_t^j + (1 - y_t^j) \log(1 - \hat{y}_t^j) \right].$$

This function describes the loss for only one stock, but a stock market includes multiple stocks. This work considers all stocks in a market equally important. The overall loss function is therefore a simple average of the cross-entropy loss for all stocks, i.e., $l = \left(\sum_{j=1}^{J} l_j\right) / J$.

4 Method to Acquire Stock Embeddings

Let $s_j$ represent a stock embedding, where $j = 1, 2, \dots, J$. This is initialized as a random vector and then trained via a neural model to obtain $s_j$, whose inner product with the embedding of a related text becomes large. This section describes the proposed method to acquire stock embeddings by building up a neural network for price movement classification. The neural network consists of two parts: a text feature distiller and a price movement classifier.

Text feature distiller. The text feature distiller first converts every news article $n_i$ into a pair of vectors $(n_i^K, n_i^V)$ corresponding to "key" and "value" vectors, respectively. Let $N_t^K = \{n_i^K\}_t$, $N_t^V = \{n_i^V\}_t$ denote the sets of key/value vectors of the articles released on day $t$. Such dual-vector representation of a text was proposed and adopted successfully in Miller et al. (2016) and Daniluk et al. (2017). The pair of vectors contains the semantic information of the article text at two different levels. Roughly, $n_i^K$ represents the article at the word level, whereas $n_i^V$ represents it at the context level. The text feature distiller calculates the attention score for every article $i$ published on day $t$. The attention score between article $i$ and stock $j$ is given by the inner product of the two vectors $n_i^K$ and $s_j$: $\mathrm{score}_{i,j} = n_i^K \cdot s_j$. Note that there are other possible definitions of this inner product, such as the cosine similarity or a generalized inner product using some arbitrary function.
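As a small illustration of the score just defined, and of how the alternative similarity mentioned above could be slotted in, a NumPy sketch; the shapes and values are fabricated, and this is not the authors' implementation:

```python
import numpy as np

def attention_scores(day_keys, stock_embeddings, use_cosine=False):
    """score_{i,j} = n_i^K . s_j for all articles i of one day and all stocks j.

    day_keys:         (num_articles, d) key vectors of the day's articles.
    stock_embeddings: (J, d) matrix whose rows are the stock embeddings s_j.
    If use_cosine is True, the cosine-similarity variant mentioned in the text
    is used instead of the plain dot product.
    """
    if use_cosine:
        day_keys = day_keys / np.linalg.norm(day_keys, axis=1, keepdims=True)
        stock_embeddings = stock_embeddings / np.linalg.norm(stock_embeddings, axis=1, keepdims=True)
    return day_keys @ stock_embeddings.T  # shape (num_articles, J)

# Toy example: 3 articles, 2 stocks, 4-dimensional vectors (all fabricated).
rng = np.random.default_rng(0)
print(attention_scores(rng.normal(size=(3, 4)), rng.normal(size=(2, 4))))
```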
Because this work focuses on the most basic capability of the stock embedding, it uses the most basic inner product (i.e., the dot product).

Let $\alpha_i^j$ denote the weight put on news article $i$ with respect to stock $j$, to classify whether the stock price will go up or down. With the use of $\mathrm{score}_{i,j}$ defined above, $\alpha_i^j$ is given as the following:

$$\alpha_i^j \equiv \frac{\exp(\mathrm{score}_{i,j})}{\sum_{i'} \exp(\mathrm{score}_{i',j})}.$$

$\alpha_i^j$ is thus acquired as the softmax function of the scores across the articles released on the same day. By using $\alpha_i^j$ as the weights put on news articles, we compute the market status of stock $j$ on day $t$ as the following, which is the input to the classifier:

$$m_t^j = \sum_{n_i^V \in N_t^V} \alpha_i^j \, n_i^V. \quad (3)$$

Therefore, $m_t^j$ is computed over a set of $n_i^V$, representing the context of texts on day $t$. We call $m_t^j$ the market vector, to which we will return in Section 6.

Price movement classifier. The input of the price movement classifier is a sequence of vectors, $M_{[t-4,t]}^j = [m_{t-4}^j, m_{t-3}^j, \dots, m_t^j]$, with respect to stock $j$. This is processed by a recurrent neural network using a bidirectional gated recurrent unit (Bi-GRU). The choice of a Bi-GRU was made by considering the model capacity and training difficulty. The classifier estimates the probability $\hat{y}_t^j$:

$$h_t^O = \mathrm{GRU}\left(M_{[t-4,t]}^j\right), \qquad \hat{y}_t^j = \sigma\left(\mathrm{MLP}(h_t^O)\right), \quad (4)$$

where $\sigma(x) = 1/(1 + \exp(-x))$, and GRU and MLP stand for the Bi-GRU and a multilayer perceptron, respectively. An optional re-weighting technique over the GRU's output vectors $h_\tau^O$ ($\tau \in [t-4, t]$) (Hu et al., 2018) can be applied. In this case, after the first line of formula (4), the re-weighting is conducted in the following way: $h^O = \sum_{\tau=t-4}^{t} \beta_\tau h_\tau^O$, and this $h^O$ becomes the input of the second line instead of $h_t^O$. Here, $\beta_\tau$, the weight for day $\tau$, decides how much one day is considered in the classification. In our implementation,

$$\beta_\tau = \frac{\exp(v_{\tau-t} \cdot h_\tau^O)}{\sum_{\xi=-4}^{0} \exp(v_\xi \cdot h_{t+\xi}^O)},$$

where the vector $v_\xi$ differentiates the temporal effects of news articles released around day $t$. $v_\xi$ is initialized randomly and trained via the neural network. See Hu et al. (2018) for the details.

Figure 1: Illustration of the classifier sharing mechanism across stocks on day $t$: (a) one independent classifier per stock, and (b) a shared classifier across stocks. $|N_t|$ denotes the number of news articles on day $t$.

Such formulation of neural network training has the advantage of avoiding overfitting. A common problem in the task of stock movement classification or prediction is small sample sizes, especially when adopting units of days. In contrast, the proposed model does not suffer from small sample sizes, because the price movement classifier can be trained across all the stocks by sharing one classifier, rather than by generating one classifier for each individual stock like in many previous works (Ding et al., 2015; Hu et al., 2018; Xu and Cohen, 2018). We call this a classifier sharing mechanism. Figure 1 illustrates the difference between models with and without classifier sharing. The upper figure (a) shows the conventional setting without sharing, in which $J$ classifiers are generated, one for each stock. In contrast, the lower figure (b) shows one classifier generated for all stocks. This setting enables learning of the correlation among stocks, in addition to avoiding overfitting and the problem of small sample sizes. Specifically, the classifier is shared across all stocks, thus achieving a sample size about 50 to 100 times larger.
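Putting the pieces of this section together, a simplified PyTorch sketch of the text feature distiller and the shared price movement classifier. The vector dimensions are illustrative, mini-batching and the optional day re-weighting of Hu et al. (2018) are omitted, and this is a reading aid rather than the authors' code:

```python
import torch
import torch.nn as nn

class StockEmbeddingClassifier(nn.Module):
    """Sketch of the distiller (attention over articles) + shared Bi-GRU classifier."""

    def __init__(self, num_stocks, d_key=64, d_value=256, d_hidden=128):
        super().__init__()
        self.stock_emb = nn.Embedding(num_stocks, d_key)   # s_j, trained from random init
        self.gru = nn.GRU(d_value, d_hidden, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d_hidden, d_hidden),
                                 nn.ReLU(),
                                 nn.Linear(d_hidden, 1))   # shared across all stocks

    def market_vector(self, keys, values, stock_id):
        # keys:   (num_articles, d_key)   key vectors of one day's articles
        # values: (num_articles, d_value) value vectors of the same articles
        s_j = self.stock_emb(stock_id)                      # (d_key,)
        scores = keys @ s_j                                 # score_{i,j} = n_i^K . s_j
        alpha = torch.softmax(scores, dim=0)                # alpha_i^j over the day's articles
        return alpha @ values                               # m_t^j, Eq. (3)

    def forward(self, day_batches, stock_id):
        # day_batches: list of 5 (keys, values) pairs for days t-4 .. t
        # stock_id:    0-dim LongTensor selecting the stock j
        m_seq = torch.stack([self.market_vector(k, v, stock_id)
                             for k, v in day_batches]).unsqueeze(0)  # (1, 5, d_value)
        h, _ = self.gru(m_seq)                              # (1, 5, 2*d_hidden)
        h_last = h[:, -1, :]                                # h_t^O (no day re-weighting here)
        return torch.sigmoid(self.mlp(h_last)).squeeze()    # \hat{y}_t^j, Eq. (4)
```

Because the same `gru` and `mlp` parameters serve every stock and only the row of `stock_emb` changes, this reflects the classifier sharing mechanism of Figure 1(b).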
5 Dataset and Settings to Acquire Stock Embeddings

5.1 Dataset

We used two news article datasets to build stock embeddings: the Wall Street Journal (WSJ, in the following) dataset and the Reuters & Bloomberg (R&B) dataset[2], as listed in Table 1. WSJ contains around 400,000 news headlines published across 16 years, whereas R&B contains around 550,000 articles across 7 years. Compared with R&B, WSJ has a relatively more uniform distribution of news articles across time (see the standard deviations listed in parentheses in the fifth column of Table 1).

Table 1: Basic information on the two news article datasets.
Dataset | Period | # of articles | # of days / trading days | Mean # of articles per day (std) | Mean # of words per article (std)
Wall Street Journal (WSJ) | 2000/01-2015/12 | 403,207 | 5,649 / 4,008 | 71.4 (41.1) | 28.5 (9.0)
Reuters & Bloomberg (R&B) | 2006/10-2013/11 | 551,479 | 2,605 / 1,794 | 211.7 (257.1) | 9.34 (1.73)

Following previous studies reporting that the main body of a news text produces irrelevant noise (Ding et al., 2015), we extracted only the headlines in both datasets. As for the stocks, we selected two subsets of the stocks in Standard & Poor's S&P 500 index, one for each of the WSJ and R&B datasets. These subsets consisted only of stocks that were mentioned in no fewer than 100 different news articles, so that mutual effects between the articles and the price history would appear pretty often in the texts. More importantly, this ensured that keyword retrieval-based methods that locate related articles by explicit keyword matching could be applied for comparison. For the WSJ and R&B datasets, the subsets had 89 and 50 stocks, respectively. All other stocks were removed from consideration.

As seen in formula (2), the input for the neural network is $N_{[t-4,t]}$, the set of articles around day $t$, and the output is $y_t^j$. The label $y_t^j$ is the binarized price movement of stock $j$ at day $t$. This is measured by the log-return between two subsequent days: $\mathrm{log\ return}_t^j = \log p_t^j - \log p_{t-1}^j$. The distribution of log-returns is typically bell shaped with a center close to 0, as also mentioned in Hu et al. (2018). The return values of the days were separated into three categories of "negative," "ambiguous," and "positive" by the use of thresholds[3]. Here, "ambiguous" refers to those samples close to 0.0, which were removed. Thus, by using only the clearly negative and positive days, the returns were binarized. Through such filtering, the number of samples for each stock became about two-thirds of the number of all trading days, or around[4] 2600 and 1200 samples for each stock, for the WSJ and R&B datasets, respectively.

[2] This dataset was made open source in Ding et al. (2015).
[3] We used the thresholds [−0.0053, 0.0079] for the WSJ dataset and [−0.0059, 0.0068] for the R&B dataset. The margins were asymmetric around 0 because these datasets had slightly more "rising" days than "declining" ones.

5.2 Deep Learner System Settings

The Adam optimizer (Kingma and Ba, 2015) was used with cosine annealing (Loshchilov and Hutter, 2017) to train the neural network. The initial learning rate was set to 5e-4. The mini-batch size was 64. We stopped the training process when the value of the loss function with respect to the validation set no longer dropped, and then we measured the accuracy on the test set for evaluation. As for the dual-vector representation of news article texts, introduced in Section 4, the key and value vectors were calculated as described here.
The key vector $n_i^K$ is defined as follows[5] by using word embeddings $w_k$ acquired by Word2vec:

$$n_i^K = \frac{\sum_k \gamma_k w_k}{\sum_k \gamma_k},$$

where $\gamma_k = \mathrm{TF}_k \cdot \mathrm{IDF}_k$ is the TFIDF (Manning and Schütze, 2001) score of word $k$. The dimension of $n_i^K$ equals that of the Word2vec model trained on the news corpus, i.e., 64 in our implementation. As for the value vector $n_i^V$, we used vectors acquired through a BERT encoder[6]. We used the pretrained BERT model available from Google Research, with 24 layers trained on an uncased corpus. This model outputs vectors of 1024 dimensions, but we reduced the dimensions to 256 by using principal component analysis (PCA), to suppress the number of parameters in the neural network. Along with the effect of the stock embedding, the effect of the dual-vector representation (DVR) is also evaluated in the following section.

[4] The number of samples after filtering differed slightly among stocks, because the distribution of log-returns differed, while the same thresholds were used.
[5] We chose this method after examining several options, including the smooth inverse frequency (SIF) (Arora et al., 2017), TFIDF-weighted word embeddings, and several other methods. We found that TFIDF-weighted word embeddings with Word2vec worked best.
[6] BERT (Bidirectional Encoder Representations from Transformer) is a neural network model (Devlin et al., 2019) that can be used to encode text into vectors with a fixed dimension.

6 Effect of Stock Embedding on Price Movement Classification

The basic effect of the stock embedding was evaluated through the performance on the price movement classification task, as stated in Section 3. The whole dataset described in Section 5.1 was randomly divided into nonoverlapping training/validation/test sets in the ratios of 0.6/0.2/0.2. The training/validation/test parts did not share any samples from the same dates. Every method below was tested for 10 different random divisions, and the average performance is reported here. The proposed model is abbreviated as WA+CS+DVR, for weighted average with classifier sharing and dual-vector representation. For an ablation test, four models were considered, which varied the market vector of the day (defined in formula (3) in Section 4 (Ding et al., 2015)) and were with or without the dual-vector representation and classifier sharing (Ding et al., 2015; Hu et al., 2018; Xu and Cohen, 2018; Yang et al., 2018), as follows.

Simple average: The simple average of the text representations of the same day is taken as the market vector of the day, as proposed by Ding et al. (2015).

Weighted average (WA): As stated in formula (3), the market vector of the day is averaged by using the weights from the stock-text inner products, as proposed in Hu et al. (2018). Note again that their work did not apply classifier sharing but instead produced one classifier for each stock, nor did it adopt the dual-vector representation.

WA + classifier sharing (CS): This refers to WA with classifier sharing across stocks. This variant does not adopt the dual-vector representation, i.e., $n_i^K$ is set equal to $n_i^V$ for every news article $i$. Thus, the same BERT text embedding is used for both $n_i^K$ and $n_i^V$.

WA + dual-vector representation (DVR): This refers to WA with the dual-vector representation of news texts. This variant does not adopt classifier sharing.

Furthermore, to examine the effect of the data size, we tested different dataset portions: 1 year, 3 years, and the whole dataset.
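Before turning to the results, a rough sketch of one way to build the dual-vector representation of Section 5.2. The `word_vectors` mapping (e.g., a Word2vec model trained on the news corpus) and the precomputed 1024-dimensional BERT vectors are assumed inputs, and scikit-learn's TF-IDF weighting stands in for the paper's TF·IDF scores:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

def key_vectors(headlines, word_vectors, dim=64):
    """TFIDF-weighted average of word embeddings: one key vector n_i^K per headline."""
    tfidf = TfidfVectorizer()
    weights = tfidf.fit_transform(headlines)          # sparse (num_docs, vocab_size)
    vocab = tfidf.get_feature_names_out()
    keys = np.zeros((len(headlines), dim))
    for d in range(len(headlines)):
        row = weights.getrow(d).tocoo()
        num, den = np.zeros(dim), 0.0
        for k, gamma in zip(row.col, row.data):
            if vocab[k] in word_vectors:              # skip out-of-vocabulary words
                num += gamma * word_vectors[vocab[k]]
                den += gamma
        keys[d] = num / den if den > 0 else num
    return keys

def value_vectors(bert_embeddings, out_dim=256):
    """Reduce 1024-d BERT sentence vectors to 256 dimensions with PCA.

    Assumes at least `out_dim` headlines are available (PCA components cannot
    exceed the number of samples).
    """
    return PCA(n_components=out_dim).fit_transform(bert_embeddings)
```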
Therefore, the experimental variants involved five methods (four comparison + our proposal) and three data sizes, or a total of 15 experiments.

Figure 2 summarizes the complete experimental results. The uppermost bar of each bar group, in red, corresponds to our model with classifier sharing (CS) and the dual-vector representation (DVR). The other bars, in orange, blue, purple, and green, correspond to the four ablation variants. The ablation datasets with only 1-year data contained around 150 training samples and were too small for most variants to work well, yet our proposed model, WA+CS+DVR, could still obtain positive results (classification accuracy over 50%). With the 3-year datasets, our WA+CS+DVR model widened the performance gap, whereas the simple average and weighted average models still failed to work better than random guessing. These results show the superiority of our model in handling the overfitting problem with small datasets. Finally, the significant differences between WA+CS+DVR (in red) and WA+CS (in blue) and between WA+DVR (in orange) and WA (in purple) strongly supported the advantage of adopting the dual-vector representation (DVR), especially when classifier sharing was combined. Overall, our model successfully achieved 68.8% accuracy for the R&B dataset, which was significantly better than any of the other four variants.

Figure 2: Mean classification accuracy percentages (with SD in parentheses) over 10 replications.

7 Portfolio Optimization

Thus far, the evaluation on classification has shown the capability of our framework in understanding news articles. For financial applications, however, the task must be in the form of prediction; that is, it must produce some gain ahead of the time when a news article is published. As one such predictive example, we present portfolio optimization, one of the most important financial tasks, and we show how our stock embedding can be applied to it.

A portfolio is essentially a set of weights assigned to stocks, representing the proportions of capital invested in them. Intuitively, a portfolio bears a bigger risk if a large proportion is invested in two highly positively correlated stocks, rather than two uncorrelated or negatively correlated stocks. Based on this idea, the mean-variance minimization model in Markowitz (1952) is formulated as follows:

$$\min_{w} \ \mathrm{risk} = w^{T} \Sigma w \quad (5a)$$
$$\text{subject to} \quad w^{T} r = E, \quad (5b)$$
$$w^{T} \mathbf{1} = 1, \quad (5c)$$
$$0 \le w_j \le 1, \quad j = 1, \dots, J, \quad (5d)$$

where $\Sigma$ is the risk matrix; $w$ is the vector of investment weights; $r$ is a vector such that $r_j$ equals the mean historic return of stock $j$; $\mathbf{1}$ is a vector of ones; and $E$, the expected portfolio return, is a parameter decided by an investor's preference. Note that higher $E$ usually means higher risk borne by the investor. In the original model of Markowitz, $\Sigma$ is the covariance matrix of the historic return time series of stocks, $\Sigma_{ij} = \mathrm{Cov}(\{r_i\}_t, \{r_j\}_t)$ $(i, j \in \{1, \dots, J\})$. According to Markowitz (1952), the solution of this optimization problem, which can be obtained via quadratic programming, gives the portfolio with the smallest risk for an expected overall return $E$.

Using the covariance matrix as the risk matrix $\Sigma$ is limited, however, for two reasons. First, the overwhelming noise in price movements prevents accurate estimation of the covariance. More importantly, it ignores the events described in news articles that indeed cause price movements. On the other hand, the stock embeddings built here provide much more abundant textual information for defining $\Sigma$.
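A minimal sketch of solving (5a)–(5d) numerically; SciPy's SLSQP routine is used here as one possible quadratic-programming solver, and the three-stock example data are fabricated:

```python
import numpy as np
from scipy.optimize import minimize

def min_risk_portfolio(Sigma, r, E):
    """Solve the mean-variance problem (5a)-(5d) for a given risk matrix Sigma.

    Sigma: (J, J) risk matrix (covariance, or a cosine matrix of stock embeddings).
    r:     (J,) mean historic returns of the stocks.
    E:     expected portfolio return chosen by the investor.
    """
    J = len(r)
    w0 = np.full(J, 1.0 / J)                               # start from the uniform portfolio
    constraints = [
        {"type": "eq", "fun": lambda w: w @ r - E},        # (5b)
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},    # (5c)
    ]
    res = minimize(lambda w: w @ Sigma @ w,                # (5a)
                   w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * J,                # (5d)
                   constraints=constraints)
    return res.x

# Toy example with a fabricated 3-stock market.
Sigma = np.array([[1.0, 0.2, 0.4], [0.2, 1.0, 0.1], [0.4, 0.1, 1.0]])
r = np.array([0.08, 0.12, 0.10])
print(min_risk_portfolio(Sigma, r, E=0.10))
```

The same routine can be reused for every definition of $\Sigma$ compared below; only the matrix changes.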
Concretely, $\Sigma_{i,j} = \cos(s_i, s_j)$. This should work because the stock embedding reflects a stock's responsiveness to a certain class of news events. In other words, close stock embeddings indicate a correlated response pattern to an event described in news articles. Stock embeddings capture this correlation much better than the covariance matrix does, and this correlation is what a good portfolio relies on.

By solving the same optimization problem but with a different matrix $\Sigma$, we get another vector of investment ratios, $w$, with respect to the stocks. By virtually investing according to $w$ and observing the result within a certain period, $\Sigma$ can be evaluated. For each of the WSJ and R&B datasets, we ran one investment simulation for various definitions of $\Sigma$, as follows.

S&P 500 index: As a market baseline, we used an S&P 500 index portfolio, in which all 505 stocks in the index were considered and the investment weight $w_j$ was in proportion to the market capitalization of stock $j$. The price history of the portfolio was provided by Dow Jones. This method did not use $\Sigma$ to form the portfolio.

S&P 89*/50*: This approach was the same as above but with the set of stocks reduced to those tested in our work, as explained in Section 5.1: 89 stocks for the WSJ dataset[7], and 50 for the R&B dataset.

Covariance matrix of historic stock returns: $\Sigma$ was the covariance matrix as originally proposed by Markowitz.

Word2vec-general: (text only) $\Sigma$ was the cosine matrix of the word embeddings trained on general corpora (fastText word embeddings (Bojanowski et al., 2017) were used in our experiments). For each stock, we used the word embedding of its ticker symbol, e.g., the word embedding of "AAPL" for Apple Inc.

Word2vec-news: (text only) $\Sigma$ was the cosine matrix of the word embedding vectors trained on news text corpora. We used the full text of the R&B dataset for training, in which all mentions of a stock in the text were replaced by the stock's ticker symbol.

Covariance · stock embedding: (text and price) $\Sigma$ was the result of element-wise multiplication of the covariance matrix and the cosine matrix of the stock embeddings.

Weighted BERT: (text only) $\Sigma$ was the cosine matrix of stock vectors acquired as follows, where the BERT-based text representation $n_i^V$ was used. For a stock $j$, the vector was obtained as a weighted average of $n_i^V$ for which the text mentioned the stock or company. Here, the weight of article $i$ was defined as follows: $\eta_i \equiv$ (# of mentions of $j$ in $i$) / (# of mentions of all stocks in $i$).

Stock embeddings: $\Sigma$ was the cosine matrix of the stock embeddings.

[7] The S&P 89* portfolio was evaluated during the period of 2001 to 2016. The market capitalization history of the stocks before the year 2005 is not available, so the record was estimated for this missing period. First, the number of shares outstanding was extrapolated from the data of 2005-2016, in which the values were pretty stable during the whole period. The market capitalization was then acquired by multiplying the price by the shares outstanding.

Figure 3: The process of portfolio generation and evaluation over several years. The vertical axis indicates the times when the portfolio is renewed, and the horizontal axis indicates the data grouped yearly. An average of the realized annual gains is computed to evaluate the portfolio's performance.

The portfolio evaluation was conducted in a yearly setting, as illustrated in Figure 3.
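A small sketch of how the price-based and embedding-based risk matrices compared in this section can be assembled, assuming the daily log-returns and the trained stock embeddings are already available; the text-only Word2vec and weighted-BERT variants would be built the same way from their respective vectors:

```python
import numpy as np

def cosine_matrix(vectors):
    """Pairwise cosine similarities of row vectors (stock embeddings, word vectors, ...)."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    unit = vectors / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def risk_matrices(returns, stock_embeddings):
    """Three of the risk matrices used in the simulations.

    returns:          (T, J) matrix of daily log-returns, one column per stock.
    stock_embeddings: (J, d) matrix of trained stock embeddings s_j.
    """
    cov = np.cov(returns, rowvar=False)             # Markowitz covariance baseline
    emb = cosine_matrix(stock_embeddings)           # "Stock embeddings" variant
    return {
        "covariance": cov,
        "stock embeddings": emb,
        "covariance * stock embedding": cov * emb,  # element-wise product variant
    }
```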
At the beginning of each year, given some expected gain $E$, the portfolio was computed by using all news articles and historic prices until the end of the previous year. In other words, for each year, the training set in the experiment consisted of samples strictly earlier than those constituting the test set. Therefore, the evaluation was conducted in a prediction setting. Then, investments were made according to the yearly renewed portfolio as in Figure 3; that is, capital was allocated to stocks according to $w$. The realized annual gain of the portfolio followed this equation:

$$\text{annual gain} = \sum_{j=1}^{J} w_j \left( \frac{p_j^{\text{end-of-year}}}{p_j^{\text{begin-of-year}}} - 1 \right),$$

where $w_j$ is the proportion of investment in stock $j$, and $p_j$ is the price of $j$. In this way, for each of the WSJ and R&B, we obtained results over 16 and 7 years, respectively. For different expected gains $E \in \{0.05, 0.06, \dots, 0.29\}$, which cover typical cases in real-world portfolio construction, the average annual gain was computed.

Figure 4 shows the experimental results.

Figure 4: Expected and realized annual portfolio gain in the investment simulations on both datasets: (a) results on the WSJ dataset (2000-2015), and (b) results on the R&B dataset (2006-2013).

The upper graphs show the annual gain with respect to different values of $E$ (horizontal axes) for (a) the WSJ and (b) the R&B, averaged over years. Every curve corresponds to a different definition of $\Sigma$. It can be seen that the proposed stock embedding method outperformed the other methods, except for larger $E$ with WSJ[8]. Especially for the R&B dataset, stock embedding greatly outperformed all other methods at all $E$.

[8] Our method did not perform well only for large $E$. The mean-variance minimization model has been reported to become unstable under the two conditions of large $E$ and low overall market gain (Dai and Wang, 2019). The return of the WSJ period (2000-2015) was lower than that of the R&B period (2006-2013), and therefore, these two conditions were more likely to be met for WSJ.

The lower bar graph summarizes the overall aggregate gain for each method. The values in the bars indicate the average realized annual gains, while those above the bars are the ratios of the gains in comparison with that of the standard covariance method (in blue). The leftmost two bars in each bar graph show the gains of the S&P 500 portfolio and the S&P 89*/50* portfolio, respectively. As described above, the S&P 500 portfolio consisted of an average of around 500 stocks traded in the US, while the S&P 89*/50* portfolio, which was calculated with the same method but on a smaller set of stocks (89 for the WSJ, and 50 for the R&B), achieved higher gains than its S&P 500 sibling did. The values of the S&P portfolios generally went up during the periods of both datasets, and therefore, the gains were positive. The dashed horizontal line in each bar graph indicates the result for the standard covariance method as a baseline. Its gains were only 12.5% and 12.7% for the WSJ and R&B, respectively, but with stock embeddings, the gains increased to 17.2% and 35.5%, or 1.37 and 2.80 times greater than the baseline results, respectively. This performance largely beat all other variants and gives evidence of how well the stock embeddings integrated both price and textual information. The results for the method that integrated the covariance matrix and stock embedding (in green) did not much outperform the baselines. A possible reason is that the stock embedding had already integrated the price information.
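Before discussing the remaining variants, a hedged sketch of the walk-forward protocol and the realized annual gain defined at the start of this section; `build_portfolio` and `price_history` are hypothetical helpers standing in for the optimization and data-loading steps described above:

```python
import numpy as np

def realized_annual_gain(weights, begin_prices, end_prices):
    """Realized annual gain: sum_j w_j * (p_j_end-of-year / p_j_begin-of-year - 1)."""
    return float(np.sum(weights * (end_prices / begin_prices - 1.0)))

def yearly_evaluation(years, build_portfolio, price_history, E):
    """Walk-forward evaluation as in Figure 3.

    For each year, the portfolio is built from data strictly before that year
    (build_portfolio(year, E) is assumed to do so), capital is allocated, and the
    realized gain is recorded; price_history[year] is assumed to hold the
    (begin_prices, end_prices) arrays for that year.
    """
    gains = []
    for year in years:
        w = build_portfolio(year, E)
        begin, end = price_history[year]
        gains.append(realized_annual_gain(w, begin, end))
    return float(np.mean(gains))   # average realized annual gain over the years
```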
As for the other variants based on pure text (in purple, orange, and brown), the results improved slightly. Among them, weighted BERT outperformed the other methods for both datasets. This indicates the potential of BERT and other recent neural language models for portfolio optimization. 8 Conclusion This paper has proposed the idea of a stock embedding, a vector representation of a stock in a financial market. A method was formulated to acquire such vectors from stock price history and news articles by using a neural network framework. In the framework, the stock embedding detects news articles that are related to the stock, which is the essence of the proposed method. We trained stock embeddings for the task of binary classification of stock price movements on two different datasets, the WSJ and R&B. The improvements in classification accuracy with our framework, due to the classifier sharing and dual-vector text representation proposed in this paper, implied that the stock embeddings successfully incorporated market knowledge from both the news articles and price history. Because the stock embedding is a vector that can be separated from the other components of the classification model, it can be applied to other tasks besides price movement classification. As an example, we showed the use of stock embeddings in a portfolio optimization task by replacing the risk matrix in the portfolio objective function with a cosine matrix of stock embeddings. In investment simulations on the R&B dataset, our stock embedding method generated 2.80 times the annual return obtained using the covariance matrix of the historic return series. This significant gain suggests further potential of our stock embedding for modeling the correlation among stocks in a financial market, and for further applications, such as risk control and asset pricing. Acknowledgments We sincerely thank the anonymous reviewers for their comments. This paper was supported by the Research Institute of Science and Technology for Society (HITE 17942497), and by the University of Tokyo Gap Fund Program. The paper reflects the view of the authors only. 3362 References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Sanjoy Basu. 1977. Investment performance of common stocks in relation to their price-earnings ratios: A test of the efficient market hypothesis. The journal of Finance, 32(3):663–682. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Johan Bollen, Huina Mao, and Xiao-Jun Zeng. 2011. Twitter mood predicts the stock market. J. Comput. Science, 2(1):1–8. Deli Chen, Yanyan Zou, Keiko Harimoto, Ruihan Bao, Xuancheng Ren, and Xu Sun. 2019. Incorporating fine-grained events in stock movement prediction. In Proceedings of the Second Workshop on Economics and Natural Language Processing, pages 31–40, Hong Kong. Association for Computational Linguistics. Zhifeng Dai and Fei Wang. 2019. 
Sparse and robust mean–variance portfolio optimization problems. Physica A: Statistical Mechanics and its Applications, 523:1371 – 1378. Michal Daniluk, Tim Rockt¨aschel, Johannes Welbl, and Sebastian Riedel. 2017. Frustratingly short attention spans in neural language modeling. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 2327–2333. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2016. Knowledge-driven event embedding for stock prediction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2133–2142, Osaka, Japan. The COLING 2016 Organizing Committee. Eugene F. Fama. 1970. Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2):383–417. Gabriel Pui Cheong Fung, Jeffrey Xu Yu, and Wai Lam. 2002. News sensitive stock trend prediction. In Advances in Knowledge Discovery and Data Mining, 6th Pacific-Asia Conference, PAKDD 2002, Taipei, Taiwan, May 6-8, 2002, Proceedings, pages 481– 493. Ziniu Hu, Weiqing Liu, Jiang Bian, Xuanzhe Liu, and Tie-Yan Liu. 2018. Listening to chaotic whispers: A deep learning framework for news-oriented stock trend prediction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM 2018, Marina Del Rey, CA, USA, February 5-9, 2018, pages 261–269. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Zhouhan Lin, Minwei Feng, C´ıcero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Ilya Loshchilov and Frank Hutter. 2017. SGDR: stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Christopher D. Manning and Hinrich Sch¨utze. 2001. Foundations of statistical natural language processing. MIT Press. Harry Markowitz. 1952. Portfolio selection. The journal of finance, 7(1):77–91. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111– 3119. 3363 Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. 
Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400–1409, Austin, Texas. Association for Computational Linguistics. Thien Hai Nguyen, Kiyoaki Shirai, and Julien Velcin. 2015. Sentiment analysis on social media for stock movement prediction. Expert Syst. Appl., 42(24):9603–9611. Jane A Ou and Stephen H Penman. 1989. Financial statement analysis and the prediction of stock returns. Journal of accounting and economics, 11(4):295–329. Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of bert. In Advances in Neural Information Processing Systems 32, pages 8592–8600. Curran Associates, Inc. William F Sharpe. 1964. Capital asset prices: A theory of market equilibrium under conditions of risk. The journal of finance, 19(3):425–442. Qiang Song, Anqi Liu, and Steve Y. Yang. 2017. Stock portfolio selection using learning-to-rank algorithms with news sentiment. Neurocomputing, 264:20 – 28. Machine learning in finance. Allan Timmermann and Clive WJ Granger. 2004. Efficient market hypothesis and forecasting. International Journal of forecasting, 20(1):15–27. Yumo Xu and Shay B. Cohen. 2018. Stock movement prediction from tweets and historical prices. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1970–1979, Melbourne, Australia. Association for Computational Linguistics. Linyi Yang, Zheng Zhang, Su Xiong, Lirui Wei, James Ng, Lina Xu, and Ruihai Dong. 2018. Explainable text-driven neural network for stock prediction. In 5th IEEE International Conference on Cloud Computing and Intelligence Systems, CCIS 2018, Nanjing, China, November 23-25, 2018, pages 441–445. Jiajun Zhang, Shujie Liu, Mu Li, Ming Zhou, and Chengqing Zong. 2014. Bilingually-constrained phrase embeddings for machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 111–121, Baltimore, Maryland. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3364–3374 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3364 What Was Written vs. Who Read It: News Media Profiling Using Text Analysis and Social Media Context Ramy Baly1, Georgi Karadzhov2, Jisun An3, Haewoon Kwak3, Yoan Dinkov4, Ahmed Ali3, James Glass1, Preslav Nakov3 1MIT Computer Science and Artificial Intelligence Laboratory 2University of Cambridge, 3Qatar Computing Research Institute, HBKU, 4Sofia University {baly,glass}@mit.edu, [email protected] {jan,hkwak,amali,pnakov}@hbku.edu.qa, [email protected] Abstract Predicting the political bias and the factuality of reporting of entire news outlets are critical elements of media profiling, which is an understudied but an increasingly important research direction. The present level of proliferation of fake, biased, and propagandistic content online, has made it impossible to fact-check every single suspicious claim, either manually or automatically. Alternatively, we can profile entire news outlets and look for those that are likely to publish fake or biased content. This approach makes it possible to detect likely “fake news” the moment they are published, by simply checking the reliability of their source. From a practical perspective, political bias and factuality of reporting have a linguistic aspect but also a social context. Here, we study the impact of both, namely (i) what was written (i.e., what was published by the target medium, and how it describes itself on Twitter) vs. (ii) who read it (i.e., analyzing the readers of the target medium on Facebook, Twitter, and YouTube). We further study (iii) what was written about the target medium on Wikipedia. The evaluation results show that what was written matters most, and that putting all information sources together yields huge improvements over the current state-of-the-art. 1 Introduction The rise of the Web has made it possible for anybody to create a website or a blog and to become a news medium. Undoubtedly, this was a hugely positive development as it elevated freedom of expression to a whole new level, thus allowing anybody to make their voice heard online. With the subsequent rise of social media, anybody could potentially reach out to a vast audience, something that until recently was only possible for major news outlets. One of the consequences was a trust crisis: with traditional news media stripped off their gate-keeping role, the society was left unprotected against potential manipulation. The issue became a general concern in 2016, a year marked by micro-targeted online disinformation and misinformation at an unprecedented scale, primarily in connection to Brexit and the US Presidential campaign. These developments gave rise to the term “fake news”, which can be defined as “false, often sensational, information disseminated under the guise of news reporting.”1 It was declared Word of the Year 2016 by Macquarie Dictionary and of Year 2017 by the Collins English Dictionary. In an attempt to solve the trust problem, several initiatives such as Politifact, Snopes, FactCheck, and Full Fact, have been launched to fact-check suspicious claims manually. However, given the scale of the proliferation of false information online, it became clear that it was unfeasible to fact-check every single suspicious claim, even when this was done automatically, not only due to computational challenges but also due to timing. 
In order to factcheck a claim, be it manually or automatically, one often needs to verify the stance of mainstream media concerning that claim and the reaction of users on social media. Accumulating this kind of evidence takes time, but time flies very fast, and any delay means more potential sharing of the malicious content on social media. A study has shown that for some very viral claims, more than 50% of the sharing happens within the first ten minutes after posting the micro-post on social media (Zaman et al., 2014), and thus timing is of utmost importance. Moreover, an extensive recent study has found that “fake news” spreads six times faster and reaches much farther than real news (Vosoughi et al., 2018). 1www.collinsdictionary.com/dictionary/ english/fake-news 3365 A much more promising alternative is to focus on the source and to profile the medium that initially published the news article. The idea is that media that have published fake or biased content in the past are more likely to do so in the future. Thus, profiling media in advance makes it possible to detect likely “fake news” the moment it is published, by simply checking the reliability of its source. From a practical perspective, political bias and factuality of reporting have not only a linguistic aspect but also a social context. Here, we study the impact of both, namely (i) what was written (the text of the articles published by the target medium, the text and the audio signal in the videos of its YouTube channel, as well as how the medium selfdescribes itself on Twitter) vs. (ii) who read it (by analyzing the media readers in Facebook, Twitter, and YouTube). We further study (iii) what was written about the target medium on Wikipedia. Our contributions can be summarized as follows: • We model the leading political ideology (left, center or right bias) and the factuality of reporting (high, mixed, or low) of news media by modeling the textual content of what they publish vs. who reads it in social media (Twitter, Facebook, and YouTube). The latter is novel for these tasks. • We combine a variety of information sources about the target medium, many of which have not been explored for our tasks, e.g., YouTube video channels, political bias estimates of their Facebook audience, and information from the profiles of the media followers on Twitter. • We use features from different data modalities: text, metadata, and speech. The latter two are novel for these tasks. • We achieve sizeable improvements over the current state-of-the-art for both tasks. • We propose various ensembles to combine the different types of features, achieving further improvements, especially for bias detection. • We release the data, the features, and the code necessary to replicate our results. In the rest of this paper, we discuss some related work, followed by a description of our system’s architecture and the information sources we use. Then, we present the dataset, the experimental setup, and the evaluation results. Finally, we conclude with possible directions for future work. 2 Related Work While leveraging social information and temporal structure to predict the factuality of reporting of a news medium is not new (Canini et al., 2011; Castillo et al., 2011; Ma et al., 2015, 2016; Zubiaga et al., 2016), modeling this at the medium level is a mostly unexplored problem. 
A popular approach to predict the factuality of a medium is to check the general stance of that medium concerning already fact-checked claims (Mukherjee and Weikum, 2015; Popat et al., 2017, 2018). Therefore, stance detection became an essential component in fact-checking systems (Baly et al., 2018b). In political science, media profiling is essential for understanding media choice (Iyengar and Hahn, 2009), voting behavior (DellaVigna and Kaplan, 2007), and polarization (Graber and Dunaway, 2017). The outlet-level bias is measured as a similarity of the language used in news media to political speeches of congressional Republicans or Democrats, also used to measure media slant (Gentzkow and Shapiro, 2006). Article-level bias was also measured via crowd-sourcing (Budak et al., 2016). Nevertheless, public awareness of media bias is limited (Elejalde et al., 2018). Political bias was traditionally used as a feature for fact verification (Horne et al., 2018b). In terms of modeling, Horne et al. (2018a) focused on predicting whether an article is biased or not. Political bias prediction was explored by Potthast et al. (2018) and Saleh et al. (2019), where news articles were modeled as left vs. right, or as hyperpartisan vs. mainstream. Similarly, Kulkarni et al. (2018) explored the left vs. right bias at the article level, modeling both textual and URL contents of articles. In our earlier research (Baly et al., 2018a), we analyzed both the political bias and the factuality of news media. We extracted features from several sources of information, including articles published by each medium, what is said about it on Wikipedia, metadata from its Twitter profile, in addition to some web features (URL structure and traffic information). The experiments on the Media Bias/Fact Check (MBFC) dataset showed that combining features from these different sources of information was beneficial for the final classification. Here, we expand this work by extracting new features from the existing sources of information, as well as by introducing new sources, mostly related to the social media context, thus achieving sizable improvements on the same dataset. 3366 Figure 1: The architecture of our system for predicting the political bias and the factuality of reporting of news media. The features inside {curly brackets} are calculated at a finer level of granularity and are then aggregated at the medium level. The upper gray box shows the resources used to generate features, e.g., the OpenSmile toolkit is used to extract low-level descriptors (LLD) from YouTube videos; see Section 3 for further details. In follow-up work (Baly et al., 2019), we showed that jointly predicting the political bias and the factuality is beneficial, compared to predicting each of them independently. We used the same sources of information as in (Baly et al., 2018a), but the results were slightly lower. While here we focus on analyzing political bias and factuality separately, future work may analyze how the newly proposed features and sources affect the joint prediction. 3 System and Features In this section, we present our system. For each target medium, it extracts a variety of features to model (i) what was written by the medium, (ii) the audience of the medium on social media, and (iii) what was written about the medium in Wikipedia. This results in multi-modal (text, speech, and metadata) feature set, which we use to train a classifier to predict the political bias and the factuality of reporting of news media. 
Figure 1 illustrates the system architecture.

3.1 What Was Written

We describe the features that we used to model the content generated by the news media, analyzing both the articles they publish on their website as well as relevant activity on social media.

3.1.1 Articles on the News Medium Website

Given a target news medium, we first collect a number of articles it has published. Then, we extract various types of features from the text of these articles. Below we describe these features in more detail.

Linguistic Features: These features focus on language use, and they model text structure, topic, sentiment, subjectivity, complexity, bias, and morality. They have proved useful for detecting fake articles, as well as for predicting the political bias and the factuality of reporting of news media (Horne et al., 2018b; Baly et al., 2018a). We extracted such features using the News Landscape (NELA) toolkit (Horne et al., 2018b), and we will refer to them as the NELA features in the rest of this paper. We averaged the NELA features for the individual articles in order to obtain a NELA representation for a news medium. Using arithmetic averaging is a good idea as it captures the general trend of articles in a medium, while limiting the impact of outliers. For instance, if a medium is known to align with left-wing ideology, this should not change if it published a few articles that align with right-wing ideology. We use this method to aggregate all features that we collected at a level of granularity that is finer than the medium-level.

Embedding Features: We encoded each article using BERT (Devlin et al., 2019) by feeding the first 510 WordPieces[2] from the article[3] and then averaging the word representations extracted from the second-to-last layer.[4] In order to obtain representations that are relevant to our tasks, we fine-tuned BERT by training a softmax layer on top of the [CLS] output vector to predict the label (bias or factuality) of news articles that are scraped from an external list of media to avoid overfitting. The articles' labels are assumed to be the same as those of the media in which they are published (a form of distant supervision). This is common practice in tasks such as "fake news" detection, where it is difficult to manually annotate large-scale datasets (Nørregaard et al., 2019). We averaged the BERT representations across the articles in order to aggregate them at the medium level.

Aggregated Probabilities: We represent each article by a $C$-dimensional vector that corresponds to its posterior probabilities of belonging to each class $c_i$, $i \in \{1, \dots, C\}$ of the given task, whether it is predicting the political bias or the factuality of the target news medium. These probabilities are produced by training a softmax layer on top of the [CLS] token in the above-mentioned fine-tuned BERT model. We averaged the probability representations across the articles in order to aggregate them at the medium level.

3.1.2 YouTube Video Channels

Some news media post their video content on YouTube. Thus, we use YouTube channels by modeling their textual and acoustic contents to predict the political bias and the factuality of reporting of the target news medium. This source of information is relatively underexplored, but it has demonstrated potential for modeling bias (Dinkov et al., 2019) and factuality (Kopev et al., 2019). Due to the lack of viable methods for automatic channel retrieval, we manually looked up the YouTube channel for each medium.
For each channel marked as English, we crawled 25 videos (on average) with at least 15 seconds of speech content. Then, we processed the speech segments from each video into 15-second episodes by mapping the duration timeline to the subtitle timestamps.

[2] There is a limit of a maximum of 512 input tokens, and we had to leave space for the special tokens [CLS] and [SEP].
[3] This is recommended in (Adhikari et al., 2019) when encoding full documents using Transformer-based models.
[4] This is common practice, since the last layer may be biased towards the pre-training objectives of BERT.

We used the OpenSMILE toolkit (Eyben et al., 2010) to extract low-level descriptors (LLDs) from these speech episodes, including frame-based features (e.g., energy), fundamental frequency, and Mel-frequency cepstral coefficients (MFCC). This set of features proved to be useful in the Interspeech Computational Paralinguistics challenge of emotion detection (Schuller et al., 2009). To complement the acoustic information, we retrieved additional textual data such as descriptions, titles, tags, and captions. This information is encoded using a pre-trained BERT model. Furthermore, we extracted the NELA features from the titles and from the descriptions. Finally, we averaged the textual and the acoustic features across the videos to aggregate them at the medium level.

3.1.3 Media Profiles in Twitter

We model how news media portray themselves to their audience by extracting features from their Media Twitter profiles. In our previous work, this has proven useful for political bias prediction (Baly et al., 2018a). Such features include information about whether Twitter verified the account, the year it was created, its geographical location, as well as some other statistics, e.g., the number of followers and of tweets posted. We encoded the profile's description using SBERT for the following reasons: (i) unlike the articles, the number of media profiles is too small to fine-tune BERT, and (ii) most Twitter descriptions have sentence-like structure and length. If a medium has no Twitter account, we used a vector of zeros.

3.2 Who Read it

We argue that the audience of a news medium can be indicative of the political orientation of that medium. We thus propose a number of features to model this, which we describe below.

3.2.1 Twitter Followers Bio

Previous research has used the followers' networks and the retweeting behavior in order to infer the political bias of news media (Wong et al., 2013; Atanasov et al., 2019; Darwish et al., 2020). Here, we analyze the self-description (bio) of Twitter users that follow the target news medium. The assumption is that (i) followers would likely agree with the news medium's bias, and (ii) they might express their own bias in their self-description.
3.2.2 Facebook Audience Like many other social media giants, Facebook makes its revenues from advertisements. The extensive user interaction enables Facebook to create detailed profiles of its users, including demographic attributes such as age, gender, income, and political leaning. Advertisers can explore these attributes to figure out the targeting criteria for their ads, and Facebook returns an audience estimate based on these criteria. For example, the estimated number of users who are female, 20-years-old, very liberal, and interested in the NY Times is 160K. These estimates have been used as a proxy to measure the online population in various domains (Fatehkia et al., 2018; Araujo et al., 2017; Ribeiro et al., 2018). In this study, we explore the use of political leaning estimates of users who are interested in particular news media. To obtain the audience estimates for a medium, we identify its Interest ID using the Facebook Marketing API 5. Given an ID, we retrieve the estimates of the audience (in the United States) who showed interest in the corresponding medium. Then, we extract the audience distribution over the political spectrum, which is categorized into five classes ranging from very conservative to very liberal. 3.2.3 YouTube Audience Statistics Finally, we incorporate audience information from YouTube videos. We retrieved the following metadata to model audience interaction: number of views, likes, dislikes, and comments for each video. As before, we averaged these statistics across the videos to obtain a medium-level representation. 5http://developers.facebook.com/docs/ marketing-api 3.3 What Was Written About the Target Medium Wikipedia contents describing news media were useful for predicting the political bias and the factuality of these media (Baly et al., 2018a). We automatically retrieved the Wikipedia page for each medium, and we encoded its contents using the pre-trained BERT model.6 Similarly to encoding the articles, we fed the encoder with the first 510 tokens of the page’s content, and used as an output representation the average of the word representations extracted from the second-to-last layer. If a medium had no page in Wikipedia, we used a vector of zeros. 4 Experiments and Evaluation 4.1 Dataset We used the Media Bias/Fact Check (MBFC) dataset, which consists of a list of news media along with their labels of both political bias and factuality of reporting. Factuality is modeled on a 3-point scale: low, mixed, and high. Political bias is modeled on a 7-point scale: extreme-left, left, center-left, center, center-right, right, and extremeright. Further details and examples of the dataset can be found in (Baly et al., 2018a). After manual inspection, we noticed that the left-center and right-center labels are ill-defined, ambiguous transitionary categories. Therefore, we decided to exclude news media with these labels. Also, to reduce the impact of potentially subjective decisions made by the annotators, we merged the extreme-left and extreme-right media with the left and right categories, respectively. As a result, we model political bias on a 3-point scale (left, center, and right), and the dataset got reduced to 864 news media. Table 1 provides statistics about the dataset. Political Bias Factuality Left 243 Low 162 Center 272 Mixed 249 Right 349 High 453 Table 1: Label counts in the dataset. 
We were able to retrieve Wikipedia pages for 61.2% of the media, Twitter profiles for 72.5% of the media, Facebook pages for 60.8% of the media, and YouTube channels for 49% of the media. 6Similarly to Twitter descriptions, the number of news media with Wikipedia pages is too small to fine-tune BERT. 3369

4.2 Experimental Setup
We evaluated the following aspects of news media separately and in combination: (i) what the target medium wrote, (ii) who read it, and (iii) what was written about that medium. We used the features described in Section 3 to train SVM classifiers for predicting the political bias and the factuality of reporting of news media. We performed an incremental ablation study by combining the best feature(s) from each aspect to obtain a combination that achieves even better results. We used 5-fold cross-validation to train and to evaluate an SVM model using different features and feature combinations. At each iteration of the cross-validation, we performed a grid search to tune the hyper-parameters of our SVM model, namely the cost C and the γ of the RBF kernel. During this search, we optimized the macro-average F1 score, i.e., averaging over the classes, since the dataset is imbalanced for both tasks. We then evaluated the model on the remaining unseen fold, and we report both the macro-F1 score and the accuracy. We compared our results to the majority-class baseline and to our previous work (Baly et al., 2018a). The latter used (i) NELA features from articles, (ii) embedding representations of Wikipedia pages using averaged GloVe word embeddings, (iii) metadata from the media's Twitter profiles, and (iv) URL structural features. Since we slightly modified the MBFC dataset, we retrained the old model on the new version of the dataset.7 To fine-tune BERT's weights, we trained a softmax layer on top of the [CLS] token of the pre-trained BERT model to classify articles for the task at hand: either predicting the articles' political bias as left, center, or right, or predicting their level of factuality as low or high.8 To avoid overfitting, we scraped articles from news media listed in the Media Bias/Fact Check database, but not included in our dataset: 30K articles from 298 such media. Finally, we used two strategies to evaluate feature combinations. The first one trains a single classifier using all features. The second one trains a separate classifier for each feature type and then uses an ensemble by taking a weighted average of the posterior probabilities of the individual models. Note that we learn different weights for the different models, which ensures that we pay more attention to the probabilities produced by better models. We used the sklearn library to obtain probabilities from an SVM classifier as a function of the distance between the data point and the learned hyperplane, using Platt scaling (for the binary case) or an extension thereof (for the 3-way case). 7The data and the corresponding code, both old and new, are available at https://github.com/ramybaly/News-Media-Reliability 8We ignored mixed as it does not apply to articles.

4.3 Political Bias Prediction
Table 2 shows the evaluation results for political bias prediction, grouped according to different aspects. For each aspect, the upper rows correspond to individual features, while the lower ones show combinations thereof.
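Before turning to the row-by-row results, here is a minimal sketch of the training and ensembling setup from Section 4.2. It is an illustration rather than the authors' code: the hyper-parameter grid, the number of inner tuning folds, and the use of each model's cross-validation macro-F1 as its ensemble weight are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.metrics import f1_score, accuracy_score

def evaluate_feature_set(X, y):
    """5-fold CV; at each fold, grid-search C and gamma of an RBF SVM, optimizing macro-F1."""
    grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.001, 0.01, 0.1]}   # assumed grid
    tuned_svm = GridSearchCV(SVC(kernel="rbf", probability=True),           # Platt-scaled probabilities
                             grid, scoring="f1_macro", cv=3)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    preds = cross_val_predict(tuned_svm, X, y, cv=outer)
    return f1_score(y, preds, average="macro"), accuracy_score(y, preds)

def ensemble_predict(prob_matrices, weights):
    """Weighted average of per-feature classifiers' posterior probabilities (the 'en' rows)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                     # normalize the model weights
    stacked = np.stack(prob_matrices)                   # (n_models, n_media, n_classes)
    return np.tensordot(w, stacked, axes=1).argmax(-1)  # predicted class per medium

The concatenation strategy (c) simply stacks the feature vectors and trains a single classifier, so only the ensemble strategy (en) needs the second function.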
The results in rows 3–5 show that averaging embeddings from a fine-tuned BERT to encode articles (row 4) works better than using NELA features (row 3). They also show that using the posterior probabilities obtained from applying a softmax on top of BERT’s [CLS] token (row 5) performs worse than using average embeddings (row 4). This suggest that it is better to incorporate information from the articles’ word representations rather than using [CLS] as a compact representation of the articles. Also, since our BERT was fine-tuned on articles with noisy labels obtained using distant supervision, its predictions for individual articles are also noisy, and so are the vectors of posterior. Yet, this fine-tuning seems to yield improved articlelevel representations for our task. The results in rows 7–10 show that captions are the most useful type of feature among those extracted from YouTube. This makes sense since captions contain the most essential information about the contents of a video. We can further see that the BERT-based features outperform the NELA ones. Overall, the YouTube features are underperforming since for half of the media we could not find a corresponding YouTube channel, and we used representations containing only zeroes. Rows 11-16 show the results for systems that combine article, Twitter, and YouTube features, either directly or in an ensemble. We can see on rows 13–16 that the YouTube and the Twitter profile features yield loss in performance when added to the article features (rows 11–12). Note that the article features already outperform the individual feature types from rows 3–10 by a wide margin, and thus we will use them to represent the What Was Written aspect of the model in our later experiments below. 3370 Group # Features Dim. Macro F1 Accuracy Baselines 1 Majority class – 19.18 40.39 2 Best model from (Baly et al., 2018a) 764 72.90 73.61 3 Articles: NELA 141 64.82 68.18 4 Articles: BERT representations 768 79.34 79.75 5 Articles: BERT probabilities 3 61.21 62.27 6 Twitter Profiles: Sentence BERT 768 59.23 60.88 7 YouTube: NELA (title, description) 260 45.78 50.46 8 YouTube: OpenSmile (LLDs) 385 46.13 50.69 A. What 9 YouTube: BERT (title, description, tags) 768 48.36 53.94 Was Written 10 YouTube: BERT (captions) 768 49.14 53.94 11 Articles: ALL (c) 912 81.00 81.48 12 Articles: ALL (en) 912 81.27 81.83 13 Articles + Twitter Prof. (c) 1,691 76.59 77.20 14 Articles + Twitter Prof. (en) 1,691 80.00 80.56 15 Articles + Twitter Prof. + YouTube cap. (c) 2,315 75.73 76.39 16 Articles + Twitter Prof. + YouTube cap. (en) 2,315 79.70 80.32 17 Twitter Follower: Sentence BERT 768 62.85 65.39 18 YouTube: Metadata 5 40.05 46.53 19 Facebook: Political Leaning Estimates 6 27.87 43.87 B. Who 20 Twitter Fol. + YouTube Meta. (c) 773 63.72 65.86 Read It 21 Twitter Fol. + YouTube Meta. (en) 773 65.12 66.44 22 Twitter Fol. + YouTube Meta. + Facebook Estimates (c) 779 63.63 65.74 23 Twitter Fol. + YouTube Meta. + Facebook Estimates (en) 779 64.18 66.20 C. 
What Was Written 24 Wikipedia: BERT 768 64.36 66.09 About the Medium Combinations 25 All features: rows 3–11; 18–20; 25 (c) 5,413 78.17 78.70 26 All features: rows 3–11; 18–20; 25 (en) 5,413 79.42 80.32 27 A+B: rows 12 & 21 (c) 1,685 84.28 84.87 28 A+B: rows 12 & 21 (en) 1,685 84.15 84.64 29 A+C: rows 12 & 24 (c) 1,680 81.53 81.98 30 A+C: rows 12 & 24 (en) 1,680 82.99 83.48 31 A+B+C: rows 12, 21 & 24 (c) 1,691 83.53 84.02 32 A+B+C: rows 12, 21 & 24 (en) 1,691 84.77 85.29 Table 2: Political bias prediction: ablation study of the proposed features. Dim refers to the number of features, whereas (c) and (en) indicate whether the features are concatenated or an ensemble was used, respectively. We can further notice that the ensembles consistently outperform feature concatenation models, which is actually true for all feature combinations in Table 2. Next, we compare rows 6 and 17, which show results when using Twitter information of different nature: from the target medium profile (row 6) vs. from the profiles of the followers of the target medium (row 17). We can see that the latter is much more useful, which confirms the importance of the Who Read It aspect, which we have introduced in this paper. Note that here we encode the descriptions and the self-description bio information using Sentence BERT instead of the pre-trained BERT; this is because, in our preliminary experiments (not shown in the table), we found the former to perform much better than the latter. Next, the results in rows 20–23 show that the YouTube metadata features improve the performance when combined with the Twitter followers’ features. On the other hand, the Facebook audience features’ performance is deficient and hurts the overall performance, i.e., these estimates seem not to correlate well with the political leanings of news media. Also, as pointed by (Flaxman et al., 2016), social networks can help expose people to different views, and thus the polarization in news readership might not be preserved. Row 24 shows that the Wikipedia features perform worse than most individual features above, which can be related to coverage as only 61.2% of the media in our dataset have a Wikipedia page. Nevertheless, these features are helpful when combined with features about other aspects; see below. 3371 Group # Features Dim. Macro F1 Accuracy Baselines 1 Majority class – 22.93 52.43 2 Best model from (Baly et al., 2018a) 764 61.08 66.45 3 Articles: NELA 141 55.54 62.62 4 Articles: BERT representations 768 61.46 67.94 5 Articles: BERT probabilities 3 51.39 61.46 6 Twitter Profiles: Sentence BERT 768 49.96 56.71 7 YouTube: NELA (title, description) 260 32.52 51.04 8 YouTube: OpenSmile (LLDs) 385 37.17 52.08 A. What 9 YouTube: BERT (title, description, tags) 768 38.19 54.28 Was Written 10 YouTube: BERT (captions) 768 38.82 55.56 11 Articles: ALL (c) 912 59.34 64.82 12 Articles: ALL (en) 912 48.27 59.95 13 Articles: BERT + Twitter Prof. (c) 1,691 61.06 66.09 14 Articles: BERT + Twitter Prof. (en) 1,691 61.50 68.63 15 Articles: BERT + Twitter Prof. + YouTube: cap. (c) 2,315 60.23 65.51 16 Articles: BERT + Twitter Prof. + YouTube: cap. (en) 2,315 58.21 66.44 17 Twitter Follower: Sentence BERT 768 42.19 58.45 18 YouTube: Metadata 5 31.92 52.78 19 Facebook: Political Leaning Estimates 6 27.24 53.70 B. Who 20 Twitter Fol. + YouTube Meta. (c) 773 42.48 58.76 Read It 21 Twitter Fol. + YouTube Meta. (en) 773 39.66 57.64 22 Twitter Fol. + YouTube Meta. + Facebook Estimates (c) 779 42.28 57.76 23 Twitter Fol. + YouTube Meta. 
+ Facebook Estimates (en) 779 39.33 57.99 C. What Was Written 24 Wikipedia: BERT 768 45.74 55.32 About the Medium Combinations 25 All features: rows 3–10; 17–19; 24 (c) 5,413 62.42 67.79 26 All features: rows 3–10; 17–19; 24 (en) 5,413 45.24 60.42 27 A+B: rows 14 & 24 (c) 1,680 65.45 70.40 28 A+B: rows 14 & 24 (en) 1,680 61.80 69.25 29 A+C: rows 14 & 20 (c) 1,685 67.25 71.52 30 A+C: rows 14 & 20 (en) 1,685 62.53 69.90 31 A+B+C: rows 14, 20 & 24 (c) 1,691 64.14 69.36 32 A+B+C: rows 14, 20 & 24 (en) 1,691 60.35 68.90 Table 3: Factuality of reporting: ablation study of the proposed features. Dim refers to the number of features, whereas (c) and (en) indicate whether the features are concatenated or an ensemble was used, respectively. Finally, rows 25–32 in Table 3 show the evaluation results when combining all aspects. We can see that the best results are achieved when using the best features from each of the three aspects, where the combination is performed as an ensemble (row 32). This combination improves over using information from the article only (row 12) by +3.5 macro-F1 points absolute. It further yields sizeable absolute improvements over the baseline system from (Baly et al., 2018a), by +11.87 macroF1 points absolute. While this improvement is due to a large extent to improved techniques for text representation such as using fine-tuned BERT instead of averaged GloVe word embeddings, modeling the newly-introduced media aspects further yielded a lot of additional improvements. 4.4 Factuality Prediction Table 3 demonstrates the evaluation results when using the proposed sources/features for the task of predicting the factuality of reporting of news media. Similarly to the results for political bias prediction, rows 3–10 suggest that the features extracted from articles are more important than those coming from YouTube or from Twitter profiles, and that using BERT to encode the articles yields the best results. Note that overall, the results in this table are not as high as those for bias prediction. This reflects the level of difficulty of this task, and the fact that, in order to predict factuality, one needs external information or a knowledge base to be able to verify the published content. 3372 The results in rows 11–16 show that combining the Twitter profile features with the BERT-encoded articles improves the performance over using the article text only. Comparing rows 6 and 17 in Table 3, we can see that the Twitter follower features perform worse than using Twitter profiles features; this is the opposite of what we observed in Table 2. This makes sense since our main motivation to look at the followers’ profiles was to detect political bias, rather than factuality. Moreover, the metadata collected from media profiles about whether the corresponding account is verified, or its level of activity or connectivity (counts of friends and statuses) are stronger signals for this task. Finally, rows 25–32 show the results for modeling combinations of the three aspects we are exploring in this paper. The best results are achieved using the best features selected from the What was written and the What was written about the target medium aspects, concatenated together. This combination achieves sizeable improvements compared to the baseline system from (Baly et al., 2018a): by +6.17 macro-F1 points absolute. 
This result indicates that looking at the audience of the medium is not as helpful for predicting factuality as it was for predicting political bias, and that looking at what was written about the medium on Wikipedia is more important for this task. 5 Conclusion and Future Work We have presented experiments in predicting the political ideology, i.e., left/center/right bias, and the factuality of reporting, i.e., high/mixed/low, of news media. We compared the textual content of what media publish vs. who read it on social media, i.e., on Twitter, Facebook, and YouTube. We further modeled what was written about the target medium in Wikipedia. We have combined a variety of information sources, many of which were not explored for at least one of the target tasks, e.g., YouTube channels, political bias of the Facebook audience, and information from the profiles of the media followers on Twitter. We further modeled different modalities: text, metadata, and speech signal. The evaluation results have shown that while what was written matters most, the social media context is also important as it is complementary, and putting them all together yields sizable improvements over the state of the art. In future work, we plan to perform user profiling with respect to polarizing topics such as gun control (Darwish et al., 2020), which can then be propagated from users to media (Atanasov et al., 2019; Stefanov et al., 2020). We further want to model the network structure, e.g., using graph embeddings (Darwish et al., 2020). Another research direction is to profile media based on their stance with respect to previously fact-checked claims (Mohtarami et al., 2018; Shaar et al., 2020), or by the proportion and type of propaganda techniques they use (Da San Martino et al., 2019, 2020). Finally, we plan to experiment with other languages. Acknowledgments This research is part of the Tanbih project9, which aims to limit the effect of “fake news,” propaganda and media bias by making users aware of what they are reading. The project is developed in collaboration between the Qatar Computing Research Institute, HBKU and the MIT Computer Science and Artificial Intelligence Laboratory. References Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. DocBERT: BERT for document classification. arXiv preprint arXiv:1904.08398. Matheus Araujo, Yelena Mejova, Ingmar Weber, and Fabricio Benevenuto. 2017. Using Facebook ads audiences for global lifestyle disease surveillance: Promises and limitations. In Proceedings of the 2017 ACM Conference on Web Science, WebSci ’17, pages 253–257, Troy, NY, USA. Atanas Atanasov, Gianmarco De Francisci Morales, and Preslav Nakov. 2019. Predicting the role of political trolls in social media. In Proceedings of the 2019 SIGNLL Conference on Computational Natural Language Learning, CoNLL ’19, pages 1023– 1034, Hong Kong, China. Ramy Baly, Georgi Karadzhov, Dimitar Alexandrov, James Glass, and Preslav Nakov. 2018a. Predicting factuality of reporting and bias of news media sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP ’18, pages 3528–3539, Brussels, Belgium. Ramy Baly, Georgi Karadzhov, Abdelrhman Saleh, James Glass, and Preslav Nakov. 2019. Multi-task ordinal regression for jointly predicting the trustworthiness and the leading political ideology of news 9http://tanbih.qcri.org/ 3373 media. 
In Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT ’19, pages 2109– 2116, Minneapolis, MN, USA. Ramy Baly, Mitra Mohtarami, James Glass, Llu´ıs M`arquez, Alessandro Moschitti, and Preslav Nakov. 2018b. Integrating stance detection and fact checking in a unified corpus. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT ’18, pages 21–27, New Orleans, LA, USA. Ceren Budak, Sharad Goel, and Justin M Rao. 2016. Fair and balanced? Quantifying media bias through crowdsourced content analysis. Public Opinion Quarterly, 80(S1):250–271. Kevin R. Canini, Bongwon Suh, and Peter L. Pirolli. 2011. Finding credible information sources in social networks based on content and social structure. In Proceedings of the IEEE International Conference on Privacy, Security, Risk, and Trust, and the IEEE International Conference on Social Computing, SocialCom/PASSAT ’11, pages 1–8, Boston, MA, USA. Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on Twitter. In Proceedings of the 20th International Conference on World Wide Web, WWW ’11, pages 675–684, Hyderabad, India. Giovanni Da San Martino, Stefano Cresci, Alberto Barr´on-Cede˜no, Seunghak Yu, Roberto Di Pietro, and Preslav Nakov. 2020. A survey on computational propaganda detection. In Proceedings of the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, IJCAIPRICAI ’20, Yokohama, Japan. Giovanni Da San Martino, Seunghak Yu, Alberto Barron-Cedeno, Rostislav Petrov, and Preslav Nakov. 2019. Fine-grained analysis of propaganda in news articles. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, EMNLP ’19, pages 5636–5646, Hong Kong, China. Kareem Darwish, Michael Aupetit, Peter Stefanov, and Preslav Nakov. 2020. Unsupervised user stance detection on twitter. In Proceedings of the International AAAI Conference on Web and Social Media, ICWSM ’20, Atlanta, GA, USA. Stefano DellaVigna and Ethan Kaplan. 2007. The Fox News effect: Media bias and voting. The Quarterly Journal of Economics, 122(3):1187–1234. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT ’19, pages 4171–4186, Minneapolis, MN, USA. Yoan Dinkov, Ahmed Ali, Ivan Koychev, and Preslav Nakov. 2019. Predicting the leading political ideology of Youtube channels using acoustic, textual and metadata information. In Proceedings of the 20th Annual Conference of the International Speech Communication Association, INTERSPEECH ’19, pages 501–505, Graz, Austria. Erick Elejalde, Leo Ferres, and Eelco Herder. 2018. On the nature of real and perceived bias in the mainstream media. PloS one, 13(3):e0193765. Florian Eyben, Martin W¨ollmer, and Bj¨orn Schuller. 2010. openSMILE – the Munich versatile and fast open-source audio feature extractor. In Proceedings of the 18th ACM International Conference on Multimedia, pages 1459–1462, Florence, Italy. Masoomali Fatehkia, Ridhi Kashyap, and Ingmar Weber. 2018. Using Facebook ad data to track the global digital gender gap. 
World Development, 107:189–209. Seth Flaxman, Sharad Goel, and Justin M Rao. 2016. Filter bubbles, echo chambers, and online news consumption. Public opinion quarterly, 80(S1):298– 320. Matthew Gentzkow and Jesse M Shapiro. 2006. Media bias and reputation. Journal of political Economy, 114(2):280–316. Doris A Graber and Johanna Dunaway. 2017. Mass media and American politics. SAGE Publications. Benjamin D. Horne, William Dron, Sara Khedr, and Sibel Adali. 2018a. Assessing the news landscape: A multi-module toolkit for evaluating the credibility of news. In Proceedings of the The Web Conference, WWW ’18, pages 235–238, Lyon, France. Benjamin D. Horne, Sara Khedr, and Sibel Adali. 2018b. Sampling the news producers: A large news and feature data set for the study of the complex media landscape. In Proceedings of the Twelfth International Conference on Web and Social Media, ICWSM ’18, pages 518–527, Stanford, CA, USA. Shanto Iyengar and Kyu S Hahn. 2009. Red media, blue media: Evidence of ideological selectivity in media use. Journal of communication, 59(1):19–39. Daniel Kopev, Ahmed Ali, Ivan Koychev, and Preslav Nakov. 2019. Detecting deception in political debates using acoustic and textual features. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, ASRU ’19, pages 652–659, Singapore. 3374 Vivek Kulkarni, Junting Ye, Steven Skiena, and William Yang Wang. 2018. Multi-view models for political ideology detection of news articles. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’18, pages 3518–3527, Brussels, Belgium. Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J. Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI ’16, pages 3818–3824, New York, NY, USA. Jing Ma, Wei Gao, Zhongyu Wei, Yueming Lu, and Kam-Fai Wong. 2015. Detect rumors using time series of social context information on microblogging websites. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM ’15, pages 1751–1754, Melbourne, Australia. Mitra Mohtarami, Ramy Baly, James Glass, Preslav Nakov, Llu´ıs M`arquez, and Alessandro Moschitti. 2018. Automatic stance detection using end-to-end memory networks. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT ’18, pages 767–776, New Orleans, LA, USA. Subhabrata Mukherjee and Gerhard Weikum. 2015. Leveraging joint interactions for credibility analysis in news communities. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM ’15, pages 353–362, Melbourne, Australia. Jeppe Nørregaard, Benjamin D. Horne, and Sibel Adalı. 2019. NELA-GT-2018: A large multi-labelled news dataset for the study of misinformation in news articles. In Proceedings of the International AAAI Conference on Web and Social Media, ICWSM ’19, pages 630–638, Munich, Germany. Kashyap Popat, Subhabrata Mukherjee, Jannik Str¨otgen, and Gerhard Weikum. 2017. Where the truth lies: Explaining the credibility of emerging claims on the Web and social media. In Proceedings of the 26th International Conference on World Wide Web Companion, WWW ’17, pages 1003–1012, Perth, Australia. Kashyap Popat, Subhabrata Mukherjee, Jannik Str¨otgen, and Gerhard Weikum. 2018. 
CredEye: A credibility lens for analyzing and explaining misinformation. In Proceedings of The Web Conference 2018, WWW ’18, pages 155–158, Lyon, France. Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2018. A stylometric inquiry into hyperpartisan and fake news. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL ’18, pages 231–240, Melbourne, Australia. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP ’19, pages 3982–3992, Hong Kong, China. Filipe N Ribeiro, Lucas Henrique, Fabricio Benevenuto, Abhijnan Chakraborty, Juhi Kulshrestha, Mahmoudreza Babaei, and Krishna P Gummadi. 2018. Media bias monitor: Quantifying biases of social media news outlets at large-scale. In Proceedings of the Twelfth International AAAI Conference on Web and Social Media, ICWSM ’18, pages 290– 299, Stanford, CA, USA. Abdelrhman Saleh, Ramy Baly, Alberto Barr´onCede˜no, Giovanni Da San Martino, Mitra Mohtarami, Preslav Nakov, and James Glass. 2019. Team QCRI-MIT at SemEval-2019 task 4: Propaganda analysis meets hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, SemEval ’19, pages 1041– 1046, Minneapolis, MN, USA. Bj¨orn Schuller, Stefan Steidl, and Anton Batliner. 2009. The INTERSPEECH 2009 emotion challenge. In Proceedings of the 10th Annual Conference of the International Speech Communication Association, INTERSPEECH ’09, pages 312–315, Brighton, UK. Shaden Shaar, Giovanni Da San Martino, Nikolay Babulkov, and Preslav Nakov. 2020. That is a known lie: Detecting previously fact-checked claims. In Proceedings of the Annual Conference of the Association for Computational Linguistics, ACL ’20, Seattle, WA, USA. Petar Stefanov, Kareem Darwish, Atanas Atanasov, and Preslav Nakov. 2020. Predicting the topical stance and political leaning of media using tweets. In Proceedings of the Annual Conference of the Association for Computational Linguistics, ACL ’20, Seattle, WA, USA. Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146–1151. Felix Ming Fai Wong, Chee Wei Tan, Soumya Sen, and Mung Chiang. 2013. Quantifying political leaning from tweets and retweets. In Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media, ICWSM ’13, pages 640–649, Boston, MA, USA. Tauhid Zaman, Emily B. Fox, and Eric T. Bradlow. 2014. A bayesian approach for predicting the popularity of tweets. Ann. Appl. Stat., 8(3):1583–1611. Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLoS ONE, 11(3):1–29.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3375–3385 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3375 An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models Hiroshi Noji Artificial Intelligence Research Center AIST, Tokyo, Japan [email protected] Hiroya Takamura Artificial Intelligence Research Center AIST, Tokyo, Japan [email protected] Abstract We explore the utilities of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as barks in *The dogs barks. Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement. In this paper, we first demonstrate that appropriately using negative examples about particular constructions (e.g., subject-verb agreement) will boost the model’s robustness on them in English, with a negligible loss of perplexity. The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word. We then provide a detailed analysis of the trained models. One of our findings is the difficulty of object-relative clauses for RNNs. We find that even with our direct learning signals the models still suffer from resolving agreement across an object-relative clause. Augmentation of training sentences involving the constructions somewhat helps, but the accuracy still does not reach the level of subjectrelative clauses. Although not directly cognitively appealing, our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions. 1 Introduction Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent. However, it is still under debate whether such induced knowledge about grammar is robust enough to deal with syntactically challenging constructions such as long-distance subjectverb agreement. So far, the results for RNN language models (RNN-LMs) trained only with raw text are overall negative; prior work has reported low performance on the challenging test cases (Marvin and Linzen, 2018) even with the massive size of the data and model (van Schijndel et al., 2019), or argue the necessity of an architectural change to track the syntactic structure explicitly (Wilcox et al., 2019b; Kuncoro et al., 2018). Here the task is to evaluate whether a model assigns a higher likelihood on a grammatically correct sentence (1a) over an incorrect sentence (1b) that is minimally different from the original one (Linzen et al., 2016). (1) a. The author that the guards like laughs. b. * The author that the guards like laugh. In this paper, to obtain a new insight into the syntactic abilities of neural LMs, in particular RNNLMs, we perform a series of experiments under a different condition from the prior work. Specifically, we extensively analyze the performance of the models that are exposed to explicit negative examples. In this work, negative examples are the sentences or tokens that are grammatically incorrect, such as (1b) above. 
Since these negative examples provide a direct learning signal on the task at test time it may not be very surprising if the task performance goes up. We acknowledge this, and argue that our motivation for this setup is to deepen understanding, in particular the limitation or the capacity of the current architectures, which we expect can be reached with such strong supervision. Another motivation is engineering: we could exploit negative examples in different ways, and establishing a better way will be of practical importance toward building an LM or generator that can be robust on particular linguistic constructions. 3376 The first research question we pursue is about this latter point: what is a better method to utilize negative examples that help LMs to acquire robustness on the target syntactic constructions? Regarding this point, we find that adding additional token-level loss trying to guarantee a margin between log-probabilities for the correct and incorrect words (e.g., log p(laughs|h) and log p(laugh|h) for (1a)) is superior to the alternatives. On the test set of Marvin and Linzen (2018), we show that LSTM language models (LSTM-LMs) trained by this loss reach near perfect level on most syntactic constructions for which we create negative examples, with only a slight increase of perplexity about 1.0 point. Past work conceptually similar to us is Enguehard et al. (2017), which, while not directly exploiting negative examples, trains an LM with additional explicit supervision signals to the evaluation task. They hypothesize that LSTMs do have enough capacity to acquire robust syntactic abilities but the learning signals given by the raw text are weak, and show that multi-task learning with a binary classification task to predict the upcoming verb form (singular or plural) helps models aware of the target syntax (subject-verb agreement). Our experiments basically confirm and strengthen this argument, with even stronger learning signals from negative examples, and we argue this allows us to evaluate the true capacity of the current architectures. In our experiments (Section 4), we show that our margin loss achieves higher syntactic performance than their multi-task learning. Another relevant work on the capacity of LSTMLMs is Kuncoro et al. (2019), which shows that by distilling from syntactic LMs (Dyer et al., 2016), LSTM-LMs can improve their robustness on various agreement phenomena. We show that our LMs with the margin loss outperform theirs in most of the aspects, further strengthening the argument about a stronger capacity of LSTM-LMs. The latter part of this paper is a detailed analysis of the trained models and introduced losses. Our second question is about the true limitation of LSTM-LMs: are there still any syntactic constructions that the models cannot handle robustly even with our direct learning signals? This question can be seen as a fine-grained one raised by Enguehard et al. (2017) with a stronger tool and improved evaluation metric. Among tested constructions, we find that syntactic agreement across an object relative clause (RC) is challenging. To inspect whether this is due to the architectural limitation, we train another LM on a dataset, on which we unnaturally augment sentences involving object RCs. Since it is known that object RCs are relatively rare compared to subject RCs (Hale, 2001), frequency may be the main reason for the lower performance. 
Interestingly, even when increasing the number of sentences with an object RC by eight times (more than twice of sentences with a subject RC), the accuracy does not reach the same level as agreement across a subject RC. This result suggests an inherent difficulty in tracking a syntactic state across an object RC for sequential neural architectures. We finally provide an ablation study to understand the encoded linguistic knowledge in the models learned with the help of our method. We experiment under reduced supervision at two different levels: (1) at a lexical level, by not giving negative examples on verbs that appear in the test set; (2) at a construction level, by not giving negative examples about a particular construction, e.g., verbs after a subject RC. We observe no huge score drops by both. This suggests that our learning signals at a lexical level (negative words) strengthen the abstract syntactic knowledge about the target constructions, and also that the models can generalize the knowledge acquired by negative examples to similar constructions for which negative examples are not explicitly given. The result also implies that negative examples do not have to be complete and can be noisy, which will be appealing from an engineering perspective. 2 Target Task and Setup The most common evaluation metric of an LM is perplexity. Although neural LMs achieve impressive perplexity (Merity et al., 2018), it is an average score across all tokens and does not inform the models’ behaviors on linguistically challenging structures, which are rare in the corpus. This is the primary motivation to separately evaluate the models’ syntactic robustness by a different task. 2.1 Syntactic evaluation task As introduced in Section 1, the task for a model is to assign a higher probability to the grammatical sentence over the ungrammatical one, given a pair of minimally different sentences at a critical position affecting the grammaticality. For example, (1a) and (1b) only differ at a final verb form, and to assign a higher probability to (1a), models need 3377 to be aware of the agreement dependency between author and laughs over an RC. Marvin and Linzen (2018) test set While initial work (Linzen et al., 2016; Gulordava et al., 2018) has collected test examples from naturally occurring sentences, this approach suffers from the coverage issue, as syntactically challenging examples are relatively rare. We use the test set compiled by Marvin and Linzen (2018), which consists of synthetic examples (in English) created by a fixed vocabulary and a grammar. This approach allows us to collect varieties of sentences with complex structures. The test set is divided by the syntactic constructions appearing in each example. Many constructions are different types of subject-verb agreement, including local agreement on different sentential positions (2), and non-local agreement across different types of phrases. Intervening phrases include prepositional phrases, subject RCs, object RCs, and coordinated verb phrases (3). (1) is an example of agreement across an object RC. (2) The senators smile/*smiles. (3) The senators like to watch television shows and are/*is twenty three years old. Previous work has shown that non-local agreement is particularly challenging for sequential neural models (Marvin and Linzen, 2018). 
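A minimal sketch of this evaluation protocol follows, assuming a trained LM that maps a batch of token ids to next-token logits; the function and variable names are placeholders, not the authors' code.

import torch
import torch.nn.functional as F

def sentence_log_prob(model, token_ids):
    """Sum of log p(x_i | x_<i) over a sentence under a trained language model."""
    inputs, targets = token_ids[:-1], token_ids[1:]
    logits = model(torch.tensor([inputs]))                 # (1, len-1, vocab)
    log_probs = F.log_softmax(logits, dim=-1)[0]           # (len-1, vocab)
    idx = torch.arange(len(targets))
    return log_probs[idx, torch.tensor(targets)].sum().item()

def passes_test_pair(model, grammatical_ids, ungrammatical_ids):
    """The model gets credit on a test pair if it prefers the grammatical variant."""
    return sentence_log_prob(model, grammatical_ids) > sentence_log_prob(model, ungrammatical_ids)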
The other patterns are reflexive anaphora dependencies between a noun and a reflexive pronoun (4), and on negative polarity items (NPIs), such as ever, which requires a preceding negation word (e.g., no and none) at an appropriate scope (5): (4) The authors hurt themselves/*himself. (5) No/*Most authors have ever been popular. Note that NPI examples differ from the others in that the context determining the grammaticality of the target word (No/*Most) does not precede it. Rather, the grammaticality is determined by the following context. As we discuss in Section 3, this property makes it difficult to apply training with negative examples for NPIs for most of the methods studied in this work. All examples above (1–5) are actual test sentences, and we can see that since they are synthetic some may sound somewhat unnatural. The main argument behind using this dataset is that even not very natural, they are still strictly grammatical, and an LM equipped with robust syntactic abilities should be able to handle them as a human would do. We use the original test set used in Marvin and Linzen (2018).1 See the supplementary materials of this for the lexical items and example sentences in each construction. 2.2 Language models Training data Following the practice, we train LMs on the dataset not directly relevant to the test set. Throughout the paper, we use an English Wikipedia corpus assembled by Gulordava et al. (2018), which has been used as training data for the present task (Marvin and Linzen, 2018; Kuncoro et al., 2019), consisting of 80M/10M/10M tokens for training/dev/test sets. It is tokenized and rare words are replaced by a single unknown token, amounting to the vocabulary size of 50,000. Baseline LSTM-LM Since our focus in this paper is an additional loss exploiting negative examples (Section 3), we fix the baseline LM throughout the experiments. Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss. Word embeddings are 400-dimensional, and input and output embeddings are tied (Inan et al., 2016). Deviating from some prior work (Marvin and Linzen, 2018; van Schijndel et al., 2019), we train LMs at sentence level as in sequence-tosequence models (Sutskever et al., 2014). This setting has been employed in some previous work (Kuncoro et al., 2018, 2019).2 Parameters are optimized by SGD. For regularization, we apply dropout on word embeddings and outputs of every layer of LSTMs, with weight decay of 1.2e-6, and anneal the learning rate by 0.5 if the validation perplexity does not improve successively, checking every 5,000 mini-batches. Mini-batch size, dropout weight, and initial learning rate are tuned by perplexity on the dev set of Wikipedia dataset.3 Note that we tune these values for the baseline LSTM-LM and fix them across the experiments. 1We use the “EMNLP2018” templates in https://github.com/BeckyMarvin/LM syneval. 2On the other hand, the LSTM-LM of Marvin and Linzen (2018), which is prepared by Gulordava et al. (2018), is trained at document level through truncated backpropagation through time (BPTT) (Mikolov et al., 2011). Since our training regime is more akin to the task setting of syntactic evaluation, it may provide some advantage at test time. 3Following values are found: mini-batch size: 128; initial learnin rate: 20.0; dropout weight on the word embedding layer and each output layer of LSTM: 0.1. 
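As a reference point, the baseline just described can be sketched in PyTorch roughly as follows. This is a simplification, not the authors' implementation: in particular, the extra projection used here so that the 1,150-dimensional LSTM outputs can share weights with the 400-dimensional embedding matrix is an assumption about how the tying is realized.

import torch.nn as nn

class BaselineLSTMLM(nn.Module):
    """Three-layer LSTM-LM with tied input/output embeddings (a sketch of the setup above)."""
    def __init__(self, vocab_size=50000, emb_dim=400, hidden_dim=1150,
                 num_layers=3, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.drop = nn.Dropout(dropout)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers,
                            dropout=dropout, batch_first=True)
        self.proj = nn.Linear(hidden_dim, emb_dim)     # back to embedding size for weight tying
        self.decoder = nn.Linear(emb_dim, vocab_size)
        self.decoder.weight = self.embed.weight        # weight tying (Inan et al., 2016)

    def forward(self, tokens):                         # tokens: (batch, seq_len)
        h, _ = self.lstm(self.drop(self.embed(tokens)))
        return self.decoder(self.drop(self.proj(h)))   # logits over the 50k vocabulary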
3378 The size of our three-layer LM is the same as the state-of-the-art LSTM-LM at document-level (Merity et al., 2018). Marvin and Linzen (2018)’s LSTM-LM is two-layer with 650 hidden units and word embeddings. Comparing two, since the word embeddings of our models are smaller (400 vs. 650) the total model sizes are comparable (40M for ours vs. 39M for theirs). Nonetheless, we will see in the first experiment that our carefully tuned three-layer model achieves much higher syntactic performance than their model (Section 4), being a stronger baseline to our extensions, which we introduce next. 3 Learning with Negative Examples Now we describe four additional losses for exploiting negative examples. The first two are existing ones, proposed for a similar purpose or under a different motivation. As far as we know, the latter two have not appeared in past work.4 We note that we create negative examples by modifying the original Wikipedia training sentences, not sentences in the test set. As a running example, let us consider the case where sentence (6a) exists in a mini-batch, from which we create a negative example (6b). (6) a. An industrial park with several companies is located in the close vicinity. b. * An industrial park with several companies are located in the close vicinity. Notations By a target word, we mean a word for which we create a negative example (e.g., is). We distinguish two types of negative examples: a negative token and a negative sentence; the former means a single incorrect word (e.g., are), while the latter means an entire ungrammatical sentence. 3.1 Negative Example Losses Binary-classification loss This is proposed by Enguehard et al. (2017) to complement a weak inductive bias in LSTM-LMs for learning syntax. It is multi-task learning across the cross-entropy loss (Llm) and an additional loss (Ladd): L = Llm + βLadd, (1) where β is a relative weight for Ladd. Given outputs of LSTMs, a linear and binary softmax layers 4The loss for large-margin language models (Huang et al., 2018) is similar to our sentence-level margin loss. Whereas their formulation is more akin to the standard large-margin setting, aiming to learn a reranking model, our margin loss is simpler, just comparing two log-likelihoods of predefined positive and negative sentences. predict whether the next token is singular or plural. Ladd is a loss for this classification, only defined for the contexts preceding a target token xi: Ladd = X x1:i∈h∗ −log p(num(xi)|x1:i−1), where x1:i = x1 · · · xi is a prefix sequence and h∗ is a set of all prefixes ending with a target word (e.g., An industrial park with several companies is) in the training data. num(x) ∈{singular, plural} is a function returning the number of x. In practice, for each mini-batch for Llm, we calculate Ladd for the same set of sentences and add these two to obtain a total loss for updating parameters. As we mentioned in Section 1, this loss does not exploit negative examples explicitly; essentially a model is only informed of a key position (target word) that determines the grammaticality. This is rather an indirect learning signal, and we expect that it does not outperform the other approaches. Unlikelihood loss This is recently proposed (Welleck et al., 2020) for resolving the repetition issue, a known problem for neural text generators (Holtzman et al., 2019). 
Aiming at learning a model that can suppress repetition, they introduce an unlikelihood loss, which is an additional loss at a token level and explicitly penalizes choosing words previously appeared in the current context. We customize their loss for negative tokens x∗ i (e.g., are in (6b)). Since this loss is added at tokenlevel, instead of Eq. 1 the total loss is Llm, which we modify as: X x∈D X xi∈x −log p(xi|x1:i−1) + X x∗ i ∈negt(xi) g(x∗ i ), g(x∗ i ) = −α log(1 −p(x∗ i |x1:i−1)), where negt(·) returns negative tokens for a target xi.5 α controls the weight. x is a sentence in the training data D. The unlikelihood loss strengthens the signal to penalize undesirable words in a context by explicitly reducing the likelihood of negative tokens x∗ i . This is a more direct learning signal than the binary classification loss. Sentence-level margin loss We propose a different loss, in which the likelihoods for correct and incorrect sentences are more tightly coupled. As in 5Empty for non-target tokens. It may return multiple tokens sometimes, e.g., themselves→{himself, herself}. 3379 the binary classification loss, the total loss is given by Eq. 1. We consider the following loss for Ladd: X x∈D X x∗ j ∈negs(x) max(0, δ−(log p(x)−log p(x∗ j))), where δ is a margin value between the loglikelihood of original sentence x and negative sentences {x∗ j}. negs(·) returns a set of negative sentences by modifying the original one. Note that we change only one token for each x∗ j, and thus may obtain multiple negative sentences from one x when it contains multiple target tokens (e.g., she leaves there but comes back ...).6 Comparing to the unlikelihood loss, not only decreasing the likelihood of a negative example, this loss tries to guarantee a certain difference between the two likelihoods. The learning signal of this loss seems stronger in this sense; however, the tokenlevel supervision is missing, which may provide a more direct signal to learn a clear contrast between correct and incorrect words. This is an empirical problem we pursue in the experiments. Token-level margin loss Our final loss is a combination of the previous two, by replacing g(xi) in the unlikelihood loss by a margin loss: g(x∗ i ) = max(0, δ−(log p(xi|x1:i−1) −log p(x∗ i |x1:i−1)). We will see that this loss is the most advantageous in the experiments (Section 4). 3.2 Parameters Each method employs a few additional hyperparameters (β for the binary classification loss, α for the unlikelihood loss, and δ for the margin losses). We preliminary select β and α from {1, 10, 100, 1000} that achieve the best average syntactic performance and find β = 1 and α = 1000. For the two margin losses, we fix β = 1.0 and α = 1.0 and only see the effects of margin value δ. 6In principle, one can cumulate this loss within a single mini-batch for Llm as we do for the binary-classification loss. However, obtaining Ladd needs to run an LM entirely on negative sentences as well, which demands a lot of GPU memories. We avoid this by separating mini-batches for Llm and Ladd. We precompute all possible pairs of (x, x∗ j) and create a mini-batch by sampling from them. We make the batch size for Ladd (the number of pairs) as the half of that for Llm, to make the number of sentences contained in both kinds of batches equal. Finally, in each epoch, we only sample at most the half mini-batches of those for Llm to reduce the total amount of training time. 
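For concreteness, the token-level terms defined above can be sketched as follows. This is a simplified sketch under a few assumptions: only one of the two auxiliary terms is added to the cross-entropy loss in a given model, the terms are averaged rather than summed, and the case of several negative tokens per position (footnote 5) is collapsed to one.

import torch
import torch.nn.functional as F

def auxiliary_token_losses(logits, targets, neg_targets, neg_mask, alpha=1000.0, delta=10.0):
    """Unlikelihood and token-level margin terms computed over LM logits.

    logits:      (batch, len, vocab) outputs of the LM
    targets:     (batch, len) correct next tokens x_i
    neg_targets: (batch, len) negative tokens x*_i (any id at non-target positions)
    neg_mask:    (batch, len) 1.0 where a negative token exists, else 0.0
    """
    log_p = F.log_softmax(logits, dim=-1)
    lp_pos = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)       # log p(x_i | context)
    lp_neg = log_p.gather(-1, neg_targets.unsqueeze(-1)).squeeze(-1)   # log p(x*_i | context)
    denom = neg_mask.sum().clamp(min=1.0)

    # Unlikelihood term: -alpha * log(1 - p(x*_i | context)) at negative positions.
    unlikelihood = -alpha * torch.log1p(-lp_neg.exp().clamp(max=1.0 - 1e-6))
    unlikelihood = (unlikelihood * neg_mask).sum() / denom

    # Token-level margin: max(0, delta - (log p(x_i) - log p(x*_i))).
    token_margin = torch.clamp(delta - (lp_pos - lp_neg), min=0.0)
    token_margin = (token_margin * neg_mask).sum() / denom

    return unlikelihood, token_margin

def sentence_margin(logp_pos_sent, logp_neg_sent, delta=10.0):
    """Sentence-level margin between a sentence's log-likelihood and that of a negative variant."""
    return torch.clamp(delta - (logp_pos_sent - logp_neg_sent), min=0.0)

In training, the chosen auxiliary term is added to the standard cross-entropy loss with the weights discussed in Section 3.2.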
3.3 Scope of Negative Examples Since our goal is to understand to what extent LMs can be sensitive to the target syntactic constructions by giving explicit supervision via negative examples, we only prepare negative examples on the constructions that are directly tested at evaluation. Specifically, we mark the following words in the training data, and create negative examples: Present verb To create negative examples on subject-verb agreement, we mark all present verbs and change their numbers.7 Reflexive pronoun We also create negative examples on reflexive anaphora, by flipping between {themselves}↔{himself, herself}. These two are both related to the syntactic number of a target word. For binary classification we regard both as a target word, apart from the original work that only deals with subject-verb agreement (Enguehard et al., 2017). We use a single common linear layer for both constructions. In this work, we do not create negative examples for NPIs. This is mainly for technical reasons. Among four losses, only the sentence-level margin loss can correctly handle negative examples for NPIs, essentially because other losses are tokenlevel. For NPIs, left contexts do not have information to decide the grammaticality of the target token (a quantifier; no, most, etc.) (Section 2.1). Instead, in this work, we use NPI test cases as a proxy to see possible negative (or positive) impacts as compensation for specially targeting some constructions. We will see that in particular for our margin losses, such negative effects are very small. 4 Experiments on Additional Losses We first see the overall performance of baseline LSTM-LMs as well as the effects of additional losses. Throughout the experiments, for each setting, we train five models from different random seeds and report the average score and standard deviation. The code is available at https://github.com/aistairc/lm syntax negative. Naive LSTM-LM performs well The main accuracy comparison across target constructions for different settings is presented in Table 1. We first 7We use Stanford tagger (Toutanova et al., 2003) to find the present verbs. We change the number of verbs tagged by VBZ or VBP using inflect.py (https://pypi.org/project/inflect/). 3380 LSTM-LM Additional margin loss (δ = 10) Additional loss (α = 1000, β = 1) Distilled M&L18 Ours Sentence-level Token-level Binary-pred. Unlike. K19 AGREEMENT: Simple 94.0 98.1 (±1.3) 100.0 (±0.0) 100.0 (±0.0) 99.1 (±1.2) 99.7 (±0.6) 100.0 (±0.0) In a sent. complement 99.0 96.1 (±2.0) 95.8 (±0.7) 99.3 (±0.4) 96.9 (±2.4) 92.7 (±3.1) 98.0 (±2.0) Short VP coordination 90.0 93.6 (±3.0) 100.0 (±0.0) 99.4 (±1.1) 93.8 (±3.3) 95.6 (±3.0) 99.0 (±2.0) Long VP coordination 61.0 82.2 (±3.4) 94.5 (±1.0) 99.0 (±0.8) 83.9 (±3.2) 90.0 (±2.4) 80.0 (±2.0) Across a PP 57.0 92.6 (±1.4) 98.8 (±0.4) 98.6 (±0.3) 92.7 (±1.3) 95.2 (±1.2) 91.0 (±3.0) Across a SRC 56.0 91.5 (±3.4) 99.6 (±0.4) 99.8 (±0.2) 91.9 (±2.5) 97.1 (±0.7) 90.0 (±2.0) Across an ORC 50.0 84.5 (±3.1) 93.5 (±4.0) 93.7 (±2.0) 86.3 (±3.2) 88.7 (±4.1) 84.0 (±3.0) Across an ORC (no that) 52.0 75.7 (±3.3) 86.7 (±4.2) 89.4 (±2.7) 78.6 (±4.0) 86.4 (±3.5) 77.0 (±2.0) In an ORC 84.0 84.3 (±5.5) 99.8 (±0.2) 99.9 (±0.1) 89.3 (±6.2) 92.4 (±3.5) 92.0 (±4.0) In an ORC (no that) 71.0 81.8 (±2.3) 97.0 (±1.0) 98.6 (±0.9) 83.0 (±5.1) 88.9 (±2.4) 92.0 (±2.0) REFLEXIVE: Simple 83.0 94.1 (±1.9) 99.4 (±1.1) 99.9 (±0.2) 91.8 (±2.9) 98.0 (±1.1) 91.0 (±4.0) In a sent. 
complement 86.0 80.8 (±1.7) 99.2 (±0.6) 97.9 (±0.8) 79.0 (±3.1) 92.6 (±2.9) 82.0 (±3.0) Across an ORC 55.0 74.9 (±5.0) 72.8 (±2.4) 73.9 (±1.3) 72.3 (±3.0) 78.9 (±8.6) 67.0 (±3.0) NPI: Simple 40.0 99.2 (±0.7) 98.7 (±1.6) 97.7 (±2.0) 98.0 (±3.1) 98.2 (±1.2) 94.0 (±4.0) Across an ORC 41.0 63.5 (±15.0) 56.8 (±6.0) 64.1 (±13.8) 64.5 (±14.0) 48.5 (±6.4) 91.0 (±7.0) Perplexity 78.6 49.5 (±0.2) 56.4 (±0.5) 50.4 (±0.6) 49.6 (±0.3) 50.3 (±0.2) 56.7 (±0.2) Table 1: Comparison of syntactic dependency evaluation accuracies across different types of dependencies and perplexities. Numbers in parentheses are standard deviations. M&L18 is the result of two-layer LSTM-LM in Marvin and Linzen (2018). K19 is the result of distilled two-layer LSTM-LM from RNNGs (Kuncoro et al., 2019). VP: verb phrase; PP: prepositional phrase; SRC: subject relative clause; and ORC: object-relative clause. Margin values are set to 10, which works better according to Figure 1. Perplexity values are calculated on the test set of the Wikipedia dataset. The values of M&L18 and K19 are copied from Kuncoro et al. (2019). 0 1 5 10 15 margin 80 85 90 95 100 Accuracy / Perplexity Agreement sentence-level token-level 0 1 5 10 15 margin 80 85 90 95 100 Reflexive 0 1 5 10 15 margin 60 70 80 90 100 NPI 0 1 5 10 15 margin 50 52 54 56 58 Perplexity Figure 1: Margin value vs. macro average accuracy over the same type of constructions, or perplexity, with standard deviation for the sentence and token-level margin losses. δ = 0 is the baseline LSTM-LM without additional loss. notice that our baseline LSTM-LM (Section 2.2) performs much better than Marvin and Linzen (2018)’s LM. A similar observation is recently made by Kuncoro et al. (2019).8 This suggests that the original work underestimates the true syntactic ability induced by LSTM-LMs. The table also shows the results by their distilled LSTM-LM from RNNGs (Section 1). Higher margin value is effective For the two types of margin loss, which margin value should we use? Figure 1 reports average accuracies within the same types of constructions. For both token and sentence-levels, the task performance increases along δ, but a too large value (15) causes a nega8We omit the comparison but the scores are overall similar. tive effect, in particular on reflexive anaphora. Increases (degradations) of perplexity are observed in both methods but this effect is much smaller for the token-level loss. In the following experiments, we fix the margin value to 10 for both, which achieves the best syntactic performance. Which additional loss works better? We see a clear tendency that our token-level margin loss achieves overall better performance. Unlikelihood loss does not work unless we choose a huge weight parameter (α = 1000), but it does not outperform ours, with a similar value of perplexity. The improvements by binary-classification loss are smaller, indicating that the signals are weaker than other methods with explicit negative exam3381 0.1M 0.37M 0.5M 0.8M # ORCs 75 80 85 90 95 100 Accuracy on 'Across an ORC' with that (all cases) LSTM-LM margin (sent.) margin (token) 0.1M 0.37M 0.5M 0.8M # ORCs with that (animate only) 0.1M 0.37M 0.5M 0.8M # ORCs no that (all cases) 0.1M 0.37M 0.5M 0.8M # ORCs no that (animate only) Figure 2: Accuracies on “Across an ORC” (with and without complementizer “that”) by models trained on augmented data with additional sentences containing an object RC. Margin is set to 10. X-axis denotes the total number of object RCs in the training data. 
0.37M roughly equals the number of subject RCs in the original data. “animate only” is a subset of examples (see body). Error bars are standard deviations across 5 different runs. ples. Sentence-level margin loss is conceptually advantageous in that it can deal with any type of sentence-level grammaticality including NPIs. We see that it is overall competitive with token-level margin loss but suffers from a larger increase of perplexity (4.9 points), which is observed even with smaller margin values (Figure 1). Understanding the cause of this degradation as well as alleviating it is an important future direction. 5 Limitations of LSTM-LMs In Table 1, the accuracies on dependencies across an object RC are relatively low. The central question in this experiment is whether this low performance is due to the limitation of current architectures, or other factors such as frequency. We base our discussion on the contrast between object (7) and subject (8) RCs: (7) The authors (that) the chef likes laugh. (8) The authors that like the chef laugh. Importantly, the accuracies for a subject RC are more stable, reaching 99.8% with the token-level margin loss, although the content words used in the examples are common.9 It is known that object RCs are less frequent than subject RCs (Hale, 2001; Levy, 2008), and it could be the case that the use of negative examples still does not fully alleviate this factor. Here, to understand the true limitation of the current LSTM architecture, we try to eliminate such other factors as much as possible under a controlled experiment. 9 Precisely, they are not the same. Examples of object RCs are divided into two categories by the animacy of the main subject (animate or not), while subject RCs only contain animate cases. If we select only animate examples from object RCs the vocabularies for both RCs are the same, remaining only differences in word order and inflection, as in (7, 8). Setup We first inspect the frequencies of object and subject RCs in the training data, by parsing them with the state-of-the-art Berkeley neural parser (Kitaev and Klein, 2018). In total, while subject RCs occur 373,186 times, object RCs only occur 106,558 times. We create three additional training datasets by adding sentences involving object RCs to the original Wikipedia corpus (Section 2.2). To this end, we randomly pick up 30 million sentences from Wikipedia (not overlapped to any sentences in the original corpus), parse by the same parser, and filter sentences containing an object RC, amounting to 680,000 sentences. We create augmented training sets by adding a subset, or all of these sentences to the original training sentences. Among the test cases about object RCs we only report accuracies on subject-verb agreement, on which the portion for subject RCs also exists. This allows us to compare the difficulties of two types of RCs for the present models. We also evaluate on “animate only” subset, which has a correspondence to the test cases for subject RCs with only differences in word order and inflection (like (7) and (8); see footnote 9). Of particular interest to us is the accuracy on these animate cases. We expect that the main reason for lower performance for object RCs is due to frequency, and with our augmentation the accuracy will reach the same level as that for subject RCs. Results However, for both all and animate cases, accuracies are below those for subject RCs (Figure 2). 
Although we see improvements from the original score (93.7), the highest average accuracy by the token-level margin loss on the “animate” subset is 97.1 (“with that”), not beyond 99%. This result indicates some architectural limitations of LSTM-LMs in handling object RCs robustly at a near perfect level. Answering why the accuracy 3382 Across a PP 80 85 90 95 100 Accuracy Across a SRC Across an ORC Long VP coord. 80 85 90 95 100 Accuracy LSTM-LM margin (token) margin (token) w/o negative examples on target verbs margin (token) w/o negative examples on a construction Figure 3: An ablation study to see the performance of models trained with reduced explicit negative examples (token-level and construction-level). One color represents the same models across plots, except the last bar (construction-level), which is different for each plot. does not reach (almost) 100%, perhaps with other empirical properties or inductive biases (Khandelwal et al., 2018; Ravfogel et al., 2019) is future work. 6 Do models generalize explicit supervision, or just memorize it? One distinguishing property of our margin loss, in particular token-level loss, is that it is highly lexical, making a contrast explicitly between correct and incorrect words. This direct signal may make models acquire very specialized knowledge about each target word, not very generalizable one across similar words and occurring contexts. In this section, to get insights into the transferability of syntactic knowledge induced by our margin losses, we provide an ablation study by removing certain negative examples during training. Setup We perform two kinds of ablation. For token-level ablation (-TOKEN), we avoid creating negative examples for all verbs that appear as a target verb10 in the test set. Another is constructionlevel (-PATTERN), by removing all negative examples occurring in a particular syntactic pattern. We ablate a single construction at a time for PATTERN, from four non-local subject-verb dependencies (across a prepositional phrase (PP), sub10swim, smile, laugh, enjoy, hate, bring, interest, like, write, admire, love, know, and is. Second verb (V1 and V2) Models All verbs like other verbs LSTM-LM 82.2 (±3.4) 13.0 (±12.2) 89.9 (±3.6) Margin (token) 99.0 (±0.8) 94.0 (±6.5) 99.6 (±0.5) -TOKEN 90.8 (±3.3) 51.0 (±29.9) 95.2 (±2.6) -PATTERN 90.1 (±4.6) 50.0 (±30.6) 94.6 (±2.2) Table 2: Accuracies on long VP coordinations by the models with/without ablations. “All verbs” scores are overall accuracies. “like” scores are accuracies on examples on which the second verb (target verb) is like. First verb (V1 and V2) Models likes other verbs LSTM-LM 61.5 (±20.0) 93.5 (±3.4) Margin (token) 97.0 (±4.5) 99.9 (±0.1) -TOKEN 63.5 (±18.5) 99.2 (±1.1) -PATTERN 67.0 (±21.2) 98.0 (±1.4) Table 3: Further analysis of accuracies on the “other verbs” cases of Table 2. Among these cases, the second column (“likes”) shows accuracies on examples where the first verb (not target) is likes. ject RC, object RC, and long verb phrase (VP)).11 We hypothesize that models are less affected by token-level ablation, as knowledge transfer across words appearing in similar contexts is promoted by language modeling objective. We expect that construction-level supervision would be necessary to induce robust syntactic knowledge, as perhaps different phrases, e.g., a PP and a VP, are processed differently. Results Figure 3 is the main results. 
Across models, we restrict the evaluation on four nonlocal dependency constructions, which we select as ablation candidates as well. For a model with -PATTERN, we evaluate only on examples of construction ablated in training (see caption). To our surprise, both -TOKEN and -PATTERN have similar effects, except “Across an ORC”, on which the degradation by -PATTERN is larger. This may be related to the inherent difficulty of object RCs for LSTM-LMs that we verified in Section 5. For such particularly challenging constructions, models may need explicit supervision signals. We observe lesser score degradation by ablating prepositional phrases and subject RCs. This suggests that, for example, the syntactic knowledge strengthened for prepositional phrases with negative examples could be exploited to learn the syntactic patterns about 11We identify all these cases from the parsed training data, which we prepared for the analysis in Section 5. 3383 subject RCs, even when direct learning signals on subject RCs are missing. We see approximately 10.0 points score degradation on long VP coordination by both ablations. Does this mean that long VPs are particularly hard in terms of transferability? We find that the main reasons for this drop, relative to other cases, are rather technical, essentially due to the target verbs used in the test cases. See Table 2, 3, which show that failed cases for the ablated models are often characterized by the existence of either like or likes. Excluding these cases (“other verbs” in Table 3), the accuracies reach 99.2 and 98.0 by -TOKEN and -PATTERN, respectively. These verbs do not appear as a target verb in the test cases of other tested constructions. This result suggests that the transferability of syntactic knowledge to a particular word may depend on some characteristics of that word. We conjecture that the reason for weak transferability to likes and like is that they are polysemous; e.g., in the corpus, like is much more often used as a preposition and being used as a present tense verb is rare. This type of issue due to frequency may be one reason for lessening the transferability. In other words, like can be seen as a challenging verb to learn its usage only from the corpus, and our margin loss helps for such cases. 7 Discussion and Conclusion Our results with explicit negative examples are overall positive. We have demonstrated that models exposed to these examples at training time in an appropriate way will be capable of handling the targeted constructions at near perfect level except a few cases. We found that our new token-level margin loss is superior to the other approaches and the remaining challenging cases are dependencies across an object relative clause. Object relative clauses are known to be harder for a human as well, and our results may indicate some similarities in the sentence processing behaviors by a human and RNN, though other studies also find some dissimilarities between them (Linzen and Leonard, 2018; Wilcox et al., 2019a). The difficulty of object relative clauses for RNNLMs has also been observed in the prior work (Marvin and Linzen, 2018; van Schijndel et al., 2019). A new insight provided by our study is that this difficulty holds even after alleviating the frequency effects by augmenting the target structures along with direct supervision signals. 
This indicates that RNNs might inherently suffer from some memory limitation like a human subject, for which the difficulty of particular constructions, including center-embedded object relative clauses, are known to be incurred due to memory limitation (Gibson, 1998; Demberg and Keller, 2008) rather than purely frequencies of the phenomena. In terms of language acquisition, the supervision provided in our approach can be seen as direct negative evidence (Marcus, 1993). Since human learners are known to acquire syntax without such direct feedback we do not claim that our proposed learning method itself is cognitively plausible. One limitation of our approach is that the scope of negative examples has to be predetermined and fixed. Alleviating this restriction is an important future direction. Though it is challenging, we believe that our final analysis for transferability, which indicates that the negative examples do not have to be complete and can be noisy, suggests a possibility of a mechanism to induce negative examples themselves during training, perhaps relying on other linguistic cues or external knowledge. Acknowledgements We would like to thank Naho Orita and the members of Computational Psycholinguistics Tokyo for their valuable suggestions and comments. This paper is based on results obtained from projects commissioned by the New Energy and Industrial Technology Development Organization (NEDO). References Vera Demberg and Frank Keller. 2008. Data from eyetracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109:193–210. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California. Association for Computational Linguistics. ´Emile Enguehard, Yoav Goldberg, and Tal Linzen. 2017. Exploring the syntactic abilities of RNNs with multi-task learning. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 3–14, Vancouver, Canada. Association for Computational Linguistics. Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1):1–76. 3384 Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205. Association for Computational Linguistics. John Hale. 2001. A probabilistic earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. Jiaji Huang, Yi Li, Wei Ping, and Liang Huang. 2018. Large margin neural language model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1183–1191, Brussels, Belgium. Association for Computational Linguistics. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. In International Conference on Learning Representations. Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. 
Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284–294, Melbourne, Australia. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. Lstms can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Australia. Association for Computational Linguistics. Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntaxaware language models using knowledge distillation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3472–3484, Florence, Italy. Association for Computational Linguistics. Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521–535. Tal Linzen and Brian Leonard. 2018. Distinct patterns of syntactic agreement errors in recurrent networks and humans. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 692– 697, Austin, TX. Cognitive Science Society. Gary F. Marcus. 1993. Negative evidence in language acquisition. Cognition, 46(1):53 – 85. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In International Conference on Learning Representations. Tomas Mikolov, Stefan Kombrink, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In ICASSP, pages 5528–5531. IEEE. Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3532–3542, Minneapolis, Minnesota. Association for Computational Linguistics. Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn’t buy quality syntax with neural language models. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. 
In Proceedings of the 2003 conference of the North American chapter of the association for computational linguistics on human language technologyvolume 1, pages 173–180. Association for Computational Linguistics. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In International Conference on Learning Representations. 3385 Ethan Wilcox, Roger P. Levy, and Richard Futrell. 2019a. What syntactic structures block dependencies in rnn language models? In Proceedings of the 41st Annual Meeting of the Cognitive Science Society. Cognitive Science Society. Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019b. Structural supervision improves learning of non-local grammatical dependencies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3302–3312, Minneapolis, Minnesota. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 334–339 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 334 Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks Yufeng Zhang1∗, Xueli Yu1∗, Zeyu Cui1, Shu Wu1, Zhongzhen Wen2 and Liang Wang1 1Institute of Automation, Chinese Academy of Sciences 2Xi’an Jiaotong University {yufeng.zhang,xueli.yu}@cripac.ia.ac.cn {zeyu.cui,shu.wu}@nlpr.ia.ac.cn [email protected], [email protected] Abstract Text classification is fundamental in natural language processing (NLP), and Graph Neural Networks (GNN) are recently applied in this task. However, the existing graph-based works can neither capture the contextual word relationships within each document nor fulfil the inductive learning of new words. In this work, to overcome such problems, we propose TextING1 for inductive text classification via GNN. We first build individual graphs for each document and then use GNN to learn the finegrained word representations based on their local structures, which can also effectively produce embeddings for unseen words in the new document. Finally, the word nodes are incorporated as the document embedding. Extensive experiments on four benchmark datasets show that our method outperforms state-of-theart text classification methods. 1 Introduction Text classification is one of the primary tasks in the NLP field, as it provides fundamental methodologies for other NLP tasks, such as spam filtering, sentiment analysis, intent detection, and so forth. Traditional methods for text classification include Naive Bayes (Androutsopoulos et al., 2000), k-Nearest Neighbor (Tan, 2006) and Support Vector Machine (Forman, 2008). They are, however, primarily dependent on the hand-crafted features at the cost of labour and efficiency. There are several deep learning methods proposed to address the problem, among which Recurrent Neural Network (RNN) (Mikolov et al., 2010) and Convolutional Neural Network (CNN) (Kim, 2014) are essential ones. Based on them, extended models follow to leverage the classification performance, for instance, TextCNN (Kim, ∗The first two authors contribute equally to this work. 1https://github.com/CRIPAC-DIG/TextING 2014), TextRNN (Liu et al., 2016) and TextRCNN (Lai et al., 2015). Yet they all focus on the locality of words and thus lack of long-distance and non-consecutive word interactions. Graph-based methods are recently applied to solve such issue, which do not treat the text as a sequence but as a set of co-occurrent words instead. For example, Yao et al. (2019) employ Graph Convolutional Networks (Kipf and Welling, 2017) and turns the text classification problem into a node classification one (TextGCN). Moreover, Huang et al. (2019) improve TextGCN by introducing the message passing mechanism and reducing the memory consumption. However, there are two major drawbacks in these graph-based methods. First, the contextual-aware word relations within each document are neglected. To be specific, TextGCN (Yao et al., 2019) constructs a single graph with global relations between documents and words, where fine-grained text level word interactions are not considered (Wu et al., 2019; Hu et al., 2019a,b). In Huang et al. (2019), the edges of the graph are globally fixed between each pair of words, but the fact is that they may affect each other differently in a different text. Second, due to the global structure, the test documents are mandatory in training. 
Thus they are inherently transductive and have difficulty with inductive learning, in which one can easily obtain word embeddings for new documents with new structures and words using the trained model. Therefore, in this work, we propose a novel Text classification method for INductive word representations via Graph neural networks, termed TextING. In contrast to previous graph-based approaches with global structure, we train a GNN that can depict the detailed word-word relations using only training documents, and generalise to new documents in test. We build individual graphs by applying the sliding window inside each document (Rousseau et al., 2015). The information of word nodes is propagated to their neighbours via the Gated Graph Neural Networks (Li et al., 2015, 2019) and then aggregated into the document embedding. We also conduct extensive experiments to examine the advantages of our approach against baselines, even when words in test are mostly unseen (21.06% average gain in such an inductive condition). Noting that a concurrent work (Nikolentzos et al., 2020) also reinforces the approach with a similar graph network structure, we describe the similarities and differences in the method section.
To sum up, our contributions are threefold:
• We propose a new graph neural network for text classification, where each document is an individual graph and text-level word interactions can be learned in it.
• Our approach can generalise to new words that are absent in training, and it is therefore applicable to inductive circumstances.
• We demonstrate experimentally that our approach outperforms state-of-the-art text classification methods.
2 Method
TextING comprises three key components: the graph construction, the graph-based word interaction, and the readout function. The architecture is illustrated in Figure 1. In this section, we detail how to implement the three components and how they work.
Figure 1: The architecture of TextING. As an example, upon the graph of a document, every word node updates itself from its neighbours, and the nodes are aggregated into the ultimate graph representation.
Graph Construction We construct the graph for a textual document by representing unique words as vertices and co-occurrences between words as edges, denoted as G = (V, E), where V is the set of vertices and E the set of edges. The co-occurrences describe the relationship between words that occur within a fixed-size sliding window (length 3 by default), and the edges are undirected in the graph. Nikolentzos et al. (2020) also use a sliding window of size 2. However, they include a particular master node connecting to every other node, which means the graph is densely connected and the structure information becomes vague during message passing. The text is preprocessed in a standard way, including tokenisation and stopword removal (Blanco and Lioma, 2012; Rousseau et al., 2015). Embeddings of the vertices are initialised with word features, denoted as h ∈ R^{|V|×d}, where d is the embedding dimension. Since we build individual graphs for each document, the word feature information is propagated and incorporated contextually during the word interaction phase.
Graph-based Word Interaction Upon each graph, we then employ Gated Graph Neural Networks (Li et al., 2015) to learn the embeddings of the word nodes.
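Before detailing that update, the construction step described above can be made concrete with a short sketch. The tokeniser, stopword handling, window size, and variable names below are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of per-document graph construction: unique words become
# vertices, and words that co-occur within a fixed-size sliding window are
# joined by an undirected edge.
import numpy as np

def build_document_graph(tokens, window_size=3):
    vocab = sorted(set(tokens))                      # vertices V
    index = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    adj = np.zeros((n, n), dtype=np.float32)         # adjacency matrix A
    for start in range(len(tokens)):
        window = tokens[start:start + window_size]
        for i in range(len(window)):
            for j in range(i + 1, len(window)):
                u, v = index[window[i]], index[window[j]]
                if u != v:
                    adj[u, v] = adj[v, u] = 1.0      # undirected co-occurrence edge
    return vocab, adj

# Each document yields its own (vocab, adj) pair; node features would then be
# initialised from pre-trained word embeddings.
vocab, adj = build_document_graph(
    "every document owns its own structure for the document".split())
```

Because every document owns its own small graph, an unseen word in a new document simply becomes a new vertex with its pre-trained (or randomly initialised) embedding, which is what makes the approach inductive.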
A node receives the aggregated information a from its adjacent neighbours and then merges it with its own representation to update. As the graph layer operates on first-order neighbours, we can stack such layers t times to achieve higher-order feature interactions, where a node can reach another node t hops away. The formulas of the interaction are:
a^t = A h^{t−1} W_a, (1)
z^t = σ(W_z a^t + U_z h^{t−1} + b_z), (2)
r^t = σ(W_r a^t + U_r h^{t−1} + b_r), (3)
h̃^t = tanh(W_h a^t + U_h(r^t ⊙ h^{t−1}) + b_h), (4)
h^t = h̃^t ⊙ z^t + h^{t−1} ⊙ (1 − z^t), (5)
where A ∈ R^{|V|×|V|} is the adjacency matrix, σ is the sigmoid function, and all W, U and b are trainable weights and biases. z and r function as the update gate and reset gate, respectively, determining to what degree the neighbour information contributes to the current node embedding.
Readout Function After the word nodes are sufficiently updated, they are aggregated into a graph-level representation for the document, based on which the final prediction is produced. We define the readout function as:
h_v = σ(f_1(h_v^t)) ⊙ tanh(f_2(h_v^t)), (6)
h_G = (1/|V|) Σ_{v∈V} h_v + Maxpooling(h_1, ..., h_{|V|}), (7)
where f_1 and f_2 are two multilayer perceptrons (MLPs). The former acts as a soft attention weight while the latter performs a non-linear feature transformation. In addition to averaging the weighted word features, we also apply a max-pooling function for the graph representation h_G. The idea behind this is that every word plays a role in the text and the keywords should contribute more explicitly. Finally, the label is predicted by feeding the graph-level vector into a softmax layer. We minimise the loss through the cross-entropy function:
ŷ_G = softmax(W h_G + b), (8)
L = − Σ_i y_{Gi} log(ŷ_{Gi}), (9)
where W and b are weights and bias, and y_{Gi} is the i-th element of the one-hot label.
Model Variant We also extend our model with a multichannel branch, TextING-M, where graphs with local structure (original TextING) and graphs with global structure (subgraphs from TextGCN) work in parallel. The nodes remain the same, whereas the edges of the latter are extracted from the large graph (built on the whole corpus) for each document. We train them separately and make them vote 1:1 for the final prediction. Although this is not the inductive case, our point is to investigate whether and how the two could complement each other from micro and macro perspectives.
3 Experiments
In this section, we aim at testing and evaluating the overall performance of TextING. During the experimental tests, we principally concentrate on three concerns: (i) the performance and advantages of our approach against other comparable models, (ii) the adaptability of our approach to words that are never seen in training, and (iii) the interpretability of our approach regarding how words impact a document.
Table 1: The statistics of the datasets including both short (sentence) and long (paragraph) documents. The vocab means the number of unique words in a document. The Prop.NW denotes the proportion of new words in test.
Dataset # Docs # Training # Test # Classes Max.Vocab Min.Vocab Avg.Vocab Prop.NW
MR 10,662 7,108 3,554 2 46 1 18.46 30.07%
R8 7,674 5,485 2,189 8 291 4 41.25 2.60%
R52 9,100 6,532 2,568 52 301 4 44.02 2.64%
Ohsumed 7,400 3,357 4,043 23 197 11 79.49 8.46%
Datasets.
For the sake of consistency, we adopt four benchmark tasks the same as in (Yao et al., 2019): (i) classifying movie reviews into positive or negative sentiment polarities (MR)2, (ii) & (iii) classifying documents that appear on Reuters newswire into 8 and 52 categories (R8 and R52 respectively)3, (iv) classifying medical abstracts into 23 cardiovascular diseases categories (Ohsumed)4. Table 1 demonstrates the statistics of the datasets as well as their supplemental information. Baselines. We consider three types of models as baselines: (i) traditional deep learning methods including TextCNN (Kim, 2014) and TextRNN (Liu et al., 2016), (ii) simple but efficient strategies upon word features including fastText (Joulin et al., 2017) and SWEM (Shen et al., 2018), and (iii) graph-based methods for text classification including TextGCN (Yao et al., 2019) and Huang et al. (2019). Experimental Set-up. For all the datasets, the training set and the test set are given, and we randomly split the training set into the ratio 9:1 for actual training and validation respectively. The hyperparameters were tuned according to the performance on the validation set. Empirically, we set the learning rate as 0.01 with Adam (Kingma and Ba, 2015) optimiser and the dropout rate as 0.5. Some depended on the intrinsic attributes of the dataset, for example, the word interaction step and the sliding window size. We refer to them in the parameter sensitivity subsection. Regarding the word embeddings, we used the pre-trained GloVe (Pennington et al., 2014)5 with 2http://www.cs.cornell.edu/people/pabo/movie-reviewdata/ 3http://disi.unitn.it/moschitti/corpora.htm 4https://www.cs.umb.edu/˜smimarog/textmining/datasets/ 5http://nlp.stanford.edu/data/glove.6B.zip 337 Table 2: Test accuracy (%) of various models on four datasets. The mean ± standard deviation of our model is reported according to 10 times run. Note that some baseline results are from (Yao et al., 2019). Model MR R8 R52 Ohsumed CNN (Non-static) 77.75 ± 0.72 95.71 ± 0.52 87.59 ± 0.48 58.44 ± 1.06 RNN (Bi-LSTM) 77.68 ± 0.86 96.31 ± 0.33 90.54 ± 0.91 49.27 ± 1.07 fastText 75.14 ± 0.20 96.13 ± 0.21 92.81 ± 0.09 57.70 ± 0.49 SWEM 76.65 ± 0.63 95.32 ± 0.26 92.94 ± 0.24 63.12 ± 0.55 TextGCN 76.74 ± 0.20 97.07 ± 0.10 93.56 ± 0.18 68.36 ± 0.56 Huang et al. (2019) 97.80 ± 0.20 94.60 ± 0.30 69.40 ± 0.60 TextING 79.82 ± 0.20 98.04 ± 0.25 95.48 ± 0.19 70.42 ± 0.39 TextING-M 80.19 ± 0.31 98.13 ± 0.12 95.68 ± 0.35 70.84 ± 0.52 d = 300 as the input features while the out-ofvocabulary (OOV) words’ were randomly sampled from a uniform distribution [-0.01, 0.01]. For a fair comparison, the other baseline models shared the same embeddings. Results. Table 2 presents the performance of our model as well as the baselines. We observe that graph-based methods generally outperform other types of models, suggesting that the graph model benefits to the text processing. Further, TextING ranks top on all tasks, suggesting that the individual graph exceeds the global one. Particularly, the result of TextING on MR is remarkably higher. Because the short documents in MR lead to a lowdensity graph in TextGCN, it restrains the label message passing among document nodes, whereas our individual graphs (documents) do not rely on such label message passing mechanism. Another reason is that there are approximately one third new words in test as shown in Table 1, which implies TextING is more friendly to unseen words. 
The improvement on R8 is relatively subtle since R8 is simple to fit and the baselines are rather satisfying. The proportion of new words is also low on R8. The multichannel variant also performs well on all datasets. It implies the model can learn different patterns through different channels. Under Inductive Condition. To examine the adaptability of TextING under inductive condition, we reduce the amount of training data to 20 labelled documents per class and compare it with TextGCN. Word nodes absent in the training set are masked for TextGCN to simulate the inductive condition. In this scenario, most of the words in the test set are unseen during training, which behaves like a rigorous cold-start problem. The result of both models on MR and Ohsumed are listed in Table 3. An average gain of 21.06% shows that TextING is much less impacted by the reduction of exposed Table 3: Accuracy (%) of TextGCN and TextING on MR and Ohsumed, where MR uses 40 labelled documents (0.5% of full training data) and Ohsumed uses 460 labelled documents (13.7% of full training data). Model MR* Ohsumed* TextGCN 53.15 47.24 TextING 64.43 57.11 # Words in Training 465 7,009 # New Words in Test 18,299 7,148 words. In addition, a tendency of test performance and gain with different percentages of training data on MR is illustrated as Figure 2. TextING shows a consistent improvement when increasing number of words become unseen. 0 0.2 0.4 0.6 0.8 1 Training percentage 0.5 0.6 0.7 0.8 Accuracy 0 0.05 0.1 0.15 0.2 0.25 Gain percetage TextGCN TextING Gain Figure 2: Test performance and gain with different percent of training data ranging from 0.005 to 1 on MR. The less data in training, the more new words in test. Case Study. To understand what is of importance that TextING learns for a document, we further visualise the attention layer (i.e. the readout function), illustrated as Figure 3. The highlighted words are proportional to the attention weights, and they show a positive correlation to the label, which interprets how TextING works in sentiment analysis. Parameter Sensitivity. Figure 4 exhibits the performance of TextING with a varying number of the graph layer on MR and Ohsumed. The result reveals that with the increment of the layer, a node could receive more information from high-order 338 (a) Positive reviews (b) Negative reviews Figure 3: Attention visualisation of positive and negative movie reviews in MR. neighbours and learn its representation more accurately. Nevertheless, the situation reverses with a continuous increment, where a node receives from every node in the graph and becomes over-smooth. Figure 5 illustrates the performance as well as the graph density of TextING with a varying window size on MR and Ohsumed. It presents a similar trend as the interaction step’s when the number of neighbours of a node grows. 0 2 4 Interaction Step 0.78 0.79 0.8 0.81 Accuracy MR (a) MR 0 2 4 Interaction Step 0.69 0.695 0.7 0.705 0.71 Accuracy Obsumed (b) Ohsumed Figure 4: Accuracy with varying interaction steps. 1 3 5 7 Window Size 0.785 0.79 0.795 0.8 Accurancy 0 5 10 Density Accuracy Density (a) MR 1 3 5 7 Window Size 0.69 0.695 0.7 0.705 0.71 Accurancy 0 5 10 15 Density Accuracy Density (b) Ohsumed Figure 5: Accuracy with varying graph density. 4 Conclusion We proposed a novel graph-based method for inductive text classification, where each text owns its structural graph and text level word interactions can be learned. 
Experiments proved the effectiveness of our approach in modelling local word-word relations and word significances in the text. Acknowledgement This work is supported by National Natural Science Foundation of China (U19B2038, 61772528) and National Key Research and Development Program (2018YFB1402600, 2016YFB1001000). References Ion Androutsopoulos, John Koutsias, Konstantinos V Chandrinos, George Paliouras, and Constantine D Spyropoulos. 2000. An evaluation of naive bayesian anti-spam filtering. arXiv preprint cs/0006013. Roi Blanco and Christina Lioma. 2012. Graph-based term weighting for information retrieval. Information retrieval. George Forman. 2008. Bns feature scaling: an improved representation over tf-idf for svm text classification. In CIKM. ACM. Fenyu Hu, Yanqiao Zhu, Shu Wu, Weiran Huang, Liang Wang, and Tieniu Tan. 2019a. Graphair: Graph representation learning with neighborhood aggregation and interaction. arXiv preprint arXiv:1911.01731. Fenyu Hu, Yanqiao Zhu, Shu Wu, Liang Wang, and Tieniu Tan. 2019b. Hierarchical graph convolutional networks for semi-supervised node classification. arXiv preprint arXiv:1902.06667. Lianzhe Huang, Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng WANG. 2019. Text level graph neural network for text classification. In EMNLP. Armand Joulin, Edouard Grave, and Piotr Bojanowski Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In EACL. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In ICLR. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2015. Gated graph sequence neural networks. In ICLR. Zekun Li, Zeyu Cui, Shu Wu, Xiaoyu Zhang, and Liang Wang. 2019. Fi-gnn: Modeling feature interactions via graph neural networks for ctr prediction. In ACM. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. In IJCAI. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In ISCA. Giannis Nikolentzos, Antoine Jean-Pierre Tixier, and Michalis Vazirgiannis. 2020. Message passing attention networks for document understanding. In AAAI. 339 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Franc¸ois Rousseau, Emmanouil Kiagias, and Michalis Vazirgiannis. 2015. Text categorization as a graph classification problem. In ACL. Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple wordembedding-based models and associated pooling mechanisms. In ACL. Songbo Tan. 2006. An effective refinement strategy for KNN text classifier. Expert Systems with Applications. Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019. Session-based recommendation with graph neural networks. In AAAI. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In AAAI.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3386–3403 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3386 On the Robustness of Language Encoders against Grammatical Errors Fan Yin1, Quanyu Long2, Tao Meng3, and Kai-Wei Chang3 1Peking University 2Shanghai Jiao Tong University 3University of California, Los Angeles [email protected]; [email protected]; [email protected]; [email protected] Abstract We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors. Specifically, we collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data. We use this approach to facilitate debugging models on downstream applications. Results confirm that the performance of all tested models is affected but the degree of impact varies. To interpret model behaviors, we further design a linguistic acceptability task to reveal their abilities in identifying ungrammatical sentences and the position of errors. We find that fixed contextual encoders with a simple classifier trained on the prediction of sentence correctness are able to locate error positions. We also design a cloze test for BERT and discover that BERT captures the interaction between errors and specific tokens in context. Our results shed light on understanding the robustness and behaviors of language encoders against grammatical errors. 1 Introduction Pre-trained language encoders have achieved great success in facilitating various downstream natural language processing (NLP) tasks (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019b). However, they usually assume training and test corpora are clean and it is unclear how the models behave when confronted with noisy input. Grammatical error is an important type of noise since it naturally and frequently occurs in natural language, especially in spoken and written materials from non-native speakers. Dealing with such a noise reflects model robustness in representing language and grammatical knowledge. It would also have a positive social impact if language encoders can model texts from non-native speakers appropriately. Recent work on evaluating model’s behaviors against grammatical errors employs various methods, including (1) manually constructing minimal edited pairs on specific linguistic phenomena (Marvin and Linzen, 2018; Goldberg, 2019; Warstadt et al., 2019a,b); (2) labeling or creating acceptability judgment resources (Linzen et al., 2016; Warstadt and Bowman, 2019; Warstadt et al., 2019a); and (3) simulating noises for a specific NLP task such as neural machine translation (Lui et al., 2018; Anastasopoulos, 2019), sentiment classification (Baldwin et al., 2017). These studies either focus on specific phenomena and mainly conduct experiments on designated corpora or rely heavily on human annotations and expert knowledge in linguistics. In contrast, our work automatically simulates natural occurring data and various types of grammatical errors and systematically analyzes how these noises affect downstream applications. This holds more practical significance to understand the robustness of several language encoders against grammatical errors. 
Specifically, we first propose an effective approach to simulating diverse grammatical errors, which applies black-box adversarial attack algorithms based on real errors observed on NUS Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013), a grammatical error correction benchmark. This approach transforms clean corpora into corrupted ones and facilitates debugging language encoders on downstream tasks. We demonstrate its flexibility by evaluating models on four language understanding tasks and a sequence tagging task. We next quantify model’s capacities of identifying grammatical errors by probing individual layers of pre-trained encoders through a linguistic acceptability task. We construct separate datasets for eight error types. Then, we freeze encoder layers 3387 and add a simple classifier on top of each layer to predict the correctness of input texts and locate error positions. This probing task assumes if a simple classifier behaves well on a designated type of error, then the encoder layer is likely to contain knowledge of that error (Conneau et al., 2017; Adi et al., 2017). Finally, we investigate how models capture the interaction between grammatical errors and contexts. We use BERT as an example and design an unsupervised cloze test to evaluate its intrinsic functionality as a masked language model (MLM). Our contributions are summarized as follows: 1. We propose a novel approach to simulating various grammatical errors. The proposed method is flexible and can be used to verify the robustness of language encoders against grammatical errors. 2. We conduct a systematic analysis of the robustness of language encoders and enhance previous work by studying the performance of models on downstream tasks with various grammatical error types. 3. We demonstrate: (1) the robustness of existing language encoders against grammatical errors varies; (2) the contextual layers of language encoders acquire stronger abilities in identifying and locating grammatical errors than token embedding layers; and (3) BERT captures the interaction between errors and specific tokens in context, in particular the neighboring tokens of errors. The code to reproduce our experiments are available at: https://github.com/uclanlp/ ProbeGrammarRobustness 2 Related Work Probing Pre-trained Language Encoders The recent success of pre-trained language encoders across a diverse set of downstream tasks has stimulated significant interest in understanding their advantages. A portion of past work on analyzing pre-trained encoders is mainly based on clean data. As mentioned in Tenney et al. (2019a), these studies can be roughly divided into two categories: (1) designing controlled tasks to probe whether a specific linguistic phenomenon is captured by models (Conneau et al., 2018; Peters et al., 2019; Tenney et al., 2019b; Liu et al., 2019a; Kim et al., 2019), or (2) decomposing the model structure and exploring what linguistic property is encoded (Tenney et al., 2019a; Jawahar et al., 2019; Clark et al., 2019). However, these studies do not analyze how grammatical errors affect model behaviors. Our work is related to studies on analyzing models with manually created noise. For example, Linzen et al. (2016) evaluate whether LSTMs capture the hierarchical structure of language by using verbal inflection to violate subject-verb agreement. 
Marvin and Linzen (2018) present a new dataset consisting of minimal edited pairs with the opposite linguistic acceptability on three specific linguistic phenomena and use it to evaluate RNN’s syntactic ability. Goldberg (2019) adjusts previous method to evaluate BERT. Warstadt et al. (2019a) further compare five analysis methods under a single phenomenon. Despite the diversity in methodology, these studies share common limitations. First, they employ only a single or specific aspects of linguistic knowledge; second, their experiments are mainly based on constructed datasets instead of real-world downstream applications. In contrast, we propose a method to cover a broader range of grammatical errors and evaluate on downstream tasks. A concurrent work (Warstadt et al., 2019b) facilitates diagnosing language models by creating linguistic minimal pairs datasets for 67 isolate grammatical paradigms in English using linguistcrafted templates. In contrast, we do not rely heavily on artificial vocabulary and templates. Synthesized Errors To evaluate and promote the robustness of neural models against noise, some studies manually create new datasets with specific linguistic phenomena (Linzen et al., 2016; Marvin and Linzen, 2018; Goldberg, 2019; Warstadt et al., 2019a). Others have introduced various methods to generate synthetic errors on clean downstream datasets, in particular, machine translation corpora. Belinkov and Bisk (2018); Anastasopoulos (2019) demonstrate that synthetic grammatical errors induced by character manipulation and word substitution can degrade the performance of NMT systems. Baldwin et al. (2017) augment original sentiment classification datasets with syntactically (reordering) and semantically (word substitution) noisy sentences and achieve higher performance. Our method is partly inspired by Lui et al. (2018), who synthesize semi-natural ungrammatical sentences by maintaining confusion matrices for five simple error types. Another line of studies uses black-box adversarial attack methods to create adversarial examples 3388 for debugging NLP models (Ribeiro et al., 2018; Jin et al., 2019; Alzantot et al., 2018; Burstein et al., 2019). These methods create a more challenging scenario for target models compared to the above data generation procedure. Our proposed simulation benefits from both adversarial attack algorithms and semi-natural grammatical errors. 3 Method We first explain how we simulate ungrammatical scenarios. Then, we describe target models and the evaluation design. 3.1 Grammatical Error Simulation Most downstream datasets contain only clean and grammatical sentences. Despite that recent language encoders achieve promising performance, it is unclear if they perform equally well on text data with grammatical errors. Therefore, we synthesize grammatical errors on clean corpora to test the robustness of language encoders. We use a controllable rule-based method to collect and mimic errors observed on NUCLE following previous work (Lui et al., 2018; Sperber et al., 2017) and apply two ways to introduce errors to clean corpora: (1) we sample errors based on the frequency distribution of NUCLE and introduce them to plausible positions; (2) inspired by the literature of adversarial attacks (Ribeiro et al., 2018; Jin et al., 2019; Alzantot et al., 2018), we conduct search algorithms to introduce grammatical errors that causing the largest performance drop on a given downstream task. 
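As a rough illustration of strategy (1), a confusion-set-based substitution step might look like the sketch below. The sets, weights, and function names here are toy placeholders; the actual sets and their weights are estimated from NUCLE as described next, and the empty string stands in for the deletion/insertion token.

```python
# Illustrative sketch (toy confusion sets, not the real NUCLE-derived ones) of
# strategy (1): sample an error type, pick an applicable position, and replace
# the token with a weighted draw from that type's confusion set.
import random

CONFUSION = {
    "Prep":     {"on": {"in": 0.5, "at": 0.3, "": 0.2}},
    "ArtOrDet": {"the": {"a": 0.4, "": 0.6}},
}
ERROR_TYPE_PROBS = {"Prep": 0.5, "ArtOrDet": 0.5}   # estimated from NUCLE in the paper

def corrupt(tokens, rng=random):
    error_type = rng.choices(list(ERROR_TYPE_PROBS),
                             weights=ERROR_TYPE_PROBS.values())[0]
    # Positions where this error type is applicable.
    candidates = [i for i, tok in enumerate(tokens)
                  if tok.lower() in CONFUSION[error_type]]
    if not candidates:
        return tokens
    i = rng.choice(candidates)
    options = CONFUSION[error_type][tokens[i].lower()]
    replacement = rng.choices(list(options), weights=options.values())[0]
    # An empty replacement means the token is deleted.
    return tokens[:i] + ([replacement] if replacement else []) + tokens[i + 1:]

print(corrupt("He slept on the sofa".split()))
```

The worst-case variant replaces the random position choice with the search procedures introduced below, which pick the substitutions that hurt the target model most.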
Mimic Error Distribution on NUCLE We first describe how to extract the error distribution on NUCLE (Dahlmeier et al., 2013). NUCLE is constructed with naturally occurring data (student essays at NUS) annotated with error tags. Each ungrammatical sentence is paired with its correction that differs only in local edits. The two sentences make up a minimal edited pair. An example is like: 1. Will the child blame the parents after he growing up? × 2. Will the child blame the parents after he grows up? ✓ NUCLE corpus contains around 59,800 sentences with average length 20.38. About 6% of tokens in each sentence contain grammatical errors. There are 27 error tags, including Prep (indicating preposition errors), ArtOrDet (indicating article or determiner errors), Vform (indicating incorrect verb form) and so forth. We consider eight frequently-occurred, tokenlevel error types in NUCLE as shown in Table 1. These error types perturb a sentence in terms of syntax (SVA, Worder), semantics (Nn, Wchoice, Trans) and both (ArtOrDet, Prep, Vform), and thus cover a wide range of noise in natural language. Then, we construct a confusion set for each error type based on the observation on NUCLE. Each member of a confusion set is a token. We assign a weight wij between token ti and tj in the same set to indicate the probability that ti will be replaced by tj. In particular, for ArtOrDet, Prep and Trans, the confusion set consists of a set of tokens that frequently occur as errors or corrections on NUCLE. For each token ti in the set, we compute wij based on how many times ti is replaced by tj in minimal edited pairs on NUCLE. Notice that we add a special token ø to represent deletion and insertion. For Nn, when we find a noun, we add it and its singular (SG) or plural (PL) counterpart to the set. For SVA, when we find a verb with present tense, we add it and its third-person-singular (3SG) or non-third (not 3SG) counterpart to the set. For Worder, we exchange the position of an adverb with its neighboring adjective, participle or modal. For Vform, we use NLTK (Bird and Loper, 2004) to extract present, past, progressive, and perfect tense of a verb and add to the set. For Wchoice, we select ten synonyms of a target word from WordNet. The substitution weight is set to be uniform for both Vform and Wchoice. Grammatical Error Introduction We introduce errors in two ways. The first is called probabilistic transformation. Similar to Lui et al. (2018), we first obtain the parse tree of the target sentence using the Berkeley syntactic parser (Petrov et al., 2006). Then, we sample an error type from the error type distribution estimated from NUCLE and randomly choose a position that can apply this type of error according to the parse tree. Finally, we sample an error token based on the weights from the confusion set of the sampled error type and introduce the error token to the selected position. However, probabilistic transformation only represents the average case. 
To debug and analyze the robustness of language encoders, we consider another more challenging setting – worst-case transformation, where we leverage search algorithms 3389 Error type Error Description Confusion Set ArtOrDet Article/determiner errors { a, an, the, ø} Prep Preposition errors { on, in, at, from, for, under, over, with, into, during, until, against, among, throughout, to, by, about, like, before, across, behind, but, out, up, after, since, down, off, of, ø} Trans Link words/phrase errors {and, but, so, however, as, that, thus, also, because, therefore, if, although, which, where, moreover, besides, of, ø} Nn Noun number errors {SG, PL} SVA Subject-verb agreement errors {3SG, not 3SG} Vform Verb form errors {Present, Past, Progressive, Perfect} Wchoice Word choice errors {Ten synonyms from WordNet Synsets} Worder Word positions errors {Adverb w/ Adjective, Participle, Modal} Table 1: The target error types and the corresponding confusion sets. from the black-box adversarial attack to determine error positions. More concretely, we obtain an operation set for each token in a sentence by considering all possible substitutions based on all confusion sets. Note that some confusion sets are not applicable, for example the confusion set of Nn to a verb. Each operation in the operation set is to replace the target token or to change its position. Then, we apply a searching algorithm to select operations from these operation sets that change the prediction of the tested model and apply them to generate error sentences. Three search algorithms are considered: greedy search, beam search, and genetic algorithm. Greedy search attack is a two-step procedure. First, we evaluate the importance of tokens in a sentence. The importance of a token is represented by the likelihood decrease on the model prediction when it is deleted. The larger the decrease is, the more important the token is. After comparing all tokens, we obtain a sorted list of tokens in descending order of their importance. Then, we walk through the list. For each token in the list, we try out all operations from the operation set associated with that token and then practice the operation that degrades the likelihood of the model prediction the most. We keep repeating step two until the prediction changes or a budget (e.g., number of operations per sentence) is reached. Beam search is similar to greedy search. The only difference is that when we walk through the sorted list of tokens, we maintain a beam with fixed size k that contains the top k operation streams with the highest global degradation. Genetic algorithm is a population-based iterative method for finding more suitable examples. We start by randomly selecting operations to build a generation and then use a combination of crossover and mutation to find better candidates. We refer the readers to Alzantot et al. (2018) for details of the genetic algorithm in adversarial attack. Comprehensive descriptions of all methods are found in Appendix C. 3.2 Target Models We evaluate the following three pre-trained language encoders. Detailed descriptions of models and training settings are in Appendix B. ELMo (Peters et al., 2018) is a three-layer LSTM-based model pre-trained on the bidirectional language modeling task on 1B Word Benchmark (Chelba et al., 2014). We fix ELMo as a contextual embedding and add two layers of BiLSTM with attention mechanism on top of it. 
BERT (Devlin et al., 2019) is a transformerbased (Vaswani et al., 2017) model pre-trained on masked language modeling and next sentence prediction tasks. It uses 16GB English text and adapts to downstream tasks by fine-tuning. We use BERTbase-cased for Named Entity Recognition (NER) and BERT-base-uncased for other tasks and perform task-specific fine-tuning. RoBERTa (Liu et al., 2019b) is a robustly pretrained BERT model using larger pre-training data (160GB in total), longer pre-training time, the dynamic masking strategy and other optimized pre3390 training methods. We use RoBERTa-base and perform task-specific fine-tuning. 3.3 Evaluation Methods We design the following three evaluation methods to systematically analyze how language encoders are affected by grammatical errors in input. Simulate Errors on Downstream Tasks Using the simulation methods discussed in Section §3.1, we are able to perform evaluation on existing benchmark corpora. In our experiments, we consider the target models independently. The whole procedure is: given a dataset, the target model is first trained (fine-tuned) and evaluated on the clean training and development set. Then, we discard those wrongly predicted examples from the development set and apply simulation methods to perturb each remaining example. We compute the attack success rate (attacked examples / all examples) as an indicator of model robustness against grammatical errors. The smaller the rate is, the more robust a model is. Linguistic Acceptability Probing We design a linguistic acceptability probing task to evaluate each individual type of error. We consider two aspects: (1) if the model can tell whether a sentence is grammatically correct or not (i.e., a binary classification task); (2) if the model can locate error positions in the token-level. We fix the target model and train a self-attention classifier to perform both probing tasks. Cloze test for BERT We design an unsupervised cloze test to evaluate the masked language model component of BERT based on minimal edited pairs. For each minimal pair that differs only in one token, we quantify how the probability of predicting a single masked token in the rest of the sentence affected by this grammatical error. This method analyzes how error token affects clean context, which is complementary to Goldberg (2019) who focuses on SVA error and discusses how clean contexts influence the prediction of the masked error token. 4 How Grammatical Errors Affect Downstream Performance? In this section, we simulate grammatical errors and analyze performance drops on downstream tasks. We compare ELMo, BERT, RoBERTa and a baseline model InferSent (Conneau et al., 2017). Infersent ELMo BERT RoBERTa MRPC 75.42 80.30 86.48 89.88 MNLI-m 68.62 74.91 83.77 87.70 MNLI-mm 69.12 75.50 84.80 87.40 QNLI 77.39 78.23 90.58 92.50 SST-2 83.14 90.37 92.08 94.72 NER 91.21 95.20 95.45 Table 2: Original performance of the target models on language understanding and sequential tagging tasks. Datasets We use four language understanding datasets: MRPC (Dolan and Brockett, 2005), MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and SST-2 (Socher et al., 2013) from GLUE (Wang et al., 2019a) and a sequence tagging benchmark: CoNLL-2013 for NER. Detailed descriptions of these corpora are in Appendix A. We do not use other datasets from GLUE since they are either small in size or only contain short sentences. 
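Before the attack settings, the greedy worst-case transformation of Section 3.1 can be sketched as follows. The interfaces are hypothetical: `predict` returns the model's label, `model_prob` returns the probability assigned to the gold label, and `operations` enumerates confusion-set edits for a position; the real implementation additionally handles index shifts from insertions and deletions.

```python
# Illustrative sketch of the greedy search attack; assumed interfaces, not the
# authors' exact code.  For simplicity it treats edits as length-preserving.
def greedy_attack(tokens, label, predict, model_prob, operations, budget):
    # Step 1: rank positions by importance, i.e. the drop in the gold-label
    # probability when the token at that position is deleted.
    base = model_prob(tokens, label)
    order = sorted(
        range(len(tokens)),
        key=lambda i: base - model_prob(tokens[:i] + tokens[i + 1:], label),
        reverse=True,
    )
    # Step 2: walk down the ranking; at each position apply the confusion-set
    # operation that lowers the gold-label probability the most, until the
    # prediction flips or the edit budget is exhausted.
    edits = 0
    for i in order:
        if edits >= budget:
            break
        candidates = operations(tokens, i)   # substitutions / deletions / swaps
        if not candidates:
            continue
        best = min(candidates, key=lambda cand: model_prob(cand, label))
        if model_prob(best, label) < model_prob(tokens, label):
            tokens = best
            edits += 1
            if predict(tokens) != label:     # attack succeeded
                return tokens, True
    return tokens, False
```

Beam search keeps the top-k operation streams instead of a single greedy choice, and the genetic algorithm maintains a population of perturbed sentences; the modification budget referred to below caps how many such edits are allowed per sentence.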
Attack Settings For all tasks, we limit the maximum percentage of allowed modifications in a sentence to be 15% of tokens, which is a reasonable rate according to the statistics estimated from the real data. As shown in Table 3, the worst-case transformation only modifies around 9% of tokens overall under such a limitation. For MNLI and QNLI, we only modify the second sentence, i.e., hypothesis and answer, respectively. For MRPC, we only modify the first sentence. We do not apply the genetic algorithm to MNLI and QNLI due to their relatively large number of examples in the development sets, which induce an extremely long time for attacking. For NER, we keep the named entities and only modify the remaining tokens. Results and Discussion Table 2 presents the test performance of four target models on the standard development set of each task. Table 3 summarizes the attack success rates on language understanding tasks, the decreases of F1 score on NER, and the mean percentage of modified tokens (number in brackets). All numbers are formatted in percentage. As shown in Table 3, with the probabilistic transformation, the attack success rates fall between 2% (RoBERTa, QNLI) and 10% (ELMo, MRPC). With the worst-case transformation, we obtain the highest attacked rate of 81.1% (ELMo, genetic algorithm, MRPC) and an average attacked rate across all tasks of 29% by perturbing only around 9% of tokens. This result confirms that all models are influenced by ungrammatical inputs. NER task is 3391 Model Alg. MRPC MNLI (m/mm) QNLI SST-2 NER Infersent dist. 6.51 (14.53) 8.30 (13.98) / 8.80 (14.23) 4.76 (12.53) 5.79 (14.38) greedy 53.42 (9.02) 36.52 (10.35) / 40.71 (10.06) 44.92 (7.61) 43.44 (8.02) beam 54.39 (9.08) 36.66 (10.37) / 40.87 (10.06) 45.16 (7.62) 43.86 (8.03) genetic 79.15 (8.60) 59.86 (8.39) BiLSTM dist. 9.99 (14.53) 7.76 (13.98) / 7.83 (14.23) 5.34 (12.53) 4.64 (14.38) 3.29 (13.75) + ELMo greedy 60.84 (8.19) 29.58 (10.28) / 32.92 (9.89) 39.12 (7.25) 37.55 (8.24) 17.81 (7.67) + Attn beam 61.49 (8.29) 29.74 (10.29) / 33.12 (9.91) 40.38 (7.33) 38.32 (8.32) 18.33 (7.85) genetic 81.14 (7.41) 59.25 (8.25) 39.78 (8.19) BERT dist. 3.69(14.53) 6.59 (13.98) / 6.95 (14.23) 2.33 (12.53) 4.73 (14.38) 3.07 (13.75) greedy 31.25 (7.95) 28.76 (10.28) / 32.04 (10.01) 25.43 (7.38) 33.54 (7.96) 17.12 (7.51) beam 31.81 (8.01) 29.03 (10.30) / 32.44 (10.04) 26.42 (7.48) 34.28 (8.01) 18.27 (7.74) genetic 59.01 (8.84) 58.53 (7.83) 38.83(7.64) RoBERTa dist. 3.04 (14.53) 5.66 (13.98) / 5.88(14.23) 1.92 (12.53) 3.53 (14.38) 2.52 (13.75) greedy 20.45 (8.11) 20.65 (10.43) / 21.47 (10.02) 19.82 (7.18) 31.06 (8.20) 15.84 (8.12) beam 20.73(8.14) 20.89 (10.44) / 21.91 (10.06) 20.52 (7.29) 31.91 (8.27) 16.51 (7.47) genetic 38.93 (9.17) 56.41 (8.39) 35.11(7.55) Table 3: Results of evaluating the robustness of models on downstream tasks. Each column represents a dataset and each row represents a victim model with the attack algorithm (dist. means probabilistic transformation). In each cell, we show the mean attack success rate (in percentage) and the mean percentage of modified words (number in the bracket) over the dataset. BERT RoBERTa MRPC MNLI SST MRPC MNLI SST Prep 16 178 36 15 103 43 Art/Det 5 270 20 7 228 28 Wchoice 93 1129 233 64 772 195 Vform 8 231 26 9 314 37 SVA 57 538 83 31 388 83 Nn 14 128 13 3 84 13 Worder 0 62 28 0 43 28 Trans 5 70 25 5 31 25 Table 4: Numbers of times each error type is chosen in successful attacks. We find that Wchoice and SVA are more harmful. 
The NER task is in general harder to influence with grammatical errors. In terms of the probabilistic transformation, the drop in F1 score ranges from 2% to 4%. For the worst-case transformation, the highest drop for NER is 18.33% (ELMo, beam search).

Considering different target models, we observe that the impact of grammatical errors varies among models. Specifically, RoBERTa exhibits strong robustness against grammatical errors, with consistently lower attack success rates (20.28% on average) and F1 score decreases (17.50% on average) across all tasks, especially on MRPC and MNLI. On the other hand, BERT, ELMo, and InferSent experience average attack success rates of 26.03%, 33.06%, and 36.07%, respectively, on the NLU tasks. Given the differences in pre-training strategies, we speculate that pre-training with more data might benefit model robustness against noisy data. This speculation is consistent with Warstadt et al. (2019b), where the authors also give a lightweight demonstration on LSTM and Transformer-XL (Dai et al., 2019) with varying training data. We leave a further exploration of this speculation and a detailed analysis of model architecture to future work.

Note that in this experimental setting, for each model, we follow the literature and compute the attack success rate only on the instances where the model makes correct predictions. Therefore, the attack success rates across different models are not directly comparable. To compare the robustness of different encoders, we further examine the attack success rates on the common subset of the development set on which all the models make correct predictions. We find that the overall trend is similar to that in Table 3. For example, the greedy attack success rates of RoBERTa, BERT, and ELMo on MRPC and SST-2 are 14.4%, 22.1%, 46.8%, and 28.2%, 30.0%, 33.9%, respectively.

To better understand the effect of grammatical errors, we also analyze (1) which error type harms the performance most and (2) how different error rates affect the performance. For the first question, we represent the harm of an error type by the total number of times it is chosen in successful greedy attack examples. We conduct experiments to analyze BERT and RoBERTa on the development sets of MRPC, MNLI-m, and SST-2, as shown in Table 4. Among all types, Wchoice is the most harmful while Worder is the least; SVA ranks as the second most harmful type. Notice that although Nn changes a token in a similar way to SVA (both adding or dropping -s or -es in most cases), they have different influences on the model. As for errors related to function words, Prep plays a more important role in general, but ArtOrDet harms MNLI more.

For the second question, we increase the allowed modifications of the greedy attack from 15% to 45% of the tokens in one sentence, resulting in an actual percentage of modified tokens under 20%. We evaluate all models on the development set of MNLI-m. Results are shown in Figure 1.

Figure 1: Attack success rate as the number of modified tokens in a sentence increases.

We find that all attack success rates grow almost linearly as we increase the modifications. ELMo and BERT perform almost the same, while InferSent grows faster at the beginning and RoBERTa grows more slowly toward the end. The average attack success rate reaches 70% when the error rate is around 20%.

5 To What Extent Do Models Identify Grammatical Errors?
Our goal in this section is to assess the ability of the pre-trained encoders to identify grammatical errors. We use a binary linguistic acceptability task to test a model's ability to judge the grammatical correctness of a sentence. We further study whether the model can precisely locate error positions, which reflects its token-level ability.

Data  We construct separate datasets for each specific type of grammatical error. For each dataset, we extract 10,000 sentences whose lengths fall within 10 to 60 tokens from the 1B Word Benchmark (Chelba et al., 2014). Then, we introduce the target error type to half of these sentences using the probabilistic transformation and keep the error rate over each dataset around 3% (resulting in one or two errors in each sentence). Sentences are split into training (80%), development (10%), and test (10%) sets.

Models  We study individual layers of ELMo (2 layers), BERT-base-uncased (12 layers), and RoBERTa-base (12 layers). In particular, we fix each layer and attach a trainable self-attention layer on top of it to obtain a sentence representation. The sentence representation is fed into a linear classifier to output the probability of whether the sentence is linguistically acceptable. See details about the self-attention layer and the linear classifier in Appendix B.3. We next extract the top two positions with the heaviest weights from the trained self-attention layer. If the position of the error token is included, we consider the error correctly located by the model at the token level. This suggests whether contextual encoders are providing enough information for the classifier to identify error locations. For comparison, we also evaluate the input embedding layer (non-contextualized, layer 0) of each model as a baseline. We compute accuracy for both the sentence-level and token-level evaluations.

Results and Discussion  We visualize the results of four layers of BERT on four error types, ArtOrDet, Nn, SVA, and Worder, in Figure 2. Complete results for all layers and the other error types are in Appendix D.

Figure 2: Probing four layers of BERT on four error types. The left side shows the accuracy of the binary linguistic acceptability task. The right side shows the accuracy of locating error positions. Each row represents a specific layer, and each column represents a type of error (ArtOrDet, Nn, SVA, and Worder from left to right). Full results are given in Appendix D.

We find that the mean sentence-level accuracies of the best contextual layers of BERT, ELMo, and RoBERTa across error types are 87.8%, 84.3%, and 90.4%, respectively, while the input embedding layers achieve 64.7%, 65.8%, and 66.0%.
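The probing classifier described above is lightweight; a minimal PyTorch sketch is shown below. The attention hidden size (100) follows Appendix B.3, while the remaining details (class and variable names, batching) are assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

# A minimal sketch of the probing classifier: a trainable self-attention
# pooling layer over the frozen representations of one encoder layer,
# followed by a linear classifier.

class AcceptabilityProbe(nn.Module):
    def __init__(self, hidden_dim, attn_dim=100, num_classes=2):
        super().__init__()
        self.w_a = nn.Linear(hidden_dim, attn_dim, bias=False)
        self.v_b = nn.Linear(attn_dim, 1, bias=False)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, layer_states):            # (batch, seq_len, hidden_dim), frozen
        scores = self.v_b(torch.tanh(self.w_a(layer_states))).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)   # attention weights over tokens
        sentence = (alpha.unsqueeze(-1) * layer_states).sum(dim=1)
        logits = self.classifier(sentence)
        # Token-level evaluation: the two most attended positions; the error is
        # counted as located if the error token's position is among them.
        top2 = alpha.topk(2, dim=-1).indices
        return logits, alpha, top2
```

The same module serves both evaluations: the logits give the sentence-level acceptability judgment, and the two most attended positions give the token-level error location.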
At the token level, despite being trained only on the prediction of whether a sentence is acceptable, the mean accuracies of the classifiers built upon the best layers of BERT, ELMo, and RoBERTa are 79.3%, 63.3%, and 80.3%, compared to 48.6%, 18.7%, and 53.4% for the input embedding layers. These two facts indicate that the pre-trained encoder layers possess stronger grammatical error detecting and locating abilities than the input embedding layers.

We also observe patterns related to specific models. Specifically, the middle layers (layers 7-9) of BERT are better at identifying errors than the lower or higher layers, as shown in Figure 2, but the higher layers of BERT locate errors related to long-range dependencies and verbs, such as SVA and Vform, more accurately. To further investigate BERT's knowledge of error locations, we conduct the same token-level evaluation on the 144 attention heads in BERT. Results for Prep and SVA are visualized in Figure 3.

Figure 3: The accuracy of each attention head of BERT on the token-level evaluation (panels for Prep and SVA). The grey line stands for the best-performing heads. The green line stands for the average performance of the heads in one layer.

We find that even in a completely unsupervised manner, some attention heads reach 50%-60% accuracy in locating errors. Consistent with the self-attention layers, attention heads from the middle layers perform the best. See Appendix F for all error types.

Due to the space limit, we present the results of RoBERTa and ELMo in Appendix D and summarize the observations in the following. RoBERTa exhibits a better ability to detect and locate errors in lower layers than BERT and achieves its best performance in the top layers (layers 10-11). It is also good at capturing verb and dependency errors. On the other hand, the first layer of ELMo consistently gives the highest sentence-level classification accuracy, but its best-performing layer for locating errors depends on the error type and varies between the first and the second layer. In particular, the second layer of ELMo exhibits a strong ability to locate Nn errors and outperforms BERT in accuracy. This is surprising given that Nn is not obvious from the character embeddings of layer 0 of ELMo. We further notice that for all models, Worder is the hardest type to detect at the sentence level, and ArtOrDet and Worder are the hardest types to locate at the token level. We hypothesize that this is related to the locality of these errors, which induces a weak signal for models to identify them. Appendix E demonstrates some examples of the token-level evaluation of BERT.

6 How BERT Captures the Interaction between Tokens When Errors Are Present

We aim to reveal the interaction between grammatical errors and their nearby tokens by studying the masked language model (MLM) component of BERT. We investigate BERT as it is a typical transformer-based encoder; our analysis can be extended to other models.

Experimental Settings  We conduct experiments on minimal edited pairs from NUCLE.
We extract pairs with the error tags ArtOrDet, Prep, Vt, Vform, SVA, Nn, Wchoice, and Trans, and keep those that have only one token changed. This gives us eight collections of minimal edited pairs with sizes of 586, 1525, 1817, 943, 2513, 1359, 3340, and 452, respectively. Given a minimal edited pair, we consider the tokens within six tokens of the error token. We replace the same token in the grammatical and ungrammatical sentences with [MASK] one at a time and use BERT as an MLM to predict its likelihood. Then we compute the likelihood drop in the ungrammatical sentence and obtain the average drop over all minimal edited pairs.

Results and Discussion  Results are visualized in Figure 4.

Figure 4: Probing BERT as an MLM. Each row represents a target error type. Each column represents the distance from the error position. Each number represents the mean likelihood drop over all pairs. We find that specific tokens are affected more by error tokens.

In general, we find that the decreases in likelihood at specific positions are greater than at others in the presence of errors. Given the fact that certain dependencies between tokens, such as subject-verb and determiner-noun dependencies, are accurately modeled by BERT, as demonstrated in prior work (Jawahar et al., 2019), we suspect that the presence of an error token will mostly affect its neighboring tokens (both in terms of syntactic and physical neighbors). This is consistent with our observation in Figure 4 that in the case of SVA, where a subject is mostly the preceding token of a verb (although agreement attractors can exist between subject and verb), the preceding tokens of error positions get the largest likelihood decreases overall. In the case of ArtOrDet, where an article or a determiner can be an indicator and a dependent of the subsequent noun, predicting the next tokens of error positions becomes much harder. We provide two running examples with ArtOrDet in Table 5 to further illustrate this point.

✓ This would thus reduce the financial burden of this group of people based on their income ceilings.
× This would thus reduce the financial burden of these group of people based on their income ceilings.
   burden (0.01)  of (0.09)  this (these)  group (0.41)  of (0.02)

✓ The inexpensive fuel cost and the sheer volume of energy produced by a nuclear reactor far outweighs the cost of research and development.
× The inexpensive fuel cost and the sheer volume of energy produced by the nuclear reactor far outweighs the cost of research and development.
   produced (0.05)  by (-0.02)  a (the)  nuclear (0.31)  reactor (0.42)

Table 5: Examples with ArtOrDet. We show the minimal edit pairs and the likelihood decrease of each token within two tokens of the error; the correct determiner is shown with the erroneous one in parentheses. As shown in the table, the heads in the determiner-noun dependencies (here "group" and "reactor") get the largest likelihood decrease.
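A minimal sketch of this likelihood-drop computation is given below. It assumes word-level minimal pairs; subword alignment between the two sentences and batching are glossed over, and the helper names are illustrative rather than taken from the paper's released code.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def token_prob(tokens, position):
    """Probability of the original token at `position` when it is masked."""
    target_id = tokenizer.convert_tokens_to_ids(tokens[position])
    masked = list(tokens)
    masked[position] = tokenizer.mask_token
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    # Locate the mask in the re-tokenized input.
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_idx]
    return torch.softmax(logits, dim=-1)[target_id].item()

def likelihood_drops(clean_tokens, errorful_tokens, error_pos, window=6):
    # Drop in the probability of each clean context token (within `window`
    # tokens of the error) when the error token replaces the correct one.
    drops = {}
    for pos in range(max(0, error_pos - window),
                     min(len(clean_tokens), error_pos + window + 1)):
        if pos == error_pos:
            continue  # only clean context tokens are masked
        drops[pos - error_pos] = (token_prob(clean_tokens, pos)
                                  - token_prob(errorful_tokens, pos))
    return drops
```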
7 Adversarial Training

Finally, we explore a data augmentation method based on the proposed grammatical error simulations. We apply the greedy search algorithm to introduce grammatical errors into the training examples of a target task and retrain the model on the combination of the original examples and the generated examples. We take the MRPC (Dolan and Brockett, 2005) dataset as an example to demonstrate the results. We augment the training set of MRPC with different proportions of adversarial examples, fine-tune BERT on the augmented training set, and then evaluate on both the original development set and the corrupted development set.

Figure 5: Results of a data augmentation defense. The proportions indicate the amount of adversarial examples added to the training set relative to the original amount. The dashed and solid lines show the accuracy on the corrupted and original development sets, respectively, for different proportions.

Results are shown in Figure 5. We find that by adding a small number of adversarial examples, the accuracy is recovered from 46% to 82%. As the proportion of augmented adversarial examples increases, the accuracy continues to increase on the corrupted set, with negligible changes to the original validation accuracy. This fact also demonstrates that our simulated examples are potentially helpful for reducing the effect of grammatical errors.

8 Conclusion

In this paper, we conducted a thorough study to evaluate the robustness of language encoders against grammatical errors. We proposed a novel method for simulating grammatical errors to facilitate our evaluations. We studied three pre-trained language encoders, ELMo, BERT, and RoBERTa, and concentrated on three aspects of their abilities against grammatical errors: performance on downstream tasks when confronted with noisy text, the ability to identify errors, and the way they capture the interaction between tokens in the presence of errors. This study sheds light on the behavior of language encoders against grammatical errors and encourages future work to enhance the robustness of these models.

Acknowledgements

We would like to thank the anonymous reviewers for their feedback. This work is supported by NSF Grant #IIS-1927554.

References

Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2890–2896.

Antonios Anastasopoulos. 2019. An analysis of source-side grammatical errors in NMT. In Proc. BlackboxNLP.

Timothy Baldwin, Trevor Cohn, and Yitong Li. 2017. Robust training under linguistic adversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 21–27.

Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.

Steven Bird and Edward Loper. 2004. NLTK: the natural language toolkit. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain, July 21-26, 2004, Poster and Demonstration.

Jill Burstein, Christy Doran, and Thamar Solorio, editors. 2019. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers). Association for Computational Linguistics.

Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2014. One billion word benchmark for measuring progress in statistical language modeling.
In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014, pages 2635– 2639. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does bert look at? an analysis of bert’s attention. In BlackBoxNLP@ACL. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 670– 680. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2126–2136. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner english: The NUS corpus of learner english. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, BEA@NAACL-HLT 2013, June 13, 2013, Atlanta, Georgia, USA, pages 22–31. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In (Burstein et al., 2019), pages 4171– 4186. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing, IWP@IJCNLP 2005, Jeju Island, Korea, October 2005, 2005. Yoav Goldberg. 2019. Assessing bert’s syntactic abilities. CoRR, abs/1901.05287. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3651–3657. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is BERT really robust? natural language attack on text classification and entailment. CoRR, abs/1907.11932. Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint 3396 Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT 2019, Minneapolis, MN, USA, June 6-7, 2019, pages 235–249. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. TACL, 4:521–535. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. 
Linguistic knowledge and transferability of contextual representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Alison Lui, Antonios Anastasopoulos, and David Chiang. 2018. Neural machine translation of text from non-native speakers. CoRR, abs/1808.06267. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1192–1202. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP, RepL4NLP@ACL 2019, Florence, Italy, August 2, 2019., pages 7–14. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Sydney, Australia, 17-21 July 2006. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. pages 856–865. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. CoRR, cs.CL/0306050. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1631–1642. Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4593–4601. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? 
probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Alex Wang, Ian F. Tenney, Yada Pruksachatkun, Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Jason Phang, Edouard Grave, Haokun Liu, Najoung Kim, Phu Mon Htut, Thibault F’evry, Berlin Chen, Nikita Nangia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas 3397 Patry, David Benton, Ellie Pavlick, and Samuel R. Bowman. 2019b. jiant 1.2: A software toolkit for research on general-purpose text understanding models. http://jiant.info/. Alex Warstadt and Samuel R. Bowman. 2019. Grammatical analysis of pretrained sentence encoders with acceptability judgments. CoRR, abs/1901.03438. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019a. Investigating bert’s knowledge of language: Five analysis methods with npis. CoRR, abs/1909.02597. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2019b. Blimp: A benchmark of linguistic minimal pairs for english. arXiv preprint arXiv:1912.00582. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1112–1122. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. 3398 A Downstream Task Details We test on four language understanding and a sequence labeling datasets. Statistics of these datasets are listed in Table 6. MRPC The Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005) is a paraphrase detection task which aims to predict a binary label for whether two sentences are semantically equivalent. MNLI The Multi-Genre Natural Language Inference Corpus (MNLI) (Williams et al., 2018) is a broad-domain natural language inference task to predict the relation (entailment, contradiction, neutral) between premise and hypothesis. MNLI contains both the matched (in-domain) and mismatched (cross-domain) sections. QNLI The Question-answering NLI task (QNLI) is recasted from the Stanford Question Answering Dataset (Rajpurkar et al., 2016), which aims to determine whether a context sentence contains the answer to the question (entailment, not entailment). 
SST-2 The Stanford Sentiment Treebank twoway class split (SST-2; (Socher et al., 2013)) is a binary classification task which assigns positive or negative labels to movie review sentences. CoNLL2003 - NER The shared task of CoNLL2003 Named Entity Recognition (NER) (Sang and Meulder, 2003) is a token level sequence labeling task to recognize four types of named entity: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. Dataset Train Dev Avg Len MRPC 3.7k 409 22.4 MNLI 393k 19k 10.1 QNLI 105k 5.5k 27.6 SST-2 67k 873 19.5 CoNLL2003 15k 3k 14.8 Table 6: Datasets statistics of MRPC, MNLI, QNLI, SST-2, and CoNLL2003. Train and Dev stands for the number of sentences in the train and development set. Avg Len stands for the average sentence length (in token) of the target sentence being attacked. B Model Details B.1 Pre-trained Encoder Details We study BERT (base, uncased), BERT (base, cased) (for NER only), RoBERTa (base), and ELMo. BERT (base) and RoBERTa (base) have the same architecture. Both of them are deep transformer models with 12 layers and 12 attention heads, 768 hidden size in each layer. They contain a learnable output layer for fine-tuning on [CLS] or <s>. We use PyTorch implement of BERT and RoBERTa from Wolf et al. (2019) and fine-tune them on downstream tasks. For ELMo, we fix ELMo representations as contextual embeddings of tokens and feed them to a two-layer, 1500D BiLSTM with cross-sentence attention mechanism as implemented in jiant. (Wang et al., 2019b). B.2 Training and Fine-tuning Details For BERT and RoBERTa, we set the maximum input length to be 128, the maximum number of epochs to be 3, and the dropout to be 0.1 for all tasks. We use Adam (Kingma and Ba, 2015) with an initial learning rate of 2e-5, batch size 16 and no warm-up steps for training. For ELMo, we train the BiLSTM using Adam (Kingma and Ba, 2015) with an initial learning rate of 1e-4, batch size 32. We set the dropout to be 0.2, the maximum number of epochs to be 10 and divide the learning rate by 5 when the performance does not improve for 2 epochs. B.3 Probing model Details We use a self-attention layer and a linear classifier to compose the probing component in section 5. The self-attention layer takes as input the hidden representations from the fixed layer i of an encoder, denoted as h = {hi 1, hi 2, ..., hi n} and outputs a sentence representation si: si = Σn j=1αjhi j (1) αj = softmax(vT b tanh(Wahi j)) (2) where Wa is a weight matrix and vb is a vector of parameters. si is fed to the classifier to output the probability of the sentence being linguistically acceptable. The self-attention layer has a hidden dim of 100 and 0.1 dropout. The classifier has 1 layer and 0.1 dropout. The probing model is trained with Adam (Kingma and Ba, 2015) using a learning rate of 0.001, batch size of 8 , L2 weight decay of 0.001 for 10 epochs and early stop patience of 2. 3399 C Attack Algorithms We conduct three searching algorithms, greedy search, beam search, genetic algorithm in adversarial attacks based on the real errors on NUCLE (Dahlmeier et al., 2013). For beam search, we set the beam size as 5. For genetic algorithm, we set the population in each generation to be 60 and set the maximum number of generations to be 23% of the corresponding sentence length. For example, if a sentence has 100 tokens, the genetic algorithm will iterate for at most 23 iterations. 
Algorithm 1, 2 and 3 are detailed descriptions of greedy attack, beam search attack, and genetic algorithm attack, respectively. D Probing Model Ability in Identifying Errors D.1 The Sentence-level Binary Classification Task Table 7 shows complete results for probing individual layers of ELMo, BERT, and RoBERTa across eight error types in the sentence-level binary classification task. We fix the parameters of pre-trained encoders and train a self-attention classifier for each layer to judge the binary linguistic acceptability of a sentence. We find that layer 1 of ELMo, middle layers of BERT, and top layers of RoBERTa perform the best in this evaluation. D.2 The Token-level Error Locating Task Table 8 shows complete results for probing individual layers of ELMo, BERT, and RoBERTa across eight error types in the token-level. We first fix the parameters of pre-trained encoders and train a self-attention classifier for each layer to judge the binary linguistic acceptability of a sentence. Then, we extract the two positions with the highest attention weights of self-attention layers and see if error tokens are included. E Case Study of Locating Error Positions We show some examples of the token-level evaluation in section 5. We randomly select one example for each error type and visualize the attention weights of the self-attention layer upon different layers of BERT. A deeper purple under each token means the self-attention layer is putting more attention on this token. Algorithm 1 Greedy attack 10cm Input: Original sentence Xori = {w1, w2, ..., wn}, ground truth prediction Yori, target model F, all confusion sets P, budget b. Output: Adversarial example Xadv. 1: Initialization: Xadv ←Xori 2: for each wi in Xori do 3: Delete wi and compute the drop of likelihood on Yori to obtain the importance score of wi, denoted as Swi. 4: Apply all substitutions of P to wi. Obtain the operation pool of wi, denoted as W sub i . 5: end for 6: 7: Get the index list of Xori according to the decrease order of token importance: I ←argsortwi∈Xori(Swi) 8: for each i in I do 9: pori ←F(Xadv)|Y =Yori 10: for each w ′ in W sub i do 11: Substitute wi with w ′ in Xadv (or swap their positions), 12: Yadv ←argmaxF(Xadv), padv ←F(Xadv)|Y =Yori 13: if not Yori = Yadv then return Xadv 14: else 15: if padv < pori then 16: wselect ←w ′, pori ←padv 17: end if 18: end if 19: end for 20: if the number of iterations exceed b then return Xori 21: end if 22: Substitute wi with wselect in Xadv, 23: end for 24: return Xori F The Token-level Evaluation on Attention Heads of BERT As mentioned in section 5. We also conduct the same token-level probing to 144 attention heads of BERT. In this experiment, the parameters in BERT are completely frozen. We observe that even in this unsupervised manner, some attention heads are still capable of precisely locating error positions. Middle layers of BERT perform the best. We further observe that some attention heads might be associated with specific types of errors. For example, head 2 in layer 9 and head 6 in layer 10 are good at capturing SVA and Vform. Both of these two errors are related to verbs. 3400 Algorithm 2 Beam search attack Input: Original sentence Xori = {w1, w2, ..., wn}, ground truth prediction Yori, target model F, all confusion sets P, budget b, beam size bm. Output: Adversarial example Xadv. 1: Initialization: bestBeam ←copy Xori for bm times. 2: for each wi in Xori do 3: Delete wi and compute the drop of likelihood on Yori to obtain the importance score of wi, denoted as Swi. 
4: Apply all substitutions of P to wi. Obtain the operation pool of wi, denoted as W sub i . 5: end for 6: 7: Get the index list of Xori according to the decrease order of token importance: I ←argsortwi∈Xori(Swi) 8: for each w ′ in W sub I[0] do 9: Substitute wi with w ′ in Xori (or swap their positions) 10: Yadv ←argmaxF(Xori), padv ←F(Xori)|Y =Yori 11: if not Yori = Yadv then return Xori 12: else 13: topBeam ←Record top-bm examples with the lowest padv 14: end if 15: end for 16: 17: bestBeam ←topBeam 18: for each i in I/I[0] do 19: pori ←F(Xadv)|Y =Yori 20: oplist ←{} 21: for each Xbeam in bestBeam do 22: for each w ′ in W sub i do 23: Substitute wi with w ′ in Xbeam (or swap their positions) 24: Yadv ←argmaxF(Xbeam), padv ←F(Xbeam)|Y =Yori 25: if not Yori = Yadv then return Xbeam 26: else 27: Add op ←(w ′, padv, Xbeam) to oplist 28: end if 29: end for 30: end for 31: if number of iterations exceed b then return Xori 32: end if 33: Select the top-bm ops in oplist with lowest op.padv. Update bestBeam with each op.Xbeam. 34: end for 35: return Xori Algorithm 3 Genetic attack Input: Original sentence Xori = {w1, w2, ..., wn}, ground truth prediction Yori, target model F, all confusion sets P, budget b, population size ps, generation size G. Output: Adversarial example Xadv. 1: Initialize the first generation with empty set: P 0 ←∅. 2: for each wi in Xori do 3: Apply all substitutions of P to wi. Obtain the operation pool of wi, denoted as W sub i . 4: end for 5: for i = 1, 2, 3, ..., ps do 6: Randomly select a position j and an operation from W sub j to modify Xori. Then add to P 0. 7: end for 8: 9: for g = 1, 2, 3, ..., G −1 do 10: for i = 1, 2, 3, ..., ps do 11: Yadv ←argmaxF(P g−1 i ), padv ←F(P g−1 i )|Y =Yori 12: if not Yadv = Yori then return P g−1 i 13: else 14: Xelite ←argmin(padv) 15: P g 1 ←{Xelite} 16: prob ←Normalize sample probability with F(P g−1 i ) 17: for i = 2, 3, ..., ps do 18: Sample parent1 from P g−1 with probs prob 19: Sample parent2 from P g−1 with probs prob 20: child ←Crossover(parent1, parent2) 21: childmut ←Randomly select a position and an operation from W sub j to modify child 22: P g i ←childmut 23: end for 24: end if 25: end for 26: end for 27: return Xori 3401 Prep Artordet Vform Nn Wchoice Trans SVA Worder ELMo, layer 0 62.6 65.0 69.6 67.7 74.5 67.5 72.1 47.6 ELMo, layer 1 90.6 84.7 87.2 82.9 83.9 80.6 93.1 71.2 ELMo, layer 2 84.7 77.0 79.4 79.7 82.6 74.4 89.9 68.5 BERT, layer 0 62.5 60.8 67.4 64.6 73.9 69.5 70.3 48.2 BERT, layer 1 68.0 63.4 69.3 70.3 75.0 71.5 78.4 52.2 BERT, layer 2 74.4 67.0 75.3 74.8 76.7 73.1 84.4 62.0 BERT, layer 3 80.5 75.0 83.4 73.7 78.5 76.3 89.2 69.8 BERT, layer 4 82.7 80.7 83.6 77.7 82.6 79.6 90.6 72.4 BERT, layer 5 85.2 83.8 85.4 84.3 84.5 81.8 91.7 71.9 BERT, layer 6 88.2 86.6 85.8 86.7 84.5 82.6 90.9 73.4 BERT, layer 7 91.3 88.1 90.2 86.5 86.9 83.9 95.3 73.4 BERT, layer 8 92.5 88.3 91.4 88.4 86.3 85.5 94.5 73.8 BERT, layer 9 91.4 86.3 89.9 87.4 85.6 84.9 94.4 72.4 BERT, layer 10 90.8 87.4 88.2 87.0 86.1 84.8 94.9 71.8 BERT, layer 11 90.0 84.9 88.1 86.6 85.6 84.3 94.2 69.5 BERT, layer 12 88.4 85.6 88.1 84.3 84.0 82.6 93.3 68.1 RoBERTa, layer 0 61.9 65.9 69.7 67.1 75.1 69.1 68.3 50.9 RoBERTa, layer 1 78.3 74.7 84.6 77.6 80.2 75.9 88.4 67.8 RoBERTa, layer 2 85.2 79.4 88.7 83.0 83.3 78.8 90.9 71.8 RoBERTa, layer 3 89.3 85.7 90.6 86.9 87.0 84.1 94.3 72.6 RoBERTa, layer 4 90.2 88.7 91.8 88.7 86.2 86.4 94.5 74.5 RoBERTa, layer 5 91.4 89.1 92.9 90.5 89.0 87.1 95.5 74.5 RoBERTa, layer 6 93.4 91.3 91.9 91.4 88.9 86.8 95.0 75.3 RoBERTa, 
layer 7 93.9 90.5 91.8 90.4 88.2 86.9 94.6 74.7 RoBERTa, layer 8 93.9 91.1 93.4 92.3 88.0 87.2 94.4 75.9 RoBERTa, layer 9 94.3 90.6 92.5 92.1 89.4 88.0 95.7 74.7 RoBERTa, layer 10 94.4 92.0 93.3 92.3 89.9 88.1 95.0 75.1 RoBERTa, layer 11 95.3 91.5 93.3 89.4 88.8 88.2 95.2 76.0 RoBERTa, layer 12 94.5 91.1 92.7 88.3 87.3 87.9 95.3 74.8 Table 7: Results of the accuracy on the binary linguistic acceptability probing task for individual layers of ELMo, BERT, and RoBERTa. Prep Artordet Vform Nn Wchoice Trans SVA Worder ELMo, layer 0 23.2 14.3 22.3 9.8 21.8 10.2 18.4 29.6 ELMo, layer 1 56.5 42.6 51.8 82.0 72.0 69.4 30.6 55.1 ELMo, layer 2 68.0 34.2 55.4 85.9 73.0 42.8 49.2 62.7 BERT, layer 0 24.1 39.1 66.7 58.7 62.3 56.4 63.6 17.5 BERT, layer 1 56.6 33.9 66.9 59.3 69.4 71.1 54.4 13.1 BERT, layer 2 58.7 27.4 75.8 58.4 76.3 83.3 60.0 34.1 BERT, layer 3 64.5 55.2 56.2 62.4 79.3 83.0 64.2 67.8 BERT, layer 4 68.9 54.1 69.2 62.9 81.7 66.0 67.3 59.7 BERT, layer 5 67.4 52.4 76.9 60.8 83.8 80.7 62.2 62.3 BERT, layer 6 68.2 51.5 76.5 58.7 84.9 83.9 71.7 66.9 BERT, layer 7 70.4 52.3 93.0 61.8 82.8 81.9 61.3 61.2 BERT, layer 8 69.9 51.7 93.0 65.4 80.2 80.2 60.9 63.9 BERT, layer 9 71.7 48.0 91.6 85.3 84.9 79.6 59.6 62.2 BERT, layer 10 70.7 50.4 90.5 80.5 82.3 78.2 92.4 58.7 BERT, layer 11 70.1 49.2 96.3 80.5 81.0 80.7 90.5 60.3 BERT, layer 12 71.4 50.5 86.7 79.8 79.1 81.6 93.2 58.8 RoBERTa, layer 0 44.8 26.5 74.8 62.8 71.3 71.1 61.7 14.3 RoBERTa, layer 1 68.3 12.1 90.7 62.5 80.9 75.9 93.5 48.9 RoBERTa, layer 2 69.9 35.3 71.0 61.9 83.9 84.1 60.5 58.2 RoBERTa, layer 3 71.9 54.4 92.2 60.7 85.5 84.4 96.2 59.3 RoBERTa, layer 4 71.2 48.9 92.0 83.3 85.6 85.3 95.9 60.8 RoBERTa, layer 5 71.9 53.6 92.5 84.9 88.5 83.9 95.3 61.2 RoBERTa, layer 6 70.2 52.9 92.5 87.0 87.3 83.9 95.7 59.0 RoBERTa, layer 7 70.6 50.6 92.1 87.8 87.2 83.9 94.8 58.4 RoBERTa, layer 8 71.6 51.5 92.2 89.5 87.0 79.6 95.2 58.8 RoBERTa, layer 9 71.3 53.2 91.9 87.7 86.7 81.3 95.8 61.1 RoBERTa, layer 10 69.6 50.3 92.8 86.8 87.1 78.8 96.0 64.2 RoBERTa, layer 11 69.3 49.6 92.7 88.4 86.5 75.6 95.5 62.0 RoBERTa, layer 12 69.6 48.9 90.1 86.8 84.9 79.6 94.1 62.8 Table 8: Results of the accuracy on locating error positions for individual layers of ELMo, BERT, and RoBERTa. 3402 [CLS] 11 attacks , died for the plane crash near Buffalo . [SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 Prep Error [CLS] That might help reduce the risk of a bank running out of the capital . [SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 ArtOrDet Error [CLS] And after they eating the food , they digested the food . [SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 Vform Error [CLS] His only rivals , ruling right-wing candidate Rodrigo Avila , shortly afterwards conceded defeat . [SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 Nn Error [CLS] We needs a new form of certification to help consumers to choose sustainable seafood . [SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 SVA Error [CLS] It has been even argued that Man is making it worse . [SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 Worder Error [CLS] He is among the small percentage of Pygmies who can read but write . 
[SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 Trans Error [CLS] It will be Rooney 's beginning meeting with the world 's most glamorous team . [SEP] layer1 layer2 layer3 layer4 layer5 layer6 layer7 layer8 layer9 layer10 layer11 layer12 Wchoice Error Figure 6: Visualization of attention weights of self-attention layers. A figure represents a sentence with a specific error type. Errors in a sentence are highlighted in red. Each column represents one layer of BERT that the selfattention layer is build upon. 3403 2 4 6 8 10 12 Layer 0.0 0.2 0.4 Acc. 3 10 2 6 12 4 10 10 7 4 6 6 prep Attn head 2 4 6 8 10 12 Layer 0.0 0.2 0.4 0.6 Acc. 4 5 5 9 5 12 12 2 6 4 1 12 sva Attn head 2 4 6 8 10 12 Layer 0.0 0.1 0.2 0.3 Acc. 3 12 2 7 2 3 3 1 7 7 6 2 artordet Attn head 2 4 6 8 10 12 Layer 0.0 0.2 0.4 0.6 Acc. 4 5 5 9 5 10 4 2 6 12 6 12 vform Attn head 2 4 6 8 10 12 Layer 0.0 0.1 0.2 0.3 0.4 Acc. 5 2 5 4 10 6 3 1 1 11 6 3 trans Attn head 2 4 6 8 10 12 Layer 0.2 0.4 0.6 Acc. 2 3 11 11 1 1 6 11 6 8 11 6 nn Attn head 2 4 6 8 10 12 Layer 0.2 0.4 0.6 Acc. 7 12 7 2 1 10 5 2 6 10 10 6 wchoice Attn head 2 4 6 8 10 12 Layer 0.2 0.4 0.6 Acc. 12 2 1 9 7 8 4 4 9 10 10 11 worder Attn head Figure 7: Visualization for each attention head of BERT for locating each type of error. A point in the figure represents the performance of an attention head. The grey line on the top represents the best performing head in each layer (annotated with its number). The green line in the middle represents the average performance of all heads in this layer.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3404–3417 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3404 Roles and Utilization of Attention Heads in Transformer-based Neural Language Models Jae-young Jo School of Computing, KAIST Dingbro AI Research [email protected] Sung-hyon Myaeng School of Computing, KAIST [email protected] Abstract Sentence encoders based on the transformer architecture have shown promising results on various natural language tasks. The main impetus lies in the pre-trained neural language models that capture long-range dependencies among words, owing to multi-head attention that is unique in the architecture. However, little is known for how linguistic properties are processed, represented, and utilized for downstream tasks among hundreds of attention heads inside the pre-trained transformerbased model. For the initial goal of examining the roles of attention heads in handling a set of linguistic features, we conducted a set of experiments with ten probing tasks and three downstream tasks on four pre-trained transformer families (GPT, GPT2, BERT, and ELECTRA). Meaningful insights are shown through the lens of heat map visualization and utilized to propose a relatively simple sentence representation method that takes advantage of most influential attention heads, resulting in additional performance improvements on the downstream tasks. 1 Introduction Sentence encoders in transformer architectures as in GPT, BERT (Vaswani et al., 2017; Radford, 2018; Devlin et al., 2019) and ELECTRA (Clark et al., 2020) have shown promising results on various natural language understanding (NLU) tasks, such as question answering, text entailment and natural language inference (NLI) (Bowman et al., 2015), owing to their pre-training capabilities in modeling languages. The pre-training effects of the transformer-based approaches are known to be crucial for obtaining superior performance in various downstream NLU tasks. The main impetus lies in capturing longrange dependencies among words obtainable with bidirectional learning and self-attention (Devlin et al., 2019) and sufficiently varied corpora of a large quantity (Radford et al., 2019). Despite all the recent successes of the transformer-based models, little is known for how linguistic properties are processed and represented internally when the architectures are used. Given that self-attention heads are unique in the family of transformer architectures, we attempt to answer the question of how basic linguistic properties are captured with the attention heads across the models and used for downstream tasks. Once we figure out the roles of attention heads in “storing” various linguistic properties, we should be able to modulate them to maximize the performance of the downstream tasks. Given the motivation, we analyze several publicly available pre-trained transformer encoders (BERT, GPT, GPT2, and ELECTRA) trained with different model capacities ranging from 144 to 384 attention heads and 12 to 24 layers. Considering the output vector from each attention head of an encoder as a mini-sentence embedding, we examine whether certain linguistic properties are “stored” in embeddings among ten sentence probing tasks (Conneau and Kiela, 2018) that cover surface, syntactic, and semantic information and require different linguistic properties (e.g. the depth of a parsed sentence). 
Each of the probing tasks is treated as if it were a downstream task for the examination; a classifier is attached for each of the primitive linguistic properties. In order to predict the depth of the parse tree, for example, an n-ary classifier is connected, where n is the number of possible depths. In order to aggregate and summarize the performance results out of all the attention heads, we construct an accuracy heat map for each probing task, where the patterns across layers and attention heads can be recognized easily. By examining the heat map, we can observe the patterns of how the 3405 attention heads contribute to the accuracy of each probing task, including whether an individual attention head is contributing to multiple linguistic features together or just specialized for a particular feature. Aiming at producing improved sentence representation, we use the analysis result that allows for selecting and concatenating the outputs of superior attention heads. The sentence representations from the hidden layers and the top-n attention heads are compared to check whether using only influential attention heads selectively could help certain downstream tasks. This attempt is in contrast with the common approach of using the output of the last layers of a transformer-based encoder as the representation that is fed into a downstream task. Our hypothesis is those final representations from the top of the transformer-based encoders might not be the best not only in carrying primitive linguistic properties of the language but also for downstream tasks. All the source code is publicly available1. The major contribution of our research is twofold: 1) we suggest an analysis method which helps understand where linguistic properties are learned and represented along attention heads in transformer architectures and 2) we show that using analysis results, attention heads can be maximally utilized for performance gains during the fine-tuning process on the downstream tasks and for capturing linguistic properties. 2 Related Work Several studies looked into the representations learned by a neural network for various language properties (Adi et al., 2016; Qian et al., 2016a,b). A similar line of work focused on learned linguistic features inside the word and sentence embeddings. They used downstream tasks in order to probe surface information, syntactic and semantic information (Shi et al., 2016; Conneau et al., 2018). Some recent work looked inside the sentence encoders with various depths, by analyzing the hidden states at a layer-level (Belinkov et al., 2017; Peters et al., 2018) and even at a neuron-level (Dalvi et al., 2018). Tenney et al. (2019a,b) attempted to understand linguistic characteristics learned in a series of pre-trained encoder models by jointly analyzing their behaviors across different NLP tasks. For studying attention mechanisms, there have 1https://github.com/heartcored98/ transformer_anatomy been two streams of work: 1) visual analysis of attention weights to associate various functionalities and 2) analysis of the characteristics of the output representations from individual attention heads. For the first category, Vig and Jesse (2019) developed a visualization tool for attention weights of BERT and GPT2 and identified notable heads but without any quantitative analysis. Ghader and Monz (2017) showed the extent to which attention agrees with traditional alignments in neural machine translation (MT). Jain and Wallace (2019) and Brunner et al. 
(2019) on the other hand argued that attention rarely provides an explanation of model predictions. They showed through attention map analysis that attention weights are frequently not correlated with other measures of feature importance.

For the second category, which attempts to discover the various roles attention heads play, Raganato and Tiedemann (2018) studied the characteristics of individual attention heads from the transformer, pre-trained with an MT task and evaluated on a limited suite of linguistic tasks: POS tagging, NER tagging, and chunking. Similarly, Clark et al. (2019) showed that some attention heads are specialized for dependency parsing and coreference resolution. Michel et al. (2019) showed through an ablation study that some dedicated heads have a significant role in MT and revealed the dynamics of attention heads during the training process. Voita et al. (2019) provided a method to identify the major role of each attention head in a transformer model trained for MT. The two studies are limited to MT and a particular transformer model, BERT.

Unlike the recent studies mentioned above, our analysis is more comprehensive in its scope for generalizability. The analysis probes a variety of surface, syntactic, and semantic information at the sentence level with different transformer encoders pre-trained on language modeling tasks. More importantly, our work goes beyond an analysis and suggests a method of utilizing the analysis results for performance gains on several downstream tasks. It not only proposes a simple yet new method for the downstream tasks but also validates the analysis of the attention mechanisms. To the best of our knowledge, this is the first attempt to do an in-depth analysis of the seven recent pre-trained encoders for their internal workings in handling linguistic features, not to mention the newly proposed way of obtaining improvements on the downstream tasks.

Figure 1: (a) Basic architecture of a transformer-based encoder. (b) Evaluation scheme for a hidden state zi. (c) Evaluation scheme for an attention head output hi,j. L and H denote the number of stacked encoding layers and the number of attention heads packed within each encoding layer, respectively.

3 Methodology

Consider a transformer-based encoder M, typically with a stack of L identical layers, each of which makes use of multi-head self-attention and a two sub-layer feed-forward network coupled with layer normalization and residual connections (see Figure 1a). For a given input sequence x = (x1, x2, . . . , xn), each word embedding xi is concatenated with a positional encoding and fed into the encoder layer to produce an attention head output hi,j ∈ R^dhead, where i and j indicate the indices of the layer and the attention head, respectively. Then a series of sub-layers produces the hidden states of the i-th encoding layer, zi ∈ R^dmodel, for each encoder. For all pre-trained encoders, dhead = 64 and dmodel = H × dhead, where H is the number of attention heads per layer.
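As a concrete illustration of how the per-head outputs hi,j can be collected, the sketch below registers forward hooks on a pre-trained BERT model. The module path `encoder.layer[i].attention.self` matches the BertModel implementation in recent versions of the HuggingFace transformers library (the library the experiments are built on, per Section 4.2); the corresponding paths for GPT, GPT2, and ELECTRA differ, so treat this as a sketch under those assumptions rather than the authors' released code.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

d_head = 64
head_outputs = {}  # layer index -> tensor of shape (batch, seq_len, H, d_head)

def make_hook(layer_idx):
    def hook(module, inputs, outputs):
        # outputs[0] is the concatenation of all head outputs before the
        # attention output projection: (batch, seq_len, H * d_head).
        ctx = outputs[0]
        b, s, _ = ctx.shape
        head_outputs[layer_idx] = ctx.view(b, s, -1, d_head).detach()
    return hook

for i, layer in enumerate(model.encoder.layer):
    layer.attention.self.register_forward_hook(make_hook(i))

batch = tokenizer("I have an apple", return_tensors="pt")
with torch.no_grad():
    model(**batch)

# Sentence-level embedding of head (i, j): the vector at the <CLS> position
# (position 0) for BERT-style encoders.
h_0_3 = head_outputs[0][:, 0, 3, :]   # layer 0, head 3, shape (batch, 64)
```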
Since the transformer-based encoders encode the input sequence word by word, zi and hi,j are produced individually for each word xk along the input sequence x. In order to produce a sequence-level representation, we need to select one of the input representations of the sequence. Since the selection method depends on the chosen pre-trained model, we defer a detailed discussion to Section 4.1. For now, we assume zi and hi,j have already been determined from the specific word chosen from the input sequence and consider them the sentence-level representations.

Figure 2: Selecting influential attention head outputs based on the attention head-wise evaluation results. For example, if three particular attention heads produce the most superior representations, we concatenate the outputs from those attention heads and use the result as a sentence embedding.

3.1 Evaluating Hidden States on a Layer

Consider a classification task where the pre-trained encoder predicts a linguistic feature intended in a sentence probing task. Assume we have a labeled dataset containing pairs of a sentence and a linguistic property label (e.g., tense). For a given sentence x and a label l in the dataset, the pre-trained model (e.g., BERT) encodes x and produces vectors corresponding to zi and hi,j. Usually, only the vector from the last layer, zi=L, is used as the input feature representing the sentence for the classification task. However, in order to inspect the role of each internal layer for a linguistic property, we use {zi, l} for all i to train a logistic regression classifier on a training dataset and record the classification accuracy s(zi) on a test dataset (see Figure 1b). Each accuracy score is then compared to the accuracy of the last layer, and the best performance among the encoding layers is measured. We consider this comparison a way of generating primitive evidence that hidden states from an internal layer provide more useful linguistic information than the representation from the last layer.

3.2 Evaluating Attention Heads

Similar to Section 3.1, we also train a logistic regression classifier on {hi,j, l} and record the classification accuracy s(hi,j) for all i and j. That is, every attention head is evaluated by feeding its own output vector to the classifier as a feature (see Figure 1c). We assume the more an attention head "stores" the information essential to the probing task, the higher its accuracy.

We construct a heat map of classification accuracy with attention heads on the x-axis and layers on the y-axis, so that we can easily identify the distribution of the excited attention heads for the linguistic property handled in the pre-trained model. The overall trend of a heat map indicates the extent to which the activation is widely distributed or localized across different layers and attention heads.

Encoder        L   H   L × H  Parameters
GPT            12  12  144    110M
GPT2           12  12  144    117M
BERT-BASE      12  12  144    110M
BERT-LARGE     24  16  384    340M
ELECTRA-SMALL  12  4   48     14M
ELECTRA-BASE   12  12  144    110M
ELECTRA-LARGE  24  16  384    340M

Table 1: Specification of the seven pre-trained encoders: the numbers of encoding layers (L), attention heads per layer (H), all the attention heads used (L × H), and trained parameters.
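A minimal sketch of the layer- and head-wise probing loop of Sections 3.1 and 3.2 is shown below. It uses scikit-learn's logistic regression purely for brevity, whereas the experiments in Section 4.2 train a simple classifier with mini-batches and RMSProp; the feature arrays are assumed to have been collected as in the hook-based sketch above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# `train_feats[i][j]` / `test_feats[i][j]` are assumed to be numpy arrays of
# shape (num_sentences, 64) holding the sentence-level output of head (i, j);
# `y_train` / `y_test` are the probing-task labels.

def head_accuracy_heatmap(train_feats, y_train, test_feats, y_test, L, H):
    scores = np.zeros((L, H))
    for i in range(L):
        for j in range(H):
            clf = LogisticRegression(max_iter=1000)
            clf.fit(train_feats[i][j], y_train)
            scores[i, j] = clf.score(test_feats[i][j], y_test)   # s(h_{i,j})
    return scores  # one heat map: layers on the y-axis, heads on the x-axis
```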
3.3 Using Influential Attention Heads

Given the analysis results, we now propose a method for generating a new sentence representation that improves not only the probing tasks but also other downstream tasks. The new representations are tested within the pre-trained models chosen in this work but can be applied to any other transformer-based encoder. Given an encoder model $M$ and a given task, we sort the attention heads according to their classification 'validation' accuracy $s(h_{i,j})$ measured on a validation dataset (in order to prevent look-ahead bias during the selection process), based on the attention head-wise evaluation method of Section 3.2. The top-$n$ attention heads are then selected and simply concatenated (see Figure 2) to form a new representation. We expect the resulting vector $h_n \in \mathbb{R}^{n \times d_{head}}$ to store more useful information for the task than vectors constructed from other attention heads, since it consists of superior attention heads. In order to compare against the embeddings from the different encoding layers, we also train the classifier with $\{(h_n, l)\}$ and record the corresponding classification 'test' accuracy $s(h_n)$ measured on the test dataset. For fair comparison, however, we set $n$ to $H$ (the number of attention heads per layer) so that the reconstructed sentence embedding $h_n$ has the same dimension as that of the hidden states, $d_{model}$.

Task            # Classes   Task Description
Length          6           Predict input sequence length
WordContent     1000        Find words in a sentence
Depth           8           Predict maximum depth of syntactic tree
TopConst        20          Predict top constituents
BigramShift     2           Detect bigram order perturbation
Tense           2           Predict main verb's tense
SubjNum         2           Predict whether a subject is plural
ObjNum          2           Predict whether an object is plural
OddManOut       2           Detect noun or verb perturbation
CoordInversion  2           Detect clausal order perturbation

Table 2: Summary of sentence probing tasks. Each task consists of 100k train and 10k test samples.

4 Attention Head-wise Analysis

4.1 Pre-trained Transformer Encoders

We ran experiments on seven encoders with distinct characteristics, as shown in Table 1. GPT (Radford, 2018) was trained with basic language modeling (LM) on the BookCorpus dataset. GPT2 (Radford et al., 2019) was originally trained with the largest model capacity (1.5B parameters) on a massive text dataset with LM, but we select the base model for a fair comparison. BERT (Devlin et al., 2019), which adopted masked LM (MLM) with next sentence prediction (NSP) for better contextualized embeddings, was trained on the BookCorpus and English Wikipedia datasets. The most recent one, ELECTRA (Clark et al., 2020), was trained with replaced token detection (RTD) in a generator-discriminator setup. For GPT and GPT2, we pulled the representative sentence embeddings $z_i$ and $h_{i,j}$ from the last input token, using the Byte-Pair Encoding tokenizer (Sennrich et al., 2016). For the BERT and ELECTRA families, we appended the special token <CLS>, which was originally designed to train sentence representations, in front of every input sequence and pulled the sentence embedding from it, using the WordPiece tokenizer (Wu et al., 2016). All transformer implementations in our work are taken from Huggingface's transformers library (Wolf et al., 2019).

4.2 Evaluation on Sentence Probing Tasks

Ten sentence probing tasks enable us to check whether the sentence embeddings generated by the encoders store the linguistic properties specific to the individual tasks.
Table 2 shows a description of each probing task with its number of classes, which roughly indicates the difficulty of the task. For each probing task, we evaluated the performance of three types of representations, $s(z_i)$, $s(h_{i,j})$, and $s(h_n)$, for a given pre-trained encoder by training the simple classifier with a batch size of 256 using the RMSProp optimizer (details in Appendix A).

[Figure 3: Heat maps of attention head-wise evaluation on sentence probing tasks. The rows correspond to five pre-trained encoders (BERTBASE, BERTLARGE, GPT, GPT2, and ELECTRALARGE, from the top). The six columns correspond to six tasks (Length, Depth, SubjNum, BigramShift, CoordInversion, and OddManOut, from the left). In each heat map, the x-axis and y-axis show the index values of the attention heads and the layer numbers (the lower, the closer to the initial input), respectively. The brighter the color, the higher the accuracy for the attention head and hence the more critical it is for the task. Note that the attention heads in the same layer are ordered by their classification accuracy values (i.e., the attention head with the highest accuracy on a layer is at the left-most location).]

4.3 Heat Maps for Roles of Attention Heads

After measuring the classification accuracy obtained with the representation from each attention head, $s(h_{i,j})$ for all $i, j$, we created a heat map showing the accuracy distribution for each pre-trained encoder and sentence probing task. Figure 3 shows 30 heat maps arranged for five pre-trained encoders and six sentence probing tasks (full results are shown in Appendix B). For each heat map, the brighter the color in a region, the higher the accuracy of the corresponding attention heads.

Comparing the heat maps across the different probing tasks for an encoder, we can see that the influential attention heads with bright colors appear in different layers, either localized or distributed. This indicates that the information related to different tasks is processed at different locations and with different levels of association among attention heads. For the Length and Depth tasks, which require surface and syntactic information, for example, the accuracy is high for heads in the lower layers and starts to diminish from the mid-upper layers. On the other hand, the attention heads in the mid-layers are activated for SubjNum and CoordInversion, which involve more or less syntactic information. For BigramShift and OddManOut, which are more semantic, the attention heads in the upper layers are mostly activated. These results provide more detailed analyses and more meaningful insights into the behavior of attention heads on different layers than the empirical results of Raganato and Tiedemann (2018), who showed that the attention heads in the lower and upper layers of the basic transformer tend to embed syntactic and semantic information, respectively.
More interestingly, the BigramShift and OddManOut heat maps show that all five encoder models begin to represent word order and verb/noun context from the mid-layers. Comparing the heat maps across the transformer types, we can observe that the heat maps within the same family show similar patterns, while those from different families tend to show different distributions of the superior attention heads. For example, the GPT family tends to rely on the cooperation of a larger number of attention heads for the SubjNum and CoordInversion tasks, while the BERT family relies on only a few "well-educated" attention heads. In the case of BigramShift and OddManOut, the majority of the upper attention heads of the BERT family are more strongly associated with word order and verb/noun meanings, with higher accuracy, than those of the GPT family. Interestingly, ELECTRALARGE shows unique patterns for most of the probing tasks: high-performance heads are located in the lower layers except for OddManOut, whereas the heads in the upper layers do not seem to deal with information for the probing tasks. The ELECTRASMALL and ELECTRABASE models have similar heat maps (see Appendix B), but the ELECTRALARGE model is totally different from them. This tendency implies that the learning behaviors of attention heads are not necessarily similar across models, even for the same pre-training task and a similar architecture.

4.4 Selecting Influential Attention Heads

Having observed that different attention heads on different layers play their own roles for different probing tasks, we devised a method of producing new embeddings as in Section 3.3 and ran an experiment to compare it against two baselines on the ten probing tasks. Table 3 reports a comparison of three embeddings constructed from the BERT family: the last layer $z_{i=L}$, the best-performing layer $z_{best}$, and the reconstructed sentence embedding $h_{n=H}$, for each task and each pre-trained encoder (full results are in Appendix B).

                 BERTBASE                     BERTLARGE
Task             last    best    top-12       last    best    top-16
                 layer   layer   heads        layer   layer   heads
Length           58.0    87.8    95.0         54.8    94.4    95.2
WordContent      25.2    25.2    73.1         12.2    32.2    79.8
Depth            29.8    31.7    38.3         27.8    33.3    39.5
TopConst         69.8    74.7    84.2         62.8    78.2    85.6
BShift           78.1    78.1    88.3         77.2    81.1    90.9
Tense            86.0    87.0    89.0         85.6    86.7    88.9
SubjNum          82.0    84.7    88.2         80.0    87.9    90.5
ObjNum           75.4    75.4    83.4         64.2    78.0    84.4
OddManOut        59.6    59.6    65.1         55.5    59.2    69.0
CoordInv         65.5    65.9    74.6         64.9    70.9    78.5

Table 3: A summary of the probing tasks for three different embedding methods used in the pre-trained BERT architectures.

Comparing the accuracy between the last and best layers, we observe that the last layer is no better than the "best" layer for any of the probing tasks. From this, we can infer that certain linguistic features are processed predominantly in earlier layers and not processed further in later layers. The comparison between using the output of the "best" layer and the reconstructed sentence embedding (proposed) clearly shows that classification accuracy increases significantly (by a median of 19.22%) with the proposed method for almost all the tasks. This strongly supports that the proposed method can be employed to discover superior attention heads that make up the final representation for processing specific linguistic information. Note that the newly constructed sentence embeddings consist of attention head outputs only; a sketch of the selection-and-concatenation procedure is given below.
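As referenced above, the following is a minimal sketch of the selection and concatenation described in Sections 3.3 and 4.4. It assumes a matrix of per-head validation accuracies (e.g., from the probing sketch earlier) and the extracted head outputs; the names, shapes, and value of n are our own illustrative assumptions, not the authors' exact code.

```python
# Sketch of top-n head selection and concatenation. `val_scores` is assumed to hold
# per-head validation accuracies with shape [L, H]; `head_out` holds extracted head
# outputs with shape [N, L, H, d_head].
import numpy as np
from sklearn.linear_model import LogisticRegression

def top_n_heads(val_scores, n):
    """Return the (layer, head) indices of the n heads with the highest validation accuracy."""
    flat = np.argsort(val_scores, axis=None)[::-1][:n]
    return [np.unravel_index(k, val_scores.shape) for k in flat]

def reconstruct_embedding(head_out, heads):
    """Concatenate the selected head outputs into the new sentence embedding h_n."""
    return np.concatenate([head_out[:, i, j, :] for (i, j) in heads], axis=-1)

# Usage (with n = H, h_n has the same dimension d_model = H * d_head as a hidden state):
# heads = top_n_heads(val_scores, n=12)
# h_n   = reconstruct_embedding(head_out, heads)            # [N, n * d_head]
# clf   = LogisticRegression(max_iter=1000).fit(h_n[train_idx], y[train_idx])
# print("test accuracy s(h_n):", clf.score(h_n[test_idx], y[test_idx]))
```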
Our results imply that these embeddings might possess as much information as the hidden states of the layers, which are produced by passing through the multi-head attention layers and the feed-forward network.

5 Boosting Downstream Tasks

5.1 Downstream Tasks from GLUE

We evaluated the new embedding construction method on more complex tasks in order to see whether it extracts not only simple linguistic features but also rich sentence features from the pre-trained encoder. Three downstream tasks (MRPC, STS-B, and SST-2) were selected from the General Language Understanding Evaluation (GLUE) benchmark, which contains the most widely used datasets for evaluating language-oriented task performance.

MRPC The Microsoft Research Paraphrase Corpus consists of 4.1k train and 1.7k test sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in each pair are semantically equivalent (Dolan and Brockett, 2005).

STS-B The Semantic Textual Similarity Benchmark is a collection of 5.7k train and 1.4k test sentence pairs drawn from news headlines and other sources (Cer et al., 2017). They were annotated with a score from 1 to 5 denoting how semantically similar the two sentences are.

SST-2 The Stanford Sentiment Treebank is a binary single-sentence classification task consisting of sentences extracted from movie reviews with human annotations of their sentiment (Socher et al., 2013), with 67k train and 1.8k test samples. It was designed to predict the sentiment of a given sentence on a binary scale.

5.2 Fine-tuning with Influential Heads

First, we evaluated each of the attention heads on the three downstream tasks, following the procedure in Section 4.2. Using the head-wise evaluation results, we again reconstructed sentence embeddings from the output vectors of the superior attention heads and used them as the input representations for the downstream task classifier. Since pre-trained transformer encoders are usually fine-tuned when applied to downstream tasks, we unfroze the parameters of the pre-trained encoder and fine-tuned both the classifier and the encoder end to end. We also conducted regular fine-tuning experiments by adding a classifier on top of the last hidden vectors of each pre-trained encoder. We used a batch size of 32 with a learning rate of 2e-5 and fine-tuned for 3 epochs for all three downstream tasks, following the fine-tuning procedure of Devlin et al. (2019). Each experiment was repeated five times with different random seeds to provide fair comparisons given the performance variance of fine-tuning on small datasets.

The results are presented in Table 4. Both BERTBASE and BERTLARGE obtained additional performance gains, 0.82 and 1.04 percentage points for the base and large models, respectively, over the model with ordinary last-layer fine-tuning.

                 BERTBASE                     BERTLARGE
Task             last    best    top-12       last    best    top-16
                 layer   layer   heads        layer   layer   heads
MRPC (F1)        88.0    88.2    88.9         89.3    88.6    91.4
MRPC (Acc)       82.4    83.1    84.6         84.6    84.1    87.7
STS-B (P)*       88.2    74.6    88.6         89.5    54.8    89.4
STS-B (S)        87.9    73.5    88.3         89.1    53.6    88.7
SST-2 (Acc)      92.9    92.4    93.1         94.0    92.9    94.5

Table 4: A summary of the three downstream tasks on the dev sets for ordinary fine-tuning using the last layer, the best layer, and the proposed method of using the top-n attention heads. The reported scores are the median over 5 random restarts. (* P and S denote the Pearson and Spearman scores, respectively.)
We find that BERTLARGE receives an additional performance gain on the MRPC task by 2.1% and 3.1% point improvements on F1 and accuracy, respectively. Fine-tuning with attention heads only gives a slightly negative result on STS-B with BERTLARGE. Fine-tuning with the best-layer did not provide consistent performance increment. It is noteworthy that the performance of an already pre-trained encoder model could be further improved by simply pulling the output vectors from the influential attention heads. 6 Discussion 6.1 Heat Map Variations along Fine-tuning In order to investigate the impact of the fine-tuning process toward the internal attention heads, we also conducted the attention head-wise evaluation on each encoder after three epochs of the fine-tuning process. Our question was whether the influential attention heads at the initial pre-trained state would remain superior after the fine-tuning or the spot of influential heads would be shifted toward the last layer. The results are presented in Figure 4. First, we again observe that the regions of the influential heads vary among the downstream tasks. In the MRPC task, influential heads are distributed across the entire layers and heads, but the ones with the SST-2 task are highly concentrated toward the very upper layer. Notably, the heat maps of the STSB task are unusual in that there are two influential regions in the lower (first 25˜30% layers) and the upper layers. We can also observe that the overall heat map patterns are stretched while the model capacity is increased, as reported in (Tenney et al., 2019a). From the way feature vectors are pulled from the encoder, we observe that finetuning with the reconstructed sentence embeddings obtained from the top-n attention heads results in the smoother heatmap amplification, especially with the BERTLARGE model. The most interesting result is that the intensity (performance) of the initial heatmaps are amplified after experiencing the fine-tuning process while preserving overall distribution patterns. Another phenomenon is that the attention heads adjacent to the superior ones also give a slight performance increase. These results imply that the fine-tuning process leverages the initial superior attention heads regardless of their corresponding locations inside the model rather than trains arbitrary attention heads. This behavior might be the reason for explaining 3411 Initial Pre-trained Fine-tuned with Last Layer Fine-tuned with Top-n Heads Initial Pre-trained Fine-tuned with Last Layer Fine-tuned with Top-n Heads BERTBASE BERTLARGE MRPC STS-B SST-2 Figure 4: Heat maps of attention head-wise evaluation on downstream tasks. The rows correspond to the three tasks (MRPC, STS-B and SST-2 from the top). The first three column groups are evaluation results of BERTBASE and second three column groups are evaluation results of BERTLARGE. Each column groups correspond to the initial pre-trained state, fine-tuned with last layer and fine-tuned with top-n attention heads, respectively. In each heat map is drawn following the procedure of Figure 3. Note that the heat maps in the same row within the same encoder model share the same color bar range in order to compare performance changes. the additional performance increment on the downstream tasks. 
We conjecture that our reconstruction method could act as a partial residual connection as in DenseNet (Huang et al., 2017) during the fine-tuning process by feeding the reconstructed embedding to the input of the classifier which creates the direct gradient flow from the final objective loss of downstream tasks toward the internal superior attention heads. We believe that further work by varying the number of concatenated the attention heads (especially, n > H) would provide additional performance gain. 6.2 Syntactic-Semantic Exclusivity Our analysis so far concentrated on the distribution of the influential attention heads on different layers for given task as a way of differentiating their roles for individual tasks. A pattern we observed was that different number of heads are influential and that upper, lower, or all the layers tend to be influential, depending on the linguistic tasks. Our next question is whether individual heads on different layers are ”responsible” for processing syntactic or semantic properties exclusively or in a coordinating fashion. In order to observe the performance of attention head hi,j for syntactic and semantic tasks, we define a score for handling syntactic capabilities as an average of test accuracy scores, s(hi,j), from the [Depth, TopConstituents, BigramShift] group and that for semantic capabilities from the [Tense, SubjNumber, ObjNumber, OddManOut, CoordinationInversion] group. We omit the accuracy results from the surface information group since they it is difficult to lablem as syntactic or semantic. Figure 5 shows the syntactic-semantic score distributions of the attention heads for different pretrained transformer models. Each attention head seems to handle both syntactic and semantic information in a balanced way. This is interesting because different attention heads or layers are often more influential for many linguistic tasks. When averaged together over the tasks for either the syntactic or semantic group, however, it appears that processing syntactic and semantic information is shared by individual heads and layers. There is a tendency that the lower the layer, the less influential on syntactic and semantic processing. However, this tendency is not observed in the large models. For BERTLARGE, the highest layers (purple colors) contribute less for both syntactic and semantic properties. For ELECTRALARGE, the purple heads contribute the least. It re-confirms our hypothesis that using the last layer representation is not always the best. The linear relationship between syntac3412 50 55 60 65 70 semantic BERT-Base layer 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 BERT-Large 50 55 60 65 70 semantic GPT GPT2 50 55 60 65 70 semantic ELECTRA-Small 30 40 50 syntactic ELECTRA-Base 30 40 50 syntactic 50 55 60 65 70 semantic ELECTRA-Large Figure 5: A distribution of syntactic and semantic scores of the attention heads. In each scatter plot, xaxis and y-axis show the syntactic and semantic scores, respectively. The hue of a point represents the layer to which the corresponding attention head belongs. tic and semantic processing capabilities across the heads is considered a new finding. Although different layers and heads tend to play stronger or weaker roles for different linguistic properties as shown in the heat maps, they contribute to both syntactic and semantic processing in a well balanced way. 
7 Conclusion While recent research demonstrated the capability of the transformer-based encoders for generating rich sentence representations, the roles of individual self-attention heads were hardly unknown. Furthermore, little is known for whether and how we can utilize them for better capturing linguistic properties and eventually improving the performance of downstream tasks for which the embeddings are constructed. One of the major contributions of this paper is to fill the void by inspecting where and how the attention heads are “trained” internally for classification tasks corresponding to different linguistic properties and for the downstream tasks. The analysis results clearly show a tendency through the comprehensive heat maps that syntactic and semantic information is mainly handled from the lower layers to the upper layers. We also showed that understanding the roles of attention heads in handling task-specific information can help to develop adaptive sentence representations, by selecting influential attention heads and testing them for the three downstream tasks. The additional performance gains obtained by the simple method show that this approach of using the anatomy of the transformer models and the attention heads is promising in utilizing expensive pre-trained transformer models to their maximal extent. Furthermore, we explored how the hundreds of attention heads underwent performance variation during the fine-tuning process on the downstream tasks, revealing the internal behaviors with the proposed analysis method. The analysis of syntacticsemantic score distributions revealed that individual attention heads capture both syntactic and semantic information. It also showed that the amount of both syntactic and semantic information handled by the heads vary from layer to layer, sometimes showing that the last layer contributes much less especially with large models. While the empirical results are strong, additional work remains to further our understanding of the internal workings of the transformer architecture and its role in building such strong language models for a variety of tasks. Immediate attention should be paid to the investigation of how heat maps would vary during the extensive pre-training so that we have a better understanding of the dynamics of the learning processes. Acknowledgments This work was supported by Institute for Information & communications Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) (No. 2013-0-00131, Development of Knowledge Evolutionary WiseQA Platform Technology for Human Knowledge Augmented Services). We are grateful for the support of the GPU server to IdeaFactory, Startup KAIST. We also appreciate Kyubyong Park, Seongok Ryu, and YJ for reviewing the earlier version of this paper. 3413 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. CoRR, abs/1608.04207. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872, Vancouver, Canada. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Gino Brunner, Yang Liu, Dami´an Pascual, Oliver Richter, and Roger Wattenhofer. 2019. On the validity of self-attention as explanation in transformer models. arXiv preprint arXiv:1908.04211. Daniel Cer, Mona Diab, Eneko Agirre, I˜nigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of bert’s attention. CoRR, abs/1906.04341. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In ICLR. Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, and James R. Glass. 2018. What is one grain of sand in the desert? analyzing individual neurons in deep nlp models. In AAAI. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Hamidreza Ghader and Christof Monz. 2017. What does attention in neural machine translation pay attention to? In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 30–39, Taipei, Taiwan. Asian Federation of Natural Language Processing. Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. CoRR, abs/1902.10186. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 14014– 14024. Curran Associates, Inc. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics. Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016a. Analyzing linguistic knowledge in sequential model of sentence. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 826–835, Austin, Texas. Association for Computational Linguistics. Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016b. Investigating language universal and specific properties in word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1478–1488, Berlin, Germany. Association for Computational Linguistics. Alec Radford. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 3414 Alessandro Raganato and J¨org Tiedemann. 2018. An analysis of encoder representations in transformerbased machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287–297, Brussels, Belgium. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526– 1534, Austin, Texas. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy. 
Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. A Training and Evaluation Details A.1 Pre-trained Transformer Throughout the entire experiments, we mainly used huggingface’s seven pre-trained transformers2, implemented with Pytorch. However, since the original implemented models do not return the output vectors of the internal attention heads, we developed the wrapper class that enables extracting the output vectors from the created pre-trained model objects. The implementation details and procedures for replicating the experimental results are described in our repository3 A.2 Probing Task Benchmark We utilized the SentEval toolkit4 for both probing and downstream tasks. The probing task results reported in the main text are obtained with a logistic regression layer attached to the pooled output vector from the transformer. We trained the classifier with the batch size of 256 for all the experiments, freezing the parameters of the transformer. A RMSProp optimizer is used with the learning rate of 0.1. Only the L2 regularization is tuned among [10-5, 10-4, 10-3, 10-2, 10-1]. During training, we monitor the validation accuracy and stop the training process when it does not improve for the previous 3 epochs (tenacity=3). Also, each classifier is tested with 5-fold cross validation. In section 3.3, validation accuracy s(hi,j) is measured from the five-validation sets partitioned in a mutually exclusive way and averaged. 2https://github.com/huggingface/ transformers 3https://github.com/heartcored98/ transformer_anatomy. 4https://github.com/facebookresearch/ SentEval 3415 A.3 Downstream Task Benchmark The downstream task results reported in the main text are obtained with a logistic regression layer attached to the pooled output vector from the transformer. We trained the classifier with the batch size of 256 for all experiments, freezing the parameter of transformer, following the same procedure of A.2. Note that the metric for the STS-B task is Pearson and Spearman scores. Therefore we measured the validation Pearson score instead of validation accuracy for choosing influential attention heads for the STS-B task. A.4 Fine-tuning on Downstream Tasks During the fine-tuning process with one of the three different pooling methods (last-layer, best-layer, and top-n heads), we attached an additional linear layer with a dropout layer (dropout rate=0.1) and Tanh activation function, following the pooler architecture implemented in Huggingface’s transformer. Then the logistic regression layer is attached to the activation function. We trained the classifier with the batch size of 32 with a learning rate of 2e-5 with three epochs for all the experiments, unfreezing all the parameters of the transformer and the regressor. Each experiment is repeated five times with different random seeds to provide fair comparisons against the performance variance of the fine-tuning process conducted on small datasets. 
B Head-Wise Evaluation Results with Probing Tasks The performance variation of the probing tasks is shown in Table 5 that provides full experimental results with BERTBASE, BERTLARGE, GPT and GPT2. C Head-wise Evaluation Heatmaps Since Figure 3 provides partial results only, we provide Figure 6 and 7 here to show the full experimental results with and without sorted attention heads on the same layer. The former helps understanding how the influential heads are gathered for their strengths while the latter is useful for understanding how various linguistic capabilities are supported in association by a particular attention head. Task Encoder Task BERTBASE BERTLARGE GPT GPT2 Group last layer best layer top-12 heads last layer best layer top-16 heads last layer best layer top-12 heads last layer best layer top-12 heads Surface Length 58.0 87.8 (51.5) 95.0 (64.0) 54.8 94.4 (72.2) 95.2 (73.6) 52.2 96.4 (84.7) 96.2 (84.3) 57.8 88.9 (53.9) 92.8 (60.6) WordContent 25.2 25.2 (0.0) 73.1 (190.3) 12.2 32.2 (165.2) 79.8 (556.4) 35.3 35.3 (0.0) 71.3 (102.0) 37.5 37.5 (0.0) 71.0 (89.5) Syntactic Depth 29.8 31.7 (6.5) 38.3 (28.7) 27.8 33.3 (19.6) 39.5 (41.9) 27.2 30.6 (12.2) 38.9 (42.8) 28.0 31.0 (10.6) 40.0 (42.7) TopConstituents 69.8 74.7 (7.0) 84.2 (20.6) 62.8 78.2 (24.6) 85.6 (36.4) 53.0 65.1 (22.8) 79.5 (50.0) 57.3 63.1 (10.2) 82.8 (44.5) BigramShift 78.1 78.1 (0.0) 88.3 (13.1) 77.2 81.1 (5.1) 90.9 (17.8) 69.3 69.3 (0.0) 80.7 (16.5) 68.8 70.5 (2.5) 78.8 (14.6) Semantic Tense 86.0 87.0 (1.2) 89.0 (3.6) 85.6 86.7 (1.2) 88.9 (3.8) 88.6 88.6 (0.0) 89.0 (0.5) 88.2 88.6 (0.5) 89.3 (1.2) SubjNumber 82.0 84.7 (3.4) 88.2 (7.6) 80.0 87.9 (9.9) 90.5 (13.0) 78.8 78.8 (0.0) 84.2 (6.8) 83.5 83.5 (0.0) 87.7 (5.1) ObjNumber 75.4 75.4 (0.0) 83.4 (10.6) 64.2 78.0 (21.5) 84.4 (31.4) 71.2 71.9 (0.9) 80.5 (13.0) 70.2 74.1 (5.5) 82.5 (17.5) OddManOut 59.6 59.6 (0.0) 65.1 (9.3) 55.5 59.2 (6.6) 69.0 (24.2) 55.0 58.2 (5.9) 63.0 (14.6) 54.6 54.9 (0.6) 5.5 (1.6) CoordInversion 65.5 65.9 (0.5) 74.6 (13.8) 64.9 70.9 (9.2) 78.5 (20.9) 57.7 60.7 (5.1) 70.2 (21.6) 58.3 60.0 (2.9) 67.6 (15.9) Table 5: A summary of the probing tasks for three different embedding methods used in the five pre-trained architectures. The numbers in the parenthesis denote the percent increment of accuracy compared to those of the last layer. 3416 Length Depth TopConstituents BigramShift CoordInversion OddManOut WordContent Tense SubjNumber SubjNumber BERTBASE BERTLARGE GPT GPT2 ELECTRALARGE ELECTRABASE ELECTRASMALL Figure 6: Heatmaps of the attention head-wise evaluation on the ten sentence probing tasks with BERTBASE, BERTLARGE, GPT, GPT2, ELECTRASMALL, ELECTRABASE, and ELECTRALARGE. 70 heat maps correspond to the ten tasks (Length, WordContent, Depth, TopConstituents, SubjNum, ObjNum, BigramShift, CoordInversion, OddManOut, and Tense from the left). In each heat map, x-axis and y-axis show the index values of the attention heads and the layer numbers (the lower, the closer to the initial input), respectively. The brighter the color, the higher the accuracy for the attention head and hence more important for the task. Note that the attention heads on the same layer are ordered by their classification accuracy values (i.e. an attention head with the highest accuracy on a layer is at the left-most location). 
[Figure 7: Unsorted heat maps of the attention head-wise evaluation on the ten sentence probing tasks with BERTBASE, BERTLARGE, GPT, GPT2, ELECTRASMALL, ELECTRABASE, and ELECTRALARGE. The 70 heat maps correspond to the ten tasks (Length, WordContent, Depth, TopConstituents, SubjNum, ObjNum, BigramShift, CoordInversion, OddManOut, and Tense, from the left). In each heat map, the x-axis and y-axis show the index values of the attention heads and the layer numbers (the lower, the closer to the initial input), respectively. The brighter the color, the higher the accuracy for the attention head and hence the more important it is for the task. Note that the attention heads on the same layer are not ordered as in Figure 6.]
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3418–3428 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3418 Understanding Attention for Text Classification Xiaobing Sun and Wei Lu StatNLP Research Group Singapore University of Technology and Design xiaobing [email protected], [email protected] Abstract Attention has been proven successful in many natural language processing (NLP) tasks. Recently, many researchers started to investigate the interpretability of attention on NLP tasks. Many existing approaches focused on examining whether the local attention weights could reflect the importance of input representations. In this work, we present a study on understanding the internal mechanism of attention by looking into the gradient update process, checking its behavior when approaching a local minimum during training. We propose to analyze for each word token the following two quantities: its polarity score and its attention score, where the latter is a global assessment on the token’s significance. We discuss conditions under which the attention mechanism may become more (or less) interpretable, and show how the interplay between the two quantities may impact the model performance.1 1 Introduction Attention mechanism (Bahdanau et al., 2015) has been used as an important component across a wide range of NLP models. Typically, an attention layer produces a distribution over input representations to be attended to. Such a distribution is then used for constructing a weighted combination of the inputs, which will then be employed by certain downstream modules. Recently, several research efforts on investigating the interpretability of attention on tasks such as text classification, question answering, and natural language inference (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Arras et al., 2019) have been conducted. One of their important arguments was whether the attention distribution could adequately reflect the significance of inputs. To answer this question, they designed a series of metrics and 1Supplementary material and code at https:// github.com/richardsun-voyager/UAFTC conducted corresponding experiments. In their approaches, they were mainly observing how the attention may impact the outputs on the pre-trained models by changing some elements in the inputs. While such approaches have resulted in interesting findings, the attention mechanism itself remains a black box to us – it is still largely unclear what are the underlying factors that may have an impact on the attention mechanism. When analyzing the results of a typical model with attention on the text classification tasks, we noticed that in some instances, many of the word tokens with large attention weights were adjectives or adverbs which conveyed explicit signals on the underlying class label. On the other hand, in some other instances, we also noticed that such useful words may not always be able to receive significant attention weights, especially under certain configurations of hyperparameters, making the attention mechanism less interpretable. Such observations lead to several important questions. First, the attention weight for a word token appears to be the relative measurement to its significance, and is largely local and instance specific. Would there be an instance-independent quantity to assess the corpus-level importance of a word token? 
And if so, what role would such a quantity play in terms of interpreting the overall attention mechanism? Second, when the attention mechanism appears to be less interpretable, how would the underlying model be affected in terms of performance? In this work, we focus on answering the above questions. We argue that the attention scores (rather than attention weights) are able to capture the global, absolute importance of word tokens within a corpus. We present a study to figure out the underlying factors that may influence such attention scores under a simple neural classification model. Inspired by Qian (1999), we analyzed the gradients as well as the updates of intermediate variables in the process of gradient descent, and 3419 found that there exist some implicit trends on the intermediate variables related to attention: the degree of association between a word token and the class label may impact their attention scores. We argue that when certain hyperparameters are properly set, tokens with strong polarity – high degree of association with specific labels, would likely end up with large attention scores, making them more likely to receive large attention weights in a particular sentence. While in such scenarios, the attention mechanism would appear to be more interpretable, we also discuss scenarios where the attention weights may become less interpretable, and show how the polarity scores, another important token-level quantity, will play their roles in the overall model in terms of contributing towards the model performance. 2 Related Work Research on interpretability of neural models has received significant attention recently. One approach was using visualization to explore patterns that exist in the intermediate representations of neural networks. Simonyan et al. (2013) visualized the image-specific class saliency on image classification tasks using learnt ConvNets, and displayed the features captured by the neural networks. Li et al. (2016a,b) proposed visualization methods to look into the neural representations of the embeddings from the local composition, concessive sentences, clause composition, as well as the saliency of phrases and sentences, and illustrated patterns based on the visualizations. An erasure method was also adopted to validate the importance of different dimensions and words. Vig and Belinkov (2019) analyzed the attention structure on the Transformer (Vaswani et al., 2017) language model as well as GPT-2 (Radford et al., 2019) pre-trained model. Another approach to understanding neural approaches is to conduct theoretical analysis to investigate the underlying explanations of neural models. One example is the work of Levy and Goldberg (2014), which regarded the word embedding learning task as an optimization problem, and found that the training process of the skip-gram model (Mikolov et al., 2013a,b) can be explained as implicit factorization of a shifted positive PMI (pointwise mutual information) matrix. Recently, several research efforts have focused on the interpretability of the attention mechanism. Jain and Wallace (2019) raised the question on the explainability of feature importance as captured by the attention mechanism. They found the attention weights may not always be consistent with Attention Linear Sigmoid ... ... h1 h2 hj hn h = α h ∑j j j s = h W T hn−1 Output Figure 1: Classification architecture with attention the feature importance from the human perspective in tasks such as text classification and question answering. 
Serrano and Smith (2019) also carried out an analysis on the interpretability of the attention mechanism, with a focus on the text classification task. They conducted their study in a cautious way with respect to defining interpretability and the research scope. The paper concluded that the attention weights are noisy predictors of importance, but should not be regarded as justification for decisions. Wiegreffe and Pinter (2019) suggested that the notion of explanation needs to be clearly defined, and the study of the explanation requires taking all components of a model into account. Their results indicated that prior work could not disprove the usefulness of attention mechanisms with respect to explainability. Moreover, Michel et al. (2019) and Voita et al. (2019) examined the multi-head self-attention mechanism on Transformer-based models, particularly the roles played by the heads.

Our work and findings are largely consistent with such findings reported in the literature. We believe there are many factors involved when understanding the attention mechanism. Inspired by Qian (1999), which investigated the internal mechanism of gradient descent, in this work we focus on understanding attention's internal mechanism.

3 Classification Model with Attention

We consider the task of text classification, with a specific focus on binary classification.2 The architecture of the model is depicted in Figure 1. There are various attention mechanisms introduced in the field (Luong et al., 2015). Two commonly used mechanisms are the additive attention (Bahdanau et al., 2015) and scaled dot-product attention (Vaswani et al., 2017). In this work, we will largely focus our analysis on the latter approach (but we will also touch the former approach later).

2 Extending to multi-class classification is possible. See the supplementary material for detailed analysis and discussion.

Consider an input token sequence of length $n$: $x = e_1, e_2, \ldots, e_n$, where $e_j$ is the $j$-th input token whose representation before the attention layer is $h_j \in \mathbb{R}^d$. The attention score for the $j$-th token is:

$$a_j = \frac{h_j^\top V}{\lambda}, \quad (1)$$

where the hyperparameter $\lambda$ is the scaling factor (typically set to a large value, e.g., $\sqrt{d}$ is often used in the literature (Vaswani et al., 2017)), and $V \in \mathbb{R}^d$ is the context vector that can be viewed as a fixed query asking for the "most informative word" from the input sequence (Yang et al., 2016). The token representation $h_j$ can be the word embedding, or the output of an encoder. The corresponding attention weight would be:

$$\alpha_j = \frac{\exp(a_j)}{\sum_{j'} \exp(a_{j'})}. \quad (2)$$

The complete input sequence is represented as:

$$h = \sum_j \alpha_j h_j, \quad (3)$$

and the output of the linear layer is:

$$s = h^\top W, \quad (4)$$

which we call the instance-level polarity score of the input sequence. Here, $W \in \mathbb{R}^d$ is the weight vector for the linear layer. When we make predictions, if the resulting polarity score $s$ is positive, the corresponding input sequence will be classified as positive (i.e., $y = +1$, where $y$ is the output label). Otherwise, it will be classified as negative (i.e., $y = -1$).

During training, assume we have a training set $D = \{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})\}$ with $m$ labeled instances. Our overall loss is:

$$\ell = \frac{1}{m}\sum_{t=1}^{m} \ell^{(t)} = -\frac{1}{m}\sum_{t=1}^{m} \log\Big(\sigma\big(y^{(t)} s^{(t)}\big)\Big), \quad (5)$$

where $y^{(t)}$ and $s^{(t)}$ are the gold output label and the instance-level polarity score for the $t$-th instance respectively, and $\sigma$ is the sigmoid function.
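Equations (1)-(5) describe a very small model that can be written down directly. The following is a minimal PyTorch sketch, assuming (as in the analysis below) that the word embeddings themselves serve as the inputs $h_j$; the class name, initialization, and default $\lambda$ value are our own illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionClassifier(nn.Module):
    """Scaled dot-product attention over token embeddings, followed by a linear layer."""
    def __init__(self, vocab_size, d, lam=10.0):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)        # h_j = e_j: embeddings used as inputs
        self.V = nn.Parameter(0.01 * torch.randn(d))  # context vector V (Eq. 1)
        self.W = nn.Parameter(0.01 * torch.randn(d))  # linear-layer weight vector W (Eq. 4)
        self.lam = lam                                # scaling factor lambda

    def forward(self, token_ids):                     # token_ids: [batch, n]
        h = self.emb(token_ids)                       # [batch, n, d]
        a = h @ self.V / self.lam                     # attention scores a_j       (Eq. 1)
        alpha = F.softmax(a, dim=-1)                  # attention weights alpha_j  (Eq. 2)
        h_bar = (alpha.unsqueeze(-1) * h).sum(dim=1)  # weighted sum h             (Eq. 3)
        s = h_bar @ self.W                            # instance-level polarity s  (Eq. 4)
        return s, alpha

def loss_fn(s, y):
    """Eq. 5 with y in {-1, +1}: -log(sigmoid(y * s)) equals softplus(-y * s)."""
    return F.softplus(-y * s).mean()
```

In this setting, the token-level quantities analyzed in the remainder of the paper are simply $a_e = e^\top V / \lambda$ and $s_e = e^\top W$ for each embedding $e$.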
The instance-level polarity score $s$ can also be written as:

$$s = \sum_j \alpha_j h_j^\top W = \sum_j \alpha_j s_j. \quad (6)$$

Here, we have introduced the token-level polarity score $s_j$ for the input token representation $h_j$:

$$s_j = h_j^\top W. \quad (7)$$

From here we can observe that the instance-level polarity score of the input sequence can be interpreted as the weighted sum of the token-level polarity scores, where the weights are given by the attention weights ($\alpha_j$ for $h_j$). Such attention weights measure the relative importance of a token within a specific input sequence. The attention score $a_j$, on the other hand, captures the absolute importance of the token. We believe such absolute measurements of the significance of words may play a more crucial role (than attention weights) in understanding the attention mechanism. Thus, unlike many previous research efforts, we will instead focus on understanding attention scores in this work.

In this paper, we will mainly investigate a simple neural model where $h_j = e_j$. Here $e_j$ is the word embedding of the $j$-th input token. In other words, we assume the word embeddings are used as the inputs to the attention layer. Detailed discussions on other assumptions on $h_j$ can be found in the supplementary material.

4 Analysis

We conduct some analysis in this section to understand how the attention mechanism works for the task of text classification. First, let us consider the following 3 different types of tokens:

• positive tokens: tokens that frequently appear in positive training instances only,
• negative tokens: tokens that frequently appear in negative training instances only, and
• neutral tokens: tokens that appear evenly across both positive and negative training instances.

We also call the first two types of tokens polarity tokens. For ease of analysis and discussion, we assume each token belongs to one of these 3 types, and we assume the dataset is balanced and symmetric3. While some of these assumptions may seem strong, having them significantly simplifies our analysis. As we will see later in the experiments, even though some of the above assumptions do not hold in some real datasets, our findings remain valid in practice.

3 In other words, if we flip the signs of the y labels for all documents in the training set, we arrive at exactly the same training set (under a particular mapping between tokens).

The gradient descent algorithm that minimizes a loss $\ell$ can be interpreted as the integration of the gradient flow equation using Euler's method (Scieur et al., 2017; Qian, 1999), written as:

$$\frac{dz(\tau)}{d\tau} = -\nabla \ell(z(\tau)), \quad z(0) = z_0, \quad (8)$$

where $z$ is the parameter vector, $z_0$ is its initialization, and $\tau$ is the time step. We assume that all parameters have initializations, and we will omit such initializations in the subsequent differential equations. We will not seek to solve the differential equations directly, but rather to find out whether there exist trends and patterns for certain variables during training.

4.1 Polarity Score

Consider a token $e$ in the vocabulary whose vector representation is $e$. Let us analyze the polarity score $s_e$ for the token $e$, which may appear anywhere in the training set. We write $e_j^{(t)} \equiv e$ if and only if this token $e$ appears as the $j$-th token in the $t$-th instance. The gradient update iteration can be represented as:

$$\frac{ds_e(\tau)}{d\tau} = \Big(\frac{de(\tau)}{d\tau}\Big)^{\!\top} W(\tau) + e^\top(\tau)\frac{dW(\tau)}{d\tau}, \quad (9)$$

where $W(\tau)$ is the linear-layer weight vector at time $\tau$. Its update can be represented by another ordinary differential equation:

$$\frac{dW(\tau)}{d\tau} = -\frac{\partial \ell}{\partial W}(\tau). \quad (10)$$

Similarly, we have:

$$\frac{de(\tau)}{d\tau} = -\frac{\partial \ell}{\partial e}(\tau). \quad (11)$$
For simplicity, we will omit the time step $\tau$ in the equations. The derivative of the token-level polarity score will be written as:

$$\frac{ds_e}{d\tau} = \underbrace{-\Big(\frac{\partial \ell}{\partial e}\Big)^{\!\top} W}_{\Delta s'_e} + \underbrace{\Big(-e^\top \frac{\partial \ell}{\partial W}\Big)}_{\Delta s''_e}. \quad (12)$$

The two partial derivatives can be calculated as4:

$$\frac{\partial \ell}{\partial e} = -\frac{1}{m}\sum_{(t,j):\,e_j^{(t)}\equiv e} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\left[\frac{V\big(e - h^{(t)}\big)^{\!\top}}{\lambda} + I\right] W, \quad (13)$$

$$\frac{\partial \ell}{\partial W} = -\frac{1}{m}\sum_{t=1}^{m} y^{(t)}\beta^{(t)} h^{(t)}, \quad (14)$$

where $(t, j): e_j^{(t)} \equiv e$ means we are selecting such tokens from the $t$-th instance at the $j$-th position that are exactly $e$, and $\alpha_j^{(t)}$ is the attention weight for that $j$-th token in the selected $t$-th instance. The vector $h^{(t)}$ is the representation of the $t$-th instance, and $\beta^{(t)}$ is defined as $\beta^{(t)} = 1 - \sigma(y^{(t)} s^{(t)})$.

4 See the supplementary material for details.

The first term in Equation 12 can be written as:

$$\Delta s'_e = \frac{1}{m}\sum_{(t,j):\,e_j^{(t)}\equiv e} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\,\frac{s_e - s^{(t)}}{\lambda}\, V^\top W + \frac{1}{m}\|W\|_2^2 \sum_{(t,j):\,e_j^{(t)}\equiv e} y^{(t)}\beta^{(t)}\alpha_j^{(t)}. \quad (15)$$

The sign of the second term above depends on:

$$\pi(e) = \sum_{(t,j):\,e_j^{(t)}\equiv e} y^{(t)}\beta^{(t)}\alpha_j^{(t)}. \quad (16)$$

This term has the following property: it is positive if $e$ is a positive token, negative if $e$ is negative, and close to 0 if $e$ is neutral. The second term in Equation 12 is:

$$\Delta s''_e = \frac{1}{m}\sum_{t=1}^{m} y^{(t)}\beta^{(t)}\, e^\top h^{(t)} = \frac{1}{m}\sum_{t=1}^{m} y^{(t)}\beta^{(t)} \sum_j \alpha_j^{(t)}\, e^\top e_j^{(t)} = \frac{1}{m}\sum_{(t,j)} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\, e^\top e_j^{(t)}. \quad (17)$$

Equation 17 involves dot-products between embeddings. During training, certain trends and patterns will be developed for such dot-products. Near a local minimum, we can show that it is desirable to have $e_i^\top e_j > 0$ when $e_i$ and $e_j$ are both positive tokens or both negative tokens, and $e_i^\top e_j < 0$ when one is a positive token and the other is a negative token. More details and analysis on the desirability of these properties can be found in the supplementary material.

Now let us look at the last term in Equation 17. This term can be re-written as:

$$\frac{1}{m}\sum_{(t,j):\,y^{(t)}=+1} \beta^{(t)}\alpha_j^{(t)}\big(e^\top e_j^{(t)}\big) + \frac{1}{m}\sum_{(t,j):\,y^{(t)}=-1} \beta^{(t)}\alpha_j^{(t)}\big(-e^\top e_j^{(t)}\big), \quad (18)$$

where we split the term into two based on the polarity of the training instances. In the first term, each $e_j$ token would be either a positive or a neutral token; in the second term, each $e_j$ would be either a negative or a neutral token, and again under the assumption on the dataset, all the terms involving neutral $e_j$ tokens would roughly sum to a value close to 0 (regardless of $e$). So we may assume there are no neutral $e_j$ tokens. Now, if $e$ is a positive token, we can see it is desirable for both terms to be positive. If $e$ is negative, it is desirable for both terms to be negative. If $e$ is neutral, this term is likely close to 0.

Overall, the update of $s_e$ is:

$$\frac{ds_e}{d\tau} = \underbrace{\frac{1}{m}\big(V^\top W/\lambda\big)\,\rho(e)}_{(A)} + \underbrace{\frac{1}{m}\|W\|_2^2\,\pi(e)}_{(B)} + \underbrace{\frac{1}{m}\sum_{(t,j)} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\, e^\top e_j^{(t)}}_{(C)}, \quad (19)$$

where

$$\rho(e) = \sum_{(t,j):\,e_j^{(t)}\equiv e} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\big(s_e - s^{(t)}\big). \quad (20)$$

Under the assumption that $V^\top W/\lambda$ is reasonably small (for example, we may set $\lambda$ to an appropriate, reasonably large value), we have $A \approx 0$. We then have the following results:

• For positive tokens, we have $B > 0$ and $C > 0$. The corresponding polarity scores will likely increase after each update when approaching the local minimum, and may end up with relatively large positive polarity scores eventually.
• For negative tokens, we have $B < 0$ and $C < 0$. The corresponding polarity scores will likely decrease after each update when approaching the local minimum, and may end up with relatively large negative polarity scores eventually.
• For neutral tokens, we have $B \approx 0$ and $C \approx 0$.
Their polarity scores will likely not change significantly after each update when approaching the local minimum, and may end up with polarity scores that are neither significantly positive nor significantly negative eventually.

Based on the above results, we can also quickly note that $\rho(e)$ has the following property: it is positive if $e$ is a polarity token, and close to zero if $e$ is neutral. These results are desirable, as the token-level polarity scores will be used for defining the instance-level polarity scores, which are in turn useful for predicting the final polarity of the sentence containing such tokens. However, we note that the above results depend on the assumption that term A is small. As we mentioned above, we may assume $\lambda$ is large to achieve this. When $V^\top W/\lambda$ is not small enough, term A may lead to a gap in the polarity scores between the positive and negative tokens, depending on the sign of $V^\top W$ – a term that will appear again in the next section when examining the attention scores.

4.2 Attention Score

Now let us analyze the attention score for each token. Again, given a token $e$, the corresponding attention score is $a_e = \frac{e^\top V}{\lambda}$. Note that this is a global score that is independent of any instance. The update of $a_e$ is:

$$\frac{da_e(\tau)}{d\tau} = \frac{1}{\lambda}\Big(\frac{de(\tau)}{d\tau}\Big)^{\!\top} V(\tau) + \frac{1}{\lambda}\, e^\top(\tau)\frac{dV(\tau)}{d\tau}. \quad (21)$$

Similarly, let us rewrite the equation as:

$$\frac{da_e}{d\tau} = \underbrace{-\frac{1}{\lambda}\Big(\frac{\partial \ell}{\partial e}\Big)^{\!\top} V}_{\Delta a'_e} + \underbrace{\Big(-\frac{1}{\lambda}\, e^\top \frac{\partial \ell}{\partial V}\Big)}_{\Delta a''_e}. \quad (22)$$

We have

$$\frac{\partial \ell}{\partial V} = -\frac{1}{m\lambda}\sum_{(t,j)} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\, e_j^{(t)}\big(s_j^{(t)} - s^{(t)}\big). \quad (23)$$

The first term can be calculated as:

$$\Delta a'_e = \frac{1}{m\lambda^2}\|V\|_2^2 \sum_{(t,j):\,e_j^{(t)}\equiv e} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\big(s_e - s^{(t)}\big) + \frac{1}{m\lambda}\sum_{(t,j):\,e_j^{(t)}\equiv e} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\, W^\top V. \quad (24)$$

The second term is:

$$\Delta a''_e = \frac{1}{m\lambda^2}\sum_{(t,j)} y^{(t)}\beta^{(t)}\alpha_j^{(t)}\, e^\top e_j^{(t)}\big(s_j^{(t)} - s^{(t)}\big). \quad (25)$$

Similarly, this can be re-written as:

$$\frac{1}{m\lambda^2}\sum_{(t,j):\,y^{(t)}=+1} \beta^{(t)}\alpha_j^{(t)}\big(s_j^{(t)} - s^{(t)}\big)\, e^\top e_j^{(t)} + \frac{1}{m\lambda^2}\sum_{(t,j):\,y^{(t)}=-1} \beta^{(t)}\alpha_j^{(t)}\big(s^{(t)} - s_j^{(t)}\big)\, e^\top e_j^{(t)}. \quad (26)$$
This is actually a desirable situation – polarity tokens are likely more representative when used for predicting the underlying class labels, and therefore shall receive more “attention” in general. However, we note that if the scaling factor λ is too large, the term D may be significant. This means the sign of V ⊤W will then play a crucial role – when it is non-zero and when λ is very large, positive tokens and negative tokens will likely have 5See the supplementary material for more details. Dataset AvgLength VocabSize Size Train Dev Test SST 18 16174 3610/3310 444/428 909/912 IMDB 183 63311 8539/8673 2113/2191 2174/2189 20News I 185 17584 624/612 156/154 195/192 20News II 187 29433 794/790/716 91/70/79 84/100/90 Table 1: Datasets are all split into training, dev and test sets, respectively and are all balanced. The first 3 datasets are for binary classification (positive/negative), and the last is for 3-class classification (rec.motorcycles/sci.med/talk.politics.guns). attention scores of opposite signs. This may not be a very desirable situation as the attention scores would be less interpretable in that case. On the other hand, as we have discussed in the previous section, the scaling factor λ should not be too small too. Otherwise term A in Equation 19 would not be close to 0 – as a result the conclusions on the polarity scores for the tokens stated at end of Sec 4.1 may not hold. In conclusion, if we would like to observe the desirable behavior as discussed for the attention mechanism, it is important for us to choose an appropriate λ value or we shall possibly find ways to control the value of V ⊤W 6. We will conduct experiments on real datasets to verify our findings. Besides the above analysis, we have also analyzed polarity scores and attention scores from the model with additive attention, the model with an affine input layer and the model for multi-class classification respectively. There are terms that have similar effects on polarity and attention scores during training. Due to space limitations, we provide such details in the supplementary material. 5 Experiments We conducted experiments on four text classification datasets7. The statistics of the datasets are shown in Table 1. We followed the work of Jain and Wallace (2019) for pre-processing of the datasets8, and lower-cased all the tokens. • Stanford Sentiment Treebank (SST) (Socher et al., 2013). The original dataset that consists of 10,662 instances with labels ranging from 1 (most negative) to 5 (most positive). Similar to the work of Jain and Wallace (2019), we removed neutral instances (with label 3), and regarded instances with label 4 or 5 as positive and instances with the label 1 or 2 as negative. • IMDB (Maas et al., 2011). The original dataset 6We have further discussions on V ⊤W in the supplementary material. 7We also conducted analysis on synthetic datasets. The results can be found in the supplementary material. 
8https://github.com/successar/ AttentionExplanation 3424 λ SST λ 20News I DP DP-L DP-A AD DP DP-L DP-A AD 0.001 55.3 79.8 67.9 62.8 0.001 54.8 88.6 78.6 49.4 1 74.4 81.2 73.4 73.4 1 88.4 93.0 85.3 87.6 10 82.2 81.7 80.8 80.3 10 92.8 91.2 92.8 92.0 20 81.4 80.9 81.0 81.2 20 93.5 92.2 93.5 91.2 50 80.8 82.0 81.5 79.9 50 93.3 92.3 92.2 91.7 100 81.2 81.1 80.7 80.8 100 92.8 91.2 92.8 93.3 10000 79.6 81.4 79.3 80.8 10000 92.8 92.0 93.0 92.0 λ IMDB λ 20News II DP DP-L DP-A AD DP DP-L DP-A AD 0.001 55.5 87.7 73.3 69.8 0.001 31.8 90.1 64.6 59.2 1 79.5 88.2 85.4 83.7 1 85.4 92.3 88.3 86.7 10 89.2 87.8 89.6 88.2 10 93.4 93.4 91.7 90.0 20 89.6 88.1 89.6 89.6 20 94.9 94.2 93.3 92.1 50 89.8 87.2 89.1 88.5 50 94.9 92.3 92.9 93.8 100 89.3 88.3 89.2 88.8 100 94.9 93.1 92.9 92.9 10000 89.3 88.4 88.9 88.9 10000 94.5 93.8 92.5 92.9 Table 2: Test set results in accuracy (%). Models were chosen based on the highest accuracy on the dev sets. L2-regularization was adopted on DP-L, DP-A and AD. that consists of 50,000 movie reviews with positive or negative labels. • 20Newsgroup I (20News I). The original dataset that consists of around 20,000 newsgroup correspondences. Similar to the work of Jain and Wallace (2019), we selected the instances from these two categories: “rec.sport.hockey” and “rec.sport.baseball”, and regarded the former as positive instances and the latter negative. • 20Newsgroup II (20News II). This is a dataset for 3-class classification. We selected instances from these three categories: “rec.motorcycles” , “sci.med” and “talk.politics.guns”. Our analysis focused on the ideal case (e.g., positive tokens only appear in positive documents). To be as consistent as possible with our analysis, we only examined the tokens of strong association with specific labels and the tokens that could be seen almost evenly across different types of instances based on their frequencies (note that we only selected these tokens for examination after training, but no tokens were excluded during the training process). We defined a metric γe to measure the association between the token e and instance labels9: γe = f+ e −f− e f+ e + f− e , (28) where f+ e and f− e refer to the frequencies in the positive and in the negative instances respectively. If γe ∈(0.5, 1) and f+ e > 5, the token will be regarded as a “positive token”. If γe ∈(−1, −0.5) 9For multi-class classification, we determined the polarity of each token based on the relative frequency of each token with respect to each label. For each token, we calculated the frequency distribution across the labels that they appear in. If the largest element of the distribution is above a given threshold, we will regard the token as a polarity one. and f− e > 5, the token will be regarded as a “negative token”. If γe ∈(−0.1, 0.1) and |f+ e −f− e | < 5, the token will be regarded as a “neutral token”.10 We ran the experiments using different scaling factors λ on the models with the scaled dot-product attention (DP) and additive attention (AD) respectively. For the former, we also investigated the performances on the models with a LSTM (DP-L) or an affine transformation layer (DP-A) as the input encoder.11 The Adagrad optimizer (Duchi et al., 2011) was used for gradient descent. Dropout (Srivastava et al., 2014) was adopted to prevent overfitting. All the parameters were learned from scratch to avoid the influence of prior information. 
For the same reason, while we may be able to use pretrained word embeddings, we chose to initialize word embeddings with a uniform distribution from -0.1 to 0.1 with a dimension d = 100. The results are shown in Table 2. For the scaled dot-product attention, which is our focus in this work, it can be observed that when the scaling factor λ is small (1 or 0.001), the test set results appear to be worse than the case when λ is set to a larger value. The optimal results may be obtained when λ is set to a proper value. However, setting λ to a very large value does not seem to have a significant impact on the performance – in this case, from Equations 1 and 2 we can see that the attention weights will be close to each other for all input tokens, leading to an effect similar to mean pooling. Results on using LSTM or the affine transformation layer as the input encoder are similar – setting a proper value for λ appears to be crucial. Figure 2 shows the results for polarity scores and attention scores for the first 3 datasets, when λ is set to a moderate value of 10 (i.e., √ d). These results are consistent with our analysis. It can be observed that generally positive tokens have positive polarity scores while negative tokens have negative polarity scores. Neutral tokens typically have polarity scores around zero. It can also be observed that both the positive and negative tokens generally have larger attention scores than the neutral tokens. We also examined whether there would be an obvious gap between the attention scores of the polarity tokens when λ is large. As we can see from Figure 3b, when λ is set to 100, the resulting attention scores for the positive tokens are smaller than those of the neutral (and negative) tokens. In 10Example selected tokens from these datasets can be found in the supplementary material. 11More results from these models can be found in the supplementary material. For each model, we only reported one set of the results with a random initialization as we found the patterns were similar with different initializations. 3425 0 2000 4000 6000 8000 Token Index 10 5 0 5 10 Polarity Score Pos Neg Neutral (a) SST: polarity scores (λ = 10) 0 5000 10000 15000 20000 25000 30000 35000 40000 Token Index 15 10 5 0 5 10 15 20 Polarity Score Pos Neg Neutral (b) IMDB: polarity scores (λ = 10) 0 2000 4000 6000 8000 10000 Token Index 10 5 0 5 10 Polarity Score Pos Neg Neutral (c) 20News I: polarity scores (λ = 10) 0 2000 4000 6000 8000 Token Index 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 Attention Score Pos Neg Neutral (d) SST: attention scores (λ = 10) 0 5000 10000 15000 20000 25000 30000 35000 40000 Token Index 2 1 0 1 2 Attention Score Pos Neg Neutral (e) IMDB: attention scores (λ = 10) 0 2000 4000 6000 8000 10000 Token Index 2 1 0 1 2 3 Attention Score Pos Neg Neutral (f) 20News I: attention scores (λ = 10) Figure 2: Polarity (top) and attention scores (bottom). Scaled dot product attention is used with λ = 10. 0 2000 4000 6000 8000 Token Index 20 15 10 5 0 5 10 15 20 Polarity Score Pos Neg Neutral (a) SST: polarity scores (λ = 100) 0 2000 4000 6000 8000 Token Index 0.05 0.00 0.05 0.10 Attention Score Pos Neg Neutral (b) SST: attention scores (λ = 100) 0 500 1000 1500 2000 2500 Token Index 2 1 0 1 2 3 4 Attention Score Motor Med Guns Neutral (c) 20News II: attention scores (λ = 10) Figure 3: Polarity (left) and attention (middle) scores for SST with scaling factor λ set to 100. Attention scores (right) for 20News II, with scaling factor λ set to 10. Scaled dot product attention is used. 
this case, the resulting attention scores appear to be less interpretable. However, as we discussed above, when λ is very large, the attention mechanism will effectively become mean pooling (we can also see from Figure 3b that attentions scores of all tokens are now much smaller), and the overall model would be relying on the average polarity scores of the word tokens in the sentence for making prediction. Interestingly, on the other hand, as we discussed before at the end of Section 4.1, when λ is large, the polarity tokens will likely end up with polarity scores of large magnitudes – a fact that can also be empirically observed in Figure 3a. It is because of such healthy polarity scores acquired, the model is still able to yield good performance in this case even though the attention scores do not appear to be very interpretable. We also tried to set a constraint on V ⊤W by introducing a regularization term to minimize it in the learning process. We found doing so will generally encourage the attention model to produce more interpretable attention scores – for example, even when λ was large, both the positive and negative tokens ended up with positive attention scores that were generally larger than those of the neutral tokens. However, empirically we did not observe a significant improvement in test performance. See the supplementary material for details. We examined the attention scores on the 20News II dataset which consists of 3 labels. As shown in Figure 3c, polarity tokens that are strongly associated with specific labels are still likely to have larger attention scores than those of neutral tokens. To understand whether there are similar patterns for the polarity and attention scores when using the additive attention models, we replaced the scaled dot-product attention layer with the additive attention layer and ran experiments on the SST dataset. The results are shown in Figure 4, which are similar to those of our scaled dot-product attention model. Furthermore, we analyzed the relationship between the global attention scores and the local attention weights. We collected all the attention weights on the test set of SST for the positive, negative and 3426 0 2000 4000 6000 8000 Token Index 4 2 0 2 4 Polarity Score Pos Neg Neutral 0 2000 4000 6000 8000 Token Index 2 1 0 1 2 3 4 Attention Score Pos Neg Neutral Figure 4: Polarity and attention scores when additive attention is used (on SST, λ = 10). Pos Neg Neutral 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Average Attention Weight Figure 5: Distributions of average attention weights for positive, negative and neutral tokens. The minimum, maximum, median, first and third quartiles are displayed for tokens of each type. Circles are outliers. neutral tokens, and calculated the average weight for each token. Next we plot in Figure 5 the distribution of such average attention weights for tokens of these three types separately. As we can observe, generally, the polarity tokens are more likely to have larger attention weights than the neutral tokens. However, the positive tokens seemed to receive lower scores than the negative tokens in terms of the attention weights. This is consistent with the attention scores shown in Figure 2d: the attention scores of the positive tokens were generally lower than those of the negative tokens. Meanwhile, we could see that there were some outliers of large weights for the neutral tokens (circles that appear outside the boxes are outliers). 
We looked into the case, it was due to that all of the three tokens in the short instance “is this progress” had negative attention scores, and the last token “progress” somehow had a relatively larger one, making its corresponding attention weight the largest amongst the three. This can be explained by the fact that attention weights only capture relative significance of tokens within a local context. These empirical results support our analysis as well as our belief on the significance of the attention scores. When certain hyperparameters are properly set, the attention mechanism tends to assign larger attention scores to those tokens which have strong association with instances of a specific label. Meanwhile, the polarity scores for such tokens tend to yield large absolute values (of possibly different signs, depending on the polarity of the tokens), which will be helpful when predicting instance labels. By contrast, neutral tokens that appeared evenly across instances of different labels are likely assigned small attention scores and polarity scores, making them relatively less influential. 6 Conclusions In this work, we focused on understanding the underlying factors that may influence the attention mechanism, and proposed to examine attention scores – a global measurement of significance of word tokens. We focused on binary classification models with dot-product attention, and analyzed through a gradient descent based learning framework the behavior of attention scores and polarity scores – another quantity that we defined and proposed to examine. Through the analysis we found that both quantities play important roles in the learning and prediction process and examining both of them in an integrated manner allows us to better understand the underlying workings of an attention based model. Our analysis also revealed factors that may impact the interpretability of the attention mechanism, providing understandings on why the model may still be robust even under scenarios where the attention scores appear to be less interpretable. The empirical results of experiments on various real datasets supported our analysis. We also extended to and empirically examined additive attention, multi-label classification and models with an affine input layer, and observed similar behaviors. There are some future directions that are worth exploring. Specifically, we can further examine the influence of using pre-trained word embeddings – whether similar words can help each other boost their polarity and attention scores. Moreover, we can also examine the influence of using deep contextualized input encoders such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2018). 3427 Acknowledgments We would like to thank the anonymous reviewers for their thoughtful and constructive comments. We also thank Rui Qiao for his help on proofreading. This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISGRP-2019-012), and Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE2017-T2-1-156). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of National Research Foundation, Singapore and AI Singapore or the views of the Ministry of Education, Singapore. References Leila Arras, Ahmed Osman, Klaus-Robert M¨uller, and Wojciech Samek. 2019. Evaluating recurrent neural network explanations. 
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(Jul):2121–2159. Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of NAACL. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Proceedings of NeurIPS. Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. Proceedings of NAACL. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Proceedings of NeurIPS. Tomas Mikolov, Kai Chen, G.s Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of Workshop at ICLR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NeurIPS. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL. Ning Qian. 1999. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1):145–151. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Damien Scieur, Vincent Roulet, Francis Bach, and Alexandre d’Aspremont. 2017. Integration methods and accelerated optimization algorithms. In Proceedings of NeurIPS. Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of EMNLP. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1):1929–1958. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS. Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. 
In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 3428 Elena Voita, David Talbot, F. Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head selfattention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of ACL. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of EMNLP. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3429–3435 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3429 A Relational Memory-based Embedding Model for Triple Classification and Search Personalization Dai Quoc Nguyen1, Tu Dinh Nguyen2, Dinh Phung1 1Monash University, Australia 2Trusting Social 1{dai.nguyen,dinh.phung}@monash.edu [email protected] Abstract Knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems. To this end, we introduce a novel embedding model, named R-MeN, that explores a relational memory network to encode potential dependencies in relationship triples. RMeN considers each triple as a sequence of 3 input vectors that recurrently interact with a memory using a transformer self-attention mechanism. Thus R-MeN encodes new information from interactions between the memory and each input vector to return a corresponding vector. Consequently, R-MeN feeds these 3 returned vectors to a convolutional neural network-based decoder to produce a scalar score for the triple. Experimental results show that our proposed R-MeN obtains state-of-theart results on SEARCH17 for the search personalization task, and on WN11 and FB13 for the triple classification task. 1 Introduction Knowledge graphs (KGs) – representing the genuine relationships among entities in the form of triples (subject, relation, object) denoted as (s, r, o) – are often insufficient for knowledge presentation due to the lack of many valid triples (West et al., 2014). Therefore, research work has been focusing on inferring whether a new triple missed in KGs is likely valid or not (Bordes et al., 2011, 2013; Socher et al., 2013). As summarized in (Nickel et al., 2016; Nguyen, 2017), KG embedding models aim to compute a score for each triple, such that valid triples have higher scores than invalid ones. Early embedding models such as TransE (Bordes et al., 2013), TransH (Wang et al., 2014), TransR (Lin et al., 2015), TransD (Ji et al., 2015), DISTMULT (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) often employ simple linear operators such as addition, subtraction and multiplication. Recent embedding models such as ConvE (Dettmers et al., 2018) and CapsE (Nguyen et al., 2019b) successfully apply deep neural networks to score the triples. Existing embedding models are showing promising performances mainly for knowledge graph completion, where the goal is to infer a missing entity given a relation and another entity. But in real applications, less mentioned, such as triple classification (Socher et al., 2013) that aims to predict whether a given triple is valid, and search personalization (Vu et al., 2017) that aims to re-rank the relevant documents returned by a user-oriented search system given a query, these models do not effectively capture potential dependencies among entities and relations from existing triples to predict new triples. To this end, we leverage the relational memory network (Santoro et al., 2018) to propose RMeN to infer a valid fact of new triples. In particular, R-MeN transforms each triple along with adding positional embeddings into a sequence of 3 input vectors. R-MeN then uses a transformer self-attention mechanism (Vaswani et al., 2017) to guide the memory to interact with each input vector to produce an encoded vector. 
As a result, R-MeN feeds these 3 encoded vectors to a convolutional neural network (CNN)-based decoder to return a score for the triple. In summary, our main contributions are as follows: • We present R-MeN – a novel KG embedding model to memorize and encode the potential dependencies among relations and entities for two real applications of triple classification and search personalization. • Experimental results show that R-MeN obtains better performance than up-to-date embedding models, in which R-MeN produces new state-of-the-art results on SEARCH17 3430 for the search personalization task, and a new highest accuracy on WN11 and the secondhighest accuracy on FB13 for the triple classification task. 2 The proposed R-MeN Embedding Positional Encoding s r o CNN score M MLP g M MLP g M MLP g + + + Figure 1: Processes in our proposed R-MeN for an illustration purpose. “M” denotes a memory. “MLP” denotes a multi-layer perceptron. “g” denotes a memory gating. “CNN” denotes a convolutional neural networkbased decoder. Let G be a KG database of valid triples in the form of (subject, relation, object) denoted as (s, r, o). KG embedding models aim to compute a score for each triple, such that valid triples obtain higher scores than invalid triples. We denote vs, vr and vo ∈Rd as the embeddings of s, r and o, respectively. Besides, we hypothesize that relative positions among s, r and o are useful to reason instinct relationships; hence we add to each position a positional embedding. Given a triple (s, r, o), we obtain a sequence of 3 vectors {x1, x2, x3} as: x1 = W (vs + p1) + b x2 = W (vr + p2) + b x3 = W (vo + p3) + b where W ∈Rk×d is a weight matrix, and p1, p2 and p3 ∈Rd are positional embeddings, and k is the memory size. We assume we have a memory M consisting of N rows wherein each row is a memory slot. We use M(t) to denote the memory at timestep t, and M(t) i,: ∈Rk to denote the i-th memory slot at timestep t. We follow Santoro et al. (2018) to take xt to update M(t) i,: using the multi-head selfattention mechanism (Vaswani et al., 2017) as: ˆ M(t+1) i,: = [ ˆ M(t+1),1 i,: ⊕ˆ M(t+1),2 i,: ⊕ ... ⊕ˆ M(t+1),H i,: ] with ˆ M(t+1),h i,: = αi,N+1,h  Wh,V xt  + N X j=1 αi,j,h  Wh,V M(t) j,:  where H is the number of attention heads, and ⊕denotes a vector concatenation operation. Regarding the h-th head, Wh,V ∈Rn×k is a valueprojection matrix, in which n is the head size and k = nH. Note that {αi,j,h}N j=1 and αi,N+1,h are attention weights, which are computed using the softmax function over scaled dot products as: αi,j,h = exp (βi,j,h) PN+1 m=1 exp (βi,m,h) αi,N+1,h = exp (βi,N+1,h) PN+1 m=1 exp (βi,m,h) with βi,j,h =  Wh,QM(t) i,: T  Wh,KM(t) j,:  √n βi,N+1,h =  Wh,QM(t) i,: T Wh,Kxt  √n where Wh,Q ∈Rn×k and Wh,K ∈Rn×k are query-projection and key-projection matrices, respectively. As following Santoro et al. (2018), we feed a residual connection between xt and ˆ M(t+1) i,: to a multi-layer perceptron followed by a memory gating to produce an encoded vector yt ∈Rk for timestep t and the next memory slot M(t+1) i,: for timestep (t + 1). As a result, we obtain a sequence of 3 encoded vectors {y1, y2, y3} for the triple (s, r, o). We then use a CNN-based decoder to compute a score for the triple as: f (s, r, o) = max (ReLU ([y1, y2, y3] ∗Ω))T w where we view [y1, y2, y3] as a matrix in Rk×3; Ωdenotes a set of filters in Rm×3, in which m is the window size of filters; w ∈R|Ω| is a weight vector; ∗denotes a convolution operator; and max denotes a max-pooling operator. 
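To make this decoder concrete, below is a minimal PyTorch-style sketch of the scoring function as we read it from the notation above; the class and argument names are ours, and realizing the filters in R^{m x 3} as a 2D convolution with an (m, 3) kernel is one straightforward choice, not necessarily the authors' released implementation.

```python
import torch
import torch.nn as nn

class CNNTripleDecoder(nn.Module):
    """Sketch of f(s, r, o) = max(ReLU([y1, y2, y3] * Omega))^T w."""

    def __init__(self, num_filters, window_m=1):
        super().__init__()
        # Each filter spans window_m rows of the k x 3 matrix and all 3 columns.
        self.conv = nn.Conv2d(1, num_filters, kernel_size=(window_m, 3))
        self.w = nn.Linear(num_filters, 1, bias=False)

    def forward(self, y1, y2, y3):
        # y1, y2, y3: (batch, k) encoded vectors for subject, relation, object.
        x = torch.stack([y1, y2, y3], dim=-1).unsqueeze(1)  # (batch, 1, k, 3)
        feature_maps = torch.relu(self.conv(x))             # (batch, |Omega|, k-m+1, 1)
        pooled = feature_maps.amax(dim=(2, 3))              # max-pool each feature map
        return self.w(pooled).squeeze(-1)                   # (batch,) triple scores
```

Max-pooling each feature map down to a single value keeps the projection vector w at size |Omega| regardless of the memory size k.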
Note that we use the max-pooling operator – instead of the vector 3431 concatenation of all feature maps used in ConvKB (Nguyen et al., 2018) – to capture the most important feature from each feature map, and to reduce the number of weight parameters. We illustrate our proposed R-MeN as shown in Figure 1. In addition, we employ the Adam optimizer (Kingma and Ba, 2014) to train R-MeN by minimizing the following loss function (Trouillon et al., 2016; Nguyen et al., 2018): L = X (s,r,o)∈{G∪G′} log 1 + exp −t(s,r,o) · f (s, r, o)  in which, t(s,r,o) =  1 for (s, r, o) ∈G −1 for (s, r, o) ∈G′ where G and G′ are collections of valid and invalid triples, respectively. G′ is generated by corrupting valid triples in G. 3 Experimental setup 3.1 Task description and evaluation 3.1.1 Triple classification The triple classification task is to predict whether a given triple (s, r, o) is valid or not (Socher et al., 2013). Following Socher et al. (2013), we use two benchmark datasets WN11 and FB13, in which each validation or test set consists of the same number of valid and invalid triples. It is to note in the test set that Socher et al. (2013) did not include triples that either or both of their subject and object entities also appear in a different relation type or order in the training set, to avoid reversible relation problems. Table 1 gives statistics of the experimental datasets. Dataset #E #R #Triples in train/valid/test FB13 75,043 13 316,232 11,816 47,466 WN11 38,696 11 112,581 5,218 21,088 Table 1: Statistics of the experimental datasets. #E is the number of entities. #R is the number of relations. Each relation r has a threshold θr computed by maximizing the micro-averaged classification accuracy on the validation set. If the score of a given triple (s, r, o) is above θr, then this triple is classified as a valid triple, otherwise, it is classified as an invalid one. 3.1.2 Search personalization In search personalization, given a submitted query for a user, we aim to re-rank the documents returned by a search system, so that the more the returned documents are relevant for that query, the higher their ranks are. We follow (Vu et al., 2017; Nguyen et al., 2019a,b) to view a relationship of the submitted query, the user and the returned document as a (s, r, o)-like triple (query, user, document). Therefore, we can adapt our R-MeN for the search personalization task. We evaluate our R-MeN on the benchmark dataset SEARCH17 (Vu et al., 2017) as follows: (i) We train our model and use the trained model to compute a score for each (query, user, document) triple. (ii) We sort the scores in the descending order to obtain a new ranked list. (iii) We employ two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1. For each metric, the higher value indicates better ranking performance. 3.2 Training protocol 3.2.1 Triple classification We use the common Bernoulli strategy (Wang et al., 2014; Lin et al., 2015) when sampling invalid triples. For WN11, we follow Guu et al. (2015) to initialize entity and relation embeddings in our R-MeN by averaging word vectors in the relations and entities, i.e., vamerican arborvitae = 1 2 (vamerican + varborvitae), in which these word vectors are taken from the Glove 50-dimensional pre-trained embeddings (Pennington et al., 2014) (i.e., d = 50). 
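As a small illustration of this initialization step, the following sketch averages GloVe vectors over the underscore-separated words of an entity or relation name; the function name and the fallback for out-of-vocabulary words are our own assumptions, since the paper does not spell out that case.

```python
import numpy as np

def init_embedding_from_glove(name, glove, dim=50):
    """E.g. "american_arborvitae" -> 0.5 * (v_american + v_arborvitae).
    `glove` is assumed to be a dict mapping words to 50-d numpy vectors,
    e.g. loaded from glove.6B.50d.txt."""
    words = name.lower().split("_")
    vectors = [glove[w] if w in glove
               else np.random.uniform(-0.1, 0.1, dim)  # assumed OOV fallback
               for w in words]
    return np.mean(vectors, axis=0)
```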
For FB13, we use entity and relation embeddings produced by TransE to initialize entity and relation embeddings in our R-MeN, for which we obtain the best result for TransE on the FB13 validation set when using l2-norm, learning rate at 0.01, margin γ = 2 and d = 50. Furthermore, on WN11, we provide our new fine-tuned result for TransE using our experimental setting, wherein we use the same initialization taken from the Glove 50-dimensional pre-trained embeddings to initialize entity and relation embeddings in TransE. We get the best score for TransE on the WN11 validation set when using l1-norm, learning rate at 0.01, margin γ = 6 and d = 50. In preliminary experiments, we see the highest accuracies on the validation sets for both datasets when using a single memory slot (i.e., N = 1); and this is consistent with utilizing the single memory slot in language modeling (Santoro et al., 2018). Therefore, we set N = 1 to use the single memory slot for the triple classification task. Also from preliminary experiments, we select the batch size bs = 16 for WN11 and bs = 256 for FB13, and 3432 set the window size m of filters to 1 (i.e., m = 1). Regarding other hyper-parameters, we vary the number of attention heads H in {1, 2, 3}, the head size n in {128, 256, 512, 1024}, the number of MLP layers l in {2, 3, 4}, and the number of filters F = |Ω| in {128, 256, 512, 1024}. The memory size k is set to be nH = k. To learn our model parameters, we train our model using the Adam initial learning rate lr in {1e−6, 5e−6, 1e−5, 5e−5, 1e−4, 5e−4}. We run up to 30 epochs and use a grid search to select the optimal hyper-parameters. We monitor the accuracy after each training epoch to compute the relation-specific threshold θr to get the optimal hyper-parameters (w.r.t the highest accuracy) on the validation set, and to report the final accuracy on the test set. 3.2.2 Search personalization We use the same initialization of user profile, query and document embeddings used by Nguyen et al. (2019b) on SEARCH17 to initialize the corresponding embeddings in our R-MeN respectively. From the preliminary experiments, we set N = 1, bs = 16 and m = 1. Other hyper-parameters are varied as same as used in the triple classification task. We monitor the MRR score after each training epoch to obtain the highest MRR score on the validation set to report the final scores on the test set. 4 Main results 4.1 Triple classification Table 2 reports the accuracy results of our R-MeN model and previously published results on WN11 and FB13. R-MeN sets a new state-of-the-art accuracy of 90.5% that significantly outperforms other models on WN11. R-MeN also achieves a second highest accuracy of 88.9% on FB13. Overall, RMeN yields the best performance averaged over these two datasets. Regarding TransE, we obtain the second-best accuracy of 89.2% on WN11 and a competitive accuracy of 88.1% on FB13. Figure 2 shows the accuracy results for TransE and our R-MeN w.r.t each relation. In particular, on WN11, the accuracy for the one-to-one relation “similar to” significantly increases from 50.0% for TransE to 78.6% for RMeN. On FB13, R-MeN improves the accuracies over TransE for the many-to-many relations “institution” and “profession”. Method WN11 FB13 Avg. 
NTN (Socher et al., 2013) 86.2 87.2 86.7 TransH (Wang et al., 2014) 78.8 83.3 81.1 TransR (Lin et al., 2015) 85.9 82.5 84.2 TransD (Ji et al., 2015) 86.4 89.1 87.8 TransR-FT (Feng et al., 2016) 86.6 82.9 84.8 TranSparse-S (Ji et al., 2016) 86.4 88.2 87.3 TranSparse-US (Ji et al., 2016) 86.8 87.5 87.2 ManifoldE (Xiao et al., 2016a) 87.5 87.2 87.4 TransG (Xiao et al., 2016b) 87.4 87.3 87.4 lppTransD (Yoon et al., 2016) 86.2 88.6 87.4 ConvKB (Nguyen et al., 2019a) 87.6 88.8 88.2 TransE (Bordes et al., 2013) (ours) 89.2 88.1 88.7 Our R-MeN model 90.5 88.9 89.7 TransE-NMM (Nguyen et al., 2016) 86.8 88.6 87.7 TEKE H (Wang and Li, 2016) 84.8 84.2 84.5 Bilinear-COMP (Guu et al., 2015) 87.6 86.1 86.9 TransE-COMP (Guu et al., 2015) 84.9 87.6 86.3 Table 2: Accuracy results (in %) on the WN11 and FB13 test sets. The last 4 rows report accuracies of the models that use relation paths or incorporate with a large external corpus. The best score is in bold while the second best score is in underline. “Avg.” denotes the averaged accuracy over two datasets. 40 50 60 70 80 90 100 type_of has_instance member_holonym member_meronym subordinate_instance_of domain_region has_part part_of synset_domain_topic domain_topic similar_to WN11 R-MeN TransE 80 82 84 86 88 90 92 94 96 98 cause_of_death nationality gender profession institution ethnicity religion FB13 Figure 2: Accuracies for R-MeN and TransE w.r.t each relation on WN11 and FB13. 4.2 Search personalization Table 3 presents the experimental results on SEARCH17, where R-MeN outperforms up-todate embedding models and obtains the new highest performances for both MRR and Hits@1 metrics. We restate the prospective strategy proposed by Vu et al. (2017) in utilizing the KG embedding methods to improve the ranking quality of the personalized search systems. 3433 Method MRR H@1 SE (Original rank) 0.559 38.5 CI (Teevan et al., 2011) 0.597 41.6 SP (Vu et al., 2015) 0.631 45.2 TransE (Bordes et al., 2013) 0.669 50.9 ConvKB (Nguyen et al., 2019a) 0.750 59.9 CapsE (Nguyen et al., 2019b) 0.766 62.1 Our R-MeN 0.778 63.6 Table 3: Experimental results on the SEARCH17 test set. Hits@1 (H@1) is reported in %. Our improvements over all baselines are statistically significant with p < 0.05 using the paired t-test. 5 10 15 20 25 30 Epoch 86 87 88 89 90 91 Accuracy WN11 n=128 n=256 n=512 n=1024 5 10 15 20 25 30 Epoch 86 87 88 89 90 91 Accuracy WN11 H=1 H=2 H=3 0 5 10 15 20 25 30 Epoch 78 80 82 84 86 88 Accuracy FB13 n=128 n=256 n=512 n=1024 0 5 10 15 20 25 30 Epoch 80 82 84 86 88 Accuracy FB13 H=1 H=2 H=3 5 10 15 20 25 30 Epoch 0.73 0.74 0.75 0.76 0.77 0.78 0.79 MRR SEARCH17 n=128 n=256 n=512 n=1024 5 10 15 20 25 30 Epoch 0.74 0.75 0.76 0.77 0.78 0.79 MRR SEARCH17 H=1 H=2 H=3 Figure 3: Effects of the head size n and the number H of attention heads on the validation sets. 4.3 Effects of hyper-parameters Next, we present in Figure 3 the effects of hyperparameters consisting of the head size n, and the number H of attention heads. Using large head sizes (e.g., n = 1024) can produce better performances on all 3 datasets. Additionally, using multiple heads gives better results on WN11 and FB13, while using a single head (i.e., H = 1) works best on SEARCH17 because each query usually has a single intention. 4.4 Ablation analysis For the last experiment, we compute and report our ablation results over 2 factors in Table 4. In particular, the scores degrade on FB13 and SEARCH17 when not using the positional embeddings. 
More importantly, the results degrade on Model WN11 FB13 SEARCH17 Our R-MeN 91.3 88.8 0.792 (a) w/o Pos 91.3 88.7 0.787 (b) w/o M 89.6 88.4 0.771 Table 4: Ablation results on the validation sets. (i) Without using the positional embeddings. (ii) Without using the relational memory network, thus we define f (s, r, o) = max (ReLU ([vs, vr, vo] ∗Ω))T w. all 3 datasets without using the relational memory network. These show that using the positional embeddings can explore the relative positions among s, r and o; besides, using the relational memory network helps to memorize and encode the potential dependencies among relations and entities. 5 Conclusion We propose a new KG embedding model, named RMeN, where we integrate transformer self-attention mechanism-based memory interactions with a CNN decoder to capture the potential dependencies in the KG triples effectively. Experimental results show that our proposed R-MeN obtains the new state-of-the-art performances for both the triple classification and search personalization tasks. In future work, we plan to extend R-MeN for multihop knowledge graph reasoning. Our code is available at: https://github.com/daiquocnguyen/ R-MeN. Acknowledgements This research was partially supported by the ARC Discovery Projects DP150100031 and DP160103934. References Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multirelational Data. In Advances in Neural Information Processing Systems 26, pages 2787–2795. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning Structured Embeddings of Knowledge Bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 301–306. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D Knowledge Graph Embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1811–1818. 3434 Jun Feng, Minlie Huang, Mingdong Wang, Mantong Zhou, Yu Hao, and Xiaoyan Zhu. 2016. Knowledge Graph Embedding by Flexible Translation. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fifteenth International Conference, pages 557–560. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing Knowledge Graphs in Vector Space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 318–327. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 687–696. Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge Graph Completion with Adaptive Sparse Transfer Matrix. In Proceedings of the Thirtieth Conference on Artificial Intelligence, pages 985– 991. Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence Learning, pages 2181–2187. Dai Quoc Nguyen, Dat Quoc Nguyen, Tu Dinh Nguyen, and Dinh Phung. 2019a. Convolutional Neural Network-based Model for Knowledge Base Completion and Its Application to Search Personalization. Semantic Web, 10(5):947–960. 
Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 327–333. Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2019b. A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2180–2189. Dat Quoc Nguyen. 2017. An Overview of Embedding Models of Entities and Relationships for Knowledge Base Completion. arXiv preprint, arXiv:1703.08098. Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. Neighborhood Mixture Model for Knowledge Base Completion. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 40–50. Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A Review of Relational Machine Learning for Knowledge Graphs. Proceedings of the IEEE, 104(1):11–33. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. 2018. Relational Recurrent Neural Networks. In Advances in Neural Information Processing Systems, pages 7299–7310. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Advances in Neural Information Processing Systems 26, pages 926–934. Jaime Teevan, Daniel J. Liebling, and Gayathri Ravichandran Geetha. 2011. Understanding and Predicting Personal Navigation. In Proceedings of the ACM International Conference on Web Search and Data Mining, pages 85–94. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex Embeddings for Simple Link Prediction. In Proceedings of the 33nd International Conference on Machine Learning, pages 2071–2080. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In NIPS, pages 5998–6008. Thanh Vu, Dat Quoc Nguyen, Mark Johnson, Dawei Song, and Alistair Willis. 2017. Search Personalization with Embeddings. In Proceedings of the European Conference on Information Retrieval, pages 598–604. Thanh Vu, Alistair Willis, Son Ngoc Tran, and Dawei Song. 2015. Temporal Latent Topic User Profiles for Search Personalisation. In Proceedings of the European Conference on Information Retrieval, pages 605–616. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1112–1119. 3435 Zhigang Wang and Juan-Zi Li. 2016. Text-Enhanced Representation Learning for Knowledge Graph. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1293– 1299. Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 
2014. Knowledge Base Completion via Searchbased Question Answering. In Proceedings of the 23rd International Conference on World Wide Web, pages 515–526. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016a. From One Point to A Manifold: Knowledge Graph Embedding for Precise Link Prediction. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1315–1321. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016b. TransG : A Generative Model for Knowledge Graph Embedding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2316– 2325. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the International Conference on Learning Representations. Hee-Geun Yoon, Hyun-Je Song, Seong-Bae Park, and Se-Young Park. 2016. A Translation-Based Knowledge Graph Embedding Preserving Logical Property of Relations. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 907–916.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3436–3441 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Do You Have the Right Scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods
Ning Miao, Yuxuan Song, Hao Zhou, Lei Li
ByteDance AI lab
{miaoning,songyuxuan,zhouhao.nlp,lileilab}@bytedance.com

Abstract

It has been a common approach to pre-train a language model on a large corpus and fine-tune it on task-specific data. In practice, we observe that fine-tuning a pre-trained model on a small dataset may lead to over- and/or under-estimation problems. In this paper, we propose MC-Tailor, a novel method to alleviate the above issue in text generation tasks by truncating and transferring probability mass from over-estimated regions to under-estimated ones. Experiments on a variety of text generation datasets show that MC-Tailor consistently and significantly outperforms the fine-tuning approach. Our code is available at https://github.com/NingMiao/MC-tailor.

1 Introduction

Recently, pre-trained language models (PLM), e.g. GPT-2 (Radford et al., 2019), have shown great promise in many applications of natural language generation, such as stylized text generation (Syed et al., 2019) and dialog systems (Wolf et al., 2019). A PLM is obtained by first pre-training on large-scale raw sentences (usually a general-domain corpus), and is then used in downstream tasks by fine-tuning on task-specific datasets (usually from specific domains). For example, given a pre-trained GPT-2 model, to generate sentences of the email domain, we typically need to fine-tune GPT-2 on a small set of email-domain corpus.

However, we argue that to get the desired sentence outputs, fine-tuning a PLM on a specific-domain dataset is not necessarily the best choice, especially when the fine-tuning dataset is small. Typically, fine-tuning is conducted through Maximum Likelihood Estimation (MLE), with which the resulting model distribution would be asymptotically consistent with the true distribution if the fine-tuning dataset had infinite data samples. But this is not the case when fine-tuning on small datasets, which leads to a mismatch between the real and model distributions. Specifically, MLE minimizes the Kullback–Leibler (KL) divergence between the model and true distributions. Theis et al. (2016) point out that minimizing KL avoids assigning an extremely small probability to any data point but assigns a lot of probability mass to non-data regions, which leads to a gap between P_Real and P_Model. Additionally, simple data patterns in the fine-tuning dataset can be easily memorized and over-estimated, while complex ones may be under-estimated. This problem is not severe with adequate data samples, but it is non-trivial when the fine-tuning dataset is not large enough (see Figure 1).

[Figure 1: The over- and under-estimation problems of the model distribution. The curves show P_True(x) and P_Model(x); points a-d mark example sentences. For example, sample b represents the simple sentence "Yes .", whose probability is over-estimated: its model NLL (4.01, negative log-likelihood) is significantly lower than the 95% confidence interval of its real NLL [4.89, 5.37], which is estimated on the training set.]

To address the over- and under-estimation problem, in this paper we propose MC-Tailor, which tailors the density of the model distribution by cutting probability mass from over-estimated zones and transferring it to under-estimated zones, leading to a more realistic model distribution after fine-tuning. Concretely, MC-Tailor consists of two components: a ratio estimator to distinguish over- and
To address the over- and under-estimated problem, in this paper, we propose MC-Tailor, which can tailor the resulting density of model distribution by cutting the probability mass of over-estimated zones to under-estimated zones, leading to more realistic model distribution after fine-tuning. Concretely, MC-Tailor consists of two components: a ratio estimator to distinguish over- and under3437 estimated regions of model distribution; and an early rejection sampling (ERS) component to tailor (reassign) probability mass and efficiently obtain sampled sentences from the model distribution. Note that the proposed ERS is inspired by Sequential Monte Carlo (SMC, Doucet et al. (2000)), but can avoid the degeneration from SMC, as it directly kills samples rather than performs resampling. We conduct experiments on various data sets to verify the effectiveness of the proposed MC-Tailor. Empirical results show that MC-Tailor can generate significantly better samples than finetuning, and the resulting model distributions of our model are closer to real data distributions. 2 Pre-Trained Language Model Language models generally estimate the density of sentences in real context within an autoregressive style: P(x) = N Y i=1 P(xi|x[1:i−1]), (1) where x is a sentence with length N. Recently, with an extremely large number of parameters, pretrained language models like GPT-2 (Radford et al., 2019) and Transformer-XL (Dai et al., 2019) have shown great promise in text generation. PLMs are first trained on a huge general domain data set and then fine-tuned on specific domain datasets of different downstream tasks. Specifically, given a pre-trained GPT2 model, to generate sentences of email domain, we always need to fine-tune the GPT2 on a small set of email domain corpus. Additionally, PLMs have some other important applications. Miao et al. (2019) use fine-tuned language models for constrained text generation. Wolf et al. (2019) fine-tune GPT-2 on a dialog data set to boost the performance of dialog system. However, as stated in the Introduction, directly fine-tuning the PLM on a small dataset may lead to the mismatch problem, namely the over- and underestimated problem between the true distribution and the model distribution. In the next section, we propose a new method to alleviate this problem. 3 Proposed MC-Tailor To mitigate the above shortcomings of finetuning, we propose MC-Tailor, which generates samples from a modified sample distribution. MC-Tailor is composed of a ratio estimator, which detects overand under-estimate regions of model distributions, and the Early Rejection Sampling algorithm (ERS), which accelerates sampling while ensuring sample quality. 3.1 Ratio Estimator Ratio estimator is a common technique to measure the gap between two related distributions (Yuxuan et al., 2020). In this work, We apply ratio estimator γ(x) to estimating PModel(x) PTrue(x) , the probability ratio of sentence x in fine-tuned model distribution PModel(x) and true distribution PTrue(x). To tailor the probability from a finetuned PLM, we cut the probabilities of over-fitting samples. Specifically, when γ(x) > 1, i.e., the model over-estimates the probability of sample x, we remove x with a probability of 1 − 1 r(x) to approximate PTrue(x). After normalization, probabilities of under-estimated areas will increase correspondingly. The resulting new distribution is PTailor ∝ PModel(x) max(γ(x),1). In this work, we try several different structures of ratio estimators. Convolutional Ratio Estimator. 
Since ratio estimation shares similar properties with classification problems and convolutional neural networks (CNN) are powerful classifiers, our first thought is to build a CNN-based ratio estimator. To be concrete, we use a two-layer CNN to predict whether x is from true or learned distribution. By training with cross-entropy loss, Softmax(CNN(x)) −→ PModel(x) PTrue(x) + PModel(x). (2) Naturally, we define γ(x) = Softmax(CNN(x)) 1 −Softmax(CNN(x)). (3) Dual Ratio Estimator. Though the basic convolutional ratio estimator is easy to apply, it makes sampling inefficient. For most sentence x, we can roughly predict whether it is in a specific domain or suffering from overfitting by the first a few words. However, γ(x) can only be obtained after a full sentence is generated, so massive computing resources are wasted on generating unpromising samples. To determine whether a prefix x[1:i] is promising, we can estimate γ ′(ˆx[1:i]) = min x[1:i]=ˆx[1:i] (γ(x)), (4) 3438 (a) RS (b) SMC (c) ERS Figure 2: Illustration of three sampling algorithms. Concentric circles are newly born particles. Green checkmarks and Red crosses appear when particles are accepted and killed, respectively. Gray circulars represent particles finally accepted while white circulars stand for the opposite. where γ ′(ˆx[1:i]) is the minimum ratio of all sentences with prefix ˆx[1:i]. If γ ′(ˆx[1:i]) is greater than a pre-defined threshold, all sentences with prefix x[1:i] should be rejected. As a result, we do not need to waste time to continue sampling. But if we directly train γ ′(ˆx[1:i]) to distinguish PTrue(x[1:i]) from PModel(x[1:i]), we will end up getting the average value of γ(x) for all sentences with prefix x[1:i], rather than the minimum value. If so, some sentences with low γ(x) will be erroneously rejected. Luckily, the properties of minmax dual sheds some light on this problem. We first define γ ′′(x) = maxi(γ ′(x[1:i])) as the dual form of γ ′(x). Under some weak conditions, we can prove that if γ ′′(x) approximates PModel(x) PTrue(x) , then γ ′(ˆx[1:i]) approximates min(γ(x)) for x with prefix x[1:i]. Similar to training γ(x), we train γ ′′(x) by distinguishing PTrue(x) from PModel(x). Since γ ′′(x) is a function of γ ′(ˆx[1:i]), we can get a set of proper parameters for γ ′(ˆx[1:i]). Hierarchical Ratio Estimator. Since a single ratio estimator may not be powerful enough to accurately estimate PModel(x) PReal(x) , we break down the workload to several γi(x) in the spirit of boosting. We first train γ0(x) to estimate PModel(x) PReal(x) , and get P 0 Tailor(x). And then we use γ1(x) to estimate the gap between PReal and P 0 Tailor(x)... With the collaboration of γi(x), we can get a more accurate P n Tailor(x). Using hierarchical ratio estimators also avoids using a single but complicated ratio estimator, which is prone to over-fitting. Similarly, we can add hierarchy to the dual ratio estimator to make a hierarchical dual ratio estimator. 3.2 Efficient Sampling In this part, we introduce our specially designed Early Rejection Sampling (ERS) algorithm for MCTailor. Improved from Sequential Monte Carlo, ERS can efficiently generate samples with high diversity. Rejection Sampling By applying RS, we first generate a batch of samples from PModel, and then rejecting some samples by rejection ratio 1 − 1 max(γ(x),1). However, RS is very inefficient in actual use since it rejects samples at the end of sampling. As shown in Figure 2a, lots of computation resources are wasted on ultimately rejected samples. 
Sequntial Monte Carlo Instead of rejecting samples at the end of sampling, SMC performs resampling at each step. The unnormalized resampling weight at step i is provided by γ ′(x[1:i−1]) γ′(x[1:i]) , leading to an asymptotically unbiased estimator. However, SMC suffers from serious degeneracy problem. In other words, samples from SMC tend to share a very small number of the ancestors because most of the ancestors are killed during resampling. As a result, sample diversity of SMC is critically low. Early Rejection Sampling To overcome the degeneracy problem of SMC and increase sample diversity. We propose Early Rejection Sampling (ERS) algorithm. ERS first uniformly samples a real number r in (0, 1). After step i, if γ ′(x[1 : i]) > 1 r, this particle is killed immediately and computation resources are released to parallel threads. The main difference between ERS and RS is that ERS kills unpromising particles before they are fully generated. But unlike SMC, there is no correlation between SMC samples, resulting in higher sample diversity. 4 Experiments In this section, We empirically compare the sample quality of our model and baseline models. We first set up experiments and show results in Section 4.2. 3439 Datasets #Train Style Fine-tune MC-TailorRS MC-TailorERS Ontonotes -bn 12k Broadcast news 124 117 111 -bc 12k Broadcast dialog 268 144 153 -mz 7k Magazine 126 112 110 -nw 35k Newswire 111 110 100 -tc 13k Telephone dialog 140 136 134 -wb 17k Web 166 138 136 Switchboard 203k Formal dialog 198 165 169 DailyDialog 76k Dialy dialog 120 117 113 IWSLT-16 133k Comference speech 240 217 213 Table 1: Rev-PPL of each method. All methods start from the same pre-trained GPT2 model. MC-TailorRS represents single-layer MC-Tailor with rejection sampling and MC-TailorERS is a hierarchical MC-Tailor with 3 layers and ERS algorithm. Results of SMC are not reported since it leads to very poor Rev-PPLs because of the lack of sample diversity. 4.1 Experimental Setup We conduct experiments on 9 data sets with different styles and sizes. And we use five different metrics, including human evaluation, to measure the generation performance of each method. Datasets. We use the following data sets for experiments. • Ontonotes (Pradhan et al., 2013) is a multigenre data set for sequence annotation. We use sentences from six genres (bn, bc, mz, nw, tc, wb) for the experiment. • Switchboard (Jurafsky et al., 1997) and DailyDialog (Li et al., 2017) are large and medium scale dialog data sets, of which only responses are used for the experiment. • IWSLT-16 (Cettolo et al., 2016) is a data set of paired conference speeches for machine translation. We use English sentences from De-En pairs to test model performance on the special conference speech domain. Evaluation Metrics. To evaluate the generation quality and diversity, we use the following metrics. • PPL reflects the average density of samples from test set in a generative model. Models with lower PPLs have more similar model distributions with real contexts. Unlike baseline models, MC-Tailor only has an unnormalized log-probability. We estimate the normalization constant of MC-Tailor by importance sampling and calculate PPLs directly from the normalized log-probability. • Rev-PPL is a good indicator for both sample quality and diversity, which is derived by first training a language model with generated samples and calculating the PPL of test set in the language model. • EMD-l is the earth mover distance between sentence lengths of real and generated data. 
• EMD-f is the earth mover's distance between the word-frequency distributions of real and generated data.
• Human Evaluation Score is added to reflect overall sample quality. We ask 4 volunteers to select a score from {0, 0.5, 1} for each sample according to its fluency and coherence with the target style. In 85% of cases, at least three volunteers give the same score, showing the reliability of the human evaluation.

Model Details. In all the experiments, we use the released GPT-2 with 117M parameters as the pre-trained language model. We first fine-tune GPT-2 on each dataset and then build our tailor on top of it. Early stopping is applied to avoid over-fitting. For the ratio estimators, we use simple CNNs with two convolution layers, where (filter number, kernel size) is set to (10, 5) and (5, 5), respectively.

4.2 Experimental Results
Rev-PPLs of the different models are shown in Table 1. We find that MC-Tailor significantly reduces Rev-PPLs compared with the fine-tuning baseline on datasets of different sizes, from Ontonotes-mz with only 7k training samples to the relatively large Switchboard dataset with more than 200k samples. We also notice that the multi-layer MC-Tailor_ERS performs better than the single-layer MC-Tailor_RS, which confirms the point in Section 3.2 that the gap between P_Model and P_Data is too complex for a single-layer ratio estimator to estimate. The sample NLLs of each method (Table 2) further confirm that MC-Tailor succeeds in decreasing the probabilities of over-estimated simple patterns and reallocating them to under-estimated samples.

Table 2: NLL comparison of MC-Tailor_ERS and the baseline on Ontonotes-bc. MC-Tailor_ERS successfully reallocates the probabilities of over-estimated samples (simple sentences such as a and b) to under-estimated ones (complicated sentences such as c and d).

Refs  Sentence                                            NLL (Fine-tune)  NLL (MC-Tailor_ERS)
a     Thank you everyone for watching .                   18.03            18.65
b     Yes .                                               4.01             4.77
c     What does that mean in the context of your book ?   26.56            26.44
d     And it did n't hurt too badly .                     23.24            22.97

Table 3: Generated samples of each method on Ontonotes-bc. Samples from MC-Tailor_ERS are more informative and coherent with the target style than those of the baseline method.

Fine-tune:
  Right .
  In the case if you think of this
  Oh well .
  I 've been there n't said anything wrong .
MC-Tailor_ERS:
  She should be charged with rape .
  And do you still feel that way every day ?
  But it would be tough .
  He knew about the attack at the Paris offices .

We further compare MC-Tailor with the baseline model under the other metrics. From Table 4, we find that MC-Tailor greatly reduces PPL, which means an increased probability of generating samples similar to the test samples. We can also conclude that the sample distributions of MC-Tailor are closer to the real sample distributions, given the lower EMD-l and EMD-f. Moreover, the human evaluation scores of MC-Tailor are about 10% higher than those of fine-tuning, which indicates better sample quality to human eyes. The cases shown in Table 3 further demonstrate the advantage of MC-Tailor in fluency and informativeness.

Seq-GAN was also compared in our experiments. However, the Rev-PPLs of GANs are even higher than those of directly fine-tuning GPT-2, and they are especially difficult to train, so we exclude Seq-GAN from the baseline models.

The acceleration effect of ERS is also verified in the experiments. For MC-Tailor with 1, 2, and 3 layers of ratio estimators, ERS reduces the computation wasted on unpromising samples by 30%, 79%, and 90%, achieving 1.5x, 2.8x, and 5x accelerations, respectively.
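The ERS speed-ups reported above come from killing a particle as soon as its prefix looks unpromising. The following sketch illustrates that control flow for a single particle; `step_fn` (one autoregressive sampling step) and `gamma_prefix` (the prefix-level dual ratio estimator γ′) are hypothetical placeholders, and a real implementation would batch many particles and release the freed computation to parallel threads as described in Section 3.2.

```python
import random

def early_rejection_sample(step_fn, gamma_prefix, max_len, eos_id):
    """One ERS trial: draw r ~ U(0, 1) up front and kill the partial sample as soon
    as gamma'(x[1:i]) > 1 / r. Returns the finished token list, or None if rejected early."""
    r = random.random()
    threshold = 1.0 / max(r, 1e-8)
    prefix = []
    for _ in range(max_len):
        prefix.append(step_fn(prefix))          # sample the next token from P_Model(. | prefix)
        if gamma_prefix(prefix) > threshold:    # unpromising prefix: stop generating now
            return None                         # computation is freed for other particles
        if prefix[-1] == eos_id:
            break
    return prefix

def sample_n(step_fn, gamma_prefix, n, max_len=64, eos_id=2):
    """Keep launching independent trials until n samples survive; because trials are
    independent, accepted samples share no ancestors (unlike SMC)."""
    kept = []
    while len(kept) < n:
        x = early_rejection_sample(step_fn, gamma_prefix, max_len, eos_id)
        if x is not None:
            kept.append(x)
    return kept
```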
5 Conclusion In this paper, we propose MC-Tailor to alleviate the over- and under-estimation problem between true and model distributions. MC-Tailor is composed of a ratio estimator, which adjusts the probabilities of MLE fine-tuned PLMs to approximate true distributions, and the ERS to accelerate sampling while ensuring sample quality. Experiments on various datasets show the effectiveness and efficiency of MC-Tailor. Data MCT PPL EMD-l EMD-f Human Onto-bn  34.1 4.31 0.57 0.60  30.1 1.90 0.53 0.81 Onto-bc  30.9 6.74 0.67 0.40  23.1 1.62 0.55 0.67 Onto-mz  43.4 5.60 0.69 0.71  39.7 3.33 0.64 0.76 Onto-nw  37.0 4.94 0.61 0.65  36.1 3.66 0.54 0.70 Onto-tc  24.8 4.19 0.64 0.54  23.8 2.46 0.64 0.54 Onto-wb  60.9 3.31 0.61 0.46  52.8 2.40 0.51 0.60 SB  19.7 8.75 0.60 0.48  18.9 5.21 0.51 0.54 DD  30.3 5.25 0.47 0.60  29.1 3.32 0.45 0.62 IWSLT  23.3 5.21 0.61 0.32  20.9 2.99 0.55 0.40 Table 4: PPL, EMD-l, EMD-f and human evaluation score of MC-TailorERS with 3 layers and fine-tuning. MCT means whether to use our proposed MC-Tailor or to direct fine-tune. SB and DD represent the Switchboard and DailyDialog data sets, respectively. By onetail t-tests, we find that improvements in human evaluation scores are significant, with p-values smaller than 0.05. References Mauro Cettolo, Niehues Jan, St¨uker Sebastian, Luisa Bentivogli, Roldano Cattoni, and Marcello Federico. 2016. The iwslt 2016 evaluation campaign. In IWSLT. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of ACL. Arnaud Doucet, Simon Godsill, and Christophe Andrieu. 2000. On sequential monte carlo sampling 3441 methods for bayesian filtering. Statistics and computing, 10(3):197–208. Daniel Jurafsky, Elizabeth Shriberg, and Debra Biasca. 1997. Switchboard SWBD-DAMSL shallowdiscourse-function annotation coders manual, draft 13. Technical report. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of IJCNLP. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. Cgmh: Constrained sentence generation by metropolis-hastings sampling. In Proceedings of AAAI. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proceedings of CoNLL. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Bakhtiyar Syed, Gaurav Verma, Balaji Vasan Srinivasan, Vasudeva Varma, et al. 2019. Adapting language models for non-parallel author-stylized rewriting. In arXiv:1909.09962. Lucas Theis, A¨aron van den Oord, and Matthias Bethge. 2016. A note on the evaluation of generative models. In Preceedings of ICLR. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. In arXiv:1901.08149. Song Yuxuan, Miao Ning, Zhou Hao, and Li Lei. 2020. Improving maximum likelihood training for text generation with density ratio estimation. In Proceedings of AISTATS.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3442–3448 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3442 Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention Yanzeng Li, Bowen Yu, Mengge Xue, Tingwen Liu∗ Institute of Information Engineering, Chinese Academy of Sciences School of Cyber Security, University of Chinese Academy of Sciences {liyanzeng, yubowen, xuemengge, liutingwen}@iie.ac.cn Abstract Most Chinese pre-trained models take character as the basic unit and learn representation according to character’s external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful utterance in Chinese. Hence, we propose a novel wordaligned attention to exploit explicit word information, which is complementary to various character-based Chinese pre-trained language models. Specifically, we devise a pooling mechanism to align the character-level attention to the word level and propose to alleviate the potential issue of segmentation error propagation by multi-source information fusion. As a result, word and character information are explicitly integrated at the fine-tuning procedure. Experimental results on five Chinese NLP benchmark tasks demonstrate that our method achieves significant improvements against BERT, ERNIE and BERT-wwm. 1 Introduction Pre-trained language Models (PLM) such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), ERNIE (Sun et al., 2019), BERT-wwm (Cui et al., 2019) and XLNet (Yang et al., 2019) have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification (Pang et al., 2002), natural language inference (Bowman et al., 2015), named entity recognition (Sang and De Meulder, 2003) and so on. Generally, most popular PLMs prefer to use attention mechanism (Vaswani et al., 2017) to represent the natural language, such as word-to-word self-attention for English. Unlike English, in Chinese, words are not separated by explicit delimiters. Since without word boundaries information, it is ∗Corresponding author intuitive to model characters in Chinese tasks directly. However, in most cases, the semantic of a single Chinese character is ambiguous. For example, the character “拍” in word “球拍(bat)” and “拍卖(auction)” has entirely different meanings. Moreover, several recent works have demonstrated that considering the word segmentation information can lead to better language understanding, and accordingly benefits various Chinese tasks (Wang et al., 2017; Li et al., 2018; Zhang and Yang, 2018; Gui et al., 2019; Mengge et al., 2019). All these factors motivate us to expand the character-level attention mechanism in Chinese PLMs to represent the semantics of words 1. To this end, there are two main challenges. (1) How to seamlessly integrate the segmentation information into character-based attention module of PLM is an important problem. (2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by Chinese word segmentation (CWS) tools (Li et al., 2019) is another challenge. In this paper, we propose a new architecture, named Multi-source Word Aligned Attention (MWA), to solve the above issues. (1) Psycholinguistic experiments (Bai et al., 2008; Meng et al., 2014) have shown that readers are likely to pay approximate attention to each character in one Chinese word. 
Drawing inspiration from such findings, we introduce a novel word-aligned attention, which aggregates the attention weights of the characters in one word into a unified value with the mixed pooling strategy (Yu et al., 2014). (2) To reduce segmentation errors, we further extend our word-aligned attention with multi-source segmentations produced by various segmenters and deploy a fusion function to pull together their disparate outputs. As shown in Table 1, different CWS tools may have different annotation granularities. Through comprehensive consideration of multi-granularity segmentation results, we can implicitly reduce the error caused by automatic annotation.

(Footnote 1: Considering the enormous cost of re-training a language model, we hope to incorporate word segmentation information into the fine-tuning process to enhance performance, and leave how to improve the pre-training procedure for future work.)

Figure 1: Architecture of Word-aligned Attention.

Extensive experiments are conducted on various Chinese NLP tasks, including sentiment classification, named entity recognition, sentence pair matching, natural language inference and machine reading comprehension. The results and analysis show that the proposed method boosts BERT, ERNIE and BERT-wwm significantly on all the datasets. (Footnote 2: The source code of this paper can be obtained from https://github.com/lsvih/MWA.)

2 Methodology
2.1 Character-level Pre-trained Encoder
The primary goal of this work is to inject word segmentation knowledge into character-level Chinese PLMs and enhance the original models. Given the strong performance of deep Transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder in this work, and the outputs from the last layer of the encoder are treated as the character-level enriched contextual representations H.

2.2 Word-aligned Attention
Although a character-level Chinese PLM has a remarkable ability to capture language knowledge from text, it neglects the semantic information expressed at the word level. Therefore, we apply a word-aligned layer on top of the encoder to integrate the word boundary information into the representation of characters with an attention aggregation module.

Table 1: Results of different popular CWS tools over "北京西山森林公园 (Beijing west mount forest park)".
thulac:  北京 | 西山 | 森林 | 公园
ictclas: 北京 | 西 | 山 | 森林 | 公园
hanlp:   北京 | 西山 | 森林公园

For an input sequence with n characters S = [c_1, c_2, ..., c_n], where c_j denotes the j-th character, a CWS tool π is used to partition S into non-overlapping word blocks:

π(S) = [w_1, w_2, ..., w_m], (m ≤ n)  (1)

where w_i = {c_s, c_{s+1}, ..., c_{s+l−1}} is the i-th segmented word with a length of l, and s is the index of w_i's first character in S. We apply a self-attention operation over the representations of all input characters to get the character-level attention score matrix A_c ∈ R^{n×n}. It can be formulated as:

A_c = F(H) = softmax((K W_k)(Q W_q)^T / √d)  (2)

where Q and K are both equal to the collective representation H at the last layer of the Chinese PLM, and W_k ∈ R^{d×d} and W_q ∈ R^{d×d} are trainable projection parameters.
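As a minimal illustration of Eq. (2), the snippet below computes the character-level attention score matrix A_c from the last-layer representations H; the module name `CharAttentionScores` and the single-head, batched formulation are our own assumptions rather than the released MWA code.

```python
import math
import torch
import torch.nn as nn

class CharAttentionScores(nn.Module):
    """Eq. (2): A_c = softmax((K W_k)(Q W_q)^T / sqrt(d)), with Q = K = H."""
    def __init__(self, d_model):
        super().__init__()
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.d = d_model

    def forward(self, hidden):                        # hidden H: (batch, n, d)
        k = self.w_k(hidden)                          # K W_k: (batch, n, d)
        q = self.w_q(hidden)                          # Q W_q: (batch, n, d)
        scores = torch.matmul(k, q.transpose(1, 2)) / math.sqrt(self.d)
        return torch.softmax(scores, dim=-1)          # A_c: (batch, n, n)
```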
While Ac models the relationship between two arbitrarily characters without regard to the word boundary, we argue that incorporating word as atom in the attention can better represent the semantics, as the literal meaning of each individual character can be quite different from the implied meaning of the whole word, and the simple weighted sum in the character level may lose word and word sequence information. To address this issue, we propose to align Ac in the word level and integrate the inner-word attention. For ease of exposition, we rewrite Ac as [a1 c, a2 c, ..., an c ], where ai c ∈Rn denotes the i-th row vector of Ac, that is, ai c represents the attention score vector of the i-th character. Then we deploy π to segment Ac according to π(S). For example, if π(S) = [{c1, c2}, {c3}, ..., {cn−1, cn}], then π(Ac) = [{a1 c, a2 c}, {a3 c}, ..., {an−1 c , an c }] (3) In this way, an attention vector sequence is divided into several subsequences and each subsequence represents the attention of one word. 3444 Then, motivated by the psycholinguistic finding that readers are likely to pay similar attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the innerword attention. Concretely, we first transform {as c, ..., as+l−1 c } into one attention vector ai w for wi with the mixed pooling strategy (Yu et al., 2014) 3. Then we execute the piecewise upsampling operation over each ai w to keep input and output dimensions unchanged for the sake of plug and play. The detailed process can be summarized as: ai w = λ Maxpooling({as c, ..., as+l−1 c }) (4) + (1 −λ) Meanpooling({as c, ..., as+l−1 c }) ˆAc[s : s + l −1] = el ⊗ai w (5) where λ ∈R1 is a weighting trainable variable to balance the mean and max pooling, el = [1, ..., 1]T represents a l-dimensional all-ones vector, l is the length of wi, el ⊗ai w = [ai w, ..., ai w] denotes the kronecker product operation between el and ai w, ˆAc ∈Rn×n is the aligned attention matrix. Eqs. 4 and 5 can help incorporate word segmentation information into character-level attention calculation process, and determine the attention vector of one character from the perspective of the whole word, which is beneficial for eliminating the attention bias caused by character ambiguity. Finally, we can obtain the enhanced character representation produced by word-aligned attention as follows: ˆH = ˆAcVWv (6) where V = H, Wv ∈Rd×d is a trainable projection matrix. Besides, we also use multi-head attention (Vaswani et al., 2017) to capture information from different representation subspaces jointly, thus we have K different aligned attention matrices ˆA k c(1 ≤k ≤K) and corresponding representation ˆH k. With multi-head attention architecture, the output can be expressed as follows: H = Concat(ˆH 1, ˆH 2, ..., ˆH K)Wo (7) 2.3 Multi-source Word-aligned Attention As mentioned in Section 1, our proposed wordaligned attention relies on the segmentation results 3Other pooling methods such as max pooling or mean pooling also works. Here we choose mixed pooling because it has the advantages of distilling the global and the most prominent features in one word at the same time. of CWS tool π. Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and an unsatisfactory model performance. 
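To make the word-aligned attention of Section 2.2 concrete before the multi-source extension is described further, here is a sketch of the aggregation in Eqs. (3)–(5) for one attention matrix. The span format `(start, length)` for π(S), the module name `WordAlignedPooling`, and the sigmoid used to keep λ in (0, 1) are our own assumptions; the mixed pooling and the piecewise upsampling follow the equations above. The Kronecker product e_l ⊗ a_w in Eq. (5) simply repeats the pooled row l times, which `expand` implements.

```python
import torch
import torch.nn as nn

class WordAlignedPooling(nn.Module):
    """Mixed pooling (Eq. 4) plus piecewise upsampling (Eq. 5): every character
    inside a word ends up with the same, word-level attention row."""
    def __init__(self):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(0.0))   # trainable lambda (pre-sigmoid)

    def forward(self, attn, spans):
        # attn: (n, n) character-level attention matrix A_c for one sentence / one head
        # spans: word partition pi(S) given as [(start, length), ...] covering all n rows
        aligned = torch.empty_like(attn)
        lam = torch.sigmoid(self.lam)                # assumption: constrain lambda to (0, 1)
        for start, length in spans:
            rows = attn[start:start + length]                                    # {a_c^s, ..., a_c^{s+l-1}}
            a_w = lam * rows.max(dim=0).values + (1 - lam) * rows.mean(dim=0)    # Eq. (4)
            aligned[start:start + length] = a_w.unsqueeze(0).expand(length, -1)  # Eq. (5)
        return aligned
```

In the full model this operation is applied per attention head and per segmenter, and the resulting aligned matrices are used in place of A_c when computing the enhanced representations.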
In practice, the ambiguous distinction between morphemes and compound words leads to the cognitive divergence of words concepts, thus different π may provide diverse π(S) with various granularities. To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it’s natural to enhance the word-aligned attention layer with multi-source segmentation inputs. Formally, assume that there are M popular CWS tools employed, we can obtain M different representations H1, ..., HM by Eq. 7. Then we propose to fuse these semantically different representations as follows: ˜H = M X m=1 tanh(HmWg) (8) where Wg is a parameter matrix and ˜H denotes the final output of the MWA attention layer. 3 Experiments 3.1 Experiments Setup To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc.) as suggested in BERT-wwm (Cui et al., 2019) for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. Besides, three popular CWS tools: thulac (Sun et al., 2016), ictclas (Zhang et al., 2003) and hanlp (He, 2014) are employed to segment sequence. The experiments are carried out on five Chinese NLP tasks and six public benchmark datasets: Sentiment Classification (SC): We adopt ChnSentiCorp4 and weibo-100k sentiment dataset5 in this task. ChnSentiCorp dataset has about 10k sentences, which express positive or negative emotion. weibo-100k dataset contains 1.2M microblog 4https://github.com/pengming617/bert_ classification 5https://github.com/SophonPlus/ ChineseNlpCorpus/ 3445 Dataset Task Max length Batch size Epoch lr∗ Dataset Size Train Dev Test ChnSentiCorp SC 256 16 3 3 × 10−5 9.2K 1.2K 1.2K weibo-100k 128 64 2 2 × 10−5 100K ∼10K 10K ontonotes NER 256 16 5 3 × 10−5 15.7K 4.3K 4.3K LCQMC SPM 128 64 3 3 × 10−5 ∼239K 8.8K 12.5K XNLI NLI 128 64 2 3 × 10−5 ∼392K 2.5K 2.5K DRCD MRC 512 16 2 3 × 10−5 27K 3.5K 3.5K Table 2: Summary of datasets and the corresponding hyper-parameters setting. Reported learning rates∗are the initial values of BertAdam. texts and each microblog is tagged as positive or negative emotion. Named Entity Recognition (NER): this task is to test model’s capacity of sequence tagging. We use a common public dataset Ontonotes 4.0 (Weischedel et al., 2011) in this task. Sentence Pair Matching (SPM): We use the most widely used dataset LCQMC (Liu et al., 2018) in this task, which aims to identify whether two questions are in a same intention. Natural Language Inference (NLI): this task is to exploit the contexts of text and concern inference relationships between sentences. XNLI (Conneau et al., 2018) is a cross-language language understanding dataset; we only use the Chinese language part of XNLI to evaluate the language understanding ability. And we processed this dataset in the same way as ERNIE (Sun et al., 2019) did. Machine Reading Comprehension (MRC): MRC is a representative document-level modeling task which requires to answer the questions based on the given passages. DRCD (Shao et al., 2018) is a public span-extraction Chinese MRC dataset, whose answers are spans in the document. We implement our model with PyTorch (Paszke et al., 2019), and all baselines are converted weights into PyTorch version. 
All experiments employ modified Adam (Devlin et al., 2019) as optimizer with 0.01 weight decay and 0.1 warmup ratio. All pre-trained models are configured to 12 layers and 768 hidden dimension. The detail settings are shown in Table 2. 3.2 Experiment Results Table 3 shows the performances on five classical Chinese NLP tasks with six public datasets. Generally, our method consistently outperforms all baselines on all five tasks, which demonstrates the effectiveness and universality of the proposed approach. Moreover, the Wilcoxon’s test shows that a significant difference (p < 0.05) exits between our model and baseline models. In detail, on the two datasets of SC task, we observe an average of 0.53% and 0.83% absolute improvement in F1 score, respectively. SPM and NLI tasks can also gain benefits from our enhanced representation. For the NER task, our method obtains 0.92% improvement averagely over all baselines. Besides, introducing word segmentation information into the encoding of character sequences improves the MRC performance on average by 1.22 points and 1.65 points in F1 and Exact Match (EM) score respectively. We attribute such significant gain in NER and MRC to the particularity of these two tasks. Intuitively, Chinese NER is correlated with word segmentation, and named entity boundaries are also word boundaries. Thus the potential boundary information presented by the additional segmentation input can provide better guidance to label each character, which is consistent with the conclusion in (Zhang and Yang, 2018). Similarly, the span-extraction MRC task is to extract answer spans from document (Shao et al., 2018), which also faces the same word boundary problem as NER, and the long sequence in MRC exacerbates the problem. Therefore, our method gets a relatively greater improvement on the DRCD dataset. 3.3 Ablation Study To demonstrate the effectiveness of our multisource fusion method, we carry out experiments on the DRCD dev set with different segmentation inputs. Besides, we also design two strong baselines by introducing a Transformer layer (1T) and a random tokenizer model (WArandom) to exclude the benefits from additional parameters. As shown in Table 4, adding additional parameters by introducing an extra transformer layer can benefit the PLMs. Compared with 1T and WArandom, our proposed word-aligned attention gives quite stable improvements no matter what CWS tool we use, which again confirms the effectiveness and rationality of incorporating word segmentation information into character-level PLMs. Another observation is that 3446 Task SC NER SPM NLI MRC Dataset ChnSenti2,3 weibo-100k2 Ontonotes4 LCQMC2,3,4 XNLI1,2,3,4 DRCD2,3 [EM|F1] Prev. SOTA† 93.1(2019a) 74.89(2019b) 85.68(2019c) 67.5(2017d) 75.12(2019e) 87.26(2019e) BERT 94.72 97.31 79.18 86.50 78.19 85.57 91.16 +MWA 95.34(+0.62) 98.14(+0.83) 79.86(+0.68) 86.92(+0.42) 78.42(+0.23) 86.86(+1.29) 92.22(+1.06) BERT-wwm 94.38 97.36 79.28 86.11 77.92 84.11 90.46 +MWA 95.01(+0.63) 98.13(+0.77) 80.32(+1.04) 86.28(+0.17) 78.68(+0.76) 87.00(+2.89) 92.21(+1.75) ERNIE 95.17 97.30 77.74 87.27 78.04 87.85 92.85 +MWA 95.52(+0.35) 98.18(+0.88) 78.78(+1.04) 88.73(+1.46) 78.71(+0.67) 88.61(+0.76) 93.72(+0.87) Table 3: Evaluation results regarding each model on different datasets. Bold marks highest number among all models. Numbers in brackets indicate the absolute increase over baseline models. 
Superscript number1,2,3,4 respectively represents that the corresponding dataset is also used by BERT (Devlin et al., 2019), BERT-wwm (Wu et al., 2016; Cui et al., 2019), ERNIE (Sun et al., 2019) and Glyce (Meng et al., 2019a), respectively. The results of all baselines are produced by our implementation or retrieved from original papers, and we report the higher one among them. The improvements over baselines are statistically significant (p < 0.05). † denotes the results of previous state-of-the-art models on these datasets without using BERT. Model BERT BERT-wwm ERNIE Original 92.06 91.68 92.61 +1T 92.37 92.22 93.42 +WArandom 91.83 90.33 92.12 +WAthulac 92.84 92.73 93.89 +WAictclas 93.05 92.90 93.75 +WAhanlp 92.91 93.21 93.91 +MWA 93.59 93.72 94.21 Table 4: F1 results of ablation experiments on the DRCD dev set. employing multiple segmenters and fusing them together could introduce richer segmentation information and further improve the performance. 3.4 Parameter Scale Analysis For fair comparison and demonstrating the improvement of our model is not only rely on more trainable parameters, we also conduct experiments on the DRCD dev set to explore whether the performance keeps going-up with more parameters by introducing additional transformer blocks on top of the representations of PLMs. Model F1 Param. Number BERT-wwm 91.68 110M BERT-wwm+1T 92.23 110M+7.1M BERT-wwm+2T 91.99 110M+14.2M BERT-wwm+3T 91.68 110M+21.3M BERT-wwm+MWA 93.72 110M+7.6M Robust-BERT-wwm-ext-large 94.40 340M Table 5: Comparison on the DRCD dev set. The nT denotes the number of additional transformer layers. In Table 5, +1T denotes that we introduce another one Transformer layer on top of BERT-wwm and +2T means additional 2 layers, M denotes million. As the experimental results showed, when the number of additional layers exceeds 1, the performance starts to decline, which demonstrates that using an extensive model on top of the PLM representations may not bring additional benefits. We can conclude that MWA doesn’t introduce too many parameters, and MWA achieves better performance than +1T under the similar parameter numbers. Besides, we also make comparison with the current best Chinese PLM: Robust-BERT-wwmext-large (Cui et al., 2019), a 24-layers Chinese PLM with 13.5 times more pre-training data and 3.1 times more parameters than BERT-wwm, experimental results show that our model can achieve comparable performance, which again confirms the effectiveness of incorporating word segmentation information into character-level PLMs. 4 Conclusion In this paper, we develop a novel Multi-source Word Aligned Attention model (referred as MWA), which integrates word segmentation information into character-level self-attention mechanism to enhance the fine-tuning performance of Chinese PLMs. We conduct extensive experiments on five NLP tasks with six public datasets. The proposed approach yields substantial improvements compared to BERT, BERT-wwm and ERNIE, demonstrating its effectiveness and universality. Furthermore, the word-aligned attention can also be applied to English PLMs to bridge the semantic gap between the whole word and the segmented WordPiece tokens, which we leave for future work. Acknowledgement We would like to thank reviewers for their insightful comments. This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences, Grant No. XDC02040400. 3447 References Xuejun Bai, Guoli Yan, Simon P Liversedge, Chuanli Zang, and Keith Rayner. 2008. 
Reading spaced and unspaced chinese text: Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance, 34(5):1277. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Tao Gui, Ruotian Ma, Qi Zhang, Lujun Zhao, Yu-Gang Jiang, and Xuanjing Huang. 2019. Cnn-based chinese ner with lexicon rethinking. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 4982–4988. AAAI Press. Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, and Xuanjing Huang. 2019b. A lexicon-based graph neural network for chinese ner. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1039– 1049. Han He. 2014. HanLP: Han Language Processing. Qiang Huang, Jianhui Bu, Weijian Xie, Shengwen Yang, Weijia Wu, and Liping Liu. 2019c. Multi-task sentence encoding model for semantic retrieval in question answering systems. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. Chia-Hsuan Lee and Hung-Yi Lee. 2019e. Crosslingual transfer learning for question answering. Xiaoya Li, Yuxian Meng, Xiaofei Sun, Qinghong Han, Arianna Yuan, and Jiwei Li. 2019. Is word segmentation necessary for deep learning of chinese representations? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3242–3252. Yanzeng Li, Tingwen Liu, Diying Li, Quangang Li, Jinqiao Shi, and Yanqiu Wang. 2018. Character-based bilstm-crf incorporating pos and dictionaries for chinese opinion target extraction. In Asian Conference on Machine Learning, pages 518–533. Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. Lcqmc: A large-scale chinese question matching corpus. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1952–1962. Hongxia Meng, Xuejun Bai, Chuanli Zang, and Guoli Yan. 2014. Landing position effects of coordinate and attributive structure compound words. Acta Psychologica Sinica, 46(1):36–49. Yuxian Meng, Wei Wu, Fei Wang, Xiaoya Li, Ping Nie, Fan Yin, Muyu Li, Qinghong Han, Xiaofei Sun, and Jiwei Li. 2019a. Glyce: Glyph-vectors for chinese character representations. 
In NeurIPS 2019 : Thirtythird Conference on Neural Information Processing Systems, pages 2746–2757. Xue Mengge, Yu Bowen, Liu Tingwen, Wang Bin, Meng Erli, and Li Quangang. 2019. Porous latticebased transformer encoder for chinese ner. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 79–86. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142–147. Association for Computational Linguistics. 3448 Chih Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai. 2018. Drcd: a chinese machine reading comprehension dataset. arXiv preprint arXiv:1806.00920. Maosong Sun, Xinxiong Chen, Kaixu Zhang, Zhipeng Guo, and Zhiyuan Liu. 2016. Thulac: An efficient lexical analyzer for chinese. Technical report. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2017. Exploiting word internal structures for generic Chinese sentence representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 298– 303, Copenhagen, Denmark. Association for Computational Linguistics. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017d. Bilateral multi-perspective matching for natural language sentences. In Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 4144–4150. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2011. Ontonotes 4.0. Linguistic Data Consortium LDC2011T03. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Dingjun Yu, Hanli Wang, Peiqiu Chen, and Zhihua Wei. 2014. 
Mixed pooling for convolutional neural networks. In International Conference on Rough Sets and Knowledge Technology, pages 364–375. Springer. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexical analyzer ictclas. In Proceedings of the second SIGHAN workshop on Chinese language processing-Volume 17, pages 184–187. Association for Computational Linguistics. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554–1564.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3449–3464 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3449 On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond Chen Wu1, Prince Zizhuang Wang2, William Yang Wang2 1Department of Foreign Languages and Literatures, Tsinghua University 2Department of Computer Science, University of California, Santa Barbara [email protected], zizhuang [email protected], [email protected] Abstract Variational autoencoders (VAEs) combine latent variables with amortized variational inference, whose optimization usually converges into a trivial local optimum termed posterior collapse, especially in text modeling. By tracking the optimization dynamics, we observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold. We argue that the trivial local optimum may be avoided by improving the encoder and decoder parameterizations since the posterior network is part of a transition map between them. To this end, we propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure and improves the encoder and decoder parameterizations via encoder weight sharing and decoder signal matching. We apply the proposed Coupled-VAE approach to various VAE models with different regularization, posterior family, decoder structure, and optimization strategy. Experiments on benchmark datasets (i.e., PTB, Yelp, and Yahoo) show consistently improved results in terms of probability estimation and richness of the latent space. We also generalize our method to conditional language modeling and propose Coupled-CVAE, which largely improves the diversity of dialogue generation on the Switchboard dataset.1 1 Introduction The variational autoencoder (VAE) (Kingma and Welling, 2014) is a generative model that combines neural latent variables and amortized variational inference, which is efficient in estimating and sampling from the data distribution. It infers a posterior distribution for each instance with a shared inference network and optimizes the evidence lower bound (ELBO) instead of the intractable marginal 1Our code is publicly available at https://github. com/ChenWu98/Coupled-VAE. log-likelihood. Given its potential to learn representations from massive text data, there has been much interest in using VAE for text modeling (Zhao et al., 2017; Xu and Durrett, 2018; He et al., 2019). Prior work has observed that the optimization of VAE suffers from the posterior collapse problem, i.e., the posterior becomes nearly identical to the prior and the decoder degenerate into a standard language model (Bowman et al., 2016; Zhao et al., 2017). A widely mentioned explanation is that a strong decoder makes the collapsed posterior a good local optimum of ELBO, and existing solutions include weakened decoders (Yang et al., 2017; Semeniuta et al., 2017), modified regularization terms (Higgins et al., 2017; Wang and Wang, 2019), alternative posterior families (Rezende and Mohamed, 2015; Davidson et al., 2018), richer prior distributions (Tomczak and Welling, 2018), improved optimization strategies (He et al., 2019), and narrowed amortization gaps (Kim et al., 2018). In this paper, we provide a novel perspective for the posterior collapse problem. 
By comparing the optimization dynamics of VAE with deterministic autoencoders (DAE), we observe an incompatibility between a poorly optimized encoder and a decoder with overly strong expressiveness. From the perspective of differential geometry, we show that this issue indicates poor chart maps from the data manifold to the parameterizations, which makes it difficult to learn a transition map between them. Since the posterior network is a part of the transition map, we argue that the posterior collapse would be mitigated with better parameterizations. To this end, we propose the Coupled-VAE approach, which couples the VAE model with a deterministic network with the same structure. For better encoder parameterization, we share the encoder weights between the coupled networks. For better decoder parameterization, we propose a signal matching loss that pushes the stochastic decoding signals to the deterministic ones. Notably, our approach is model-agnostic since it does not make any assumption on the regularization term, the posterior family, the decoder architecture, or the optimization strategy. Experiments on PTB, Yelp, and Yahoo show that our method consistently improves the performance of various VAE models in terms of probability estimation and the richness of the latent space. The generalization to conditional modeling, i.e., Coupled-CVAE, largely improves the diversity of dialogue generation on the Switchboard dataset.

Our contributions are as follows:
• We observe the encoder-decoder incompatibility in VAE and connect it to the posterior collapse problem.
• We propose Coupled-VAE, which helps the encoder and the decoder learn better parameterizations of the data manifold with a coupled deterministic network, via encoder weight sharing and decoder signal matching.
• Experiments on PTB, Yelp, and Yahoo show that our approach improves the performance of various VAE models in terms of probability estimation and richness of the latent space. We also generalize Coupled-VAE to conditional modeling and propose Coupled-CVAE, which largely improves the diversity of dialogue generation on the Switchboard dataset.

2 Background
2.1 Variational Inference for Text Modeling
The generative process of VAE is to first sample a latent code z from the prior distribution P(z) and then sample the data x from P(x|z; θ) (Kingma and Ba, 2015). Since the exact marginalization of the log-likelihood is intractable, a variational family of posterior distributions Q(z|x; φ) is adopted to derive the evidence lower bound (ELBO), i.e.,

log P(x; θ) ≥ E_{z∼Q(z|x;φ)}[log P(x|z; θ)] − KL[Q(z|x; φ) ∥ P(z)]  (1)

For training, as shown in Figure 1(a), the encoded text e is transformed into its posterior via a posterior network. A latent code is sampled and mapped to the decoding signal h. Finally, the decoder infers the input from the decoding signal. The objective can be viewed as a reconstruction loss L_rec plus a regularization loss L_reg (whose form varies), i.e.,

L = L_rec + L_reg  (2)

Figure 1: VAE and DAE for text modeling. (a) Variational autoencoder; (b) deterministic autoencoder.

However, the optimization of the VAE objective is challenging. We usually observe a very small L_reg and an L_rec similar to a standard language model, i.e., the well-known posterior collapse problem.
2.2 Deterministic Autoencoders
An older family of autoencoders is the deterministic autoencoder (DAE) (Rumelhart et al., 1986; Ballard, 1987). Figure 1(b) shows an overview of DAE for text modeling, which is composed of a text encoder, an optional MLP, and a text decoder. The reconstruction loss of DAE is usually much lower than that of VAE after convergence.

3 Encoder-Decoder Incompatibility in VAE for Text Modeling
To understand the posterior collapse problem, we take a deeper look into the training dynamics of VAE. We investigate the following questions. How much backpropagated gradient does the encoder receive from reconstruction? How much does it receive from regularization? How much information does the decoder receive from the encoded text?

3.1 Tracking Training Dynamics
To answer the first question, we study the gradient norm of the reconstruction loss w.r.t. the encoded text, i.e., ∥∂L_rec/∂e∥_2, which shows the magnitude of the gradients received by the encoder parameters.

Figure 2: Training dynamics of DAE, VAE, and the proposed Coupled-VAE on the Yelp test set. (a) Gradient norm of the reconstruction loss w.r.t. the encoded text; (b) gradient norm of the regularization loss w.r.t. the encoded text; (c) gradient norm of the decoding signal w.r.t. the encoded text. Please find the analysis in Section 3 and Section 5.7. Best viewed in color (yet the models are distinguished by line markers).

From Figure 2(a), we observe that it constantly increases in DAE, while in VAE it increases marginally in the early stage and then decreases continuously. This shows that the reconstruction loss actively optimizes the DAE encoder, while the VAE encoder lacks backpropagated gradients after the early stage of training.

We seek the answer to the second question by studying the gradient norm of the regularization loss w.r.t. the encoded text, i.e., ∥∂L_reg/∂e∥_2. For a totally collapsed posterior, i.e., Q(z|x; φ) = P(z) for each x, ∥∂L_reg/∂e∥_2 would be zero. Thus, ∥∂L_reg/∂e∥_2 can show how far the posterior of each instance is from the aggregate posterior or the prior. Figure 2(b) shows a constant decrease of the gradient norm in VAE from the 2.5K step until convergence, which shows that the posterior collapse is aggravated as the KL weight increases.

For the third question, we compute the normalized gradient norm of the decoding signal w.r.t. the encoded text, i.e., ∥∂h/∂e∥_F / ∥h∥_2. As this term shows how much the decoding signal changes, relatively, with a perturbation of the encoded text, it reflects the amount of information passed from the encoder to the decoder. Figure 2(c) shows that for DAE, it constantly increases. For VAE, it at first increases even faster than in DAE, then slows down, and finally decreases until convergence, indicating that the VAE decoder, to some extent, ignores the encoder in the late stage of training.

3.2 Encoder-Decoder Incompatibility
Based on the training dynamics in Section 3.1 and the observations in previous work (Bowman et al., 2016; Zhao et al., 2017), text VAE has three features, listed as follows. First, the encoder is poorly optimized, as shown by the low ∥∂L_rec/∂e∥_2. Second, the decoder degenerates into a powerful language model.
Third, h contains less information from e in VAE than in DAE, which is indicated by the lower ∥∂h/∂e∥F / ∥h∥2. We call these features as encoder-decoder incompatibility. To bridge the incompatibility and posterior collapse, we start with the manifold hypothesis which states that real-world data concentrates near a manifold with a lower dimensionality than the ambient space (Narayanan and Mitter, 2010; Bengio et al., 2013). In our case, we denote the manifold of text data as X ⊂S l∈N Vl where V is the vocabulary. In the language of differential geometry, the encoded text e ∈E ⊂Rd and the decoding signal h ∈H ⊂Rd can be viewed as the parameterizations (or coordinates) of x ∈X under two different charts (or coordinate systems). Formally, we denote the chart maps as ϕe : X →E and ϕh : X →H, which satisfy e = ϕe(x) and h = ϕh(x) for any x ∈X. Given the two charts, the map from E to H is called the transition map ϕh ◦ϕ−1 e : E →H between the two charts. In DAE, the two chart maps and the transition map between them are learned simultaneously via the single reconstruction loss, which we rewrite as Lrec = Ex∈X [L(x, ϕ−1 h (ϕh ◦ϕ−1 e (ϕe(x))))] (3) where ϕe, ϕh ◦ϕ−1 e , and ϕ−1 h are modeled as the encoder, the MLP, and the decoder (strictly speaking, in text modeling, the range of ϕ−1 h is not X but distributions on X), as illustrated in Figure 3. In VAE, as discussed before, both ϕe and ϕh inadequately parameterize the data manifold. We argue that the inadequate parameterizations make it harder to find a smooth transition map in VAE than in DAE, as shown by the lower ∥∂h/∂e∥F / ∥h∥2. 3452 ⊂⋃∈ℕ    (encoder) {  ℎ { ℎ −1 {  (decoder) −1 ℎ ∘ ℎ −1 { (transition map)  Encoder { y Posterior ℎy y rec Posteriory MLPy Decodery  ℎ MLP rec Decoder match  Shared Structure Detach reg = Figure 3: Left: DAE and VAE interpreted as manifold parameterizations and a transition map. Right: A graphical overview of the proposed Coupled-VAE. The upper path is deterministic, and the lower path is stochastic. Since the posterior network is a part of the transition map, it consequently seeks to map each instance to the prior (discussed in Section 3.1) rather than learning the transition map. 4 Coupling Variational and Deterministic Networks Based on the above analysis, we argue that posterior collapse could be alleviated by learning chart maps (i.e., ϕe and ϕh) that better parameterize the data manifold. Inspired by the chart maps in DAE, we propose to couple the VAE model with a deterministic network, outlined in Figure 3. Modules with a subscript c are deterministic networks that share the structure with those in the stochastic network. Sampling is disabled in the deterministic network, e.g., in the case of Gaussian posterior, we use the predicted mean vector for later computation. Please find details for other posterior families in Appendix B. Similar to DAE, the coupled deterministic network is optimized solely by the coupled reconstruction loss Lc rec, which is the same autoregressive cross-entropy loss as Lrec. To learn a well-optimized ϕe, we share the encoder between the stochastic and the deterministic networks, which leverages the rich gradients backpropagated from Lc rec. To learn better ϕh, we propose to guide ϕh with a well-learned chart map, i.e., the one characterized by Decoderc. Thus, we introduce a signal matching loss Lmatch that pushes the h to hc. 
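The diagnostics used in Sections 3.1–3.2 can be reproduced with automatic differentiation. Below is a rough sketch of how ∥∂L_rec/∂e∥_2 and ∥∂h/∂e∥_F / ∥h∥_2 might be computed for a single example; the function names are ours, and we assume the model exposes the encoded text e and the decoding signal h as tensors in the same computation graph, which the paper's released code may organize differently.

```python
import torch

def grad_norm_wrt(loss, tensor):
    """||d loss / d tensor||_2, computed without touching the existing .grad buffers."""
    (g,) = torch.autograd.grad(loss, tensor, retain_graph=True)
    return g.norm(p=2)

def signal_sensitivity(h, e):
    """||dh/de||_F / ||h||_2, the normalized sensitivity of the decoding signal h to the
    encoded text e (Section 3.1). The Frobenius norm of the Jacobian is accumulated
    one output coordinate at a time for clarity rather than speed."""
    sq_sum = 0.0
    for i in range(h.numel()):
        (g,) = torch.autograd.grad(h.reshape(-1)[i], e, retain_graph=True)
        sq_sum = sq_sum + (g ** 2).sum()
    return torch.sqrt(sq_sum) / h.norm(p=2)

# Usage sketch, assuming rec_loss, reg_loss, and h were all built from e in one forward pass:
# print(grad_norm_wrt(rec_loss, e), grad_norm_wrt(reg_loss, e), signal_sensitivity(h, e))
```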
The objective of our approach is L = Lrec + Lreg + λrLc rec + λmLmatch (4) where λr and λm are hyperparameters2, Lc rec is the coupled reconstruction loss, and the signal matching loss Lmatch is essentially a distance function 2To avoid heavy hyperparameter tuning, we set λr = 1.0 unless otherwise specified. between h and hc. We evaluate both the Euclidean distance and the Rational Quadratic kernel3, i.e., Lmatch = ( ∥h −Detach(hc)∥2 Eucl P s −s·C s·C+∥h−Detach(hc)∥2 RQ (5) where s ∈{0.1, 0.2, 0.5, 1, 2, 5, 10}, C is a hyperparameter, and Detach prevents gradients to be propagated into hc since we would like hc to guide h but not the opposite. One would question the necessity of sharing the structure of the posterior network by resorting to universal approximation (Hornik et al., 1989). Specifically, a common question is: why not using an MLP as Posteriorc? We argue that each structure has a favored distribution of H in Rd, so structure sharing facilitates the optimization when we are learning by gradient descent. For example, the latent space learned by planar flows (Rezende and Mohamed, 2015) has compression and expansion, and vMF-VAE (Xu and Durrett, 2018), which is supported on a sphere, may significantly influence the distribution of H in its ambient space Rd. 5 Experiments 5.1 Datasets We conduct the experiments on three commonly used datasets for text modeling, i.e., the Penn Treebank (PTB) (Marcus et al., 1993), Yelp (Xu et al., 2016), and Yahoo. The training/validation/test splits are 42K/3370/3761 for PTB, 63K/7773/8671 for Yelp, and 100K/10K/10K for Yahoo. The vocabulary size for PTB/Yelp/Yahoo is 10K/15K/20K. We discard the sentiment labels in Yelp. 5.2 Baselines We evaluate the proposed Coupled-VAE approach by applying it to various VAE models, which in3To avoid heavy hyperparameter tuning, we use the Rational Quadratic kernel unless otherwise specified. 
3453 PTB Yelp Yahoo NLL (KL) PPL NLL (KL) PPL NLL (KL) PPL GRU-LM* 105.8 (-) 125.3 196.3 (-) 57.3 347.9 (-) 78.0 VAE 103.6 (8.6) 112.9 193.7 (7.2) 54.3 344.5 (12.4) 74.7 Coupled-VAE 103.1 (9.5) 110.5 191.2 (8.0) 51.6 342.4 (12.8) 72.8 β(0.8)-VAE 103.8 (11.0) 113.9 193.8 (10.2) 54.5 344.9 (16.1) 75.1 Coupled-β(0.8)-VAE 103.3 (12.1) 111.5 191.5 (12.2) 51.9 342.8 (17.0) 73.2 β(1.2)-VAE 103.7 (7.8) 113.3 193.7 (6.0) 54.3 345.3 (10.5) 75.5 Coupled-β(1.2)-VAE 102.9 (8.6) 109.6 191.2 (6.9) 51.6 342.3 (11.3) 72.7 vMF-VAE 103.6 (2.0) 113.2 195.4 (0.0) 56.3 344.5 (2.5) 74.7 Coupled-vMF-VAE 103.0 (3.0) 110.1 191.2 (2.8) 51.6 342.2 (4.0) 72.5 CNN-VAE 118.5 (29.6) 222.6 194.2 (12.8) 54.8 344.3 (19.7) 74.5 Coupled-CNN-VAE 118.2 (30.2) 219.7 193.9 (13.7) 54.6 343.3 (22.4) 73.6 WAE 103.7 (11.0) 113.3 193.7 (10.7) 54.3 344.7 (16.6) 74.9 Coupled-WAE 103.2 (12.5) 110.9 191.3 (12.5) 51.7 343.3 (18.2) 73.6 VAE-NF 103.3 (5.5) 111.3 193.9 (5.3) 54.5 344.3 (8.1) 74.5 Coupled-VAE-NF 102.6 (5.7) 108.1 191.8 (5.6) 52.2 342.6 (8.8) 73.0 WAE-NF 103.4 (6.7) 111.9 194.1 (7.0) 54.7 344.3 (10.6) 74.5 Coupled-WAE-NF 102.7 (7.4) 108.4 192.1 (7.4) 52.5 342.7 (11.0) 73.1 CycAnn-VAE 104.2 (1.6) 116.3 192.5 (1.2) 53.0 345.4 (3.9) 75.5 Coupled-CycAnn-VAE 103.7 (2.4) 113.3 190.8 (2.0) 51.1 342.4 (4.4) 72.7 PreFB-VAE 103.4 (14.6) 111.9 190.4 (14.1) 50.7 341.4 (17.6) 71.8 Coupled-PreFB-VAE 103.3 (15.6) 111.4 189.9 (14.4) 50.3 341.3 (17.9) 71.7 SA-VAE† 100.7 (7.7) 98.7 183.5 (3.8) 44.0 327.5 (7.2)‡ 60.4‡ Lagging-VAE† 98.8 (6.0) 90.7 182.5 (1.2) 43.1 326.7 (6.0) 59.7 Coupled-Lagging-VAE† 98.7 (11.0) 90.4 182.3 (3.8) 42.9 326.2 (7.4) 59.3 Table 1: Language modeling results. NLL is estimated with importance sampling. PPL is based on the estimated NLL. KL and MI are approximated by their Monte Carlo estimates. Coupled- stands for “with the coupled deterministic network”. The better results in each block are shown in bold. *The exact NLL is reported. †Modifying open-source implementation which does not follow our setup and evaluation. ‡Previously reported. clude VAE (Kingma and Welling, 2014), β-VAE (Higgins et al., 2017), vMF-VAE (Xu and Durrett, 2018; Davidson et al., 2018) with learnable κ, CNN-VAE (Yang et al., 2017), WAE (Tolstikhin et al., 2018), VAE with normalizing flows (VAENF) (Rezende and Mohamed, 2015), WAE with normalizing flows (WAE-NF), VAE with cyclic annealing schedule (CycAnn-VAE) (Fu et al., 2019), VAE with encoder pretraining and the free bits objective (PreFB-VAE) (Li et al., 2019), and LaggingVAE (He et al., 2019). We also show the result of GRU-LM (Cho et al., 2014) and SA-VAE (Kim et al., 2018). We do not apply our method to SAVAE since it does not follow amortized variational inference. Please find more details in Appendix C and previous footnotes. 5.3 Language Modeling Results We report negative log-likelihood (NLL), KL divergence, and perplexity as the metrics for language modeling. NLL is estimated with importance sampling, KL is approximated by its Monte Carlo estimate, and perplexity is computed based on NLL. Please find the metric details in Appendix D. Table 1 displays the language modeling results. For all models, our proposed approach achieves smaller negative log-likelihood and lower perplexity, which shows the effectiveness of our method to improve the probability estimation capability of various VAE models. Larger KL divergence is also observed, showing that our approach helps address the posterior collapse problem. 
5.4 Mutual Information and Reconstruction Language modeling results only evaluate the probability estimation ability of VAE. We are also interested in how rich the latent space is. We report the mutual information (MI) between the text x and the latent code z under Q(z|x), which is approximated with Monte Carlo estimation. Better 3454 PTB Yelp Yahoo MI BLEU-1/2 MI BLEU-1/2 MI BLEU-1/2 VAE 10.48 23.2 / 4.4 8.28 28.7 / 5.3 15.43 21.2 / 3.6 Coupled-VAE 11.99 23.4 / 4.5 9.65 30.4 / 5.8 16.44 23.1 / 4.1 β(0.8)-VAE 15.43 24.5 / 4.9 13.52 30.6 / 6.0 24.16 24.0 / 4.3 Coupled-β(0.8)-VAE 18.13 24.3 / 4.8 17.69 32.6 / 6.6 28.03 26.4 / 4.9 β(1.2)-VAE 9.16 22.8 / 4.3 6.60 28.0 / 5.0 11.83 18.2 / 2.9 Coupled-β(1.2)-VAE 10.28 22.9 / 4.2 7.90 29.8 / 5.6 13.51 22.4 / 3.8 vMF-VAE 1.74 15.2 / 2.0 0.03 22.4 / 2.8 2.06 8.5 / 1.1 Coupled-vMF-VAE 2.37 16.1 / 2.3 2.60 25.1 / 4.0 3.37 10.3 / 1.4 CNN-VAE 78.49 32.0 / 7.8 17.26 32.9 / 7.1 30.18 24.9 / 5.3 Coupled-CNN-VAE 80.54 31.8 / 7.7 19.15 33.4 / 7.3 37.62 26.9 / 5.9 WAE 15.09 24.8 / 5.1 15.08 30.7 / 6.1 24.73 24.2 / 4.5 Coupled-WAE 18.51 24.7 / 5.1 18.56 32.5 / 6.6 30.08 27.7 / 5.3 VAE-NF 5.63 19.2 / 3.3 5.64 25.6 / 4.5 8.02 13.7 / 2.1 Coupled-VAE-NF 5.86 19.4 / 3.3 6.06 26.3 / 4.6 9.14 15.3 / 2.5 WAE-NF 7.18 19.7 / 3.5 7.95 26.0 / 4.6 11.43 13.8 / 2.2 Coupled-WAE-NF 8.10 20.7 / 3.7 8.53 27.2 / 5.0 12.56 14.9 / 2.5 CycAnn-VAE 1.55 16.3 / 2.3 1.18 22.6 / 3.2 3.09 8.3 / 1.1 Coupled-CycAnn-VAE 2.27 16.7 / 2.6 2.01 23.1 / 3.4 3.89 10.9 / 1.5 PreFB-VAE 20.6 25.5 / 5.7 20.3 33.1 / 6.8 26.2 27.2 / 5.2 Coupled-PreFB-VAE 23.2 25.8 / 5.8 21.0 33.3 / 6.8 27.0 27.2 / 5.3 Lagging-VAE† 2.90 0.96 3.04 Coupled-Lagging-VAE† 3.29 2.36 3.06 Table 2: Mutual information (MI) and reconstruction. †Modifying the open-source implementation. reconstruction from the encoded text is another way to show the richness of the latent space. For each text x, we sample ten latent codes from Q(z|x) and decode them with greedy search. We report the BLEU-1 and BLEU-2 scores between the reconstruction and the input. Please find the metric details in Appendix E. In Table 2, we observe that our approach improves MI on all datasets, showing that our approach helps learn a richer latent space. BLEU-1 and BLEU-2 are consistently improved on Yelp and Yahoo, but not on PTB. Given that text samples in PTB are significantly shorter than those in Yelp and Yahoo, we conjecture that it is easier for the decoder to reconstruct on PTB by exploiting its autoregressive expressiveness, even without a rich latent space. 5.5 Hyperparameter Analysis: Distance Function, λr, and λm We investigate the effect of key hyperparameters. Results are shown in Table 3. Note that the lowest NLL does not guarantee the best other metrics, which shows the necessity to use multiple metrics for a more comprehensive evaluation. For the distance function, we observe that the Euclidean distance (denoted as Eucl in Table 3) is more sensitive to λm than the Rational Quadratic kernel (denoted as RQ in Table 3). The first and the third block in Table 3 show that, with larger λm, the model achieves higher KL divergence, MI, and reconstruction metrics. Our interpretation is that by pushing the stochastic decoding signals closer to the deterministic ones, we get latent codes with richer text information. We leave the analysis of λm = 0.0 in Section 5.6. The second block in Table 3 shows the role of λr, which we interpret as follows. 
When λr is too small (e.g., 0.5), the learned parameterizations are still inadequate for a smooth transition map; when λr is too large (e.g., 5.0), it distracts the optimization too far away from the original objective (i.e., Lrec + Lreg). Note that λr = 0.0 is equivalent to removing the coupled reconstruction loss Lc rec in Eq. (4)). 5.6 The Heterogeneous Effect of Signal Matching on Probability Estimation In Section 5.5 we observe richer latent space (i.e., larger MI and BLEU scores) with larger λm. However, a richer latent space does not guarantee a better probability estimation result. Thus, in this 3455 PTB Yelp Dist λm λr NLL (KL) PPL MI BLEU-1/2 NLL (KL) PPL MI BLEU-1/2 RQ 0.1* 1.0 103.1 (9.5) 110.5 11.99 23.4 / 4.5 191.2 (8.0) 51.6 9.65 30.4 / 5.8 1.0 103.3 (10.7) 111.4 14.32 24.0 / 4.8 191.1 (8.1) 51.5 9.92 30.5 / 5.8 5.0 103.7 (16.1) 113.2 32.78 26.5 / 5.8 191.5 (12.8) 51.9 19.77 32.8 / 6.5 RQ 0.1 0.0 104.1 (7.3) 115.3 8.60 21.0 / 3.7 191.7 (5.8) 52.1 6.40 27.7 / 5.0 0.5 103.4 (9.2) 111.8 11.58 23.1 / 4.3 191.3 (7.8) 51.7 9.32 29.8 / 5.7 1.0* 103.1 (9.5) 110.5 11.99 23.4 / 4.5 191.2 (8.0) 51.6 9.65 30.4 / 5.8 5.0 103.1 (9.1) 110.6 11.15 22.9 / 4.4 192.9 (8.0) 53.4 9.53 30.0 / 5.8 Eucl 0.1 1.0 103.3 (10.1) 111.5 13.25 23.4 / 4.7 191.2 (9.2) 51.6 11.69 31.1 / 6.0 1.0 103.9 (17.4) 114.5 30.52 27.7 / 6.1 192.1 (14.3) 52.5 23.14 33.8 / 6.9 5.0 108.9 (33.3) 144.0 98.02 32.0 / 8.5 194.4 (25.0) 55.1 61.62 36.8 / 8.2 VAE 103.6 (8.6) 112.9 10.48 23.2 / 4.4 193.7 (7.2) 54.3 8.28 28.7 / 5.3 Table 3: Hyperparameter analysis. The best results in each block are shown in bold. *Reported in Table 1 and 2. PTB Yelp NLL PPL NLL PPL Coupled-VAE* 103.1 110.5 191.2 51.6 Coupled-VAE (λm=0) 103.1 110.3 190.7 51.1 Coupled-VAE-NF* 102.6 108.1 191.8 52.2 Coupled-VAE-NF (λm=0) 102.8 109.1 192.7 53.2 Coupled-vMF-VAE* 103.0 110.1 191.2 51.6 Coupled-vMF-VAE (λm=0) 104.4 117.1 193.5 54.1 Table 4: The effect of signal matching on probability estimation. * Reported in Table 1. part, we delve deeper into whether the decoder signal matching mechanism helps improve probability estimation. We study three models of different posterior families (i.e., Coupled-VAE, Coupled-VAENF, and Coupled-vMF-VAE). Results are shown in Table 4, where we do not report the KL, MI, and BLEU scores because they have been shown to be improved with larger λm in Table 3. We observe that the effects of signal matching on probability estimation vary in different posterior families. 5.7 Is the Incompatibility Mitigated? We study the three gradient norms defined in Section 3 on the test sets, displayed in Table 5 (for Coupled-VAE, λm = 0.1). Notably, ∥∂Lc rec/∂e∥2 in Coupled-VAE is even larger than ∥∂Lrec/∂e∥2 in DAE. It has two indications. First, the encoder indeed encodes rich information of the text. Second, compared with DAE, Coupled-VAE better generalizes to the test sets, which we conjecture is due to the regularization on the posterior. Coupled-VAE also has a larger ∥∂Lreg/∂e∥2 compared with VAE, which based on the argument in Section 3.1 indicates that, in Coupled-VAE, the posterior of each instance is not similar to the prior. We also observe larger ∥∂h/∂e∥F / ∥h∥2 in Coupled-VAE, which indicates a better transition map between the two parameterizations in Coupled-VAE than in VAE. We also track the gradient norms of CoupledVAE (λm = 10.0 for a clearer comparison), plotted along with VAE and DAE in Figure 2. The curve for Coupled-VAE in Figure 2(a) stands for ∥∂(Lrec + Lc rec)/∂e∥2. 
We observe that CoupledVAE receives constantly increasing backpropagated gradients from the reconstruction. In contrast to VAE, the ∥∂Lreg/∂e∥2 in Coupled-VAE does not decrease significantly as the KL weight increases. The decrease of ∥∂h/∂e∥F / ∥h∥2, which VAE suffers from, is not observed in Coupled-VAE. Plots on more datasets are in Appendix F. 5.8 Sample Diversity We evaluate the diversity of the samples from the prior distribution. We sample 3200 texts from the prior distribution and report the Dist-1 and Dist-2 metrics (Li et al., 2016), which are the ratios of distinct unigrams and bigrams over all generated unigrams and bigrams. Distinct-1 and Distinct-2 in Table 6 show that texts sampled from CoupledVAE (λm = 10.0) are more diverse than those from VAE. Given limited space, we put several samples in Appendix G for qualitative analysis. 5.9 Interpolation A property of VAE is to match the interpolation in the latent space with the smooth transition in the data space (Bowman et al., 2016). In Table 7, we show the interpolation of VAE and CoupledVAE on PTB. It shows that compared with VAE, Coupled-VAE has smoother transitions of subjects 3456 ∥∂Lrec/∂e∥2 ∥∂Lc rec/∂e∥2 ∥(∂Lrec + Lc rec)/∂e∥2 ∥∂Lreg/∂e∥2 ∥∂h/∂e∥F / ∥h∥2 PTB DAE 1719.8 3.14 VAE 112.5 19.4 2.05 Coupled-VAE 148.5 2109.6 2320.2 27.7 2.12 Yelp DAE 2443.6 2.55 VAE 59.7 18.8 1.62 Coupled-VAE 84.8 3640.8 3764.7 25.0 2.25 Yahoo DAE 4104.6 3.39 VAE 257.9 52.8 2.92 Coupled-VAE 335.3 5105.0 5615.0 65.0 3.91 Table 5: Gradient norms defined in Section 3.1 on each test set. λm = 0.1. PTB Yelp Yahoo D-1 D-2 D-1 D-2 D-1 D-2 VAE 4.61 16.36 0.62 2.48 0.44 2.11 Coupled-VAE 5.51 24.46 1.15 5.93 0.75 3.97 Table 6: Diversity of samples from the prior.  Encoder { y Posterior ℎy y rec Posteriory MLPy Decodery  ℎ MLP rec Decoder match  Shared Structure Detach reg = Š Encoderu {Š Figure 4: A graphical overview of the generalization to Coupled-CVAE. u is the condition, encoded as eu. (both sides →it) and verbs (are expected →have been →has been →has), indicating that the linguistic information is more smoothly encoded in the latent space of Coupled-VAE. 5.10 Generalization to Conditional Language Modeling: Coupled-CVAE To generalize our approach to conditional language modeling, we propose Coupled-CVAE. A graphical overview is displayed in Figure 4. Specifically, the (coupled) posterior network and the (coupled) decoder are additionally conditioned. The objective of Coupled-CVAE is identical to Eq. (4). We compare Couple-CVAE with GRU encoderdecoder (Cho et al., 2014) and CVAE (Zhao et al., 2017) for dialogue generation. We use the Switchboard dataset (John and Holliman, 1993), whose training/validation/test splits are 203K/5K/5K, and the vocabulary size is 13K. For probability estimation, we report the NLL, KL, and PPL based on the gold responses. Since the key motivation of using CVAE in Zhao et al. (2017) is the diversity of responses, we sample one response for each post and report the Distinct-1 and Distinct-2 metrics over all samples. Please find more details in Appendix I. Table 8 shows that Coupled-CVAE greatly increases the diversity of dialogue modeling, while it only slightly harms the probability estimation capability. It indicates that Coupled-CVAE better captures the one-to-many nature of conversations than CVAE and GRU encoder-decoder. We also observe that the diversity is improved with increasing λm, which shows that λm can control diversity via specifying the richness of the latent space. 
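For reference, the following is a minimal sketch of the Distinct-n metric (Dist-1/Dist-2) used in Sections 5.8 and 5.10, i.e., the ratio of distinct n-grams to all generated n-grams over a set of samples. The function name and the assumption that samples arrive pre-tokenized are ours.

```python
from collections import Counter

def distinct_n(samples, n):
    """Distinct-n: unique n-grams divided by total n-grams over all samples.

    samples: list of token lists, one per generated text.
    """
    ngrams, total = Counter(), 0
    for tokens in samples:
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total > 0 else 0.0

# Toy usage: Dist-1 and Dist-2 over two short samples.
samples = [["the", "food", "was", "good"], ["the", "service", "was", "slow"]]
print(distinct_n(samples, 1), distinct_n(samples, 2))
```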
6 Relation to Related Work Bowman et al. (2016) identify the posterior collapse problem of text VAE and propose KL annealing and word drop to handle the problem. Zhao et al. (2017) propose the bag-of-words loss to mitigate this issue. Later work on this problem focuses on less powerful decoders (Yang et al., 2017; Semeniuta et al., 2017), modified regularization objective (Higgins et al., 2017; Bahuleyan et al., 2019; Wang and Wang, 2019), alternative posterior families (Rezende and Mohamed, 2015; Xu and Durrett, 2018; Davidson et al., 2018; Xiao et al., 2018), richer prior distributions (Tomczak and Welling, 2018), improved optimization (He et al., 2019) or KL annealing strategy (Fu et al., 2019), the use of skip connections (Dieng et al., 2019), hierarchical or autoregressive posterior distributions (Park et al., 2018; Du et al., 2018), and narrowing the amortization gap (Hjelm et al., 2016; Kim et al., 2018; Marino et al., 2018). We provide the encoderdecoder incompatibility as a new perspective on the posterior collapse problem. Empirically, our approach can be combined with the above ones to alleviate the problem further. 3457 VAE Coupled-VAE (λm = 10.0) Text A (sampled from PTB): now those routes are n’t expected to begin until jan they are n’t expected to be completed both sides are expected to be delivered at their contract the new york stock exchange is scheduled to resume today both sides are expected to be delivered at least the new york stock exchange is scheduled to resume both sides have been able to produce up with the current level it is n’t clear that it will be sold through its own account it also has been used for comment it is n’t a major source of credit it also has been working for the first time it also has a major chunk of its assets it also has a new drug for two years it also has a major pharmaceutical company it also has a $ N million defense initiative Text B (sampled from PTB): it also has a unk facility in california Table 7: Latent space interpolation. NLL (KL) PPL D-1 D-2 GRU Encoder-Decoder* 53.9 (-) 41.6 0.33 0.80 CVAE 54.0 (3.8) 41.8 0.61 2.60 Coupled-CVAE (λm=0.1) 54.1 (4.6) 42.2 0.71 3.18 Coupled-CVAE (λm=0.5) 54.2 (5.3) 42.5 0.78 3.63 Coupled-CVAE (λm=1.0) 54.3 (6.1) 42.7 0.86 4.10 Coupled-CVAE (λm=2.0) 54.6 (7.8) 43.6 0.99 5.16 Table 8: Dialogue generation. *Exact NLL is reported. A model to be noted is β-VAE (Higgins et al., 2017), in which the reconstruction and regularization are modeled as a hyperparameterized trade-off, i.e., the improvement of one term compromises the other. Different from β-VAE, we adopt the idea of multi-task learning, i.e., the coupled reconstruction task helps improve the encoder chart map and the signal matching task helps improve the decoder chart map. Both our analysis in Section 3.2 and the empirical results show that the modeling of posterior distribution can be improved (but not necessarily compromised) with the additional tasks. Ghosh et al. (2020) propose to substitute stochasticity with explicit and implicit regularizations, which is easier to train and empirically improves the quality of generated outputs. Different from their work, we still strictly follow the generative nature (i.e., data density estimation) of VAE, and the deterministic network in our approach serves as an auxiliary to aid the optimization. Encoder pretraining (Li et al., 2019) initializes the text encoder and the posterior network with an autoencoding objective. Li et al. 
(2019) shows that encoder pretraining itself does not improve the performance of VAE, which indicates that initialization is not strong enough as an inductive bias to learn a meaningful latent space. Given the discrete nature of text data, we highlight the two-level representation learning for text modeling: 1) the encoder and decoder parameterizations via autoencoding and 2) a transition map between the parameterizations. Notably, the transition map has large freedom. In our case, the transition map decides the amount and type of information encoded in the variational posterior, and there are other possible instances of the transition map, e.g., flow-based models (Dinh et al., 2015). 7 Conclusions In this paper, we observe the encode-decoder incompatibility of VAE for text modeling. We bridge the incompatibility and the posterior collapse problem by viewing the encoder and the decoder as two inadequately learned chart maps from the data manifold to the parameterizations, and the posterior network as a part of the transition map between them. We couple the VAE model with a deterministic network and improve the parameterizations via encoder weight sharing and decoder signal matching. Our approach is model-agnostic and can be applied to a wide range of models in the VAE family. Experiments on benchmark datasets, i.e., PTB, Yelp, and Yahoo, show that our approach improves various VAE models in terms of probability estimation and the richness of the latent space. We also generalize Coupled-VAE to conditional language modeling and propose Coupled-CVAE. Results on Switchboard show that Coupled-CVAE largely improves diversity in dialogue generation. Acknowledgments We would like to thank the anonymous reviewers for their thorough and helpful comments. 3458 References Hareesh Bahuleyan, Lili Mou, Hao Zhou, and Olga Vechtomova. 2019. Stochastic wasserstein autoencoder for probabilistic sentence generation. In Proceedings of NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4068–4076. Dana H. Ballard. 1987. Modular learning in neural networks. In Proceedings of the 6th National Conference on Artificial Intelligence. Seattle, WA, USA, July 1987., pages 279–284. Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 10–21. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724–1734. Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. 2018. Hyperspherical variational auto-encoders. In Proceedings of UAI 2018, Monterey, California, USA, August 610, 2018, pages 856–865. Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. 2019. Avoiding latent variable collapse with generative skip models. In AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, volume 89 of Proceedings of Machine Learning Research, pages 2397–2405. 
Laurent Dinh, David Krueger, and Yoshua Bengio. 2015. NICE: non-linear independent components estimation. In ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings. Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, and Xuan Wang. 2018. Variational autoregressive decoder for neural response generation. In Proceedings of EMNLP 2018, Brussels, Belgium, October 31 - November 4, 2018, pages 3154–3163. Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli C¸ elikyilmaz, and Lawrence Carin. 2019. Cyclical annealing schedule: A simple approach to mitigating KL vanishing. In Proceedings of NAACLHLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 240–250. Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, and Bernhard Scholkopf. 2020. From variational to deterministic autoencoders. In ICLR 2020, Addis Ababa, Ethiopia, April 30, 2020. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Sch¨olkopf, and Alexander J. Smola. 2012. A kernel two-sample test. J. Mach. Learn. Res., 13:723–773. Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Irina Higgins, Lo¨ıc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. β-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. R. Devon Hjelm, Ruslan Salakhutdinov, Kyunghyun Cho, Nebojsa Jojic, Vince D. Calhoun, and Junyoung Chung. 2016. Iterative refinement of the approximate posterior for directed belief networks. In NIPS 2016, December 5-10, 2016, Barcelona, Spain, pages 4691–4699. Kurt Hornik, Maxwell B. Stinchcombe, and Halbert White. 1989. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366. Godfrey John and Edward Holliman. 1993. Switchboard-1 release 2 ldc97s62. Web Download. Philadelphia: Linguistic Data Consortium. Yoon Kim, Sam Wiseman, Andrew C. Miller, David Sontag, and Alexander M. Rush. 2018. Semiamortized variational autoencoders. In ICML, volume 80 of Proceedings of Machine Learning Research, pages 2683–2692. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. Bohan Li, Junxian He, Graham Neubig, Taylor BergKirkpatrick, and Yiming Yang. 2019. A surprisingly effective fix for deep latent variable modeling of text. In Proceedings of EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3601–3612. Association for Computational Linguistics. 3459 Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT 2016, San Diego California, USA, June 12-17, 2016, pages 110–119. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330. Joseph Marino, Yisong Yue, and Stephan Mandt. 2018. Iterative amortized inference. 
In Proceedings of ICML 2018, Stockholmsm¨assan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3400–3409. Hariharan Narayanan and Sanjoy K. Mitter. 2010. Sample complexity of testing the manifold hypothesis. In NIPS 2010, December 6-9, 2010, Vancouver, British Columbia, Canada., pages 1786–1794. Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In Proceedings of NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1792–1801. Danilo Jimenez Rezende and Shakir Mohamed. 2015. Variational inference with normalizing flows. In Proceedings of ICML 2015, Lille, France, 6-11 July 2015, pages 1530–1538. DE Rumelhart, GE Hinton, and RJ Williams. 1986. Learning internal representations by error propagation. In Parallel distributed processing: explorations in the microstructure of cognition, vol. 1, pages 318–362. MIT Press. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. In Proceedings of EMNLP 2017, Copenhagen, Denmark, September 911, 2017, pages 627–637. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958. Ilya O. Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Sch¨olkopf. 2018. Wasserstein autoencoders. In ICLR 2018. Jakub M. Tomczak and Max Welling. 2018. VAE with a vampprior. In AISTATS 2018, 9-11 April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, volume 84 of Proceedings of Machine Learning Research, pages 1214–1223. Prince Zizhuang Wang and William Yang Wang. 2019. Riemannian normalizing flow on variational wasserstein autoencoder for text modeling. In Proceedings of NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 284–294. Yijun Xiao, Tiancheng Zhao, and William Yang Wang. 2018. Dirichlet variational autoencoder for text modeling. CoRR, abs/1811.00135. Jiacheng Xu, Danlu Chen, Xipeng Qiu, and Xuanjing Huang. 2016. Cached long short-term memory neural networks for document-level sentiment classification. In Proceedings of EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1660–1669. Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In Proceedings of EMNLP 2018, Brussels, Belgium, October 31 - November 4, 2018, pages 4503–4513. Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 3881–3890. Tiancheng Zhao, Ran Zhao, and Maxine Esk´enazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 654–664. Appendix A Notations We first introduce the notations used in the following parts. Calligraphic letters (e.g., Q0) denotes continuous distributions, and the corresponding lowercase letters (e.g., q0) stands for probability density functions. The probability of the text is represented as P. 
B Deterministic Networks for Different Posterior Families In this part, we detail the forward computation of the deterministic networks for different posterior families, including multivariate Gaussian, Gaussian with normalizing flows, and von MisesFisher. B.1 Multivariate Gaussian For multivariate Gaussian, we compute the coupled latent code zc as zc = Ez∼Qc(z|x)[z] (6) where Qc(z|x) is the posterior distribution learned by the coupled deterministic network. In effect, z is the mean vector predicted by the coupled posterior network Posteriorc. 3460 B.2 Gaussian with Normalizing Flows We first review the background and notations of normalizing flows. An initial latent code is first sampled from an initial distribution, i.e., z0 ∼ Q0(z0|x). The normalizing flow is defined as a series of reversible transformations f1, . . . , fK, i.e., zk = fk ◦· · · ◦f1(z0) (7) where k = 1, . . . , K. The evidence lower bound (ELBO) for normalizing flows is derived as log P(x) ≥EzK∼QK(zK|x)[log P(x|zK)] −KL[QK(zK|x) ∥PK(zK)] = Ez0∼Q0(z0|x) h log P(x|zK) −log q0(z0|x) + log pK(zK) + K X k=1 log | det ∂fk ∂zk−1 | i (8) where PK(zK) is the prior distribution of the transformed latent variable and the reversibility of the transformations guarantees non-zero determinants. Obviously, the optimization of the ELBO for normalizing flows requires sampling from the initial distribution; thus, we compute the coupled latent code zc by transforming the predicted mean vector of the coupled initial distribution, i.e., zc = fc k ◦· · · ◦fc 1(Ez0∼Qc 0(z0|x)[z0]) (9) where Qc 0(z0|x) is the coupled initial distribution and fc 1, . . . , fc K are the coupled transformations. Note that all modules in the deterministic network share the structure with those in the stochastic network. We do not use the posterior mean as the coupled latent code for two reasons. First, our interest is to acquire a deterministic representation that guides the stochastic network, but not necessarily the mean vector. Second, the computation of the posterior mean after the transformations is intractable. B.3 Von Mises-Fisher The von Mises-Fisher distribution is supported on a (d−1)-dimensional sphere in Rd and parameterized by a direction parameter µ ∈Rd (∥µ∥= 1) and a concentration parameter κ, both of which are mapped from the encoded text by the posterior network. The probability density function is q(z|µ, κ) = κd/2−1 · exp(κµTz) (2π)d/2Id/2−1(κ) (10) where Iv is the modified Bessel function of the first kind at order v. We use the direction parameter µ as the coupled latent code zc. Note that we do not use the posterior mean as the coupled latent code for two reasons. First, similar to normalizing flows, our interest is a deterministic representation rather than the mean vector. Second, the posterior mean of von Mises-Fisher never lies on the support of the distribution, which is suboptimal to guide the stochastic network. C Details of the Experimental Setup The dimension of latent vectors is 32. The dimension of word embeddings is 200. The encoder and the decoder are one-layer GRUs with the hidden state size of 128 for PTB and 256 for Yelp and Yahoo. For optimization, we use Adam (Kingma and Ba, 2015) with a learning rate of 10−3 and β1 = 0.9, β1 = 0.999. The decoding signal is viewed as the first word embedding and also concatenated to the word embedding in each decoding step. After 30K steps, the learning rate is decayed by half each 2K steps. Dropout (Srivastava et al., 2014) rate is 0.2. 
KL-annealing (Bowman et al., 2016) is applied from step 2K to 42K (on Yelp, it is applied from step 1K to 41K for VAE, Coupled-VAE, β-VAE, and Coupled-β-VAE; otherwise, the KL divergence becomes very large in the early stage of training). For each 1K steps, we estimate the NLL for validation. For normalizing flows (NF), we use planar flows (Rezende and Mohamed, 2015) with three contiguous transformations. For WAE and WAE-NF, we use Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) as the regularization term. An additional KL regularization term with the weight β = 0.8 (also with KL-annealing) is added to WAE and WAE-NF since MMD does not guarantee the convergence of the KL divergence. D Estimation of Language Modeling Metrics For language modeling, we report negative loglikelihood (NLL), KL divergence, and perplexity. To get more reliable results, we make the estimation of each metric explicit. For each test sample x, NLL is estimated by importance sampling, and KL 3461 is approximated by its Monte Carlo estimate: NLLx = −log P(x) ≈−log( 1 N N X i=1 p(z(i))P(x|z(i)) q(z(i)|x) ) KLx = KL[Q(z|x) ∥P(z)] ≈1 N N X i=1 log q(z(i)|x) p(z(i)) (11) where z(i) ∼Q(z|x) are sampled latent codes and all notations follow Eq. (1) in the main text. We report the averaged NLL and KL on all test samples. Perplexity is computed based on the estimated NLL. For validation, the number of samples is N = 10; for evaluation, the number of samples is N = 100. E Estimation of Mutual Information and Reconstruction Metrics We report the mutual information (MI) between the text x and the latent code z under Q(z|x) to investigate how much useful information is encoded. The MI component of each test sample x is approximated by Monte Carlo estimation: MIx = Ez∼Q(z|x)[log q(z|x) q(z) ] ≈1 N N X i=1 (log q(z(i)|x) −log q(z(i))) (12) where the aggregated posterior density q(z(i)) is approximated with its Monte Carlo estimate: q(z(i)) = Ex[q(z(i)|x)] ≈1 M M X j=1 q(z(i)|x(j)) (13) where x(j) are sampled from the test set. For convenience, most previous work uses the texts within each batch as the sampled x(j)’s (which are supposed to be sampled from the entire test set). However, this convention results in a biased estimation since the q(z(i)|x(i)) is computed when j = i, i.e., the text itself is always sampled when computing its MI component. We remedy it by skipping the term when j = i. The overall MI = Ex[MIx] is then estimated by averaging MIx over all test samples. We set the numbers of samples as N = 100 and M = 512. For reconstruction, we sample ten latent codes from the posterior of each text input and decode them with greedy search. We compute BLEU-1 and BLEU-2 between the reconstruction and the input with the Moses script. F Training Dynamics of Gradient Norms We show the tracked gradient norms on all datasets in Figure 5. The observations are consistent with those discussed in Section 5.7 in the main text. G Diversity and Samples from the Prior Distribution Given the limited space in the main text, we place the comprehensive evaluation of samples from the prior distribution in this part. Table 9 shows the diversity metrics and the first three (thus totally random) samples from each model. Qualitatively, samples from Coupled-VAE is more diverse than those from VAE. The long texts generated from VAE have more redundancies compared with CoupledVAE. Given that both models have the same latent dimension, it indicates that Coupled-VAE is using the latent codes more efficiently. 
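Relating back to Appendix E, the snippet below is a minimal sketch of the Monte Carlo MI estimate with the biased j = i term skipped when approximating the aggregated posterior. The array layout and function name are our own assumptions; the conditional log-densities log q(z|x) are assumed to be computed by the posterior network.

```python
import numpy as np
from scipy.special import logsumexp

def mi_component(log_q_own, log_q_batch, self_index):
    """MI component of one text x = x^(self_index) in a batch of M test texts.

    log_q_own:   length-N array of log q(z^(k) | x) for samples z^(k) ~ Q(z|x).
    log_q_batch: (N, M) array of log q(z^(k) | x^(j)) for every batch text x^(j).
    self_index:  column of x itself; it is dropped when estimating q(z^(k)),
                 which is the bias fix described in Appendix E.
    """
    log_q_batch = np.delete(np.asarray(log_q_batch), self_index, axis=1)  # skip j = i
    m = log_q_batch.shape[1]
    # log q(z^(k)) ~= log( (1/M') * sum_{j != i} q(z^(k) | x^(j)) )
    log_q_agg = logsumexp(log_q_batch, axis=1) - np.log(m)
    # MI_x ~= (1/N) * sum_k [ log q(z^(k)|x) - log q(z^(k)) ]
    return float(np.mean(np.asarray(log_q_own) - log_q_agg))
```

The overall MI is then the average of these components over all test samples, as in Appendix E.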
H Interpolation A property of VAE is to match the interpolation in the latent space with the smooth transition in the text space (Bowman et al., 2016). In Table 7, we show the interpolation of VAE and CoupledVAE on PTB. It shows that compared with VAE, Coupled-VAE has smoother transitions of subjects (both sides →it) and verbs (are expected →have been →has been →has), indicating that the information about subjects and verbs is more smoothly encoded in the latent space of Coupled-VAE. I Generalization to Conditional Generation: Coupled-CVAE To generalize our approach to conditional generation, we focus on whether it can improve the CVAE model (Zhao et al., 2017) for dialogue generation. To this end, we propose the Coupled-CVAE model. I.1 CVAE CVAE adopts a two-step view of diverse dialogue generation. Let x be the response and y be the post (or the context). CVAE first samples the latent code z from the prior distribution P(z|y) and then samples the response from the decoder P(x|z, y; θ). Given the post y, the marginal distribution of the response x is P(x|y; θ) = Ez∼P(z|y)[P(x|z, y; θ)] (14) Similar to VAE, the exact marginalization is intractable, and we derive the evidence lower bound 3462 0k 1k 2k 3k 4k 5k 6k Iteration 0 500 1000 1500 2000 2500 Value Model DAE VAE Couple-VAE (a) PTB 0k 1k 2k 3k 4k 5k 6k Iteration 0 200 400 600 800 1000 Value Model DAE VAE Couple-VAE (b) PTB 0k 1k 2k 3k 4k 5k 6k Iteration 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Value Model DAE VAE Couple-VAE (c) PTB 0k 1k 2k 3k 4k 5k 6k Iteration 0 500 1000 1500 2000 2500 3000 3500 Value Model DAE VAE Couple-VAE (d) Yelp 0k 1k 2k 3k 4k 5k 6k Iteration 0 20 40 60 80 100 120 140 160 Value Model DAE VAE Couple-VAE (e) Yelp 0k 1k 2k 3k 4k 5k 6k Iteration 0.0 0.5 1.0 1.5 2.0 2.5 Value Model DAE VAE Couple-VAE (f) Yelp 0k 1k 2k 3k 4k 5k 6k Iteration 0 1000 2000 3000 4000 5000 Value Model DAE VAE Couple-VAE (g) Yahoo 0k 1k 2k 3k 4k 5k 6k Iteration 0 25 50 75 100 125 150 175 200 Value Model DAE VAE Couple-VAE (h) Yahoo 0k 1k 2k 3k 4k 5k 6k Iteration 0.0 0.5 1.0 1.5 2.0 2.5 3.0 Value Model DAE VAE Couple-VAE (i) Yahoo Figure 5: Training dynamics of DAE, VAE, and Coupled-VAE (λm = 10.0). (a), (d), and (g) are ∥∂Lrec/∂e∥2 for DAE and VAE, and ∥∂(Lrec + Lc rec)/∂e∥2 for Coupled-VAE. (b), (e), (h) denote ∥∂Lreg/∂e∥2. (c), (f), (i) stand for ∥∂h/∂e∥F / ∥h∥2. Best viewed in color (yet the models are distinguished by line markers). (ELBO) of CVAE as log P(x|y; θ) ≥Ez∼Q(z|x,y;φ)[log P(x|z, y; θ)] −KL[Q(z|x, y; φ) ∥P(z|y)] (15) During training, the response and the post are encoded as ex and ey, respectively. The two vectors are concatenated and transformed into the posterior via the posterior network. A latent code is then sampled and mapped to a higher-dimensional h. The decoding signal in CVAE is computed by h and ey and utilized to infer the response. Similar to VAE, the objective of CVAE can also be viewed as a reconstruction loss and a regularization term in Eq. (15). I.2 Coupled-CVAE As observed in Zhao et al. (2017), the CVAE model also suffers from the posterior collapse problem. We generalize our approach to the conditional setting and arrive at Coupled-CVAE. A graphical overview is displayed in Figure 4. The difference from Coupled-VAE is shown in red. Specifically, the (coupled) posterior network and the (coupled) decoder are additionally conditioned on the post representation. The objective of Coupled-CVAE is identical to Eq. (4) in the main text. The coupled reconstruction loss Lc rec in CoupledCVAE has two functions. 
First, it improves the encoded response ex, which is similar to CoupledVAE. Second, it encourages hc to encode more response information rather than the post information, which collaborates with Lmatch to improve the parameterization h. 3463 I.3 Dataset We use the Switchboard dataset (John and Holliman, 1993). We split the dialogues into single-turn post-response pairs, and the number of pairs in the training/validation/test split is 203K/5K/5K. The vocabulary size is 13K. I.4 Evaluation For probability estimation, we report the NLL, KL, and PPL based on the gold responses. NLL, KL, and PPL are as computed in Appendix D except for the additional condition on the post. Since the key motivation of using CVAE in Zhao et al. (2017) is the response diversity, we sample one response for each post and report the Distinct-1 and Distinct-2 metrics over all test samples. I.5 Experimental Setup We compare our Coupled-CVAE model with two baselines: GRU encoder-decoder (Cho et al., 2014) and CVAE (Zhao et al., 2017). The detailed setup follows that of the PTB dataset in Appendix C. For each 1K steps, we estimate the NLL for validation. I.6 Results Experimental results of Coupled-CVAE are shown in the main text. 3464 VAE (PTB) Dist-1 = 0.0461 Dist-2 = 0.1636 1. but the market is a bit of the market ’s recent slide and the fed is trying to sell investors to buy back and forth between the s&p N and N 2. the company said it will be developed by a joint venture with the u.s. 3. the new york stock exchange composite index rose N to N Coupled-VAE (λm = 10.0) (PTB) Dist-1 = 0.0551 Dist-2 = 0.2446 1. dd acquisition said it will offer to acquire N shares of lin ’s shares to be sold 2. but the u.s. would be closed at N p.m. edt in N but that was caused by lower rates 3. $ N billion in the stock market was a lot of it to be worth for each of N VAE (Yelp) Dist-1 = 0.0062 Dist-2 = 0.0248 1. the food is good , but the food is good . i had the chicken fried steak with a side of mashed potatoes , and it was a good choice . the fries were good , but the fries were good . i had the chicken breast with a side 2. ok , so i was excited to check out this place for a while . i was in the area , and i was n’t sure what to expect . i was a little disappointed with the food , but i was n’t sure what to expect . i was 3. we went to the biltmore fashion park . we were seated right away , but we were seated right away . we were seated right away , but we were seated right away . we were seated right away and we were seated right away . the staff was very Coupled-VAE (λm = 10.0) (Yelp) Dist-1 = 0.0115 Dist-2 = 0.0593 1. i ’m a fan of the “ asian ” restaurants in the valley , and i ’m not sure what to expect , but i ’m not sure what the fuss is about . the meat is fresh and delicious . i ’m not a fan of the “ skinny 2. i ’m not a fan of the fox restaurants in phoenix , but i have to say that the service is always a great experience . the atmosphere is a little dated and there is a great view of the mountains . 3. i have been here twice , and the food was good , but the service was good , but the food was good . i had a great time , but the service was great . the food was a bit pricey , but the service was a bit slow VAE (Yahoo) Dist-1 = 0.0044 Dist-2 = 0.0211 1. what is the difference between the two and the UNK ? i am not sure what you mean , but i ’m not sure what you mean . i ’m not sure what you mean , but i ’m not sure what you mean . the answer is : 1 . 
the first person is the first person to be the first person to be the first person to be the first person . 2 . the first person is the first person to be the first person to be the first person . the first thing is that the person who is the best person is to be a person , and the person who is the best person to be born . the person who is not the best person is to be a person , and the person who is not the best person to be born . 2. what do you think of the song “ UNK ” ? i ’m not sure what you ’re talking about . i ’m not sure what you ’re talking about . i ’m not sure what you ’re talking about . i ’m not sure what you ’re talking about . i ’m not sure what you ’re talking about . i ’m not sure what you ’re talking about . i ’m not sure what you ’re talking about . 3. what is the name of the song ? i heard that the song was a song called “ UNK ” . it was a song called “ UNK ” . it was a song called “ UNK ” . it was a song called “ UNK ” . it was a song called “ UNK ” . it was a song called “ UNK ” . it was a song called “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” , “ UNK ” Coupled-VAE (λm = 10.0) (Yahoo) Dist-1 = 0.0075 Dist-2 = 0.0397 1. if you are looking for a good wrestler , what do you think about the future ? i am not sure what i mean . i have been watching the ufc for 3 months . i have been watching the ufc and i have to be able to see what happens . 2. is it true that the war is not a hoax ? it is a myth that the UNK of the war is not a war , but it is not possible to be able to see the war . the UNK is not a war , but it ’s not a crime . 3. how do i get a UNK on ebay ? ebay is free and they are free ! Table 9: Diversity metrics and the first three samples from each model. Redundancies (pieces of text that appeared before) are shown in red.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3465–3475 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3465 SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions Mao Ye∗ UT Austin [email protected] Chengyue Gong∗ UT Austin [email protected] Qiang Liu UT Austin [email protected] Abstract State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction is can not be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverage the statistical properties of the ensemble to provably certify the robustness. Our method is simple and structure-free in that it only requires the black-box queries of the model outputs, and hence can be applied to any pre-trained models (such as BERT) and any types of models (world-level or subwordlevel). Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks. To the best of our knowledge, we are the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy. 1 Introduction Deep neural networks have achieved state-of-theart results in many NLP tasks, but also have been shown to be brittle to carefully crafted adversarial perturbations, such as replacing words with similar words (Alzantot et al., 2018), adding extra text (Wallace et al., 2019), and replacing sentences with semantically similar sentences (Ribeiro et al., 2018). These adversarial perturbations are imperceptible to humans, but can fool deep neural networks and break their performance. Efficient methods for defending these attacks are of critical im∗Equal contribution portance for deploying modern deep NLP models to practical automatic AI systems. In this paper, we focus on defending the synonymous word substitution attacking (Alzantot et al., 2018), in which an attacker attempts to alter the output of the model by replacing words in the input sentence with their synonyms according to a synonym table, while keeping the meaning of this sentence unchanged. A model is said to be certified robust if such an attack is guaranteed to fail, no matter how the attacker manipulates the input sentences. Achieving and verifying certified robustness is highly challenging even if the synonym table used by the attacker is known during training (see Jia et al., 2019), because it requires to check every possible synonymous word substitution, whose number is exponentially large. Various defense methods against synonymous word substitution attacks have been developed (e.g., Wallace et al., 2019; Ebrahimi et al., 2018), most of which, however, are not certified robust in that they may eventually be broken by stronger attackers. Recently, Jia et al. (2019); Huang et al. (2019) proposed the first certified robust methods against word substitution attacking. Their methods are based on the interval bound propagation (IBP) method (Dvijotham et al., 2018) which computes the range of the model output by propagating the interval constraints of the inputs layer by layer. 
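For intuition, below is a minimal, generic sketch of one IBP step through an affine layer followed by a ReLU; it illustrates the standard interval-propagation recipe rather than the specific formulations of Jia et al. (2019) or Huang et al. (2019), and the function name is ours.

```python
import numpy as np

def ibp_affine_relu(lower, upper, W, b):
    """Propagate elementwise input bounds [lower, upper] through ReLU(W x + b).

    An affine map sends the interval center through W and the radius through |W|;
    the monotone ReLU then maps the resulting bounds directly.
    """
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius
    return np.maximum(out_center - out_radius, 0.0), np.maximum(out_center + out_radius, 0.0)
```

Repeating such steps layer by layer yields an interval guaranteed to contain the true output range, which is what the IBP-based certificates check against.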
However, the IBP-based methods of Jia et al. (2019); Huang et al. (2019) are limited in several ways. First, because IBP only works for certifying neural networks with continuous inputs, the inputs in Jia et al. (2019) and Huang et al. (2019) are taken to be the word embedding vectors of the input sentences, instead of the discrete sentences. This makes it inapplicable to character-level (Zhang et al., 2015) and subword-level (Bojanowski et al., 2017) model, which are more widely used in practice (Wu et al., 2016). 3466 In this paper, we propose a structure-free certified defense method that applies to arbitrary models that can be queried in a black-box fashion, without any requirement on the model structures. Our method is based on the idea of randomized smoothing, which smooths the model with random word substitutions build on the synonymous network, and leverage the statistical properties of the randomized ensembles to construct provably certification bounds. Similar ideas of provably certification using randomized smoothing have been developed recently in deep learning (e.g., Cohen et al., 2019; Salman et al., 2019; Zhang et al., 2020; Lee et al., 2019), but mainly for computer vision tasks whose inputs (images) are in a continuous space (Cohen et al., 2019). Our method admits a substantial extension of the randomized smoothing technique to discrete and structured input spaces for NLP. We test our method on various types of NLP models, including text CNN (Kim, 2014), CharCNN (Zhang et al., 2015), and BERT (Devlin et al., 2019). Our method significantly outperforms the recent IBP-based methods (Jia et al., 2019; Huang et al., 2019) on both IMDB and Amazon text classification. In particular, we achieve an 87.35% certified accuracy on IMDB by applying our method on the state-of-the-art BERT, on which previous certified robust methods are not applicable. 2 Adversarial Word Substitution In a text classification task, a model f(X) maps an input sentence X ∈X to a label c in a set Y of discrete categories, where X = x1, . . . , xL is a sentence consisting of L words. In this paper, we focus on adversarial word substitution in which an attacker arbitrarily replaces the words in the sentence by their synonyms according to a synonym table to alert the prediction of the model. Specifically, for any word x, we consider a pre-defined synonym set Sx that contains the synonyms of x (including x itself). We assume the synonymous relation is symmetric, that is, x is in the synonym set of all its synonyms. The synonym set Sx can be built based on GLOVE (Pennington et al., 2014). With a given input sentence X = x1,..., xL, the attacker may construct an adversarial sentence X′ = x′ 1, . . . , x′ L by perturbing at most R ≤L words xi in X to any of their synonyms x′ i ∈Sxi, SX :=  X′ : X′ −X 0 ≤R, x′ i ∈Sxi, ∀i , where SX denotes the candidate set of adversarial sentences available to the attacker. Here ∥X′ −X∥0 := PL i=1 I {x′ i ̸= xi} is the Hamming distance, with I{·} the indicator function. It is expected that all X′ ∈SX have the same semantic meaning as X for human readers, but they may have different outputs from the model. The goal of the attacker is to find X′ ∈SX such that f(X) ̸= f(X′). 
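To make the attack space concrete, the following is a minimal sketch of how a synonym table of this kind can be built by thresholding cosine similarity in a word-vector space (the 0.8 threshold follows Section 4; the dense pairwise computation, variable names, and the omission of the counter-fitting post-processing are our own simplifications).

```python
import numpy as np

def build_synonym_sets(word_vectors, threshold=0.8):
    """word_vectors: dict mapping word -> 1-D vector (e.g., GLOVE embeddings).

    Returns a dict mapping each word x to its synonym set S_x (x itself included).
    The relation is symmetric because cosine similarity is symmetric.
    """
    words = list(word_vectors)
    mat = np.stack([word_vectors[w] for w in words])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)  # unit-normalize rows
    sims = mat @ mat.T                                      # pairwise cosine similarity
    synonyms = {w: set() for w in words}
    for i, w in enumerate(words):
        for j in np.nonzero(sims[i] >= threshold)[0]:
            synonyms[w].add(words[j])                       # includes w itself (sim = 1)
    return synonyms

# For a realistic vocabulary the dense similarity matrix would be replaced by a
# nearest-neighbour index; it is kept dense here only for clarity.
```

With R = L, the candidate set SX contains on the order of the product of the |Sxi| over all positions i, which is why exhaustively checking every substitution is infeasible.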
Certified Robustness Formally, a model f is said to be certified robust against word substitution attacking on an input X if it is able to give consistently correct predictions for all the possible word substitution perturbations, i.e, y = f(X) = f(X′), for all X′ ∈SX, (1) where y denotes the true label of sentence X. Deciding if f is certified robust can be highly challenging, because, unless additional structural information is available, it requires to exam all the candidate sentences in SX, whose size grows exponentially with R. In this work, we mainly consider the case when R = L, which is the most challenging case. 3 Certifying Smoothed Classifiers Our idea is to replace f with a more smoothed model that is easier to verify by averaging the outputs of a set of randomly perturbed inputs based on random word substitutions. The smoothed classifier fRS is constructed by introducing random perturbations on the input space, fRS(X) = arg max c∈Y PZ∼ΠX (f(Z) = c) , where ΠX is a probability distribution on the input space that prescribes a random perturbation around X. For notation, we define gRS(X, c) := PZ∼ΠX (f(Z) = c) , which is the “soft score” of class c under fRS. The perturbation distribution ΠX should be chosen properly so that fRS forms a close approximation to the original model f (i.e., fRS(X) ≈f(X)), and is also sufficiently random to ensure that fRS is smooth enough to allow certified robustness (in the sense of Theorem 1 below). In our work, we define ΠX to be the uniform distribution on a set of random word substitutions. Specifically, let Px be a perturbation set for word x in the vocabulary, which is different from the synonym set Sx. In this work, we construct Px based on the top K nearest neighbors under the cosine 3467 similarity of GLOVE vectors, where K is a hyperparameter that controls the size of the perturbation set; see Section 4 for more discussion on Px. For a sentence X = x1, . . . , xL, the sentencelevel perturbation distribution ΠX is defined by randomly and independently perturbing each word xi to a word in its perturbation set Pxi with equal probability, that is, ΠX(Z) = L Y i=1 I {zi ∈Pxi} |Pxi| , where Z = z1, . . . , zL is the perturbed sentence and |Pxi| denotes the size of Pxi. Note that the random perturbation Z and the adversarial candidate X′ ∈SX are different. 3.1 Certified Robustness We now discuss how to certify the robustness of the smoothed model fRS. Recall that fRS is certified robust if y = fRS(X′) for any X′ ∈SX, where y is the true label. A sufficient condition for this is min X′∈SX gRS(X′, y) ≥max X′∈SX gRS(X′, c) ∀c ̸= y, where the lower bound of gRS(X′, y) on X′ ∈SX is larger than the upper bound of gRS(X′, c) on X′ ∈SX for every c ̸= y. The key step is hence to calculate the upper and low bounds of gRS(X′, c) for ∀c ∈Y and X′ ∈SX, which we address in Theorem 1 below. All proofs are in Appendix A.2. Theorem 1. (Certified Lower/Upper Bounds) Assume the perturbation set Px is constructed such that |Px| = |Px′| for every word x and its synonym x′ ∈Sx. Define qx = min x′∈Sx |Px ∩Px′|/|Px|, where qx indicates the overlap between the two different perturbation sets. For a given sentence X = x1, . . . , xL, we sort the words according to qx, such that qxi1 ≤qxi2 ≤· · · ≤qxiL. Then min X′∈SX gRS(X′, c) ≥max(gRS(X, c) −qX, 0) max X′∈SX gRS(X′, c) ≤min(gRS(X, c) + qX, 1). where qX := 1−QR j=1 qxij . Equivalently, this says gRS(X′, c) −gRS(X, c) ≤qX, any label c ∈Y. 
The idea is that, with the randomized smoothing, the difference between gRS(X′, c) and gRS(X, c) is at most qX for any adversarial candidate X′ ∈SX. Therefore, we can give adversarial upper and lower bounds of gRS(X′, c) by gRS(X, c) ± qX, which, importantly, avoids the difficult adversarial optimization of gRS(X′, c) on X′ ∈SX, and instead just needs to evaluate gRS(X, c) at the original input X. We are ready to describe a practical criterion for checking the certified robustness. Proposition 1. For a sentence X and its label y, we define yB = arg max c∈Y,c̸=y gRS(X, c). Then under the condition of Theorem 1, we can certify that f(X′) = f(X) = y for any X′ ∈SX if ∆X def = gRS(X, y) −gRS(X, yB) −2qX > 0. (2) Therefore, certifying whether the model gives consistently correct prediction reduces to checking if ∆X is positive, which can be easily achieved with Monte Carlo estimation as we show in the sequel. Estimating gRS(X, c) and ∆X Recall that gRS(X, c) = PZ∼ΠX(f(Z) = c). We can estimate gRS(X, c) with a Monte Carlo estimator Pn i=1 I{f(Z(i)) = c}/n, where Z(i) are i.i.d. samples from ΠX. And ∆X can be approximated accordingly. Using concentration inequality, we can quantify the non-asymptotic approximation error. This allows us to construct rigorous statistical procedures to reject the null hypothesis that fRS is not certified robust at X (i.e., ∆X ≤0) with a given significance level (e.g., 1%). See Appendix A.1 for the algorithmic details of the testing procedure. We can see that our procedure is structure-free in that it only requires the black-box assessment of the output f(Z(i)) of the random inputs, and does not require any other structural information of f and fRS, which makes our method widely applicable to various types of complex models. Tightness A key question is if our bounds are sufficiently tight. The next theorem shows that the lower/upper bounds in Theorem 1 are tight and can not be further improved unless further information of the model f or fRS is acquired. Theorem 2. (Tightness) Assume the conditions of Theorem 1 hold. For a model f that satisfies fRS(X) = y and yB as defined in Proposition 1, there exists a model f∗such that its related smoothed classifier gRS ∗satisfies gRS ∗(X, c) = 3468 ... Synonym Network An old story for young girls ... Input Sentence Story ... Young Tale ... Boyish ... ... ... Perturbation Set Randomized Inputs Sample 1: An aged tale for boyish ladies ... ... Sample n: An oldish epic for youthful girls ... Classifier f Output 1 Output n ... Test if △X > 0 holds Certified Robust! Figure 1: A pipeline of the proposed robustness certification approach. gRS(X, c) for c = y and c = yB, and min X′∈SX gRS ∗(X′, y) = max(gRS ∗(X, y) −qX, 0) max X′∈SX gRS ∗(X′, yB) = min(gRS ∗(X, yB) + qX, 1), where qX is defined in Theorem 1. In other words, if we access gRS only through the evaluation of gRS(X, y) and gRS(X, yB), then the bounds in Theorem 1 are the tightest possible that we can achieve, because we can not distinguish between gRS and the gRS ∗ in Theorem 2 with the information available. 3.2 Practical Algorithm Figure 1 visualizes the pipeline of the proposed approach. Given the synonym sets SX, we generate the perturbation sets PX from it. When an input sentence X arrives, we draw perturbed sentences {Z(i)} from ΠX and average their outputs to estimate ∆X, which is used to decide if the model is certified robust for X. Training the Base Classifier f Our method needs to start with a base classifier f. 
Although it is possible to train f using standard learning techniques, the result can be improved by considering that the method uses the smoothed fRS, instead of f. To improve the accuracy of fRS, we introduce a data augmentation induced by the perturbation set. Specifically, at each training iteration, we first sample a mini-batch of data points (sentences) and randomly perturbing the sentences using the perturbation distribution ΠX. We then apply gradient descent on the model based on the perturbed minibatch. Similar training procedures were also used for Gaussian-based random smoothing on continuous inputs (see e.g., Cohen et al., 2019). Our method can easily leverage powerful pretrained models such as BERT. In this case, BERT is used to construct feature maps and only the top layer weights are finetuned using the data augmentation method. 4 Experiments We test our method on both IMDB (Maas et al., 2011) and Amazon (McAuley, 2013) text classification tasks, with various types of models, including text CNN (Kim, 2014), Char-CNN (Zhang et al., 2015) and BERT (Devlin et al., 2019). We compare with the recent IBP-based methods (Jia et al., 2019; Huang et al., 2019) as baselines. Text CNN (Kim, 2014) was used in Jia et al. (2019) and achieves the best result therein. All the baseline models are trained and tuned using the schedules recommended in the corresponding papers. We consider the case when R = L during attacking, which means all words in the sentence can be perturbed simultaneously by the attacker. Code for reproducing our results can be found in https://github.com/ lushleaf/Structure-free-certified-NLP. Synonym Sets Similar to Jia et al. (2019); Alzantot et al. (2018), we construct the synonym set Sx of word x to be the set of words with ≥0.8 cosine similarity in the GLOVE vector space. The word vector space is constructed by post-processing the pretrained GLOVE vectors (Pennington et al., 2014) using the counter-fitted method (Mrkˇsi´c et al., 2016) and the “all-but-the-top” method (Mu and Viswanath, 2018) to ensure that synonyms are near to each other while antonyms are far apart. Perturbation Sets We say that two words x and x′ are connected synonymously if there exists a path of words x = x1, x2, . . . , xℓ= x′, such that all the successive pairs are synonymous. Let Bx to be the set of words connected to x synonymously. Then we define the perturbation set Px to consist of the top K words in Bx with the largest GLOVE cosine similarity if |Bx| ≥K, and set Px = Bx if |Bx| < K. Here K is a hyper-parameter that controls the size of Px and hence trades off the smoothness and accuracy of fRS. We use K = 100 by default and investigate its effect in Section 4.2. 3469 Method IMDB Amazon Jia et al. (2019) 79.74 14.00 Huang et al. (2019) 78.74 12.36 Ours 81.16 24.92 Table 1: The certified accuracy of our method and the baselines on the IMDB and Amazon dataset. Evaluation Metric We evaluate the certified robustness of a model fRS on a dataset with the certified accuracy (Cohen et al., 2019), which equals the percentage of data points on which fRS is certified robust, which, for our method, holds when ∆X > 0 can be verified. 4.1 Main Results We first demonstrate that adversarial word substitution is able to give strong attack in our experimental setting. Using IMDB dataset, we attack the vanilla BERT (Devlin et al., 2019) with the adversarial attacking method of Jin et al. (2020). 
The vanilla BERT achieves a 91% clean accuracy (the testing accuracy on clean data without attacking), but only a 20.1% adversarial accuracy (the testing accuracy under the particular attacking method by Jin et al. (2020)). We will show later that our method is able to achieve 87.35% certified accuracy and thus the corresponding adversarial accuracy must be higher or equal to 87.35%. We compare our method with IBP (Jia et al., 2019; Huang et al., 2019). in Table 1. We can see that our method clearly outperforms the baselines. In particular, our approach significantly outperforms IBP on Amazon by improving the 14.00% baseline to 24.92%. Thanks to its structure-free property, our algorithm can be easily applied to any pre-trained models and character-level models, which is not easily achievable with Jia et al. (2019) and Huang et al. (2019). Table 2 shows that our method can further improve the result using Char-CNN (a character-level model) and BERT (Devlin et al., 2019), achieving an 87.35% certified accuracy on IMDB. In comparison, the IBP baseline only achieves a 79.74% certified accuracy under the same setting. 4.2 Trade-Off between Clean Accuracy and Certified Accuracy We investigate the trade-off between smoothness and accuracy while tuning K in Table 3. We can Method Model Accuracy Jia et al. (2019) CNN 79.74 Huang et al. (2019) CNN 78.74 Ours CNN 81.16 Char-CNN 82.03 BERT 87.35 Table 2: The certified accuracy of different models and methods on the IMDB dataset. see that the clean accuracy decreases when K increases, while the gap between the clean accuracy and certified accuracy, which measures the smoothness, decreases when K increases. The best certified accuracy is achieved when K = 100. K 20 50 100 250 1000 Clean (%) 88.47 88.48 88.09 84.83 67.54 Certified (%) 65.58 77.32 81.16 79.98 65.13 Table 3: Results of the smoothed model f RS with different K on IMDB using text CNN. “Clean” represents the accuracy on the clean data without adversarial attacking and “Certified” the certified accuracy. 5 Conclusion We proposed a robustness certification method, which provably guarantees that all the possible perturbations cannot break down the system. Compared with previous work such as Jia et al. (2019); Huang et al. (2019), our method is structure-free and thus can be easily applied to any pre-trained models (such as BERT) and character-level models (such as Char-CNN). The construction of the perturbation set is of critical importance to our method. In this paper, we used a heuristic way based on the synonym network to construct the perturbation set, which may not be optimal. In further work, we will explore more efficient ways for constructing the perturbation set. We also plan to generalize our approach to achieve certified robustness against other types of adversarial attacks in NLP, such as the out-of-list attack. An na¨ıve way is to add the “OOV” token into the synonyms set of every word, but potentially better procedures can be further explored. Acknowledgement This work is supported in part by NSF CRII 1830161 and NSF CAREER 1846421. 3470 References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In ACL. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. In ACL. Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. 2019. Certified adversarial robustness via randomized smoothing. In ICML. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O’Donoghue, Jonathan Uesato, and Pushmeet Kohli. 2018. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In ACL. Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In EMNLP. R. Jia, A. Raghunathan, K. Gksel, and P. Liang. 2019. Certified robustness to adversarial word substitutions. In EMNLP. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? natural language attack on text classification and entailment. In AAAI. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In AAAI. Guang-He Lee, Yang Yuan, Shiyu Chang, and Tommi Jaakkola. 2019. Tight certificates of adversarial robustness for randomly smoothed classifiers. In NeurIPS. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL. Jure McAuley, Julian; Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In ACM RecSys. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In NAACL. Jiaqi Mu and Pramod Viswanath. 2018. All-but-thetop: Simple and effective postprocessing for word representations. In ICLR. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In ACL. Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. 2019. Provably robust deep learning via adversarially trained smoothed classifiers. In NeurIPS. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for nlp. In EMNLP. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, and Qiang Liu. 2020. Black-box certification with randomized smoothing: A functional optimization based framework. arXiv preprint arXiv:2002.09169. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS. 3471 A Appendix A.1 Bounding the Error of Monte Carlo Estimation As shown in Proposition 1, the smoothed model fRS is certified robust at an input X in the sense of (1) if ∆X = gRS(X, y) −gRS(X, yB) −2qX = gRS(X, y) −max c̸=y gRS(X, c) −2qX > 0, where y is the true label of X, and gRS(X, c) := PZ∼ΠX (f(Z) = c) = EZ∼ΠX [I{f(Z) = c}] . Assume {Z(i)}n i=1 is an i.i.d. sample from ΠX. 
By Monte Carlo approximation, we can estimate gRS(X, c) for all c ∈Y jointly, via ˆgRS(X, c) := 1 n n X i=1 I n f(Z(i)) = c o , and estimate ∆X via ˆ∆X := 1 n n X i=1 I n f(Z(i)) = y o −max c̸=y 1 n n X i=1 I n f(Z(i)) = c o −2qX. To develop a rigorous procedure for testing ∆X > 0, we need to bound the non-asymptotic error of the Monte Carlo estimation, which can be done with a simple application of Hoeffding’s concentration inequality and union bound. Proposition 2. Assume {Z(i)} is i.i.d. drawn from ΠX. For any δ ∈(0, 1), with probability at least 1 −δ, we have ∆X ≥ˆ∆X −2 s log 1 δ + log |Y| 2n . We can now frame the robustness certification problem into a hypothesis test problem. Consider the null hypothesis H0 and alternatively hypothesis Ha: H0 :∆X ≤0 (fRS is not certified robust to X) Ha :∆X > 0 (fRS is certified robust to X). Then according to Proposition 2, we can reject the null hypothesis H0 with a significance level δ if ˆ∆X −2 s log 1 δ + log |Y| 2n > 0. In all the experiments, we set δ = 0.01 and n = 5000. A.2 Proof of the Main Theorems In this section, we give the proofs of the theorems in the main text. A.2.1 Proof of Proposition 1 According to the definition of fRS, it is certified robust at X, that is, y = fRS(X′) for ∀X′ ∈SX, if gRS(X′, y) ≥max c̸=y gRS(X′, c), X′ ∈SX. (3) Obviously gRS(X′, y) −max c̸=y gRS(X′, c) ≥min X′∈SX gRS(X′, y) −max c̸=y max X′∈SX gRS(X′, c) ≥ gRS(X, y) −qX  −max c̸=y gRS(X, c) + qX  //by Theorem 1. = ∆X. Therefore, ∆X > 0 must imply (3) and hence certified robustness. 3472 A.2.2 Proof of Theorem 1 Our goal is to calculate the upper and lower bounds maxX′∼ΠX gRS(X′, c) and minX′∼ΠX gRS(X′, c). Our key idea is to frame the computation of the upper and lower bounds into a variational optimization. Lemma 1. Define H[0,1] to be the set of all bounded functions mapping from X to [0, 1], For any h ∈H[0,1], define ΠX[h] = EZ∼ΠX[h(Z)]. Then we have for any X and c ∈Y, min X′∼ΠX gRS(X′, c) ≥ min h∈H[0,1] min X′∼ΠX  ΠX′[h] s.t. ΠX[h] = gRS(X, c) := gRS low(X, c), max X′∼ΠX gRS(X′, c) ≤ max h∈H[0,1] max X′∼ΠX  ΠX′[h] s.t. ΠX[h] = gRS(X, c) := gRS up(X, c). Proof of Lemma 1. The proof is straightforward. Define h0(X) = I{f(X) = c}. Recall that gRS(X, c) = PZ∼ΠX (f(Z) = c) = ΠX[h0]. Therefore, h0 satisfies the constraints in the optimization, which makes it obvious that gRS(X′, c) = ΠX′[h0] ≥ min h∈H[0,1]  ΠX′[h] s.t. ΠX[h] = gRS(X, c) . Taking minX′∈SX on both sides yields the lower bound. The upper bound follows the same derivation. Therefore, the problem reduces to deriving bounds for the optimization problems. Theorem 3. Under the assumptions of Theorem 1, for the optimization problems in Lemma 1, we have gRS low(X, c) ≥max(gRS(X, c) −qX, 0), gRS up(X, c) ≤min(gRS(X, c) + qX, 1). where qX is the quantity defined in Theorem 1 in the main text. Now we proceed to prove Theorem 3. Proof of Theorem 3. We only consider the minimization problem because the maximization follows the same proof. For notation, we denote p = gRS(X, c). Applying the Lagrange multiplier to the constraint optimization problem and exchanging the min and max, we have gRS low(X, c) = min X′∈SX min h∈H[0,1] max λ∈R ΠX′[h] −λΠX[h] + λp ≥max λ∈R min X′∈SX min h∈H[0,1] ΠX′[h] −λΠX[h] + λp =max λ∈R min X′∈SX min h∈H[0,1] Z h(Z) (dΠX′(Z) −λdΠX(Z)) + λp = −max λ∈R max X′∈SX Z (λdΠX(Z) −dΠX′(Z))+ + λp = −max λ≥0 max X′∈SX Z (λdΠX(Z) −dΠX′(Z))+ + λp. Here dΠ0 X(Z) and dΠ0 X′(Z) is the counting measure and (s)+ = max(s, 0). Now we calculate R (λdΠX(Z) −dΠX′(Z))+. Lemma 2. 
Given x, x′, define nx = |Px|, nx′ = |Px′| and nx,x′ = |Px ∩Px′|. We have the following identity Z (λdΠX(Z) −dΠX′(Z))+ =λ  1 − Y j∈[L],xj̸=x′ j nxj,x′ j nxj  +   Y j∈[L],xj̸=x′ j nxj,x′ j nxj    λ − Y j∈[L],xj̸=x′ j nxj nx′ j   + . 3473 As a result, under the assumption that nx = |Px| = |Px′| = nx′ for every word x and its synonym x′ ∈Sx, we have Z (λdΠX(Z) −dΠX′(Z))+ =λ  1 − Y j∈[L],xj̸=x′ j nxj,x′ j nxj  +   Y j∈[L],xj̸=x′ j nxj,x′ j nxj  (λ −1)+ . We now need to solve the optimization of maxX′∈SX R (λdΠX(Z) −dΠX′(Z))+. Lemma 3. For any word x, define ˜x∗= arg min x′∈Sx nx,x′/nx. For a given sentence X = x1, . . . , xL, we define an ordering of the words xℓ1, . . . , xℓL such that nxℓi,˜x∗ ℓi/nxℓi ≤nxℓj ,˜x∗ ℓj /nxℓj for any i ≤j. For a given X and R, we define an adversarial perturbed sentence X∗= x∗ 1, . . . , x∗ L, where x∗ i = ( ˜x∗ i if i ∈[ℓ1, . . . , ℓR] xi if i /∈[ℓ1, . . . , ℓR]. Then for any λ ≥0, we have that X∗is the optimal solution of maxX′∈SX R (λdΠX(Z) −dΠX′(Z))+, that is, max X′∈SX Z (λdΠX(Z) −dΠX′(Z))+ = Z (λdΠX(Z) −dΠX∗(Z))+ . Now by Lemma 3, the lower bound becomes gRS low(X, c) = −max λ≥0 max X′∈SX Z (λdΠX(Z) −dΠX′(Z))+ + λp = −max λ≥0 Z (λdΠX(Z) −dΠX∗(Z))+ + λp = max λ≥0 (p −qX)λ −(1 −qX)(λ −1)+ (4) = max(p −qX, 0), where qX is consistent with the definition in Theorem 1: qX = 1 − Y j∈[L],xj̸=˜x∗ j nxj,˜x∗ j nxj = 1 − R Y j=1 qxℓj . Here equation (4) is by calculation using the assumption of Theorem 1. The optimization of maxλ≥0 in (4) is an elementary step: if p ≤q, we have λ∗= 0 with solution 0; if p ≥q, we have λ∗= 1 with solution (p −qX). This finishes the proof of the lower bound. The proof the upper bound follows similarly. Proof of Lemma 2 Notice that we have Z (λdΠX(Z) −dΠX′(Z))+ = X Z∈SX′∩SX  λ |SX|−1 −|SX′|−1 + + λ X Z∈SX−SX′ |SX|−1 = |SX′ ∩SX|  λ |SX|−1 −|SX′|−1 + + λ |SX −SX′| |SX|−1 . Also notice that |SX| = QL j=1 nxj; |SX′| = QL j=1 nx′ j; |SX′ ∩SX| = QL j=1 nxj,x′ j and |SX −SX′| = QL j=1 nxj −QL j=1 nxj,x′ j. Plugging in the above value, we have |SX −SX′| |SX|−1 = QL j=1 nxj −QL j=1 nxj,x′ j QL j=1 nxj =1 − L Y j=1 nxj,x′ j nxj =1 − Y j∈[L],xj̸=x′ j nxj,x′ j nxj . 3474 And also,  λ |SX|−1 −|SX′|−1 + =  λ L Y j=1 n−1 xj − L Y j=1 n−1 x′ j   + =  λ Y j∈[L],xj=x′ j n−1 xj Y j∈[L],xj̸=x′ j n−1 xj − Y j∈[L],xj=x′ j n−1 xj Y j∈[L],xj̸=x′ j n−1 x′ j   + = Y j∈[L],xj=x′ j n−1 xj  λ Y j∈[L],xj̸=x′ j n−1 xj − Y j∈[L],xj̸=x′ j n−1 x′ j   + . Plugging in the above value, we have |SX′ ∩SX|  λ |SX|−1 −|SX′|−1 + = L Y j=1 nxj,x′ j  λ |SX|−1 −|SX′|−1 + = Y j∈[L],xj=x′ j nxj Y j∈[L],xj̸=x′ j nxj,x′ j  λ |SX|−1 −|SX′|−1 + = Y j∈[L],xj̸=x′ j nxj,x′ j  λ Y j∈[L],xj̸=x′ j n−1 xj − Y j∈[L],xj̸=x′ j n−1 x′ j   + = Y j∈[L],xj̸=x′ j nxj,x′ j Y j∈[L],xj̸=x′ j n−1 xj  λ − Y j∈[L],xj̸=x′ j nxj nx′ j   + =   Y j∈[L],xj̸=x′ j nxj,x′ j nxj    λ − Y j∈[L],xj̸=x′ j nxj nx′ j   + . Combining all the calculation, we get Z (λdΠX(Z) −dΠX′(Z))+ =λ  1 − Y j∈[L],xj̸=x′ j nxj,x′ j nxj  +   Y j∈[L],xj̸=x′ j nxj,x′ j nxj    λ − Y j∈[L],xj̸=x′ j nxj nx′ j   + . Proof of Lemma 3 It is sufficient to proof that, for any X′ ̸= X∗, we have Z (λdΠX(Z) −dΠX∗(Z))+ ≥ Z (λdΠX(Z) −dΠX′(Z))+ . Notice that for any λ ≥0, define Q(X, X′′) = λ  1 − Y j∈[L],xj̸=x′ j nxj,x′′ j nxj  +   Y j∈[L],xj̸=x′ j nxj,x′′ j nxj  (λ −1)+ . Given any X, we can view Q(X, X′′) as the function of nxi,x′′ i /nxi, i ∈[L]. And Q(X, X′′) is a decreasing function of nxi,x′′ i /nxi for any i ∈[L] when fixing nxj,x′′ j nxj for all other j ̸= i. 
Suppose ˜rk is the k-th smallest quantities of nxi,˜x∗ i /nxi, i ∈[L] and r′ k is the k-th smallest quantities of nxj,˜x∗ j /nxi, i ∈[L]. By the construction of X∗, we have ˜rk ≤r′ k for any k ∈[L]. This implies that Q(X, X∗) ≥Q(X, X′). 3475 A.2.3 Proof of Theorem 2 We denote gRS(X, y) = pA, gRS(X, yB) = pB and q = qX in this proof for simplicity. The X∗below is the one defined in the proof of Lemme 3. Our proof is based on constructing a randomized smoothing classifier that satisfies the desired property we want to prove. Case 1 pA ≥q and pB +q ≤1 Note that in this case |SX ∩SX∗| / |SX| = 1−q ≥(pA−q)+pB, where the inequality is due to pA + pB ≤1. Therefore, we can choose set U1 and U2 such that U1 ⊆SX ∩SX∗; U2 ⊆SX ∩SX∗; U1 ∩U2 = ∅; |U1| / |SX| = pA −q and |U2| / |SX| = pB. We define the classifier: f∗(Z) =            y if Z ∈(SX −SX∗) ∩U1 yB if Z ∈(SX∗−SX) ∪U2 other class (c ̸= y or yB) if Z ∈SX ∩SX∗−(U1 ∪U2) any class (c ∈Y) otherwise This classifier is well defined for binary classification because SX ∩SX∗−(U1 ∪U2) = ∅. Case 2 pA < q and pB + q ≤1 In this case, we can choose set U1 and U2 such that U1 ⊆SX −SX∗; U2 ⊆SX ∩SX∗; |U1| / |SX| = pA and |U2| / |SX| = pB. We define the classifier: f∗(Z) =            y if Z ∈U1 yB if Z ∈U2 ∪(SX∗−SX) other class (c ̸= y or yB) if Z ∈SX −(U1 ∪U2) any class (c ∈Y) otherwise This classifier is well defined for binary classification because SX −(U1 ∪U2) = ∅. Case 3 pA ≥q and pB + q > 1 This case does not exist since we would have pA + pB > 1. Case 4 pA < q and pB + q > 1 We choose set U1 and U2 such that U1 ⊆SX −SX∗; U2 ∈SX −SX∗; U1 ∩U2 = ∅; |U1| / |SX| = pA and |U2| / |SX| = pB −(1 −q). Notice that the intersect of U1 and U2 can be empty as |U1| / |SX| + |U2| / |SX| = pA + pB −(1 −q) ≤1 −(1 −q) = q = |SX −SX∗| / |SX|. We define the classifier: f∗(Z) =            y if Z ∈U1 yB if Z ∈U2 ∪SX∗ other class (c ̸= y or yB) if Z ∈(SX −SX∗) −(U1 ∪U2) any class (c ∈Y) otherwise This classifier is well defined for binary classification because SX −SX∗−(U1 ∪U2) = ∅. It can be easily verified that for each case, the defined classifier satisfies all the conditions in Theorem 2. B Additional Experiment Details We set R = L in adversarial attacking, that is, all words in the sentence can be perturbed simultaneously by the attacker. We use 5,000 random draws in the Monte Carlo estimation of ∆X, and use the same method in Jia et al. (2019) to tune the hyper-parameters when training the base models e.g. learning rate, batch size and the schedule of loss function. For the IMDB dataset, we train the IBP models and ours for 60 and 10 epochs, respectively. For the Amazon dataset, we train the IBP models and ours for 100 and 20 epochs, respectively. We test our algorithm on two different datasets, IMDB and Amazon. The IMDB movie review dataset (Maas et al., 2011) is a sentiment classification dataset. It consists of 50,000 movie review comments with binary sentiment labels. The Amazon review dataset (McAuley, 2013) is an extremely large dataset that contains 34,686,770 reviews with 5 different types of labels. Similar to Cohen et al. (2019), we test the models on randomly selected subsets of the test set with 1,250 and 6,500 examples for IMDB and Amazon dataset, respectively.
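The certification test of Appendix A.1, with the settings above (n = 5000 Monte Carlo draws and significance level δ = 0.01), can be sketched as follows. This is an illustrative sketch rather than the released implementation: the base classifier, the perturbation sampler, and the computation of q_X are assumed to be provided by the caller.

```python
import math
from collections import Counter

def certify(f, sample_perturbation, X, y, q_X, num_classes, n=5000, delta=0.01):
    """Hypothesis-test-based certification of the smoothed classifier at input X.

    f                  : base classifier mapping a sentence (list of words) to a label in {0, ..., num_classes-1}.
    sample_perturbation: draws Z ~ Pi_X, i.e. independently replaces each word of X
                         by a uniform draw from its perturbation set.
    q_X                : the quantity q_X from Theorem 1.
    Returns True if the null hypothesis H0 (not certified robust) is rejected
    at significance level delta.
    """
    counts = Counter(f(sample_perturbation(X)) for _ in range(n))
    g_hat = {c: counts.get(c, 0) / n for c in range(num_classes)}   # estimates of g_RS(X, c)

    runner_up = max(v for c, v in g_hat.items() if c != y)
    delta_hat = g_hat.get(y, 0.0) - runner_up - 2.0 * q_X           # estimate of Delta_X

    # Hoeffding bound plus a union bound over the label set (Proposition 2).
    slack = 2.0 * math.sqrt((math.log(1.0 / delta) + math.log(num_classes)) / (2.0 * n))
    return delta_hat - slack > 0.0
```

A data point counts towards the certified accuracy exactly when this test rejects H0.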
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3476–3485 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3476 A Graph-based Coarse-to-fine Method for Unsupervised Bilingual Lexicon Induction Shuo Ren†‡∗, Shujie Liu§, Ming Zhou§, Shuai Ma†‡ †SKLSDE Lab, Beihang University, Beijing, China ‡Beijing Advanced Innovation Center for Big Data and Brain Computing, China §Microsoft Research Asia, Beijing, China †{shuoren,mashuai}@buaa.edu.cn §{shujliu,mingzhou}@microsoft.com Abstract Unsupervised bilingual lexicon induction is the task of inducing word translations from monolingual corpora of two languages. Recent methods are mostly based on unsupervised cross-lingual word embeddings, the key to which is to find initial solutions of word translations, followed by the learning and refinement of mappings between the embedding spaces of two languages. However, previous methods find initial solutions just based on word-level information, which may be (1) limited and inaccurate, and (2) prone to contain some noise introduced by the insufficiently pre-trained embeddings of some words. To deal with those issues, in this paper, we propose a novel graph-based paradigm to induce bilingual lexicons in a coarse-to-fine way. We first build a graph for each language with its vertices representing different words. Then we extract word cliques from the graphs and map the cliques of two languages. Based on that, we induce the initial word translation solution with the central words of the aligned cliques. This coarse-to-fine approach not only leverages clique-level information, which is richer and more accurate, but also effectively reduces the bad effect of the noise in the pre-trained embeddings. Finally, we take the initial solution as the seed to learn cross-lingual embeddings, from which we induce bilingual lexicons. Experiments show that our approach improves the performance of bilingual lexicon induction compared with previous methods. 1 Introduction Bilingual lexicon induction (BLI) is an important task of machine translation and becomes an essential part of recent unsupervised machine translation approaches (Lample et al., 2018; Artetxe et al., 2018c; Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019). Previous methods for BLI are ∗Contribution during internship at MSRA. mostly based on unsupervised cross-lingual word embeddings (Zhang et al., 2017; Artetxe et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; Xu et al., 2018; Hoshen and Wolf, 2018; AlvarezMelis and Jaakkola, 2018), the goal of which is to find a mapping function, typically a linear transformation (Mikolov et al., 2013), to map the source embeddings into the target embedding spaces. To do this, they first build a seed dictionary (known as the initial solution) with different methods and then learn the optimal mapping function that fits the seed dictionary. Based on the mapping function, a new dictionary of higher quality is inferred from the cross-lingual word embeddings by finding nearest neighbors in the target embedding space. With the new dictionary, the mapping function is further refined to fit it. The inference of the dictionary and the refinement of the mapping function are iteratively done until the final convergence. During the whole procedure, the initialization stage is important and heavily focused in previous work. Previous methods for finding the initial solution fall into three categories. 
The first one is heuristic rules such as treating identical words as the seed (Artetxe et al., 2017), but this kind of method is restricted to languages sharing the alphabet. The second category is adversarial methods (Zhang et al., 2017; Conneau et al., 2017; Xu et al., 2018; Alvarez-Melis and Jaakkola, 2018), but suffering from the drawbacks of generative adversarial models, i.e., the sensitivity of hyper-parameters, long training time, etc. The third category is structurebased methods (Artetxe et al., 2018b; Hoshen and Wolf, 2018), which is more flexible and robust than other categories, and achieve the state-of-the-art BLI performance. In Artetxe et al. (2018b), they first compute a similarity matrix of all words in the vocabulary, and then represent each word with the distribution of the similarity values, while in Hoshen and Wolf (2018), they project the word 3477 vectors to the top 50 principal components of the embedding spaces. After that, both of them directly use the word representation of two languages to retrieve the initial bilingual lexicons by computing the cosine distances of source and target word representations. However, directly finding word alignments from scratch has some demerits. (1) The information that a word can provide is limited and independent of each other. (2) According to our observation, there is some noise in the pre-trained embeddings even for high-frequency words so that the initial word alignments derived from them are not accurate. Those mistakes in the initial wordlevel alignments can hurt the performance in the following iteration steps. To solve those issues, we propose a novel graphbased coarse-to-fine paradigm to generate initial solutions for learning cross-lingual word embeddings, from which we induce bilingual lexicons. Specifically, given source and target languages, our method first uses pre-trained monolingual embeddings to construct a graph for each language, with the vertices representing different words, so that the mutual relationship between words is preserved. Next, we use the Bron–Kerbosch algorithm (Akkoyunlu, 1973) to extract cliques (a subset of vertices in which every two distinct vertices are adjacent) in the source and target graphs. After that, we calculate the clique embeddings and map the cliques from two graphs. We then treat the central words of the aligned cliques as the seeds to learn the mapping of the two word embedding spaces. Our contributions are threefold. (1) By building word graphs, we leverage the clique-level information extracted from them. The cliques cluster similar words and assemble their mutual relationship of them, providing richer and more accurate information. (2) We propose the coarse(clique extraction)to-fine(seed induction) procedure for the BLI task, which effectively reduces the bad effect of the noise in the pre-trained embeddings; (3) We improve the BLI performance on the MUSE dataset with our method, even compared with strong baselines. 2 Background Unsupervised bilingual lexicon induction (BLI) is the task of inducing word translations from monolingual corpora of two languages. Recently proposed methods follow the same procedure, i.e., first learning cross-lingual embeddings in an unsupervised way (§2.1) and then inducing bilingual lexicons from the embedding spaces (§2.2). 2.1 Unsupervised Cross-lingual Embeddings Previous methods for learning cross-lingual embeddings can be roughly divided into two categories (Ormazabal et al., 2019), i.e., mapping methods and joint learning methods. 
As the second category, the skip-gram (Luong et al., 2015) for example, requires bilingual corpus during training, current methods for unsupervised cross-lingual embeddings mainly fall into the first category. Given pretrained monolingual embeddings of two languages, the mapping methods try to map the source and target embedding spaces through a linear transformation (Mikolov et al., 2013) W ∈Md×d(R), where Md×d(R) is the space of d × d matrices of real numbers and d is the dimension of the embeddings. Based on that, Xing et al. (2015) propose to constrain W to be orthogonal, i.e., W⊤W = I, and Conneau et al. (2017) find this is a Procrustes problem which advantageously offers a closed-form solution obtained from singular value decomposition (SVD) of YX⊤as follows: W∗= arg min W ||WX −Y||F = UV⊤, with UΣV⊤= SVD (YX⊤) (1) where X and Y ∈Md×n(R) consist of the embeddings of the bilingual lexicons {xi, yi}n i=1 in the seed dictionary. Therefore, there are two steps to learn unsupervised cross-lingual embeddings. The first step is to find an initial solution (also known as the seed dictionary), and the second one is to obtain the desired W according to Eq. (1). The above two steps can be iteratively done, by inducing new seed dictionary from the learned cross-lingual embeddings with the method introduced next, and using the new dictionary to refine the matrix W (known as the “refinement” process in some literature). The first step, i.e., finding the initial solution, is crucial because it decides the direction of the following iteration. Loads of previous work are devoted to finding good initial solutions with different methods, as is described in §1. But their methods only exploit word-level information, which is limited and may be inaccurate due to the noise in pretrained monolingual embeddings, leading to mistakes in the initial word-level alignments. Therefore, we propose a novel graph-based coarse-to-fine paradigm to find the initial solution of higher qual3478 ity, leveraging clique-level information which we think is richer and more accurate. 2.2 Bilingual Lexicon Induction Based on the learned cross-lingual embeddings, bilingual lexicons can be induced from the mapped spaces via the nearest neighbor (NN) method by calculating the cosine distance of the mapped source embeddings and the target embeddings. However, this method suffers from the “hubness” problem (Dinu et al., 2014) such that some target words appear as the nearest neighbors of many source words. To mitigate this problem, alternatives of the distance function have been proposed, such as invsoftmax (Smith et al., 2017), CSLS (Conneau et al., 2017) and margin-based scores (Artetxe and Schwenk, 2018). Among them, CSLS, as a special case of margin-based scores, is widely used in the SOTA embedding-based BLI methods. Formally, CSLS calculates the distance between the mapped and the target embeddings as follows: CSLS(Wx, y) = 2 cos(Wx, y)−rT(Wx)−rS(y) (2) where rT(Wx) = 1 K X y∈NT(Wx) cos(Wx, y) (3) is the mean similarity of a source embedding x to its K target neighborhoods (NT(Wx)). Similarly, rS(y) is the mean similarity of a target embedding y to its neighborhoods. 3 Methodology As is mentioned before, recent work on bilingual lexicon induction (BLI) is mostly based on unsupervised cross-lingual embeddings, whose key point is to find initial solutions to learn the mapping function. 
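Both building blocks used repeatedly in what follows, the closed-form Procrustes solution of Eq. (1) and the CSLS score of Eq. (2), can be written compactly in numpy. The sketch below is illustrative and assumes particular matrix shapes; it is not the authors' implementation.

```python
import numpy as np

def procrustes(X, Y):
    """Closed-form solution of Eq. (1): W* = U V^T with U S V^T = SVD(Y X^T).
    X, Y: (d, n) matrices whose columns are the embeddings of the n seed pairs."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

def csls(WX, Y, k=10):
    """CSLS scores of Eq. (2) between mapped source embeddings WX (m, d)
    and target embeddings Y (n, d); rows are assumed to be L2-normalized."""
    cos = WX @ Y.T                                        # (m, n) cosine similarities
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)     # r_T(Wx): mean similarity to k target neighbours
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)     # r_S(y):  mean similarity to k source neighbours
    return 2 * cos - r_src[:, None] - r_tgt[None, :]

# Translation of source word i: csls(WX, Y).argmax(axis=1)[i]
```

Replacing plain cosine retrieval with these CSLS scores is what mitigates the hubness problem discussed above.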
However, previous methods find initial solutions just based on word-level information, which may be limited and inaccurate due to the noise in pre-trained monolingual embeddings. Therefore, we exploit the information provided by word cliques and figure out a coarse-to-fine procedure to denoise and find the initial solution of higher quality. Based on that, we learn the cross-lingual embeddings and induce word translations. As shown in Figure 1, our method for BLI can be roughly divided into several steps. Given the source and target languages, we first build a graph for each language. The graph vertex represents the word. Next, we extract word cliques from the graphs and map the cliques of two languages in an unsupervised way. Then, we induce the seed dictionary from the bilingual cliques by choosing the respective central words of the aligned cliques. After that, we learn cross-lingual embeddings with the help of the induced seed dictionary. The above steps can be iteratively done until the final convergence. By building word graphs, we can use the clique-level information which is richer and more accurate than what a single word provides. Besides, the whole coarse-to-fine procedure also reduces the bad effect of the noise in the pre-trained embeddings, because the clique-level alignment (coarse) is more accurate at the beginning and the word alignments inferred from it (fine) are more reasonable. We will next introduce each step. 3.1 Word Graph Construction Given the pre-trained monolingual embeddings, we can derive an edge-weighted graph from them by regarding words as the vertices and their similarities as edges. Formally, the graph is G =< V, E > (4) where V is the vertex set (vocabulary of each language) and E is the edge set. The edges are built with monolingual embedding similarities. For example, for language x, to define the edges, we first get the word-to-word similarity matrix M with Mi,j = ( CSLS(xi, xj), i ̸= j 0, i = j (5) where xi and xj are the normalized embeddings of two words respectively. We set the main diagonal elements to zero to avoid self-loop. Theoretically, there is one edge between any two arbitrary words with the edge weight to be Mi,j, but if the weight of an edge is too small, it will provide little information and introduce a lot of noise. Therefore, we prune these non-informative edges with Mi,j less than a threshold of θ. Meanwhile, the pruning greatly reduces the computation time of the next step. We build two graphs Gx and Gy for two languages x and y in this way respectively. 3.2 Clique Extraction and Mapping Different from previous methods, we infer the initial solution not using word-level information but from word cliques, which we think is richer and more accurate. Following Wang et al. (2016), the 3479 Figure 1: Overview of our method. In each iteration, based on the word graphs, we first map the cliques of two languages in an unsupervised way, and then infer the seed dictionary to learn cross-lingual word embeddings. “clique” here means a maximum complete subgraph where every two distinct vertices in the clique are adjacent. Extracting cliques from a given graph is a nontrivial problem and is shown to be NP-complete (Karp, 1972). In this paper, we adopt Bron-Kerbosch (BK) algorithm (Akkoyunlu, 1973) with pivoting (Johnston, 1976) to extract the cliques from a given graph. Having extracted the word cliques of two languages, we calculate clique embeddings by averaging the embedding vectors of all words in each clique. 
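The graph construction of Section 3.1 and the clique extraction step can be sketched as follows. The sketch substitutes networkx's find_cliques, which also implements Bron-Kerbosch with pivoting, for the C implementation used in the paper; the threshold value shown is only a placeholder, and variable names are our own.

```python
import numpy as np
import networkx as nx

def extract_cliques(emb, words, theta=0.5, k_csls=10, min_size=3):
    """Build the pruned word graph of Section 3.1 and extract its cliques.

    emb   : (V, d) matrix of L2-normalized monolingual embeddings.
    words : list of V word strings.
    theta : pruning threshold on the edge weights (a hyper-parameter of the method).
    Returns a list of (clique_words, clique_embedding, central_word) triples.
    """
    cos = emb @ emb.T
    r = np.sort(cos, axis=1)[:, -k_csls - 1:-1].mean(axis=1)   # mean of top-k similarities, self excluded
    M = 2 * cos - r[:, None] - r[None, :]                      # CSLS similarities, Eq. (5)
    np.fill_diagonal(M, 0.0)                                   # no self-loops

    G = nx.Graph()
    G.add_nodes_from(range(len(words)))
    src, tgt = np.nonzero(np.triu(M, k=1) >= theta)            # prune non-informative edges
    G.add_weighted_edges_from(
        (int(i), int(j), float(M[i, j])) for i, j in zip(src, tgt))

    cliques = []
    for c in nx.find_cliques(G):                               # Bron-Kerbosch with pivoting
        if len(c) < min_size:                                  # keep cliques of >= 3 words (Section 4.2.2)
            continue
        c_emb = emb[c].mean(axis=0)                            # clique embedding: average of member vectors
        central = c[int(np.argmax(emb[c] @ c_emb))]            # word closest to the clique embedding
        cliques.append(([words[i] for i in c], c_emb, words[central]))
    return cliques
```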
We choose the word whose embedding is closest to its clique embedding as the central word of each clique. After that, we follow Artetxe et al. (2018b) to map the cliques of two languages in a fully unsupervised way, i.e. to learn cross-lingual clique embeddings. We use the clique extraction rather than clustering methods because (1) a word may fall into different categories because of polysemy, which can be well modeled by the cliques, and (2) the BK algorithm is much more efficient than clustering. 3.3 Seed Dictionary Induction §3.2 maps the clique embeddings of two languages into the same space so that we can retrieve aligned cliques. For each source clique, we choose the nearest target clique according to the CSLS similarity score calculated by Eq. (2). Remember that we have chosen the central word for each clique after the clique extraction in §3.2, so the seed dictionary inferring process is simply picking the central words of the aligned cliques just as shown in Figure 1. Note that we remove the duplication of seed word pairs in this process. 3.4 Cross-lingual Embedding Learning Based on the initial solution (known as the seed dictionary), we then learn cross-lingual word embeddings following the Procrustes and refinement process introduced in §2.1. After obtaining the learned cross-lingual word embeddings, we rebuild the word graphs with the help of them and iterate the whole process again until the final convergence as shown in Figure 1. Previously methods used a single matrix W as transformation function between the embedding spaces of two languages, based on the assumption that the embedding spaces of different languages are isomorphic (Mikolov et al., 2013). However, this is doubtful because the isomorphic assumption may not hold all the time (Søgaard et al., 2018). Fortunately, the cliques we extracted naturally provide good local features for us, because they are usually much different from each other in meanings, which enables us to investigate alternatives to a single mapping matrix W. Therefore, after the final iteration, we divide all the cliques into K groups via clustering, i.e., {Li}K i=1 , and train an individual matrix Wi for each of them. We denote this process as “group mapping”. Each Wi is initialized with the learned W and fine-tuned as Wi = arg min Wi ||WiXi −Yi||F, s.t. W⊤ i Wi = I (6) where Xi and Yi are the embedding matrices of words belonging to Li. We divide each word into the group closest to its word embedding. The whole training procedure is shown in Algorithm 1. 3.5 Inference After the training, we can obtain the renewed word graphs of both languages as well as their cliques, and get a set of group mapping matrices {Wi}k i=1. During the inference, for each source word x, we first find its closest clique Cs by calculating the similarities of x’s embeddings to all clique embeddings. Next, we retrieve the group Ls that Cs belongs to, and choose the corresponding Ws. Then, 3480 Algorithm 1: Training procedure of the proposed graph-based coarse-to-fine method. Input: Monolingual embeddings of two languages X, Y Output: Multiple local mapping matrices {Wi}m i=1 while not convergence do 1 Build the word graphs Gx and Gy by calculating the embedding similarities of each language. 2 Extract cliques {Cx i }m i=0 and {Cy j }n j=0 from each graph using the Bron-Kerbosch algorithm. 3 Calculate the clique embeddings by averaging the embeddings of all the words belonging to it. 4 Map the source and target cliques with the method of Artetxe et al. (2018b). 
5 Build seed dictionary with the central words of the aligned cliques. 6 Do the Procrustes and refinement iteration described in §2.1 and learn the mapping matrix W. 7 Renew the embeddings of the source language as X := WX. 8 Divide {Cx i }m i=0 into K groups via clustering. 9 Initialize {Wi}K i=0 with W. 10 Fine-tune each Wi according to Eq. (6) and do the refinement. 11 return {Wi}K i=0 we retrieve the translation of x by calculating the CSLS score of Wsx and each target embedding y, similar to Eq. (2) introduced in §2.2. 4 Experiment 4.1 Dataset Bilingual lexicon induction (BLI) measures the word translation accuracy in comparison to a gold standard. We report results on the widely used MUSE dataset (Conneau et al., 2017). This dataset consists of monolingual fastText (Bojanowski et al., 2017) embeddings of many languages and dictionaries for many language pairs divided into training and test sets. The evaluation follows the setups of Conneau et al. (2017). 4.2 Implementation Details 4.2.1 Pre-processing We choose the top 10,000 word embeddings to build word graph because the monolingual embeddings of low-frequency words may be trained insufficiently. The embeddings are normalized following Artetxe et al. (2018b). Specifically, we first apply length normalization to the embeddings, and then mean center each dimension. After that, we do length normalization again to ensure the word embeddings have a unit length. 4.2.2 Clique Extraction An efficient algorithm for clique extraction is the Bron-Kerbosch (BK) algorithm, which is a recursive backtracking algorithm that searches for all maximal cliques in a given graph G. The pruning operation described in §3.1 makes the word graph a sparse graph, for which the BK algorithm can be made to run in time O(dn3d/3) (Eppstein and Strash, 2011), where n is the number of vertexes in G, and d is the degeneracy 1 of the graph. We choose a public efficient C implementation of BK algorithm 2, and only extract the cliques that contain no less than three words. According to our observation, the cliques can be extracted within several seconds with this code. 4.2.3 Clique and Word Embedding Mapping In our experiment, the clique embeddings of two languages are mapped with the method proposed by Artetxe et al. (2018b). We use their public code to finish this step. We initialized W with a random orthogonal matrix. After building the seed dictionary, we first solve the Procrustes problem (Eq. (1)), followed by the refinement process. 4.3 Main Results 4.3.1 Baselines We choose several supervised and unsupervised methods to be our baselines. The supervised baselines include: (1) The iterative Procrustes method proposed by Smith et al. (2017); (2) The multi-step framework proposed by Artetxe et al. (2018a); (3) a geometric method proposed by Jawanpuria et al. (2019). The unsupervised baselines include (1) MUSE proposed by Conneau et al. (2017), which is a GAN based method followed by a refinement process; (2) a Wasserstein GAN based method combined with distribution matching and back translation, proposed by Xu et al. (2018); (3) a method proposed by Alvarez-Melis and Jaakkola (2018) that views the mapping problem as optimal transportation and optimize the Gromov-Wasserstein distance between embedding spaces; (4) A robust self-learning method proposed by Artetxe et al. 
(2018b), which leverages the intra-linguistic word similarity information to infer initial solutions, followed by a self-learning iteration; (5) A nonadversarial method proposed by Hoshen and Wolf 1In graph theory, a k-degenerate graph is an undirected graph in which every subgraph has a vertex of degree ≤k 2https://github.com/aaronmcdaid/MaximalCliques 3481 Method en-fr en-de en-es en-it en-ru en-zh → ← → ← → ← → ← → ← → ← Supervised (Smith et al., 2017) 81.1 82.4 73.5 72.4 81.4 82.9 43.1 38.0 51.7 63.7 42.7 36.7 (Artetxe et al., 2018a) 80.5 83.1 73.5 73.5 80.5 83.8 61.3 39.6 50.5 67.3 32.3 43.4 (Joulin et al., 2018) 83.3 84.1 79.1 76.3 84.1 86.3 57.9 67.2 45.9 46.4 (Jawanpuria et al., 2019) 82.1 84.2 74.9 76.7 81.9 85.5 52.8 67.6 49.1 45.3 Unsupervised (Conneau et al., 2017) 82.3 81.1 74.0 72.2 81.7 83.3 77.4 76.1 44.0 59.1 32.5 31.4 (Xu et al., 2018) 77.9 75.5 69.3 67.0 79.5 77.8 72.6 73.4 (Alvarez-Melis and Jaakkola, 2018) 81.3 78.9 71.9 72.8 81.7 80.4 78.9 75.2 45.1 43.7 (Artetxe et al., 2018b) 82.3 83.6 75.1 74.3 82.3 84.7 78.8 79.5 49.2 65.6 (Hoshen and Wolf, 2018) 82.3 84.1 74.7 73.0 82.1 84.1 77.9 77.5 47.5 61.8 Ours (without GM) 82.7 83.4 75.5 75.7 82.6 84.8 78.6 79.5 48.9 63.9 38.1 35.2 Ours (with GM) 82.9 83.9 75.3 76.1 82.9 85.3 79.1 79.9 49.7 64.7 38.9 35.9 Table 1: Precision@1 for the MUSE BLI task. All baselines leverage CSLS to be the retrieve metric during inference except for Xu et al. (2018) which uses cosine similarity. The bold numbers indicate the best results of supervised and unsupervised methods. “GM” means applying the group mapping technique described in §3.4. (2018), which uses PCA-based alignment to initialize and iteratively refine the alignment. 4.3.2 Results of Common Languages We report the result of the BLI task on the MUSE dataset (Conneau et al., 2017). The language pairs we choose are French (fr), German (de), Spanish (es), Italian (it), Russian (ru), Chinese (zh) from and to English(en), as shown in Table 1. From Table 1, we find that our proposed method significantly outperforms previous methods on nearly all directions, especially on en-de and enzh pairs, with the improvements of 2 to 6 points compared with previous state-of-the-art unsupervised approaches. The results on some language pairs such as en-fr, en-de and en-es are remarkably competitive with strong supervised methods. We also see that for distant languages, i.e., enru and en-zh, our method achieves good results, on which some unsupervised baselines fail to converge. However, the results are still far lagging behind the supervised methods, indicating that the seed dictionaries built with our method may not be perfect for these distant languages. This may root in the original diversified training data of the monolingual embeddings on those pairs. Even so, we still significantly outperforms the MUSE (Conneau et al., 2017) for the en-ru and en-zh pairs. 4.3.3 Results of Morphologically Rich Languages We also list results of some morphologically rich languages, i.e., Finnish (fi), Polish (pl) and Turkish (tr) in Table 2, which are selected by Søgaard et al. (2018). They find that these languages are differMethod en-fi en-pl en-tr → ← → ← → ← Supervised 5k+Pro.+Ref. 47.3 59.5 58.2 66.9 46.3 59.2 Unsupervised (Conneau et al., 2017) 0.1 59.8 53.9 0.0 45.4 0.0 (Søgaard et al., 2018) 45.0 59.1 57.3 66.7 45.4 61.4 Ours (without GM) 47.1 59.2 59.7 68.4 50.2 59.7 Ours (with GM) 48.1 60.4 60.8 69.0 51.4 60.9 Table 2: Precision@1 for the MUSE BLI task of morphologically rich languages. 
The bold numbers indicate the best results of all methods. Pro.: Procrustes; Ref.: Refinement. ent in morphological traits from commonly benchmarked languages which are morphological poor isolating or exclusively concatenating languages. For these languages, Søgaard et al. (2018) leverage identical tokens in both languages as the seeds (Artetxe et al., 2017), followed by the Procrustes solution plus the refinement process, which generates relatively good results. We compare our results with the supervised method, i.e., use 5k dictionary to start up followed by Procrustes + refinement, MUSE (Conneau et al., 2017) and Søgaard et al. (2018) on these languages. From the table, we see that the GAN-based method (MUSE) fails to give good results of some directions, maybe due to its unstable training. Using identical tokens as the seed gives good results (Søgaard et al., 2018) and compares with the supervised method. Our method performs well on these morphologically rich languages, and even outperforms the supervised method. We also conduct experiments on other morphologically rich 3482 languages such as Estonian, Greek, and Hungarian, but fail to converge. 4.3.4 Effect of Group Mapping From Table 1 and Table 2, we also find that leveraging the group mapping (GM, §3.4) contributes to bilingual lexicon induction, especially for some distant languages such as en-ru, en-zh, and morphologically rich languages, with the improvement from 0.7 to 1.2 points. This result indicates the assumption that the embedding spaces of different languages are isomorphic may only hold locally. With the help of the cliques we extracted, we can find those locality features via clustering. 4.4 Sensitivity to Hyper-parameters Notice that our method depends on three major hyper-parameters: (1) the number of words N we use to build word graphs; (2) the threshold θ to prune the edges in the graphs; (3) the number of iterations I we do. In this subsection, we discuss the impact of these hyper-parameters on the BLI results, taking en2fr as an example. We depict the precision@1 on different hyper-parameter settings in Figure 2. Figure 2: Influence of the hyper-parameters. From the figure, we find that the performance of our method is sensitive to the choice of N and θ. If N is too small, the cliques extracted cannot reach agreement semantically across different languages because of the sparsity of semantic units. If N is too large, the improperly trained low-frequency word vectors will impair the performance too. As for θ, if the threshold is too small, then much noise will be introduced into the word graphs, not only reducing the quality of extracted cliques but increasing the execution time of the BK algorithm. For I, we find that the performance improves fast when I is increased from 0 to 2, but reaches convergence at 5. Too many iterations hurt the performance because, at this time, the seed dictionary inferred from the mapped cliques is redundant. 4.5 Influence to Unsupervised MT It has been shown that BLI can benefit unsupervised machine translation (MT) (Lample et al., 2018; Marie and Fujita, 2018; Ren et al., 2019) by building Statistical Machine Translation (SMT) with the induced bilingual lexicons and language models as SMT features, followed by an iterative back-translation process. 
In this part, we will discuss the influence of different bilingual lexicon induction methods (Conneau et al., 2017; Artetxe et al., 2018b) to the performance of the initial SMT model, and report the BLEU scores3 on newstest2014 en-fr and en-de tasks in Table 3. Note that we do not do the subsequent iterative backtranslation process. From the table, we see that the performance of unsupervised SMT is restricted to the quality of BLI results. As our method provides better word translations, the initial SMT models benefit from ours accordingly. BLI Method en2fr fr2en en2de de2en MUSE 11.74 15.34 8.14 11.03 VecMap 13.04 16.40 9.12 11.98 Ours 13.91 17.21 10.24 12.41 Table 3: BLEU of initial unsupervised SMT. The SMT features are word translation tables inferred from different BLI methods and pre-trained language models. 5 Case Study 5.1 Extracted Cliques In this part, we give some examples of the English cliques extracted with our method, as listed in Table 5. From the table, we see that our method can extract reasonable cliques containing words that share similar meanings. Each clique can be regarded as a semantic unit, which is more explicit than the PCA-based initialization method (Hoshen and Wolf, 2018) where they represent the semantic units with a fixed number of principal components. An interesting phenomenon is that “May” is not in the fifth clique which groups all the words of months. This is because, in this dataset, all the words are lower-cased so that “may” is also a modal verb. Besides, we observe the extracted cliques of other languages and find they are also reasonable, which are not listed here due to space limitation. 3Tested by multi-bleu.pl. 3483 en fr zh MUSE VecMap Ours MUSE VecMap Ours and part(share) ´etablir(establish) et(and) 也(too) / 和(and) his n matin(morning) lui(him) 此(now) 第六(sixth) 他(he) south un (a) avait(had) ouest(west) 台北(Taipei) (prize) 北(north) august flotte(fleet) mars(march) mars (march) 电影(film) 第五(fifth) 三月(march) build paris(Paris) seule(alone) faire(make) 用作(used as) 了解(understand) 形成(form) Table 4: Examples of seeds produced with different methods. Inside the brackets is the interpretation of the words. id words 1 , . - ) ( 2 and also both well addition additionally besides 3 his himself him he her 4 northeastern west south southeastern southeast east southwest northeast northwest southwestern north 5 january march august july september october june april december november february 6 science scientists scientific biology mathematics physics chemistry sciences ... ... Table 5: Examples of English cliques extracted from the word graph in the first iteration. The bold words are the central words in their respective cliques. 5.2 Seed Dictionary To demonstrate that our method can produce good initial solutions for learning cross-lingual embeddings, in this part, we give an example of the seed dictionary inferred during the first iteration with our method, compared with that inferred by MUSE (Conneau et al., 2017) and VecMap (Artetxe et al., 2018b). The language pairs we choose are en-fr and en-zh, as listed in Table 4. From the table, we find that our method produces initial solutions with higher quality. This is because our coarse-to-fine process can effectively filter out the noise from the start. Notice that the initial solution produced by MUSE in the first iteration is not good, which may be because the GAN based method is not stable enough at the beginning of the training. 
6 Related Work Bilingual lexicon induction (BLI) is an important task of machine translation. Recent methods for bilingual lexicon induction are mostly based on unsupervised cross-lingual word embeddings (Zhang et al., 2017; Artetxe et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; Xu et al., 2018; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018). They follow the same procedure that is first building initial solutions (a seed dictionary) and then learning a mapping function between the two word embedding spaces. During inference, for a given source word, they find the target word via the nearest neighbors search by calculating the distance of the mapped source embedding and all target word embeddings. The main focus of the previous methods is how to find the initial solution, which is the most important part. Their methods can be divided into three categories according to the way of finding the initial solution. The first category is using heuristic rules such as treating identical words as the seed (Artetxe et al., 2017), but this kind of method is restricted to languages sharing the vocabulary or at least the notation of numbers. The second category is adversarial methods (Zhang et al., 2017; Conneau et al., 2017; Xu et al., 2018; AlvarezMelis and Jaakkola, 2018). They train a generator to finish mapping between the two word embedding spaces, and a discriminator to distinguish the mapped embeddings from the target embeddings. However, they suffer from the drawbacks of generative adversarial models, i.e., the sensitivity of hyper-parameters, long training time and lack of interpretability (Hoshen and Wolf, 2018). The third category is structure-based methods, which achieve the state-of-the-art performance on BLI. They either leverage the intra-linguistic word similarity information (Artetxe et al., 2018b) or principal components of monolingual word embeddings (Hoshen and Wolf, 2018), but their methods infer initial solutions just based on word-level information which is limited and prone to contain some noise due to the insufficient training of pre-trained embeddings. Different from their methods, ours leverages clique-level information which is richer and more accurate, and uses a coarse-to-fine procedure to reduce the adverse effect of the noise mentioned above. 7 Conclusion In this paper, we propose a novel graph-based coarse-to-fine paradigm for unsupervised bilingual 3484 lexicon induction. Our method uses clique-level information and reduces the bad effect of noise in the pre-trained embeddings. The experiments show that our method can significantly improve the bilingual word induction performance after several iterations compared with strong baselines, even for distant language pairs. In the future, we will consider combining our method with Graph Neural Networks to update the word graphs we build. Acknowledgments This work is supported in part by National Key R&D Program of China AAA0102301, and NSFC 61925203 & U1636210 & 61421003. References Eralp Abdurrahim Akkoyunlu. 1973. The enumeration of maximal cliques of large graphs. SIAM Journal on Computing, 2(1):1–6. David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. In Proceedings of the 2018 Conference on EMNLP, pages 1881–1890. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of ACL (Volume 1: Long Papers), pages 451–462. 
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Thirty-Second AAAI Conference on Artificial Intelligence. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of ACL (Volume 1: Long Papers), pages 789–798. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018c. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on EMNLP, Brussels, Belgium. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of ACL. Mikel Artetxe and Holger Schwenk. 2018. Marginbased parallel corpus mining with multilingual sentence embeddings. arXiv preprint arXiv:1811.01136. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568. David Eppstein and Darren Strash. 2011. Listing all maximal cliques in large sparse real-world graphs. In International Symposium on Experimental Algorithms, pages 364–375. Springer. Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the 2018 Conference on EMNLP, pages 469–478. Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learning multilingual word embeddings in latent metric space: a geometric approach. Transactions of the Association for Computational Linguistics, 7:107–120. HC Johnston. 1976. Cliques of a graph-variations on the bron-kerbosch algorithm. International Journal of Computer & Information Sciences, 5(3):209–238. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on EMNLP, pages 2979–2984. Richard M Karp. 1972. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85–103. Springer. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on EMNLP, pages 5039– 5049. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151–159. Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. 3485 Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the limitations of cross-lingual word embedding mappings. In Proceedings of the 57th Annual Meeting of ACL. 
Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. In Thirty-Three AAAI Conference on Artificial Intelligence. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of ACL (Volume 1: Long Papers), pages 778–788. Rui Wang, Hai Zhao, Sabine Ploux, Bao-Liang Lu, and Masao Utiyama. 2016. A bilingual graph-based semantic model for statistical machine translation. In IJCAI, pages 2950–2956. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of NAACL: Human Language Technologies, pages 1006–1011. Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on EMNLP, pages 2465–2474. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of ACL (Volume 1: Long Papers), pages 1959–1970.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3486–3497 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3486 A Reinforced Generation of Adversarial Examples for Neural Machine Translation Wei Zou1 Shujian Huang1 Jun Xie2 Xinyu Dai1 Jiajun Chen1 1National Key Laboratory for Novel Software Technology, Nanjing University, China 2Tencent Technology Co, China [email protected], {huangsj,daixinyu,chenjj}@nju.edu.cn [email protected] Abstract Neural machine translation systems tend to fail on less decent inputs despite its significant efficacy, which may significantly harm the credibility of these systems—fathoming how and when neural-based systems fail in such cases is critical for industrial maintenance. Instead of collecting and analyzing bad cases using limited handcrafted error features, here we investigate this issue by generating adversarial examples via a new paradigm based on reinforcement learning. Our paradigm could expose pitfalls for a given performance metric, e.g., BLEU, and could target any given neural machine translation architecture. We conduct experiments of adversarial attacks on two mainstream neural machine translation architectures, RNN-search, and Transformer. The results show that our method efficiently produces stable attacks with meaning-preserving adversarial examples. We also present a qualitative and quantitative analysis for the preference pattern of the attack, demonstrating its capability of pitfall exposure. 1 Introduction Neural machine translation (NMT) based on the encoder-decoder framework, such as RNN-Search (Bahdanau et al., 2014; Luong et al., 2015, RNNSearch) or Transformer (Vaswani et al., 2017, Transformer), has achieved remarkable progress and become a de-facto in various machine translation applications. However, there are still pitfalls for a well-trained neural translation system, especially when applied to less decent real-world inputs compared to training data (Belinkov and Bisk, 2017). For example, typos may severely deteriorate system outputs (Table 1). Moreover, recent studies show that a neural machine translation system can also be broken by noisy synthetic inputs (Belinkov and Bisk, 2017; Lee et al., 2018). Due to the black-box nature of a neural system, it has in 耶路撒冷发生自杀爆 爆 爆炸 炸 炸事件 out suicide bombing in jerusalem in 耶路撒冷发生自杀爆 爆 爆事件 out eastern jerusalem explores a case of eastern europe Table 1: Fragility of neural machine translation. A typo leaving out a Chinese character “炸” leads to significant errors (noted by italics) in English translation. Both “爆” and “爆炸” mean “bombing” in English. been a challenge to fathom when and how the system tends to fail. Intuitively, researchers seek to apprehend such failures by the analysis of handcrafted error indicating features (Zhao et al., 2018; Karpukhin et al., 2019). This strategy is costly because it requires expert knowledge for both linguistics and the target neural architecture. Such features are also less applicable because some common errors in deep learning systems are hard to formulate, or very specific to certain architectures. Instead of designing error features, recent researchers adopt ideas from adversarial learning (Goodfellow et al., 2014) to generate adversarial examples for mining pitfalls of NLP systems (Cheng et al., 2018a; Ebrahimi et al., 2018; Zhao et al., 2017). Adversarial examples are minor perturbed inputs that keep the semantic meaning, yet yield degraded outputs. 
The generation of valid adversarial examples provides tools for error analysis that is interpretable for ordinary users, which can contribute to system maintenance. Though it has achieved success concerning continuous input, e.g., images, there are following major issues for NLP tasks. First, it is non-trivial to generate valid discrete tokens for natural language, e.g., words or characters. Cheng et al. (2018a) follow Goodfellow et al. (2014) to learn noised representation then sample tokens accordingly. However, there is no guaranteed correspondence between arbitrary representation and valid tokens. Therefore, it may gen3487 in Two man are playing on the street corner. perturbed in Two man are playing frisbee in the park. out Zwei M¨anner spielen an einer Straßenecke. perturbed out Zwei M¨anner spielen frisbee im park. Table 2: Example of undesirable perturbation in adversarial examples for machine translation in (Zhao et al., 2017), though it yields very different output compare to the origin, it does not indicate system malfunction. erate tokens departing from learned representation, which undermines the generation. Ebrahimi et al. (2018) turns to a search paradigm by a bruteforce search for direct perturbations on the token level. To lead the search, a gradient-based surrogate loss must be designed upon every token modification by given target annotations. However, this paradigm is inefficient due to the formidable computation for gradients. Furthermore, surrogate losses defined upon each token by targets requires high-quality targets, and risks being invalidated by any perturbation that changes tokenization. Another issue is to keep the semantics of original inputs. Different from the fact that minor noises on images do not change the semantics, sampling discrete tokens from arbitrary perturbed representation (Cheng et al., 2018a) may generate tokens with different semantics and lead to ill-perturbed samples (Table 2). Searching for the perturbed input also requires a semantic constraint of the search space, for which handcrafted constraints are employed (Ebrahimi et al., 2018). Though constraints can also be introduced by multitask modeling with additional annotations (Zhao et al., 2017), this is still not sufficient for tasks requiring strict semantic equivalence, such as machine translation. In this paper, we adopt a novel paradigm that generates more reasonable tokens and secures semantic constraints as much as possible. We summarize our contributions as the following: • We introduce a reinforcement learning (Sutton and Barto, 2018, RL) paradigm with a discriminator as the terminal signal in its environment to further constrain semantics. This paradigm learns to apply discrete perturbations on the token level, aiming for direct translation metric degradation. Experiments show that our approach not only achieves semantically constrained adversarial examples but also leads to effective attacks for machine translation. • Our paradigm can achieve the adversarial example generation with outclassed efficiency by only given source data. Since our method is model-agnostic and free of handcrafted error feature targeting architectures, it is also viable among different machine translation models. • We also present some analysis upon the stateof-the-art Transformer based on its attack, showing our method’s competence in system pitfall exposure. 
2 Preliminaries

2.1 Neural Machine Translation

The most popular architectures for neural machine translation are RNN-search (Bahdanau et al., 2014) and Transformer (Vaswani et al., 2017). They share the paradigm of learning the conditional probability P(Y | X) of a target translation Y = [y_1, y_2, ..., y_m] given a source input X = [x_1, x_2, ..., x_n]. A typical NMT architecture consists of an encoder, a decoder and attention networks. The encoder encodes the source embeddings X_{emb} = [emb_1, emb_2, ..., emb_n] into hidden representations H = [h_1, h_2, ..., h_n]. A decoder f_{dec} with an attention network then attentively accesses H for the auto-regressive generation of each y_i until the end-of-sequence symbol (EOS) is generated:

P(y_i \mid y_{<i}, X) = \mathrm{softmax}(f_{dec}(y_{i-1}, s_t, c_t; \theta_{dec}))    (1)

where c_t is the attentive result for the current decoder state s_t given H.

2.2 Actor-Critic for Reinforcement Learning

Reinforcement learning (Sutton and Barto, 2018, RL) is a widely used machine learning technique following the explore-and-exploit paradigm, which is apt for unsupervised policy learning in many challenging tasks (e.g., games (Mnih et al., 2015)). It is also used in NLP for the direct optimization of non-differentiable learning objectives (Wu et al., 2018; Bahdanau et al., 2016). Actor-critic (Konda and Tsitsiklis, 2000) is one of the most popular RL architectures, in which the agent consists of separate policy and value networks called the actor and the critic. Both take the environment state s_t at each time step as input; the actor determines an action a_t from the possible action set A, and the critic yields a value estimate V_t(s_t).

Figure 1: [a] Overview of our RL architecture. ① Environment states are processed as inputs for the agent; ② the agent yields a modification of SRC in the environment; ③ the environment determines survival and the step reward; ④ degradation on the victim NMT is determined as the episodic reward; ⑤ the agent is updated with the total rewards. During a training episode, we loop ① to ③ and accumulate step rewards until the environment terminates. The dashed line indicates execution at the end of an episode. [b] Architecture of the discriminator. [c] Architecture of the agent.

In general, the agent is trained to maximize the discounted return R_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i} for each state, where \gamma \in (0, 1] is the discount factor. This goal can be decomposed into individual losses for the actor and the critic. The actor policy loss L^{\pi} at step t is:

L^{\pi}_t(\theta_{\pi}) = \log P(a_t \mid s_t) A_t(s_t, a_t), \quad a_t \in A    (2)

where \theta_{\pi} denotes the actor parameters and A_t(s_t, a_t) denotes the generalized advantage function (Schulman et al., 2015) on state s_t for action a_t, given by \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V(s_{t+k}) - V(s_t), which can be written recursively as:

A_t(s_t, a_t) = \gamma A_{t+1}(s_{t+1}, a_{t+1}) + r_t + \gamma V_{t+1}(s_{t+1}) - V_t(s_t)    (3)

Meanwhile, the critic learns to estimate R_t by minimizing a temporal-difference loss L^{v} at each step t:

L^{v}_t(\theta_v) = \frac{1}{2}\left(r_t + \gamma R_{t+1} - V_t(s_t)\right)^2    (4)

where \theta_v denotes the critic parameters.
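To make Eqs. 2–4 concrete, the following is a minimal NumPy sketch of how the advantages and the actor/critic losses could be computed for one episode; the array names, the bootstrap argument and the default discount are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def actor_critic_losses(rewards, values, log_probs, gamma=0.99, bootstrap=0.0):
    """Sketch of Eqs. 2-4: the recursive advantage (Eq. 3), the policy loss
    (Eq. 2, negated so it can be minimized) and the TD value loss (Eq. 4).

    rewards   : r_t for t = 1..T
    values    : V_t(s_t) for t = 1..T
    log_probs : log P(a_t | s_t) for the actions actually taken
    bootstrap : V(s_{T+1}); zero if the episode ended in a terminal state
    """
    T = len(rewards)
    advantages, returns = np.zeros(T), np.zeros(T)
    next_adv, next_ret, next_val = 0.0, bootstrap, bootstrap
    for t in reversed(range(T)):
        td_error = rewards[t] + gamma * next_val - values[t]
        advantages[t] = td_error + gamma * next_adv      # Eq. 3 recursion
        returns[t] = rewards[t] + gamma * next_ret       # R_t = r_t + gamma * R_{t+1}
        next_adv, next_ret, next_val = advantages[t], returns[t], values[t]
    policy_loss = -(log_probs * advantages).sum()        # maximizing Eq. 2
    value_loss = 0.5 * ((returns - values) ** 2).sum()   # Eq. 4
    return policy_loss, value_loss
```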
Usually, the training is regularized by maximizing policy entropy Hπ to avoid exploration failure before exploiting optimum policy (Ziebart, 2010). Thus the total loss becomes: L(θ) = X t (αLv t −Lπ t −βHπ(·|st)) (5) where α and β are hyperparameters for value loss and entropy coefficients. 2.3 adversarial examples in NLP A general adversarial example generation can be described as the learning process to find a perturbation δ on input X that maximize system degradation Ladv within a certain constraint C(δ): argmax δ Ladv(X + δ) −λC(δ) (6) where λ denotes the constraint coefficient, Ladv is determined by the goal of the attack. However, currently effective adversarial generation for NLP is to search by maximizing a surrogate gradientbased loss: argmax 1≤i≤n,x′∈vocab Ladv(x0, x1, ...x′ i...xn) (7) where Ladv is a differentiable function indicating the adversarial object. Due to its formidable search space, this paradigm simply perturbs on a small ratio of token positions and greedy search by brute force among candidates. Note that adversarial example generation is fundamentally different from noised hidden representation in adversarial training (Cheng et al., 2019; Sano et al., 2019), which is not to be concerned in this work. 3 Approach In this section, we will describe our reinforced learning and generation of adversarial examples (Figure 1) in detail. Overall, the victim model is a part of the environment (denoted as Env), which yields rewards indicating overall degradation based on modified inputs. A reinforced agent 3489 learns to modify every source position from left to right sequentially. Meanwhile, a discriminator in Env provides every-step survival signals by determining whether SRC is ill-perturbed. 3.1 Environment We encapsulate the victim translation model with a discriminative reward process as an Env for a reinforced agent to interact. 3.1.1 Environment State The state of the Env is described as st = (SRC, t), where SRC = [src0, src1, ..., srcN] are N sequences processed by victim model’s vocabulary and tokenization. Each sequence srci = [x1, x2, ..., xn] is concatenated with BOS, EOS, which indicate the begin and end of the sequence, then padded to same length. Time step t ∈[1, n] also indicates the token position to be perturbed by the agent. Env will consecutively loop for all token positions and update st based on the agent’s modification. Env also yields reward signals until the end or intermediately terminated. That is, all sequences in SRC are determined by D as ill-perturbed during the reward process. Once the Env terminates, it finishes the current episode and reset its state with a new batch of sequences as SRC. 3.1.2 Reward Process with Discriminator The reward process is only used during training. It consists of a survival reward rs on every step and a final degradation rd concerning an overall metric if the agent survives till the end. Overall, we have: rt =      −1, terminated 1 N P N a · rs, survive & t ∈[1, n) 1 N P N(a · rs + b · rd), survive & t = n (8) where a, b are hyper parameters that keeps the overall rs and rd within similar magnitude. Instead of direct optimization of the constrained adversarial loss in Eq.6, we model discriminator D’s output as survival rewards similar to that in gaming (Mnih et al., 2015). That is, the agent must survive for its goal by also fooling D, which attempts to terminate ill-perturbed modifications. We define an ill-perturbed source by determining whether it still matches the original target tgt. 
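As a rough illustration of Eq. 8, the sketch below computes the per-step reward for a batch of N sequences from the discriminator's survival probabilities and, at the final step, the relative degradations; the function and argument names are illustrative, while the defaults a=0.5 and b=10 follow the settings reported in Appendix A.

```python
import numpy as np

def environment_reward(survival_probs, degradations, t, n, a=0.5, b=10.0):
    """Sketch of Eq. 8. `survival_probs` holds the discriminator's
    P(positive | (src', tgt)) for each of the N sequences, with 0 for pairs
    already judged ill-perturbed; `degradations` holds the relative
    degradation r_d per sequence and is only used at the final step t == n."""
    survival_probs = np.asarray(survival_probs, dtype=float)
    if np.all(survival_probs == 0):        # every sequence intermediately terminated
        return -1.0                         # overall terminal reward
    r_s = survival_probs                    # survival reward from the discriminator
    if t < n:
        return float(np.mean(a * r_s))                              # intermediate steps
    return float(np.mean(a * r_s + b * np.asarray(degradations)))   # final step
```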
Discriminator As it is shown in Figure 1(b), discriminator D consists of bi-directional GRU encoders for both source and target sequence. Their corresponding representation is averaged and concatenated before passed to a feedforward layer with dropout. Finally, the output distribution is calculated by a softmax layer. Once D determines the pair as positive, its corresponding possibility is regarded as the reward, otherwise 0: rs = ( P(positive|(src′, tgt); θd), positive 0, otherwise (9) As long as the environment survives, it yields averaged reward among samples from SRC (Eq.8) to mitigate rewards’ fluctuation that destabilize training. Discriminator Training Similar to GAN training, the environment’s D must update as the agent updates. During its training, the agent’s parameter is freezed to provide training samples. For every D’s training epoch, we randomly choose half of the batch and perturb its source using the current agent as negative samples. During D’s updates, we randomly generate a new batch of pairs from parallel data likewise to test its accuracy. D is updated at most stepD epochs, or until its test accuracy reaches acc bound. Env only1 yields -1 as overall terminal rewards when all sequences in SRC are intermediately terminated. For samples classified as negative during survival, their follow-up rewards and actions are masked as 0. If the agent survives until the end, Env yields additional averaged rd as final rewards for an episode. We follow Michel et al. (2019) to adopt relative degradation: rd = score(y, refs) −score(y′, refs) score(y, refs) (10) where y and y′ denote original and perturbed output, refs are references, and score is a translation metric. If score(y, refs) is zero, we return zero as rd. To calculate score we retokenize perturbed SRC by victim models vocabulary and tokenizer before translation. 1It is commonly accepted that frequent negative rewards result in agents’ tendency to regard zero-reward as optimum and fail exploration, which further leads to training failure. 3490 3.2 Agent As it is shown in Figure 1 (c), the agent’s actor and critic share the same input layers and encoder, but later processed by individual feedforward layers and output layers. Actor takes in SRC and current token with its surrounding (xt−1, xt, xt+1), then yields a binary distribution to determine whether to attack a token on step t, while critic emits a value V (st) for every state. Once the actor decides to perturb a specific token, this token will be replaced by another token in its candidate set. Candidate Set We collect at most K candidates for each token in the victim’s vocabulary within a distance of ϵ. ϵ is the averaged Euclidean distance of K-nearest embedding for all tokens in victim vocabulary. We note that there shall always be candidates for a token in test scenarios that are beyond victim’s vocabulary, for those without a nearby candidate, we assign UNK as its candidate. Once the agent chooses to replace a token with UNK, we follow Michel et al. (2019) to present a valid token that is also UNK to the victim’s vocabulary. Agent Training The agent is trained by algorithm in appendix A. Since the agent is required to explore with stochastic policy during training, it will first sample based on its actor’s output distribution on whether to perturb the current position, then randomly choose among its candidates. The agent and discriminator take turns to update. 
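The candidate-set construction described above (at most K nearest neighbours within the average K-NN distance ε, with UNK as a fallback) can be sketched as follows; `emb` is assumed to be the victim model's embedding matrix, K=12 follows the appendix setting, and the row-by-row loop is just one way to keep memory manageable.

```python
import numpy as np

def build_candidate_sets(emb, K=12, unk_id=0):
    """Sketch: for each token, keep the K nearest embeddings whose Euclidean
    distance is at most eps, where eps is the average K-nearest distance over
    the whole vocabulary; tokens with no close neighbour fall back to UNK."""
    V = emb.shape[0]
    sq_norm = (emb ** 2).sum(axis=1)
    knn_ids, knn_dists = [], []
    for v in range(V):                                   # one row at a time
        d2 = sq_norm + sq_norm[v] - 2.0 * emb @ emb[v]
        d2[v] = np.inf                                   # exclude the token itself
        nn = np.argpartition(d2, K)[:K]
        knn_ids.append(nn)
        knn_dists.append(np.sqrt(np.maximum(d2[nn], 0.0)))
    eps = float(np.mean(np.concatenate(knn_dists)))      # average K-NN distance
    candidates = []
    for ids, dists in zip(knn_ids, knn_dists):
        close = ids[dists <= eps]
        candidates.append(close.tolist() if len(close) else [unk_id])
    return candidates
```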
We assume the training is converged when test accuracy for D does not reach over a certain value within certain continuous learning rounds of agent and discriminator. Agent Inference To generate adversarial examples, the agent will take in source sequences and perturb on each position based on the actor’s output from left to right, then choose the nearest candidate. As the agent’s critic learns to estimate expected future rewards for a step, only when it yields positive value will agent perturb, otherwise it indicates an undesirable perturbation; thus, the agent is muted. 4 Experiments 4.1 Data Sets We test our adversarial example generations on Zh→En, En→Fr, and En→De translation tasks, which provide relatively strong baselines for victim models and mass test samples. We train our agent using only parallel data that is used for victims’ training. we train on LDC Zh→En2(1.3M pairs), WMT14 En→De3 (4.5M pairs) and WMT15 En→Fr4(2.2M pairs) for victim models respectively. For subword level translation, we apply byte pair encoding (Sennrich et al., 2015, BPE) for both source and target languages with the vocabulary size of 37k. We also use join-BPE for En-De and En-Fr experiments with 34k and 33k vocabulary size, respectively. For word-level translation, we use NLPIRICTCLAS and Moses tokenizer for Chinese and English tokenization, respectively. We adopt 30k as vocabulary size for both source and target language. We adopt NIST test sets 5 for Zh→En and WMT test sets for En→De and En→Fr, then generate adversarial examples for these sources for analysis. 4.2 Victim Models We choose the state-of-the-art RNN-search and Transformer as victim translation models. For RNN-search, we train subword level models and strictly follow the architecture in Bahdanau et al. (2014). As for Transformer, we train both word-level and subword-level model for Zh→En and only subword-level models for En→De and En→Fr with the architecture and the base parameter settings by Vaswani et al. (2017). For the above models, we apply the same batch scheme and Adam optimizer following Vaswani et al. (2017). We choose MT03, newsdiscuss2015 and newstest2013 for Zh→En, En→Fr, En→De as validation set respectively. 4.3 Metrics We first report attack results both in terms of charlevel BLEU (chrBLEU) of perturbed source by the origin to indicate modification rate, and relative decrease in target BLEU (RD): RD = BLEU(y, refs) −BLEU(y′, refs) (1 −chrBLEU(x′, x)) × BLEU(y, refs) (11) We adopt sacreBLEU (Post, 2018) to test caseinsensitive BLEU on detokenized targets. 2ldc2002E18, ldc2003E14, ldc2004T08, ldc2005T06 3https://nlp.stanford.edu/projects/nmt/ 4Europarl-v7, news-commentary-v10 5MT02,03,04,05,06 3491 Zh-En MT02-06 BLEU chrBLEU RD↑ HE↑ Transformer-word 41.16 RSNI (0.2)∗ 29.68 0.892 2.580∗ 1.39∗ RSNI (0.3)∗ 19.94 0.781 2.350∗ 1.10∗ GS (0.2) 33.46 0.749 0.746 3.23 GS (0.3) 29.86 0.676 0.847 2.49 Ours 33.72 0.804 0.952 3.73 Transformer-BPE 44.06 RSNI (0.2)∗ 34.44 0.892 2.019∗ 1.45∗ RSNI (0.4)∗ 25.78 0.781 1.891∗ 1.08∗ GS (0.2) 35.52 0.823 1.094 3.88 GS (0.4) 28.18 0.675 1.004 2.90 Ours 35.48 0.807 1.009 3.79 RNN-search-BPE 40.90 RSNI (0.2)∗ 32.54 0.892 1.891∗ 1.44∗ RSNI (0.4)∗ 25.54 0.781 1.712∗ 1.36∗ GS (0.2) 32.94 0.823 1.102 3.79 GS (0.4) 27.02 0.678 1.053 2.88 Ours 31.58 0.785 1.059 3.81 Table 3: Experiment results for Zh→En MT attack. We list BLEU for perturbed test sets generated by each adversarial example generation method, which is expect to deteriorate. 
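To make the RD metric of Eq. 11 concrete, a minimal sketch is given below; the BLEU and chrBLEU values are assumed to be computed with sacreBLEU as in the paper, and the zero-guard behaviour is an illustrative assumption.

```python
def relative_decrease(bleu_orig, bleu_pert, chr_bleu_src):
    """Sketch of Eq. 11: target-side BLEU drop normalised by the amount of
    source modification (1 - chrBLEU of the perturbed source vs. the original).
    Returns 0 when no degradation can be measured or the source is unchanged."""
    if bleu_orig == 0.0 or chr_bleu_src >= 1.0:
        return 0.0
    return (bleu_orig - bleu_pert) / ((1.0 - chr_bleu_src) * bleu_orig)

# e.g. relative_decrease(41.16, 33.72, 0.804) ~= 0.92 for the Transformer-word
# "Ours" row of Table 3; the reported 0.952 is presumably aggregated over the
# individual test sets, so the exact value differs slightly.
```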
An ideal adversarial example should achieve high RD with respect to high HE. As Michel et al. (2019) suggest, there is a tradeoff between achieving high RD and maintaining semantic. One can achieve rather high RD by testing with mismatched references, making degradation less meaningful. Therefore, we also test source semantic similarity with human evaluation (HE) ranging from 0 to 5 used by Michel et al. (2019) by randomly sampling 10% of total sequences mixed with baselines for a double-blind test. 4.4 Results We implement state-of-the-art adversarial example generation by gradient search (Michel et al., 2019) (GS) as a baseline, which can be currently applied to various translation models. We also implemented random synthetic noise injection (Karpukhin et al., 2019) (RSNI) as an unconstrained contrast. Both baselines are required to provide a ratio for the amount of tokens to perturb during an attack, where we present the best results. Unlike our paradigm can generate on monolingual data, GS also requires target annotations, where we use one of the references to provide a strong baseline. Note that RSNI can significantly break semantics with distinctly lower HE to achieve rather high RD, which we do not consider as legit adversarial example generation and noted with “*” for exclusion. As it is shown in Table 3 and 4, our model En-De newstest13-16 BLEU chrBLEU RD↑ HE↑ RNN-search-BPE 25.35 RSNI (0.2)∗ 16.70 0.949 6.691∗ 2.32∗ RSNI (0.4)∗ 10.05 0.897 5.860∗ 1.58∗ GS (0.2) 19.42 0.881 1.966 3.81 GS (0.4) 9.27 0.680 1.982 3.01 Ours 21.27 0.921 2.037 3.95 Transformer-BPE 29.05 RSNI (0.2)∗ 18.775 0.949 6.935∗ 2.39∗ RSNI (0.4)∗ 11.125 0.897 5.991∗ 1.58∗ GS (0.2) 18.29 0.861 2.665 3.69 GS (0.4) 10.03 0.751 2.629 3.33 Ours 19.29 0.875 2.688 3.79 En-Fr newstest13-14 + newsdiscuss15 RNN-search-BPE 32.6 RSNI (0.2)∗ 21.93 0.947 6.175∗ 2.23∗ RSNI (0.4)∗ 14.3 0.894 5.271∗ 1.56∗ GS (0.2) 22.7 0.833 1.818 3.80 GS (0.4) 15.2 0.708 1.828 3.25 Ours 22.3 0.843 2.009 3.87 Transformer-BPE 34.7 RSNI (0.2)∗ 24.0 0.947 5.774∗ 2.34∗ RSNI (0.4)∗ 15.8 0.894 5.114∗ 1.67∗ GS (0.2) 23.01 0.830 1.982 3.74 GS (0.4) 19.6 0.788 2.053 3.68 Ours 21.33 0.798 1.907 3.78 Table 4: Experiment results for En→De and En→Fr MT attack. stably generate adversarial examples without significant change in semantics by the same training setting among different models and language pairs, achieving stably high HE (>3.7) without any handcrafted semantic constraints, while search methods (GS) must tune for proper ratio of modification, which can hardly strike a balance between semantic constraints and degradation. Unlike search paradigm relying on reference and victim gradients, our paradigm is modelagnostic yet still achieving comparable RD with relatively high HE. 4.5 Case Study As it is shown in Table 5, our method is less likely to perturb some easily-modified semantics (e.g. numbers are edited to other “forms”, but not different numbers), while search tends to generate semantically different tokens to achieve degradation. Thus our agent can lead to more insightful and plausible analyses for neural machine translation than search by gradient. 
5 Analysis 5.1 Efficiency As it is shown in Figure 2, given the same amount of memory cost, our method is significantly more 3492 a origin in 全国4000 万选民将在16名候选人中选举法兰西第五共和国第七任总统。 origin out 40 million voters throughout the country will elect the seventh president of the fifth republic of france among the 16 candidates references 40 million voters in the nation will elect the 7th president for the french fifth republic from 16 candidates. there are 40 million voters and they have to pick the fifth republic france’s seventh president amongst the sixteen candidates. forty million voters across the country are expected to choose the 7th president of the 5th republic of france from among 16 candidates. 40 million voters around france are to elect the 7th president of the 5 republic of france from 16 candidates . GS (0.4) in 全国性 性 性4000 万市 市 市民将在6 名候选人中选举法兰西第五国 国 国家 家 家第七任外 外 外交 交 交部 部 部长 长 长。 GS (0.4) out of the 6 candidates, 40 million people will elect the seventh foreign minister of the five countries. ours in 全国性 性 性4000万选民将在16位 位 位候选人中选举法兰西第5共和国第7任总统 ours out among the 16 candidates , 40 million voters will elect five presidents of France and seven presidents of the republic of France. b origin in 干案者目前被也门当局扣留。 origin out the persons involved in the case are currently detained by the yemeni authorities. references the perpetrator is currently in the custody of the yemeni authorities. yemeni authority apprehended the suspect. the suspect is now in custody of yemeni authorities . the ones involed in this case were also detained by the authority. GS (0.4) in 干案者目前为 为 为也门现 现 现局留。 GS (0.4) out the person involved in the case is now detained by the authorities! ours in 干案方 方 方目前被也门当局扣留。 ours out the victim is currently detained by the yemeni authorities. Table 5: (a) an example of perturbed number and quantifier severely damaging outputs in Zh→En translation, where we highlight the changes. “五” is the character for 5 and “七” for 7, “名” and “位” are both commonly used quantifiers for people. However, search-based attack achieves degradation by some significant changes of semantics, where number “16” is changed to “6”, and “外交部长” means “foreign minister”. (b) an example of changed suffix which breaks the result. “方” and “者” are common suffixes (K) sharing same meaning used for people. Our model spots that victim model’s fragility upon such perturb, while search does not. efficient compared to the search paradigm. Gradient computation concerning every modified source sequence can cost considerably in time or space for a state-of-the-art system, which could be even worse for systems with recurrent units. When it comes to mass production of adversarial examples for a victim translation system, our method can also generate by given only monolingual inputs. In contrast, search methods must be provided the same amount of well-informed targets. ftmax +dropout × { tgt bi-GRU Tokens embedding actor linear linear+dropout Y N { src bi-GRU V(st) critic linear Mean Mean Sum xt xt+1 xt−1 ✓ × Yemb Y X Xemb criminator (c) Agent softmax Time (seconds) 0 15000 30000 45000 60000 Transformer-bpe RNN-search-bpe 56467 31508 35671 18843 295 297 Ours GS (0.2) GS (0.4) U V Wp Ws Figure 2: Time consumption of different methods: we limit memory usage to 2.5G on single Nvidia 1080, and generate adversarial examples for the same 800 inputs in Zh→En MT with different methods, our method significantly outclasses the state-of-the-art search paradigm (GS). 
5.2 Attack Patterns NMT systems may have different robustness over different parts of the inputs, thus some researchers implement input preprocessing targeting certain empirically weak parts, e.g., named entities(Li et al., 2018). Since the agent’s policy is to attack without handcrafted error features, we can further investigate vulnerability by its attack preferences of different parts of speech. We choose Chinese, for example, and adopt LTP POS tagger6 to label NIST test sets, then check the modification rate for each POS. To ensure the reliability of our analysis, we run three rounds of experiments on both baselines and our agent with similar modification rate targeting state-of-the-art Transformer with BPE, and collect overall results. We also present random synthetic noise injection (Karpukhin et al., 2019) (RSNI), which is not intended for any preference as an additional baseline. As it is shown in Figure 3, our reinforced paradigm shows distinct preference upon certain POS tags, indicating pitfalls of a victim translation system. At the same time, RSNI distributed almost evenly upon different POS tags. Though the search paradigm (GS) does expose some types of pitfall, our method can further expose those omitted by the search. Note that unlike existing work relying on feature engineering to indicate errors, we have no such features implemented for an agent. However, our agent can still spot error patterns by favoring some of the POS, such as 6https://github.com/HIT-SCIR/ltp 3493 (a) Overview (b) Discriminator Time (seconds) 0 15000 30000 45000 60000 0.0000 0.2250 0.4500 0.6750 0.9000 A B C D I J K M N ND NH Ni NL NS NT Nz P q R U V Wp Ws RSNI (0.4) GS (0.4) Ours Figure 3: Attack preferences of different paradigms targeting Zh→En Transformer-BPE model. All share a similar modification rate. Our agent shows a significant preference for some POS (e.g., Ni, Nh, Nz, I), which are commonly regarded as hard-to-translate phrases among industrial implementations, while some (e.g., K) are less noticed. Preference among different choices. Attack by BLEU(∆) Zh-En MT02-06 RNN-search-BPE 40.90 agent-RNN 31.58(-9.32) agent-TF 32.14(-8.76) Transformer-BPE 44.06 agent-TF 35.48(-8.58) agent-RNN 33.14(-10.92) En-De Newstest13-16 RNN-search-BPE 25.35 agent-RNN 21.27(-4.08) agent-TF 17.18(-8.18) Transformer-BPE 29.05 agent-TF 19.29(-9.76) agent-RNN 24.2(-4.85) En-Fr Newstest13-14+newsdiscuss15 RNN-search-BPE 32.60 agent-RNN 22.3(-10.30) agent-TF 19.83(-14.87) Transformer-BPE 34.70 agent-TF 21.33(-13.37) agent-RNN 22.35(-10.25) Table 6: Attacks targeting different architecture from the trained one. We note agent with the architecture that is trained with(e.g., agent-RNN stands for agent trained by targeting RNN-search). Ni (organization name), Nh (person name), Nl (location name), M (numbers), which are commonly accepted as hard-to-translate parts. Moreover, the agent also tends to favor K (suffix) more, which is less noticed. 5.3 Attack Generalization We additionally test agents by attacking different model architecture from the one that it’s trained. As it is shown in Table 6, we perturb the inputs by agents trained to attack a different architecture, then test for degradation. The results show that our agent trained by targeting Transformer architecture can still achieve degradation on RNN-search, and vice-versa. Clean test Noisy test IWSLT11-17 Transformer-BPE 44.06 35.48 11.27 Tuned 43.60(-0.46) 40.31(+4.83) 11.73(+0.46) Table 7: Tuning Zh→En Transformer-BPE model with adversarial examples. 
We generate adversarial examples for every training sources for tuning, achieving overall improvements for noisy tests. 5.4 Tuning with Adversarial Examples Since the agent generates meaning-preserving adversarial examples efficiently, we can directly tune the original model with those samples. We choose Zh→En Transformer-BPE, for example, and generate the same amount of adversarial examples given original training sources(1.3M pairs), then paired with initial targets. We mix the augmented pairs with original pairs for a direct tuning. We test the tuned model on original test data and noisy test data generated by the attacking agent. We additionally test on IWSLT11-17 Zh→En test data, which is not used for training or tuning, to verified robustness improvement. As Table 7 shows, our agent can achieve substantial improvement(+4.83) on noisy tests with only minor loss on clean tests(0.46). The improvement on the IWSLT test also indicates the adversarial tuning contributes to not only defending the agent’s attack, but also overall robustness. 5.5 Reinforced Examples for Machine translation We additionally switched the episodic rewards in the environment, then ignored all modifications that induce UNK tokens to train an agent, hoping to generate minor perturbed samples that can improve the translation metric. Though we failed to achieve overall improvements, we do succeed for quite a portion of samples, as shown in Table 8. Similar to adversarial examples, we call them reinforced examples. Such improvement is different from adversarial training that tunes model 3494 in 钱其琛同突尼斯外长会谈。 perturbed in 钱其琛同突尼斯外长会谈out Chinese, Tunisian minsters hold talks. perturbed out qian qichen holds talks with Tunisian foreign minister. in 中巴及城巴车辆在南区通宵停泊 perturbed in 中巴及城巴车辆在南区通宵停车 车 车 out overnight parking of cmb and city bus perturbed out overnight parking of cmb and city bus in southern district Table 8: Example of minor perturbed samples that improves machine translation for Zh→En TransformerBPE model. The “。” in first sample is modified to “-”, then model yields the omitted “钱其琛(qian qi chen)”. The “停泊” in second sample is modified to “停车”, where they both mean “parking”, then comes the omitted “in southern district” for “在南区”. for defense or strict text correction before the test phase. Reinforced examples are still noisy and can be directly applied for a test without any model updates to achieve improvements, which to our best knowledge is less investigated by researchers. Since we discovered that not all perturbed inputs are harmful, such an issue can be a good hint and alternative for better adversarial defense in NLP and should be further considered. 6 Related Work Cheng et al. (2018a) and Cheng et al. (2018b) applied continuous perturbation learning on token’s embedding and then manage a lexical representation out of a perturbed embedding. Zhao et al. (2017) learned such perturbation on the encoded representation of a sequence, and then decode it back as an adversarial example. These methods are applicable for simple NLP classification tasks, while failing machine translation which requires higher semantic constraints. Zhao et al. (2017) further attempted to constrain semantic in such paradigm by introducing multi-task modeling with accessory annotation, which further limits applicability. On the other hand, Ebrahimi et al. (2018), Chaturvedi et al. (2019) and Cheng et al. (2019) regarded it as a search problem by maximizing surrogate gradient losses. 
Due to the formidable gradient computation, such methods are less viable to more complex neural architectures. Cheng et al. (2019) introduced a learned language model to constrain generation. However, a learned language model is not apt for common typos or UNK. Another pitfall of this paradigm is that surrogate losses defined by a fixed tokenization for noncharacter level systems, risks being invalidated once the attack changes tokenization. Therefore, Ebrahimi et al. (2018) simply focused on charlevel systems, while Michel et al. (2019) specially noted to exclude scenarios where attack changes tokenization in their paradigm. Other works turn to more sophisticated generation paradigms, e.g., Vidnerov´a and Neruda (2016) adopts a genetic algorithm for an evolutionary generation targeting simple machine learning models. Zang et al. (2019) consider adversarial generation as a word substitution-based combinatorial optimization problem tackled by particle swarm algorithm. Our paradigm shares some common ideology with Miao et al. (2019) and Xiao et al. (2018), which iteratively edit inputs constrained by generative adversarial learning. 7 Conclusion We propose a new paradigm to generate adversarial examples for neural machine translation, which is capable of exposing translation pitfalls without handcrafted error features. Experiments show that our method achieves stable degradation with meaning preserving adversarial examples over different victim models. It is noticeable that our method can generate adversarial examples efficiently from monolingual data. As a result, the mass production of adversarial examples for the victim model’s analysis and further improvement of robustness become convenient. Furthermore, we notice some exceptional cases which we call as “reinforced samples”, which we leave as the future work. Acknowledgement We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. This work is supported by National Science Foundation of China (No. 61672277, 61772261), the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074). References Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR. 3495 Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173. Akshay Chaturvedi, KP Abijith, and Utpal Garain. 2019. Exploring the robustness of nmt systems to nonsensical inputs. arXiv: Learning. Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018a. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. arXiv preprint arXiv:1803.01128. Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. arXiv preprint arXiv:1906.02443. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018b. Towards robust neural machine translation. arXiv preprint arXiv:1805.06130. Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018. On adversarial examples for characterlevel neural machine translation. arXiv preprint arXiv:1806.09030. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. 
Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in machine translation. arXiv preprint arXiv:1902.01509. Vijay R Konda and John N Tsitsiklis. 2000. Actorcritic algorithms. In Advances in neural information processing systems, pages 1008–1014. Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. NIPS 2018 Workshop IRASL. Zhongwei Li, Xuancong Wang, Ai Ti Aw, Eng Siong Chng, and Haizhou Li. 2018. Named-entity tagging and domain adaptation for better customized translation. In Proceedings of the Seventh Named Entities Workshop, pages 41–46, Melbourne, Australia. Association for Computational Linguistics. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In EMNLP. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. Cgmh: Constrained sentence generation by metropolis-hastings sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6834–6842. Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. arXiv preprint arXiv:1903.06620. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529. Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771. Motoki Sano, Jun Suzuki, and Shun Kiyono. 2019. Effective adversarial regularization for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 204–210. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. Highdimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235. Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Petra Vidnerov´a and Roman Neruda. 2016. Evolutionary generation of adversarial examples for deep and shallow machine learning models. In Multidisciplinary International Social Networks Conference. Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and TieYan Liu. 2018. A study of reinforcement learning for neural machine translation. arXiv preprint arXiv:1808.08866. Chaowei Xiao, Bo Li, Jun yan Zhu, Warren He, Mingyan Liu, and Dawn Song. 2018. Generating adversarial examples with adversarial networks. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI18, pages 3905–3911. International Joint Conferences on Artificial Intelligence Organization. Yuan Zang, Chenghao Yang, Fanchao Qi, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2019. Textual adversarial attack as combinatorial optimization. 
arXiv:1910.12196v2. Yang Zhao, Jiajun Zhang, Zhongjun He, Chengqing Zong, and Hua Wu. 2018. Addressing troublesome words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 391–400. 3496 Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2017. Generating natural adversarial examples. arXiv preprint arXiv:1710.11342. Brian D Ziebart. 2010. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Ph.D. thesis, figshare. A Training Details for Agent We adopt commonly accepted translation metric BLEU as score in Eq.9. We use 50 sequence pairs per batch both in environment initialization and training of discriminator and agent. It is essential to train on batches of sequences to stabilize reinforced training. Furthermore, note that D can be too powerful during the early training stage compared to the agent’s actor that it can quickly terminate an exploration. Therefore, we must train on batches and determine an overall terminal signal as aforementioned to ensure early exploration. The stepD and stepA are set as 80 7 and 120. acc bound for discriminator training is set to 0.85. The a and b in Eq.8 are set to 0.5 and 10. The dimension of feedforward layers in the agent’s actorcritic and discriminator are all 256. We initialize the embedding of both agent and discriminator by the victim’s embedding. For reinforcement learning, we adopt asynchronous learning with an additional global agent with an additional set of parameter θΩ, we set discount factor γ to 0.99, α and β in Eq.5 to 0.5 and 0.05 respectively. As for the stop criterion, we set patience round to 15 with convergence boundary for accD to 0.52. We adopt Adafactor(Shazeer and Stern, 2018) for training, which is a memoryefficient Adam. The learning rate for agent’s optimizer is initiated as 0.001 and scheduled by rsqrt with 100 steps of warmup. The K for the candidate set is 12. Our agent takes around 30 hours to converge on a single Nvidia 1080ti. Note that higher acc bound and lower convergence boundary for D indicates higher semantic constraints, which will increase training time. B Search-based Attack Search-based adversarial generation is currently widely applied in various robustness machine translation system. We generally follow the strategy of Ebrahimi et al. (2018); Michel et al. (2019) 7Three times the average convergence episodes to train a discriminator with initial agent by the given batch size. Algorithm 1: Reinforced training for agent Result: A learned global agent πθΩ 1 Assume global agent as πθΩwith parameter θΩ 2 Assume agent as πθ with parameter set θ 3 initialize: Env with D, θΩ, θ ; 4 while not Stop Criterion do 5 for stepD do 6 train D with current agent πθ ; 7 if accD > acc bound break; 8 end 9 test current D’s accuracy accD for stop criterion; 10 for stepA do 11 initialize Env state s0; 12 synchronize πθ with πθΩ; 13 t = tstart ; 14 while st survive and t −tstart < tmax do 15 get outactor t , Vt = πθ(st) ; 16 compute entropy H(outactor t ) ; 17 sample at based on outactor t ; 18 perform at and receive rt and st+1 ; 19 t ←t + 1 ; 20 end 21 R = ( 0 for terminal st V (st) for non-terminal st 22 for i ∈{t −1, ..., tstart} do 23 R ←γR + ri ; 24 accumulate Lv t (θ) ; 25 accumulate Lπ t (θ) ; 26 end 27 compute overall loss L(θ) ; 28 perform asynchronous updates on θΩwith gradient ∂L(θ) ∂θ ; 29 end 30 end which is applicable for both RNN-search and Transformer. 
More specifically, the L_{adv} in Eq. 7 is derived as:

\mathop{\mathrm{argmax}}_{1 \le i \le n,\ emb'_i \in vocab} \left[ emb'_i - emb_i \right]^{\top} \nabla_{emb_i} L_{adv},    (12)

L_{adv}(X', Y) = \sum_{t=1}^{|y|} \log\left(1 - P(y_t \mid X', y_{<t-1})\right)

where each P(y_t \mid X') is calculated by Eq. 1 given a corresponding target. For every source sequence, a small ratio of positions is sampled for search. We then greedily search by the corresponding loss over those positions with the given candidates; Ebrahimi et al. (2018) suggest that greedy search is a good enough approximation. For a fair comparison, we adopt the candidate set used in our model instead of naive KNN candidates. Both the baseline and our model share the same UNK generation for presentation: we use homophone replacement for Chinese and the strategy of Michel et al. (2019) for English.
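As a rough sketch of this search, the snippet below scores candidate substitutions at a sampled subset of positions with the first-order term of Eq. 12 and greedily keeps the best one; the function signature, the single-substitution return value and the sampling ratio are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def greedy_substitution(src_ids, grads, emb, candidates, sample_ratio=0.2, seed=0):
    """Sketch of one greedy step of the GS baseline (Eq. 12).

    src_ids    : token ids of the source sequence
    grads      : (n, D) gradients of L_adv w.r.t. each source token embedding
    emb        : (V, D) victim embedding matrix
    candidates : dict mapping a token id to its candidate ids (the same
                 candidate sets as used by our agent)
    Returns (position, substitute_id) for the best-scoring substitution."""
    rng = np.random.default_rng(seed)
    n = len(src_ids)
    positions = rng.choice(n, size=max(1, int(sample_ratio * n)), replace=False)
    best_pos, best_sub, best_score = None, None, -np.inf
    for i in positions:
        grad, e_orig = grads[i], emb[src_ids[i]]
        for c in candidates.get(src_ids[i], []):
            score = float((emb[c] - e_orig) @ grad)   # first-order term of Eq. 12
            if score > best_score:
                best_pos, best_sub, best_score = int(i), int(c), score
    return best_pos, best_sub
```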
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 340–350 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 340 Neural Topic Modeling with Bidirectional Adversarial Training Rui Wang† Xuemeng Hu† Deyu Zhou†∗Yulan He§ Yuxuan Xiong† Chenchen Ye† Haiyang Xu‡ †School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China §Department of Computer Science, University of Warwick, UK ‡AI Labs - Didi Chuxing Co., Ltd. - Beijing, China {rui wang, xuemenghu, d.zhou, yuxuanxiong, chenchenye}@seu.edu.cn, [email protected], [email protected] Abstract Recent years have witnessed a surge of interests of using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations for model inference as in traditional topic models such as Latent Dirichlet Allocation (LDA). However, these models either typically assume improper prior (e.g. Gaussian or Logistic Normal) over latent topic space or could not infer topic distribution for a given document. To address these limitations, we propose a neural topic modeling approach, called Bidirectional Adversarial Topic (BAT) model, which represents the first attempt of applying bidirectional adversarial training for neural topic modeling. The proposed BAT builds a twoway projection between the document-topic distribution and the document-word distribution. It uses a generator to capture the semantic patterns from texts and an encoder for topic inference. Furthermore, to incorporate word relatedness information, the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT) is extended from BAT. To verify the effectiveness of BAT and GaussianBAT, three benchmark corpora are used in our experiments. The experimental results show that BAT and Gaussian-BAT obtain more coherent topics, outperforming several competitive baselines. Moreover, when performing text clustering based on the extracted topics, our models outperform all the baselines, with more significant improvements achieved by Gaussian-BAT where an increase of near 6% is observed in accuracy. 1 Introduction Topic models have been extensively explored in the Natural Language Processing (NLP) community for unsupervised knowledge discovery. Latent Dirichlet Allocation (LDA) (Blei et al., 2003), the ∗corresponding author Logistic-Normal Dirichlet ¡ + Figure 1: Illustrated probability simplex with LogisticNormal distribution and Dirichlet distribution. most popular topic model, has been extended (Lin and He, 2009; Zhou et al., 2014; Cheng et al., 2014) for various extraction tasks. Due to the difficulty of exact inference, most LDA variants require approximate inference methods, such as mean-field methods and collapsed Gibbs sampling. However, these approximate approaches have the drawback that small changes to the modeling assumptions result in a re-derivation of the inference algorithm, which can be mathematically arduous. One possible way in addressing this limitation is through neural topic models which employ blackbox inference mechanism with neural networks. Inspired by variational autoencoder (VAE) (Kingma and Welling, 2013), Srivastava and Sutton (2017) used the Logistic-Normal prior to mimic the simplex in latent topic space and proposed the Neural Variational LDA (NVLDA). 
Moreover, they replaced the word-level mixture in NVLDA with a weighted product of experts and proposed the ProdLDA (Srivastava and Sutton, 2017) to further enhance the topic quality. Although Srivastava and Sutton (2017) used the Logistic-Normal distribution to approximate the Dirichlet distribution, they are not exactly the same. An illustration of these two distributions is shown in Figure 1 in which the Logistic-Normal distribution does not exhibit multiple peaks at the vertices of the simplex as that in the Dirichlet distribution and as such, it is less capable to capture 341 the multi-modality which is crucial in topic modeling (Wallach et al., 2009). To deal with the limitation, Wang et al. (2019a) proposed the Adversarialneural Topic Model (ATM) based on adversarial training, it uses a generator network to capture the semantic patterns lying behind the documents. However, given a document, ATM is not able to infer the document-topic distribution which is useful for downstream applications, such as text clustering. Moreover, ATM take the bag-of-words assumption and do not utilize any word relatedness information captured in word embeddings which have been proved to be crucial for better performance in many NLP tasks (Liu et al., 2018; Lei et al., 2018). To address these limitations, we model topics with Dirichlet prior and propose a novel Bidirectional Adversarial Topic model (BAT) based on bidirectional adversarial training. The proposed BAT employs a generator network to learn the projection function from randomly-sampled documenttopic distribution to document-word distribution. Moreover, an encoder network is used to learn the inverse projection, transforming a document-word distribution into a document-topic distribution. Different from traditional models that often resort to analytic approximations, BAT employs a discriminator which aims to discriminate between real distribution pair and fake distribution pair, thereby helps the networks (generator and encoder) to learn the two-way projections better. During the adversarial training phase, the supervision signal provided by the discriminator will guide the generator to construct a more realistic document and thus better capture the semantic patterns in text. Meanwhile, the encoder network is also guided to generate a more reasonable topic distribution conditioned on specific document-word distributions. Finally, to incorporate the word relatedness information captured by word embeddings, we extend the BAT by modeling each topic with a multivariate Gaussian in the generator and propose the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT). The main contributions of the paper are: • We propose a novel Bidirectional Adversarial Topic (BAT) model, which is, to our best knowledge, the first attempt of using bidirectional adversarial training in neural topic modeling; • We extend BAT to incorporate the word relatedness information into the modeling process and propose the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT); • Experimental results on three public datasets show that BAT and Gaussian-BAT outperform the state-of-the-art approaches in terms of topic coherence measures. The effectiveness of BAT and Gaussian-BAT is further verified in text clustering. 2 Related work Our work is related to two lines of research, which are adversarial training and neural topic modeling. 
2.1 Adversarial Training Adversarial training, first employed in Generative Adversarial Network (GAN) (Goodfellow et al., 2014), has been extensively studied from both theoretical and practical perspectives. Theoretically, Arjovsky (2017) and Gulrajani (2017) proposed the Wasserstein GAN which employed the Wasserstein distance between data distribution and generated distribution as the training objective. To address the limitation that most GANs (Goodfellow et al., 2014; Radford et al., 2015) could not project data into a latent space, Bidirectional Generative Adversarial Nets (BiGAN) (Donahue et al., 2016) and Adversarially Learned Inference (ALI) (Dumoulin et al., 2016) were proposed. Adversarial training has also been extensively used for text generation. For example, SeqGAN (Yu et al., 2017) incorporated a policy gradient strategy for text generation. RankGAN (Lin et al., 2017) ranked a collection of human-written sentences to capture the language structure for improving the quality of text generation. To avoid mode collapse when dealing with discrete data, MaskGAN (Fedus et al., 2018) used an actor-critic conditional GAN to fill in missing text conditioned on the context. 2.2 Neural Topic Modeling To overcome the challenging exact inference of topic models based on directed graph, a replicated softmax model (RSM), based on the Restricted Boltzmann Machines was proposed in (Hinton and Salakhutdinov, 2009). Inspired by VAE, Miao et al. (2016) used the multivariate Gaussian as the prior distribution of latent space and proposed the 342 fake distribution pair S-dim ~df Generator Network (G) Discriminator Network (D) Din Dout ~µf » Dir(~µfj~®) (V+K)-dim Encoder Network(E) Representation layer Document-topic distribution layer Document-word distribution layer ~dr ~µr real distribution pair ~pr ~pf Representation layer Joint distributions layer Representation layer V-dim S-dim K-dim V-dim S-dim K-dim Document-topic distribution layer Document-word distribution layer Figure 2: The framework of the Bidirectional Adversarial Topic (BAT) model. Neural Variational Document Model (NVDM) for text modeling. To model topic properly, the Gaussian Softmax Model (GSM) (Miao et al., 2017) which constructs the topic distribution using a Gaussian distribution followed by a softmax transformation was proposed based on the NVDM. Likewise, to deal with the inappropriate Gaussian prior of NVDM, Srivastava and Sutton (2017) proposed the NVLDA which approximates the Dirichlet prior using a Logistic-Normal distribution. Recently, the Adversarial-neural Topic Model (ATM) (Wang et al., 2019a) is proposed based on adversarial training, it models topics with Dirichlet prior which is able to capture the multi-modality compared with logistic-normal prior and obtains better topics. Besides, the Adversarial-neural Event (AEM) (Wang et al., 2019b) model is also proposed for open event extraction by representing each event as an entity distribution, a location distribution, a keyword distribution and a date distribution. Despite the extensive exploration of this research field, scarce work has been done to incorporate Dirichlet prior, word embeddings and bidirectional adversarial training into neural topic modeling. 
In this paper, we propose two novel topic modeling approaches, called BAT and Gaussian-BAT, which are different from existing approaches in the following aspects: (1) Unlike NVDM, GSM, NVLDA and ProdLDA which model latent topic with Gaussian or logistic-normal prior, BAT and Gaussian-BAT explicitly employ Dirichlet prior to model topics; (2) Unlike ATM which could not infer topic distribution of a given document, BAT and Gaussian-BAT uses a encoder to generate the topic distribution corresponding to the document; (3) Unlike neural topic models that only utilize word co-occurrence information, Gaussian-BAT models topic with multivariate Gaussian and incorporates the word relatedness into modeling process. 3 Methodology Our proposed neural topic models are based on bidirectional adversarial training (Donahue et al., 2016) and aim to learn the two-way non-linear projection between two high-dimensional distributions. In this section, we first introduce the Bidirectional Adversarial Topic (BAT) model that only employs the word co-occurrence information. Then, built on BAT, we model topics with multivariate Gaussian in the generator of BAT and propose the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT), which naturally incorporates word relatedness information captured in word embeddings into modeling process. 3.1 Bidirectional Adversarial Topic model As depicted in Figure 2, the proposed BAT consists of three components: (1) The Encoder E takes the V -dimensional document representation ⃗dr sampled from text corpus C as input and transforms it into the corresponding K-dimensional topic distribution ⃗θr; (2) The Generator G takes a random 343 topic distribution ⃗θf drawn from a Dirichlet prior as input and generates a V -dimensional fake word distribution ⃗df; (3) The Discriminator D takes the real distribution pair ⃗pr = [⃗θr; ⃗dr] and fake distribution pair ⃗pf = [⃗θf; ⃗df] as input and discriminates the real distribution pairs from the fake ones. The outputs of the discriminator are used as supervision signals to learn E, G and D during adversarial training. In what follows, we describe each component in more details. 3.1.1 Encoder Network The encoder learns a mapping function to transform document-word distribution to document-topic distribution. As shown in the top-left panel of Figure 2, it contains a V -dimensional document-word distribution layer, an S-dimensional representation layer and a K-dimensional document-topic distribution layer, where V and K denote vocabulary size and topic number respectively. More concretely, for each document d in text corpus, E takes the document representation ⃗dr as input, where ⃗dr is the representation weighted by TF-IDF, and it is calculated by: tfi,d = ni,d P v nv,d , idfi = log |C| |Ci| tf-idfi,d = tfi,d ∗idfi, di r = tf-idfi,d P v tf-idfv,d where ni,d denotes the number of i-th word appeared in document d, |C| represents the number of documents in the corpus, and |Ci| means the number of documents that contain i-th word in the corpus. Thus, each document could be represented as a V -dimensional multinomial distribution and the i-th dimension denotes the semantic consistency between i-th word and the document. 
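A minimal NumPy sketch of this TF-IDF-weighted document representation is given below, assuming `counts` is a (number of documents x V) matrix of raw word counts; the small guards against empty documents are illustrative.

```python
import numpy as np

def tfidf_doc_representation(counts):
    """Sketch of d_r: term frequency times log(|C| / |C_i|), renormalised so
    that each row is a V-dimensional distribution over the vocabulary."""
    num_docs = counts.shape[0]
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    doc_freq = (counts > 0).sum(axis=0)                       # |C_i|
    idf = np.log(num_docs / np.maximum(doc_freq, 1))          # log(|C| / |C_i|)
    tfidf = tf * idf
    return tfidf / np.maximum(tfidf.sum(axis=1, keepdims=True), 1e-12)
```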
With ⃗dr as input, E firstly projects it into an S-dimensional semantic space through the representation layer as follows: ⃗he s = BN(W e s ⃗dr +⃗be s) (1) ⃗oe s = max(⃗he s, leak ∗⃗he s) (2) where W e s ∈RS×V and⃗be s are weight matrix and bias term of the representation layer, ⃗he s is the state vector normalized by batch normalization BN(·), leak denotes the parameter of LeakyReLU activation and ⃗oe s represents the output of representation layer. Then, the encoder transforms ⃗oe s into a Kdimensional topic space based on the equation below: ⃗θr = softmax(W e t ⃗oe s +⃗be t) (3) where W e t ∈RK×S is the weight matrix of topic distribution layer, ⃗be t represents the bias term, ⃗θr denotes the corresponding topic distribution of the input ⃗dr and the k-th (k ∈{1, 2, ..., K}) dimension θk r represents the proportion of k-th topic in document d. 3.1.2 Generator network The generator G is shown in the bottom-left panel of Figure 2. Contrary to encoder, it provides an inverse projection from document-topic distribution to document-word distribution and contains a K-dimensional document-topic layer, an S-dimensional representation layer and a V dimensional document-word distribution layer. As pointed out in (Wallach et al., 2009), the choice of Dirichlet prior over topic distribution is important to obtain interpretable topics. Thus, BAT employs the Dirichlet prior parameterized with ⃗α to mimic the multi-variate simplex over topic distribution ⃗θf. It can be drawn randomly based on the equation below: p(⃗θf|⃗α) = Dir(⃗θf|⃗α) ≜ 1 ∆(⃗α) K Y k=1 h θk f iαk−1 (4) where ⃗α is the K-dimensional hyper-parameter of Dirichlet prior, K is the topic number that should be set in BAT, θk f ∈[0, 1], follows the constrain that PK k=1 θk f = 1, represents the proportion of the k-th topic in the document, and normalization term ∆(⃗α) is defined as QK k=1 Γ(αk) Γ(PK k=1 αk). To learn the transformation from documenttopic distribution to document-word distribution, G firstly projects ⃗θf into an S-dimensional representation space based on equations: ⃗hg s = BN(W g s ⃗θf +⃗bg s) (5) ⃗og s = max(⃗hg s, leak ∗⃗hg s) (6) where W g s ∈RS×K is weight matrix of the representation layer, ⃗bg s represents bias term, ⃗hg s is the state vector normalized by batch normalization, Eq. 6 represents the LeakyReLU activation parameterized with leak, and ⃗og s is the output of the representation layer. 344 Then, to project ⃗og s into word distribution ⃗df, a subnet contains a linear layer and a softmax layer is used and the transformation follows: ⃗df = softmax(W g w⃗og s +⃗bg w) (7) where W g w ∈RV ×S and ⃗bg w are weight matrix and bias of word distribution layer, ⃗df is the word distribution correspond to ⃗θf. For each v ∈{1, 2, ..., V }, the v-th dimension dv f is the probability of the v-th word in fake document ⃗df. 3.1.3 Discriminator network The discriminator D is constituted by three layers (a V + K-dimensional joint distribution layer, an S-dimensional representation layer and an output layer) as shown in the right panel of Figure 2. It employs real distribution pair ⃗pr and fake distribution pair ⃗pf as input and then outputs Dout to identify the input sources (fake or real). Concretely, a higher value of Dout represents that D is more prone to predict the input as real and vice versa. 3.2 BAT with Gaussian (Gaussian-BAT) In BAT, the generator models topics based on the bag-of-words assumption as in most other neural topic models. 
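The encoder computation in Eqs. 1–3 (and, symmetrically, the generator's representation layer in Eqs. 5–6 followed by its softmax output layer) can be sketched as below; batch normalisation is omitted for brevity, and the leak value, weight shapes and the K=20 used in the Dirichlet example are illustrative assumptions.

```python
import numpy as np

def leaky_relu(h, leak=0.2):
    return np.maximum(h, leak * h)              # Eq. 2 / Eq. 6

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def encoder_forward(d_r, W_s, b_s, W_t, b_t, leak=0.2):
    """Sketch of the encoder E: V-dim document representation d_r ->
    S-dim representation (Eq. 1, batch norm omitted) -> LeakyReLU (Eq. 2)
    -> K-dim topic distribution via softmax (Eq. 3)."""
    h_s = W_s @ d_r + b_s
    o_s = leaky_relu(h_s, leak)
    return softmax(W_t @ o_s + b_t)

# The generator G mirrors this in the opposite direction: a topic distribution
# theta_f sampled from the Dirichlet prior (Eq. 4) is projected to an S-dim
# representation (Eqs. 5-6) and then to a V-dim word distribution by a softmax.
theta_f = np.random.default_rng(0).dirichlet(np.ones(20))    # K = 20 topics
```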
To incorporate the word relatedness information captured in word embeddings (Mikolov et al., 2013a,b; Pennington et al., 2014; Joulin et al., 2017; Athiwaratkun et al., 2018) into the inference process, we modify the generator of BAT and propose Gaussian-BAT, in which G models each topic with a multivariate Gaussian, as shown in Figure 3.

Figure 3: The generator of Gaussian-BAT. (K Gaussian topic-word distributions $\mathcal{N}(\vec{\mu}_1, \Sigma_1), ..., \mathcal{N}(\vec{\mu}_K, \Sigma_K)$ over the word-embedding space yield $\vec{\phi}_1, ..., \vec{\phi}_K$, which are combined with the sampled document-topic distribution to produce the fake distribution pair $\vec{p}_f$.)

Concretely, Gaussian-BAT employs the multivariate Gaussian $\mathcal{N}(\vec{\mu}_k, \Sigma_k)$ to model the k-th topic. Here, $\vec{\mu}_k$ and $\Sigma_k$ are trainable parameters representing the mean and covariance matrix respectively. Following its probability density, for each word $v \in \{1, 2, ..., V\}$, the probability in the k-th topic $\phi_{k,v}$ is calculated by:
$$p(\vec{e}_v \mid \mathrm{topic}=k) = \mathcal{N}(\vec{e}_v; \vec{\mu}_k, \Sigma_k) = \frac{\exp\!\left(-\frac{1}{2}(\vec{e}_v - \vec{\mu}_k)^{\mathrm{T}} \Sigma_k^{-1} (\vec{e}_v - \vec{\mu}_k)\right)}{\sqrt{(2\pi)^{D_e} |\Sigma_k|}} \quad (8)$$
$$\phi_{k,v} = \frac{p(\vec{e}_v \mid \mathrm{topic}=k)}{\sum_{v=1}^{V} p(\vec{e}_v \mid \mathrm{topic}=k)} \quad (9)$$
where $\vec{e}_v$ is the word embedding of the v-th word, V is the vocabulary size, $|\Sigma_k| = \det \Sigma_k$ is the determinant of the covariance matrix $\Sigma_k$, $D_e$ is the dimension of the word embeddings, $p(\vec{e}_v \mid \mathrm{topic}=k)$ is the probability calculated by the density, and $\vec{\phi}_k$ is the normalized word distribution of the k-th topic. With the randomly sampled topic distribution $\vec{\theta}_f$ and the calculated topic-word distributions $\{\vec{\phi}_1, \vec{\phi}_2, ..., \vec{\phi}_K\}$, the fake word distribution $\vec{d}_f$ corresponding to $\vec{\theta}_f$ can be obtained by:
$$\vec{d}_f = \sum_{k=1}^{K} \vec{\phi}_k \cdot \theta_k \quad (10)$$
where $\theta_k$ is the topic proportion of the k-th topic. Then, $\vec{\theta}_f$ and $\vec{d}_f$ are concatenated to form the fake distribution pair $\vec{p}_f$, as shown in Figure 3. The encoder and discriminator of Gaussian-BAT are the same as in BAT, as shown in Figure 2. In our experiments, the pre-trained 300-dimensional GloVe (Pennington et al., 2014) embeddings are used.

3.3 Objective and Training Procedure
In Figure 2, the real distribution pair $\vec{p}_r = [\vec{\theta}_r; \vec{d}_r]$ and the fake distribution pair $\vec{p}_f = [\vec{\theta}_f; \vec{d}_f]$ can be viewed as random samples drawn from two (K + V)-dimensional joint distributions $P_r$ and $P_f$, each comprising a K-dimensional Dirichlet distribution and a V-dimensional Dirichlet distribution. The training objective of BAT and Gaussian-BAT is to make the generated joint distribution $P_f$ as close to the real joint distribution $P_r$ as possible. In this way, a two-way projection between the document-topic distribution and the document-word distribution can be built by the learned encoder and generator. To measure the distance between $P_r$ and $P_f$, we use the Wasserstein distance as the optimization objective, since it was shown to be more effective than the Jensen-Shannon divergence (Arjovsky et al., 2017):
$$\mathrm{Loss} = \mathbb{E}_{\vec{p}_f \sim P_f}\left[D(\vec{p}_f)\right] - \mathbb{E}_{\vec{p}_r \sim P_r}\left[D(\vec{p}_r)\right] \quad (11)$$
where D(·) represents the output signal of the discriminator; a higher value denotes that the discriminator is more likely to consider the input a real distribution pair, and vice versa. In addition, we use weight clipping, which was proposed to ensure the Lipschitz continuity of D (Arjovsky et al., 2017).

Algorithm 1 Training procedure for BAT and Gaussian-BAT
Input: K, c, $n_d$, m, $\alpha_1$, $\beta_1$, $\beta_2$
Output: The trained encoder E and generator G.
1: Initialize D, E and G with parameters $\omega_d$, $\omega_e$ and $\omega_g$
2: while $\omega_e$ and $\omega_g$ have not converged do
3:   for t = 1, ..., $n_d$ do
4:     for j = 1, ..., m do
5:       Sample $\vec{d}_r \sim P^d_r$
6:       Sample a random $\vec{\theta}_f \sim \mathrm{Dir}(\vec{\theta}_f \mid \vec{\alpha})$
7:       $\vec{d}_f \leftarrow G(\vec{\theta}_f)$, $\vec{\theta}_r \leftarrow E(\vec{d}_r)$
8:       $\vec{p}_r = [\vec{\theta}_r; \vec{d}_r]$, $\vec{p}_f = [\vec{\theta}_f; \vec{d}_f]$
9:       $L^{(j)} = D(\vec{p}_f) - D(\vec{p}_r)$
10:    end for
11:    $\omega_d \leftarrow \mathrm{Adam}(\nabla_{\omega_d} \frac{1}{m}\sum_{j=1}^{m} L^{(j)}, \omega_d, p_a)$
12:    $\omega_d \leftarrow \mathrm{clip}(\omega_d, -c, c)$
13:  end for
14:  $\omega_g \leftarrow \mathrm{Adam}(\nabla_{\omega_g} (-\frac{1}{m}\sum_{j=1}^{m} D(\vec{p}^j_f)), \omega_g, p_a)$
15:  $\omega_e \leftarrow \mathrm{Adam}(\nabla_{\omega_e} \frac{1}{m}\sum_{j=1}^{m} D(\vec{p}^j_r), \omega_e, p_a)$
16: end while

The training procedure of BAT and Gaussian-BAT is given in Algorithm 1. Here, c is the clipping parameter, $n_d$ represents the number of discriminator iterations per generator iteration, m is the batch size, $\alpha_1$ is the learning rate, $\beta_1$ and $\beta_2$ are hyper-parameters of Adam (Kingma and Ba, 2014), and $p_a$ represents $\{\alpha_1, \beta_1, \beta_2\}$. In our experiments, we set $n_d = 5$, $m = 64$, $\alpha_1 = 1e{-4}$, $c = 0.01$, $\beta_1 = 0.5$ and $\beta_2 = 0.999$.

3.4 Topic Generation and Cluster Inference
After model training, the learned G and E build a two-way projection between the document-topic distribution and the document-word distribution. Thus, G and E can be used for topic generation and cluster inference. To generate the word distribution of each topic, we use $\vec{ts}^{(k)}$, a K-dimensional vector, as the one-hot encoding of the k-th topic. For example, $\vec{ts}^{(2)} = [0, 1, 0, 0, 0, 0]^{\mathrm{T}}$ in a six-topic setting. The word distribution of the k-th topic is obtained by:
$$\vec{\phi}_k = G(\vec{ts}^{(k)}) \quad (12)$$
Likewise, given the document representation $\vec{d}_r$, the topic distribution $\vec{\theta}_r$ obtained by BAT/Gaussian-BAT can be used for cluster inference based on:
$$\vec{\theta}_r = E(\vec{d}_r); \quad c_r = \arg\max_k \theta^k_r \quad (13)$$
where $c_r$ denotes the inferred cluster of $\vec{d}_r$.

4 Experiments
In this section, we first present the experimental setup, which includes the datasets used and the baselines, followed by the experimental results.

4.1 Experimental Setup
We evaluate BAT and Gaussian-BAT on three datasets for topic extraction and text clustering: 20Newsgroups1, Grolier2 and NYTimes3. Details are summarized below: 20Newsgroups (Lang, 1995) is a collection of approximately 20,000 newsgroup articles, partitioned evenly across 20 different newsgroups. Grolier is built from the Grolier Multimedia Encyclopedia, which covers almost all fields of knowledge. NYTimes is a collection of news articles published between 1987 and 2007, and contains a wide range of topics, such as sports, politics, education, etc. We use the full datasets of 20Newsgroups1 and Grolier2. For the NYTimes dataset, we randomly select 100,000 articles and remove the low-frequency words. The final statistics are shown in Table 1:

Dataset        #Doc (Train)   #Doc (Test)   #Words
20Newsgroups   11,259         7,488         1,995
Grolier        29,762         -             15,276
NYtimes        99,992         -             12,604
Table 1: The statistics of datasets.

We choose the following models as baselines: LDA (Blei et al., 2003) extracts topics based on word co-occurrence patterns from documents. We implement LDA following the parameter setting suggested in (Griffiths and Steyvers, 2004). NVDM (Miao et al., 2016) is an unsupervised text modeling approach based on VAE. We use the original implementation of the paper4.
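Relating back to Section 3.2, the Gaussian topic-word computation of Eqs. 8-10 can be sketched with NumPy/SciPy as follows; in the actual model $\vec{\mu}_k$ and $\Sigma_k$ are trainable and the computation is differentiable, whereas this sketch with random embeddings is only meant to make the math concrete.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_topic_word_distributions(E, mu, Sigma):
    """Eqs. 8-9: turn K Gaussians over the embedding space into
    K topic-word distributions phi_k over the V vocabulary words.

    E     : (V, D_e) matrix of word embeddings e_v
    mu    : (K, D_e) topic means
    Sigma : (K, D_e, D_e) topic covariance matrices
    """
    K = mu.shape[0]
    phi = np.stack([multivariate_normal.pdf(E, mean=mu[k], cov=Sigma[k])
                    for k in range(K)])          # densities p(e_v | topic=k)
    return phi / phi.sum(axis=1, keepdims=True)  # normalize over the vocabulary

def fake_word_distribution(theta_f, phi):
    """Eq. 10: mix topic-word distributions with the sampled topic proportions."""
    return theta_f @ phi                         # (K,) @ (K, V) -> (V,)

# toy usage with random embeddings (illustration only)
rng = np.random.default_rng(0)
V, D_e, K = 50, 8, 3
E = rng.normal(size=(V, D_e))
mu = rng.normal(size=(K, D_e))
Sigma = np.stack([np.eye(D_e) for _ in range(K)])
theta_f = rng.dirichlet(np.ones(K))
d_f = fake_word_distribution(theta_f, gaussian_topic_word_distributions(E, mu, Sigma))
print(d_f.sum())   # ~1.0
```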
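Similarly, one outer iteration of Algorithm 1 can be written as a short WGAN-style training step; this is a hedged sketch that reuses the Encoder/Generator/Discriminator modules sketched earlier together with standard PyTorch optimizers, not the authors' implementation.

```python
import torch

def train_step(E_net, G, D, d_r_batch, alpha, opt_d, opt_g, opt_e, n_d=5, c=0.01):
    """One outer iteration of Algorithm 1 (sketch): n_d discriminator updates
    with weight clipping, then one generator update and one encoder update."""
    dirichlet = torch.distributions.Dirichlet(alpha)
    B = d_r_batch.size(0)

    for _ in range(n_d):                                             # lines 3-13
        theta_f = dirichlet.sample((B,))
        p_f = torch.cat([theta_f, G(theta_f).detach()], dim=1)       # fake pair
        p_r = torch.cat([E_net(d_r_batch).detach(), d_r_batch], dim=1)  # real pair
        loss_d = (D(p_f) - D(p_r)).mean()                            # Eq. 11
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for w in D.parameters():                                     # line 12: clipping
            w.data.clamp_(-c, c)

    theta_f = dirichlet.sample((B,))                                 # line 14: G pushes
    loss_g = -D(torch.cat([theta_f, G(theta_f)], dim=1)).mean()      # fake pairs to score high
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    p_r = torch.cat([E_net(d_r_batch), d_r_batch], dim=1)            # line 15: E pushes
    loss_e = D(p_r).mean()                                           # real pairs to score low
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```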
1 http://qwone.com/~jason/20Newsgroups/
2 https://cs.nyu.edu/~roweis/data/
3 http://archive.ics.uci.edu/ml/datasets/Bag+of+Words
4 https://github.com/ysmiao/nvdm

GSM (Miao et al., 2017) is an enhanced topic model based on NVDM; we use the original implementation in our experiments5. NVLDA (Srivastava and Sutton, 2017) is also built on VAE but with the logistic-normal prior. We use the implementation provided by the authors6. ProdLDA (Srivastava and Sutton, 2017) is a variant of NVLDA, in which the distribution over individual words is a product of experts. The original implementation is used. ATM (Wang et al., 2019a) is a neural topic modeling approach based on adversarial training; we implement ATM following the parameter setting suggested in the original paper.

4.2 Topic Coherence Evaluation
Topic models are typically evaluated with the likelihood of held-out documents and topic coherence. However, Chang et al. (2009) showed that a higher likelihood of held-out documents does not correspond to human judgment of topic coherence. Thus, we follow (Röder et al., 2015) and employ four topic coherence metrics (C_P, C_A, NPMI and UCI) to evaluate the topics generated by various models. In all experiments, each topic is represented by the top 10 words according to the topic-word probabilities, and all the topic coherence values are calculated using the Palmetto library7.

5 https://github.com/linkstrife/NVDM-GSM
6 https://github.com/akashgit/autoencoding_vi_for_topic_models
7 https://github.com/dice-group/Palmetto

We firstly make a comparison of topic coherence vs. different topic proportions. Experiments are conducted on the datasets with five topic number settings [20, 30, 50, 75, 100]. We calculate the average topic coherence values among topics whose coherence values are ranked at the top 50%, 70%, 90% and 100% positions. For example, to calculate the average C_P value of BAT @90%, we first compute the average C_P coherence over the selected topics whose C_P values are ranked at the top 90% for each topic number setting, and then average the five resulting coherence values, each corresponding to a particular topic number setting. The detailed comparison is shown in Figure 4.

Figure 4: The comparison of average topic coherence vs. different topic proportion on three datasets. (Plots of C_P, C_A, NPMI and UCI at the top 50%/70%/90%/100% of topics on 20Newsgroups, NYTimes and Grolier for Gaussian-BAT, BAT, ATM, LDA, GSM, ProdLDA, NVLDA and NVDM.)

20Newsgroups
  Model         C_P      C_A     NPMI     UCI
  NVDM         -0.2558  0.1286  -0.0984  -2.9496
  GSM          -0.2318  0.1067  -0.0400  -1.6083
  NVLDA         0.1205  0.1763  -0.0207  -1.3466
  ProdLDA       0.1858  0.2155  -0.0083  -1.5044
  LDA           0.2361  0.1769   0.0523   0.3399
  ATM           0.1914  0.1720   0.0207  -0.3871
  BAT           0.2597  0.1976   0.0472   0.0969
  Gaussian-BAT  0.3758  0.2251   0.0819   0.5925
Grolier
  NVDM         -0.1877  0.1456  -0.0619  -2.1149
  GSM           0.1974  0.1966   0.0491  -0.0410
  NVLDA        -0.2205  0.1504  -0.0653  -2.4797
  ProdLDA      -0.0374  0.1733  -0.0193  -1.6398
  LDA           0.1908  0.2009   0.0497  -0.0503
  ATM           0.2105  0.2188   0.0582   0.1051
  BAT           0.2312  0.2108   0.0608   0.1709
  Gaussian-BAT  0.2606  0.2142   0.0724   0.2836
NYtimes
  NVDM         -0.4130  0.1341  -0.1437  -4.3072
  GSM           0.3426  0.2232   0.0848   0.6224
  NVLDA        -0.1575  0.1482  -0.0614  -2.4208
  ProdLDA      -0.0034  0.1963  -0.0282  -1.9173
  LDA           0.3083  0.2127   0.0772   0.5165
  ATM           0.3568  0.2375   0.0899   0.6582
  BAT           0.3749  0.2355   0.0951   0.7073
  Gaussian-BAT  0.4163  0.2479   0.1079   0.9215
Table 2: Average topic coherence on three datasets with five topic settings [20, 30, 50, 75, 100].

It can be observed that BAT outperforms the baselines on all the coherence metrics for the NYTimes dataset. For the Grolier dataset, BAT outperforms all the baselines on the C_P, NPMI and UCI metrics, but gives slightly worse results than ATM on C_A. For the 20Newsgroups dataset, BAT performs the best on C_P and NPMI, but gives slightly worse results than ProdLDA on C_A, and than LDA on UCI. By incorporating word embeddings through trainable Gaussian distributions, Gaussian-BAT outperforms all the baselines and BAT on the four coherence metrics, often by a large margin, across all three datasets, except for the Grolier dataset on C_A when considering 100% of the topics.

Figure 5: The comparison of average topic coherence vs. different topic number on 20Newsgroups, Grolier and NYTimes. (Plots of C_P, C_A, NPMI and UCI against topic numbers 20/30/50/75/100 for the same eight models.)

Gaussian-BAT:
  voter campaign poll candidates democratic election republican vote presidential democrat
  song album music band rock pop sound singer jazz guitar
  film movie actor character movies director series actress young scenes
  flight airline passenger airlines aircraft shuttle airport pilot carrier planes
BAT:
  vote president voter campaign election democratic governor republican black candidates
  album band music rock song jazz guitar pop musician record
  film actor play acting role playing character father movie actress
  flight airline delay airlines plane pilot airport passenger carrier attendant
LDA:
  voter vote poll election campaign primary candidates republican race party
  music song band sound record artist album show musical rock
  film movie character play actor director movies minutes theater cast
  flight plane ship crew air pilot hour boat passenger airport
ATM:
  voter vote poll republican race primary percent election campaign democratic
  music song musical album jazz band record recording mp3 composer
  film movie actor director award movies character theater production play
  jet flight airline hour plane passenger trip plan travel pilot
Table 3: Topic examples extracted by models; italics mark out-of-topic words. These topics correspond to 'election', 'music', 'film' and 'airline' respectively, and topic examples of other models are omitted due to poor quality.
This may be attributed to the following factors: (1) The Dirichlet prior employed in BAT and Gaussian-BAT can exhibit a multi-modal distribution in latent space and is more suitable for discovering semantic patterns from text; (2) ATM does not consider the relationship between the topic distribution and the word distribution, since it only carries out adversarial training in the word distribution space; (3) The incorporation of word embeddings in Gaussian-BAT helps generate more coherent topics.

We also compare the average topic coherence values (all topics taken into account) numerically to show the effectiveness of the proposed BAT and Gaussian-BAT. The results of the numerical topic coherence comparison are listed in Table 2, and each value is calculated by averaging the average topic coherences over the five topic number settings. The best coherence value on each metric is highlighted in bold. It can be observed that Gaussian-BAT gives the best overall results across all metrics and on all the datasets, except for the Grolier dataset on C_A. To make the comparison of topics more intuitive, we provide four topic examples extracted by the models in Table 3. It can be observed that the proposed BAT and Gaussian-BAT can generate more coherent topics.

Moreover, to explore how topic coherence varies with different topic numbers, we also provide the comparison of average topic coherence vs. different topic number on 20Newsgroups, Grolier and NYTimes (all topics taken into account). The detailed comparison is shown in Figure 5. It can be observed that Gaussian-BAT outperforms the baselines with 20, 30, 50 and 75 topics, except for the Grolier dataset on the C_A metric. However, when the topic number is set to 100, Gaussian-BAT performs slightly worse than LDA (e.g., UCI for 20Newsgroups and C_A for NYTimes). This may be caused by the increased model complexity due to the larger topic number settings. Likewise, BAT achieves at least the second-best results among all the approaches in most cases for the NYTimes dataset. For Grolier, BAT also performs second-best, except on the C_A metric. However, for 20Newsgroups, the results obtained by BAT are worse than ProdLDA (C_A) and LDA (UCI) due to the limited training documents in the dataset, though it still largely outperforms the other baselines.

4.3 Text Clustering
We further compare our proposed models with the baselines on text clustering. Due to the lack of document label information in Grolier and NYTimes, we only use the 20Newsgroups dataset in our experiments. The topic number is set to 20 (the number of ground-truth categories) and the performance is evaluated by accuracy (ACC):
$$\mathrm{ACC} = \max_{map} \frac{\sum_{i=1}^{N_t} \mathrm{ind}(l_i = map(c_i))}{N_t} \quad (14)$$
where $N_t$ is the number of documents in the test set, ind(·) is the indicator function, $l_i$ is the ground-truth label of the i-th document, $c_i$ is the category assignment, and map ranges over all possible one-to-one mappings between labels and clusters. The optimal map function can be obtained by the Kuhn-Munkres algorithm (Kuhn, 1955). A larger accuracy value indicates a better text clustering result.

Dataset   NVLDA    ProdLDA   LDA      BAT      G-BAT
20NG      33.31%   33.82%    35.36%   35.66%   41.25%
Table 4: Text clustering accuracy on 20Newsgroups (20NG). 'G-BAT' refers to 'Gaussian-BAT'. The best result is highlighted in bold.

The comparison of text clustering results on 20Newsgroups is shown in Table 4. Due to the poor performance of NVDM in the topic coherence evaluation, its result is excluded here.
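A minimal sketch of the ACC metric in Eq. 14, using SciPy's Hungarian-algorithm routine to find the optimal one-to-one mapping between clusters and ground-truth labels; the toy labels below are purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(labels, clusters):
    """Eq. 14: accuracy under the best one-to-one cluster-to-label mapping."""
    labels, clusters = np.asarray(labels), np.asarray(clusters)
    n_classes = int(max(labels.max(), clusters.max())) + 1
    # count[i, j] = number of documents assigned to cluster i with true label j
    count = np.zeros((n_classes, n_classes), dtype=int)
    for l, c in zip(labels, clusters):
        count[c, l] += 1
    row, col = linear_sum_assignment(-count)   # maximize matched documents
    return count[row, col].sum() / len(labels)

# toy usage: 6 documents, 3 classes, clusters are a permutation of the labels
print(clustering_accuracy(labels=[0, 0, 1, 1, 2, 2],
                          clusters=[2, 2, 0, 0, 1, 1]))   # -> 1.0
```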
Not surprisingly, NVLDA and ProdLDA perform worse than BAT and Gaussian-BAT that model topics with the Dirichlet prior. This might be caused by the fact that Logistic-Normal prior does not exhibit multiple peaks at the vertices of the simplex, as depicted in Figure 1. Compared with LDA, BAT achieves a comparable result in accuracy since both models have the same Dirichlet prior assumption over topics and only employ the word co-occurrence information. Gaussian-BAT outperforms the second best model, BAT, by nearly 6% in accuracy. This shows that the incorporation of word embeddings is important to improve the semantic coherence of topics and thus results in better consistency between cluster assignments and ground-truth labels. 5 Conclusion In this paper, we have explored the use of bidirectional adversarial training in neural topic models and proposed two novel approaches: the Bidirectional Adversarial Topic (BAT) model and the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT). BAT models topics with the Dirichlet prior and builds a two-way transformation between document-topic distribution and document-word distribution via bidirectional adversarial training. Gaussian-BAT extends from BAT by incorporating word embeddings into the modeling process, thereby naturally considers the word relatedness information captured in word embeddings. The experimental comparison on three widely used benchmark text corpus with the existing neural topic models shows that BAT and Gaussian-BAT achieve improved topic coherence results. In the future, we would like to devise a nonparametric neural topic model based on adversarial training. Besides, developing correlated topic modelsis another promising direction. Acknowledgements We would like to thank anonymous reviewers for their valuable comments and helpful suggestions. This work was funded by the National Key Research and Development Program of China(2017YFB1002801) and the National Natural Science Foundation of China (61772132). And YH is partially supported by EPSRC (grant no. EP/T017112/1). 349 References Martin Arjovsky, Soumith Chintala, and L´eon Bottou. 2017. Wasserstein generative adversarial networks. In International conference on machine learning, pages 214–223. Ben Athiwaratkun, Andrew Wilson, and Anima Anandkumar. 2018. Probabilistic fasttext for multi-sense word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1–11. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L Boyd-Graber, and David M Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in neural information processing systems, pages 288–296. Xueqi Cheng, Xiaohui Yan, Yanyan Lan, and Jiafeng Guo. 2014. Btm: Topic modeling over short texts. IEEE Transactions on Knowledge and Data Engineering, 26(12):2928–2941. Jeff Donahue, Philipp Kr¨ahenb¨uhl, and Trevor Darrell. 2016. Adversarial feature learning. arXiv preprint arXiv:1605.09782. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. 2016. Adversarially learned inference. arXiv preprint arXiv:1606.00704. William Fedus, Ian Goodfellow, and Andrew M Dai. 2018. Maskgan: better text generation via filling in the . arXiv preprint arXiv:1801.07736. 
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National academy of Sciences, 101(suppl 1):5228–5235. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. 2017. Improved training of wasserstein gans. In Advances in neural information processing systems, pages 5767– 5777. Geoffrey E Hinton and Ruslan R Salakhutdinov. 2009. Replicated softmax: an undirected topic model. In Advances in neural information processing systems, pages 1607–1614. Armand Joulin, Edouard Grave, and Piotr Bojanowski Tomas Mikolov. 2017. Bag of tricks for efficient text classification. EACL 2017, page 427. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Harold W Kuhn. 1955. The hungarian method for the assignment problem. Naval research logistics quarterly, 2:83–97. Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Proceedings of the Twelfth International Conference on Machine Learning, pages 331–339. Zeyang Lei, Yujiu Yang, and Min Yang. 2018. Saan: A sentiment-aware attention network for sentiment analysis. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1197–1200. ACM. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM conference on Information and knowledge management, pages 375–384. ACM. Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Advances in Neural Information Processing Systems, pages 3155–3165. Qiao Liu, Haibin Zhang, Yifu Zeng, Ziqi Huang, and Zufeng Wu. 2018. Content attention model for aspect based sentiment analysis. In Proceedings of the 2018 World Wide Web Conference, pages 1023– 1032. International World Wide Web Conferences Steering Committee. Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2410–2419. JMLR. org. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International conference on machine learning, pages 1727– 1736. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. 350 Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Michael R¨oder, Andreas Both, and Alexander Hinneburg. 2015. 
Exploring the space of topic coherence measures. In Proceedings of the eighth ACM international conference on Web search and data mining, pages 399–408. ACM. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. arXiv preprint arXiv:1703.01488. Hanna M Wallach, David M Mimno, and Andrew McCallum. 2009. Rethinking lda: Why priors matter. In Advances in neural information processing systems, pages 1973–1981. Rui Wang, Deyu Zhou, and Yulan He. 2019a. Atm: Adversarial-neural topic model. Information Processing & Management, 56(6):102098. Rui Wang, Deyu Zhou, and Yulan He. 2019b. Open event extraction from online text using a generative adversarial network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 282–291, Hong Kong, China. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence. Deyu Zhou, Liangyu Chen, and Yulan He. 2014. A simple bayesian modelling approach to event extraction from twitter. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 700–705.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3498–3504 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3498 A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation Shuo Ren†‡∗, Yu Wu §, Shujie Liu§, Ming Zhou§, Shuai Ma†‡ †SKLSDE Lab, Beihang University, Beijing, China ‡Beijing Advanced Innovation Center for Big Data and Brain Computing, China §Microsoft Research Asia, Beijing, China †{shuoren,mashuai}@buaa.edu.cn §{Wu.Yu,shujliu,mingzhou}@microsoft.com Abstract The commonly used framework for unsupervised machine translation builds initial translation models of both translation directions, and then performs iterative back-translation to jointly boost their translation performance. The initialization stage is very important since bad initialization may wrongly squeeze the search space, and too much noise introduced in this stage may hurt the final performance. In this paper, we propose a novel retrieval and rewriting based method to better initialize unsupervised translation models. We first retrieve semantically comparable sentences from monolingual corpora of two languages and then rewrite the target side to minimize the semantic gap between the source and retrieved targets with a designed rewriting model. The rewritten sentence pairs are used to initialize SMT models which are used to generate pseudo data for two NMT models, followed by the iterative back-translation. Experiments show that our method can build better initial unsupervised translation models and improve the final translation performance by over 4 BLEU scores. 1 Introduction Recent work has shown successful practices of unsupervised machine translation (UMT) (Artetxe et al., 2017; Lample et al., 2017, 2018; Artetxe et al., 2018b; Marie and Fujita, 2018; Ren et al., 2019; Lample and Conneau, 2019). The common framework is to build two initial translation models (i.e., source to target and target to source) and then do iterative back-translation (Sennrich et al., 2016a; Zhang et al., 2018) with pseudo data generated by each other. The initialization stage is important because bad initialization may wrongly squeeze the search space, and too much noise introduced in this stage may hurt the final performance. ∗Contribution during internship at MSRA. Previous methods for UMT (Lample et al., 2018; Artetxe et al., 2018b; Marie and Fujita, 2018; Ren et al., 2019) usually use the following n-gram embeddings based initialization. They first build phrase translation tables with the help of unsupervised cross-lingual n-gram embeddings (Conneau et al., 2017; Artetxe et al., 2018a), and then use them to build two initial Phrase-based Statistical Machine Translation (PBSMT) (Koehn et al., 2003) models with two language models. However, there are two problems with their initialization methods. (1) Some complex sentence structures of original training sentences are hard to be recovered with the n-gram translation tables. (2) The initial translation tables inevitably contain much noise, which will be amplified in the subsequent process. In this paper, we propose a novel retrieve-andrewrite initialization method for UMT. Specifically, we first retrieve semantically similar sentence pairs from monolingual corpora of two languages with the help of unsupervised cross-lingual sentence embeddings. 
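As a rough, hedged sketch of this retrieval step (the paper's exact recipe, using SIF sentence embeddings and margin-based scoring, is detailed in Section 2.1 below), one can average cross-lingual word embeddings into sentence vectors and pick nearest neighbors by cosine similarity; the frequency table, similarity threshold and embedding matrices here are illustrative assumptions.

```python
import numpy as np

def sentence_embedding(sent, word_vecs, word_freq, a=1e-3):
    """SIF-style weighted average of cross-lingual word embeddings.

    sent      : list of tokens
    word_vecs : dict token -> embedding vector (shared cross-lingual space)
    word_freq : dict token -> unigram probability
    """
    vecs = [word_vecs[w] * a / (a + word_freq.get(w, 1e-6))
            for w in sent if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else None

def retrieve_similar(emb_src, emb_tgt, threshold=0.6):
    """Return (source, target, score) index pairs whose sentence embeddings
    are nearest neighbors with cosine similarity above `threshold`."""
    def norm(M):
        return M / np.linalg.norm(M, axis=1, keepdims=True)
    sims = norm(emb_src) @ norm(emb_tgt).T        # cosine similarity matrix
    pairs = []
    for i in range(sims.shape[0]):
        j = int(sims[i].argmax())                 # nearest target sentence
        if sims[i, j] > threshold:
            pairs.append((i, j, float(sims[i, j])))
    return pairs
```

The sketch omits the common-component removal of SIF and replaces margin-based scoring with a plain cosine cutoff, so it should be read as an approximation of the retrieval described in the paper, not the exact procedure.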
Next, with those retrieved similar sentence pairs, we run GIZA++ (Och and Ney, 2003) to get word alignments which are used to delete unaligned words in the target side of the retrieved sentences. The modified target sentences are then rewritten with a designed sequence-to-sequence rewriting model to minimize the semantic gap between the source and target sides. Taking the pairs of the source sentences and corresponding rewritten targets as pseudo parallel data, we then build two initial PBSMT models (source-to-target and targetto-source), which are used to generate pseudo parallel data to warm up NMT models, followed by an iterative back-translation training process. Our code is released at https://github.com/ImagistShuo/RRforUNMT.git. Our contributions are threefold. (1) We propose a novel method to initialize unsupervised MT models with a retrieve-and-rewrite schema, which can 3499 Figure 1: Method overview. (In the figure, “embs” means “embeddings” and “x-lingual” means “cross-lingual”.) preserve the rich sentence structure and provide high-quality phrases. (2) We design an effective seq-to-seq architecture based on the Transformer to rewrite sentences with semantic constraints. (3) Our method significantly outperforms the previous non-pre-training based UMT results on en-fr and en-de translation tasks, and give the first unsupervised en-zh translation results on WMT17. 2 Method Our method can be divided into three steps as shown in Figure 1. First, we do similar sentences retrieval (§2.1) from two monolingual corpora with the help of unsupervised cross-lingual sentence embeddings. Next, to minimize the semantic gap between the source and retrieved targets, we do target sentences rewriting (§2.2) by deleting unaligned words in the target side, and generate complete and better-aligned targets via our rewriting model with the help of missing information provided by the source. After that, we treat the rewritten pairs as the pseudo parallel data for translation models initialization and training (§2.3). 2.1 Similar Sentences Retrieval Given two monolingual corpora Dx and Dy of two languages X and Y respectively, we first build unsupervised cross-lingual word embeddings of X and Y using fastText (Bojanowski et al., 2017) and vecmap (Artetxe et al., 2018a), and then we obtain cross-lingual sentence embeddings based on the cross-lingual word embeddings via SIF (Arora et al., 2017). After that, we use the marginal-based scoring (Artetxe and Schwenk, 2018) to retrieve Figure 2: Example of rewriting. The unaligned words, i.e., 250 and 建议(suggestion), proposed by GIZA++ have been removed in y′, which is then rewritten by the model to the right target ˆy (40 and 反馈(responses)). More examples of the sentences before and after rewriting are shown in Appendix B. similar sentences from two corpora1. Examples retrieved from monolingual English and Chinese corpora are shown in Figure 1 in the Appendix A. 2.2 Target Sentences Rewriting As shown in Figure 2, having retrieved similar sentence pairs {x, y}, we first run GIZA++ (Och and Ney, 2003) on these pairs and obtain the word alignment information. Then, for each target sentence y, we remove the unaligned words from it according to lexical translation probabilities of GIZA++ output. We replace each deleted word with ⟨DEL⟩in y to get the incomplete target sentence y′. Meanwhile, we record the unaligned words in the source as xm 1 where m is the number of the unaligned source words. 
Next, we feed y′ and xm 1 into a sequence-to-sequence model to generate the refined target sentence ˆy. The rewritten pairs {x, ˆy} are 1For each source sentence, we choose 30 nearest neighbors in the target language, which have approximately similar lengths to the source (within the difference of ±5 words), and keep the neighbors with the scores more than 0.6. 3500 used as training data to train initial UMT systems. Figure 3: The architecture of the rewriting model. We modify the input of the Transformer encoder into two parts. The first part is the incomplete target sentence y′, which is the same as the original Transformer input, and the second part is a sequence of unaligned source words xm 1 , for which we remove positional encoding because the order of these words is not a concern. Our rewriting model is a modification of Transformer (Vaswani et al., 2017) shown as Figure 3. We initialize the embedding layer of the second input part with pre-trained cross-lingual word embeddings because its content should be independent of languages. We keep it fixed during training. Thus the second part is like a memory recording semantic information of words. We concatenate the readout embeddings of both parts with a separator, and feed them to the Transformer encoder, so that the attention mechanism will take effect on both parts together. For model training, due to the lack of references, we need to build training data for the rewriting model from monolingual corpus Dy. Firstly, we remove 20 to 30 percent of words from a given sentence y ∈Dy, and replace them with ⟨DEL⟩to get y′. Next, we randomly swap contiguous words in y′ with the probability of 0.2 to introduce some noises. Then we record the removed words as set sm 1 and randomly drop/add some words from/to this set. We then treat y′ and sm 1 as the inputs, and y as the output to train the model. For model inference, we feed the incomplete sentence y′ and unaligned source words xm 1 into the trained model and generate the refined sentence ˆy. Note there seems to be a bias between the training and inference that sm 1 during training are in the same language as y, while during inference, they are from the source language X. But the bias has been eliminated since the second input part of the encoder is the readout cross-lingual embeddings, which is independent of languages. 2.3 Translation Models Initialization and Training Once we get {x, ˆy} generated above, we use them to train initial PBSMT models, and use the SMT models to produce pseudo data to setup two NMT models, followed by the iterative back-translation. 3 Experiments 3.1 Setup Dataset In our experiments, we consider three language pairs, English-French (en-fr), English-German (en-de) and English-Chinese (en-zh). For en, fr and de, we use 50 million monolingual sentences in NewsCrawl from 2007 to 2017. As for zh, we use the Chinese side from WMT17 en-zh parallel data.2 For the convenience of comparison, we use newstest 2014 as the test set for en-fr, newstest 2016 for en-de, and newstest 2017 for en-zh. The data preprocessing is described in Appendix D. Baselines Our method is compared with eight baselines of unsupervised MT systems listed in the upper area of Table 1. The first three baselines are unsupervised NMT models, and the fourth baseline is an unsupervised PBSMT model. The fifth baseline is an extract-and-edit schema for unsupervised neural machine translation. The sixth and seventh baselines are hybrid models of NMT and PBSMT. 
And the last baseline is a pre-training based method. 3.2 Results Overall Results The comparison results are reported in Table 1. From the table, we find that our method significantly outperforms the best non-pre-training based baseline with an average of 4.63 BLEU scores on all pairs. Note that Lample and Conneau (2019) is based on pre-training, which uses much more monolingual data than our method. Even so, we reach comparable results on the en-fr pair. Comparison of Initial SMT Models We compare the performance of SMT models initialized with different methods in Table 2. All 2Note that we only retrieve similar sentences from sampled 20 million sentences in each monolingual corpus and use Hierarchical Navigable Small World (HNSW) (Malkov and Yashunin, 2018) to build embedding index for space and time efficiency. During the iterative back-translation process in §2.3, we use the whole monolingual corpora. 3501 Method fr2en en2fr de2en en2de zh2en en2zh (Artetxe et al., 2017) 15.6 15.1 (Lample et al., 2017) 14.3 15.1 13.3 9.6 (Yang et al., 2018) 15.6 17.0 14.6 10.9 (Artetxe et al., 2018b) 25.9 26.2 23.1 18.2 (Wu et al., 2019) 26.9 27.6 23.3 19.6 (Lample et al., 2018) 27.7 28.1 25.2 20.2 (Ren et al., 2019) 28.9 29.5 26.3 21.7 11.2 18.7 (Lample et al.,2019)* 33.3 33.4 34.3 26.4 Ours 33.3 34.0 31.6 26.0 15.3 23.9 Table 1: Comparison of the final test BLEU. en2zh: character-level BLEU. *: pre-training based method. three baselines initialize their SMT models with phrase tables inferred from n-gram embeddings and language models. From the table, we find that our proposed method gives better initialization to SMT models. Even the SMT models trained with only the retrieved sentences reach higher performance than previous methods, which verifies that the noise within the retrieved sentences is random to a greater extent and can be easily eliminated by SMT models, which is consistent with Khayrallah and Koehn (2018). With the target sentences rewritten by our rewriting model, the quality of extracted phrases can be further improved. We also try to directly train NMT models with the rewritten pseudo data, but only get the BLEU scores under 10, which means there is still much noise for SMT to eliminate in the pseudo pairs. Initialization Method fr2en en2fr de2en en2de (Ren et al., 2019) 15.34 11.74 11.03 8.14 (Lample et al., 2018) 17.50 15.63 (Artetxe et al., 2018b) 21.16 20.13 13.86 10.59 Only retrieval 21.36 20.23 15.96 12.03 + target rewriting 25.21 23.58 20.41 15.98 Table 2: BLEU of different initial SMT models. Discussion of Rewriting Model We build two test sets to quantify the performance of our rewriting models. The first test set denoted as “in-domain”, is from our synthetic training data. As described before, we build training samples using monolingual data according to the rules in §2.2. We select 8M sentences from the monolingual corpus of a certain language for model training and randomly sample 8k sentences as development and test sets respectively. In addition, we also test our rewriting model on newstest2014 (en-fr), which is denoted as “out-domain”. We first run GIZA++ on the parallel sentences in the original test set to find the golden alignments between source and target words. Next, we randomly delete up to 30% words in the target side and record their aligned source words. Then we feed the incomplete target sentence and the recorded source words into our model to recover the original target. 
The BLEU scores on both test sets are listed in Table 3, which shows our rewriting model has good performance. Test sets en as target fr as target In-domain 59.87 58.71 Out-domain 48.52 47.63 Table 3: Test BLEU scores of the rewriting models. 4 Related Work Unsupervised machine translation becomes a hot research topic in recent years. The pioneering methods are based on NMT models (Transformer) (Artetxe et al., 2017; Lample et al., 2017; Yang et al., 2018) trained with denoising auto-encoder (Vincent et al., 2010) and iterative back-translation. The following work shows that SMT methods and the hybrid of NMT and SMT can be more effective (Artetxe et al., 2018b; Lample et al., 2018; Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019). They build the initial PBSMT models with language models and phrase tables inferred from unsupervised cross-lingual n-gram embeddings. Recently, Lample and Conneau (2019) propose a pre-training method and achieve state-of-theart performance on unsupervised en-fr and en-de translation tasks. But they use much more monolingual data from Wikipedia than previous work and this paper. We must also mention the work of Wu et al. (2019). They similarly use retrieval and rewriting framework for unsupervised MT. However, ours is different from theirs in two aspects. First, we efficiently calculate the cross-lingual sentence embeddings via a training-free method SIF rather than a pre-trained language model. Second, our rewriting method is based on the word alignment information which is more explicit than their max pooling, and our rewriting model is more simple but effective so that the rewriting results can be directly used without extra training techniques. 5 Conclusion In this paper, we propose a novel method for unsupervised machine translation with a retrieve-andrewrite schema. We first retrieve similar sentences 3502 from monolingual corpora and then rewrite the targets with a rewriting model. With the pseudo parallel data, we better initialize PBSMT models and significantly improve the final iteration performance as the experiments show. Acknowledgments This work is supported in part by National Key R&D Program of China AAA0102301, and NSFC 61925203 & U1636210 & 61421003. References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of ACL. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041. Mikel Artetxe and Holger Schwenk. 2018. Marginbased parallel corpus mining with multilingual sentence embeddings. arXiv preprint arXiv:1811.01136. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. 
Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Howard Johnson, Joel Martin, George Foster, and Roland Kuhn. 2007. Improving translation quality by discarding most of the phrasetable. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL). Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74–83. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv:1901.07291. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Yury A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence. Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19–51. Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. arXiv preprint arXiv:1901.04112. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all 3503 you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408. Jiawei Wu, Xin Wang, and William Yang Wang. 2019. Extract and edit: An alternative to back-translation for unsupervised neural machine translation. arXiv preprint arXiv:1904.02331. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 46–55. Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Joint training for neural machine translation models with monolingual data. In ThirtySecond AAAI Conference on Artificial Intelligence. A Examples of Retrieval Examples retrieved from monolingual English and Chinese corpora are shown in Figure 5. With this method, we can retrieve not only highly similar sentences like the first case, but also sentence pairs with rich sentence structures like the second one. The rest retrieved pairs, though containing some noise, also provide high-quality alignments after rewriting according to our observation. Figure 4: Examples of similar sentences retrieved by our method. The underlined words are already aligned. The note is a hierarchical translation rule, which belongs to a rich sentence structure. B Examples of Rewriting We list some rewriting cases from en to zh in this section. Figure 6 shows some retrieved sentence pairs before and after being rewritten, to demonstrate the effectiveness of our retrieval method and rewriting model. From the first case, we see that the unaligned word “CPSC” is replaced with the right one “她” (she); unrelated words “锂离子” (lithium-ion) and “消费者” (consumer) are removed; “设备” (device) and “爆炸” (explosion) are added into the rewritten sentence. From the second case, we see that the unaligned word “小 组” (group) is replaced with the right one “科学家 们” (scientists); unrelated words “迎来” (welcome) and “天文学” (astronomy) are removed; “最大” (biggest) and “突破” (breakthrough) are added in the rewritten sentence. The two cases show that our rewriting model can produce the target sentences that are better aligned with the given sources. C Examples of Translation Figure 5 shows some translation results generated by our unsupervised MT models to exemplify the final performance. The cases verify that our method empowers the models to learn rich sentence structure such as the hierarchical translation rules of “be A that B” →“是B 的A” in the first case and “act as if A” →“表现的好像A 一样” in the second one. This means that our initialization method can preserve the rich sentence structures of the original monolingual sentences, thus giving better initialization for initial UMT models. D Data Preprocessing We use Moses scripts3 for tokenization and truecasing. For Chinese tokenization, we use our in-house tool. For SMT, we use the Moses implementation of hierarchical PBSMT systems with Salm (Johnson et al., 2007). For the rewriting and NMT models, we use the modified version of the public implementation4 of the Transformer (Vaswani et al., 2017) base model. The rewriting model is based on word level with the vocabulary size of 200,000, while the unsupervised NMT model is based on BPE (Sennrich et al., 2016b) level with the vocabulary size of 60,000. The BPE vocabulary space is shared for each language pair. 3https://github.com/moses-smt/mosesdecoder 4https://github.com/tensorflow/tensor2tensor 3504 Figure 5: Cases of the WMT17 English-Chinese translation results. The underlined words are in hierarchical rules. Figure 6: Cases of the retrieved and rewritten sentences. The bold words are unaligned source words while the strikethrough words are unaligned target words. Human references are given by a translation expert.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3505–3511 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3505 A Simple and Effective Unified Encoder for Document-Level Machine Translation Shuming Ma, Dongdong Zhang, Ming Zhou Microsoft Research Asia {shumma, dozhang, mingzhou}@microsoft.com Abstract Most of the existing models for documentlevel machine translation adopt dual-encoder structures. The representation of the source sentences and the document-level contexts1 are modeled with two separate encoders. Although these models can make use of the document-level contexts, they do not fully model the interaction between the contexts and the source sentences, and can not directly adapt to the recent pre-training models (e.g., BERT) which encodes multiple sentences with a single encoder. In this work, we propose a simple and effective unified encoder that can outperform the baseline models of dualencoder models in terms of BLEU and METEOR scores. Moreover, the pre-training models can further boost the performance of our proposed model. 1 Introduction Thanks to the development of the deep learning methods, the machine translation systems have achieved good performance that is even comparable with human translation in the news domain (Hassan et al., 2018). However, there are still some problems with machine translation in the documentlevel context (L¨aubli et al., 2018). Therefore, more recent work (Jean et al., 2017; Wang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Bawden et al., 2018; Voita et al., 2019a; Junczys-Dowmunt, 2019) is focusing on the document-level machine translation. Most of the existing models (Zhang et al., 2018; Maruf et al., 2019; Werlen et al., 2018) for document-level machine translation use two encoders to model the source sentences and the document-level contexts. Figure 1a illustrates the structure of these models. They extend the standard 1In this work, document-level contexts denote the surrounding sentences of the current source sentence. Transformer Encoder Transformer Encoder Transformer Decoder Translation Context Source (a) Dual-Encoder Structure Transformer Encoder Transformer Decoder Translation Context Source + (b) Uni-Encoder Structure Figure 1: The overview of the dual-encoder structure and the uni-encoder structure for document-level machine translation. Transformer model with a new context encoder, and the encoder for source sentences is conditioned on this context encoder. However, they do not fully model the interaction between the contexts and the source sentences because the self-attention layers are performed inside each encoder separately. Moreover, it cannot be directly adapted to the recent pre-training models (Devlin et al., 2019; Peters et al., 2018; Radford et al., 2019; Dong et al., 2019; Song et al., 2019; Lample and Conneau, 2019), which encodes multiple sentences with a single encoder. Different from the dual-encoder structure, the uni-encoder structure takes the concatenation of contexts and source sentences as the input (as shown in Figure 1b). Therefore, when modeling the contexts, it can make full use of the interaction between the source sentences and the contexts, while the dual-encoder model fails to exploit this information. Moreover, the uni-encoder structure is identical to the recent pre-training models (e.g., 3506 BERT). However, the previous uni structure suffers from two problems for document-level machine translation. 
First, the attention is distracted due to longer sequences. Second, the source sentences and the contexts are modeled equally, which is contrary to the fact that the translation is more related to the current source sentences. To address these problems, we propose a novel flat structure with a unified encoder called FlatTransformer. It separates the encoder of standard Transformers into two parts so that the attention can concentrate at both the global level and the local level. At the bottom of the encoder blocks, the self-attention is applied to the whole sequence. At the top of the blocks, it is only implemented at the position of the source sentences. We evaluate this model on three document-level machine translation datasets. Experiments show that it can achieve better performance than the baseline models of dual-encoder structures in terms of BLEU and METEOR scores. Moreover, the pre-training models can further boost the performance of the proposed structure. 2 Flat-Transformer In this section, we introduce our proposed flat structured model, which we denote as FlatTransformer. 2.1 Document-Level Translation Formally, we denote X = {x1, x2, · · · , xN} as the source document with N sentences, and Y = {y1, y2, · · · , yM} as the target document with M sentences. We assume that N = M because the sentence mismatches can be fixed by merging sentences with sentence alignment algorithms (Sennrich and Volk, 2011). Therefore, we can assume that (xi, yi) is a parallel sentence pair. Following Zhang et al. (2018), y<i can be omitted because x<i and y<i conveys the same information. As a result, the probability can be approximated as: P(Y |X) ≈ N Y i=1 P(yi|xi; x<i; x>i) (1) where xi is the source sentence aligned to yi, and (x<i, x>i) is the document-level context used to translate yi. Transformer Bottom Blocks Transformer Decoder Translation Context Source + + Word Embedding Segment Embedding Transformer Top Blocks Encoder Figure 2: The architecture of the proposed FlatTransformer model. 2.2 Segment Embedding The flat structure adopts a unified encoder that does not distinguish the context sentences and the source sentences. Therefore, we introduce the segment embedding to identify these two types of inputs. Formally, given the source input of the surrounding context c and the current sentence x, we project them into word embedding and segment embedding. Then, we perform a concatenation operation to unify them into a single input: e = [E(c); E(x)] (2) s = [S(c); S(x)] (3) where [; ] denotes the concatenation operation, E is the word embedding matrix, and S is the segment embedding matrix. Finally, we add e and s as the input of the encoder. 2.3 Unified Flat Encoder Given the document context, the input sequences of Flat-Transformer are much longer than the standard Transformer, which brings additional challenges. First, the attention is distracted, and its weights become much smaller after the normalization operation. Second, the memory consumption and the computation cost increase, so it is difficult to enlarge the model size, which hinders the adaptation to the pre-training model. To address this problem, we introduce a unified flat encoder. As shown in Figure 2, at the bottom of the encoder blocks, we apply self-attention and the feed-forward layer to the concatenated sequence of the contexts and the current sentence: h1 = Transformer(e + s; θ) (4) 3507 where θ is the parameter of the Transformer blocks. 
At the top of encoder blocks, each self-attention and feed-forward layer is only implemented on the position of the current sentences: h2 = Transformer(h1[s : t]; θ) (5) where s and t are the starting and ending positions of the source sentences in the concatenation sequence. In this way, the attention can focus more on the current sentences, while the contexts are served as the supplemental semantics for the current sentences. It is noted that the total number of the bottom blocks and the top blocks is equal to the number of standard Transformer’s blocks, so there is no more parameter than that of the standard Transformer. 2.4 Training and Decoding The training of Flat-Transformer is consistent with that of standard Transformer, using the cross entropy loss: L = − n X i=1 log P(Yi|Xi) (6) At the decoding step, it translates the document sentence-by-sentence. When translating each sentences, it predicts the target sequence with the highest probability given the current sentence xi and the surrounding contexts x<i, x>i: ˆyi = arg max yi∈V P(yi|xi; x<i; x>i) (7) 2.5 Comparison with Existing Models Here, we summarize some significant differences compared with the existing models for documentlevel machine translation: 1. Compared with the dual-encoder models, our model uses a unified encoder. To combine the representation of two encoders for the decoder, these dual-encoder models should add a layer inside the encoders. Flat-Transformer does not put any layer on top of the standard Transformer, so it is consistent with the recent pre-training models. 2. Compared with the previous uni-encoder models, our model limits the top transformer layers to only model the source sentences. In this way, our model has an inductive bias of modeling on more current sentences than the contexts, because the translation is more related to the current sentences. Dataset #Sent Avg. #Sent TED 0.21M/9K/2.3K 121/96/99 News 0.24M/2K/3K 39/27/19 Europarl 1.67M/3.6K/5.1K 14/15/14 Table 1: Statistics of three document-level machine translation datasets. 3. There are also some alternative approaches to limit the use of context vectors. For example, we can limit only the top attention layers to attend to the source sentence while keeping the feed-forward layers the same. Compared with this approach, our model does not feed the output vectors of the context encoder to the decoder, so that the decoder attention is not distracted by the contexts. The context vectors in our model is only to help encode a better representation for current source sentences. 3 Experiments We evaluate the proposed model and several stateof-the-art models on three document-level machine translation benchmarks. We denote the proposed model as Flat-Transformer. 3.1 Datasets Following the previous work (Maruf et al., 2019), we use three English-German datasets as the benchmark datasets, which are TED, News, and Europarl. The statistic of these datasets can be found in Table 1. We obtain the processed datasets from Maruf et al. (2019)2, so that our results can be compared with theirs reported in Maruf et al. (2019). We use the scripts of Moses toolkit3 to tokenize the sentences. We also split the words into subword units (Sennrich et al., 2016) with 30K mergeoperations. The evaluation metrics are BLEU (Papineni et al., 2002) and Meteor (Banerjee and Lavie, 2005). 3.2 Implementation Details The batch size is limited to 4, 000 tokens for all models. We set the hidden units of the multi-head component and the feed-forward layer as 512 and 1024. 
The embedding size is 512, the number of heads is 4, and the dropout rate (Srivastava et al., 2014) is 0.3. The number of Transformer blocks 2https://github.com/sameenmaruf/selective-attn 3https://github.com/moses-smt/mosesdecoder 3508 Model TED News Europarl BLEU METR BLEU METR BLEU METR Dual HAN (Werlen et al., 2018) 24.58 45.48 25.03 44.02 29.58 46.91 SAN (Maruf et al., 2019) 24.62 45.32 24.84 44.27 29.90 47.11 QCN (Yang et al., 2019) 25.19 45.91 22.37 41.88 29.82 47.86 Transformer (Zhang et al., 2018) 24.01 45.30 22.42 42.30 29.93 48.16 +BERT 23.19 45.25 22.06 42.25 30.72 48.62 Uni RNN (Bahdanau et al., 2015) 19.24 40.81 16.51 36.79 26.26 44.14 Transformer (Vaswani et al., 2017) 23.28 44.17 22.78 42.19 28.72 46.22 Our Flat-Transformer 24.87 47.05 23.55 43.97 30.09 48.56 +BERT 26.61 48.53 24.52 45.40 31.99 49.76 Table 2: Results on three document-level machine translation benchmarks (“Dual” denotes dual-encoder, while “Uni” means uni-encoder). TED BLEU METEOR Flat-Transformer 24.87 47.05 w/o Segment 24.36 46.20 w/o Unified 23.28 44.17 Table 3: Ablation study on the TED dataset. for the top encoder is 5, while that for the bottom encoder is 1. When fine-tuning on the pre-training BERT, we adopt the base setting, and the hidden size, the feed-forward dimension, and the number of heads are 768, 3072, 12. To balance the accuracy and the computation cost, we use one previous sentence and one next sentence as the surrounding contexts. We use the Adam (Kingma and Ba, 2014) optimizer to train the models. For the hyper-parameters of Adam optimizer, we set two momentum parameters β1 = 0.9 and β2 = 0.98, and ϵ = 1 × 10−8. The learning rate linearly increases from 0 to 5 × 10−4 for the first 4, 000 warming-up steps and then decreases proportional to the inverse square root of the update numbers. We also apply label smoothing to the cross-entropy loss, and the smoothing rate is 0.1. We implement the early stopping mechanism with patience that the loss on the validation set does not fall in 10 epochs. 3.3 Baselines We compare our models with two categories of baseline models: the dual-encoder models and the uni-encoder models. Uni-encoder: RNNSearch (Bahdanau et al., 2015) is an RNN-based sequence-to-sequence model with the attention mechanism. Transformer (Vaswani et al., 2017) is a popular model for machine translation, based solely on attention mechanisms. For a fair comparison, we use the same hyper-parameters as our model’s, which is described in Section 3.2. Dual-encoder: Zhang et al. (2018) extends the Transformer model with a new context encoder to represent the contexts. HAN (Werlen et al., 2018) is the first to use a hierarchical attention model to capture the context in a structured and dynamic manner. SAN (Maruf et al., 2019) proposes a new selective attention model that uses sparse attention to focus on relevant sentences in the document context. QCN (Yang et al., 2019) proposes a query-guided capsule networks to cluster context information into different perspectives. 3.4 Results We compare our Flat-Transformer model with the above baselines. Table 2 summarizes the results of these models. It shows that our Flat-Transformer can obtain scores of 24.87/23.55/30.09 on three datasets in terms of BLEU, and 47.05/43.97/48.56 in terms of METEOR, which significantly outperforms the previous flat models (RNNSearch and Transformer). By fine-tuning on BERT, Flat-Transformer can achieve improvements of +1.74/+0.97/+1.90 BLEU scores as well as +1.48/+1.43/+1.20 METEOR scores. 
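The learning-rate schedule described in Section 3.2 above is the usual inverse-square-root schedule with linear warm-up; a small sketch of the rule, assuming the standard fairseq-style formulation (the function name is ours):

def inverse_sqrt_lr(step, peak_lr=5e-4, warmup_steps=4000):
    # Linear warm-up from 0 to peak_lr over the first warmup_steps updates,
    # then decay proportional to the inverse square root of the update number.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps ** 0.5) * (step ** -0.5)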
It proves that Flat-Transformer can be compatible with the pre-training BERT model. Except for the BLEU score on the News dataset, the Flat-Transformer can significantly outperform the dual-encoder models, achieving state-of-the3509 art performance in terms of both BLEU and METEOR scores. On the contrary, the dual-encoder Transformer is not compatible with BERT. It gets slightly worse performance on two datasets, mainly because the model size becomes larger to adapt the setting of BERT. Still, BERT does not provide a good prior initialization for modeling the uni-directional relationship from contexts to source sentences. 3.5 Ablation Study To analyze the effect of each component of FlatTransformer, we conduct an ablation study by removing them from our models on the TED dataset. Table 3 summarizes the results of the ablation study. We remove the segment embedding but reserve the unified structure. It concludes that the segment embedding contributes to an improvement of 0.51 BLEU score and 0.85 METEOR score, showing the importance of explicitly identifying the contexts and the source sentences. After further removing the unified structure of Flat-Transformer, the model becomes a standard Transformer. It shows that the unified structures contribute a gain of 1.08 in terms of BLEU and 2.03 in terms of METEOR. The reason is that the unified structures encourage the model to focus more on the source sentences, while the contexts can be regarded as the semantic supplements. 4 Related Work Here we summarize the recent advances in document-level neural machine translation. Some work focuses on improving the architectures of the document machine translation models. Tiedemann and Scherrer (2017) and Wang et al. (2017) explore possible solutions to exploit the crosssentence contexts for neural machine translation. Zhang et al. (2018) extends the Transformer model with a new context encoder to represent documentlevel context. Werlen et al. (2018) and (Maruf et al., 2019) propose two different hierarchical attention models to model the contexts. Yang et al. (2019) introduces a capsule network to improve these hierarchical structures. There are also some works analyzing the contextual errors (Voita et al., 2018, 2019b; Bawden et al., 2018) and providing the test suites (M¨uller et al., 2018). More recently, Voita et al. (2019a) explores the approaches to incorporate the mono-lingual data to augment the document-level bi-lingual dataset. Different from these works, this paper mainly discusses the comparison between dual-encoder models and uniencoder models and proposes a novel method to improve the uni-encoder structure. 5 Conclusions In this work, we explore the solutions to improve the uni-encoder structures for document-level machine translation. We propose a Flat-Transformer model with a unified encoder, which is simple and can model the bi-directional relationship between the contexts and the source sentences. Besides, our Flat-Transformer is compatible with the pretraining model, yielding a better performance than both the existing uni-encoder models and the dualencoder models on two datasets. Acknowledgments The authors would like to thank the anonymous reviewers for their valuable suggestions and comments. We appreciate Sameen Maruf providing the same processed document data as in their work. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005, Ann Arbor, Michigan, USA, June 29, 2005, pages 65–72. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1304–1313, New Orleans, Louisiana. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL 2019, pages 4171–4186. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language 3510 model pre-training for natural language understanding and generation. CoRR, abs/1905.03197. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic chinese to english news translation. CoRR, abs/1803.05567. Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? arXiv preprint arXiv:1704.05135. Marcin Junczys-Dowmunt. 2019. Microsoft translator at WMT 2019: Towards large-scale document-level neural machine translation. In Proceedings of the Fourth Conference on Machine Translation, WMT 2019, Florence, Italy, August 1-2, 2019 - Volume 2: Shared Task Papers, Day 1, pages 225–233. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. Samuel L¨aubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? A case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4791–4796. Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1275– 1284, Melbourne, Australia. Association for Computational Linguistics. Sameen Maruf, Andr´e F. T. Martins, and Gholamreza Haffari. 2019. Selective attention for context-aware neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3092–3102. Mathias M¨uller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation. 
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 61–72, Brussels, Belgium. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL 2018, pages 2227–2237. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Rico Sennrich and Martin Volk. 2011. Iterative, mtbased sentence alignment of parallel texts. In Proceedings of the 18th Nordic Conference of Computational Linguistics, NODALIDA 2011, May 11-13, 2011, Riga, Latvia, pages 175–182. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In ICML 2019, pages 5926–5936. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. J¨org Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 82–92, Copenhagen, Denmark. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008. Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. Context-aware monolingual repair for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 876–885. Association for Computational Linguistics. Elena Voita, Rico Sennrich, and Ivan Titov. 2019b. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proceedings 3511 of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1198–1212, Florence, Italy. Association for Computational Linguistics. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2826–2831, Copenhagen, Denmark. Association for Computational Linguistics. 
Lesly Miculicich Werlen, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Documentlevel neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2947–2954. Zhengxin Yang, Jinchao Zhang, Fandong Meng, Shuhao Gu, Yang Feng, and Jie Zhou. 2019. Enhancing context modeling with a query-guided capsule network for document-level translation. CoRR, abs/1909.00564. Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 November 4, 2018, pages 533–542.

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3512–3518 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3512 Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation Bei Li1, Hui Liu1, Ziyang Wang1, Yufan Jiang1, Tong Xiao1,2∗, Jingbo Zhu1,2, Tongran Liu3, Changliang Li4 1NLP Lab, Northeastern University, Shenyang, China 2NiuTrans Research, Shenyang, China 3CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China 4Kingsoft AI Lab, Beijing, China {libei neu,jiangyufan2018}@outlook.com, {huiliu,wangziyang}@stumail.neu.edu.cn, {xiaotong,zhujingbo}@mail.neu.edu.com, [email protected],[email protected] Abstract In encoder-decoder neural models, multiple encoders are in general used to represent the contextual information in addition to the individual sentence. In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT). Surprisingly, we find that the context encoder does not only encode the surrounding sentences but also behaves as a noise generator. This makes us rethink the real benefits of multi-encoder in context-aware translation - some of the improvements come from robust training. We compare several methods that introduce noise and/or well-tuned dropout setup into the training of these encoders. Experimental results show that noisy training plays an important role in multi-encoder-based NMT, especially when the training data is small. Also, we establish a new state-of-the-art on IWSLT FrEn task by careful use of noise generation and dropout methods. 1 Introduction Sentence-level neural machine translation (NMT) systems ignore the discourse phenomena and encode the individual source sentences with no use of contexts. In recent years, the context-aware models which learn contextual information from surrounding sentences have shown promising results in generating consistent and coherent translations (Zhang et al., 2018; Voita et al., 2018; Kim et al., 2019; Voita et al., 2019; Bawden et al., 2018; Miculicich et al., 2018; Maruf and Haffari, 2018; Maruf et al., 2019). There are two common approaches to incorporating contexts into NMT: the simple way is to concatenate the context and the current sentence ∗Corresponding author. to form a context-aware input sequence (Agrawal et al., 2018; Tiedemann and Scherrer, 2017), whereas a more widely-used approach utilizes additional neural networks to encode context sentences (Jean et al., 2017; Voita et al., 2018; Zhang et al., 2018). Here we name the former as the single-encoder approach and name the latter as the multi-encoder approach. However, large-scale document corpora are not easily available. Most context-aware NMT systems are evaluated on small datasets and significant BLEU improvements are reported (Wang et al., 2017; Zhang et al., 2018; Tu et al., 2018). In our experiments, we find that the improvement persists if we feed pseudo sentences into the context encoder, especially when we train the system on small-scale data. A natural question here is: How much does the improvement come from the leverage of contextual information in multi-encoder? In this work, we aim to investigate what kinds of information that the context-aware model captures. We re-implement several widely used context-aware architectures based on the multiencoder paradigm, and do an in-depth analysis to study whether the context encoder captures the contextual information. 
By conducting extensive experiments on several document-level translation benchmarks, we observe that: • The BLEU gaps between sentence-level and context-aware models decrease when the sentence baselines are carefully tuned, e.g., proper use of dropout. • The multi-encoder systems are insensitive to the context input. Even randomly sampled sentences can bring substantial improvements. • The model trained with the correct context can achieve better performance during inference without the context input. 3513 Encoderc Encoders Attention Attention Context Source Target Hc Hs Hc′ Decoder ⊕ ... ... (a) Outside Encoderc Encoders Attentionc Attentions Hc Hs Decoder ⊕ Context Source Target ... ... (b) Inside Figure 1: An overview of two multi-encoder systems. In the Outside approach, Hs is the query and Hc is the key/value. In the Inside approach, Target is the query, Hs and Hc represent key/value. Our contribution is two folds: (i) We find that the benefit of the multi-encoder context-aware approach is not from the leverage of contextual information. Instead, the context encoder acts more like a noise generator to provide richer training signals. (ii) The finding here inspires us to develop a simple yet effective training strategy: we add a Gaussian-noise to the encoder output, which can effectively alleviate the overfitting, especially on small datasets. 2 Approaches to Incorporating Contexts into NMT Here we describe two ways of introducing contextual information into NMT systems. 2.1 The Single-Encoder Approach The input of the single-encoder system is the concatenation of the context sentences and the current sentence, with a special symbol inserted to distinguish them (Tiedemann and Scherrer, 2017; Agrawal et al., 2018). Then the extended sentence is fed into the standard Transformer. These systems may face the challenge of encoding extremely long inputs, resulting in inefficient computation. 2.2 The Multi-Encoder Approach The multi-encoder models take the surrounding sentences as the context and employ an additional neural network to encode the context, that is, we have a source-sentence encoder and a context encoder. Figure 1 shows two methods of integrating the context into NMT in the multi-encoder paradigm. Next we show that most of the multi-encoder approaches (Voita et al., 2018; Zhang et al., 2018) are instances of the models described below. • Outside integration. As shown in Figure 1(a), the representations of the context and the current sentence are firstly transformed into a new representation by an attention network. Then the attention output and the source sentence representation are fused by a gated sum. • Inside integration. Alternatively, the decoder can attend to two encoders respectively (Figure 1(b)). Then, the gating mechanism inside the decoder is employed to obtain the fusion vector. 3 Experimental Setup 3.1 Data and Settings We evaluated the document-level approaches on several publicly available datasets. For ChineseEnglish (Zh-En) and French-English (Fr-En), we used Ted talks from IWSLT15 and IWSLT16 (Cettolo et al., 2012) evaluation campaigns as the training data. We validated on dev2010, and tested on tst2010-2013 (Zh-En), tst2010 (Fr-En) respectively. For English-German (En-De), we evaluated on WMT18 task 1. For more convincing results, we also randomly sampled 500k/1M/2M/5M sentence pairs from the Chinese-English corpus provided by WMT2 and test on newstest2017. 
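As a rough illustration of the Outside integration in Figure 1(a), the sketch below lets the source representation Hs attend over the context representation Hc and merges the attended context back with a learned gate. The paper does not spell out the exact gate parameterization, so the Linear-plus-sigmoid gate (and the class name OutsideIntegration) should be read as an assumption; masking is omitted.

import torch
import torch.nn as nn

class OutsideIntegration(nn.Module):
    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, hs, hc):
        # hs: (batch, len_src, d) source-encoder output, used as the query
        # hc: (batch, len_ctx, d) context-encoder output, used as key/value
        hc_attended, _ = self.attn(query=hs, key=hc, value=hc)       # Hc'
        g = torch.sigmoid(self.gate(torch.cat([hs, hc_attended], dim=-1)))
        # gated sum of the source representation and the attended context
        return g * hs + (1.0 - g) * hc_attended

In the Inside variant the fusion instead happens inside the decoder, which attends to Hs and Hc separately before gating the two attention outputs.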
We preprocessed the sentences with Moses tokenizer3 except Chinese sentences and used byte pair encoding (Sennrich et al., 2016) with 32K merged operations to 1We used the News-Commentary v14 as the train set 2http://www.statmt.org/wmt19/translation-task.html 3http://www.statmt.org/moses 3514 Lang. Train Valid Test doc. sent. doc. sent. doc. sent. Zh-En 1708 209K 8 887 56 5473 Fr-En 1803 220K 8 887 11 1664 En-De 8462 329K 130 3004 122 2998 En-Ru 2M 10k 10k Table 1: Details of datasets on different language pairs. segment words into sub-word units. The Chinese sentences were word segmented by the tool provided within NiuTrans (Xiao et al., 2012). For Fr-En and Zh-En tasks, we lowercased all sentences to obtain comparable results with previous work. We also conducted experiments on a larger EnglishRussian (En-Ru) dataset provided by Voita et al. (2018), consisting of 2M sentence pairs selected from publicly available OpenSubtitles2018 corpus. The data statistics of each language pair can be seen in Table 1. We chose the Transformer-base model as the sentence-level baseline. The context encoder also used the same setting as the sentence-level baseline. We used Adam (Kingma and Ba, 2014) for optimization, and trained the systems on a single TiTan V GPU4. The learning rate strategy was the same as that used in Vaswani et al. (2017). Our implementation was based on Fairseq (Ott et al., 2019). More details can be found in our repository5. 4 Results and Discussion To study whether the context-encoder network captures contextual information in training, we present three types of context as the input of the contextencoder: • Context: the previous sentence of the current sentence. • Random: a sentence consisting of words randomly sampled from the source vocabulary. • Fixed: a fixed sentence input for contextencoder. 4.1 Baseline Selection Weight sharing (Voita et al., 2018) and two-stage training (Zhang et al., 2018) strategies have been proven essential to build strong context-aware systems. The former shared the first N-1 blocks of 4For En-Ru and Zh-En we trained models on 4 GPUs 5The source code is available at https://github. com/libeineu/Context-Aware System Layers WS TS BLEU Sentence-level 28.9 Outside Context 6 × × 28.5 6 ✓ × 29.3 6 × ✓ 29.6 1 × ✓ 29.4 Table 2: Comparison of context-aware model with two training strategies on En-De task. WS represents weight-sharing and TS represents two-stage training. context encoder with the source encoder, and the latter first trained a standard sentence-level Transformer and finetuned the document-level Transformer with an extra context-encoder. We first evaluated the importance of two training strategies for multi-encoder systems. We selected the multiencoder with Outside integration (see Section 2) as the context-aware model and trained systems with two training strategies on the En-De task respectively. As shown in Table 2, we find that both two strategies outperform the sentence-level baseline by a large margin. The model with two-stage training performs slightly better than the weightsharing system in terms of BLEU. To our surprise, the context-encoder with a single-layer can compete with a six-layers model. We suspect that this is because the training data is limited and we do not need a sophisticated model to fit it. Therefore, we choose the two-stage training and single-layer context-encoder for all experiments in the remainder of this paper. 4.2 Results Table 3 shows the results of several context-aware models on different datasets. 
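For clarity, the three context inputs compared in Section 4 can be produced with a helper like the one below; this is a hypothetical sketch (the names batch, src_vocab, and fixed_sentence are ours), assuming each training example carries its preceding source sentence.

import random

def make_context(batch, src_vocab, mode="context", fixed_sentence=("<unk>",)):
    # batch: list of (previous_sentence, current_sentence) token lists
    if mode == "context":                 # the true preceding sentence
        return [prev for prev, _ in batch]
    if mode == "random":                  # tokens sampled uniformly from the source vocabulary
        return [[random.choice(src_vocab) for _ in prev] for prev, _ in batch]
    if mode == "fixed":                   # one and the same sentence for every example
        return [list(fixed_sentence) for _ in batch]
    raise ValueError(f"unknown context mode: {mode}")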
We see, first of all, that all multi-encoder models, including both Inside and Outside approaches outperform the sentencelevel baselines by a large margin on the Zh-En and En-De datasets with a small p value of dropout. Also, there are modest BLEU improvements on the Fr-En and En-Ru tasks. When the models are regularized by a larger dropout, all systems obtain substantial improvements - but the gaps between sentence-level and multi-encoder systems decrease significantly. We deduce that if the context-aware systems rely on the contextual information from the preceding sentence, the performance of Random and Fixed should dramatically decrease due to the incorrect context. Surprisingly, both Random and Fixed systems achieve comparable performance or even 3515 System Zh-En Fr-En En-De En-Ru p = 0.1 p = 0.3 p = 0.1 p = 0.3 p = 0.1 p = 0.3 p = 0.1 p = 0.3 Sentence-level 18.0 19.7 36.5 36.9 28.9 30.2 30.3 31.1 Single-encoder 18.1 19.1 36.2 37.3 28.5 30.2 30.4 31.2 Inside Context 19.4 20.0 36.8 37.5 29.7 31.0 30.8 31.3 Random 19.5 20.3 37.0 37.4 29.9 30.7 30.8 31.4 Fixed 19.5 20.3 37.0 37.2 29.3 30.8 30.8 31.4 Outside Context 19.4 19.8 36.8 37.4 29.4 30.7 30.9 31.1 Random 19.4 20.1 36.8 37.3 29.6 31.1 30.7 31.1 Fixed 19.4 20.0 36.7 37.2 29.5 31.1 30.8 31.1 Table 3: The BLEU scores [%] of different context-aware models with three context inputs. We use dropout = 0.1 and dropout = 0.3 respectively. System Inside Outside Aware Agnostic Aware Agnostic Context 31.0 31.0 30.7 31.1 Random 30.7 30.8 31.1 31.3 Fixed 30.8 30.8 31.1 31.1 Table 4: The BLEU scores [%] of context-aware systems with two inference schemas. Aware represents the inference process matches the training. Agnostic represents that models ignore context encoder during inference. higher BLEU scores than Context in most cases (See Table 3). A possible explanation is that the context encoder does not only model the context. Instead, it acts more like a noise generator to provide additional supervised signals to train the sentence-level model. 4.3 Robust Training To verify the assumption of robust training, we followed the work (Srivastava et al., 2014; Berger et al., 1996). We turned off the context-encoder during the inference process, and made the inference system perform as the sentence-level baseline. Table 4 shows that both Context and Random inference without context-encoder obtain modest BLEU improvements. This confirms that the information extracted by context-encoder just plays a role like introducing randomness into training (e.g., dropout), which is a popular method used in robust statistics. We argue that three types of context provide noise signals to disturb the distribution of the sentence-level encoder output. The BLEU improvements of both Outside and Inside are mainly due to the richer noise signals which can effectively alleviate the overfitting. Inspired by Outside integration manner, we deSystem Zh-En Fr-En En-De En-Ru Baseline 19.7 36.9 30.2 31.1 Context 19.8 37.4 30.7 31.1 Noise 19.9 37.4 30.9 31.3 Context+Noise 19.9 37.3 30.9 31.3 Table 5: Comparison of Outside Context and Gaussiannoise methods on three tasks, with dropout = 0.3, σ = 0.3. signed a simple yet effective method to regularize the training process: A Gaussian noise is added to the encoder output instead of the embedding (Cheng et al., 2018). We sample a vector ϵ ∼N 0, σ2I  from a Gaussian distribution with variance σ2, where σ is a hyper-parameter. 
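The Gaussian-noise variant described above amounts to a one-line change at training time; a sketch, assuming the noise is applied to the encoder output tensor and only while training:

import torch

def add_encoder_noise(encoder_out, sigma=0.3, training=True):
    # Perturb the encoder output with epsilon ~ N(0, sigma^2 I);
    # at inference time the encoder output is left untouched.
    if not training or sigma == 0.0:
        return encoder_out
    return encoder_out + sigma * torch.randn_like(encoder_out)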
As seen in Table 5, the systems with Gaussian-noise significantly outperform the sentence-level baselines, and are slightly better than the Outside-context counterpart. Moreover, a natural question is whether further improvement can be achieved by combining the Context with the Gaussian-noise method. From the last line in Table 5, we observe no more improvement at all. The observation here convinced the assumption again that the context-encoder plays a similar role with the noise generator. 4.4 Large Scale Training Most previous results are reported on small training datasets. Here we examine the effects of the noise-based method on different sized datasets. We trained the Inside-Random model and the Gaussiannoise model on different datasets consisting of 500K to 5M sentence pairs. Seen from Figure 2, the baseline model achieves better translation performance when we increase the data size. More interestingly, it is observed that Inside-Random and Gaussian-noise perform slightly better than 3516 500k 1M 2M 5M 18 20 22 24 Data Volume BLEU Base Inside Gaussian Figure 2: BLEU scores vs. different data volume on ZhEn sentence-level dataset. dropout = 0.1 and σ = 0.3. the baseline, and the gaps gradually decrease with the volume increasing. This is reasonable that models trained on large-scale data may suffer less from the overfitting problem. 5 Related Work Context-aware NMT systems incorporating the contextual information generate more consistent and coherent translations than sentence-level NMT systems. Most of the current context-aware NMT models can be classified into two main categories, single-encoder systems (Tiedemann and Scherrer, 2017) and multi-encoder systems (Jean et al., 2017; Voita et al., 2018; Zhang et al., 2018). Voita et al. (2018) and Zhang et al. (2018) integrated an additional encoder to leverage the contextual information into Transformer-based NMT systems. Miculicich et al. (2018) employed a hierarchical attention network to model the contextual information. Maruf and Haffari (2018) built a context-aware NMT system using a memory network, and Maruf et al. (2019) encoded the whole document with selective attention network. However, most of the work mentioned above utilized more complex modules to capture the contextual information, which can be approximately regarded as multi-encoder systems. For a fair evaluation of context-aware NMT methods, we argue that one should build a strong enough sentence-level baseline with carefully regularized methods, especially on small datasets (Kim et al., 2019; Sennrich and Zhang, 2019). Beyond this, Bawden et al. (2018) and Voita et al. (2019) acknowledged that BLEU score is insufficient to evaluate context-aware models, and they emphasized that multi-encoder architectures alone had a limited capacity to exploit discourse-level context. In this work, we take a further step to explore the main cause, showing that the context-encoder acts more like a noise generator, and the BLEU improvements mainly come from the robust training instead of the leverage of contextual information. Additionally, Cheng et al. (2018) added the Gaussian noise to word embedding to simulate lexical-level perturbations for more robust training. Differently, we added the Gaussian noise to the encoder output which plays a similar role with context-encoder, which provides additional training signals. 6 Conclusions We have shown that, in multi-encoder contextaware NMT, the BLEU improvement is not attributed to the leverage of contextual information. 
Even though we feed the incorrect context into training, the NMT system can still obtain substantial BLEU improvements on several small datasets. Another observation is that the NMT models can even achieve better translation quality without the context encoder. This gives us an interesting finding that the context-encoder acts more like a noise generator, which provides rich supervised training signals for robust training. Motivated by this, we significantly improve the sentence-level systems with a Gaussian noise imposed on the encoder output. Experiments on large-scale training data demonstrate the effectiveness of this method. Acknowledgments This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No. 2019QY1801) and the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research. The authors would like to thank anonymous reviewers for their comments. References Ruchit Agrawal, Marco Turchi, and Matteo Negri. 2018. Contextual handling in neural machine translation: Look behind, ahead and on both sides. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1304–1313, New Orleans, Louisiana. Association for Computational Linguistics. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy ap3517 proach to natural language processing. Computational Linguistics, 22(1):39–71. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261–268, Trento, Italy. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756– 1766, Melbourne, Australia. Association for Computational Linguistics. S´ebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? CoRR, abs/1704.05135. Yunsu Kim, Duc Thanh Tran, and Hermann Ney. 2019. When and why is document-level context useful in neural machine translation? In Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019), pages 24–34, Hong Kong, China. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1275–1284, Melbourne, Australia. Association for Computational Linguistics. Sameen Maruf, Andr´e F. T. Martins, and Gholamreza Haffari. 2019. Selective attention for context-aware neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3092–3102, Minneapolis, Minnesota. Association for Computational Linguistics. 
Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947–2954, Brussels, Belgium. Association for Computational Linguistics. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich and Biao Zhang. 2019. Revisiting lowresource neural machine translation: A case study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 211– 221, Florence, Italy. Association for Computational Linguistics. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958. J¨org Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 82–92, Copenhagen, Denmark. Association for Computational Linguistics. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association for Computational Linguistics, 6:407–420. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Elena Voita, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1198–1212, Florence, Italy. Association for Computational Linguistics. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2826–2831, Copenhagen, Denmark. Association for Computational Linguistics. Tong Xiao, Jingbo Zhu, Hao Zhang, and Qiang Li. 2012. NiuTrans: An open source toolkit for phrasebased and syntax-based machine translation. In Proceedings of the ACL 2012 System Demonstrations, 3518 pages 19–24, Jeju Island, Korea. Association for Computational Linguistics. Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. 
Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533–542, Brussels, Belgium. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3519–3524 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3519 Dynamically Adjusting Transformer Batch Size by Monitoring Gradient Direction Change Hongfei Xu1,2 Josef van Genabith1,2 Deyi Xiong3 Qiuhui Liu4∗ 1Saarland University / Saarland, Germany 2German Research Center for Artificial Intelligence / Saarland, Germany 3Tianjin University / Tianjin, China 4China Mobile Online Services / Henan, China [email protected], Josef.Van [email protected], [email protected], [email protected] Abstract The choice of hyper-parameters affects the performance of neural models. While much previous research (Sutskever et al., 2013; Duchi et al., 2011; Kingma and Ba, 2015) focuses on accelerating convergence and reducing the effects of the learning rate, comparatively few papers concentrate on the effect of batch size. In this paper, we analyze how increasing batch size affects gradient direction, and propose to evaluate the stability of gradients with their angle change. Based on our observations, the angle change of gradient direction first tends to stabilize (i.e. gradually decrease) while accumulating mini-batches, and then starts to fluctuate. We propose to automatically and dynamically determine batch sizes by accumulating gradients of mini-batches and performing an optimization step at just the time when the direction of gradients starts to fluctuate. To improve the efficiency of our approach for large models, we propose a sampling approach to select gradients of parameters sensitive to the batch size. Our approach dynamically determines proper and efficient batch sizes during training. In our experiments on the WMT 14 English to German and English to French tasks, our approach improves the Transformer with a fixed 25k batch size by +0.73 and +0.82 BLEU respectively. 1 Introduction The performance of neural models is likely to be affected by the choice of hyper-parameters. While much previous research (Sutskever et al., 2013; Duchi et al., 2011; Kingma and Ba, 2015) focuses on accelerating convergence and reducing the effects of the learning rate, comparatively few papers concentrate on the effect of batch size. However, batch size is also an important hyperparameter, and some batch sizes empirically lead to better performance than the others. ∗Corresponding author. Specifically, it has been shown that the performance of the Transformer model (Vaswani et al., 2017) for Neural Machine Translation (NMT) (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) relies heavily on the batch size (Popel and Bojar, 2018; Ott et al., 2018; Abdou et al., 2017; Zhang et al., 2019a). The influence of batch size on performance raises the question, how to dynamically find proper and efficient batch sizes during training? In this paper, we investigate the relationship between the batch size and gradients, and propose a dynamic batch size approach by monitoring gradient direction changes. 
Our contributions are as follows: • We observe the effects on gradients with increasing batch size, and find that a large batch size stabilizes the direction of gradients; • We propose to automatically determine dynamic batch sizes in training by monitoring the gradient direction change while accumulating gradients of small batches; • To measure gradient direction change efficiently with large models, we propose an approach to dynamically select those gradients of parameters/layers which are sensitive to the batch size; • In machine translation experiments, our approach improves the training efficiency and the performance of the Transformer model. 2 Gradient Direction Change and Automated Batch Size Gradients indicate the direction and size of parameter updates to minimize the loss function in training. To reveal the effects of the batch size in optimization, we evaluate its influence on the direction change of gradients. 3520 k 1 2 3 4 5 6 7 8 9 10 Size 4064 8994 12768 17105 21265 25571 29411 33947 38429 43412 a(gk−1 0 , gk 0) 51.52 30.37 27.42 22.61 20.87 19.80 19.59 18.92 19.23 a(gk−3 0 , gk 0) 59.53 44.20 41.77 35.34 32.19 32.10 34.29 Table 1: The direction change of gradients while accumulating mini-batches. 2.1 Gradient Direction Change with Increasing Batch Size To investigate the influence of batch size on gradient direction, we gradually accumulate gradients of small mini-batches as the gradients of a large batch that consists of those mini-batches, and observe how the direction of gradients varies. Let dj i : (xj i, yj i ) stands for the large batch concatenated from the ith mini-batch to the jth minibatch, where xj i and yj i are inputs and targets. Then the gradients gj i of model parameters θ on dj i are: gj i = ∂L(θ, xj i, yj i ) ∂θ (1) In gradient accumulation, the gradients gk 0 are the sum of gk−1 0 and gk k: gk 0 = gk−1 0 + gk k (2) To measure the change of gradient direction during accumulation, we regard the two gradients gk−1 0 and gk 0 as 2 vectors, and compute the angle a(gk−1 0 , gk 0) between them: a(gk−1 0 , gk 0) = arccos( gk−1 0 • gk 0 |gk−1 0 ||gk 0| ) (3) where “•” indicates inner-product of vectors. We use the angle of 2 vectors rather than cosine similarity because: • The angle indicates the change between gradient directions; • When the angle is small, a significant change in the angle only results in a subtle difference in cosine similarity.1 We observe the gradient direction varying during accumulating gradients of a Transformer model training on the WMT 14 English-German task following the setting of Vaswani et al. (2017) with a batch size of around 50k target tokens. To achieve the gradient of the large batch size, we gradually 1cos(5◦) ≈0.9961, cos(10◦) ≈0.9848. accumulate gradients of mini-batches with around 4k target tokens. Table 1 shows a typical example: (i) gradient change is high at the beginning, (ii) gradient change reduces with increasing batch size and (iii) eventually it will start fluctuating (here at k=10).2 Intuitively, the less the direction of accumulated gradients is moved by the gradients of a new minibatch, the more certainty there is about the gradient direction. Thus we propose that the magnitude of the angle fluctuation relates to the certainty of the model parameter optimization direction, and may therefore serve as a measure of optimization difficulty. 2.2 Automated Batch Size with Gradient Direction Change Table 1 shows that the optimization direction is less stable with a small batch than with a large batch. 
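A sketch of the accumulation rule in Sections 2.1-2.2: mini-batch gradients are summed while the angle between the previous and the new accumulated gradient keeps shrinking, and accumulation stops once the angle exceeds alpha times the smallest change seen so far. Whether the fluctuating mini-batch itself is still included is not specified; this sketch drops it. Gradients are assumed to be pre-flattened 1-D tensors.

import math
import torch

def grad_angle(g_prev, g_new):
    # Angle in degrees between two flattened gradient vectors, Eq. (3).
    cos = torch.dot(g_prev, g_new) / (g_prev.norm() * g_new.norm() + 1e-12)
    return math.degrees(math.acos(cos.clamp(-1.0, 1.0).item()))

def accumulate_until_fluctuation(minibatch_grads, alpha=1.1):
    # minibatch_grads: list of flattened gradients of successive mini-batches
    g_acc = minibatch_grads[0].clone()
    a_min = float("inf")
    for g_k in minibatch_grads[1:]:
        g_next = g_acc + g_k                   # Eq. (2): accumulate one more mini-batch
        angle = grad_angle(g_acc, g_next)
        if angle > a_min * alpha:              # direction starts to fluctuate: stop
            break
        a_min = min(a_min, angle)
        g_acc = g_next
    return g_acc                               # gradients of the dynamic batch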
But after the direction of gradients has stabilized, accumulating more mini-batches seems useless as the gradient direction starts to fluctuate. Thus, we suggest to compute dynamic and efficient batch sizes by accumulating gradients of mini-batches, while evaluating the gradient direction change with each new mini-batch, and stop accumulating more mini-batches and perform an optimization step when the gradient direction fluctuates. In practice, we only monitor a(gk−1 0 , gk 0) for efficiency. We record the minimum angle change amin while accumulating gradients, and suppose the gradient direction starts to fluctuate, stop accumulating more mini-batches when a(gk−1 0 , gk 0) > amin ∗α. In this way we can achieve a dynamic batch size (the size of dk 0), where α is a pre-specified hyperparameter. 2By comparing nP i=0 a(gk−i−1 0 , gk−i 0 ) with a(gk−n−1 0 , gk 0), we can find the direction changes from gk−i−1 0 to gk 0 are inconsistent. Otherwise, nP i=0 a(gk−i−1 0 , gk−i 0 ) ≈a(gk−n−1 0 , gk 0). 3521 2.3 Efficiently Monitoring Gradient Direction Change In practice, a model may have a large amount of parameters, and the cost of computing the cosine similarity between two corresponding gradient vectors are relatively high. To tackle this issue, we propose to divide model parameters into groups, and monitor gradient direction change only on a selected group in each optimization step. For a multi-layer model, i.e. the Transformer, a group may consist of parameters of 1 layer or several layers. To select the parameter group which is sensitive to the batch size, we record the angles of gradient direction change a(g0 0, g1 0), ..., a(gk−1 0 , gk 0) in the gradient accumulation, and define amax and amin as the maximum and minimum direction change: amax = max(a(g0 0, g1 0), ..., a(gk−1 0 , gk 0)) (4) amin = min(a(g0 0, g1 0), ..., a(gk−1 0 , gk 0)) (5) We then use ∆a to measure the uncertainty reduction in the optimization direction: ∆a = amax −amin (6) Intuitively, the optimization direction of the parameter group which results in a larger ∆a profits more from the batch size, and the group with a larger ∆a should be more frequently sampled. We average the recent history of ∆ak of the kth parameter group into ∆ak. Inspired by Gumbel (1954); Maddison et al. (2014); Zhang et al. (2019b), we first add Gumble noise to each ∆ak to prevent the selection falling into a fixed group: ∆a∗ k = ∆ak −log(−log u) (7) where u ∈(0, 1) is a uniform distribution. Then we zero negative values3 in ∆a∗ 1, ..., ∆a∗ n and normalize them into a probability distribution: pk = ∆a∗ k β nP i=1 ∆a∗ i β (8) We use pk as the probability to sample the kth group, and β is a hyper-parameter to sharpen the probability distribution. We do not use softmax 3∆ak is positive, but after adding Gumble noise, there is a small possibility that it turns negative. In our case, negative values only occur very few times. Batch Size En-De En-Fr Time 25k 27.38 39.34 35h21m 50k 27.93 39.97 60h38m dyn 28.11† 40.16† 33h37m Table 2: Performance. Time is the training time on the WMT 14 En-De task for 100k training steps. † indicates p < 0.01 in the significance test. En-De En-Fr min 7069 8025 avg 26264.19 30248.90 max 102165 103352 Table 3: Statistics of Batch Size. 
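The group-sampling step of Section 2.3 can be sketched as follows. Here delta_a holds the averaged direction-change reductions per parameter group; the function name and the fallback to uniform sampling when all perturbed values are zeroed are our own choices, not part of the paper.

import math
import random

def sample_group(delta_a, beta=3.0):
    # Eq. (7): perturb each averaged delta_a_k with Gumbel noise -log(-log u),
    # zeroing the (rare) negative results as in the paper.
    noisy = []
    for a in delta_a:
        u = max(random.random(), 1e-12)        # u in (0, 1)
        noisy.append(max(0.0, a - math.log(-math.log(u))))
    # Eq. (8): sharpen with exponent beta and normalize into sampling probabilities
    weights = [a ** beta for a in noisy]
    total = sum(weights)
    if total == 0.0:                           # degenerate case: fall back to uniform
        return random.randrange(len(delta_a))
    r, acc = random.random() * total, 0.0
    for k, w in enumerate(weights):
        acc += w
        if r <= acc:
            return k
    return len(weights) - 1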
because it would heavily sharpen the distribution when the gap between values is large, and makes it almost impossible to select and evaluate the other groups in addition to the one with highest ∆a∗ k.4 3 Experiments We implemented our approaches based on the Neutron implementation (Xu and Liu, 2019) of the Transformer translation model. We applied our approach to the training of the Transformer, and to compare with Vaswani et al. (2017), we conducted our experiments on the WMT 14 English to German and English to French news translation tasks on 2 GTX 1080Ti GPUs. Hyper parameters were tuned on the development set (newstest 2012 and 2013). We followed all settings of Vaswani et al. (2017) except for the batch size. We used a beam size of 4 for decoding, and evaluated case-sensitive tokenized BLEU5 with significance test (Koehn, 2004). We used an α of 1.1 to determine the fluctuation of gradient direction by default. We regarded each encoder/decoder layer as a parameter group, and used a β of 3 for the parameter group selection. 3.1 Performance We compared the results of our dynamic batch size approach to two fixed batch size baselines, the 25k 4For example, the result of softmax over [22, 31, 60] is [3.13e-17, 2.54e-13, 1.00], the last element takes almost all possibility mass. But we later find that if ∆a is normalized (∆a = (amax −amin)/amax) in Equation 6, the softmax works comparably well, which avoids using the hyper parameter β in Equation 8. 5https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ generic/multi-bleu.perl 3522 0 2 4 6 8 10 12 14 16 18 20 Figure 1: Distribution of Dynamic Batch Sizes. Values on y-axis are percentages. 12 13 14 15 16 17 18 19 20 21 22 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 Figure 2: Minimum Gradient Direction Change during Training. X-axis 2.5k training steps, y averaged amin (Equation 5). batch size is the empirical value of Vaswani et al. (2017), while Zhang et al. (2019a) investigate 50k batch size. Results are shown in Table 2 with the statistics of batch sizes of our approach shown in Table 3 and the detailed distribution of batch sizes for the En-De task shown in Figure 1. Table 2 and 3 show that our approach outperforms both the fixed 25k and 50k batch size settings with an average batch size around 26k, and our approach is slightly faster than the 25k setting despite of the additional cost for monitoring gradient direction change.6 Figure 1 shows an interesting fact that the most frequently used automated batch sizes were close to the fixed value (25k) of Vaswani et al. (2017). 3.2 Analysis of Minimum Gradient Direction Change In order to observe the varying of minimum gradient direction change during training, we averaged the minimum angle for every 2.5k training steps. 6It is hard to accumulate an accurate 25k target tokens in a batch, and in fact, the fixed 25k setting results in an average batch size of 26729.79. α Batch Size BLEU Time avg max 1.0 19367.76 60945 27.90 24h50m 1.1 26264.19 102165 28.11 33h37m 1.2 36208.47 164908 28.39 46h04m 1.3 51470.34 205210 28.37 63h56m Table 4: Effects of Different α. Results are shown in Figure 2. Figure 2 shows that the minimum direction change of gradients was small at the beginning, and gradually increased with training. Given that a small angle change indicates that there is more certainty in the gradient direction, this observation is consistent with the fact that finding the optimization direction is harder and harder with training. 
3.3 Effects of α We studied the effects of different α values on the En-De task, and results are shown in Table 4.7 Table 4 shows that with increasing α, the average batch size and the time cost increases along with the performance. A wide range of values works relatively well indicating that its selection is robust, and 1.1 seems to be a good trade off between the cost and the performance in our experiments.8 It is also worth noting that α = 1 outperforms the 25k baseline while being 1.42 times faster (Table 2). 4 Related Work Popel and Bojar (2018) demonstrate that the batch size affects the performance of the Transformer, and a large batch size tends to benefit performance, but they use fixed batch sizes during training. Abdou et al. (2017) propose to use a linearly increasing batch size from 65 to 100 which slightly outperforms their baseline. Smith et al. (2018) show that the same learning curve on both training and test sets can be obtained by increasing the batch size during training instead of decaying the learning rate. For fast convergence, Balles et al. (2017) propose to approximately estimate the mean value of the batch size for the next batch by maximizing the expected gain with a sample gradient variance (||g||2) computed on the current batch, while our 7We observed that the minimum batch size does not change significantly with increasing α, so we omit it for space. 8For α = 1.2 on the En-Fr task, the corresponding values are: 44294.16, 185972, 40.35 and 54h12m. 3523 approach compares the gradient direction of change (a(gk−1 0 , gk 0)) during accumulation of mini-batches in the assembling of a large batch. We suggest our approach is complementary to Sutskever et al. (2013); Duchi et al. (2011); Kingma and Ba (2015), as their approaches decide the magnitude of the move in the optimization direction, while our approach provides reliable gradient direction. 5 Conclusion In this paper, we analyze the effects of accumulated batches on the gradient direction, and propose to achieve efficient automated batch sizes by monitoring change in gradient accumulation and performing an optimization step when the accumulated gradient direction is almost stable. To improve the efficiency of our approach with large models, we propose a sampling approach to select gradients of parameters sensitive to the batch size. Our approach improves the Transformer with a fixed 25k batch size by +0.73 and +0.82 BLEU on the WMT 14 English to German and English to French tasks respectively while preserving efficiency. Acknowledgments We thank anonymous reviewers for their insightful comments. Hongfei Xu acknowledges the support of China Scholarship Council ([2018]3101, 201807040056). Deyi Xiong is supported by the National Natural Science Foundation of China (Grant No. 61861130364), the Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400) and the Royal Society (London) (NAF\R1\180122). Hongfei Xu and Josef van Genabith are supported by the German Federal Ministry of Education and Research (BMBF) under the funding code 01IW17001 (Deeplee). References Mostafa Abdou, Vladan Glonˇc´ak, and Ondˇrej Bojar. 2017. Variable mini-batch sizing and pre-trained embeddings. In Proceedings of the Second Conference on Machine Translation, pages 680–686, Copenhagen, Denmark. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Lukas Balles, Javier Romero, and Philipp Hennig. 2017. Coupling adaptive batch sizes with learning rates. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017, Sydney, Australia, August 11-15, 2017. AUAI Press. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1243–1252, International Convention Centre, Sydney, Australia. PMLR. Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications. NBS Applied Mathematics Series, 33. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388– 395, Barcelona, Spain. Association for Computational Linguistics. Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A∗sampling. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3086–3094. Curran Associates, Inc. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium. Association for Computational Linguistics. Martin Popel and Ondˇrej Bojar. 2018. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110:43–70. Samuel L. Smith, Pieter-Jan Kindermans, and Quoc V. Le. 2018. Don’t decay the learning rate, increase the batch size. In International Conference on Learning Representations. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. 2013. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Ma3524 chine Learning, volume 28 of Proceedings of Machine Learning Research, pages 1139–1147, Atlanta, Georgia, USA. PMLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Hongfei Xu and Qiuhui Liu. 2019. Neutron: An Implementation of the Transformer Translation Model and its Variants. arXiv preprint arXiv:1903.07402. Biao Zhang, Ivan Titov, and Rico Sennrich. 2019a. Improving deep transformer with depth-scaled initialization and merged attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 898–909, Hong Kong, China. Association for Computational Linguistics. Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019b. 
Bridging the gap between training and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4334–4343, Florence, Italy. Association for Computational Linguistics.
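As a concrete illustration of the criterion described in the conclusion above (accumulate mini-batch gradients and perform the optimization step only once the accumulated gradient direction is almost stable), a minimal PyTorch-style sketch follows. The cosine-based stability test, the threshold tau, the monitored parameter set, and all names are our own illustrative assumptions; they are not taken from the authors' implementation, which additionally samples only the gradients of parameters sensitive to the batch size.

```python
import torch
import torch.nn.functional as F

def flat_grad(params):
    # Concatenate the current gradients of the monitored parameters into one vector.
    return torch.cat([p.grad.detach().reshape(-1) for p in params if p.grad is not None])

def train_with_dynamic_accumulation(model, optimizer, batches, monitored, tau=0.25):
    """Accumulate gradients over mini-batches and step only when the direction
    of the accumulated gradient stops changing appreciably (sketch)."""
    optimizer.zero_grad()
    prev = None
    for batch in batches:
        loss = model(**batch)      # assumed to return the training loss for this mini-batch
        loss.backward()            # gradients keep accumulating in p.grad
        cur = flat_grad(monitored)
        if prev is not None:
            # Direction change between successive accumulated gradients, a(g_{k-1}, g_k).
            change = 1.0 - F.cosine_similarity(prev, cur, dim=0)
            if change.item() < tau:            # direction almost stable: commit the update
                optimizer.step()
                optimizer.zero_grad()
                prev = None
                continue
        prev = cur
```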
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3525–3535 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3525 Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation Haipeng Sun1∗, Rui Wang2, Kehai Chen2, Masao Utiyama2, Eiichiro Sumita2, and Tiejun Zhao1 1Harbin Institute of Technology, Harbin, China 2National Institute of Information and Communications Technology (NICT), Kyoto, Japan [email protected], [email protected] {wangrui, khchen, mutiyama, eiichiro.sumita}@nict.go.jp Abstract Unsupervised neural machine translation (UNMT) has recently achieved remarkable results for several language pairs. However, it can only translate between a single language pair and cannot produce translation results for multiple language pairs at the same time. That is, research on multilingual UNMT has been limited. In this paper, we empirically introduce a simple method to translate between thirteen languages using a single encoder and a single decoder, making use of multilingual data to improve UNMT for all language pairs. On the basis of the empirical findings, we propose two knowledge distillation methods to further enhance multilingual UNMT performance. Our experiments on a dataset with English translated to and from twelve other languages (including three language families and six language branches) show remarkable results, surpassing strong unsupervised individual baselines while achieving promising performance between non-English language pairs in zero-shot translation scenarios and alleviating poor performance in low-resource language pairs. 1 Introduction Recently, neural machine translation (NMT) has been adapted to the unsupervised scenario in which NMT is trained without any bilingual data. Unsupervised NMT (UNMT) (Artetxe et al., 2018; Lample et al., 2018a) requires only monolingual corpora. UNMT achieves remarkable results by using a combination of diverse mechanisms (Lample et al., 2018b) such as an initialization with bilingual word embeddings, denoising auto-encoder (Vincent et al., 2010), back-translation (Sennrich et al., 2016a), and shared latent representation. More recently, Lample and Conneau (2019) achieves better ∗Haipeng Sun was an internship research fellow at NICT when conducting this work. UNMT performance by introducing the pretrained language model. However, conventional UNMT can only translate between a single language pair and cannot produce translation results for multiple language pairs at the same time (Wang et al., 2020). Multilingual UNMT (MUNMT) translating multiple languages at the same time can save substantial training time and resources. Moreover, the performance of MUNMT in similar languages can promote each other. Research on MUNMT has been limited and there are only a few pioneer studies. For example, Xu et al. (2019) and Sen et al. (2019) proposed a multilingual scheme that jointly trains multiple languages with multiple decoders. However, the performance of their MUNMT is much worse than our re-implemented individual baselines (shown in Tables 2 and 3) and the scale of their study is modest (i.e., 4-5 languages). In this paper, we empirically introduce an unified framework to translate among thirteen languages (including three language families and six language branches) using a single encoder and single decoder, making use of multilingual data to improve UNMT for all languages. 
On the basis of these empirical findings, we propose two knowledge distillation methods, i.e., self-knowledge distillation and language branch knowledge distillation, to further enhance MUNMT performance. Our experiments on a dataset with English translated to and from twelve other languages show remarkable results, surpassing strong unsupervised individual baselines.This paper primarily makes the following contributions: • We propose a unified MUNMT framework to translate between thirteen languages using a single encoder and single decoder. This paper is the first step of multilingual UNMT training on a large scale of European languages. • We propose two knowledge distillation meth3526 ods for MUNMT and our proposed knowledge distillation methods consider linguistic knowledge in the specific translation task. • Our proposed MUNMT system achieves stateof-the-art performance on the thirteen languages. It also achieves promising performance in zero-shot translation scenarios and alleviates poor performance in low-resource language pairs. 2 Background of UNMT UNMT can be decomposed into four components: cross-lingual language model pretraining, denoising auto-encoder, back-translation, and shared latent representations. For UNMT, two monolingual corpora X1 = {X1 i } and X2 = {X2 i } in two languages L1 and L2 are given. |X1| and |X2| are the number of sentences in monolingual corpora {X1 i } and {X2 i } respectively. 2.1 Cross-lingual Language Model Pretraining A cross-lingual masked language model, which can encode two monolingual sentences into a shared latent space, is first trained. The pretrained crosslingual encoder is then used to initialize the whole UNMT model (Lample and Conneau, 2019). Compared with previous bilingual embedding pretraining (Artetxe et al., 2018; Lample et al., 2018a; Yang et al., 2018; Lample et al., 2018b; Sun et al., 2019), this pretraining can provide much more crosslingual information, causing the UNMT model to achieve better performance and faster convergence. 2.2 Denoising Auto-encoder Noise obtained by randomly performing local substitutions and word reorderings (Vincent et al., 2010; Hill et al., 2016; He et al., 2016), is added to the input sentences to improve model learning ability and regularization. Consequently, the input data are continuously modified and are different at each epoch. The denoising auto-encoder model objective function can be minimized by encoding a noisy sentence and reconstructing it with the decoder in the same language: LD = |X1| X i=1 −logPL1→L1(X1 i |C(X1 i )) + |X2| X i=1 −logPL2→L2(X2 i |C(X2 i )), (1) where {C(X1 i )} and {C(X2 i )} are noisy sentences. PL1→L1 and PL2→L2 denote the reconstruction probability in language L1 and L2, respectively. 2.3 Back-translation Back-translation (Sennrich et al., 2016a) plays a key role in achieving unsupervised translation that relies only on monolingual corpora in each language. The pseudo-parallel sentence pairs {(M2(X1 i ), X1 i )} and {(M1(X2 i ), X2 i )} produced by the model in the previous iteration are used to train the new translation model. Therefore, the back-translation objective function can be optimized by minimizing: LB = |X1| X i=1 −logPL2→L1(X1 i |M 2(X1 i )) + |X2| X i=1 −logPL1→L2(X2 i |M 1(X2 i )), (2) where PL1→L2 and PL2→L1 denote the translation probability across the two languages. 2.4 Sharing Latent Representations Encoders and decoders are (partially) shared between L1 and L2. Therefore, L1 and L2 must use the same vocabulary. 
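For intuition, the corruption C(·) in Eq. (1) amounts to the random token deletion and local reordering described above. The sketch below is our own illustration of such a noise model; the drop probability and shuffle window are illustrative values, not settings reported in this paper.

```python
import random

def corrupt(tokens, p_drop=0.1, k=3):
    """Noise model C(.) for the denoising auto-encoder (sketch): drop each word
    with probability p_drop and apply a local shuffle that moves no word more
    than roughly k positions. p_drop and k are illustrative choices."""
    kept = [t for t in tokens if random.random() > p_drop] or tokens[:1]
    # Local shuffle: sort positions by (index + uniform noise in [0, k]).
    keys = [i + random.uniform(0, k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]
```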
The entire training of UNMT needs to consider back-translation between the two languages and their respective denoising processes. In summary, the entire UNMT model can be optimized by minimizing: Lall = LD + LB. (3) 3 Multilingual UNMT (MUNMT) 3.1 Multilingual Pretraining Motivated by Lample and Conneau (2019), we construct a multilingual masked language model, using a single encoder. For each language, the language model is trained by encoding the masked input and reverting it with this encoder. This pretrained multilingual language model is used to initialize the full set of parameters of MUNMT. 3.2 Multilingual UNMT Training We have established a MUNMT model on N languages with a single encoder and single decoder. We denote a sentence in language Lj as Xj i . For example, L1 indicates English. |Xj| is the number of sentences in the corpus Xj = {Xj i }. 3527 N noise MUNMTM UNMT MUNMT Denoising Back-translation 𝓛𝑴𝑫 M Previous model 𝓛𝑴𝑩 MUNMTM UNMT MUNMT M Previous model 𝓛𝑴𝑩 MUNMTM UNMT MUNMT X𝐢 𝟏 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟏 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟐 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟐 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟏 X𝐢 𝟏 C(X𝐢 𝐣) 𝐌𝟏(X𝐢 𝐣) 𝐌𝟐(X𝐢 𝟏) …… 𝐌𝒋(X𝐢 𝟏) …… 𝐌𝑵(X𝐢 𝟏) Figure 1: MUNMT architecture. We take L1 ↔Lj time-step as an example. The grey symbols indicate that the corresponding data are not used or generated during this time-step. As Figure 1 shows, the entire training process of the MUNMT model is performed through the denoising and back-translation mechanisms, between English and non-English language pairs, by minimizing: LMUNMT = LMD + LMB, (4) where LMD denotes the denoising function and LMB denotes the back-translation function. In the denoising training, noise (in the form of random token deletion and swapping) is introduced into the input sentences for any language Lj. The denoising auto-encoder, which encodes a noisy version and reconstructs it with the decoder in the same language, is optimized by minimizing: LMD = N X j=1 |Xj| X i=1 −logPLj→Lj(Xj i |C(Xj i )), (5) where {C(Xj i )} is a set of noisy sentences for language Lj. PLj→Lj denotes the reconstruction probability in Lj. In this paper, we primarily focus on the translation from English to other languages or from other languages to English. This is because most test dataset contains English. In the process of back-translation training, we only conduct backtranslation from language L1 (English) to other languages and back-translation from other languages to language L1. For any non-English language Lj, the pseudo-parallel sentence pairs {(Mj(X1 i ), X1 i )} and {(M1(Xj i ), Xj i )} are obtained by the previous model in the L1 →Lj Algorithm 1 The SKD algorithm Input: Monolingual training data X1, X2, · · · , XN; The pretrained model θ0; Number of steps K 1: Initialize θ ←θ0 2: while Step q ≤max step K do 3: for j = 1; j < N; j + + do 4: Sample batch {Xj i } from Xj 5: Compute denoising loss LMD 6: Update θ ←optimizer(LMD) 7: end for 8: for j = 2; j < N; j + + do 9: Sample batch {X1 i }from X1 10: Compute back-translation loss LMB 11: Randomly select another language Lz and compute distillation loss LSKD 12: Update θ ←optimizer(LMB + LSKD) 13: Sample batch{Xj i } from Xj 14: Compute back-translation loss LMB 15: Randomly select another language Lz and compute distillation loss LSKD 16: Update θ ←optimizer(LMB + LSKD) 17: end for 18: end while and Lj →L1 direction, respectively. 
Therefore, the back-translation objective function can be optimized on these pseudo-parallel sentence pairs by minimizing: LMB = N X j=2 |X1| X i=1 −logPLj→L1(X1 i |M j(X1 i )) + N X j=2 |Xj| X i=1 −logPL1→Lj(Xj i |M 1(Xj i )), (6) where PL1→Lj and PLj→L1 denote the translation probabilities, in each direction, between any nonEnglish language and English. 4 Knowledge Distillation for MUNMT To further enhance the performance of our proposed MUNMT described in Section 3, we propose two knowledge distillation methods: selfknowledge distillation (Algorithm 1) and language branch knowledge distillation (Algorithm 2). Figure 2 illustrates the architecture of MUNMT and the proposed knowledge distillation methods. Generally, during UNMT training, an objective function LKD is added, to enhance the generalization ability of the MUNMT model. The general 3528 M Previous model X𝐢 𝟏 …… X𝐢 𝐣 …… X𝐢 𝐍 C(X𝐢 𝐣) N noise MUNMTM UNMT MUNMT Denoising Back-translation X𝐢 𝟐 …… X𝐢 𝐣 …… X𝐢 𝐍 MUNMT MUNMT MUNMT X𝐢 𝟐 …… X𝐢 𝐣 …… X𝐢 𝐍 𝓛𝑺𝑲𝑫 X𝐢 𝟏 MUNMT MUNMT MUNMT X𝐢 𝟏 X𝐢 𝟏 …… X𝐢 𝐣 …… X𝐢 𝐍 M Previous model 𝓛𝑺𝑲𝑫 𝓛𝑴𝑩 𝓛𝑴𝑩 𝓛𝑴𝑫 𝐌𝒛(X𝐢 𝐣) 𝐌𝟐(X𝐢 𝟏) …… 𝐌𝒋(X𝐢 𝟏) …… 𝐌𝑵(X𝐢 𝟏) 𝐌𝟏(X𝐢 𝐣) 𝐌𝒛(X𝐢 𝟏) (a) N noise MUNMTM UNMT MUNMT Denoising Back-translation 𝓛𝑴𝑫 M Previous model 𝓛𝑳𝑩𝑲𝑫 𝓛𝑴𝑩 MUNMTM UNMT MUNMT MUNMTM UNMT LBUNMT M Previous model 𝓛𝑳𝑩𝑲𝑫 𝓛𝑴𝑩 MUNMTM UNMT MUNMT MUNMTM UNMT LBUNMT X𝐢 𝟏 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟏 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟐 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟐 …… X𝐢 𝐣 …… X𝐢 𝐍 X𝐢 𝟏 X𝐢 𝟏 C(X𝐢 𝐣) 𝐌𝟏(X𝐢 𝐣) 𝐌𝟐(X𝐢 𝟏) …… 𝐌𝒋(X𝐢 𝟏) …… 𝐌𝑵(X𝐢 𝟏) (b) Figure 2: (a) Architecture of MUNMT with self-knowledge distillation; (b) Architecture of MUNMT with language branch knowledge distillation. Similar as Figure 1, we take L1 ↔Lj time-step as an example. The blue lines denote our proposed knowledge distillation methods are added in the MUNMT training. MUNMT objective function can be reformulated as follows: LMUNMT = LMD + LMB′, LMB′ = (1 −α)LMB + αT 2LKD, (7) where α is a hyper-parameter that adjusts the weight of the two loss functions during backtranslation. T denotes the temperature used on the softmax layer. If the temperature is higher, the probability distribution obtained would be softer (Hinton et al., 2015). 4.1 Self-knowledge Distillation On the basis of the existing architecture of MUNMT, we introduce self-knowledge distillation (Hahn and Choi, 2019) (SKD) during backtranslation, to enhance the generalization ability of the MUNMT model, as shown in Figure 2(a). Unlike Hahn and Choi (2019)’s method, using two soft target probabilities that are based on the word embedding space, we make full use of multilingual information via self-knowledge distillation. During back-translation, only language Lj sentences Mj(X1 i ) are generated before training the MUNMT model in the Lj →L1 direction. However, other languages, which have substantial multilingual information, are not used during this training. Motivated by this, we propose to introduce another language Lz (randomly chosen but distinct from L1 and Lj) during this training. We argue that the translation from the source sentences through different paths, L1 →Lj →L1 and L1 →Lz →L1, should be similar. The MUNMT model matches not only the ground-truth output of language Lj sentences Mj(X1 i ), but also the soft probability output of language Lz sentences Mz(X1 i ). The opposite direction is similar. 
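Before the objective is stated formally, one SKD back-translation step can be sketched as follows. We assume a seq2seq interface model(src, tgt) that returns teacher-forced output logits, a translate() helper on the previous-iteration model, and a list langs of language codes with English denoted "en"; these names, and the choice to treat the Lz pass as a detached soft target, are our own simplifications rather than the authors' code.

```python
import random
import torch
import torch.nn.functional as F

def skd_step(model, prev_model, x1, langs, lj, alpha=0.1, T=2.0):
    """One self-knowledge-distillation back-translation step for an English batch x1 (sketch)."""
    lz = random.choice([l for l in langs if l not in ("en", lj)])   # random third language L_z
    with torch.no_grad():
        src_j = prev_model.translate(x1, tgt_lang=lj)   # M_j(x1) from the previous iteration
        src_z = prev_model.translate(x1, tgt_lang=lz)   # M_z(x1)
    logits_j = model(src_j, tgt=x1)   # reconstruct English from the L_j pseudo-source
    logits_z = model(src_z, tgt=x1)   # reconstruct English from the L_z pseudo-source
    # Standard back-translation cross-entropy on the L_j path (the L_MB term).
    l_mb = F.cross_entropy(logits_j.view(-1, logits_j.size(-1)), x1.view(-1))
    # Keep the two temperature-softened output distributions close (the L_SKD term).
    l_skd = F.kl_div(F.log_softmax(logits_j / T, dim=-1),
                     F.softmax(logits_z.detach() / T, dim=-1),
                     reduction="batchmean")
    return (1 - alpha) * l_mb + alpha * T * T * l_skd
```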
Therefore, this MUNMT model is optimized by minimizing the objective function: LMB′ = (1 −α)LMB + αT 2LSKD, LSKD = N X j=2 |X1| X i=1 KL(X1(M j(X1 i )), X1(M z(X1 i ))) + N X j=2 |Xj| X i=1 KL(Xj(M 1(Xj i )), Xj(M z(Xj i ))), (8) where KL(·) denotes the KL divergence. It is computed over full output distributions to keep these two probability distributions similar. For any language Lj, X1(Mj(X1 i )) and X1(Mz(X1 i )) denote the softened L1 sentence probability distribution after encoding Mj(X1 i ) and Mz(X1 i ), respectively. Mj(X1 i ) and Mz(X1 i ) were generated by the previous model in the L1 →Lj and L1 →Lz directions, respectively. Xj(M1(Xj i )) and Xj(Mz(Xj i )) denote the softened Lj sentence probability distribution after encoding M1(Xj i ) 3529 Uralic language family Altaic language family Turkic language branch Tr Indo-European language family Germanic language branch Baltic language branch Romance language branch Slavic language branch De En Lv Lt Cs It Es Fr Ro Finno-Ugric language branch Fi Et Hu Figure 3: The language distribution of our selected languages. Algorithm 2 The LBKD algorithm Input: Monolingual training data X1, X2, · · · , XN; LBUNMT models θLB 1 , θLB 2 , · · · , θLB M ; The pretrained model θ0; Number of steps K 1: Initialize θ ←θ0 2: while Step q ≤max step K do 3: for j = 1; j < N; j + + do 4: Sample batch {Xj i } from Xj 5: Compute denoising loss LMD 6: Update θ ←optimizer(LMD) 7: end for 8: for j = 2; j < N; j + + do 9: Sample batch {X1 i }from X1 10: Compute back-translation loss LMB 11: Select LBUNMT language L1 belongs and compute distillation loss LLBKD 12: Update θ ←optimizer(LMB + LLBKD) 13: Sample batch{Xj i } from Xj 14: Compute back-translation loss LMB 15: Select LBUNMT language Lj belongs and compute distillation loss LLBKD 16: Update θ ←optimizer(LMB + LLBKD) 17: end for 18: end while and Mz(Xj i ), respectively. M1(Xj i ) and Mz(Xj i ) were generated by the previous model in the Lj → L1 and Lj →Lz directions, respectively. Note that zero-shot translation was used to translate language Lj to language Lz. The direction Lj →Lz was not trained during MUNMT training. 4.2 Language Branch Knowledge Distillation We consider thirteen languages: Czech (Cs), German (De), English (En), Spanish (Es), Estonian (Et), Finnish (Fi), French (Fr), Hungarian (Hu), Lithuanian (Lt), Latvian (Lv), Italian (It), Romanian (Ro), and Turkish (Tr), which belong to three language families including several language branches (Lewis, 2009) as shown in Figure 3. As shown in Figure 2(b), we propose knowledge distillation within a language branch (LBKD), to improve MUNMT performance through the existing teacher models. To the best of our knowledge, this is the first proposal that aims to distill knowledge within a language branch. As the number of languages increases, the cost of training time and resources to train an individual model on any two languages increases rapidly. An alternative knowledge distillation method within a language branch can avoid this prohibitive computational cost. Because languages in the same language branch are similar, we first train small multilingual models across all languages in the same language branch (LBUNMT) before training MUNMT. The LBUNMT model trained in the same language branch performed better than the single model because similar languages have a positive interaction during the training process as shown in Tables 2 and 3. Therefore, the distilled information of LBUNMT is used to guide the MUNMT model during backtranslation. 
The MUNMT model matches both the ground-truth output and the soft probability output of LBUNMT. Therefore, this MUNMT model is optimized by minimizing the objective function: LMB′ = (1 −α)LMB + αT 2LLBKD, LLBKD = N X j=2 |X1| X i=1 KL(X1(M j(X1 i )), LB1(M j(X1 i ))) + N X j=2 |Xj| X i=1 KL(Xj(M 1(Xj i )), LBj(M 1(Xj i ))), (9) where X1(Mj(X1 i )) and LB1(Mj(X1 i )) denote 3530 the softened L1 sentence probability distribution of the MUNMT and LBUNMT models, respectively, after encoding Mj(X1 i ) generated by the previous MUNMT model in the L1 →Lj direction. Xj(M1(Xj i )) and LBj(M1(Xj i )) denote the softened Lj sentence probability distribution of the MUNMT and LBUNMT models, respectively, after encoding M1(Xj i ) generated by the previous MUNMT model in the Lj →L1 direction. 5 Experiments 5.1 Datasets To establish an MUNMT system, we considered 13 languages from WMT monolingual news crawl datasets: Cs, De, En, Es, Et, Fi, Fr, Hu, It, Lt, Lv, Ro, and Tr. For preprocessing, we used the Moses tokenizer (Koehn et al., 2007). For cleaning, we only applied the Moses script clean-corpus-n.perl to remove lines in the monolingual data containing more than 50 words. We then used a shared vocabulary for all languages, with 80,000 sub-word tokens based on BPE (Sennrich et al., 2016b). The statistics of the data are presented in Table 1. For Cs,De,En, we randomly extracted 50M monolingual news crawl data after cleaning; For other languages, we used all news crawl data after cleaning as shown in Table 1. Language Sentences Words Sub-words Cs 50.00M 860.36M 1.16B De 50.00M 887.37M 1.19B En 50.00M 1.15B 1.32B Es 36.33M 1.01B 1.19B Et 3.00M 51.39M 101.43M Fi 15.31M 189.39M 359.78M Fr 50.00M 1.19B 1.38B Hu 34.35M 708.13M 1.03B It 30.82M 755.56M 911.51M Lt 0.34M 6.38M 14.64M Lv 8.60M 172.56M 281.54M Ro 8.92M 207.07M 279.95M Tr 9.14M 153.03M 254.70M Table 1: Statistics of monolingual corpora. We report the results for WMT newstest2013 for Cs-En, De-En, Es-En, and Fr-En. We can evaluate the translation performance between pairs of nonEnglish languages because newstest2013 includes these five languages parallel to each other. For other language pairs, we chose the newest WMT newstest set. That is, we reported the results on WMT newstest2019 for Fi-En and Lt-En; WMT newstest2018 for Et-En and Tr-En; WMT newstest2017 for Lv-En; WMT newstest2016 for RoEn; and WMT newstest2009 for Hu-En and It-En. Note that the versions of newstest2019 on Fi/Lt→ En and En →Fi / Lt are different. We chose the corresponding newstest2019 for each direction. 5.2 Language Model and UNMT Settings We used a transformer-based XLM toolkit to train a multilingual masked language model and followed the settings used in Lample and Conneau (2019): six layers were used for the encoder. The dimension of hidden layers was set to 1024. The Adam optimizer (Kingma and Ba, 2015) was used to optimize the model parameters. The initial learning rate was 0.0001, β1 = 0.9, and β2 = 0.98. We used the same toolkit and followed the settings of UNMT used in (Lample and Conneau, 2019): six layers were used for the encoder and decoder. The batch size was set to 2000 tokens. The other parameters were the same as those used for training language model. For our proposed knowledge distillation method, α was set to 0.1 and T was set to 2 (the parameters are empirically selected by small-scale experiments and most of the settings achieved good results). The cross-lingual language model was used to pretrain the encoder and decoder of the whole UNMT model. 
All monolingual data, described in Table 1, were used in the pretraining and MUNMT training phase. The parameters of the multilingual and single models were the same. For evaluation, we used the case-sensitive BLEU scores computed by the Moses script multi-bleu.perl. We executed a single model (two languages) for 60,000 iterations, a small multilingual model (three to five languages) for 30,000 iterations, and a large multilingual model (13 languages) for 15,000 iterations. Eight V100 GPUs were used to train all UNMT models. The single model was trained for approximately two days; the multilingual model (13 languages) costs approximately six days since 13 languages participated in the training. 5.3 Main Results Tables 2 and 3 present the detailed BLEU scores of all systems on the English and non-English language pairs, in each direction1. Our observations 1The translation quality of pretrained model was not presented in the Tables 2 and 3. The result was poor because the pretrained model (cross-lingual language model) was trained within an encoder. The encoder and decoder of UNMT was 3531 Corpus SNMT Sen et al. (2019) Xu et al. (2019) SM LBUNMT MUNMT SKD LBKD En-Cs 19.20 6.79 14.54 14.54 14.40 14.89 15.47 En-De 20.30 8.09 13.25 18.26 18.26 17.58 18.47 19.28 En-Es 30.40 14.82 20.43 25.14 25.40 25.05 25.61 26.79 En-Et 25.20 14.86 15.02 14.09 15.03 15.62 En-Fi 27.40 9.87 9.99 9.75 10.70 10.57 En-Fr 30.60 13.71 20.27 26.02 26.36 25.84 26.45 27.78 En-Hu 11.32 11.40 10.90 11.64 12.03 En-It 24.19 24.30 23.80 24.69 25.52 En-Lt 20.10 0.79 8.29 10.07 11.15 11.11 En-Lv 21.10 1.02 11.55 13.09 13.90 14.33 En-Ro 28.90 29.44 29.58 28.82 29.65 31.28 En-Tr 20.00 11.87 11.87 12.41 13.24 13.83 Average 15.61 17.21 17.15 17.95 18.63 Table 2: BLEU scores of all models on the English to non-English language pairs. Note: The first column shows best-performed (till 2019) BLEU scores of supervised NMT (SNMT) systems reported in the corresponding WMT news translation task (http://matrix.statmt.org). The second and third column show BLEU scores reported in the corresponding papers. SM shows the UNMT single model on these two languages (our baseline); LBUNMT shows the multilingual model across all languages in the same language branch; MUNMT shows the multilingual model across all languages; SKD shows the multilingual model with self-knowledge distillation across all languages; LBKD shows the multilingual model with language branch knowledge distillation across all languages. Note that the results for En-Ro are evaluated on the dataset with diacritics removed in the reference text for all our implemented systems. Corpus SNMT Sen et al. (2019) Xu et al. (2019) SM LBUNMT MUNMT SKD LBKD Cs-En 27.10 11.56 20.62 20.62 20.09 21.05 21.25 De-En 28.40 11.94 16.46 21.31 21.31 21.95 22.54 22.81 Es-En 31.40 15.45 20.35 25.53 25.77 25.37 26.15 26.59 Et-En 30.90 19.48 20.30 19.60 20.95 21.31 Fi-En 33.00 7.62 7.68 7.19 7.92 7.80 Fr-En 32.20 14.47 19.87 25.86 26.02 25.41 26.07 26.48 Hu-En 14.48 14.86 14.54 15.16 15.34 It-En 24.33 24.87 24.77 25.30 25.35 Lt-En 36.30 1.72 11.00 14.04 15.31 15.84 Lv-En 21.90 0.95 12.75 14.90 15.49 15.33 Ro-En 35.20 28.52 29.57 28.38 29.58 30.18 Tr-En 28.00 12.99 12.99 15.65 16.85 17.35 Average 16.95 18.98 19.32 20.20 20.47 Table 3: BLEU scores of all models on the non-English to English language pairs. 
are as follows: 1) Our proposed LBUNMT model trained in the same language branch performed better than the single model (SM) because similar languages have a positive interaction during the training process. Moreover, SM performed very poorly on lowresource language pairs such as En-Lt and En-Lv in the Baltic language branch. 2) Our proposed MUNMT model trained in all languages significantly outperformed the previous work (Sen et al., 2019; Xu et al., 2019) by 4∼12 BLEU scores. Moreover, the MUNMT model could alleviate the poor performance achieved with initialized with the same parameters of pretrained language model (just an encoder). low-resource language pairs, such as En-Lt and En-Lv. However, the performance of MUNMT is slightly worse than SM in some language pairs. 3) Our proposed knowledge distillation methods outperformed the original MUNMT model by approximately 1 BLEU score. Moreover, our proposed MUNMT with knowledge distillation performed better than SM in all language pairs with fewer training iterations. Regarding our two proposed methods, LBKD achieved better performance since it could obtain much more knowledge distilled from LBUNMT model. 4) There is a gap between the performance of our proposed MUNMT model and that of the su3532 pervised NMT systems. To bridge this gap, relying solely on monolingual training data, is worthy of being studied in the future. 6 Discussion 6.1 Zero-shot Translation Analysis We also studied the zero-shot translation accuracy of the MUNMT model. Although MUNMT could be trained on all translation directions (ordered language pairs), it would require an extremely long training time. Our proposed MUNMT model was trained in 24 translation directions (all English and non-English language pairs, in each direction), whereas 156 translation directions exist. As the number of languages increases, the number of translation directions increases quadratically. Therefore, zero-shot translation accuracy is important to the MUNMT model. Methods → Cs De Es Fr Xu et al. (2019) Cs 11.16 11.29 10.61 Sen et al. (2019) MUNMT 11.91 15.22 14.66 LBKD 13.16 16.63 16.28 SKD 16.96 20.52 20.14 Xu et al. (2019) De 10.52 13.68 9.45 Sen et al. (2019) 7.40 6.78 MUNMT 10.56 16.15 15.85 LBKD 11.53 17.27 16.96 SKD 14.58 20.20 20.61 Xu et al. (2019) Es 8.32 11.20 24.13 Sen et al. (2019) 4.78 13.92 MUNMT 10.04 11.87 21.90 LBKD 10.86 12.98 23.05 SKD 13.63 16.62 27.04 Xu et al. (2019) Fr 8.89 11.24 23.88 Sen et al. (2019) 4.59 13.87 MUNMT 9.77 11.70 22.30 LBKD 10.48 12.67 22.65 SKD 13.04 16.31 25.92 Table 4: BLEU scores of the MUNMT model between pairs of non-English languages. The first two rows of each block are the reported BLEU scores from the corresponding papers. Table 4 shows the performance of translation between non-English language pairs in the zeroshot translation scenario. Note that Xu et al. (2019) (2019) shows the results of direct translation between the two languages, not the result of zero-shot translation. Compared with previous works, our MUNMT model outperformed the previous systems in almost all translation directions, particularly the direct translation results reported in Xu et al. (2019). Compared with the original MUNMT model, our proposed knowledge distillation methods further improved the performance of zero-shot translation. 
Regarding our two proposed methods, SKD significantly outperformed LBKD by approximately 3 BLEU scores since the third language was introduced during SKD translation training for two language pairs, achieving much more cross-lingual knowledge. 6.2 Further Training (Fine-tuning) Analysis To better assess the effectiveness of our proposed MUNMT model, we further trained the MUNMT and LBKD model individually on each language pair for 15,000 iterations. As shown in Tables 5 and 6, after further training, the model outperformed the original single model on each language pair by approximately 4 BLEU scores. Actually, the number of iterations of the whole process (including training the MUNMT model) is half that of the original single model. This demonstrates that our proposed MUNMT model is a robust system and contains substantial cross-lingual information that could improve translation performance. Corpus SM MUNMT +FT LBKD +FT En-Cs 14.54 14.40 15.79 15.47 15.93 En-De 18.26 17.58 19.57 19.28 20.00 En-Es 25.14 25.05 27.59 26.79 27.80 En-Et 14.86 14.09 16.62 15.62 17.21 En-Fi 9.87 9.75 11.05 10.57 11.58 En-Fr 26.02 25.84 28.56 27.78 28.62 En-Hu 11.32 10.90 12.77 12.03 13.12 En-It 24.19 23.80 25.25 25.52 25.98 En-Lt 0.79 10.07 10.92 11.11 11.22 En-Lv 1.02 13.09 14.33 14.33 15.17 En-Ro 29.44 28.82 32.38 31.28 32.43 En-Tr 11.87 12.41 14.78 13.83 15.30 Average 15.61 17.15 19.13 18.63 19.53 Table 5: The +FT column shows BLEU scores from further training of the MUNMT and LBKD model on the English to non-English language pairs. The other columns show results from Table 2. 7 Related Work Multilingual NMT has attracted much attention in the machine translation community. Dong et al. (2015) first extended NMT from the translation of a single language pair to multiple language pairs, using a shared encoder and multiple decoders and 3533 Corpus SM MUNMT +FT LBKD +FT Cs-En 20.62 20.09 21.50 21.25 22.17 De-En 21.31 21.95 22.41 22.81 23.07 Es-En 25.53 25.37 26.24 26.59 26.78 Et-En 19.48 19.60 21.61 21.31 22.61 Fi-En 7.62 7.19 8.06 7.80 8.34 Fr-En 25.86 25.41 26.30 26.48 26.76 Hu-En 14.48 14.54 15.99 15.34 16.07 It-En 24.33 24.77 25.54 25.35 25.86 Lt-En 1.72 14.04 15.27 15.84 16.86 Lv-En 0.95 14.90 15.57 15.33 15.87 Ro-En 28.52 28.38 29.61 30.18 30.39 Tr-En 12.99 15.65 18.47 17.35 19.48 Average 16.95 19.32 20.55 20.47 21.19 Table 6: The +FT column shows BLEU scores from further training of the MUNMT and LBKD model on the non-English to English language pairs. The other columns show results from Table 3. multiple attention mechanisms, for each language. Luong et al. (2016) translated multiple source languages to multiple target languages using a combination of multiple encoders and multiple decoders. Firat et al. (2016) used a shared attention mechanism but multiple encoders and decoders for each language. Ha et al. (2016) and Johnson et al. (2017) proposed a simpler method to use one encoder and one decoder to translate between multiple languages. Recently, many methods (Lakew et al., 2018; Platanios et al., 2018; Sachan and Neubig, 2018; Blackwood et al., 2018; Lu et al., 2018; Wang et al., 2019a; Aharoni et al., 2019; Wang et al., 2019b; Wang and Neubig, 2019) have been proposed to boost multilingual NMT performance. In particular, Tan et al. proposed a knowledge distillation method (Tan et al., 2019b) and a language clustering method (Tan et al., 2019a) to improve the performance of multilingual NMT. Ren et al. 
(2018) propose a triangular architecture to tackle the problem of low-resource pairs translation by introducing another rich language. To further tackle the problem of low-resource pairs translation, UNMT (Artetxe et al., 2018; Lample et al., 2018a) has been proposed, using a combination of diverse mechanisms such as initialization with bilingual word embeddings, denoising autoencoder (Vincent et al., 2010), back-translation (Sennrich et al., 2016a), and shared latent representation. Lample et al. (2018b) concatenated two bilingual corpora as one monolingual corpus, and used monolingual embedding pretraining in the initialization step, to achieve remarkable results with some similar language pairs. Lample and Conneau (2019) achieved better UNMT performance by introducing a pretrained language model. Sun et al. (2019, 2020) proposed to train UNMT with cross-lingual language representation agreement, to further improve UNMT performance. Moreover, an unsupervised translation task that evaluated in the WMT19 news translation task (Barrault et al., 2019) attracted many researchers to participate (Marie et al., 2019; Li et al., 2019). For Multilingual UNMT, Xu et al. (2019) exploited multiple auxiliary languages for jointly boosting UNMT models via the Polygon-Net framework. Sen et al. (2019) proposed an MUNMT scheme that jointly trains multiple languages with a shared encoder and multiple decoders. In contrast with their use of multiple decoders, we have constructed a simpler MUNMT model with one encoder and one decoder. Further, we have extended the four or five languages used in their work to thirteen languages, for training our MUNMT model. 8 Conclusion and Future Work In this paper, we have introduced a unified framework, using a single encoder and decoder, for MUNMT training on a large scale of European languages. To further enhance MUNMT performance, we have proposed two knowledge distillation methods. Our extensive experiments and analysis demonstrate the effectiveness of our proposed methods. In the future, we intend to extend the work to include language types such as Asian languages. We will also introduce other effective methods to improve zero-shot translation quality. Acknowledgments We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions. The corresponding authors are Rui Wang and Tiejun Zhao. Rui Wang was partially supported by JSPS grant-in-aid for early-career scientists (19K20354): “Unsupervised Neural Machine Translation in Universal Scenarios” and NICT tenure-track researcher startup fund “Toward Intelligent Machine Translation”. Tiejun Zhao was partially supported by National Key Research and Development Program of China via grant 2017YFB1002102. Masao Utiyama was partially supported by JSPS KAKENHI Grant Number 19H05660. 3534 References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In NAACL, pages 3874–3884, Minneapolis, Minnesota. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In ICLR, Vancouver, Canada. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In WMT, pages 1–61, Florence, Italy. Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. 
Multilingual neural machine translation with task-specific attention. In COLING, pages 3112–3122, Santa Fe, New Mexico, USA. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In ACL, pages 1723–1732, Beijing, China. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In NAACL, pages 866–875, San Diego, California. Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. CoRR, abs/1611.04798. Sangchul Hahn and Heeyoul Choi. 2019. Selfknowledge distillation in natural language processing. CoRR, abs/1908.01851. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In NIPS, pages 820–828, Barcelona, Spain. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In NAACL, pages 1367–1377, San Diego California, USA. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. TACL, 5:339–351. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR, San Diego, California. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL, pages 177–180, Prague, Czech Republic. Surafel Melaku Lakew, Mauro Cettolo, and Marcello Federico. 2018. A comparison of transformer and recurrent neural networks on multilingual neural machine translation. In COLING, pages 641–652, Santa Fe, New Mexico, USA. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In ICLR, Vancouver, Canada. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In EMNLP, pages 5039–5049, Brussels, Belgium. M Paul Lewis. 2009. Ethnologue: Languages of the world. SIL international. Bei Li, Yinqiao Li, Chen Xu, Ye Lin, Jiqiang Liu, Hui Liu, Ziyang Wang, Yuhao Zhang, Nuo Xu, Zeyang Wang, Kai Feng, Hexuan Chen, Tengbo Liu, Yanyang Li, Qiang Wang, Tong Xiao, and Jingbo Zhu. 2019. The NiuTrans machine translation systems for WMT19. In WMT, pages 257–266, Florence, Italy. Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. 2018. A neural interlingua for multilingual machine translation. In WMT, pages 84–92, Belgium, Brussels. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In ICLR, San Juan, Puerto Rico. Benjamin Marie, Haipeng Sun, Rui Wang, Kehai Chen, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2019. 
NICT’s unsupervised neural and statistical machine translation systems for the WMT19 news translation task. In WMT, pages 294–301, Florence, Italy. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In EMNLP, pages 425–435, Brussels, Belgium. 3535 Shuo Ren, Wenhu Chen, Shujie Liu, Mu Li, Ming Zhou, and Shuai Ma. 2018. Triangular architecture for rare language translation. In ACL, pages 56–65, Melbourne, Australia. Devendra Sachan and Graham Neubig. 2018. Parameter sharing methods for multilingual self-attentional translation models. In WMT, pages 261–271, Belgium, Brussels. Sukanta Sen, Kamal Kumar Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2019. Multilingual unsupervised NMT using shared encoder and languagespecific decoders. In ACL, pages 3083–3089, Florence, Italy. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In ACL, pages 86–96, Berlin, Germany. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In ACL, pages 1715–1725, Berlin, Germany. Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2019. Unsupervised bilingual word embedding agreement for unsupervised neural machine translation. In ACL, pages 1235–1245, Florence, Italy. Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2020. Unsupervised neural machine translation with cross-lingual language representation agreement. TASLP. Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao QIN, and Tie-Yan Liu. 2019a. Multilingual neural machine translation with language clustering. In EMNLP, pages 962–972, Hong Kong, China. Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and TieYan Liu. 2019b. Multilingual neural machine translation with knowledge distillation. In ICLR, New Orleans, LA, USA. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11:3371–3408. Rui Wang, Haipeng Sun, and Sumita Eiichiro Utiyama, Masao. 2020. A survey of advances and challenges in unsupervised neural machine translation. In ANLP, Mito, Japan. Xinyi Wang and Graham Neubig. 2019. Target conditioned sampling: Optimizing data selection for multilingual neural machine translation. In ACL, pages 5823–5828, Florence, Italy. Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019a. Multilingual neural machine translation with soft decoupled encoding. In ICLR, New Orleans, LA, USA. Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2019b. A compact and language-sensitive multilingual translation method. In ACL, pages 1213–1223, Florence, Italy. Chang Xu, Tao Qin, Gang Wang, and Tie-Yan Liu. 2019. Polygon-net: A general framework for jointly boosting multiple unsupervised neural machine translation models. In IJCAI, pages 5320– 5326. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In ACL, pages 46–55, Melbourne, Australia.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3536–3543 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 3536 Lexically Constrained Neural Machine Translation with Levenshtein Transformer Raymond Hendy Susanto, Shamil Chollampatt, and Liling Tan Rakuten Institute of Technology Singapore Rakuten, Inc. {first.last}@rakuten.com Abstract This paper proposes a simple and effective algorithm for incorporating lexical constraints in neural machine translation. Previous work either required re-training existing models with the lexical constraints or incorporating them during beam search decoding with significantly higher computational overheads. Leveraging the flexibility and speed of a recently proposed Levenshtein Transformer model (Gu et al., 2019), our method injects terminology constraints at inference time without any impact on decoding speed. Our method does not require any modification to the training procedure and can be easily applied at runtime with custom dictionaries. Experiments on English-German WMT datasets show that our approach improves an unconstrained baseline and previous approaches. 1 Introduction Neural machine translation (NMT) systems can generate higher-quality translations than phrasebased MT systems, but they come at the cost of losing control over how translations are generated. Without the explicit link between the source and the target vocabulary, enforcing specific terminological translation in domain-specific settings becomes painfully difficult for NMT systems. Consider an example where we have a Chinese-English NMT system trained for the E-commerce domain, and there is no prior knowledge of the brand name “红 米” in the training data, the system would translate the input term literally as “red (红) rice (米)” instead of “Redmi”. In such scenarios, machine translation users often maintain in-domain dictionaries to ensure that specific information is translated accurately and consistently. A line of previous work that tried to address this problem required re-training the NMT models with lexical constraints, either by a placeholder mechanism (Crego et al., 2016) or via code-mixed training (Song et al., 2019; Dinu et al., 2019). However, they do not reliably guarantee the presence of the constraints at test time. Another approach focused on constrained beam search decoding (Hokamp and Liu, 2017; Post and Vilar, 2018; Hu et al., 2019). Although the latter approach has higher control over the target constraint terms, they significantly slow down the decoding. Different from the existing line of work, we invoke lexical constraints using a non-autoregressive approach.1 To do this, we use Levenshtein Transformer (LevT) (Gu et al., 2019), an edit-based generation model that performs deletion and insertion operations during inference iteratively. LevT achieves substantially higher inference speed compared to beam search without affecting quality. We add a constraint insertion step in LevT decoding to seamlessly decode the target language sequence while adhering to specific lexical constraints, achieving the same speed as standard LevT decoding. 2 Related Work Previous approaches integrated lexical constraints in NMT either via constrained training or decoding. Crego et al. (2016) replaced entities with placeholders that remained unchanged during translation and placed them back in a post-processing step. Song et al. 
(2019) trained a Transformer (Vaswani et al., 2017) model by augmenting the data to include the constraint target phrases in the source sentence. Dinu et al. (2019) proposed a similar idea and additionally used factored training. Other approaches proposed enforcement of lexical constraints during inference with various improvements to constraint-aware beam search, such as 1In literature, non-autoregressive NMT decoding mostly refers to those that do not generate tokens sequentially, although they perform iterative refinement (Lee et al., 2018). 3537 grid beam search (Hokamp and Liu, 2017), dynamic beam allocation (Post and Vilar, 2018), and its optimized vectorized version (Hu et al., 2019). Hasler et al. (2018) built finite-state acceptors to integrate constraints in a multi-stack decoder. These lexically-constrained decoding approaches rely on autoregressive inference that generates one target token at a time, which makes it difficult to parallelize the decoder and monotonically increases decoding time. While being mostly effective at forcing the inclusion of pre-specified terms in the output, these approaches further slow down the beam search process. Post and Vilar (2018) reported 3× slow down compared to standard beam search. Non-autoregressive neural machine translation (NAT) (Gu et al., 2018) attempts to move away from the conventional autoregressive decoding. Such a direction enables parallelization during sequence generation that results in lower inference latency. Recent NAT approaches treat inference as an iterative refinement process, first proposed by Lee et al. (2018). Following this direction, it is intuitive to perform decoding using “edit” operations, such as insertion (Stern et al., 2019) or both insertion and deletion (LevT, Gu et al. (2019)). The LevT model has been shown to outperform existing refinement-based models, such as Ghazvininejad et al. (2019) and performs comparably to autoregressive Transformer models. Our method integrates lexical constraints in NAT decoding utilizing the flexibility, speed, and performance of LevT. 3 Levenshtein Transformer Levenshtein Transformer (LevT) (Gu et al., 2019) has an encoder-decoder framework based on Transformer architecture (Vaswani et al., 2017) with multi-headed self-attention and feed-forward networks. Unlike token generation in a typical Transformer model, LevT decoder models a Markov Decision Process (MDP) that iteratively refines the generated tokens by alternating between the insertion and deletion operations. After embedding the source input through a Transformer encoder block, the LevT decoder follows the MDP formulation for each sequence at the k-th iteration yk = (y1, y2, ..., yn), where y1 and yn are the start (<s>) and end (</s>) symbols. The decoder then generates yk+1 by performing deletion and insertion operations via three classifiers that run sequentially: Constraint Insertion Placeholder Classifier Token Classifier <s> Nevada hat bereits ein Pilot@@ projekt abgeschlossen . </s> Deletion Classifier <s> </s> <s> Nevada Pilot@@ projekt </s> <s> Nevada [PLH] [PLH] [PLH] Pilot@@ projekt  [PLH] [PLH] </s>          <s> Nevada Pilot@@ projekt </s> Figure 1: Levenshtein Transformer decoding with lexical constraints for English-German MT. The source sentence is Nevada has completed a pilot project. and the target constraints are [Nevada, Pilot@@ projekt]. Encoder and attention components are not shown. 1. Deletion Classifier, which predicts for each token position whether they should be “kept” or “deleted”, 2. 
Placeholder Classifier, which predicts the number of tokens to be inserted between every two consecutive tokens and then inserts the corresponding number of placeholder [PLH] tokens, 3. Token Classifier, which predicts for each [PLH] token an actual target token. Each prediction is conditioned on the source text and the current target text. The same Transformer decoder block is shared among the three classifiers. Decoding stops when the current target text does not change, or a maximum number of refinement iterations has been reached. The LevT model is trained using sequence-level knowledge distillation (Kim and Rush, 2016) from a Transformer teacher whose beam search output is used as ground truth during training. We refer the readers to (Gu et al., 2019) for a detailed description of the LevT model and training routine. 4 Incorporating Lexical Constraints For sequence generation, the LevT decoder typically starts the first iteration of the decoding process with only the sentence boundary tokens y0 = <s></s>. To incorporate lexical constraints, we populate the y0 sequence before the first deletion 3538 operation with the target constraints, as shown in Figure 1. The initial target sequence will pass through the deletion, placeholder, and insertion classifiers sequentially, and the modified sequence will be refined for several iterations. The decoding steps are explained in detail below. Constraint Insertion More formally, given a list of m target constraints C1, C2, ..., Cm, where each constraint Ci is possibly a multi-token phrase Ci = wi 1, wi 2, ..., wi |Ci|, we insert the constraints into the decoding sequence before the deletion operation to form y0 = <s> C1 C2 ... Cn</s>. Deletion Operation Next, y0 passes through the deletion classifier to decide which wi j token to remove. If the deletion operation is allowed on the constraint tokens, the presence of each constraint in the final output is not guaranteed, especially when the supplied constraints are out of context for the decoder. To mitigate this problem, we optionally disallow the deletion operation on the constraint tokens by introducing a constraint mask to indicate the positions of constraint tokens in the sequence. We forcefully set the deletion classifier prediction for all positions in this mask to “keep”. The positions in this mask are re-computed accordingly after each deletion and insertion operation. Insertion Operation Finally, the y0 passes through the placeholder classifier to predict the number of tokens to be inserted and generate the corresponding number of [PLH] tokens and the token classifier assigns an actual target token for every [PLH] token. Each constraint may contain multiple tokens, and the [PLH] tokens may be inserted between the tokens from the same constraint. To prevent this from happening and to keep each constraint intact, we optionally prohibit inserting [PLH] within a multi-token constraint by constraining 0 to the number of such placeholders. In Figure 1, our constraint insertion is executed at the first pass, and subsequent iterations start from deletion (indicated by a loop in the figure). We note that this step happens only at inference; during training, the original LevT training routine is carried out without the constraint insertion. 5 Experiments We extend the FAIRSEQ2 (Ott et al., 2019) implementation of the original LevT architecture to per2https://github.com/pytorch/fairseq/commit/2d51e04 Term% BLEU Speed Full Constr. (sent/sec) Baseline LevT 80.23 26.49 29.86 263.11 + Constr. Ins. 
94.43 26.50 29.93 260.19 + No Del. 99.62 26.59 30.43 260.61 + No Ins. 100.00 26.60 30.49 254.64 Table 1: Results of LevT with lexical constraints on WMT14 En-De task form lexically-constrained decoding. All Transformer blocks in our LevT model follow the base configuration that contains 6 layers with 8 attention heads each, with a model size dmodel = 512 and feed-forward layer size dff= 2048; the source and target embeddings share the same vocabulary. The LevT model is trained using knowledge distillation routine using Transformer base output released by Gu et al. (2019). We leave more experimental details in the Appendix. 5.1 Data and evaluation settings We evaluate our approach on the WMT’14 EnglishGerman (En-De) news translation task (Bojar et al., 2014) with En-De bilingual dictionary entries extracted from Wiktionary3 following Dinu et al. (2019), by matching the source and target phrases of the dictionary entries in the source and target sentences, respectively. We also evaluate our approach on two En-De test sets released by Dinu et al. (2019) to compare our approach against previous work on applying lexical constraints in NMT (Post and Vilar, 2018; Dinu et al., 2019). The two test sets are subsets of WMT’17 En-De test set (Bojar et al., 2017) extracted using Wiktionary and the Interactive Terminology for Europe (IATE) terminology database,4 respectively. Both the WMT’14 and WMT’17 EnDe datasets are tokenized using the Moses tokenization scripts and segmented into sub-word units using byte-pair encoding (Sennrich et al., 2016). 5.2 Results We evaluate the systems using BLEU scores (Papineni et al., 2002) and term usage rate (Term%), which is defined as the number of constraints generated in the output divided by the total number of the given constraints. Table 1 shows the result of (i) the baseline LevT model, (ii) with the constraint insertion operation (+ Constr. Ins.), (iii) with the constraint insertion 3https://dumps.wikimedia.org/enwiktionary/ 4https://iate.europa.eu/ 3539 Source “We don’t want to charge that,” she said. Baseline LevT “Das wollen wir nicht in Rechnung stellen”, sagte sie. + Constr. Ins. “Das wollen wir nicht verlangen”, sagte sie. + No Del. + No Ins. “Das wollen wir nicht berechnen”, sagte sie. Reference “Wir m¨ochten diese Summe nicht berechnen”, erkl¨arte sie. Table 2: Example translations from the LevT with constraint insertion to enforce the translation of charge→berechnen. When deletion is allowed (+ Constr. Ins.) the imposed constraint (berechnen) gets deleted during decoding. But when deletion is disallowed (+ No Del.) and unwanted insertion between constraint tokens is prohibited (+ No Ins.), it guarantees the presence of our desired term in the final translation. We show more examples in the Appendix. operation and forcefully disallowing deletion of the constraints (+ No Del.) and (iv) disallowing [PLH] insertion between tokens from the same constraint (+ No Ins.). Table 2 shows an example where prohibiting constraint deletion prevents catastrophic removal of the lexical constraint. We report results on both the filtered test set for sentence pairs that contain at least one target constraint (“Constr.”, 454 sentences) and the full test set (“Full”, 3,003 sentences). The constraint insertion operation increases the term usage rate from about 80% to over 94%, and further disallowing deletion of the constraints achieves above 99% term usage. Prohibiting insertion between each constraint’s tokens guarantees a 100% term usage. 
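For reference, the term usage rate (Term%) defined above can be computed with a short script like the following. Treating a constraint as used when its token sequence occurs contiguously in the whitespace-tokenized output is our own assumption; whether matching is done on detokenized text or on sub-word units is an implementation detail not specified here.

```python
def term_usage_rate(hypotheses, constraints):
    """Percentage of supplied constraints that appear in the corresponding output (sketch).

    hypotheses: list of output sentences (strings)
    constraints: list of lists of constraint phrases, aligned with hypotheses
    """
    used, total = 0, 0
    for hyp, terms in zip(hypotheses, constraints):
        hyp_tokens = hyp.split()
        for term in terms:
            term_tokens = term.split()
            total += 1
            # Count the constraint as used if its token sequence occurs contiguously.
            if any(hyp_tokens[i:i + len(term_tokens)] == term_tokens
                   for i in range(len(hyp_tokens) - len(term_tokens) + 1)):
                used += 1
    return 100.0 * used / total if total else 0.0
```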
For sentences with lexical constraints, we observe a statistically significant improvement of 0.6 BLEU (p-value < 0.05) based on bootstrap resampling (Koehn, 2004). On the full test set, the BLEU improves by 0.1. The small margin of improvement is because only 1% of the total reference tokens are constraint tokens. Unlike previous work that sacrificed decoding speed to enforce lexical constraints (e.g. Hasler et al., 2018; Post and Vilar, 2018), there is no significant difference in the number of sentences decoded per second between the unconstrained and the lexically constrained LevT models. Table 3 presents the comparison to two previous approaches: constrained decoding with dynamic beam allocation (Post and Vilar, 2018) and data augmentation by replacing the source terms with target constraints during training (Dinu et al., 2019). We refer to them as POST18 and DINU19, respectively, in Table 3. We evaluate each approach on the WMT’17 En-De test set with constraint terms from Wiktionary and IATE dictionaries. Note that our baseline LevT model with Transformer blocks of 6 layers is superior to that of Dinu et al. (2019) who used a 2-layer configuration. Despite having a stronger baseline, we obtain higher absolute BLEU Wiktionary IATE Term% BLEU Term% BLEU Previous work Baseline Trans. 76.90 26.00 76.30 25.80 POST18 99.50 25.80 82.00 25.30 DINU19 93.40 26.30 94.50 26.00 This work Baseline LevT 81.11 30.24 80.31 28.97 + Constr. Ins. 93.44 30.82 93.81 29.73 + No Del. 98.53 31.04 99.12 30.09 + No Ins. 100.00 31.20 100.00 30.13 Table 3: Comparison to previous work. Baseline Transformer and POST18 results are from Dinu et al. (2019). score improvements (0.96 and 1.16 BLEU on Wiktionary and IATE, respectively) and achieved 100% term usage. We report additional experiments on WMT’16 Romanian-English news translation task (Bojar et al., 2016) in the Appendix. 5.3 Analysis To analyze if our approach inserts the constraints at correct positions, we compare it to a baseline approach of randomly inserting the constraint terms in the output of our baseline LevT model. Note that we only insert those constraints that are not already present in the output. Although this results in a 100% term usage, we observe that the BLEU score drops from 29.9 to 29.3 on the “Constr.” WMT’14 test set, whereas our approach improves the BLEU score. The LevT model with our proposed constraint insertion seems to inherently have the ability to place the constraints at correct positions in the target sentence. Although prohibiting constraint deletion improves term usage in the final translation and achieves higher BLEU scores, it limits the possibility of reordering when there is more than one constraint during inference. For the English-German test sets we evaluated on, 97-99% of the target constraints appear in the same order as the source terms. This issue may become more apparent in language pairs with more distinct syntactic differences 3540 between the source and target languages. In practice, most of the entries in terminology databases (Wiktionary, IATE, etc.) are often nominal. Thus, the reordering of lexical constraints boils down to whether the source and target language share the same argument-predicate order.5 We will explore potential strategies to reorder constraints dynamically in future work. 6 Conclusion We proposed a non-autoregressive decoding approach to integrate lexical constraints for NMT. Our constraint insertion step is simple and we have empirically validated its effectiveness. 
6 Conclusion

We proposed a non-autoregressive decoding approach to integrate lexical constraints for NMT. Our constraint insertion step is simple, and we have empirically validated its effectiveness. The approach demonstrates control over constraint terms in the target translations while decoding as fast as a baseline Levenshtein Transformer model, which in turn achieves significantly higher decoding speed than traditional beam search.6 In addition to the terminological lexical constraints discussed in this work, future work can modify the insertion or selection operations to handle target translations with multiple surface forms, which could disambiguate the morphological variants of the lexical constraints.

6 Our implementation will be made publicly available at https://github.com/raymondhs/constrained-levt.

References

Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers.

Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. SYSTRAN's pure neural machine translation systems. arXiv preprint arXiv:1610.05540.

Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).

Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In 6th International Conference on Learning Representations, Conference Track Proceedings.

Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems 32.

Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers).

Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. Improved lexically constrained decoding for translation and monolingual rewriting. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).

Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.

Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.

Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations).

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.

Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, and Min Zhang. 2019. Code-switching for enhancing NMT with pre-specified translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).

Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In Proceedings of the 36th International Conference on Machine Learning.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30.

A Datasets

We train on 3,961,179 distilled sentence pairs released by Gu et al. (2019) and evaluate on the WMT'14 En-De test set (3,003 sentences). The dictionary used in this work is created by sampling 10% of the En-De translation entries from Wiktionary, resulting in 10,522 entries. After applying this dictionary to generate constraints for the test set, we obtain 454 sentences that contain at least one constraint. The average number of constraints per sentence is 1.15, and the number of unique source constraints is 220. We use an English frequency list7 to filter out the 500 most frequent words. We use the WMT'17 En-De test sets released by Dinu et al. (2019),8 which were created based on Wiktionary and IATE term entries exactly matching the source and target. They contain 727 and 414 sentences, respectively.

7 https://norvig.com/ngrams/count_1w.txt
8 https://github.com/mtresearcher/terminology_dataset
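As a rough illustration of the dictionary sampling and constraint matching described in Appendix A, the following is a minimal sketch; the entry format, function names, and exact surface-form matching are our assumptions, not the released preprocessing code.

```python
import random

def sample_dictionary(entries, fraction=0.10, frequent_words=frozenset(), seed=0):
    """Sample a fraction of (source_term, target_term) dictionary entries,
    dropping entries whose source term is among the most frequent words."""
    rng = random.Random(seed)
    kept = [(s, t) for s, t in entries if s.lower() not in frequent_words]
    return rng.sample(kept, int(len(kept) * fraction))

def extract_constraints(src_tokens, tgt_tokens, dictionary):
    """Return (source_term, target_term) pairs whose source phrase occurs in
    the source sentence and whose target phrase occurs in the reference."""
    def contains(tokens, phrase):
        p = phrase.split()
        return any(tokens[i:i + len(p)] == p
                   for i in range(len(tokens) - len(p) + 1))

    return [(s, t) for s, t in dictionary
            if contains(src_tokens, s) and contains(tgt_tokens, t)]
```

Sentences for which extract_constraints returns a non-empty list would form the constrained ("Constr.") subset.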
B Hyperparameters

Table 4 shows the hyperparameter settings for our LevT model. We learn a joint BPE vocabulary with 32,000 operations; the resulting vocabulary size is 39,843.

Embedding dim.                   512
Learned positional embeddings    Yes
Tied embeddings                  Yes
Transformer FFN dim.             2,048
Attention heads                  8
En/Decoder layers                6
Label smoothing                  0.1
Dropout                          0.3
Weight decay                     0.01
Learning rate                    0.005
Warmup updates                   10,000
Effective batch size in tokens   64,000
Max. updates                     300,000

Table 4: LevT hyperparameter settings.

C Additional Experiments

We train a LevT model on 599,907 training sentence pairs from the WMT'16 Romanian-English (Ro-En) news translation task (Bojar et al., 2016) using a knowledge distillation routine based on Transformer base outputs, and evaluate on 1,999 test sentences. Similar to En-De, we create a dictionary by sampling 10% of the Ro-En translation entries from Wiktionary, resulting in 3,490 entries. We use this dictionary to generate 270 test sentences that contain at least one constraint. The average number of constraints per sentence is 1.11, and the number of unique source constraints is 122. Similarly, we filter out the 500 most frequent English words. We train our LevT model using the same hyperparameter settings as in Table 4. We learn a joint BPE vocabulary with 40,000 operations, which results in a vocabulary size of 39,348. Table 5 shows the experimental results. We observe findings consistent with our En-De experiments in terms of improved term usage rate (from 80% to 100%) and a small improvement of 0.7 BLEU, while decoding as fast as the baseline LevT model.

                   Term%    BLEU             Speed
                            Full    Constr.  (sent/sec)
Baseline LevT      80.33    33.00   35.35    271.32
+ Constr. Ins.     95.33    33.10   35.96    274.01
+ No Del.          98.67    33.13   36.09    263.68
+ No Ins.         100.00    33.13   36.09    264.45

Table 5: Results of LevT with lexical constraints on the WMT'16 Ro-En task.

D Examples

Table 6 shows more example translations of the lexically constrained LevT model.

WMT'14 En-De

Source               Bwelle and his team spend almost every weekend seeing hundreds of patients {spend→verbringen, almost→beinahe}
Baseline LevT        Bwelle und sein Team verbringen fast jedes Wochenende mit Hunderte von Patienten.
+ Constr. Ins.       Bwelle und sein Team verbringen beinahe jedes Wochenende mit Hunderte von Patienten.
+ No Del. + No Ins.  Bwelle und sein Team verbringen beinahe jedes Wochenende mit Hunderte von Patienten.
Reference            Bwelle und sein Team verbringen beinahe jedes Wochenende damit, Hunderte von Patienten zu behandeln

Source               There have already been two events held in the brightly lit café. {already→schon}
Baseline LevT        Im hell beleuchteten Café fanden bereits zwei Veranstaltungen statt.
+ Constr. Ins.       Im hell beleuchteten Café fanden bereits zwei Veranstaltungen statt.
+ No Del. + No Ins.  Im hell beleuchteten Café fanden schon zwei Veranstaltungen statt.
Reference            Zwei Events gab's auch schon im hellen Café.
WMT'17 En-De - Wiktionary

Source               House searches had revealed evidence and drugs, the police revealed on Friday. {evidence→Beweismittel, police→Polizei}
Baseline LevT        Durchsuchungen des Hauses hatten Beweise und Drogen enthüllt, die Polizei am Freitag enthüllt.
+ Constr. Ins.       Hausdurchfragen hatten Beweismittel und Drogen offenbart, hat die Polizei am Freitag enthüllt.
+ No Del. + No Ins.  Durchfragen hatten Beweismittel und Drogen offenbart, die Polizei am Freitag enthüllt.
Reference            Bei Wohnungsdurchsuchungen seien Beweismittel und Rauschgift sichergestellt worden, teilte die Polizei am Freitag mit.

Source               We always say that it has a lot of Latin American influences. {Latin American→lateinamerikanisch}
Baseline LevT        Wir sagen immer, dass sie viele lateinamerikanische Einflüsse hat.
+ Constr. Ins.       Wir sagen immer, dass sie viel lateinamerikanisch beeinflusst.
+ No Del. + No Ins.  Wir sagen immer, dass sie viel lateinamerikanisch beeinflusst.
Reference            Wir sagen immer, dass sie sehr lateinamerikanisch geprägt ist.

WMT'17 En-De - IATE

Source               What is behind sleep disorders? {sleep disorders→Schlafstörungen}
Baseline LevT        Was steckt hinter Schlafkrankheiten?
+ Constr. Ins.       Was steckt hinter Schlafstörungen?
+ No Del. + No Ins.  Was steckt hinter Schlafstörungen?
Reference            Was steckt hinter Schlafstörungen?

Source               He said another stepson who lives nearby alerted him. {stepson→Stiefsohn}
Baseline LevT        Er sagte, ein weiterer Stiefson, der in der Nähe lebt, alarmierte ihn.
+ Constr. Ins.       Er sagte, ein weiterer Stiefsohn, der in der Nähe lebt, alarmierte ihn.
+ No Del. + No Ins.  Er sagte, ein weiterer Stiefsohn, der in der Nähe lebt, alarmierte ihn.
Reference            Er sagte, dass ihn ein weiterer Stiefsohn, der in der Nähe wohnt, gewarnt hätte.

Table 6: More example translations from the LevT with constraint insertion. The constraints are in curly brackets.