Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5570–5581 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5570 Towards Explainable NLP: A Generative Explanation Framework for Text Classification Hui Liu1, Qingyu Yin2, William Yang Wang3 1 Peking University, China 2 Harbin Institute of Technology, China 3 University of California, Santa Barbara, USA [email protected] [email protected] [email protected] Abstract Building explainable systems is a critical problem in the field of Natural Language Processing (NLP), since most machine learning models provide no explanations for the predictions. Existing approaches for explainable machine learning systems tend to focus on interpreting the outputs or the connections between inputs and outputs. However, the fine-grained information (e.g. textual explanations for the labels) is often ignored, and the systems do not explicitly generate the human-readable explanations. To solve this problem, we propose a novel generative explanation framework that learns to make classification decisions and generate fine-grained explanations at the same time. More specifically, we introduce the explainable factor and the minimum risk training approach that learn to generate more reasonable explanations. We construct two new datasets that contain summaries, rating scores, and fine-grained reasons. We conduct experiments on both datasets, comparing with several strong neural network baseline systems. Experimental results show that our method surpasses all baselines on both datasets, and is able to generate concise explanations at the same time. 1 Introduction Deep learning methods have produced state-ofthe-art results in many natural language processing (NLP) tasks (Vaswani et al., 2017; Yin et al., 2018; Peters et al., 2018; Wang et al., 2018; Hancock et al., 2018; Ma et al., 2018). Though these deep neural network models achieve impressive performance, it is relatively difficult to convince people to trust the predictions of such neural networks since they are actually black boxes for human beings (Samek et al., 2018). For instance, if an essay scoring system only tells the scores of a given essay without providing explicit reasons, the users can hardly be convinced of the judgment. Therefore, the ability to explain the rationale is essential for a NLP system, a need which requires traditional NLP models to provide human-readable explanations. In recent years, lots of works have been done to solve text classification problems, but just a few of them have explored the explainability of their systems (Camburu et al., 2018; Ouyang et al., 2018). Ribeiro et al. (2016) try to identify an interpretable model over the interpretable representation that is locally faithful to the classifier. Samek et al. (2018) use heatmap to visualize how much each hidden element contributes to the predicted results. Although these systems are somewhat promising, they typically do not consider finegrained information that may contain information for interpreting the behavior of models. However, if a human being wants to rate a product, s/he may first write down some reviews, and then score or summarize some attributes of the product, like price, packaging, and quality. Finally, the overall rating for the product will be given based on the fine-grained information. 
Therefore, it is crucial to build trustworthy explainable text classification models that are capable of explicitly generating fine-grained information for explaining their predictions. To achieve these goals, in this paper, we propose a novel generative explanation framework for text classification, where our model is capable of not only providing the classification predictions but also generating fine-grained information as explanations for decisions. The novel idea behind our hybrid generative-discriminative method is to explicitly capture the fine-grained information inferred from raw texts, utilizing the information to help interpret the predicted classification results and improve the overall performance. Specifically, we introduce the notion of an explainable factor and a minimum risk training method that learn to 5571 generate reasonable explanations for the overall predict results. Meanwhile, such a strategy brings strong connections between the explanations and predictions, which in return leads to better performance. To the best of our knowledge, we are the first to explicitly explain the predicted results by utilizing the abstractive generative fine-grained information. In this work, we regard the summaries (texts) and rating scores (numbers) as the fine-grained information. Two datasets that contain these kinds of fine-grained information are collected to evaluate our method. More specifically, we construct a dataset crawled from a website called PCMag1. Each item in this dataset consists of three parts: a long review text for one product, three short text comments (respectively explains the property of the product from positive, negative and neutral perspectives) and an overall rating score. We regard the three short comments as fine-grained information for the long review text. Besides, we also conduct experiments on the Skytrax User Reviews Dataset2, where each case consists of three parts: a review text for a flight, five sub-field rating scores (seat comfortability, cabin stuff, food, in-flight environment, ticket value) and an overall rating score. As for this dataset, we regard the five sub-field rating scores as fine-grained information for the flight review text. Empirically, we evaluate our model-agnostic method on several neural network baseline models (Kim, 2014; Liu et al., 2016; Zhou and Wang, 2018) for both datasets. Experimental results suggest that our approach substantially improves the performance over baseline systems, illustrating the advantage of utilizing fine-grained information. Meanwhile, by providing the fine-grained information as explanations for the classification results, our model is an understandable system that is worth trusting. Our major contributions are three-fold: • We are the first to leverage the generated finegrained information for building a generative explanation framework for text classification, propose an explanation factor, and introduce minimum risk training for this hybrid generative-discriminative framework; • We evaluate our model-agnostic explanation 1https://www.pcmag.com/ 2https://github.com/quankiquanki/ skytrax-reviews-dataset framework with different neural network architectures, and show considerable improvements over baseline systems on two datasets; • We provide two new publicly available explainable NLP datasets that contain finegrained information as explanations for text classification. 
2 Task Definition and Notations

The research problem investigated in this paper is defined as: how can we generate fine-grained explanations for the decisions our classification model makes? To answer this question, we may first investigate what good fine-grained explanations are. For example, in sentiment analysis, suppose a product A has three attributes: quality, practicality, and price. Each attribute can be described as "HIGH" or "LOW", and we want to know whether A is a "GOOD" or "BAD" product. If our model categorizes A as "GOOD" and reports that the quality of A is "HIGH", the practicality is "HIGH", and the price is "LOW", we can regard these attribute values as good explanations that illustrate why the model judges A to be "GOOD". On the contrary, if our model produces the same values for the attributes but claims that A is a "BAD" product, we consider the model to give bad explanations. Therefore, for a given classification prediction made by the model, we would like to explore the fine-grained information that can explain why it comes to such a decision for the current example. Meanwhile, we also want to figure out whether the fine-grained information inferred from the input texts can help improve the overall classification performance.

We denote the input sequence of texts as S = {s_1, s_2, ..., s_{|S|}}, and we want to predict which category y_i (i ∈ {1, 2, ..., N}) the sequence S belongs to. At the same time, the model can also produce generative fine-grained explanations e_c for y_i.

3 Generative Explanation Framework

In this part, we introduce our proposed Generative Explanation Framework (GEF). Figure 1 illustrates the architecture of our model.

Figure 1: The architecture of the Generative Explanation Framework. E encodes S into a representation vector v_e. P gives the probability distribution P_pred for categories. We extract the ground-truth probability p̃_pred from P_pred. Generator G takes v_e as input and generates explanations e_c. Classifier C and Predictor P both predict classes y. C will predict a probability distribution P_classified when taking e_c as input, and predict P_golden when taking e_g as input, and then output the ground-truth probabilities p̃_classified and p̃_golden. The explanation factor EF(S) is calculated from p̃_pred, p̃_classified and p̃_golden.

3.1 Base Classifier and Generator

A common way to do text classification tasks is to use an Encoder-Predictor architecture (Zhang et al., 2015; Lai et al., 2015). As shown in Figure 1, a text encoder E takes the input text sequence S and encodes it into a representation vector v_e. A category predictor P then takes v_e as input and outputs the category y_i and its corresponding probability distribution P_pred. As mentioned above, a desirable model should not only predict the overall result y_i, but also provide generative explanations to illustrate why it makes such predictions. A simple way to generate explanations is to feed v_e to an explanation generator G that produces fine-grained explanations e_c. This procedure is formulated as:

$v_e = \mathrm{Encoder}([s_1, s_2, \dots, s_{|S|}])$    (1)
$P_{pred} = \mathrm{Predictor}(v_e)$    (2)
$y = \arg\max_i \, P_{pred,i}$    (3)
$e_c = f_G(W_G \cdot v_e + b_G)$    (4)

where Encoder maps the input sequence [s_1, s_2, ..., s_{|S|}] into the representation vector v_e, and Predictor takes v_e as input and outputs the probability distribution over classification categories using a softmax.
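To make Equations (1)-(4) concrete, below is a minimal PyTorch sketch of the base model. This is not the authors' implementation: the hyperparameters, the choice of tanh for f_G, the single-projection generator (a full GEF generator would be an autoregressive decoder), and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BaseClassifierAndGenerator(nn.Module):
    """Minimal sketch of the base Encoder-Predictor-Generator of Eq. (1)-(4)."""

    def __init__(self, vocab_size, emb_dim=100, hid_dim=128,
                 num_classes=9, expl_vocab_size=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # E: encodes the token sequence S into a representation vector v_e (Eq. 1)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # P: maps v_e to a distribution over the overall categories (Eq. 2),
        # e.g. the 9 overall rating values of the PCMag data
        self.predictor = nn.Linear(hid_dim, num_classes)
        # G: reduced here to the projection W_G * v_e + b_G of Eq. (4); a full
        # model would feed this state into an autoregressive text decoder
        self.generator = nn.Linear(hid_dim, expl_vocab_size)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                           # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.encoder(emb)
        v_e = h_n[-1]                                         # v_e, one vector per input text
        p_pred = torch.softmax(self.predictor(v_e), dim=-1)   # P_pred (Eq. 2)
        y = p_pred.argmax(dim=-1)                             # predicted category (Eq. 3)
        expl_state = torch.tanh(self.generator(v_e))          # f_G assumed to be tanh (Eq. 4)
        return v_e, p_pred, y, expl_state
```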
During the training process, the overall loss L is composed of two parts, i.e., the classification loss L_p and the explanation generation loss L_e:

$L(e_g, S, \theta) = L_p + L_e$    (5)

where θ represents all the parameters.

3.2 Explanation Factor

The simple supervised way to generate explanations, as demonstrated in the previous subsection, is quite straightforward. However, this generating process has a significant shortcoming: it fails to build strong connections between the generated explanations and the predicted overall results. In other words, the generated explanations seem to be independent of the predicted overall results. Therefore, in order to generate more reasonable explanations for the results, we propose to use an explanation factor to help build stronger connections between the explanations and predictions.

As we have demonstrated in the introduction, fine-grained information will sometimes reflect the overall results more intuitively than the original input text sequence. For example, given the review sentence "The product is good to use", we may not be sure whether the product should be rated as 5 stars or 4 stars. However, if we see that the attributes of the given product are all rated as 5 stars, we may be more convinced that the overall rating for the product should be 5 stars.

So in the first place, we pre-train a classifier C, which also learns to predict the category y by directly taking the explanations as input. More specifically, the goal of C is to imitate human beings' behavior, which means that C should predict the overall results more accurately than the base model that takes the original text as input. We verify this assumption in the experiments section. We then use the pre-trained classifier C to provide strong guidance for the text encoder E, making it capable of generating a more informative representation vector v_e. During the training process, we first obtain the generated explanations e_c by utilizing the explanation generator G. We then feed these generated explanations e_c to the classifier C to get the probability distribution over the predicted results, P_classified. Meanwhile, we can also get the golden probability distribution P_gold by feeding the golden explanations e_g to C. The process can be formulated as:

$P_{classified} = \mathrm{softmax}(f_C(W_C \cdot e_c + b_C))$    (6)
$P_{gold} = \mathrm{softmax}(f_C(W_C \cdot e_g + b_C))$    (7)

In order to measure the distance among the predicted results, generated explanations, and golden explanations, we extract the ground-truth probabilities p̃_classified, p̃_pred, p̃_gold from P_classified, P_pred, P_gold respectively. They will be used to measure the discrepancy between the predicted result and the ground-truth result in minimum risk training. We define our explanation factor EF(S) as:

$EF(S) = |\tilde{p}_{classified} - \tilde{p}_{gold}| + |\tilde{p}_{classified} - \tilde{p}_{pred}|$    (8)

There are two components in this formula.

• The first part, |p̃_classified − p̃_gold|, represents the distance between the generated explanations e_c and the golden explanations e_g. Since we pre-train C using golden explanations, we hold the view that if similar explanations are fed to C, similar predictions should be generated. For instance, if we feed the golden explanation "Great performance" to the classifier C and it tells us that this explanation means "a good product", then when we feed another explanation "Excellent performance" to C, it should also tell us that the explanation means "a good product". For this task, we hope that e_c can express the same or a similar meaning as e_g.

• The second part, |p̃_classified − p̃_pred|, represents the relevance between the generated explanations e_c and the original text S. The generated explanations should be able to interpret the overall result. For example, if the base model predicts S to be "a good product" but the classifier tends to classify e_c as the explanation for "a bad product", then e_c cannot properly explain why the base model gives such a prediction.
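As a concrete illustration, here is a short sketch of Eq. (8) given the three probability distributions; the toy numbers mirror the worked example shown later in Figure 2. The function and variable names are ours, not taken from the paper's code.

```python
import torch

def explanation_factor(p_pred, p_classified, p_gold, gold_label):
    """Compute EF(S) as in Eq. (8).

    p_pred:       P_pred, distribution from the predictor P on the input text S
    p_classified: P_classified, distribution from classifier C on generated explanations e_c
    p_gold:       P_gold, distribution from classifier C on golden explanations e_g
    gold_label:   index of the ground-truth category
    All distributions are 1-D tensors over the label set.
    """
    # extract the ground-truth probabilities p~_pred, p~_classified, p~_gold
    p_t_pred = p_pred[gold_label]
    p_t_classified = p_classified[gold_label]
    p_t_gold = p_gold[gold_label]
    # EF(S) = |p~_classified - p~_gold| + |p~_classified - p~_pred|
    return torch.abs(p_t_classified - p_t_gold) + torch.abs(p_t_classified - p_t_pred)

# Toy example (ground-truth category 2, zero-indexed as 1):
p_pred = torch.tensor([0.3, 0.1, 0.2, 0.4])
p_classified = torch.tensor([0.5, 0.2, 0.1, 0.2])
p_gold = torch.tensor([0.2, 0.3, 0.3, 0.2])
print(explanation_factor(p_pred, p_classified, p_gold, gold_label=1))
# |0.2 - 0.3| + |0.2 - 0.1| ≈ 0.2
```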
3.3 Minimum Risk Training

In order to remove the disconnection between fine-grained information and the input text, we use minimum risk training (MRT) to optimize our models, which aims to minimize the expected loss, i.e., the risk over the training data (Ayana et al., 2016). Given a sequence S and golden explanations e_g, we define Y(e_g, S, θ) as the set of predicted overall results with parameters θ. We define Δ(y, ỹ) as the semantic distance between a predicted overall result y and the ground truth ỹ. Then, the objective function is defined as:

$L_{MRT}(e_g, S, \theta) = \sum_{(e_g, S) \in D} \mathbb{E}_{Y(e_g, S, \theta)} \, \Delta(y, \tilde{y})$    (9)

where D denotes the whole training dataset. In our experiment, $\mathbb{E}_{Y(e_g, S, \theta)}$ is the expectation over the set Y(e_g, S, θ), which is the overall loss in Equation 5. And we define the Explanation Factor EF(S) as the semantic distance between the input texts, generated explanations, and golden explanations. Therefore, the objective function of MRT can be further formalized as:

$L_{MRT}(e_g, S, \theta) = \sum_{(e_g, S) \in D} L(e_g, S, \theta) \, EF(S)$    (10)

MRT exploits EF(S) to measure the loss, which learns to optimize GEF with respect to the specific evaluation metrics of the task. Though L_MRT can be 0 or close to 0 when p̃_classified, p̃_pred and p̃_gold are close, this cannot guarantee that the generated explanations are close to the golden explanations. In order to avoid the loss degenerating in this way, we define our final loss function as the sum of the MRT loss and the explanation generation loss:

$L_{final} = \sum_{(e_g, S) \in D} L + L_{MRT}$    (11)

We tried different weighting schemes for the overall loss and obtained the best performance with a 1:1 ratio.

3.4 Application Case

Generally, the fine-grained explanations take different forms in a real-world dataset, which means that e_c can be in the form of texts or in the form of numerical scores. We apply GEF to both forms of explanations using different base models.

3.4.1 Case 1: Text Explanations

To test the performance of GEF on generating text explanations, we apply GEF to a Conditional Variational Autoencoder (CVAE) (Sohn et al., 2015). We utilize a CVAE here because we want to generate explanations conditioned on different emotions (positive, negative and neutral), and CVAE

Figure 2: Structure of CVAE+GEF. There are in total 4 categories for the classification, and the ground-truth category is 2 in this example.
We assume that the pretrained classifier is a ”perfect” classifier that will correctly predict the final label to be 2 when taking eg as input. So we wish the classifier can also predict the final result as label 2 when taking ec as input. This is why we focus on ˜pclassified and ˜pgold. is found to be capable of generating emotional texts and capturing greater diversity than traditional SEQ2SEQ models. We give an example of the structure of CVAE+GEF in Figure 2. For space consideration, we leave out the detailed structure of CVAE, and will elaborate it in the supplementary materials. In this architecture, golden explanations eg and generated explanations ec are both composed of three text comments: positive comments, negative comments, and neutral comments, which are finegrained explanations for the final overall rating. The classifier is a skip-connected model of bidirectional GRU-RNN layers (Felbo et al., 2017). It takes three kinds of comments as inputs, and outputs the probability distribution over the predicted classifications. 3.4.2 Case 2: Numerical Explanations Another frequently employed form of the finegrained explanations for the overall results is numerical scores. For example, when a user wants to rate a product, s/he may first rate some attributes of the product, like the packaging, price, etc. After rating all the attributes, s/he will give an overall rating for the product. So we can say that the rating for the attributes can somewhat explain why the user gives the overall rating. LSTM and CNN are shown to achieve great performance in text classification tasks (Tang et al., 2015), so we use LSTM and CNN models as the encoder E respectively. The numerical explanations are also regarded as a classification problem in this example. 4 Dataset We conduct experiments on two datasets where we use texts and numerical ratings to represent finegrained information respectively. The first one is crawled from a website called PCMag, and the other one is the Skytrax User Reviews Dataset. Note that all the texts in the two datasets are preprocessed by the Stanford Tokenizer3 (Manning et al., 2014). 4.1 PCMag Review Dataset This dataset is crawled from the website PCMag. It is a website providing reviews for electronic products, like laptops, smartphones, cameras and so on. Each item in the dataset consists of three parts: a long review text, three short comments, and an overall rating score for the product. Three short comments are summaries of the long review respectively from positive, negative, neutral perspectives. An overall rating score is a number ranging from 0 to 5, and the possible values that the score could be are {1.0, 1.5, 2.0, ..., 5.0}. Since long text generation is not what we focus on, the items where review text contains more than 70 sentences or comments contain greater than 75 tokens are filtered. We randomly split the dataset into 10919/1373/1356 pairs for train/dev/test set. The distribution of the overall rating scores within this corpus is shown in Table 1. 4.2 Skytrax User Reviews Dataset We incorporate an airline review dataset scraped from Skytraxs Web portal. Each item in this dataset consists of three parts: i.e., a review text, five sub-field scores and an overall rating score. The five sub-field scores respectively stand for the user’s ratings for seat comfortability, cabin stuff, food, in-flight environment, and ticket value, and each score is an integer between 0 and 5. The overall score is an integer between 1 and 10. 
Similar to the PCMag Review Dataset, we filter out the items where the review contains more than 3https://nlp.stanford.edu/software/ tokenizer.html 5575 Overall Score 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Number 21 60 283 809 2399 3981 4838 1179 78 Table 1: Distribution of examples by each overall rating score in PCMag Review Dataset. Overall Score 1 2 3 4 5 6 7 8 9 10 Number 4073 2190 1724 1186 1821 1302 2387 3874 4008 4530 Table 2: Distribution of examples by each overall rating score in Skytrax User Reviews Dataset. Embedding hidden batch size PCMag GloVe, 100 128 32 Skytrax random, 100 256 64 Table 3: Experimental settings for our experiments. Note that for CNN, we additionally set filter number to be 256 and filter sizes to be [3, 4, 5, 6]. 300 tokens. Then we randomly split the dataset into 21676/2710/2709 pairs for train/dev/test set. The distribution of the overall rating scores within this corpus is shown in Table 2. 5 Experiments and Analysis 5.1 Experimental Settings As the goal of this study is to propose an explanation framework, in order to test the effectiveness of proposed GEF, we use the same experimental settings on the base model and on the base model+GEF. We use GloVe (Pennington et al., 2014) word embedding for PCMag dataset and minimize the objective function using Adam (Kingma and Ba, 2014). The hyperparameter settings for both datasets are listed in Table 3. Meanwhile, since the generation loss is larger than classification loss for text explanations, we stop updating the predictor after classification loss reaches a certain threshold (adjusted based on dev set) to avoid overfitting. 5.2 Experimental Results 5.2.1 Results of Text Explanations We use BLEU (Papineni et al., 2002) scores to evaluation the quality of generated text explanations. Table 4 shows the comparison results of explanations generated by CVAE and CVAE+GEF. There are considerable improvements on the BLEU scores of explanations generated by CVAE+GEF over the explanations generated by CVAE, which demonstrates that the explanations generated by CVAE+GEF are of higher quality. BLEU-1 BLEU-2 BLEU-3 BLEU-4 Pos. CVAE 36.1 13.5 3.7 2.2 CVAE+GEF 40.1 15.6 4.5 2.6 Neg. CVAE 33.3 14.1 3.1 2.2 CVAE+GEF 35.9 16.0 4.0 2.9 Neu. CVAE 30.0 8.8 2.0 1.2 CVAE+GEF 33.2 10.2 2.5 1.5 Table 4: BLEU scores for generated explanations. Pos., Neg., Neu. respectively stand for positive, negative and neural explanations. The low BLEU-3 and BLEU-4 scores are because the target explanations contain many domain-specific words with low frequency, which makes it hard for the model to generate accurate explanations. Acc% (Dev) Acc% (Test) CVAE 42.07 42.58 CVAE+GEF 44.04 43.67 Oracle 46.43 46.73 Table 5: Classification accuracy on PCMag Review Dataset. Oracle means if we feed ground-truth text explanations to the Classifier C, the accuracy C can achieve to do classification. Oracle confirms our assumption that explanations can do better in classification than the original text. CVAE+GEF can generate explanations that are closer to the overall results, thus can better illustrate why our model makes such a decision. In our opinion, the generated fine-grained explanations should provide the extra guidance to the classification task, so we also compare the performance of classification on CVAE and CVAE+GEF. We use top-1 accuracy and top-3 accuracy as the evaluation metrics for the performance of classification. In Table 5, we compare the results of CVAE+GEF with CVAE in both test and dev set. 
As shown in the table, CVAE+GEF has better classification results than CVAE, which indicates that the fine-grained information can really help enhance the overall classification results. As aforementioned, we have an assumption that if we use fine-grained explanations for classifica5576 s% c% f% i% t% LSTM 46.59 52.27 43.74 41.82 45.04 LSTM+GEF 49.13 53.16 46.29 42.34 48.25 CNN 46.22 51.83 44.59 43.34 46.88 CNN+GEF 49.80 52.49 48.03 44.67 48.76 Table 6: Accuracy of sub-field numerical explanations on Skytrax User Reviews Dataset. s, c, f, t, v stand for seat comfortability, cabin stuff, food, in-flight environment and ticket value, respectively. tion, we shall get better results than using the original input texts. Therefore, we list the performance of the classifier C in Table 5 to make the comparison. Experiments show that C has better performance than both CVAE and CVAE+GEF, which proves our assumption to be reasonable. 5.2.2 Results of Numerical Explanations In the Skytrax User Reviews Dataset, the overall ratings are integers between 1 to 10, and the five sub-field ratings are integers between 0 and 5. All of them can be treated as classification problems, so we use accuracy to evaluate the performance. The accuracy of predicting the sub-field ratings can indicate the quality of generated numerical explanations. In order to prove that GEF can help generate better explanations, we show the accuracy of the sub-field rating classification in Table 6. The 5 ratings evaluate the seat comfortability, cabin stuff, food, in-flight environment, and ticket value, respectively. As we can see from the results in Table 6, the accuracy for 5 sub-field ratings all get enhanced comparing with the baseline. Therefore, we can tell that GEF can improve the quality of generated numerical explanations. Then we compare the result for classification in Table 7. As the table shows, the accuracy or top-3 accuracy both get improved when the models are combined with GEF. Moreover, the performances of the classifier are better than LSTM (+GEF) and CNN (+GEF), which further confirms our assumption that the classifier C can imitate the conceptual habits of human beings. Leveraging the explanations can provide guidance for the model when doing final results prediction. 5.3 Human Evaluation In order to prove our model-agnostic framework can make the basic model generate explanations more closely aligned with the classification results, we employ crowdsourced judges to evaluate Acc% Top-3 Acc% LSTM 38.06 76.89 LSTM+GEF 39.20 77.96 CNN 37.06 76.85 CNN+GEF 39.02 79.07 Oracle 45.00 83.13 Table 7: Classification accuracy on Skytrax User Reviews Dataset. Oracle means if we feed ground-truth numerical explanation to the Classifier C, the accuracy C can achieve to do classification. Win% Lose% Tie% CVAE+GEF 51.37 42.38 6.25 Table 8: Results of human evaluation. Tests are conducted between the text explanations generated by basic CVAE and CVAE+GEF. a random sample of 100 items in the form of text, each being assigned to 5 judges on the Amazon Mechanical Turk. All the items are correctly classified both using the basic model and using GEF, so that we can clearly compare the explainability of these generated text explanations. We report the results in Table 8, and we can see that over half of the judges think that our GEF can generate explanations more related to the classification results. In particular, for 57.62% of the tested items, our GEF can generate better or equal explanations comparing with the basic model. 
In addition, we show some the examples of text explanations generated by CVAE+GEF in Table 11. We can see that our model can accurately capture some key points in the golden explanations. And it can learn to generate grammatical comments that are logically reasonable. All these illustrate the efficient of our method. We will demonstrate more of our results in the supplementary materials. 5.4 Error and Analysis We focus on the deficiency of generation for text explanation in this part. First of all, as we can see from Table 11, the generated text explanation tend to be shorter than golden explanations. It is because longer explanations tend to bring more loss, so GEF tends to leave out the words that are of less informative, like function words, conjunctions, etc. In order to solve this problem, we may consider adding length reward/penalty by reinforcement learning to control the length of generated texts. 5577 Product and Overall Rating Explanations Monitor, 3.0 Positive Generated: very affordable. unique and ergonomic design. good port selection. Positive Golden: unique design. dual hdmi ports. good color quality. energy efficient. Negative Generated: relatively faint on some features. relatively high contrast ratio. no auto port. Negative Golden: expensive. weak light grayscale performance. features are scarce. Neutral Generated: the samsung series is a unique touch-screen monitor featuring a unique design and a nice capacitive picture, but its color and grayscale performance could be better. Neutral Golden: the samsung series is a stylish 27-inch monitor offering good color reproduction and sharp image quality. however, it ’s more expensive than most tn monitors and has a limited feature set. Table 9: Examples of our generated explanations. Some key points are underlined. Second, there are ⟨UNK⟩s in the generated explanations. Since we are generating abstractive comments for product reviews, there may exist some domain-specific words. The frequency of these special words is low, so it is relatively hard for GEF to learn to embed and generated these words. A substituted way is that we can use copymechanism (Gu et al., 2016) to generate these domain-specific words. 6 Related Work Our work is closely aligned with Explainable Artificial Intelligence (Gunning, 2017), which is claimed to be essential if users are to understand, and effectively manage this incoming generation of artificially intelligent partners. In artificial intelligence, providing an explanation of individual decisions has attracted attention in recent years. The traditional way of explaining the results is to build connections between the input and output, and figure out how much each dimension or element contributes to the final output. Some previous works explain the result in two ways: evaluating the sensitivity of output if input changes and analyzing the result from a mathematical perspective by redistributing the prediction function backward (Samek et al., 2018). There are some works connecting the result with the classification model. Ribeiro et al. (2016) selects a set of representative instances with explanations via submodular optimization. Although the method is promising and mathematically reasonable, they cannot generate explanations in natural forms. They focus on how to interpret the result. Some of the previous works have similar motivations as our work. Lei et al. (2016) rationalize neural prediction by extracting the phrases from the input texts as explanations. 
They conduct their work in an extractive way, and focus on rationalizing the predictions. However, our work aims not only to predict the results but also to generate abstractive explanations, and our framework can generate explanations both in the forms of texts and numerical scores. Hancock et al. (2018) proposes to use a classifier with natural language explanations that are annotated by human beings to do the classification. Our work is different from theirs in that we use the natural attributes as the explanations which are more frequent in reality. Camburu et al. (2018) proposes e-SNLI4 by extending SNLI dataset with text explanations. And their simple but effective model proves the feasibility of generating text explanations for neural classification models. 7 Conclusion In this paper, we investigate the possibility of using fine-grained information to help explain the decision made by our classification model. More specifically, we design a Generative Explanation Framework (GEF) that can be adapted to different models. Minimum risk training method is applied to our proposed framework. Experiments demonstrate that after combining with GEF, the performance of the base model can be enhanced. Meanwhile, the quality of explanations generated by our model is also improved, which demonstrates that GEF is capable of generating more reasonable explanations for the decision. Since our proposed framework is modelagnostic, we can combine it with other natural processing tasks, e.g. summarization, extraction, which we leave to our future work. 4The dataset is not publicly available now. We would like to conduct further experiments on this dataset when it is released. 5578 References Shiqi Shen Ayana, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline generation with minimum risk training. arXiv preprint arXiv:1604.01904. Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 9539–9549. Curran Associates, Inc. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1615–1625. J Gu, Z Lu, H Li, and VOK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Annual Meeting of the Association for Computational Linguistics (ACL), 2016. Association for Computational Linguistics. David Gunning. 2017. Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA), nd Web. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884– 1895. Association for Computational Linguistics. Yoon Kim. 2014. Convolutional neural networks for sentence classification. EMNLP. Diederik P Kingma and Jimmy Lei Ba. 2014. Adam: A method for stochastic optimization. In Proc. 3rd Int. Conf. Learn. Representations. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. 
In AAAI, volume 333, pages 2267– 2273. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. EMNLP. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. In Proceedings of International Joint Conference on Artificial Intelligence. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stackpointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1403–1414. Association for Computational Linguistics. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Sixun Ouyang, Aonghus Lawlor, Felipe Costa, and Peter Dolog. 2018. Improving explainable recommendations with synthetic reviews. arXiv preprint arXiv:1807.06978. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. ACM. Wojciech Samek, Thomas Wiegand, and Klaus-Robert M¨uller. 2018. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. ITU Journal: ICT Discoveries - Special Issue 1 - The Impact of Artificial Intelligence (AI) on Communication Networks and Services, 1(1):39–48. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483–3491. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432, Lisbon, Portugal. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. 5579 Wei Wang, Ming Yan, and Chen Wu. 2018. Multigranularity hierarchical attention fusion networks for reading comprehension and question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1705–1714. Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018. Deep reinforcement learning for chinese zero pronoun resolution. ACL. Xiang Zhang, Junbo Zhao, and Yann LeCun. 
2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657. Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Victoria, Australia. ACL. 5580 Supplemental Material Structure of CVAE By extending the SEQ2SEQ structure, we can easily get a Conditional Variational Antoencoder (CVAE) (Sohn et al., 2015; Zhou and Wang, 2018). Figure 3 shows the structure of the model. Input ncoder Explanations Encoder Input Text Explanations vc v0 Prior Network Recog Network x c z z’ ve Figure 3: The structure of CVAE. The Input Encoder encodes the input text in v0, and vc is the control signal that determines the kind of fine-grained information (positive, negative and neutral). ve is the initial input for the decoder. The Explanations Encoder encodes the short comment in x. Recognition Network takes x as input and produces the latent variable z. In our experiment, the Recognition Network and the Prior Network are both MLPs, and we use bidirectional GRU as the Explanations Encoder and Input Encoder. To train CVAE, we need to maximize a variational lower bound on the conditional likelihood of x given c, where x and c are both random variables. In our experiment,c = [vc; v0], and x is the text explanations we want to generate. This can be rewritten as: p(x|c) =  p(x|z, c)p(z|c)dz (12) z is the latent variable. The decoder is used to approximate p(x|z, c), denoted as pD(x|z, c), and Prior Network is used to approximate p(z|c), denoted as pP (z|c). In order to approximate the true Overall s c f i t 9.0 pred 4.0 5.0 5.0 4.0 5.0 gold 4.0 5.0 5.0 4.0 4.0 6.0 pred 3.0 5.0 3.0 3.0 4.0 gold 4.0 5.0 3.0 3.0 4.0 2.0 pred 2.0 1.0 2.0 2.0 2.0 gold 2.0 2.0 1.0 2.0 2.0 Table 10: Examples from the results on Skytrax User Reviews Dataset. s, c, f, i, t stand for seat comfortability, cabin stuff, food, in-flight environment and ticket value, respectively. posterior p(z|x, c), we introduce Recognition Network qR(z|x, c). According to Sohn et al. (2015), we can have the lower bound of log p(x|c) as: −L(x, c; θ) = KL(qR(z|x, c)||pP (z|c)) −EqR(z|x,c)(log pD(x|z, c)) (13) θ is the parameters in the network. Notice that during training, z is used to train z′ and passed to the decoder, but during testing, the ground truth explanations are absent and z′ is passed to the decoder. Output Sample In this part, we provide some samples from our experiment. Numerical Explanation Cases We provide some numerical explanation cases in Table 10. Text Explanation Cases We provide some text explanation cases in Table 11. 5581 Product and Overall Rating Explanations Television, 4.0 Positive Generated: Good contrast. Good black levels. Affordable. Positive Golden: Gorgeous 4k picture. Good color accuracy. Solid value for a large uhd screen. Negative Generated: Mediocre black levels. Poor shadow detail. Poor off-angle viewing. Negative Golden: Mediocre black levels. Poor input lag. Colors run slightly cool. Disappointing online features. Poor off-angle viewing. Neutral Generated: A solid, gorgeous 4k screen that offers a sharp 4k picture, but it’s missing some features for the competition. Neutral Golden: A solid 4k television line, but you can get an excellent 1080p screen with more features and better performance for much less. Flash Drive, 3.0 Positive Generated: Simple, functional design. Handy features. 
Positive Golden: Charming design. Reasonably priced. Capless design. Negative Generated: All-plastic construction. No usb or color protection. Negative Golden: All-plastic construction. On the slow side. Crowds neighboring ports. flash drives geared toward younger children don’t have some sort of password protection. Neutral Generated: The tween-friendly ⟨UNK⟩colorbytes are clearly designed and offers a comprehensive usb 3.0, but it’s not as good as the competition. Neutral Golden: The kid-friendly dane-elec sharebytes value pack drives aren’t the quickest or most rugged flash drives out there, but they manage to strike the balance between toy and technology. Careful parents would be better off giving their children flash drives with some sort of password protection. TV, 4.0 Positive Generated: excellent picture. attractive glass-backed screen. hdr10 and dolby vision. Positive Golden: excellent picture with wide color gamut. stylish glass-backed screen. hdr10 and dolby vision. two remotes. Negative Generated: very expensive. Negative Golden: very expensive. Neutral Generated: lg’s new oledg7p series is a stylish, attractive, and attractive hdtv line that’s a bit more but not much more attractive. Neutral Golden: lg’s signature oledg7p series is every bit as attractive and capable as last year’s excellent oledg6p series, but the company has a new flagship oled that’s only slightly more expensive but a lot more impressive. Gaming, 4.0 Positive Generated: best-looking mainline pokemon game for the nintendo 3ds and feel. date, breathing, and dlc. Positive Golden: best-looking mainline pokemon game to date. alola trials mix up and vary progression over the gym-and-badge system, breathing new life into the game for longtime fans. ride pagers improve overworld navigation. Negative Generated: starts out very slow. Negative Golden: starts out very slow. Neutral Generated: the newest pokemon generation of sun/moon for the nintendo 3ds, making the feeling of the nintendo 3ds and remixes enough ideas to new life over making any wild, polarizing changes to the formula. Neutral Golden: the newest pokemon generation, sun/moon for the nintendo 3ds, tweaks and polishes the series’ core concepts and remixes enough ideas to feel fresh without making any wild , polarizing changes to the formula. Desktop, 3.5 Positive Generated: adjustable bulb. attractive design. energy efficient. Positive Golden: compact all in one. $500 price point. lenovo utilities. dynamic brightness system and eye distance system. no bloatware. Negative Generated: limited stand. no keyboard or micro between mac. Negative Golden: low power on benchmark tests. no usb 3.0. no hdmi. no video in or out. only 60-day mcafee anti-virus. camera is “ always on. ”. Neutral Generated: the lenovo thinkcentre edge is a good choice in the attractive design, and a few attractive colors in the price. it has a little bit of the best. Neutral Golden: the lenovo c325 is a good choice for those looking to spend only about $500 for a fully featured desktop pc. it’s bigger than a laptop, and has the power to serve your web surfing and basic pc needs. Table 11: Text examples from our generated explanations. ⟨UNK⟩stands for “unknown word”.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5582–5591 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5582 Combating Adversarial Misspellings with Robust Word Recognition Danish Pruthi Bhuwan Dhingra Zachary C. Lipton Carnegie Mellon University Pittsburgh, USA {ddanish, bdhingra}@cs.cmu.edu, [email protected] Abstract To combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. Our word recognition models build upon the RNN semicharacter architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75%1. Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity. 1 Introduction Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples (Szegedy et al., 2013), a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures. For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial 1All code for our defenses, attacks, and baselines is available at https://github.com/danishpruthi/ Adversarial-Misspellings Alteration Movie Review Label Original A triumph, relentless and beautiful in its downbeat darkness + Swap A triumph, relentless and beuatiful in its downbeat darkness – Drop A triumph, relentless and beautiful in its dwnbeat darkness – + Defense A triumph, relentless and beautiful in its downbeat darkness + + Defense A triumph, relentless and beautiful in its downbeat darkness + Table 1: Adversarial spelling mistakes inducing sentiment misclassification and word-recognition defenses. misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails’ intended meaning (Lee and Ng, 2005; Fumera et al., 2006). As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously (Bitso et al., 2013). In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. 
These perturbations are inspired by psycholinguistic studies (Rawlinson, 1976; Matt Davis, 2003) which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed. First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece (Wu et al., 2016), we demonstrate that an adversary can degrade a classifier’s performance to that achieved by random guessing. This requires altering just two charac5583 ters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK . Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK . Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary’s effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK -ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options. Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving 90.3 accuracy on a sentiment classification task, is degraded to 64.1 by an adversarially-chosen 1character swap in the sentence, which can only be restored to 69.2 by adversarial training. Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly mispelled) inputs. The word recognition model’s outputs comprise the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to Sakaguchi et al. (2017). While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to 88.3, 81.1, 78.0 accuracy for swap, drop, add attacks respectively, as compared to 69.2, 63.6, and 50.0 for adversarial training Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust. 2 Related Work Several papers address adversarial attacks on NLP systems. 
Changes to text, whether word- or character-level, are all perceptible, raising some questions about what should rightly be considered an adversarial example (Ebrahimi et al., 2018b; Belinkov and Bisk, 2018). Jia and Liang (2017) address the reading comprehension task, showing that by appending distractor sentences to the end of stories from the SQuAD dataset (Rajpurkar et al., 2016), they could cause models to output incorrect answers. Inspired by this work, Glockner et al. (2018) demonstrate an attack that breaks entailment systems by replacing a single word with either a synonym or its hypernym. Recently, Zhao et al. (2018) investigated the problem of producing natural-seeming adversarial examples, noting that adversarial examples in NLP are often ungrammatical (Li et al., 2016). In related work on character-level attacks, Ebrahimi et al. (2018b,a) explored gradient-based methods to generate string edits to fool classification and translation systems, respectively. While their focus is on efficient methods for generating adversaries, ours is on improving the worst case adversarial performance. Similarly, Belinkov and Bisk (2018) studied how synthetic and natural noise affects character-level machine translation. They considered structure invariant representations and adversarial training as defenses against such noise. Here, we show that an auxiliary word recognition model, which can be trained on unlabeled data, provides a strong defense. Spelling correction (Kukich, 1992) is often viewed as a sub-task of grammatical error correction (Ng et al., 2014; Schmaltz et al., 2016). Classic methods rely on a source language model and a noisy channel model to find the most likely correction for a given word (Mays et al., 1991; Brill and Moore, 2000). Recently, neural techniques have been applied to the task (Sakaguchi et al., 2017; Li et al., 2018), which model the context and orthography of the input together. Our work extends the ScRNN model of Sakaguchi et al. (2017). 5584 3 Robust Word Recognition To tackle character-level adversarial attacks, we introduce a simple two-stage solution, placing a word recognition model (W) before the downstream classifier (C). Under this scheme, all inputs are classified by the composed model C ◦W. This modular approach, with W and C trained separately, offers several benefits: (i) we can deploy the same word recognition model for multiple downstream classification tasks/models; and (ii) we can train the word recognition model with larger unlabeled corpora. Against adversarial mistakes, two important factors govern the robustness of this combined model: W’s accuracy in recognizing misspelled words and W’s sensitivity to adversarial perturbations on the same input. We discuss these aspects in detail below. 3.1 ScRNN with Backoff We now describe semi-character RNNs for word recognition, explain their limitations, and suggest techniques to improve them. ScRNN Model Inspired by the psycholinguistic studies (Matt Davis, 2003; Rawlinson, 1976), Sakaguchi et al. (2017) proposed a semi-character based RNN (ScRNN) that processes a sentence of words with misspelled characters, predicting the correct words at each step. Let s = {w1, w2, . . . , wn} denote the input sentence, a sequence of constituent words wi. 
Each input word w_i is represented by concatenating (i) a one-hot vector of the first character (w_{i1}); (ii) a one-hot representation of the last character (w_{il}, where l is the length of word w_i); and (iii) a bag-of-characters representation of the internal characters ($\sum_{j=2}^{l-1} w_{ij}$). ScRNN treats the first and the last characters individually, and is agnostic to the ordering of the internal characters. Each word, represented accordingly, is then fed into a BiLSTM cell. At each sequence step, the training target is the correct corresponding word (output dimension equal to vocabulary size), and the model is optimized with cross-entropy loss.

Backoff Variations While Sakaguchi et al. (2017) demonstrate strong word recognition performance, a drawback of their evaluation setup is that they only attack and evaluate on the subset of words that are a part of their training vocabulary. In such a setting, the word recognition performance is unreasonably dependent on the chosen vocabulary size. In principle, one can design models that predict (correctly) only a few chosen words, ignore the remaining majority, and still reach 100% accuracy. For the adversarial setting, rare and unseen words in the wild are particularly critical, as they provide opportunities for the attackers. A reliable word-recognizer should handle these cases gracefully. Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):

• Pass-through: the word-recognizer passes on the (possibly misspelled) word as is.

• Backoff to neutral word: Alternatively, noting that passing UNK-predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like 'a', which has a similar distribution across classes.

• Backoff to background model: We also consider falling back upon a more generic word recognition model, trained on a larger, less specialized corpus, whenever the foreground word recognition model predicts UNK. (Potentially, the background model could be trained with the full vocabulary so that it never predicts UNK.) Figure 1 depicts this scenario pictorially; a short code sketch of the backoff logic is given below, after the figure caption.

Empirically, we find that the background model (by itself) is less accurate, because of the large number of words it is trained to predict. Thus, it is best to train a precise foreground model on an in-domain corpus, focusing on frequent words, and then to resort to a general-purpose background model for rare and unobserved words. Next, we delineate our second consideration for building robust word-recognizers.

3.2 Model Sensitivity

In computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ($\|x - x'\|_\infty < \epsilon$). Higher values of ε lead to a higher chance of misclassification for at least one x′. Defense methods such as quantization (Xu et al., 2017) and thermometer encoding (Buckman et al., 2018) try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input.

Figure 1: A schematic sketch of our proposed word recognition system, consisting of a foreground and a background model. We train the foreground model on the smaller, domain-specific dataset, and the background model on a larger dataset (e.g., the IMDB movie corpus). We train both models to reconstruct the correct word from the orthography and context of the individual words, using synthetically corrupted inputs during training. Subsequently, we invoke the background model whenever the foreground model predicts UNK.
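The following is a minimal sketch (ours, not the released code) of the semi-character featurization and the foreground/background backoff logic of Figure 1. Here `foreground` and `background` are stand-ins for trained ScRNN recognizers; their signatures and the word-level tokenization are simplifying assumptions.

```python
from collections import Counter

UNK = "<unk>"

def semi_char_features(word):
    """Semi-character representation consumed by ScRNN: first character,
    an order-agnostic bag of the internal characters, and the last character."""
    first, last = word[0], word[-1]
    internal = Counter(word[1:-1])
    return first, internal, last

def recognize(sentence, foreground, background, backoff="background"):
    """Two-stage word recognition with backoff.
    `foreground` and `background` are callables mapping a (possibly misspelled)
    word plus its sentence context to a predicted word, returning UNK when unsure."""
    corrected = []
    for word in sentence.split():
        pred = foreground(word, sentence)
        if pred == UNK:
            if backoff == "background":
                pred = background(word, sentence)   # generic, large-vocabulary model
            elif backoff == "neutral":
                pred = "a"                          # neutral word, similar class distribution
            else:                                   # pass-through
                pred = word
        corrected.append(pred)
    return " ".join(corrected)
```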
Subsequently, we invoke the background model whenever the foreground model predicts UNK. In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This lack of invariance can be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity.

We can quantify this notion for a word recognition system W as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence s from the set of sentences S, let A(s) = {s′_1, s′_2, . . . , s′_n} denote the set of n perturbations to it under attack type A, and let V be the function that maps strings to an input representation for the downstream classifier. For a word-level model, V would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID, whereas for a char (or word+char, word-piece) model, V would map inputs to a sequence of character IDs. Formally, sensitivity is defined as

S^A_{W,V} = \mathbb{E}_s \left[ \frac{\#_u\big(V \circ W(s′_1), \ldots, V \circ W(s′_n)\big)}{n} \right],  (1)

where V \circ W(s′_i) returns the input representation (of the downstream classifier) for the output string produced by the word recognizer W on s′_i, and \#_u(\cdot) counts the number of unique arguments. Intuitively, we expect a high value of S^A_{W,V} to lead to a lower robustness of the downstream classifier, since the adversary has more degrees of freedom to attack the classifier. Thus, when using word recognition as a defense, it is prudent to design a low-sensitivity system with a low error rate. However, as we will demonstrate, there is often a trade-off between sensitivity and error rate.

3.3 Synthesizing Adversarial Attacks Suppose we are given a classifier C : S → Y which maps natural language sentences s ∈ S to a label from a predefined set y ∈ Y. An adversary for this classifier is a function A which maps a sentence s to its perturbed versions {s′_1, s′_2, . . . , s′_n} such that each s′_i is close to s under some notion of distance between sentences. We define the robustness of classifier C to the adversary A as

R_{C,A} = \mathbb{E}_s \left[ \min_{s′ \in A(s)} \mathbb{1}\big[C(s′) = y\big] \right],  (2)

where y represents the ground truth label for s. In practice, a real-world adversary may only be able to query the classifier a few times, hence R_{C,A} represents the worst-case adversarial performance of C. Methods for generating adversarial examples, such as HotFlip (Ebrahimi et al., 2018b), focus on efficient algorithms for searching for the minimum above. Improving R_{C,A} would imply better robustness against all these methods.

Allowed Perturbations (A(s)) We explore adversaries which perturb sentences with four types of character-level edits: (1) Swap: swapping two adjacent internal characters of a word. (2) Drop: removing an internal character of a word. (3) Keyboard: substituting an internal character with an adjacent character on the QWERTY keyboard. (4) Add: inserting a new character internally in a word.
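These four edit classes are easy to enumerate exhaustively for a single word. The sketch below is our own illustration; in particular, the truncated QWERTY adjacency map is an illustrative placeholder rather than the exact keyboard neighborhood used in the experiments.

```python
import string

# Partial QWERTY adjacency; a real attacker would enumerate the full keyboard.
QWERTY_NEIGHBORS = {"a": "qwsz", "s": "awedxz", "d": "serfcx", "e": "wsdr", "o": "iklp"}

def perturbations(word, attack):
    """Return all internal-character edits of `word` under one attack type."""
    n = len(word)
    out = set()
    if attack == "swap":                      # swap two adjacent internal characters
        for i in range(1, n - 2):
            out.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])
    elif attack == "drop":                    # remove one internal character
        for i in range(1, n - 1):
            out.add(word[:i] + word[i + 1:])
    elif attack == "keyboard":                # substitute an internal char with a neighbor
        for i in range(1, n - 1):
            for c in QWERTY_NEIGHBORS.get(word[i], ""):
                out.add(word[:i] + c + word[i + 1:])
    elif attack == "add":                     # insert a new character internally
        for i in range(1, n):
            for c in string.ascii_lowercase:
                out.add(word[:i] + c + word[i:])
    out.discard(word)
    return sorted(out)

print(perturbations("word", "swap"))          # ['wrod']
```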
In line with the psycholinguistic studies (Matt Davis, 2003; Rawlinson, 1976), to ensure that the perturbations do not affect human ability to comprehend the sentence, we only allow the adversary to edit the internal characters of a word, and not edit stopwords or words shorter than 4 characters. Attack Strategy For 1-character attacks, we try all possible perturbations listed above until we find an adversary that flips the model prediction. For 2-character attacks, we greedily fix the edit which had the least confidence among 1-character attacks, and then try all the allowed perturbations on the remaining words. Higher order attacks can be performed in a similar manner. The greedy strategy reduces the computation required to obtain higher order attacks3, but also means that the robustness score is an upper bound on the true robustness of the classifier. 4 Experiments and Results In this section, we first discuss our experiments on the word recognition systems. 4.1 Word Error Correction Data: We evaluate the spell correctors from §3 on movie reviews from the Stanford Sentiment Treebank (SST) (Socher et al., 2013). The SST dataset consists of 8544 movie reviews, with a vocabulary of over 16K words. As a background corpus, we use the IMDB movie reviews (Maas et al., 2011), which contain 54K movie reviews, and a vocabulary of over 78K words. The two datasets do not share any reviews in common. The spellcorrection models are evaluated on their ability to correct misspellings. The test setting consists of reviews where each word (with length ≥4, barring stopwords) is attacked by one of the attack types (from swap, add, drop and keyboard at3Its complexity is O(l), instead of O(lm) where l is the sentence length and m is the order. tacks). In the all attack setting, we mix all attacks by randomly choosing one for each word. This most closely resembles a real world attack setting. Experimental Setup In addition to our word recognition models, we also compare to After The Deadline (ATD), an open-source spell corrector4. We found ATD to be the best freelyavailable corrector5. We refer the reader to Sakaguchi et al. (2017) for comparisons of ScRNN to other anonymized commercial spell checkers. For the ScRNN model, we use a single-layer BiLSTM with a hidden dimension size of 50. The input representation consists of 198 dimensions, which is thrice the number of unique characters (66) in the vocabulary. We cap the vocabulary size to 10K words, whereas we use the entire vocabulary of 78470 words when we backoff to the background model. For training these networks, we corrupt the movie reviews according to all attack types, i.e., applying one of the 4 attack types to each word, and trying to reconstruct the original words via cross entropy loss. Word Recognition Spell-Corrector Swap Drop Add Key All ATD 7.2 12.6 13.3 6.9 11.2 ScRNN (78K) 6.3 10.2 8.7 9.8 8.7 ScRNN (10K) w/ Backoff Variants Pass-Through 8.5 10.5 10.7 11.2 10.2 Neutral 8.7 10.9 10.8 11.4 10.6 Background 5.4 8.1 6.4 7.6 6.9 Table 2: Word Error Rates (WER) of ScRNN with each backoff strategy, plus ATD and an ScRNN trained only on the background corpus (78K vocabulary) The error rates include 5.25% OOV words. Results We calculate the word error rates (WER) of each of the models for different attacks and present our findings in Table 2. Note that ATD incorrectly predicts 11.2 words for every 100 words (in the ‘all’ setting), whereas, all of the backoff variations of the ScRNN reconstruct better. 
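The backoff variants compared in Table 2 amount to a small decision rule applied per token after the foreground ScRNN has made its prediction. The sketch below is our own rendering of that rule; the function signature and the assumption that foreground/background predictions are already available are simplifications for illustration.

```python
UNK = "<unk>"
NEUTRAL_WORD = "a"   # neutral token with a similar distribution across classes

def backoff(original_token, fg_prediction, bg_prediction=None, strategy="pass-through"):
    """Combine foreground/background word-recognizer outputs for one token.

    fg_prediction / bg_prediction are the words predicted by the foreground
    (in-domain, 10K-vocabulary) and background (78K-vocabulary) recognizers.
    """
    if fg_prediction != UNK:
        return fg_prediction                      # foreground model recognized the word
    if strategy == "pass-through":
        return original_token                     # pass the (possibly misspelled) word as is
    if strategy == "neutral":
        return NEUTRAL_WORD                       # replace with a fixed, class-neutral word
    if strategy == "background":
        return bg_prediction if bg_prediction is not None else original_token
    raise ValueError(f"unknown backoff strategy: {strategy}")

print(backoff("wrod", UNK, bg_prediction="word", strategy="background"))   # -> "word"
```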
The most accurate variant involves backing off to the background model, resulting in a low error rate of 6.9%, leading to the best performance on word recognition. This is a 32% relative error 4https://www.afterthedeadline.com/ 5We compared ATD with Hunspell (http: //hunspell.github.io/), which is used in Linux applications. ATD was significantly more robust owing to taking context into account while correcting. 5587 reduction compared to the vanilla ScRNN model with a pass-through backoff strategy. We can attribute the improved performance to the fact that there are 5.25% words in the test corpus that are unseen in the training corpus, and are thus only recoverable by backing off to a larger corpus. Notably, only training on the larger background corpus does worse, at 8.7%, since the distribution of word frequencies is different in the background corpus compared to the foreground corpus. 4.2 Robustness to adversarial attacks We use sentiment analysis and paraphrase detection as downstream tasks, as for these two tasks, 1-2 character edits do not change the output labels. Experimental Setup For sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. The first architecture encodes the input sentence into a sequence of embeddings, which are then sequentially processed by a BiLSTM. The first and last states of the BiLSTM are then used by the softmax layer to predict the sentiment of the input. We consider three input formats for this architecture: (1) Word-only: where the input words are encoded using a lookup table; (2) Char-only: where the input words are encoded using a separate singlelayered BiLSTM over their characters; and (3) Word+Char: where the input words are encoded using a concatenation of (1) and (2) 6. The second architecture uses the fine-tuned BERT model (Devlin et al., 2018), with an input format of word-piece tokenization. This model has recently set a new state-of-the-art on several NLP benchmarks, including the sentiment analysis task we consider here. All models are trained and evaluated on the binary version of the sentence-level Stanford Sentiment Treebank (Socher et al., 2013) dataset with only positive and negative reviews. We also consider the task of paraphrase detection. Here too, we make use of the fine-tuned BERT (Devlin et al., 2018), which is trained and evaluated on the Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005). 6Implementation details: The embedding dimension size for the word, char and word+char models are 64, 32 and 64 + 32 respectively, with 64, 64 and 128 set as the hidden dimension sizes for the three models. Baseline defense strategies Two common methods for dealing with adversarial examples include: (1) data augmentation (DA) (Krizhevsky et al., 2012); and (2) adversarial training (Adv) (Goodfellow et al., 2014). In DA, the trained model is fine-tuned after augmenting the training set with an equal number of examples randomly attacked with a 1-character edit. In Adv, the trained model is fine-tuned with additional adversarial examples (selected at random) that produce incorrect predictions from the current-state classifier. The process is repeated iteratively, generating and adding newer adversarial examples from the updated classifier model, until the adversarial accuracy on dev set stops improving. Results In Table 3, we examine the robustness of the sentiment models under each attack and defense method. 
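For reference, the adversarial fine-tuning (Adv) baseline evaluated in Table 3 can be sketched as the following loop. This is our reading of the description above, not the authors' code; `model.predict`, `model.finetune`, and `attack_fn` are assumed interfaces.

```python
import random

def adversarial_accuracy(model, data, attack_fn):
    """Worst-case accuracy in the sense of Eq. (2): a sentence counts as correct
    only if every perturbation in A(s) is still classified correctly."""
    ok = sum(all(model.predict(p) == y for p in attack_fn(s)) for s, y in data)
    return ok / len(data)

def adversarial_finetune(model, train_data, dev_data, attack_fn, sample_size=1000):
    """Iteratively add adversarial examples that fool the current classifier,
    fine-tune on them, and stop when adversarial dev accuracy stops improving."""
    augmented = list(train_data)
    best = adversarial_accuracy(model, dev_data, attack_fn)
    while True:
        new_examples = []
        for s, y in random.sample(train_data, min(sample_size, len(train_data))):
            flips = [p for p in attack_fn(s) if model.predict(p) != y]
            if flips:
                new_examples.append((random.choice(flips), y))   # selected at random
        augmented += new_examples
        model.finetune(augmented)
        acc = adversarial_accuracy(model, dev_data, attack_fn)
        if acc <= best:
            break
        best = acc
    return model
```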
In the absence of any attack or defense, BERT (a word-piece model) performs the best (90.3%7) followed by word+char models (80.5%), word-only models (79.2%) and then char-only models (70.3%). However, even singlecharacter attacks (chosen adversarially) can be catastrophic, resulting in a significantly degraded performance of 46%, 57%, 59% and 33%, respectively under the ‘all’ setting. Intuitively, one might suppose that word-piece and character-level models would be more robust to such attacks given they can make use of the remaining context. However, we find that they are the more susceptible. To see why, note that the word ‘beautiful’ can only be altered in a few ways for word-only models, either leading to an UNK or an existing vocabulary word, whereas, word-piece and character-only models treat each unique character combination differently. This provides more variations that an attacker can exploit. Following similar reasoning, add and key attacks pose a greater threat than swap and drop attacks. The robustness of different models can be ordered as word-only > word+char > char-only ∼ word-piece, and the efficacy of different attacks as add > key > drop > swap. Next, we scrutinize the effectiveness of defense methods when faced against adversarially chosen attacks. Clearly from table 3, DA and Adv are not 7The reported accuracy on SST-B by BERT in Glue Benchmarks is slightly higher as it is trained and evaluated on phrase-level sentiment prediction task which has more training examples compared to the sentence-level task we consider. We use the official source code at https: //github.com/google-research/bert 5588 Sentiment Analysis (1-char attack/2-char attack) Model No attack Swap Drop Add Key All Word-Level Models BiLSTM 79.2 (64.3/53.6) (63.7/52.7) (60.0/43.2) (60.2/42.4) (58.6/40.2) BiLSTM + ATD 79.3 (76.2/75.3) (66.5/59.9) (55.6/47.5) (62.6/57.6) (55.8/37.0) BiLSTM + Pass-through 79.3 (78.6/78.5) (69.1/65.3) (65.0/59.2) (69.6/65.6) (63.2/52.4) BiLSTM + Background 78.8 (78.9/78.4) (69.6/66.8) (62.6/56.4) (68.2/62.2) (59.6/49.0) BiLSTM + Neutral 80.1 (80.1/79.9) (72.4/70.2) (67.2/61.2) (69.0/64.6) (63.2/54.0) Char-Level Models BiLSTM 70.3 (53.6/42.9) (48.8/37.1) (33.8/14.8) (40.8/22.0) (32.6/14.0) BiLSTM + ATD 71.0 (66.6/65.2) (58.0/53.0) (54.6/44.4) (61.6/57.5) (46.5/35.4) BiLSTM + Pass-through 70.3 (65.8/62.9) (58.3/54.2) (54.0/44.2) (58.8/52.4) (51.6/39.8) BiLSTM + Background 70.1 (70.3/69.8) (60.4/57.7) (57.4/52.6) (58.8/54.2) (53.6/47.2) BiLSTM + Neutral 70.7 (70.7/70.7) (62.1/60.5) (57.8/53.6) (61.4/58.0) (55.2/48.4) Word+Char Models BiLSTM 80.5 (63.9/52.3) (62.8/50.8) (57.8/39.8) (58.4/40.8) (56.6/35.6) BiLSTM + ATD 80.8 (78.0/77.3) (67.7/60.9) (55.6/50.5) (68.7/64.6) (48.5/37.4) BiLSTM + Pass-through 80.1 (79.0/78.7) (69.5/65.7) (64.0/59.0) (66.0/62.0) (61.5/56.5) BiLSTM + Background 79.5 (79.6/79.0) (69.7/66.7) (62.0/57.0) (65.0/56.5) (59.4/49.8) BiLSTM + Neutral 79.5 (79.5/79.4) (71.2/68.8) (65.0/59.0) (65.5/61.5) (61.5/55.5) Word-piece Models BERT 90.3 (64.1/47.4) (59.2/39.9) (46.2/26.4) (54.3/34.9) (45.8/24.6) BERT + DA 90.2 (68.3/50.6) (62.7/39.9) (43.6/17.0) (57.7/32.4) (41.0/15.8) BERT + Adv 89.6 (69.2/52.9) (63.6/40.5) (50.0/22.0) (60.1/36.6) (47.0/20.2) BERT + ATD 89.0 (84.5/84.5) (73.0/64.0) (77.0/69.5) (80.0/75.0) (67.0/55.0) BERT + Pass-through 89.8 (85.5/83.9) (78.9/75.0) (70.4/64.4) (75.3/70.3) (68.0/58.5) BERT + Background 89.3 (89.1/89.1) (79.3/76.5) (76.5/71.0) (77.5/74.4) (73.0/67.5) BERT + Neutral 88.3 (88.3/88.3) (81.1/79.5) (78.0/74.0) (78.8/76.8) 
(75.0/68.0) Table 3: Accuracy of various classification models, with and without defenses, under adversarial attacks. Even 1-character attacks significantly degrade classifier performance. Our defenses confer robustness, recovering over 76% of the original accuracy, under the ‘all’ setting for all four model classes. effective in this case. We observed that despite a low training error, these models were not able to generalize to attacks on newer words at test time. ATD spell corrector is the most effective on keyboard attacks, but performs poorly on other attack types, particularly the add attack strategy. The ScRNN model with pass-through backoff offers better protection, bringing back the adversarial accuracy within 5% range for the swap attack. It is also effective under other attack classes, and can mitigate the adversarial effect in wordpiece models by 21%, character-only models by 19%, and in word, and word+char models by over 4.5% . This suggests that the direct training signal of word error correction is more effective than the indirect signal of sentiment classification available to DA and Adv for model robustness. We observe additional gains by using background models as a backoff alternative, because of its lower word error rate (WER), especially, under the swap and drop attacks. However, these gains do not consistently translate in all other settings, as lower WER is necessary but not sufficient. Besides lower error rate, we find that a solid defense should furnish the attacker the fewest options to attack, i.e. it should have a low sensitivity. As we shall see in section § 4.3, the backoff neutral variation has the lowest sensitivity due to mapping UNK predictions to a fixed neutral word. Thus, it results in the highest robustness on most of the attack types for all four model classes. Model No Attack All attacks 1-char 2-char BERT 89.0 60.0 31.0 BERT + ATD 89.9 75.8 61.6 BERT + Pass-through 89.0 84.5 81.5 BERT + Neutral 84.0 82.5 82.5 Table 4: Accuracy of BERT, with and without defenses, on MRPC when attacked under the ‘all’ attack setting. 5589 Sensitivity Analysis Backoff Swap Drop Add Key All Closed Vocabulary Models (word-only) Pass-Through 17.6 19.7 0.8 7.3 11.3 Background 19.5 22.3 1.1 9.5 13.1 Neutral 17.5 19.7 0.8 7.2 11.3 Open Vocab. Models (char/word+char/word-piece) Pass-Through 39.6 35.3 19.2 26.9 30.3 Background 20.7 25.1 1.3 11.6 14.7 Neutral 17.5 19.7 0.8 7.2 11.3 Table 5: Sensitivity values for word recognizers. Neutral backoff shows lowest sensitivity. Table 4 shows the accuracy of BERT on 200 examples from the dev set of the MRPC paraphrase detection task under various attack and defense settings. We re-trained the ScRNN model variants on the MRPC training set for these experiments. Again, we find that simple 1-2 character attacks can bring down the accuracy of BERT significantly (89% to 31%). Word recognition models can provide an effective defense, with both our pass-through and neutral variants recovering most of the accuracy. While the neutral backoff model is effective on 2-char attacks, it hurts performance in the no attack setting, since it incorrectly modifies certain correctly spelled entity names. Since the two variants are already effective, we did not train a background model for this task. 4.3 Understanding Model Sensitivity Experimental setup To study model sensitivity, for each sentence, we perturb one randomlychosen word and replace it with all possible perturbations under a given attack type. 
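This setup yields a direct estimate of the sensitivity in Eq. (1): for each attacked sentence we count how many distinct downstream inputs survive word recognition. In the sketch below, `recognizer` stands for W, `to_downstream_ids` for V, and `perturb_fn` for A; these helper names are our own stand-ins.

```python
def sensitivity(sentences, perturb_fn, recognizer, to_downstream_ids):
    """Estimate S^A_{W,V}: the expected fraction of unique downstream
    representations induced by the perturbations of each sentence (Eq. 1)."""
    total = 0.0
    for s in sentences:
        variants = perturb_fn(s)                          # A(s): s'_1 ... s'_n
        unique = {tuple(to_downstream_ids(recognizer(v))) for v in variants}
        total += len(unique) / len(variants)
    return total / len(sentences)
```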
The resulting set of perturbed sentences is then fed to the word recognizer (whose sensitivity is to be estimated). As described in equation 1, we count the number of unique predictions from the output sentences. Two corrections are considered unique if they are mapped differently by the downstream classifier. Results The neutral backoff variant has the lowest sensitivity (Table 5). This is expected, as it returns a fixed neutral word whenever the ScRNN predicts an UNK , therefore reducing the number of unique outputs it predicts. Open vocabulary (i.e. char-only, word+char, word-piece) downstream classifiers consider every unique combination of characters differently, whereas wordonly classifiers internally treat all out of vocabulary (OOV) words alike. Hence, for char-only, 12 13 Sensitivity 6 7 8 9 10 11 12 WER 63.2 59.6 63.2 Pass-through Background Neutral 10 20 30 Sensitivity 7 8 9 10 11 WER 51.6 53.6 55.2 Figure 2: Effect of sensitivity and word error rate on robustness (depicted by the bubble sizes) in word-only models (left) and char-only models (right). word+char, and word-piece models, the passthrough version is more sensitive than the background variant, as it passes words as is (and each combination is considered uniquely). However, for word-only models, pass-through is less sensitive as all the OOV character combinations are rendered identical. Ideally, a preferred defense is one with low sensitivity and word error rate. In practice, however, we see that a low error rate often comes at the cost of sensitivity. We visualize this trade-off in Figure 2, where we plot WER and sensitivity on the two axes, and depict the robustness when using different backoff variants. Generally, sensitivity is the more dominant factor out of the two, as the error rates of the considered variants are reasonably low. Human Intelligibility We verify if the sentiment (of the reviews) is preserved with char-level attacks. In a human study with 50 attacked (and subsequently misclassified), and 50 unchanged reviews, it was noted that 48 and 49, respectively, preserved the sentiment. 5 Conclusion As character and word-piece inputs become commonplace in modern NLP pipelines, it is worth highlighting the vulnerability they add. We show that minimally-doctored attacks can bring down accuracy of classifiers to random guessing. We recommend word recognition as a safeguard against this and build upon RNN-based semi-character word recognizers. We discover that when used as a defense mechanism, the most accurate word recognition models are not always the most robust against adversarial attacks. Additionally, we highlight the need to control the sensitivity of these models to achieve high robustness. 5590 6 Acknowledgements The authors are grateful to Graham Neubig, Eduard Hovy, Paul Michel, Mansi Gupta, and Antonios Anastasopoulos for suggestions and feedback. References Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations (ICLR). Constance Bitso, Ina Fourie, and Theo JD Bothma. 2013. Trends in transition from classical censorship to internet censorship: selected country overviews. Innovation: journal of appropriate librarianship and information work in Southern Africa, 2013(46):166–191. Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. 
In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, pages 286–293, Stroudsburg, PA, USA. Association for Computational Linguistics. Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. 2018. Thermometer encoding: One hot way to resist adversarial examples. International Conference on Learning Representations (ICLR). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018a. On adversarial examples for character-level neural machine translation. In International Conference on Computational Linguistics (COLING). Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018b. Hotflip: White-box adversarial examples for nlp. In Association for Computational Linguistics (ACL). Giorgio Fumera, Ignazio Pillai, and Fabio Roli. 2006. Spam filtering based on the analysis of text information embedded into images. Journal of Machine Learning Research (JMLR). Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In Association for Computational Linguistics (ACL). Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR). Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. Empirical Methods in Natural Language Processing (EMNLP). Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (NIPS). Karen Kukich. 1992. Techniques for automatically correcting words in text. Acm Computing Surveys (CSUR), 24(4):377–439. Honglak Lee and Andrew Y Ng. 2005. Spam deobfuscation using a hidden markov model. In CEAS. Hao Li, Yang Wang, Xinyu Liu, Zhichao Sheng, and Si Wei. 2018. Spelling error correction using a nested rnn model and pseudo training data. arXiv preprint arXiv:1811.00238. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Association for Computational Linguistics (ACL). Matt Davis. 2003. Psycholinguistic evidence on scrambled letters in reading. https://www.mrc-cbu.cam.ac.uk/ people/matt.davis/cmabridge/. Eric Mays, Fred J. Damerau, and Robert L. Mercer. 1991. Context based spelling correction. Information Processing & Management, 27(5):517 – 522. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–14. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP). Graham Ernest Rawlinson. 1976. 
The significance of letter position in word recognition. Ph.D. thesis, University of Nottingham. Keisuke Sakaguchi, Kevin Duh, Matt Post, and Benjamin Van Durme. 2017. Robsut wrod reocginiton via semi-character recurrent neural network. In Association for the Advancement of Artificial Intelligence (AAAI). 5591 Allen Schmaltz, Yoon Kim, Alexander M. Rush, and Stuart Shieber. 2016. Sentence-level grammatical error identification as sequence-to-sequence correction. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 242–251, San Diego, CA. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Empirical Methods in Natural Language Processing (EMNLP). Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Weilin Xu, David Evans, and Yanjun Qi. 2017. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155. Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In International Conference on Learning Representations (ICLR).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5592–5598 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5592 An Empirical Investigation of Structured Output Modeling for Graph-based Neural Dependency Parsing Zhisong Zhang, Xuezhe Ma, Eduard Hovy Language Technologies Institute, Carnegie Mellon University {zhisongz,xuezhem}@cs.cmu.edu, [email protected] Abstract In this paper, we investigate the aspect of structured output modeling for the state-ofthe-art graph-based neural dependency parser (Dozat and Manning, 2017). With evaluations on 14 treebanks, we empirically show that global output-structured models can generally obtain better performance, especially on the metric of sentence-level Complete Match. However, probably because neural models already learn good global views of the inputs, the improvement brought by structured output modeling is modest. 1 Introduction In the past few years, dependency parsers, equipped with neural network models, have led to impressive empirical successes on parsing accuracy (Chen and Manning, 2014; Weiss et al., 2015; Dyer et al., 2015; Andor et al., 2016; Kiperwasser and Goldberg, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017; Ma et al., 2018). Among them, the deep-biaffine attentional parser (BiAF) (Dozat and Manning, 2017) has stood out for its simplicity and effectiveness. BiAF adopts a simple bi-directional LSTM neural architecture (Ma and Hovy, 2016; Kiperwasser and Goldberg, 2016) with the first-order graph parsing algorithm (McDonald et al., 2005a,b). Simple as it appears to be, BiAF has led to several recordbreaking performences in multiple treebanks and languages (Dozat et al., 2017). In their pioneering work, besides the neural architecture, Dozat and Manning (2017) adopt a simple head-selection training object (Zhang et al., 2017) by regarding the original structured prediction task as an head-classification task in training. Although practically this simplification works well, there are still problems with it. Due to local normalization in the training objective (see §2.2), no global tree-structured information can be back-propagated during training. This can lead to the discrepancy between training and testing, since during testing, the MST (Maximum Spanning Tree) algorithm (McDonald et al., 2005b) is used to ensure valid tree structures. This problem raises concerns about the structured output layer. Several previous neural graph parsers utilized structured techniques (Pei et al., 2015; Kiperwasser and Goldberg, 2016; Zhang et al., 2016; Wang and Chang, 2016; Ma and Hovy, 2017), but their neural architectures might not be competitive to the current state-of-the-art BiAF parsing model. In this paper, building upon the BiAF based neural architecture, we empirically investigate the effectiveness of utilizing classical structured prediction techniques of output modeling for graph-based neural dependency parsing. We empirically show that structured output modeling can obtain better performance, especially on the the sentence-level metrics. However, the improvements are modest, probably because neural models make the problem easier to solve locally. 2 Output Modeling In structured prediction tasks, a structured output y is predicted given an input x. We refer to the encoding of the x as input modeling, and the modeling of the structured output y as output modeling. 
Output modeling concerns modeling dependencies and interactions across multiple output components and assigning them proper scores. A common strategy to score the complex output structure is to factorize it into sub-structures, which is referred as factorization. A further step of normalization is needed to form the final score of an output structure. We will explain more details about these concepts in the situation of graph-based dependency parsing. 5593 2.1 Factorization The output structure of dependency parsing is a collection of dependency edges forming a singlerooted tree. Graph-based dependency parsers factorize the outputs into specifically-shaped subtrees (factors). Based on the assumption that the sub-trees are independent to each other, the score of the output tree structure (T) is the combination of the scores of individual sub-trees in the tree. In the simplest case, the sub-trees are the individual dependency edges connecting each modifier and its head word ((m, h)). This is referred to as first-order factorization (Eisner, 1996; McDonald et al., 2005a), which is adopted in (Dozat and Manning, 2017) and the neural parsing models in this work. There are further extensions to higherorder factors, considering more complex sub-trees with multiple edges (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012). We leave the exploration of these higher-order graph models to future work. 2.2 Normalization After obtaining the individual scores of the substructures, we need to compute the score of the whole output structure. The main question is on what scale to normalize the output scores. For graph-based parsing, there can be mainly three options: Global, Local or Single, following different structured output constraints and corresponding to different loss functions. Global Global models directly normalize at the level of overall tree structures, whose scores are obtained by directly summing the raw scores of the sub-trees without any local normalization. This can be shown clearly if further taking a probabilistic CRF-like treatment, where a final normalization is performed over all possible trees: Scoreg(T) = log exp P (m,h)2T Score(m, h) P T 0 exp P (m,h)2T 0 Score(m, h) Here, the normalization is carried out in the exact output space of all legal trees (T 0). MaxMargin (Hinge) loss (Taskar et al., 2004) adopts the similar idea, though there is no explicit normalization in its formulation. The output space can be further constrained by requiring the projectivity of the trees (Kubler et al., 2009). Several manual-feature-based (McDonald et al., 2005b; Koo and Collins, 2010) and neural-based dependency parsers (Pei et al., 2015; Kiperwasser and Goldberg, 2016; Zhang et al., 2016; Ma and Hovy, 2017) utilize global normalization. Local Local models, in contrast, ignore the global tree constraints and view the problem as a head-selection classification problem (Fonseca and Alu´ısio, 2015; Zhang et al., 2017; Dozat and Manning, 2017). The structured constraint that local models follow is that each word can be attached to one and only one head node. Based on this, the edge scores are locally normalized over all possible head nodes. This can be framed as the softmax output if taking a probabilistic treatment: Scorel(T) = X (m,h)2T log exp Score(m, h) P h0 exp Score(m, h0) In this way, the model only sees and learns headattaching decisions for each individual words. 
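For concreteness, the locally normalized head-selection objective reduces to a per-token softmax over candidate heads, given the arc score matrix produced by the BiAF scorer. The PyTorch sketch below is our own illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def local_loss(arc_scores, gold_heads):
    """Locally normalized (head-selection) loss.

    arc_scores: [n_words, n_candidates] tensor with arc_scores[m, h] = Score(m, h),
                where column 0 is reserved for the artificial ROOT node.
    gold_heads: [n_words] tensor of gold head indices, one per modifier.
    """
    # Each modifier independently picks one head: a softmax over the columns.
    return F.cross_entropy(arc_scores, gold_heads)

# Toy example: 3 words, candidate heads {ROOT, w1, w2, w3}.
scores = torch.randn(3, 4)
heads = torch.tensor([0, 1, 1])          # w1 <- ROOT, w2 <- w1, w3 <- w1
print(local_loss(scores, heads))
```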
Therefore, the model is unaware of the global tree structures and may assign probabilities to non-tree cyclic structures, which are illegal outputs for dependency parsing. In spite of this defect, the local model enjoys its merits of simplicity and efficiency in training. Single (Binary) If further removing the singlehead constraint, we can arrive at a more simplified binary-classification model for each single edge, referred as the “Single” model, which predicts the presences and absences of dependency relation for every pair of words. Eisner (1996) first used this model in syntactic dependency parsing, and Dozat and Manning (2018) applied it to semantic dependency parsing. Here, the score of each edge is normalized against a fixed score of zero, forming a sigmoid output: Scores(T) = X (m,h)2T log exp Score(m, h) exp Score(m, h) + 1 Here, we only show the scoring formula for brevity. In training, since this binary classification problem can be quite imbalanced, we only sample partial of the negative instances (edges). Practically, we find a ratio of 2:1 makes a good balance, that is, for each token, we use its correct head word as the positive instance and randomly sample two other tokens in the sentence as negative instances. 2.3 Summary The normalization methods that we describe above actually indicate the output structured constraints 5594 Normalization Loss Algorithm Single Prob – Local Prob – Global-NProj Prob Matrix-Tree Theorem Hinge Chu-Liu-Edmonds Global-Proj Prob Inside-Outside Hinge Eisner’s Table 1: Summarization of the methods explored in this work and their corresponding algorithms. that the model is aware of. The global model is aware of all the constraints to ensure a legal dependency tree. The local model maintains the single-head constraint while there are almost no structured constrains under the single model. To be noted, for all these normalization methods, we can take various loss functions. In this work, we study two typical ones: probabilistic MaximumLikelihood loss (Prob), which requires actual normalization over the output space, and Max-Margin Hinge loss (Hinge), which only requires lossaugmented decoding in the same output space. Table 1 summarizes the methods (normalization and loss function) that we investigate in our experiments. For global models, we consider both Projective (Proj) and Non-Projective (NProj) constraints. Specific algorithms are required for probabilistic loss (a variation of Inside-Outside algorithm for projective (Paskin, 2001) and MatrixTree Theorem for non-projective parsing (Koo et al., 2007; Smith and Smith, 2007; McDonald and Satta, 2007)) and hinge loss (Eisner’s algorithm for projective (Eisner, 1996) and ChuLiu-Edmonds’ algorithm for non-projective parsing (Chu and Liu, 1965; Edmonds, 1967; McDonald et al., 2005b)). For Single and Local models, we only utilize probabilistic loss, since in preliminary experiments we found hinge loss performed worse. No special algorithms other than simple enumeration are needed for them in training. In testing, we adopt non-projective algorithms for the non-global models unless otherwise noted. 3 Experiments 3.1 Settings We evaluate the parsers on 14 treebanks: English Penn Treebank (PTB), Penn Chinese Treebank (CTB) and 12 selected treebanks from Universal Dependencies (v2.3) (Nivre et al., 2018). We follow standard data preparing conventions as in Ma et al. (2018). Please refer to the supplementary material for more details of data preparation. 
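The Single model's sampled binary objective described in §2.2 (one gold head as the positive instance and two sampled tokens as negatives per modifier) can likewise be sketched in a few lines; this is our own illustrative simplification, not the released implementation.

```python
import torch
import torch.nn.functional as F

def single_loss(arc_scores, gold_heads, neg_per_token=2):
    """Binary (Single) loss with sampled negatives at a 2:1 negative:positive ratio."""
    n, m = arc_scores.shape                        # [n_words, n_candidates]
    pos = arc_scores[torch.arange(n), gold_heads]
    loss = F.binary_cross_entropy_with_logits(pos, torch.ones(n))
    for _ in range(neg_per_token):
        neg_idx = torch.randint(m, (n,))
        # Re-sample any index that accidentally hits the gold head.
        neg_idx = torch.where(neg_idx == gold_heads, (neg_idx + 1) % m, neg_idx)
        neg = arc_scores[torch.arange(n), neg_idx]
        loss = loss + F.binary_cross_entropy_with_logits(neg, torch.zeros(n))
    return loss

scores = torch.randn(3, 4)                         # 3 words, 4 head candidates (incl. ROOT)
heads = torch.tensor([0, 1, 1])
print(single_loss(scores, heads))
```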
For the neural architecture, we also follow the settings in Dozat and Manning (2017) and Ma et al. (2018) and utilize the deep BiAF model. For the input, we concatenate representations of word, part-of-speech (POS) tags and characters. Word embeddings are initialized with the pre-trained fasttext word vectors1 for all languages. For POS tags and Character information, we use POS embeddings and a character-level Convolutional Neural Network (CNN) for the encoding. For the encoder, we adopt three layers of bi-directional LSTM to get contextualized representations, while our decoder is the deep BiAF scorer as in Dozat and Manning (2017). We only slightly tune hyperparameters on the Local model and the development set of PTB, and then use the same ones for all the models and datasets. More details of hyperparameter settings are provided in the supplementary material. Note that our exploration only concerns the final output layer which does not contain any trainable parameters in the neural model, and all our comparisons are based on exactly the same neural architecture and hyper-parameter settings. Only the output normalization methods and the loss functions are different. We run all the experiments with our own implementation2, which is written with PyTorch. All experiments are run with one TITAN-X GPU. In training, global models take around twice the time of the local and single models; while in testing, their decoding costs are similar. 3.2 Results We run all the models three times with different random initialization, and the averaged results on the test sets are shown in Table 2. Due to space limitation, we only report LAS (Labeled Attachment Score) and LCM (Labeled Complete Match) in the main content. We also include the unlabeled scores UAS (Unlabeled Attachment Score) and UCM (Unlabeled Complete Match) in the supplementary material. The evaluations on PTB and CTB exclude punctuations3, while on UD we evaluate on all tokens (including punctuations) as the setting of the LAS metric in the CoNLL shared tasks (Zeman et al., 2017, 2018). 
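For reference, the two reported metrics can be computed as follows. This sketch is ours; the optional punctuation mask implements the PTB/CTB exclusion described above, and edge-case handling (e.g., fully excluded sentences) follows our own simplification.

```python
def las_lcm(gold_sents, pred_sents, punct_masks=None):
    """Labeled Attachment Score and Labeled Complete Match.

    gold_sents / pred_sents: per sentence, a list of (head, label) pairs.
    punct_masks: optional per-sentence booleans marking tokens to exclude
    (punctuation for PTB/CTB; None evaluates all tokens, as on UD).
    """
    correct = total = complete = 0
    for k, (gold, pred) in enumerate(zip(gold_sents, pred_sents)):
        mask = punct_masks[k] if punct_masks else [False] * len(gold)
        sent_ok = True
        for g, p, skip in zip(gold, pred, mask):
            if skip:
                continue
            total += 1
            if g == p:
                correct += 1
            else:
                sent_ok = False
        complete += int(sent_ok)
    return 100.0 * correct / total, 100.0 * complete / len(gold_sents)

gold = [[(2, "nsubj"), (0, "root")]]
pred = [[(2, "nsubj"), (0, "root")]]
print(las_lcm(gold, pred))        # (100.0, 100.0)
```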
1https://fasttext.cc/docs/en/pretrained-vectors.html 2Our implementation is publicly available at https:// github.com/zzsfornlp/zmsp 3Tokens whose gold POS tag is one of {“ ” : , .} for PTB or “PU” for CTB 5595 Method Single Local Global-NProj Global-Proj Prob Prob Prob Hinge Prob Hinge PTB 93.43/44.67 93.75/46.65 93.84†/47.17 93.91†/47.78† 93.79/47.16 93.96†/48.47† CTB 87.03/31.26 88.16/33.16 88.26/33.73 87.92/32.77 88.46†/35.11† 88.14/34.00† bg-btb 89.97/39.25 90.06/39.99 90.35†/41.25† 90.42†/40.83 90.15/40.98 90.20/40.53 ca-ancora 91.23/25.03 91.54/26.35 91.73†/27.19† 91.73†/26.65 91.39/27.39† 91.51/27.19† cs-pdt 90.95/43.07 91.51/45.62 91.69†/46.60† 91.52/46.02† 91.10/44.43 91.18/44.02 de-gsd 83.68†/22.65 83.43/22.42 83.65†/22.86 83.66†/22.93 83.39/23.37† 83.63/23.51† en-ewt 88.01/55.93 88.33/56.46 88.52†/57.29† 88.59†/57.33† 88.52†/58.29† 88.41/57.31† es-ancora 90.82/27.27 91.05/27.41 91.12/27.89 91.14/27.35 90.84/28.41† 91.03/27.70 fr-gsd 88.00/20.03 88.13/20.83 88.43†/21.71 88.22/20.27 88.59†/23.80† 88.41†/21.88 it-isdt 91.71/44.05 92.01/44.26 92.16/45.30 92.08/45.02 92.49†/48.27† 92.37†/46.75† nl-alpino 88.31/33.11 88.81/33.67 88.94/34.62 88.94/35.12† 88.37/33.05 88.45/33.00 no-bokmaal 92.89/53.60 92.89/53.58 93.02†/54.36† 92.78/53.09 92.82/53.57 92.70/52.71 ro-rrt 85.10†/12.85† 84.58/11.57 84.85†/12.44 85.04†/13.03† 84.89†/12.94† 85.16†/13.76† ru-syntagrus 92.76/48.67 93.29/50.69 93.36†/50.97 93.29/50.72 93.11/50.79 93.19/50.17 Average 89.56/35.82 89.82/36.62 89.99†/37.39† 89.95†/37.07† 89.85/37.68† 89.88/37.21† Table 2: Results (LAS/LCM) on the test sets (averaged over three runs). ‘†’ means that the result of the model is statistically significantly better (by permutation test, p < 0.05) than the Local-Prob model. Overall, the global models4 perform better consistently, especially on the metrics of Complete Match, showing the effectiveness of being aware of global structures. However, the performance gaps between global models and local models are small. More surprisingly, the single models that ignore all the structures only lag behind by around 0.4 averagely. In some way, this shows that input modeling, including the distributed input representations, contextual encoders and parts of the decoders, makes the structured decision problem easier to solve locally. Neural models seem to squeeze the improvement space that structured output modeling can bring. 3.3 Analysis We further analyze on output constraints and input modeling. For brevity, we only analyze on PTB and use probabilistic models. Single models are excluded for their poorer performance. Firstly, we study the influence of output constraint differences in training and testing. Here, we include a naive “Greedy” decoding algorithm which simply selects the most probable head for each token. This does not ensure that the outputs are trees and corresponds to the head-classification method adopted by local models. The results of different models and training/testing algorithms are shown in Figure 1. Interestingly, the discrepancies in training and testing are only detrimen4Projective global models perform averagely poorer than non-projective ones, since some of the treebanks (for example, only 88% of the trees in ‘cs-pdt’ are projective) contain a non-negligible portion of non-projective trees. Figure 1: Results (LAS/LCM, on the PTB test set) of different models (with prob loss) and decoding algorithms. Rows represent the methods used in training and columns denote the decoding algorithms in testing. 
Darker colors represent better scores. tal when the output constraint in testing is looser than that in training (the left corner in the figure), as shown by the poorer results in the trainingtesting pairs of “NProj-Greedy”, “Proj-Greedy” and “Proj-NProj”. Generally, projective decoding is the best choice since PTB contains mostly (99.9%) projective trees. Next, we study the interactions of “weaker” neural architectures (for input modeling) and output modeling. We consider three “weaker” models: (1) “No-Word” ignores all the lexical inputs and is a pure delexicalized model; (2) “SimpleCNN” replaces the RNN encoder with a much simpler encoder, which is a simple single-layer CNN with a window size of three for the purpose of studying weak models; (3) “No-Encoder” com5596 Figure 2: Evaluation differences (on the PTB test set) between global and local methods when adopting various “weaker” neural architectures. Numbers below xaxis labels denote the evaluation scores (LAS/LCM) of the local models. pletely deletes the encoder, leading to a model that does not take any contextual information. Here, since we are testing on PTB which almost contain only projective trees, we use projective decoding for all models. As shown in Figure 2, when input modeling is weaker, the improvements brought by the global model generally get larger. Here, the LCM for “No-Encoder” is an outlier, probably because this model is too weak to get reasonable complete matches. The results show that with weaker input modeling, the parser can generally benefit more from structured output modeling. In some way, this also indicates that better input modeling can make the problem depend less on the global structures so that local models are able to obtain competitive performance. 4 Discussion and Conclusion In this paper, we call the models that are aware of the whole output structures “global”. In fact, with the neural architecture that can capture features from the whole input sentence, actually all the models we explore have a “global” view of inputs. Our experiments show that with this kind of global input modeling, good results can be obtained even when ignoring certain output structures, and further enhancement of global output structures only provides small benefits. This might suggest that input and output modeling can capture certain similar information and have overlapped functionalities for the structured decisions. In future work, there can be various possible extensions. We will explore more about the interactions between input and output modeling for structured prediction tasks. It will be also interesting to adopt even stronger input models, especially, those enhanced with contextualized representations from Elmo (Peters et al., 2018) or BERT (Devlin et al., 2018). A limitation of this work is that we only explore first-order graph based parser, that is, for the factorization part, we do not consider high-order sub-subtree structures. This part will surely be interesting and important to explore. Acknowledgement This research was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442–2452, Berlin, Germany. Association for Computational Linguistics. Xavier Carreras. 
2007. Experiments with a higherorder projective dependency parser. In Proceedings of the CoNLL Shared Task Session of EMNLPCoNLL 2007, pages 957–961, Prague, Czech Republic. Association for Computational Linguistics. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750, Doha, Qatar. Association for Computational Linguistics. Y.J. Chu and T.H. Liu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396– 1400. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR. Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484–490, Melbourne, Australia. Association for Computational Linguistics. Timothy Dozat, Peng Qi, and Christopher D Manning. 2017. Stanford’s graph-based neural dependency parser at the conll 2017 shared task. Proceedings 5597 of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343, Beijing, China. Association for Computational Linguistics. Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, B, 71:233–240. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics, pages 340–345, Copenhagen. Erick Fonseca and Sandra Alu´ısio. 2015. A deep architecture for non-projective dependency parsing. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 56–61, Denver, Colorado. Association for Computational Linguistics. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11, Uppsala, Sweden. Association for Computational Linguistics. Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 141–150, Prague, Czech Republic. Association for Computational Linguistics. Sandra Kubler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Morgan & Claypool Publishers. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one mst parser. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1744–1753, Austin, Texas. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2017. Neural probabilistic model for non-projective mst parsing. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 59–69, Taipei, Taiwan. Asian Federation of Natural Language Processing. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stackpointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1403–1414, Melbourne, Australia. Association for Computational Linguistics. Xuezhe Ma and Hai Zhao. 2012. Fourth-order dependency parsing. In Proceedings of COLING 2012: Posters, pages 785–796, Mumbai, India. The COLING 2012 Organizing Committee. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 91–98, Ann Arbor, Michigan. Association for Computational Linguistics. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of the 11th Conference of the European Chapter of the ACL (EACL 2006), pages 81–88. Association for Computational Linguistics. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523–530, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 121– 132, Prague, Czech Republic. Association for Computational Linguistics. Joakim Nivre, Mitchell Abrams, ˇZeljko Agi´c, and et al. 2018. Universal dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Mark A Paskin. 2001. Cubic-time parsing and learning algorithms for grammatical bigram models. Technical report. 5598 Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 313–322, Beijing, China. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. David A. 
Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 132–140, Prague, Czech Republic. Association for Computational Linguistics. Ben Taskar, Dan Klein, Mike Collins, Daphne Koller, and Christopher Manning. 2004. Max-margin parsing. In Proceedings of EMNLP 2004, pages 1– 8, Barcelona, Spain. Association for Computational Linguistics. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 173–180. Association for Computational Linguistics. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional lstm. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2306–2315, Berlin, Germany. Association for Computational Linguistics. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323–333, Beijing, China. Association for Computational Linguistics. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21, Brussels, Belgium. Association for Computational Linguistics. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, V´aclava Kettnerov´a, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missil¨a, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, H´ector Mart´ınez Alonso, C¸ a˘grı C¸ ¨oltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadov´a, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–19, Vancouver, Canada. Association for Computational Linguistics. Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665–676, Valencia, Spain. 
Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 562–571. Association for Computational Linguistics. Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1382–1392, Berlin, Germany. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5599–5611 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5599 Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes Jie Cao†, Michael Tanana‡, Zac E. Imel‡, Eric Poitras‡, David C. Atkins♦, Vivek Srikumar† †School of Computing, University of Utah ‡Department of Educational Psychology, University of Utah ♦Department of Psychiatry and Public Health, University of Washington {jcao, svivek}@cs.utah.edu, {michael.tanana, zac.imel, eric.poitras}@utah.edu, [email protected] Abstract Automatically analyzing dialogue can help understand and guide behavior in domains such as counseling, where interactions are largely mediated by conversation. In this paper, we study modeling behavioral codes used to asses a psychotherapy treatment style called Motivational Interviewing (MI), which is effective for addressing substance abuse and related problems. Specifically, we address the problem of providing real-time guidance to therapists with a dialogue observer that (1) categorizes therapist and client MI behavioral codes and, (2) forecasts codes for upcoming utterances to help guide the conversation and potentially alert the therapist. For both tasks, we define neural network models that build upon recent successes in dialogue modeling. Our experiments demonstrate that our models can outperform several baselines for both tasks. We also report the results of a careful analysis that reveals the impact of the various network design tradeoffs for modeling therapy dialogue. 1 Introduction Conversational agents have long been studied in the context of psychotherapy, going back to chatbots such as ELIZA (Weizenbaum, 1966) and PARRY (Colby, 1975). Research in modeling such dialogue has largely sought to simulate a participant in the conversation. In this paper, we argue for modeling dialogue observers instead of participants, and focus on psychotherapy. An observer could help an ongoing therapy session in several ways. First, by monitoring fidelity to therapy standards, a helper could guide both veteran and novice therapists towards better patient outcomes. Second, rather than generating therapist utterances, it could suggest the type of response that is appropriate. Third, it could alert a therapist about potentially important cues from a patient. Such assistance would be especially helpful in the increasingly prevalent online or text-based counseling services.1 We ground our study in a style of therapy called Motivational Interviewing (MI, Miller and Rollnick, 2003, 2012), which is widely used for treating addiction-related problems. To help train therapists, and also to monitor therapy quality, utterances in sessions are annotated using a set of behavioral codes called Motivational Interviewing Skill Codes (MISC, Miller et al., 2003). Table 1 shows standard therapist and patient (i.e., client) codes with examples. Recent NLP work (Tanana et al., 2016; Xiao et al., 2016; P´erez-Rosas et al., 2017; Huang et al., 2018, inter alia) has studied the problem of using MISC to assess completed sessions. Despite its usefulness, automated post hoc MISC labeling does not address the desiderata for ongoing sessions identified above; such models use information from utterances yet to be said. To provide real-time feedback to therapists, we define two complementary dialogue observers: 1. 
Categorization: Monitoring an ongoing session by predicting MISC labels for therapist and client utterances as they are made. 2. Forecasting: Given a dialogue history, forecasting the MISC label for the next utterance, thereby both alerting or guiding therapists. Via these tasks, we envision a helper that offers assistance to a therapist in the form of MISC labels. We study modeling challenges associated with these tasks related to: (1) representing words and utterances in therapy dialogue, (2) ascertaining relevant aspects of utterances and the dialogue history, and (3) handling label imbalance (as evidenced in Table 1). We develop neural models that address these challenges in this domain. Experiments show that our proposed models 1For example, Crisis Text Line (https://www. crisistextline.org), 7 Cups (https://www.7cups.com), etc. 5600 Code Count Description Examples Client Behavioral Codes FN 47715 Follow/ Neutral: unrelated to changing or sustaining behavior. “You know, I didn’t smoke for a while.” “I have smoked for forty years now.” CT 5099 Utterances about changing unhealthy behavior. “I want to stop smoking.” ST 4378 Utterances about sustaining unhealthy behavior. “I really don’t think I smoke too much.” Therapist Behavioral Codes FA 17468 Facilitate conversation “Mm Hmm.”, “OK.”,“Tell me more.” GI 15271 Give information or feedback. “I’m Steve.”, “Yes, alcohol is a depressant.” RES 6246 Simple reflection about the clients most recent utterance. C: “I didn’t smoke last week” T: “Cool, you avoided smoking last week.” REC 4651 Complex reflection based on a client’s history or the broader conversation. C: “I didn’t smoke last week.” T: “You mean things begin to change”. QUC 5218 Closed question “Did you smoke this week?” QUO 4509 Open question “Tell me more about your week.” MIA 3869 Other MI adherent,e.g., affirmation, advising with permission, etc. “You’ve accomplished a difficult task.” “Is it OK if I suggested something?” MIN 1019 MI non-adherent, e.g., confrontation, advising without permission, etc. “You hurt the baby’s health for cigarettes?” “You ask them not to drink at your house.” Table 1: Distribution, description and examples of MISC labels. outperform baselines by a large margin. For the categorization task, our models even outperform previous session-informed approaches that use information from future utterances. For the more difficult forecasting task, we show that even without having access to an utterance, the dialogue history provides information about its MISC label. We also report the results of an ablation study that shows the impact of the various design choices.2. In summary, in this paper, we (1) define the tasks of categorizing and forecasting Motivational Interviewing Skill Codes to provide real-time assistance to therapists, (2) propose neural models for both tasks that outperform several baselines, and (3) show the impact of various modeling choices via extensive analysis. 2 Background and Motivation Motivational Interviewing (MI) is a style of psychotherapy that seeks to resolve a client’s ambivalence towards their problems, thereby motivating behavior change. Several meta-analyses and empirical studies have shown the high efficacy and success of MI in psychotherapy (Burke et al., 2004; Martins and McNeil, 2009; Lundahl et al., 2010). However, MI skills take practice to master and require ongoing coaching and feedback to sustain (Schwalbe et al., 2014). 
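Since the grouped MISC codes in Table 1 are referred to throughout the rest of the paper, the inventory can be written down compactly. The following is a small reference sketch (Python; the constant names are ours, the counts are the corpus frequencies listed in Table 1):

```python
# Client codes (Table 1): follow/neutral, change talk, sustain talk.
CLIENT_CODES = {"FN": 47715, "CT": 5099, "ST": 4378}

# Therapist codes (Table 1): facilitate, give information, simple/complex
# reflection, closed/open question, MI-adherent, MI-non-adherent.
THERAPIST_CODES = {
    "FA": 17468, "GI": 15271, "RES": 6246, "REC": 4651,
    "QUC": 5218, "QUO": 4509, "MIA": 3869, "MIN": 1019,
}
```

The heavy skew towards FN and FA visible in these counts is what motivates the label-imbalance treatment in §4.4.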
Given the emphasis on using specific types of linguistic behaviors 2The code is available online at https://github.com/ utahnlp/therapist-observer. in MI (e.g., open questions and reflections), finegrained behavioral coding plays an important role in MI theory and training. Motivational Interviewing Skill Codes (MISC, table 1) is a framework for coding MI sessions. It facilitates evaluating therapy sessions via utterance-level labels that are akin to dialogue acts (Stolcke et al., 2000; Jurafsky and Martin, 2019), and are designed to examine therapist and client behavior in a therapy session.3 As Table 1 shows, client labels mark utterances as discussing changing or sustaining problematic behavior (CT and ST, respectively) or being neutral (FN). Therapist utterances are grouped into eight labels, some of which (RES, REC) correlate with improved outcomes, while MI non-adherent (MIN) utterances are to be avoided. MISC labeling was originally done by trained annotators performing multiple passes over a session recording or a transcript. Recent NLP work speeds up this process by automatically annotating a completed MI session (e.g., Tanana et al., 2016; Xiao et al., 2016; P´erez-Rosas et al., 2017). Instead of providing feedback to a therapist after the completion of a session, can a dialogue observer provide online feedback? While past work has shown the helpfulness of post hoc eval3The original MISC description of Miller et al. (2003) included 28 labels (9 client, 19 therapist). Due to data scarcity and label confusion, various strategies are proposed to merge the labels into a coarser set. We adopt the grouping proposed by Xiao et al. (2016); the appendix gives more details. 5601 i si ui li 1 T: Have you used drugs recently? QUC 2 C: I stopped for a year, but relapsed. FN 3 T: You will suffer if you keep using. MIN 4 C: Sorry, I just want to quit. CT · · · · · · · · · Table 2: An example of ongoing therapy session uations of a session, prompt feedback would be more helpful, especially for MI non-adherent responses. Such feedback opens up the possibility of the dialogue observer influencing the therapy session. It could serve as an assistant that offers suggestions to a therapist (novice or veteran) about how to respond to a client utterance. Moreover, it could help alert the therapist to potentially important cues from the client (specifically, CT or ST). 3 Task Definitions In this section, we will formally define the two NLP tasks corresponding to the vision in §2 using the conversation in table 2 as a running example. Suppose we have an ongoing MI session with utterances u1, u2, · · · , un: together, the dialogue history Hn. Each utterance ui is associated with its speaker si, either C (client) or T (therapist). Each utterance is also associated with the MISC label li, which is the object of study. We will refer to the last utterance un as the anchor. We will define two classification tasks over a fixed dialogue history with n elements — categorization and forecasting. As the conversation progresses, the history will be updated with a sliding window. Since the therapist and client codes share no overlap, we will design separate models for the two speakers, giving us four settings in all. Task 1: Categorization. The goal of this task is to provide real-time feedback to a therapist during an ongoing MI session. 
In the running example, the therapist’s confrontational response in the third utterance is not MI adherent (MIN); an observer should flag it as such to bring the therapist back on track. The client’s response, however, shows an inclination to change their behavior (CT). Alerting a therapist (especially a novice) can help guide the conversation in a direction that encourages it. In essence, we have the following real-time classification task: Given the dialogue history Hn which includes the speaker information, predict the MISC label ln for the last utterance un. The key difference from previous work in predicting MISC labels is that we are restricting the input to the real-time setting. As a result, models can only use the dialogue history to predict the label, and in particular, we can not use models such as a conditional random field or a bi-directional LSTM that need both past and future inputs. Task 2: Forecasting. A real-time therapy observer may be thought of as an expert therapist who guides a session with suggestions to the therapist. For example, after a client discloses their recent drug use relapse, a novice therapist may respond in a confrontational manner (which is not recommended, and hence coded MIN). On the other hand, a seasoned therapist may respond with a complex reflection (REC) such as “Sounds like you really wanted to give up and you’re unhappy about the relapse.” Such an expert may also anticipate important cues from the client. The forecasting task seeks to mimic the intent of such a seasoned therapist: Given a dialogue history Hn and the next speaker’s identity sn+1, predict the MISC code ln+1 of the yet unknown next utterance un+1. The MISC forecasting task is a previously unstudied problem. We argue that forecasting the type of the next utterance, rather than selecting or generating its text as has been the focus of several recent lines of work (e.g., Schatzmann et al., 2005; Lowe et al., 2015; Yoshino et al., 2018), allows the human in the loop (the therapist) the freedom to creatively participate in the conversation within the parameters defined by the seasoned observer, and perhaps even rejecting suggestions. Such an observer could be especially helpful for training therapists (Imel et al., 2017). The forecasting task is also related to recent work on detecting antisocial comments in online conversations (Zhang et al., 2018) whose goal is to provide an early warning for such events. 4 Models for MISC Prediction Modeling the two tasks defined in §3 requires addressing four questions: (1) How do we encode a dialogue and its utterances? (2) Can we discover discriminative words in each utterance? (3) Can we discover which of the previous utterances are relevant? (4) How do we handle label imbalance in our data? Many recent advances in neural networks can be seen as plug-and-play components. To facilitate the comparative study of models, we will describe components that address the above 5602 questions. In the rest of the paper, we will use boldfaced terms to denote vectors and matrices and SMALL CAPS to denote component names. 4.1 Encoding Dialogue Since both our tasks are classification tasks over a dialogue history, our goal is to convert the sequence of utterences into a single vector that serves as input to the final classifier. 
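Concretely, the two classification problems that this single vector feeds into have the following shape. This is an illustrative sketch only; the `Utterance` dataclass, the `model` object, and its method names are placeholders of ours, not part of the released code:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Utterance:
    speaker: str                   # "T" (therapist) or "C" (client)
    text: str
    label: Optional[str] = None    # gold MISC code, when annotated

# Task 1 (categorization): the history H_n ends with the anchor utterance u_n,
# which has already been spoken; predict its MISC label l_n.
def categorize(history: List[Utterance], model) -> str:
    return model.predict_current(history)

# Task 2 (forecasting): only H_n and the identity of the next speaker are known;
# predict the MISC label l_{n+1} of the utterance that has not been said yet.
def forecast(history: List[Utterance], next_speaker: str, model) -> str:
    return model.predict_next(history, next_speaker)
```

In both cases the history H_n is a fixed-size sliding window over the session, and separate models are trained for the client and therapist label sets, since the two sets do not overlap.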
We will use a hierarchical recurrent encoder (Li et al., 2015; Sordoni et al., 2015; Serban et al., 2016, and others) to encode dialogues, specifically a hierarchical gated recurrent unit (HGRU) with an utterance and a dialogue encoder. We use a bidirectional GRU over word embeddings to encode utterances. As is standard, we represent an utterance ui by concatenating the final forward and reverse hidden states. We will refer to this utterance vector as vi. Also, we will use the hidden states of each word as inputs to the attention components in §4.2. We will refer to such contextual word encoding of the jth word as vij. The dialogue encoder is a unidirectional GRU that operates on a concatenation of utterance vectors vi and a trainable vector representing the speaker si.4 The final state of the GRU aggregates the entire dialogue history into a vector Hn. The HGRU skeleton can be optionally augmented with the word and dialogue attention described next. All the models we will study are twolayer MLPs over the vector Hn that use a ReLU hidden layer and a softmax layer for the outputs. 4.2 Word-level Attention Certain words in the utterance history are important to categorize or forecast MISC labels. The identification of these words may depend on the utterances in the dialogue. For example, to identify that an utterance is a simple reflection (RES) we may need to discover that the therapist is mirroring a recent client utterance; the example in table 1 illustrates this. Word attention offers a natural mechanism for discovering such patterns. We can unify a broad collection of attention mechanisms in NLP under a single high level architecture (Galassi et al., 2019). We seek to define attention over the word encodings vij in the history (called queries), guided by the word encodings in the anchor vnk (called keys). The output is 4For the dialogue encoder, we use a unidirectional GRU because the dialogue is incomplete. For words, since the utterances are completed, we can use a BiGRU. Method fm fc BiDAF vnkvT ij [vij; aij; vij ⊙aij; vij ⊙a′] GMGRU we tanh(W kvnk [vij; aij] + W q[vij; hj−1]) Table 3: Summary of word attention mechanisms. We simplify BiDAF with multiplicative attention between word pairs for fm, while GMGRU uses additive attention influenced by the GRU hidden state. The vector we ∈Rd, and matrices W k ∈Rd×d and W q ∈R2d×2d are parameters of the BiGRU. The vector hj−1 is the hidden state from the BiGRU in GMGRU at previous position j −1. For combination function, BiDAF concatenates bidirectional attention information from both the key-aware query vector aij and a similarly defined query-aware key vector a′. GMGRU uses simple concatenation for fc. a sequence of attention-weighted vectors, one for each word in the ith utterance. The jth output vector aj is computed as a weighted sum of the keys: aij = X k αk j vnk (1) The weighting factor αk j is the attention weight between the jth query and the kth key, computed as αk j = exp (fm(vnk, vij)) P j′ exp fm(vnk, vij′)  (2) Here, fm is a match scoring function between the corresponding words, and different choices give us different attention mechanisms. Finally, a combining function fc combines the original word encoding vij and the above attention-weighted word vector aij into a new vector representation zij as the final representation of the query word encoding: zij = fc(vij, aij) (3) The attention module, identified by the choice of the functions fm and fc, converts word encodings in each utterance vij into attended word encodings zij. 
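As a concrete illustration of Eqs. (1)–(3), the sketch below implements the generic word-attention template in PyTorch (our own code, not the released implementation). It scores every (history word, anchor word) pair with a pluggable match function f_m, normalizes the scores over the anchor words so that each history word receives a weighted summary of the anchor, and uses concatenation as the combination function f_c:

```python
import torch
import torch.nn.functional as F

def attend_words(history_words, anchor_words, match_fn):
    """
    history_words: (m, d) word encodings v_ij of one history utterance (queries)
    anchor_words:  (k, d) word encodings v_nk of the anchor utterance (keys)
    match_fn:      scoring function f_m, e.g. a dot product or a small MLP
    Returns the attended encodings z_ij = [v_ij ; a_ij] (concatenation as f_c).
    """
    # score every (query, key) pair: shape (m, k)
    scores = torch.stack(
        [torch.stack([match_fn(q, k) for k in anchor_words]) for q in history_words]
    )
    alpha = F.softmax(scores, dim=-1)        # attention weights over anchor words
    attended = alpha @ anchor_words          # a_ij: (m, d) weighted sums of the keys
    return torch.cat([history_words, attended], dim=-1)   # z_ij

# Example: a simple multiplicative match function (the BiDAF-style f_m of Table 3).
dot_match = lambda q, k: q @ k
```

Different choices of `match_fn` and of the combination step recover the BiDAF and GMGRU variants summarized in Table 3.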
To use them in the HGRU skeleton, we will encode them a second time using a BiGRU to produce attention-enhanced utterance vectors. For brevity, we will refer to these vectors as vi for the utterance ui. If word attention is used, these attended vectors will be treated as word encodings. To complete this discussion, we need to instantiate the two functions. We use two commonly used attention mechanisms: BiDAF (Seo et al., 5603 2016) and gated matchLSTM (Wang et al., 2017). For simplicity, we replace the sequence encoder in the latter with a BiGRU and refer to it as GMGRU. Table 3 shows the corresponding definitions of fc and fm. We refer the reader to the original papers for further details. In subsequent sections, we will refer to the two attended versions of the HGRU as BIDAFH and GMGRUH. 4.3 Utterance-level Attention While we assume that the history of utterances is available for both our tasks, not every utterance is relevant to decide a MISC label. For categorization, the relevance of an utterance to the anchor may be important. For example, a complex reflection (REC) may depend on the relationship of the current therapist utterance to one or more of the previous client utterances. For forecasting, since we do not have an utterance to label, several previous utterances may be relevant. For example, in the conversation in Table 2, both u2 and u4 may be used to forecast a complex reflection. To model such utterance-level attention, we will employ the multi-head, multi-hop attention mechanism used in Transformer networks (Vaswani et al., 2017). As before, due to space constraints, we refer the reader to the original work for details. We will use the (Q, K, V ) notation from the original paper here. These matrices represent a query, key and value respectively. The multi-head attention is defined as: Multihead(Q, K, V ) = [head1; · · · ; headh]W O (4) headi = softmax QW Q i KW K i T √dk ! V W V i The W i’s refer to projection matrices for the three inputs, and the final W o projects the concatenated heads into a single vector. The choices of the query, key and value defines the attention mechanism. In our work, we compare two variants: anchor-based attention, and self-attention. The anchor-based attention is defined by Q = [vn] and K = V = [v1 · · · vn]. Self-attention is defined by setting all three matrices to [v1 · · · vn]. For both settings, we use four heads and stacking them for two hops, and refer to them as SELF42 and ANCHOR42. 4.4 Addressing Label Imbalance From Table 1, we see that both client and therapist labels are imbalanced. Moreover, rarer labels are more important in both tasks. For example, it is important to identify CT and ST utterances. For therapists, it is crucial to flag MI nonadherent (MIN) utterances; seasoned therapists are trained to avoid them because they correlate negatively with patient improvements. If not explicitly addressed, the frequent but less useful labels can dominate predictions. To address this, we extend the focal loss (FL Lin et al., 2017) to the multiclass case. For a label l with probability produced by a model pt, the loss is defined as FL(pt) = −αt(1 −pt)γ log(pt) (5) In addition to using a label-specific balance weight αt, the loss also includes a modulating factor (1 −pt)γ to dynamically downweight wellclassified examples with pt ≫0.5. Here, the αt’s and the γ are hyperparameters. We use FL as the default loss function for all our models. 
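Eq. (5) can be implemented compactly. Below is an illustrative multiclass focal loss in PyTorch (our own sketch; details such as the reduction may differ from the released code):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha, gamma=1.0):
    """
    Multiclass focal loss  FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    logits:  (batch, num_classes) unnormalized model scores
    targets: (batch,) gold label indices (LongTensor)
    alpha:   (num_classes,) tensor of per-label balance weights alpha_t
    gamma:   focusing parameter; larger values downweight easy examples more
    """
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t
    pt = log_pt.exp()
    alpha_t = alpha[targets]
    loss = -alpha_t * (1.0 - pt) ** gamma * log_pt
    return loss.mean()
```

Setting gamma = 0 recovers alpha-balanced cross-entropy, which the appendix uses as a point of comparison.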
5 Experiments The original psychotherapy sessions were collected for both clinical trials and Motivational Interviewing dissemination studies including hospital settings (Roy-Byrne et al., 2014), outpatient clinics (Baer et al., 2009), college alcohol interventions (Tollison et al., 2008; Neighbors et al., 2012; Lee et al., 2013, 2014). All sessions were annotated with the Motivational Interviewing Skills Codes (MISC) (Atkins et al., 2014). We use the train/test split of Can et al. (2015); Tanana et al. (2016) to give 243 training MI sessions and 110 testing sessions. We used 24 training sessions for development. As mentioned in §2, all our experiments are based on the MISC codes grouped by Xiao et al. (2016). 5.1 Preprocessing and Model Setup An MI session contains about 500 utterances on average. We use a sliding window of size N = 8 utterances with padding for the initial ones. We assume that we always know the identity of the speaker for all utterances. Based on this, we split the sliding windows into a client and therapist windows to train separate models. We tokenized and lower-cased utterances using spaCy (Honnibal and Montani, 2017). To embed words, we concatenated 300-dimensional Glove embeddings (Pennington et al., 2014) with ELMo vectors (Peters et al., 2018). The appendix details the model setup and hyperparameter choices. 5604 5.2 Results Best Models. Our goal is to discover the best client and therapist models for the two tasks. We identified the following best configurations using F1 score on the development set: 1. Categorization: For client, the best model does not need any word or utterance attention. For the therapist, it uses GMGRUH for word attention and ANCHOR42 for utterance attention. We refer to these models as CC and CT respectively 2. Forecasting: For both client and therapist, the best model uses no word attention, and uses SELF42 utterance attention. We refer to these models as FC and FT respectively. Here, we show the performance of these models against various baselines. The appendix gives label-wise precision, recall and F1 scores. Results on Categorization. Tables 4 and 5 show the performance of the CC and CT models and the baselines. For both therapist and client categorization, we compare the best models against the same set of baselines. The majority baseline illustrates the severity of the label imbalance problem. Xiao et al. (2016), BiGRUgeneric, Can et al. (2015) and Tanana et al. (2016) are the previous published baselines. The best results of previous published baselines are underlined. The last row ∆in each table lists the changes of our best model from them. BiGRUELMo, CONCATC, GMGRUH and BiDAFH are new baselines we define below. Method macro FN CT ST Majority 30.6 91.7 0.0 0.0 Xiao et al. (2016) 50.0 87.9 32.8 29.3 BiGRUgeneric 50.2 87.0 35.2 28.4 BiGRUELMo 52.9 87.6 39.2 32.0 Can et al. (2015) 44.0 91.0 20.0 21.0 Tanana et al. (2016) 48.3 89.0 29.0 27.0 CONCATC 51.8 86.5 38.8 30.2 GMGRUH 52.6 89.5 37.1 31.1 BiDAFH 50.4 87.6 36.5 27.1 CC 53.9 89.6 39.1 33.1 ∆= CC −score +3.5 -2.1 +3.9 +3.8 Table 4: Main results on categorizing client codes, in terms of macro F1, and F1 for each client code. Our model CC uses final dialogue vector Hn and current utterance vector vn as input of MLP for final prediction. We found that predicting using MLP(Hn) + MLP(vn) performs better than just MLP(Hn). The first set of baselines (above the line) do not encode dialogue history and use only the current utterance encoded with a BiGRU. The work of Xiao et al. 
(2016) falls in this category, and uses a 100-dimensional domain-specific embedding with weighted cross-entropy loss. Previously, it was the best model in this class. We also re-implemented this model to use either ELMo or Glove vectors with focal loss.5 The second set of baselines (below the line) are models that use dialogue context. Both Can et al. (2015) and Tanana et al. (2016) use wellstudied linguistic features and then tagging the current utterance with both past and future utterance with CRF and MEMM, respectively. To study the usefulness of the hierarchical encoder, we implemented a model that uses a bidirectional GRU over a long sequence of flattened utterance. We refer to this as CONCATC. This model is representative of the work of Huang et al. (2018), but was reimplemented to take advantage of ELMo. For categorizing client codes, BiGRUELMo is a simple but robust baseline model. It outperforms the previous best no-context model by more than 2 points on macro F1. Using the dialogue history, the more sophisticated model CC further gets 1 point improvement. Especially important is its improvement on the infrequent, yet crucial labels CT and ST. It shows a drop in the F1 on the FN label, which is essentially considered to be an unimportant, background class from the point of view of assessing patient progress. For therapist codes, as the highlighted numbers in Table 5 show, only incorporating GMGRU-based word-level attention, GMGRUH has already outperformed many baselines, our proposed model FT which uses both GMGRU-based word-level attention and anchorbased multi-head multihop sentence-level attention can further achieve the best overall performance. Also, note that our models outperform approaches that take advantage of future utterances. For both client and therapist codes, concatenating dialogue history with CONCATC always performs worse than the hierarchical method and even the simpler BiGRUELMo. Results on Forecasting. Since the forecasting task is new, there are no published baselines to compare against. Our baseline systems essentially differ in their representation of dialogue history. The model CONCATF uses the same architecture 5Other related work in no context exists (e.g., P´erez-Rosas et al., 2017; Gibson et al., 2017), but they either do not outperform (Xiao et al., 2016) or use different data. 5605 Method macro FA RES REC GI QUC QUO MIA MIN Majority 5.87 47.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Xiao et al. (2016) 59.3 94.7 50.2 48.3 71.9 68.7 80.1 54.0 6.5 BiGRUgeneric 60.2 94.5 50.5 49.3 72.0 70.7 80.1 54.0 10.8 BiGRUELMo 62.6 94.5 51.6 49.4 70.7 72.1 80.8 57.2 24.2 Can et al. (2015) 94.0 49.0 45.0 74.0 72.0 81.0 Tanana et al. (2016) 94.0 48.0 39.0 69.0 68.0 77.0 CONCATC 61.0 94.5 54.6 34.3 73.3 73.6 81.4 54.6 22.0 GMGRUH 64.9 94.9 56.0 54.4 75.5 75.7 83.0 58.2 21.8 BiDAFH 63.8 94.7 55.9 49.7 75.4 73.8 80.7 56.2 24.0 CT 65.4 95.0 55.7 54.9 74.2 74.8 82.6 56.6 29.7 ∆= CT −score +5.2 +0.3 +3.9 +3.8 +0.2 +2.8 +1.6 +2.6 +18.9 Table 5: Main results on categorizing therapist codes, in terms of macro F1, and F1 for each therapist code. Models are the same as Table 4, but tuned for therapist codes. For the two grouped MISC set MIA and MIN, their results are not reported in the original work due to different setting. 
Method Dev Test CT ST macro FN CT ST CONCATF 20.4 30.2 43.6 84.4 23.0 23.5 HGRU 19.9 31.2 44.4 85.7 24.9 22.5 GMGRUH 19.4 30.5 44.3 87.1 23.3 22.4 FC 21.1 31.3 44.3 85.2 24.7 22.7 (a) Main results on forecasting client codes, in terms of F1 for ST, CT on dev set, and macro F1, and F1 for each client code on the test set. Method Recall F1 R@3 macro FA RES REC GI QUC QUO MIA MIN CONCATF 72.5 23.5 63.5 0.6 0.0 53.7 27.0 15.0 18.2 9.0 HGRU 76.0 28.6 71.4 12.7 24.9 58.3 28.8 5.9 17.4 9.7 GMGRUH 76.6 26.6 72.6 10.2 20.6 58.8 27.4 6.0 8.9 7.9 FT 77.0 31.1 71.9 19.5 24.7 59.2 29.1 16.4 15.2 12.8 (b) Main results on forecasting therapist codes, in terms of Recall@3, macro F1, and F1 for each label on test set Table 6: Main results on forecasting task as the model CONCATC from the categorizing task. We also show comparisons to the simple HGRU model and the GMGRUH model that uses a gated matchGRU for word attention.6 Tables 6 (a,b) show our forecasting results for client and therapist respectively. For client codes, we also report the CT and ST performance on the development set because of their importance. For the therapist codes, we also report the recall@3 to show the performance of a suggestion system that displayed three labels instead of one. The results show that even without an utterance, the dialogue history conveys signal about the next MISC label. Indeed, the performance for some labels is even better than some categorization baseline systems. Surprisingly, word attention (GMGRUH) in Table 6 did not help in forecasting setting, and a model with the SELF42 utterance attention is sufficient. 6The forecasting task bears similarity to the next utterance selection task in dialogue state tracking work (Yoshino et al., 2018). In preliminary experiments, we found that the Dual-Encoder approach used for that task consistently underperformed the other baselines described here. For the therapist labels, if we always predicted the three most frequent labels (FA, GI, and RES), the recall@3 is only 67.7, suggesting that our models are informative if used in this suggestion-mode. 6 Analysis and Ablations This section reports error analysis and an ablation study of our models on the development set. The appendix shows a comparison of pretrained domain-specific ELMo/glove with generic ones and the impact of the focal loss compared to simple or weighted cross-entropy. 6.1 Label Confusion and Error Breakdown Figure 1 shows the confusion matrix for the client categorization task. The confusion between FN and CT/ST is largely caused by label imbalance. There are 414 CT examples that are predicted as ST and 391 examples vice versa. To further understand their confusion, we selected 100 of each for manual analysis. We found four broad categories of confusion, shown in Table 7. 5606 Category and Explaination Client Examples (Gold MISC) Reasoning is required to understand whether a client wants to change behavior, even with full context (50,42) T: On a scale of zero to ten how confident are you that you can implement this change ? C: I don’t know, seven maybe (CT); I have to wind down after work (ST) Concise utterances which are easy for humans to understand, but missing information such as coreference, zero pronouns (22,31) I mean I could try it (CT) Not a negative consequence for me (ST) I want to get every single second and minute out of it(CT) Extremely short (≤5) or long sentence (≥40), caused by incorrect turn segementation. 
(21,23) It is a good thing (ST) Painful (CT) Ambivalent speech, very hard to understand even for human. (7,4) What if it does n’t work I mean what if I can’t do it (ST) But I can stop whenever I want(ST) Table 7: Categorization of CT/ST confusions.The two numbers in the brackets are the count of errors for predicting CT as ST and vice versa. We exampled 100 examples for each case. FN CT ST Predicted label FN CT ST True label 0.86 0.07 0.07 0.39 0.45 0.16 0.36 0.18 0.46 Confusion matrix on Categorizing 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Figure 1: Confusion matrix for categorizing client codes, normalized by row. The first category requires more complex reasoning than just surface form matching. For example, the phrase seven out of ten indicates that the client is very confident about changing behavior; the phrase wind down after work indicates, in this context, that the client drinks or smokes after work. We also found that the another frequent source of error is incomplete information. In a face-to-face therapy session, people may use concise and effient verbal communication, with guestures and other body language conveying information without explaining details about, for example, coreference. With only textual context, it is difficult to infer the missing information. The third category of errors is introduced when speech is transcribed into text. The last category is about ambivalent speech. Discovering the real attitude towards behavior change behind such utterances could be difficult, even for an expert therapist. Figures 1 and 2 show the label confusion matrices for the best categorization models. We will examine confusions that are not caused purely by a label being frequent. We observe a common confusion between the two reflection labels, REC and RES. Compared to the confusion matrix from Xiao et al. (2016), we see that our models show much-decreased confusion here. There are two FA RES REC GI QUC QUO MIA MIN Predicted label FA RES REC GI QUC QUO MIA MIN True label 0.97 0.01 0.00 0.01 0.00 0.00 0.01 0.00 0.02 0.65 0.19 0.08 0.02 0.01 0.02 0.01 0.01 0.30 0.58 0.03 0.02 0.01 0.03 0.02 0.02 0.10 0.04 0.75 0.01 0.01 0.04 0.03 0.01 0.12 0.01 0.03 0.72 0.08 0.02 0.01 0.01 0.02 0.00 0.02 0.07 0.89 0.00 0.00 0.02 0.06 0.07 0.21 0.01 0.02 0.57 0.03 0.02 0.13 0.15 0.25 0.05 0.03 0.02 0.36 Normalized confusion matrix 0.0 0.2 0.4 0.6 0.8 Figure 2: Confusion matrix for categorizing therapist codes, normalized by row. reason for this confusion persisting. First, the reflections may require a much longer information horizon. We found that by increasing the window size to 16, the overall reflection results improved. Second, we need to capture richer meaning beyond surface word overlap for RES. We found that complex reflections usually add meaning or emphasis to previous client statements using devices such as analogies, metaphors, or similes rather than simply restating them. Closed questions (QUC) and simple reflections (RES) are known to be a confusing set of labels. For example, an utterance like Sounds like you’re suffering? may be both. Giving information (GI) is easily confused with many labels because they relate to providing information to clients, but with different attitudes. 
The MI adherent (MIA) and non-adherent (MIN) labels may also provide information, but with supportive or critical attitude that may be difficult to disentangle, given the limited 5607 Ablation Options macro FN CT ST history window size 0 51.6 87.6 39.2 32.0 4 52.6 88.5 37.8 31.5 8∗ 53.9 89.6 39.1 33.1 16 52.0 89.6 39.1 33.1 word attention + GMGRU 52.6 89.5 37.1 31.1 + BiDAF 50.4 87.6 36.5 27.1 sentence attention + SELF42 53.9 89.2 39.1 33.2 + ANCHOR42 53.0 88.2 38.9 32.0 Table 8: Ablation study on categorizing client code. ∗ is our best model CC. All ablation is based on it. The symbol + means adding a component to it. The default window size is 8 for our ablation models in the word attention and sentence attention parts. number of examples. 6.2 How Context and Attention Help? We evaluated various ablations of our best models to see how changing various design choices changes performance. We focused on the context window size and impact of different word level and sentence level attention mechanisms. Tables 8 and 9 summarize our results. History Size. Increasing the history window size generally helps. The biggest improvements are for categorizing therapist codes (Table 9), especially for the RES and REC. However, increasing the window size beyond 8 does not help to categorize client codes (Table 8) or forecasting (in appendix). Word-level Attention. Only the model CT uses word-level attention. As shown in Table 9, when we remove the word-level attention from it, the overall performance drops by 3.4 points, while performances of RES and REC drop by 3.3 and 5 points respectively. Changing the attention to BiDAF decreases performance by about 2 points (still higher than the model without attention). Sentence-level Attention. Removing sentence attention from the best models that have it decreases performance for the models CT and FT (in appendix). It makes little impact on the FC, however. Table 8 shows that neither attention helps categorizing clients codes. 6.3 Can We Suggest Empathetic Responses? Our forecasting models are trained on regular MI sessions, according to the label distribution on Table 1, there are both MI adherent or non-adherent data. Hence, our models are trained to show how the therapist usually respond to a given statement. Ablation Options macro RES REC MIN history window size 0 62.6 51.6 49.4 24.2 4 64.4 54.3 53.2 23.7 8∗ 65.4 55.7 54.9 29.7 16 65.6 55.4 56.7 26.7 word attention - GMGRU 62.0 51.9 51.7 16.0 \ BiDAF 63.5 54.2 51.3 22.6 sentence attention - ANCHOR42 64.9 56.0 54.4 21.8 \ SELF42 63.4 55.5 48.2 21.1 Table 9: Ablation study on categorizing therapist codes, ∗is our proposed model CT . \ means substituting and −means removing that component. Here, we only report the important REC, RES labels for guiding, and the MIN label for warning a therapist. To show whether our model can mimic good MI policies, we selected 35 MI sessions from our test set which were rated 5 or higher on a 7-point scale empathy or spirit. On these sessions, we still achieve a recall@3 of 76.9, suggesting that we can learn good MI policies by training on all therapy sessions. These results suggest that our models can help train new therapists who may be uncertain about how to respond to a client. 7 Conclusion We addressed the question of providing real-time assistance to therapists and proposed the tasks of categorizing and forecasting MISC labels for an ongoing therapy session. 
By developing a modular family of neural networks for these tasks, we show that our models outperform several baselines by a large margin. Extensive analysis shows that our model can decrease the label confusion compared to previous work, especially for reflections and rare labels, but also highlights directions for future work. Acknowledgments The authors wish to thank the anonymous reviewers and members of the Utah NLP group for their valuable feedback. This research was supported by an NSF Cyberlearning grant (#1822877) and a GPU gift from NVIDIA Corporation. References David C Atkins, Mark Steyvers, Zac E Imel, and Padhraic Smyth. 2014. Scaling up the evaluation of psychotherapy: evaluating motivational interview5608 ing fidelity via statistical text classification. Implementation Science, 9(1):49. John S Baer, Elizabeth A Wells, David B Rosengren, Bryan Hartzler, Blair Beadnell, and Chris Dunn. 2009. Agency context and tailored training in technology transfer: A pilot evaluation of motivational interviewing training for community counselors. Journal of substance abuse treatment, 37(2):191–202. Brian L Burke, Christopher W Dunn, David C Atkins, and Jerry S Phelps. 2004. The emerging evidence base for motivational interviewing: A meta-analytic and qualitative inquiry. Journal of Cognitive Psychotherapy, 18(4):309–322. Do˘gan Can, David C Atkins, and Shrikanth S Narayanan. 2015. A dialog act tagging approach to behavioral coding: A case study of addiction counseling conversations. In Sixteenth Annual Conference of the International Speech Communication Association. Kenneth Mark Colby. 1975. Artificial Paranoia: A Computer Simulation of Paranoid Process. Pergamon Press. Andrea Galassi, Marco Lippi, and Paolo Torroni. 2019. Attention, please! a critical review of neural attention models in natural language processing. arXiv preprint arXiv:1902.02181. James Gibson, Dogan Can, Panayiotis Georgiou, David C Atkins, and Shrikanth S Narayanan. 2017. Attention networks for modeling behaviors in addiction counseling. In Proceedings of the 2016 Conference of the International Speech Communication Association INTERSPEECH. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Xiaolei Huang, Lixing Liu, Kate Carey, Joshua Woolley, Stefan Scherer, and Brian Borsari. 2018. Modeling temporality of human intentions by domain adaptation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 696–701. Zac E Imel, Derek D Caperton, Michael Tanana, and David C Atkins. 2017. Technology-enhanced human interaction in psychotherapy. Journal of counseling psychology, 64(4):385. Dan Jurafsky and James H Martin. 2019. Speech and language processing. Pearson. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Christine M Lee, Jason R Kilmer, Clayton Neighbors, David C Atkins, Cheng Zheng, Denise D Walker, and Mary E Larimer. 2013. Indicated prevention for college student marijuana use: A randomized controlled trial. Journal of consulting and clinical psychology, 81(4):702. Christine M Lee, Clayton Neighbors, Melissa A Lewis, Debra Kaysen, Angela Mittmann, Irene M Geisner, David C Atkins, Cheng Zheng, Lisa A Garberson, Jason R Kilmer, et al. 2014. Randomized controlled trial of a spring break intervention to reduce highrisk drinking. 
Journal of consulting and clinical psychology, 82(2):189. Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980– 2988. Ryan Lowe, Nissan Pow, Iulian V. Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of SIGDIAL. Brad W Lundahl, Chelsea Kunz, Cynthia Brownell, Derrik Tollefson, and Brian L Burke. 2010. A metaanalysis of motivational interviewing: Twenty-five years of empirical studies. Research on social work practice, 20(2):137–160. Renata K Martins and Daniel W McNeil. 2009. Review of motivational interviewing in promoting health behaviors. Clinical psychology review, 29(4):283–293. William Miller and Stephen Rollnick. 2003. Motivational interviewing: Preparing people for change. Journal for Healthcare Quality, 25(3):46. William R Miller, Theresa B Moyers, Denise Ernst, and Paul Amrhein. 2003. Manual for the motivational interviewing skill code (misc). Unpublished manuscript. Albuquerque: Center on Alcoholism, Substance Abuse and Addictions, University of New Mexico. William R Miller and Stephen Rollnick. 2012. Motivational interviewing: Helping people change. Guilford press. Clayton Neighbors, Christine M Lee, David C Atkins, Melissa A Lewis, Debra Kaysen, Angela Mittmann, Nicole Fossos, Irene M Geisner, Cheng Zheng, and Mary E Larimer. 2012. A randomized controlled trial of event-specific prevention strategies for reducing problematic drinking associated with 21st birthday celebrations. Journal of consulting and clinical psychology, 80(5):850. 5609 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Ver´onica P´erez-Rosas, Rada Mihalcea, Kenneth Resnicow, Satinder Singh, Lawrence Ann, Kathy J Goggin, and Delwyn Catley. 2017. Predicting counselor behaviors in motivational interviewing encounters. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 1128–1137. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Peter Roy-Byrne, Kristin Bumgardner, Antoinette Krupski, Chris Dunn, Richard Ries, Dennis Donovan, Imara I West, Charles Maynard, David C Atkins, Meredith C Graves, et al. 2014. Brief intervention for problem drug use in safety-net primary care settings: a randomized clinical trial. Jama, 312(5):492–501. Jost Schatzmann, Kallirroi Georgila, and Steve Young. 2005. Quantitative evaluation of user simulation techniques for spoken dialogue systems. In 6th SIGdial Workshop on DISCOURSE and DIALOGUE. Craig S Schwalbe, Hans Y Oh, and Allen Zweben. 2014. Sustaining motivational interviewing: a metaanalysis of training studies. Addiction (Abingdon, England), 109(8):1287–94. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. In ICLR. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 
2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, volume 16, pages 3776–3784. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553–562. ACM. Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339–373. Michael Tanana, Kevin A Hallgren, Zac E Imel, David C Atkins, and Vivek Srikumar. 2016. A comparison of natural language processing methods for automated coding of motivational interviewing. Journal of substance abuse treatment, 65:43–50. Sean J Tollison, Christine M Lee, Clayton Neighbors, Teryl A Neil, Nichole D Olson, and Mary E Larimer. 2008. Questions and reflections: the use of motivational interviewing microskills in a peer-led brief alcohol intervention for college students. Behavior Therapy, 39(2):183–194. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Joseph Weizenbaum. 1966. ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45. Bo Xiao, Dogan Can, James Gibson, Zac E Imel, David C Atkins, Panayiotis G Georgiou, and Shrikanth S Narayanan. 2016. Behavioral coding of therapist language in addiction counseling using recurrent neural networks. In Proceedings of the 2016 Conference of the International Speech Communication Association INTERSPEECH, pages 908–912. Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D’Haro, Lazaros Polymenakos, Chulaka Gunasekara, Walter S. Lasecki, Jonathan Kummerfeld, Michael Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Sean Gao, Tim K. Marks, Devi Parikh, and Dhruv Batra. 2018. The 7th dialog system technology challenge. arXiv preprint. Justine Zhang, Jonathan P Chang, Cristian DanescuNiculescu-Mizil, Lucas Dixon, Yiqing Hua, Nithum Thain, and Dario Taraborelli. 2018. Conversations Gone Awry: Detecting Early Signs of Conversational Failure. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. A Appendix Different Clustering Strategies for MISC The original MISC description of Miller et al. (2003) included 28 labels (9 client, 19 therapist). Due to data scarcity and label confusion, some labels were merged into a coarser set. Can et al. 
(2015) retain 6 original labels (FA, GI, QUC, QUO, REC, RES) and merge the remaining 13 rare labels into a single COU label; they also merge all 9 client codes into a single CLI label. Instead, Tanana et al. (2016) merge only 8 of the rare labels into an OTHER label, and cluster the client codes according to the valence of changing, sustaining, or being neutral towards the addictive behavior (Atkins et al., 2014). Xiao et al. (2016) then combine and improve these two clustering strategies by splitting all 13 rare labels according to whether the code is MI-adherent (MIA) or MI-non-adherent (MIN). We show more details about the original labels grouped into MIA and MIN in Table 10.

Code | Count | Description | Examples
MIA | 3869 | Group of MI-adherent codes: Affirm (AF); Reframe (RF); Emphasize Control (EC); Support (SU); Filler (FI); Advise with permission (ADP); Structure (ST); Raise concern with permission (RCP) | “You’ve accomplished a difficult task.” (AF) “Its your decision whether you quit or not” (EC) “That must have been difficult.” (SU) “Nice weather today!” (FI) “Is it OK if I suggested something?” (ADP) “Let’s go to the next topic” (ST) “Frankly, it worries me.” (RCP)
MIN | 1019 | Group of MI-non-adherent codes: Confront (CO); Direct (DI); Advise without permission (ADW); Warn (WA); Raise concern without permission (RCW) | “You hurt the baby’s health for cigarettes?” (CO) “You need to xxx.” (DI) “You ask them not to drink at your house.” (ADW) “You will die if you don’t stop smoking.” (WA) “You may use it again with your friends.” (RCW)
Table 10: Label distribution, description and examples for MIA and MIN.

Model Setup We use 300-dimensional Glove embeddings pre-trained on 840B tokens from Common Crawl (Pennington et al., 2014) and do not update them during training. Tokens not covered by Glove use a randomly initialized UNK embedding. We also use the character-level deep contextualized ELMo 5.5B model, concatenating the corresponding ELMo word encoding after the word embedding vector. Speaker information is encoded by randomly initialized 8-dimensional vectors that are updated during training. We use a dropout rate of 0.3 for the embedding layers. We train all models using Adam (Kingma and Ba, 2015), with the learning rate chosen by cross-validation in [1e-4, 5e-4], gradient-norm clipping chosen from [1.0, 5.0], and minibatch sizes of 32 or 64. We use the same hidden size for the utterance encoder, the dialogue encoder, and the attention memory; it is selected from {64, 128, 256, 512}. We set a smaller dropout of 0.2 for the final two fully connected layers. All models are trained for up to 100 epochs with early stopping based on macro F1 on the development set. Detailed Results of Our Main Models In the main text, we only report the F1 score of each of our proposed models. Table 11 summarizes the performance of our best models for both categorizing and forecasting MISC codes, with precision, recall, and F1 for each code.
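The setup above corresponds to a fairly standard training configuration. The sketch below (Python/PyTorch; our own illustrative code, with example values chosen inside the stated search ranges rather than the exact tuned values) collects these choices in one place:

```python
import torch

# Illustrative configuration reflecting the appendix; values marked "e.g." are
# examples within the reported search ranges, not the exact tuned settings.
CONFIG = {
    "glove_dim": 300,            # frozen GloVe (Common Crawl, 840B tokens)
    "use_elmo": True,            # ELMo 5.5B encodings concatenated to GloVe
    "speaker_dim": 8,            # trainable speaker embeddings
    "embedding_dropout": 0.3,
    "final_layer_dropout": 0.2,  # final two fully connected layers
    "hidden_size": 256,          # e.g.; selected from {64, 128, 256, 512}
    "learning_rate": 1e-4,       # e.g.; cross-validated in [1e-4, 5e-4]
    "grad_clip_norm": 1.0,       # e.g.; chosen from [1.0, 5.0]
    "batch_size": 32,            # 32 or 64
    "max_epochs": 100,           # with early stopping on dev macro F1
}

def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Adam with the cross-validated learning rate; gradient-norm clipping is
    # applied to model.parameters() at each update step.
    return torch.optim.Adam(model.parameters(), lr=CONFIG["learning_rate"])
```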
Label Categorizing Forecasting P R F1 P R F1 FN 92.5 86.8 89.6 90.8 80.3 85.2 CT 34.8 44.7 39.1 18.9 28.6 22.7 ST 28.2 39.9 33.1 19.5 33.7 24.7 FA 95.1 94.7 94.9 70.7 73.2 71.9 RES 50.3 61.3 55.2 20.1 18.8 19.5 REC 52.8 55.5 54.1 19.2 34.7 24.7 GI 74.6 75.1 74.8 52.8 67.5 59.2 QUC 80.6 70.4 75.1 36.2 24.3 29.1 QUO 85.3 81.2 83.2 27.0 11.8 16.4 MIA 61.8 52.4 56.7 27.0 10.6 15.2 MIN 27.7 28.5 28.1 17.2 10.2 12.8 Table 11: Performance of our proposed models with respect to precision, recall and F1 on categorizing and forecasting tasks for client and therapist codes Domain Specific Glove and ELMo We use the general psychotherapy corpus with 6.5M words (Alexander Street Press) to train the domain specific word embeddings Glovepsyc with 50, 100, 300 dimension. Also, we trained ELMo with 1 highway connection and 256-dimensional output size to get ELMopsyc. We found that ELMo 5.5B performs better than ELMo psyc in our experiments, and general Glove-300 is better than the Glovepsyc. Hence for main results of our models, we use ELMogeneric by default. Please see more details in Table 12 5611 Model Embedding macro FN CT ST macro FA RES REC GI QUC QUO MIA MIN C ELMo 53.9 89.6 39.1 33.1 65.4 95.0 55.7 54.9 74.2 74.8 82.6 56.6 29.7 ELMopsyc 46.9 88.9 27.5 24.3 64.2 94.9 53.3 53.3 75.8 74.8 82.2 56.1 23.5 Glove 50.6 89.9 33.4 28.6 62.2 94.6 53.7 54.2 70.3 70.0 79.1 54.7 20.9 Glovepysc 47.4 88.4 23.9 30.0 63.4 94.9 54.7 52.8 75.2 71.4 80.8 53.6 23.5 F ELMo 44.3 85.2 24.7 22.7 31.1 71.9 19.5 24.7 59.2 28.3 17.7 15.9 9.0 ELMopsyc 43.8 84.0 22.4 25.0 29.1 73.5 15.5 24.3 59.1 29.1 9.5 12.1 10.1 Glove 42.7 83.9 21.0 23.1 30.0 72.8 20.8 23.7 58.2 26.2 14.5 14.5 9.6 Glovepysc 43.6 81.9 23.3 25.7 30.8 72.1 19.7 24.4 57.3 28.9 13.7 17.8 23.5 Table 12: Ablation study for our proposed model with embeddings trained on the psychotherapy corpus. Ablation Options CT ST R@3 FA RES REC GI QUC QUO MIA MIN history size 1 17.2 15.1 66.4 59.4 12.6 9.0 44.6 16.3 14.8 11.9 4.1 4 16.8 22.6 75.3 71.4 15.6 21.1 57.1 29.3 11.0 11.2 14.4 8∗ 24.7 22.7 77.0 72.8 20.8 23.1 58.1 28.3 17.7 15.9 9.0 16 23.9 20.7 76.5 71.2 13.7 24.1 58.5 25.9 9.7 16.2 12.7 word attention GMGRU 14.0 23.2 75.7 71.7 14.2 23.0 57.5 26.5 8.0 15.4 11.6 GMGRU4h 19.1 22.9 76.3 71.3 12.1 23.3 58.1 24.5 12.6 11.7 14.0 sentence attention −SELF42 24.9 22.5 76.0 71.4 12.7 24.9 58.3 28.8 5.9 17.4 9.7 \ ANCHOR42 22.9 22.9 76.2 72.2 15.5 24.6 59.5 27.1 7.7 16.3 8.3 + GMGRU \ ANCHOR42 6.8 23.4 76.9 70.8 8.0 24.5 58.3 24.6 10.6 14.9 12.1 Table 13: Ablation on forecasting task on both client and therapist code. ∗row are results of our best forecasting model FC, and FT . \ means substitute anchor attention with self attention. +GMGRU ANCHOR42 means using word-level attention and achor-based sentence-level attention together. Full Results for Ablation on Forecasting Tasks In addition to the ablation table in the main paper for categorizing tasks, we reported more ablation details on forecasting task in Table 13. Wordlevel attention shows no help for both client and therapist codes. While sentence-level attention helps more on therapist codes than on client codes. Multi-head self attention alsoachieves better performance than anchor-based attention in forecasting tasks. Label Imbalance We always use the same α for all weighted focal loss. Besides considering the label frequency, we also consider the performance gap between previous reported F1. 
We set the balance weights α to {1.0, 1.0, 0.25} for CT, ST, and FN respectively, and to {0.5, 1.0, 1.0, 1.0, 0.75, 0.75, 1.0, 1.0} for FA, RES, REC, GI, QUC, QUO, MIA, and MIN. As shown in Table 14, we report ablation studies over cross-entropy loss, weighted cross-entropy loss, and focal loss. Beyond fixed weights, focal loss offers flexible hyperparameters for weighting examples differently in each task. Experiments show that, except for the model CT, focal loss outperforms both cross-entropy and weighted cross-entropy.

Loss | Client: F1 CT ST | Therapist: F1 RES REC MIA MIN
Cce | 47.0 28.4 22.0 | 60.9 54.3 53.8 53.7 4.8
Cwce | 53.5 39.2 32.0 | 65.4 55.7 54.9 56.6 29.7
Cfl | 53.9 39.1 33.1 | 65.4 55.7 54.9 56.6 29.7
Fce | 42.1 17.7 18.5 | 26.8 3.3 20.8 16.3 8.3
Fwce | 43.1 20.6 23.3 | 30.7 17.9 25.0 17.7 10.9
Ffl | 44.2 24.7 22.7 | 31.1 19.5 24.7 15.2 12.8
Table 14: Ablation study of different loss functions on the categorizing (C) and forecasting (F) tasks. For our proposed model in each of the four settings, we compare training with cross-entropy loss (ce), α-balanced cross-entropy (wce), and focal loss (fl). We only report the overall macro F1 and the F1 of the rare labels. γ = 1 is best for the models CC and FC, γ = 0 is best for CT, and γ = 3 for FT. Note that when γ = 0, the focal loss degenerates to α-balanced cross-entropy, so the wce and fl rows are identical for the therapist categorization model.
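To make the final configuration explicit, the weights and focusing parameters quoted above can be collected as follows (a sketch; only the numeric values are taken from the paper, the constant names are ours):

```python
# Per-label balance weights alpha_t used in the focal loss (Eq. 5).
CLIENT_ALPHA = {"CT": 1.0, "ST": 1.0, "FN": 0.25}
THERAPIST_ALPHA = {
    "FA": 0.5, "RES": 1.0, "REC": 1.0, "GI": 1.0,
    "QUC": 0.75, "QUO": 0.75, "MIA": 1.0, "MIN": 1.0,
}

# Best focusing parameter gamma per setting (C = categorization, F = forecasting;
# the second letter denotes the client/therapist model). gamma = 0 reduces the
# focal loss to alpha-balanced cross-entropy.
BEST_GAMMA = {"C_C": 1, "F_C": 1, "C_T": 0, "F_T": 3}
```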
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5612–5623 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5612 Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems Hung Le1,2, Doyen Sahoo1, Nancy F. Chen2, Steven C.H. Hoi1,3 1Singapore Management University 2Institute of Inforcomm Research (I2R), Singapore 3Salesforce Research Asia {hungle.2018,doyens}@smu.edu.sg [email protected],[email protected] Abstract Developing Video-Grounded Dialogue Systems (VGDS), where a dialogue is conducted based on visual and audio aspects of a given video, is significantly more challenging than traditional image or text-grounded dialogue systems because (1) feature space of videos span across multiple picture frames, making it difficult to obtain semantic information; and (2) a dialogue agent must perceive and process information from different modalities (audio, video, caption, etc.) to obtain a comprehensive understanding. Most existing work is based on RNNs and sequence-to-sequence architectures, which are not very effective for capturing complex long-term dependencies (like in videos). To overcome this, we propose Multimodal Transformer Networks (MTN) to encode videos and incorporate information from different modalities. We also propose queryaware attention through an auto-encoder to extract query-aware features from non-text modalities. We develop a training procedure to simulate token-level decoding to improve the quality of generated responses during inference. We get state of the art performance on Dialogue System Technology Challenge 7 (DSTC7). Our model also generalizes to another multimodal visual-grounded dialogue task, and obtains promising performance. 1 Introduction A video-grounded dialogue system (VGDS) generates appropriate conversational response to queries of humans, by not only keeping track of the relevant dialogue context, but also understanding the relevance of the query in the context of a given video (knowledge grounded in a video) (Hori et al., 2018). An example dialogue exchange can be seen in Figure 1. Developing such systems has recently received interest from the research community (e.g. DSTC7 challenge (Yoshino et al., 2018)). This task is much C: a man is standing in a kitchen putting groceries away. He closes the cabinet when finished, walks over to a table and pulls out a chair and sits down. S: a man puts away his groceries and then sits at a kitchen table and stares out the window. Q1: how many people are in the video? A1: there is just one person Q2: is there sound to the video? A2: yes there is audio but no one is talking ... Q10: is he happy or sad? A10: he appears to be neutral in expression Figure 1: A sample dialogue from the DSTC7 Video Scene-aware Dialogue training set with 4 example video scenes. C: Video Caption, S: Video Summary, Qi: ith-turn question, Ai: ith-turn answer more challenging than traditional text-grounded or image-grounded dialogue systems because: (1) feature space of videos is larger and more complex than text-based or image-based features because of diverse information, such as background noise, human speech, flow of actions, etc. across multiple video frames; and (2) a conversational agent must have the ability to perceive and comprehend information from different modalities (text from dialogue history and human queries, visual and audio features from the video) and semantically shape a meaningful response to humans. 
Most existing approaches for multi-modal dialogue systems are based on RNNs as the sequence processing unit and sequence-to-sequence network as the overall architecture to model the sequential information in text (Das et al., 2017a,b; Hori et al., 2018; Kottur et al., 2018). Some efforts adopted query-aware attention to allow the models 5613 to focus on specific parts of the features most relevant to the dialogue context (Hori et al., 2018; Kottur et al., 2018). Despite promising results, these methods are not very effective or efficient for processing video-frames, due to the complexity of long term sequential information from multiple modalities. We propose Multimodal Transformer Networks (MTN) which model the complex sequential information from video frames, and also incorporate information from different modalities. MTNs allow for complex reasoning over multimodal data such as in videos, by jointly attending to information in different representation subspaces, and making it easier (than RNNs) to fuse information from different modalities. Inspired by the success of Transformers (Vaswani et al., 2017)) for text, we propose novel neural architectures for VGDS: (1) We propose to capture complex sequential information from video frames using multi-head attention layers. Multihead attention is applied across several modalities (visual, audio, captions) repeatedly. This works like a memory network to allow the models to comprehensively reason over the video to answer human queries; (2) We propose an autoencoder component, designed as query-aware attention layer, to further improve the reasoning capability of the models on the non-text features of the input videos; and (3) We employ a training approach to improve the generated responses by simulating token-level decoding during training. We evaluated MTN on a video-grounded dialogue dataset (released through DSTC7 (Yoshino et al., 2018)). In each dialogue, video features such as audio, visual, and video caption, are available, which have to be processed and understood to hold a conversation. We conduct comprehensive experiments to validate our approach, including automatic evaluations, ablations, and qualitative analysis of our results. We also validate our approach on the visual-grounded dialogue task (Das et al., 2017a), and show that MTN can generalize to other multimodal dialog systems. 2 Related Work The majority of work in dialogues is formulated as either open-domain dialogues (Shang et al., 2015; Vinyals and Le, 2015; Yao et al., 2015; Li et al., 2016a,b; Serban et al., 2017, 2016) or taskoriented dialogues (Henderson et al., 2014; Bordes and Weston, 2016; Fatemi et al., 2016; Liu and Lane, 2017; Lei et al., 2018; Madotto et al., 2018). Some recent efforts develop conversational agents that ground their responses on external knowledge, e.g. online encyclopedias (Dinan et al., 2018), social networks, or user recommendation sites (Ghazvininejad et al., 2018). The agent generates a response that can relate to the current dialogue context as well as exploit the information source. Recent dialogue systems use Transformer principles (Vaswani et al., 2017) for incorporating attention and focus on different dialogue settings, e.g. text-only or response selection settings (Zhu et al., 2018; Mazar´e et al., 2018; Dinan et al., 2018), These approaches consider the knowledge to be grounded in text, whereas in VGDS, the knowledge is grounded in videos (with multimodal sources of information). 
There are a few efforts in NLP domain, where multimodal information needs to be incorporated for the task. Popular research areas include image captioning (Vinyals et al., 2015; Xu et al., 2015), video captioning (Hori et al., 2017; Li et al., 2018) and visual question-answering (QA) (Antol et al., 2015; Goyal et al., 2017). Image captioning and video captioning tasks require to output a description sentence about the content of an image or video respectively. This requires the models to be able to process certain visual features (and audio features in video captioning) and generate a reasonable description sentence. Visual QA involves generating a correct response to answer a factual question about a given image. The recently proposed movie QA (Tapaswi et al., 2016) task is similar to visual QA but the answers are grounded in movie videos. However, all of these methods are restricted to answering specific queries, and do not maintain a dialogue context, unlike what we aim to achieve in VGDS. We focus on generating dialogue responses rather than selecting from a set of candidates. This requires the dialogue agents to model the semantics of the visual and/or audio contents to output appropriate responses. Another related task is visual dialogues (Das et al., 2017a,b; Kottur et al., 2018). This is similar to visual QA but the conversational agent needs to track the dialogue context to generate a response. However, the knowledge is grounded in images. In contrast, we focus on knowledge grounded in videos, which is more complex, considering the large feature space spanning across multiple video frames and modalities that need to be understood. 5614 3 Multimodal Transformer Networks Given an input video V , its caption C, a dialogue context of (t −1) turns, each including a pair of (question, answer) (Q1, A1), ..., (Qt−1, At−1), and a factual query Qt on the video content, the goal of a VGDS is to generate an appropriate dialogue response At. We follow the attention-based principle of Transformer network (Vaswani et al., 2017) and propose a novel architecture: Multimodal Transformer Networks to elegantly fuse feature representations from different modalities. MTN enables complex reasoning over long video sequences by attending to important feature representations in different modalities. MTN comprises 3 major components: encoder, decoder, and auto-encoder layers. (i) Encoder layers encode text sequences and input video into continuous representations. Positional encoding is used to inject the sequential characteristics of input text and video features at token and video-frame level respectively; (ii) Decoder layers project the target sequences and perform reasoning over multiple encoded features through a multi-head attention mechanism. Attention layers coupled with feed-forward and residual connections process the projected target sequence over N attention steps before passing to a generative component to generate a response; (iii) Auto-encoder layers enhance video features with a query-aware attentions on the visual and audio aspects of the input video. A network of multi-head attentions layers are employed as a query auto-encoder to learn the attention in an unsupervised manner. We combine these modules as a Multimodal Transformer Network (MTN) model and jointly train the model end-to-end. An overview of the MTN architecture is shown in Figure 2. Next, we will discuss the details of each of these components. 3.1 Encoder Layers Text Sequence Encoders. 
The encoder layers map each sequence of tokens (x1, ..., xn) to a sequence of continuous representation z = (z1, ..., zn) ∈Rd. An overview of text sequence encoder can be seen in Figure 3. The encoder is composed of a token-level learned embedding, a fixed positional encoding layer, and layer normalization. We use the positional encoding to incorporate sequential information of the source sequences. The token-level positional embedding is added on top of the embedding layer by using element-wise summation. Both learned embedding and positional encoding has the same dimension d. We used the sine and cosine functions for the positional encoding as similarly adopted in (Vaswani et al., 2017). Compared to a Transformer encoder, we do not use stack of encoder layers with self-attention to encode source sequences. Instead, we only use layer normalization (Ba et al., 2016) on top of the embedding. We also experimented with using stacked Transformer encoder blocks, consisting of self-attention and feed-forward layers, and compare with our approach (see Table 4 Row A and B-1). The target sequence At = (y1, ..., ym) is offset by one position to ensure that the prediction in the decoding step i is auto-regressive only on the previously positions 1, ..., (i −1). Here we share the embedding weights of encoders for source sequences i.e. query, video caption, and dialogue history. Video Encoders. For a given video V , its features are extracted with a sliding window of nvideo-frame length. This results in modality feature vector fm ∈RnumSeqs×dm for a modality m. Each fm represents the features for a sequence of n video frames. Here we consider both visual and audio features M = (v, a). We use pretrained feature extractors and keep the weights of the extractors fixed during training. For a set of scene sequences s1, ..., sv, the extracted features for modality m is fm = (f1, ..., fv). We apply a linear network with ReLU activation to transform the feature vectors from dm- to d-dimensional space. We then also employ the same positional encoding as before to inject sequential information into fm. Refer to Figure 3 for an overview of video encoder. 3.2 Decoder Layers Given the continuous representation zs for each source sequence xs and zt for the offset target sequence, the decoder generates an output sequence (y2, ..., ym) (The first token is always an ⟨sos⟩ token). The decoder is composed of a stack of N identical layers. Each layer has 4 + ∥M∥ sub-layers, each of which performs attention on an individual encoded input: the offset target sequence zt, dialogue history zhis, video caption zcap, user query zque, and video non-text features {fa, fv}. Each sub-layer consists of a multi-head attention mechanism and a position-wise feedforward layer. Each feed-forward network con5615 Query-Aware Attention Auto-Encoder (QAE) Video Caption Encoder Query Encoder Output Encoder Masked Self-Attention History Attention Caption Attention Query Attention Linear & Softmax Dialogue History Encoder Query Self-Attention Query-Aware Attention (Audio) Video Attention (Audio) Generated Response Regenerated Query xN Qt: How many people are in the video? At: There is just one person Token-level Decoding Sim. zt zhis zcap zque Video Encoder (Audio) xN At Qt Decoder (D) C: A man is standing in the kitchen... 
(Q0,A0),...(Qt-1, At-1) zque Query-Aware Attention (Visual) Video Attention (Visual) Linear & Softmax Video Encoder (Visual) fa att fv att fa fv V Figure 2: Our MTN architecture includes 3 major components: (i) encoder layers encode text sequences and video features; (ii) decoder layers (D) project target sequence and attend on multiple inputs; and (iii) Query-Aware AutoEncoder layers (QAE) attend on non-text modalities from query features. For simplicity, Feed Forward, Residual Connection and Layer Normalization layers are not presented. Best viewed in color. Video Feature Extractor Token-Level Embedding Tokenized Sequence Positional Encoding Layer Norm Text Sequence Encoder Linear & ReLU Positional Encoding Layer Norm Video Encoder Input Video fm z Fixed Trained Figure 3: 2 types of encoders are used: text-sequence encoders (left) and video encoders (right). Text-sequence encoders are used on text input, i.e. dialogue history, video caption, query, and output sequence. Video encoders are used on visual and audio features of input video. sists of 2 linear transformation with ReLU activation in between. We employed residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) around each attention block. The multi-head attention on zs is defined as: ms = Concat(h1, ..., hh)W O (1) hi = Attn(zdec outW Q i , zsW K i , zsW V i ) (2) Attn(q, k, v) = softmax( qkT √dk )v (3) where W Q i ∈Rd×dk, W K i ∈Rd×dk, W V i ∈ Rd×dk, W O i ∈Rhdv×d (the superscripts of s and t are not presented for each W for simplicity). zdec out is the output of the previous sub-layer. The multi-head attention allows the model to attend on text sequence features at different positions of the sequences. By using multi-head attention on visual and audio features, the model can attend on frame sequences to project and extract information from different parts of the video. Using multiple attentions for different input components also allows the model attend differently on inputs rather than using the same attention network for all. We also experimented with concatenating the input sequences and only use one attention block in each decoding layer, similarly to a Transformer decoder ( See the appendix Section B). 3.3 Auto-Encoder Layers As the multi-head attentions allow dynamic attentions on different input components, the essential interaction between the input query and nontext features of the input video is not fully implemented. While a residual connection is employed and the video attention block is placed at the end of the decoder layer, the attention on video features might not be optimal. We consider adding queryaware attention on video features as a separate component. We design it as a query auto-encoder to allow the model to focus on query-related features of the video in an unsupervised manner. The auto-encoder is composed of a stack of N layers, each of which includes an query self-attention and 5616 query-aware attention on video features. Hence, the number of sub-layers is 1 + ∥M∥. For selfattention, the output of the previous sub-layer zae out (or zque in case of the first auto-encoder stack) is used identically as q, k and v in Equation 3, while for query-aware attention, zae out is used as q and fm is used as k and v. For an nth auto-encoder layer, each output of the query-aware attention on video features fatt m,n is passed to video attention module of the corresponding nth decoder layer. 
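To make the notation of Equations (1)-(3) concrete, below is a minimal PyTorch sketch of one multi-head attention sub-layer attending from the running decoder state over a single encoded source, wrapped with the residual connection and layer normalization described above. The module and variable names are assumptions, the Base setting d = 512 and h = 8 is used, and masking and the position-wise feed-forward sub-layer are omitted.

```python
import math
import torch
import torch.nn as nn

class SourceAttention(nn.Module):
    """One attention sub-layer: the decoder state attends over a single encoded
    source z_s (Equations 1-3), with residual connection and layer norm."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.h, self.d_k = n_heads, d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model)   # stacks W^Q_1..W^Q_h
        self.w_k = nn.Linear(d_model, d_model)   # stacks W^K_1..W^K_h
        self.w_v = nn.Linear(d_model, d_model)   # stacks W^V_1..W^V_h
        self.w_o = nn.Linear(d_model, d_model)   # W^O
        self.norm = nn.LayerNorm(d_model)

    def forward(self, z_dec_out, z_s):
        # z_dec_out: (batch, len_t, d) output of the previous sub-layer
        # z_s:       (batch, len_s, d) an encoded source (history, caption, query, or f_m)
        b, len_t, _ = z_dec_out.shape
        len_s = z_s.size(1)

        def split(x, length):
            return x.view(b, length, self.h, self.d_k).transpose(1, 2)  # (b, h, len, d_k)

        q = split(self.w_q(z_dec_out), len_t)
        k = split(self.w_k(z_s), len_s)
        v = split(self.w_v(z_s), len_s)

        # Attn(q, k, v) = softmax(q k^T / sqrt(d_k)) v
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k)
        heads = torch.matmul(scores.softmax(dim=-1), v)                  # (b, h, len_t, d_k)

        concat = heads.transpose(1, 2).contiguous().view(b, len_t, self.h * self.d_k)
        return self.norm(z_dec_out + self.w_o(concat))                   # residual + layer norm
```

In a decoding layer, one such block would be applied per input component (offset target, dialogue history, caption, query, and each video modality), which is what yields the 4 + ∥M∥ sub-layers; the video attention head defined next reuses the same Attn operation with the query-aware features f_att as keys and values.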
Each video attention head i for a given modality m at decoding layer nth is defined as: hi = Attn(zdec out,nW Q i , fatt m,nW K i , fatt m,nW V i ) The decoder and auto-encoder create a network similar to the One-to-Many setting in (Luong et al., 2015) as the encoded query features are shared between the two modules. We also consider using the auto-encoder as stacked queryaware encoder layers i.e. use query self-attention and query-based attention on video features and extract the output of final layer at Nth block to the decoder. Comparison of the performance (See Table 4 Row C-5 and D) shows that adopting an auto-encoder architecture is more effective in capturing relevant video features. 3.4 Generative Network Similar to sequence generative models (Sutskever et al., 2014; Manning and Eric, 2017), we use a Linear transformation layer with softmax function on the decoder output to predict probabilities of the next token. In the auto-encoder, the same architecture is used to re-generate the query sequence. We separate the weight matrix between the source sequence embedding, output embedding, and the pre-softmax linear transformation. Simulated Token-level Decoding. Different from training, during test time, decoding is still an auto-regressive process where the decoder generates the sentence token-by-token. We aim to simulate this process during training by performing the following procedures: • Rather than always using the full target sequence of length L, the token-level decoding simulation will do the following: • With a probability p, e.g. p = 0.5 i.e. for 50% of time, crop the target sequence at a uniform-randomly selected position i where i = 2, ..., (L −1) and keep the left sequence as the target sequence e.g. ⟨sos⟩there is just one person ⟨eos⟩→⟨sos⟩there is just one • As before, the target sequence is offset by one position as input to the decoder We employ this approach to reduce the mismatch of input to the decoder during training and test time and hence, improve the quality of the generated responses. We only apply this procedure for the target sequences to the decoder but not the query auto-encoder. 4 Experiments 4.1 Data We used the dataset from DSTC7 (Yoshino et al., 2018) which consists of multi-modal dialogues grounded on the Charades videos (Sigurdsson et al., 2016). Table 1 summarizes the dataset and Figure 1 shows a training example. We used the audio and visual feature extractors pre-trained on YouTube videos and the Kinetics dataset (Kay et al., 2017) (Refer to (Hori et al., 2018) for the detail video features). Specifically we used the 2048-dimensional I3D flow features from the “Mixed 5c” layer of the I3D network (Carreira and Zisserman, 2017) for visual features and 128dimensional Audio Set VGGish (Hershey et al., 2017) for audio features. We concatenated the provided caption and summary for each video from the DSTC7 dataset as the default video caption Cap+Sum. Other data pre-processing procedures are described in the appendix Section A.1. Train Validation Test # of Dialogs 7,659 1,787 1,710 # of Turns 153,180 35,740 13,490 # of Words 1,450,754 339,006 110,252 Table 1: DSTC7 Video Scene-aware Dialogue Dataset 4.2 Training We use the standard objective function loglikelihood of the target sequence T given the dialogue history H, user query Q, video features V , and video caption C. 
The log-likelihood of the regenerated query is also added when QAE is used:

L = L(T) + L(Q) = \sum_{m} \log P(y_m \mid y_{m-1}, \dots, y_1, H, Q, V, C) + \sum_{n} \log P(x^q_n \mid x^q_{n-1}, \dots, x^q_1, Q, V)

We train MTN models in two settings: Base and Large. The Base parameters are N = 6, h = 8, d = 512, d_k = d_v = d/h = 64, and the Large parameters are N = 10, h = 16, d = 1024, d_k = d_v = d/h = 64. The probability p for simulating token-level decoding is 0.5. We trained each model for up to 17 epochs with the Adam optimizer (Kingma and Ba, 2014). The learning rate is varied over the course of training with a strategy similar to that of (Vaswani et al., 2017), using 9,660 warmup steps. We employed dropout (Srivastava et al., 2014) of 0.1 at all sub-layers and embeddings, and label smoothing (Szegedy et al., 2016) is also applied during training. For all models, we select the latest checkpoint that achieves the lowest perplexity on the validation set. We used beam search with beam size 5 and a length penalty of 1.0. The maximum output length during inference is 30 tokens. All models were implemented using PyTorch (Paszke et al., 2017); the code is released at https://github.com/henryhungle/MTN. 4.3 Video-Grounded Dialogues We compared MTN models with the baseline (Hori et al., 2018) and the other submission entries to DSTC7 Track 3. The evaluation includes four word-overlap-based objective measures: BLEU (1 to 4) (Papineni et al., 2002), CIDEr (Vedantam et al., 2015), ROUGE-L (Lin, 2004), and METEOR (Banerjee and Lavie, 2005). The results were computed based on one reference ground-truth response per dialogue in the test set. As can be seen in Table 3, both the Base and Large MTN models outperform the baseline (Hori et al., 2018) in all metrics. Our Large model outperforms the best previously reported models in the challenge across all metrics. Even our Base model, with fewer parameters, outperforms most of the previous results, except for Entry-top1, which we outperform in BLEU1-3 and METEOR. While some of the models submitted to the challenge utilized external data or ensemble techniques (Alamri et al., 2018), we only use the given training data from the DSTC7 dataset, similarly to the baseline (Hori et al., 2018). Impact of Token-level Decoding Simulation. We consider text-only dialogues (no visual or audio features) to study the impact of the token-level decoding simulation component. We also remove the auto-encoder module, i.e. MTN w/o QAE. We study the differences in performance for simulation probabilities p = 0, 0.1, ..., 1, where p = 0 is equivalent to always keeping the target sequences whole and p = 1 crops all target sequences at random points during training. As shown in Figure 4, adding the simulation helps to improve the performance in most cases with 0 < p < 1. At p = 1, the performance suffers as the decoder receives only fragmented sequences during training. Figure 4: Impact of the simulation probability p on the BLEU4 measure on the test data. At p = 0.4 to 0.6, the improvement in BLEU4 scores is more significant. Ablation Study. We tested variants of our models with different combinations of data input in Table 4. With text-only input, compared to our approach (Row B-1), using encoder layers with self-attention blocks (Row A) does not perform well. The self-attention encoders also make the model hard to optimize, as noted by Liu et al. (2018).
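The simulated token-level decoding analyzed above (Section 3.4) amounts to randomly cropping target sequences during training. A minimal sketch is given below; the handling of very short sequences and the exact teacher-forcing offset are assumptions, since the paper only specifies the crop range i = 2, ..., (L - 1) and the probability p.

```python
import random

def simulate_token_level_decoding(target_ids, p=0.5):
    """Randomly crop a target token sequence to mimic the partial sequences the
    decoder sees during auto-regressive inference. `target_ids` includes <sos>
    and <eos>; with probability p the sequence is cut at a uniformly sampled
    position i in [2, L-1] and only the left part is kept."""
    L = len(target_ids)
    if L > 3 and random.random() < p:          # very short sequences left untouched (assumption)
        i = random.randint(2, L - 1)
        target_ids = target_ids[:i]
    decoder_input = target_ids[:-1]            # offset by one position (standard teacher forcing)
    decoder_target = target_ids[1:]
    return decoder_input, decoder_target

# e.g. ["<sos>", "there", "is", "just", "one", "person", "<eos>"]
# may be cropped to ["<sos>", "there", "is", "just", "one"] before the offset.
```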
When we remove the video caption from the input (hence, no caption attention layers) and use either visual or audio video features, we observe that the proposed auto-encoder with query-aware attention results in better responses. For example, with audio feature, adding the auto-encoder component (Row C-1) increases BLEU4 and CIDEr measures as compared to the case where no autoencoder is used (Row B-2). When using both caption and video features, the proposed auto-encoder (Row C-5) improves all metrics from the decoderonly model (Row B-4). We also consider using the auto-encoder structure as an encoder (i.e. without the generative component to re-generate query) 5618 and decouple from the decoder stacks (i.e. output of the Nth encoder layer is used as input to the 1st decoder layer) (Row D). The results show that an auto-encoder structure is superior to stacked encoder layers. Our architecture is also better in terms of computation speed as both decoder and auto-encoder are processed in parallel, layer by layer. Results of other model variants are available in the appendix Section B. 4.4 Visual Dialogues We also test if MTN could generalize to other multi-modal dialogue settings. We experiment on the visually grounded dialogue task with the VisDial dataset (Das et al., 2017a). The training dataset is much larger than DSTC7 dataset with more than 1.2 million training dialogue turns grounded on images from the COCO dataset (Lin et al., 2014). This task aims to select a response from a set of 100 candidates rather than generating a new complete response. Here we still keep the generative component and maximize the loglikelihood of the ground-truth responses during training. During testing, we use the log-likelihood scores to rank the candidates. We also remove the positional encoding component from the encoder to encode image features as these features do not have sequential characteristics. All other components and parameters remain unchanged. We trained MTN with the Base parameters on the Visual Dialogue v1.0 2 training data and evaluate on the test-std v1.0 set. The image features are extracted by a pre-trained object detection model (Refer to the appendix Section A.2 for data preprocessing). We evaluate our model with Normalized Discounted Cumulative Gain (NDCG) score by submitting the predicted ranks of the response candidates to the evaluation server (as the groundtruth for the test-std v1.0 split is not published). We keep all the training procedures unchanged from the video-grounded dialogue task. Table 2 shows that our proposed MTN is able to generalize to the visually grounded dialogue setting. It is interesting that our generative model outperforms other retrieval-based approaches in NDCG without any task-specific fine-tuning. There are other submissions with higher NDCG scores from the leaderboard 3 but the approaches of these submis2https://visualdialog.org/data 3https://evalai.cloudcv.org/web/ challenges/challenge-page/103/ leaderboard/298 sions are not clearly detailed to compare with. Model NDCG MTN (Base) 55.33 CorefNMN (Kottur et al., 2018) 54.70 MN (Das et al., 2017a) 47.50 HRE (Das et al., 2017a) 45.46 LF (Das et al., 2017a) 45.31 Table 2: Comparison of MTN (Base) to state-of-the-art visual dialogue models on the test-std v1.0. The best measure is highlighted in bold. 5 Qualitative Analysis Figure 6 shows some samples of the predicted test dialogue responses of our model as compared to the baseline (Hori et al., 2018). 
Our generated responses are more accurate than the baseline to answer human queries. Some of our generated responses are more elaborate e.g. “with a cloth in her hand”. Our responses can correctly describe single actions (e.g. “cleaning the table”, “stays in the same place”) or a series of actions (e.g. “walks over to a closet and takes off her jacket”). This shows that our MTN approach can reason over complex features came from multiple modalities. Figure 5 summarizes the CIDEr measures of the responses generated by our Base model and the baseline (Hori et al., 2018) by their position in dialogue e.g. 1st...10th turn. It shows that our responses are better across all dialogue turns, from 1st to 10th. Figure 5 also shows that MTN perform better at shorter dialogue lengths e.g. 1-turn, 2-turn and 3-turn, in general and the performance could be further improved for longer dialogues. 1 2 3 4 5 6 7 8 9 10 Dialogue position of generated response 0 0.5 1 1.5 2 2.5 CIDEr Ours Baseline Figure 5: Comparison of CIDEr measures on the test data between MTN (Base) and the baseline (Hori et al., 2018) across different turn position of the generated responses. Our model outperforms the baselines at all dialogue turn positions. 5619 BLEU1 BLEU2 BLEU3 BLEU4 METEOR ROUGE-L CIDEr MTN MTN (Base) 0.357 0.241 0.173 0.128 0.162 0.355 1.249 MTN (Large) 0.356 0.242 0.174 0.135 0.165 0.365 1.366 DSTC7 submissions Entry-top1 0.331 0.231 0.171 0.131 0.157 0.363 1.360 Entry-top2 0.329 0.228 0.167 0.126 0.154 0.357 1.306 Entry-top3 0.327 0.225 0.164 0.123 0.155 0.350 1.269 Entry-top4 0.312 0.210 0.152 0.115 0.148 0.357 1.271 Entry-top5 0.329 0.216 0.153 0.114 0.140 0.331 1.103 (Hori et al., 2018) 0.279 0.183 0.13 0.095 0.122 0.303 0.905 Table 3: Evaluated on the test data, the proposed approach achieves better objective measures than the baselines and the submissions to the challenge. The best result in each metric is highlighted in bold. CapFea VidFea BLEU1 BLEU2 BLEU3 BLEU4 METEOR ROUGE-L CIDEr MTN w/o QAE + Stacked Self-Attention in Encoder A Cap+Sum N/A 0.327 0.216 0.154 0.114 0.147 0.332 1.106 MTN w/o QAE B-1 Cap+Sum N/A 0.346 0.231 0.164 0.120 0.158 0.344 1.176 B-2 N/A A 0.316 0.207 0.145 0.105 0.138 0.315 0.963 B-3 N/A V 0.328 0.222 0.158 0.118 0.147 0.331 1.102 B-4 Cap+Sum A+V 0.347 0.234 0.168 0.124 0.158 0.344 1.197 MTN C-1 N/A A 0.324 0.214 0.152 0.113 0.142 0.326 1.031 C-2 N/A V 0.328 0.223 0.155 0.119 0.147 0.330 1.115 C-3 Cap+Sum A 0.344 0.236 0.170 0.127 0.159 0.354 1.220 C-4 Cap+Sum V 0.343 0.229 0.161 0.118 0.160 0.348 1.151 C-5 Cap+Sum A+V 0.357 0.241 0.173 0.128 0.162 0.355 1.249 MTN (replacing QAE with QE - Query-Aware Encoder) D Cap+Sum A+V 0.334 0.227 0.164 0.123 0.153 0.344 1.200 Table 4: Ablation analysis of MTN evaluated on the test data. The video features being used is either VGGish for audio features (A) or I3D-Flow for visual features (V). All models are trained with the Base parameters. Best result in each metric is highlighted in bold. C: man walks over to laptop...he sits down and wipes and scratches his face with his hands... Q: does he ever stand back up ? A_ref: no , the video ends with him sitted A_ours: no , he stays in the same place the whole time A_base: no he stays in the same position the whole time C: a guy is sitting at his desk writing in a notebook . he seems very enthralled with what he is writing... Q: does he pick up a pencil at the start of the video ? 
A_ref: he is already writing when the video begins A_ours: yes he has it in his hand A_base: yes , he is looking at his phone C: a table is being wiped with a cloth ...someone sits down and takes a drink from a cup. Q: what is she cleaning ? A_ref: she is cleaning off a table A_ours: she is cleaning the table with a cloth in her hand A_base: she is holding a book C:...after she temporarily puts the paper down for a moment so she can take her jacket off ... Q: what is happening in the video ? A_ref: a lady walks over to a closet with papers in hand and then takes her jacket off A_ours: a woman walks over to a closet and takes off her jacket A_base: a man walks into the room Figure 6: Example test dialogue responses extracted from the ground-truth Aref and generated by MTN (Base) Aours and the baseline (Hori et al., 2018) Abase. For simplicity, the dialogue history is not presented and only parts of the video caption C are shown. Our model provides answers that are more accurate than the baseline, capturing single human action or a series of actions in the videos. 5620 6 Conclusion In this paper, we showed that MTN, a multi-head attention-based neural network, can generate good conversational responses in multimodal settings. Our MTN models outperform the reported baseline and other submission entries to the DSTC7. We also adapted our approach to a visual dialogue task and achieved excellent performance. A possible improvement to our work is adding pre-trained embedding such as BERT (Devlin et al., 2018) or image-grounded word embedding (Kiros et al., 2018) to improve the semantic understanding capability of the models. Acknowledgements The first author is supported by A*STAR Computing and Information Science scholarship (formerly A*STAR Graduate scholarship). The third author is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project #A18A2b0046). References Huda Alamri, Chiori Hori, Tim K Marks, Dhruv Batra, and Devi Parikh. 2018. Audio visual scene-aware dialog (avsd) track for natural language generation in dstc7.-. In DSTC7 at AAAI2019 Workshop. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. CoRR, abs/1605.07683. Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 4724–4733. IEEE. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2. Abhishek Das, Satwik Kottur, Jos´e M. F. Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learning. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2970–2979. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241. Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy networks with two-stage training for dialogue systems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Scott Wen-tau Yih, and Michel Galley. 2018. A knowledgegrounded neural conversation model. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, volume 1, page 3. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL). Association for Computational Linguistics. Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. 2017. Cnn architectures for largescale audio classification. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 131–135. IEEE. 5621 Chiori Hori, Huda Alamri, Jue Wang, Gordon Winchern, Takaaki Hori, Anoop Cherian, Tim K Marks, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, et al. 2018. End-to-end audio visual sceneaware dialog using multimodal attention-based video features. arXiv preprint arXiv:1806.08409. Chiori Hori, Takaaki Hori, Teng-Yok Lee, Ziming Zhang, Bret Harsham, John R Hershey, Tim K Marks, and Kazuhiko Sumi. 2017. Attention-based multimodal fusion for video description. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 4203–4212. IEEE. Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. 2017. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jamie Kiros, William Chan, and Geoffrey Hinton. 2018. Illustrative language understanding: Largescale visual grounding with image search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 922–933. Satwik Kottur, Jos´e MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module networks. 
In Proceedings of the European Conference on Computer Vision (ECCV), pages 153–169. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1437–1447. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics. Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, and Tao Mei. 2018. Jointly localizing and describing events for dense video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7492–7500. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Bing Liu and Ian Lane. 2017. An end-to-end trainable neural network model with belief tracking for taskoriented dialog. arXiv preprint arXiv:1708.05956. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468–1478. Association for Computational Linguistics. Christopher D. Manning and Mihail Eric. 2017. A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In EACL. Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. arXiv preprint arXiv:1809.01984. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. 
Automatic differentiation in PyTorch. In NIPS Autodiff Workshop. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99. 5622 Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 3776–3783. AAAI Press. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics. Gunnar A Sigurdsson, G¨ul Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 2016. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pages 510–526. Springer. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 3104–3112, Cambridge, MA, USA. MIT Press. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. MovieQA: Understanding Stories in Movies through Question-Answering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. CoRR, abs/1506.05869. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156–3164. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. 
Attention with intention for a neural network conversation model. CoRR, abs/1510.08565. Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D’Haro, Lazaros Polymenakos, Chulaka Gunasekara, Walter S. Lasecki, Jonathan Kummerfeld, Michael Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Sean Gao, Tim K. Marks, Devi Parikh, and Dhruv Batra. 2018. The 7th dialog system technology challenge. arXiv preprint. Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2018. Sdnet: Contextualized attention-based deep network for conversational question answering. arXiv preprint arXiv:1812.03593. A Data Pre-processing A.1 Video-Grounded Dialogues We split all sequences into (case-insensitive) tokens and selected those in the training data with the frequency more than 1 to build the vocabulary for embeddings. This results in 6175 unique tokens, including the ⟨eos⟩, ⟨sos⟩, ⟨pad⟩, and ⟨unk⟩ tokens. Sentences are batched together by approximate sequence lengths, in order of dialogue history length, video caption length, question length, and target sequence length. We use batch size of 32 during training. A.2 Visual-Grounded Dialogues The test-std v1.0 set include about 4000 dialogues grounded on COCO-like images collected from Flickr. We only selected tokens that have frequency at least 3 in the training data to build the vocabulary. This results in 13832 unique tokens. We use bottom-up attention features (Anderson et al., 2018) extracted from Faster R-CNN (Ren et al., 2015) which is pre-trained on the Visual Genome data (Krishna et al., 2017). This results in 36 2048-dimensional feature vectors per image. 5623 B Additional Experiment Results We experimented our models with text-only input e.g. no video audio or visual features and hence, no auto-encoder layers involved (MTN w/o QAE). We tested cases where the maximum dialogue history length Lmax his is limited to 1, 2, or 3 turns only. For each case, we also tried to concatenate all the source sequences, including dialogue history, video caption, and query, into a single sequence and use only one multi-head attention block on this concatenated sequence in each decoding layer (Similar to a Transformer decoder). Table 5 summarizes the results. The results show that concatenating the sequences into one affects the quality of the generated responses significantly. When the input sequences are separated and attended differently by different attention modules, the results improve. This could be explained as different sequences contain different signals to generate responses e.g. dialogue history contains information of references or ellipses in the user queries, user queries include direct signals for feature attention in input videos. Another observation is using all possible dialogue turns in the dialogue history i.e. Lmax his = 10 achieves the best results. We did not conduct experiments of concatenating source sequences with Lmax his = 10 due to memory issues with large input sequences. Max. HisLen Concat. Source Sequence? BLEU4 ROUGE-L CIDEr 10 No 0.120 0.344 1.176 3 No 0.116 0.343 1.141 3 Yes 0.097 0.308 0.924 2 No 0.115 0.343 1.150 2 Yes 0.090 0.304 0.900 1 No 0.119 0.343 1.163 1 Yes 0.095 0.301 0.894 Table 5: Evaluation results on the test set for MTN w/o QAE models in which maximum history length is range from 1 to 3 or 10 (i.e. all dialogue turns possible). We also experiments when all the source sequences are concatenated into one and the decoder only has one attention block on the concatenated sequence. The autoencoder components are also removed. 
Best result in each metric is highlighted in bold.
2019
564
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5624–5634 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5624 Target-Guided Open-Domain Conversation Jianheng Tang1,2, Tiancheng Zhao2, Chenyan Xiong3, Xiaodan Liang1∗, Eric P. Xing2,4, Zhiting Hu2,4∗ 1Sun Yat-sen University, 2Carnegie Mellon University, 3Microsoft Research AI, 4Petuum Inc. {sqrt3tjh,xdliang328}@gmail.com, {tianchez,zhitingh}@cs.cmu.edu [email protected], [email protected] Abstract Many real-world open-domain conversation applications have specific goals to achieve during open-ended chats, such as recommendation, psychotherapy, education, etc. We study the problem of imposing conversational goals on open-domain chat agents. In particular, we want a conversational system to chat naturally with human and proactively guide the conversation to a designated target subject. The problem is challenging as no public data is available for learning such a target-guided strategy. We propose a structured approach that introduces coarse-grained keywords to control the intended content of system responses. We then attain smooth conversation transition through turn-level supervised learning, and drive the conversation towards the target with discourse-level constraints. We further derive a keyword-augmented conversation dataset for the study. Quantitative and human evaluations show our system can produce meaningful and effective conversations, significantly improving over other approaches1. 1 Introduction Creating intelligent agent that can carry out opendomain conversation with human is a long-lasting challenge. Impressive progress has been made, advancing from early rule-based systems, e.g., Eliza (Weizenbaum et al., 1966), to recent end-toend neural conversation models that are trained on massive data (Shang et al., 2015; Li et al., 2015) and make use of background knowledge (Fang et al., 2018; Qin et al., 2019; Liu et al., 2018). However, current open-domain systems still struggle to conduct engaging conversations (Ram ∗corresponding authors 1Data and code are publicly available at https://github.com/squareRoot3/ Target-Guided-Conversation Not so good. I am really tired. Oh, I’m sorry to hear. why? I have too much work to do. What kind of work is it? I am writing a computer program. Interesting. I read about coding from a book. work book Target: e-books Hi there, how are you doing? Really? You are smart. e-books sorry I prefer e-books over paperback books. Figure 1: Target-Guided Open-Domain Conversation. The agent is given a target subject e-books which is unknown to the human. The goal is to guide the conversation naturally to the target. Utterance keywords are highlighted in red (agent) and blue (human) and in italic. et al., 2018), and often generate inconsistent or uncontrolled results. Further, many practical opendomain dialogue applications do have specific goals to achieve even though the conversations are open-ended, e.g., accomplishing nursing goals in therapeutic conversation, inspiring ideas in education, making recommendation and persuasion, and so forth. Thus, there is a strong demand to enable the integration of goals and strategy into opendomain dialogue systems, and it imposes challenges to both: first, how to define the goal for an open-domain chat system; and second, how to encode dialogue strategy into the response production process. 
It is also crucial to attain a general method that is not tailored towards specialized goals that require domain-specific handcrafting and annotations (Yarats and Lewis, 2018; He et al., 2018; Li et al., 2018). 5625 This paper makes a step towards open-domain dialogue agents with conversational goals. In particular, we want the system to chat naturally with humans on open domain topics and proactively guide the conversation to a designated target subject. For example, in Figure 1, given a target e-books and an arbitrary starting topic such as tired, the agent drives the conversation in a natural way following a high-level logical backbone, and effectively reaches the target in the end. Such a target-guided conversation setup is generalpurpose and can entail a large variety of practical applications as above. The above problem is difficult in that the agent has to balance well between chatting naturally and achieving the target; and moreover, to the best of our knowledge, there is no public dataset available for learning targetguided dialogue. This paper proposes a solution to the task. We decouple the whole system into separate modules and address the challenges at different granularity. Specifically, we explicitly model and control the intended content of each system response by introducing coarse-grained utterance keywords. We then impose a discourse-level rule that encourages the keywords to approach the end target during the course of the conversation; and we attain smooth conversation transition at each dialogue turn through turn-level supervised learning. To this end, we further derive a keyword-augmented conversation dataset from an existing daily-life chat corpus (Zhang et al., 2018) and use it for learning keyword transitions and utterance production. We study different keyword transition approaches, including pairwise PMI-based transition, neural-based prediction, and a hybrid kernelbased method. We conduct quantitative and human evaluations to measure the performance of sub-modules and the whole system. Our agent is able to generate meaningful and effective conversations with a decent success rate of reaching the targets, improving over other approaches in different respects. We show target-guided open-domain conversation is a promising and potentially important direction for future research. 2 Related Work The past end-to-end dialogue research can be broadly divided into two categories: task-oriented dialogue systems and chat-oriented (a.k.a opendomain) systems. For task-oriented dialogue systems, the system is designed to accomplish specific goals, e.g., providing bus schedule (Raux et al., 2005; Young et al., 2007; Dhingra et al., 2017). Besides information giving, other tasks have been extensively studied, such as negotiations (DeVault et al., 2015; Lewis et al., 2017; He et al., 2018; Cao et al., 2018), symmetric collaborations (He et al., 2017), etc. On the other hand, chat-oriented dialogue systems have been created to model open-domain conversations without specific goals. Prior work has been focusing on developing novel neural architectures that improve next utterance generation or retrieval task performance by training on large open-domain chit-chat dataset (Sordoni et al., 2015; Serban et al., 2016; Zhou et al., 2016; Wu et al., 2018). However, despite the steady improvement over model architectures, the current systems can still suffer from a range of limitations, e.g., dull responses, inconsistent persona (Li et al., 2016a), etc. 
The commercial chatbot XiaoIce (Zhou et al., 2018) and the first Amazon Alexa challenge winner (Fang et al., 2018) have stressed to improve engagement with users. Also, to encourage discourse-level strategy, prior work has developed different system action representations that enable the model to reason at the dialogue level. One line of work has utilized latent variable models (Zhao et al., 2017; Yarats and Lewis, 2018; Zhao et al., 2019) to infer a latent representation of system responses, which separates the natural language generation process from decision-making. Another approach has created hybrid systems to incorporate hand-crafted coarse-grained actions (Williams et al., 2017; He et al., 2018) as a part of the neural dialogue systems. These systems have typically focused on specific domains such as price negotiation and movie recommendation. Building upon the prior work, this paper creates novelty in terms of both defining goals for open-domain chatting and creating system actions representations. Our structured solution use predicted keywords as a non-parametric representation of the intended content for the next system response. Due to the lack of full supervision data, the solution proposed in this work divides the task into two competitive sub-objectives, each of which can be conquered with either direct supervision or simple rules. Such a divide-and-conquer approach 5626 represents a general means of addressing complex task objectives with no end-to-end supervision available. A similar approach has been adopted in other contexts, such as text style transfer (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018) and content manipulation (Lin et al., 2019), where content fidelity as a sub-objective is achieved with simple auto-encoding training, while the competitive nature of multiple sub-objectives jointly drives the models to learn desired behaviors. 3 Task Definition: Target-guided Open-domain Conversation We first formally define the task of target-guided open-domain conversation. We also establish the key notations used in the rest of the paper. Briefly, given a target, we want a chat agent to converse with human starting from an arbitrary initial topic, and lead the conversation to the target in the end. In this paper, we define a target to be a word (e.g., an entity name McDonald, or a common noun book, etc.) and denote it as t. We note that a target can also be formulated in other more complex forms depending on specific applications. The target is only presented to the agent and is unknown to the human. The conversation starts with an initial topic which is usually randomly picked by the human. At each dialogue turn where the agent wants to make a response, it has access to the conversation history consisting of a sequence of utterances by either the human or the agent, x1:n = {x1, . . . , xn}. The agent then produces an utterance xn+1 as a response, aiming to satisfy (1) transition smoothness by making the response natural and appropriate in the current conversation context, and (2) target achievement by driving the conversation to reach the designated target. Specifically, we consider a target is achieved when either the human or the agent mentions the target or similar word in an utterance— such a definition is simple and allows easy measurement of the success rate. Again, other more complex and meaningful measures could be considered for specific practical applications. The above two objectives are complementary and competitive. 
On one hand, an agent cannot simply bring up the target content regardless of the conversation context. For example, given a target cat and conversation history {Human: I went to a movie.}, a response like Do you like cat? is typically not a smooth transition, even though it quickly reaches the target. On the other hand, the agent must avoid being trapped in open-ended chats by producing only smooth yet reactive responses. Instead, it has to proactively lead the conversation to approach the target. The competitive nature of the two desiderata requires the agent to grasp a conversation strategy that balances well between different factors. To the best of our knowledge, there is no public large data that fits the new problem setting and permits end-to-end learning of such a discourselevel strategy in open domain. Instead, we usually only have access to those open-ended conversation data where interlocutors conversed freely without a specified end target. To this end, we propose to break down the problem, leverage partial supervisions and introduce more structures for a solution. In the following, we first present our approach to the task (section 4), and then introduce a large open-ended conversation dataset used for building the conversational agent (section 5). 4 The Proposed Approach We explore a solution that addresses the two desiderata separately. In particular, we maintain smooth conversation transition by turn-level supervised learning on open-domain chat data, and we inject target-guiding behavior with a rulebased guiding strategy. Further, to enable effective control over the transition and guiding strategy, we decouple the decision-making process and utterance generation by explicitly modeling the intended coarse-grained keywords in the next system utterance. Thus the system consists of several core modules, including a turn-level keyword transition predictor (section 4.1), a discourse-level targetguiding strategy (section 4.2), and a response retriever (section 4.3). 4.1 Turn-level Keyword Transition Given the conversation history at each dialogue turn, this module aims to predict keywords of the next response that is appropriate in the conversation context. This part is agnostic to the end target, and therefore aligns with the conventional chitchat objective. We thus can use any open-ended chat data with extracted utterance keywords to learn the prediction module in a supervised manner. We present such a dataset that we posit is par5627 I play basketball, do you play? Yes, I also like basketball. Discourse-level Target-Guided Strategy Do you like rap music? I listen to a lot of rap music. Target: dance Turn-level Keyword Transition Conversation History Keyword Augmented Response Retrieval dance 1.0 basketball 0.47 music 0.65 sport 0.40 Candidate Keyword Set cat 0.45 video 0.55 study 0.36 Keyword Predictor sport music cat 0.03 0.11 0.07 … … Response Retrieval Keyword Selection music party 0.62 Figure 2: Solution Overview. The left panel shows an on-going conversation with a designated target dance. The discourse-level target-guided module (right panel, section 4.2) first picks a set of valid candidate keywords for the next system response. The turn-level keyword transition module (middle panel, section 4.1) computes a distribution over candidate keywords. The most likely valid keyword (music) is then selected, and fed into the keyword-augmented response retrieval module (middle panel, section 4.3) for producing the next response. ticularly suitable for the learning in section 5. 
Architecturally, we study three different approaches as representative paradigms for predicting the next-turn keyword distribution: a pairwise keyword linear transition, a neural-based prediction, and a kernel-based method.

Pairwise PMI-based Transition. The most straightforward way to model keyword transition is to construct a pairwise keyword matrix that characterizes the association between keywords in the observed conversation data. We use pointwise mutual information (PMI) (Church and Hanks, 1990) as the measure, which, given two keywords $w_i$ and $w_j$, computes the likelihood of the transition $w_j \rightarrow w_i$ with

$\mathrm{PMI}(w_i, w_j) = \log \frac{p(w_i \mid w_j)}{p(w_i)}$,    (1)

where $p(w_i \mid w_j)$ is the ratio of transitioning to $w_i$ in the next turn given $w_j$ in the current turn, and $p(w_i)$ is the occurrence ratio of $w_i$. Both quantities can be directly counted from the conversation data beforehand. At test time, we first use a keyword extractor (section 5) to extract keywords of the current utterance. Assuming all these keywords are independent, for each candidate next keyword we sum up their PMI scores w.r.t. the candidate. The resulting candidate scores are then normalized to obtain a distribution over keywords in the next turn. The approach enjoys simplicity and interpretability, yet can suffer from data sparsity and perform poorly on transition pairs unseen a priori.

Neural-based Prediction. The second approach predicts the next keywords with a neural network in an end-to-end manner. More concretely, we first use a recurrent network to encode the conversation history, and feed the resulting features to a prediction layer to obtain a distribution over keywords for the next turn. The network is learned by maximizing the likelihood of observed keywords in the data. The neural approach is straightforward, but can rely on a large amount of data for learning.

Hybrid Kernel-based Method. We further study a hybrid approach that combines neural feature extraction with pairwise closeness measuring. Specifically, given a pair of a current keyword and a candidate next keyword, we follow Xiong et al. (2017) by first measuring the cosine similarity of their normalized word embeddings, and feeding the quantity to a kernel layer consisting of K RBF kernels. The output of the kernel layer is a K-dimension kernel feature vector, which is then fed to a single-unit dense layer to produce a candidate score. The score is finally normalized across all candidate keywords to yield the candidate probability distribution. If the current turn has multiple keywords, the corresponding multiple K-dimension kernel features are first summed up before being fed to the dense layer. Thus, the intermediate kernel layer serves as a soft aggregation mechanism to account for multiple-to-one keyword transitions. The parameters are learned in the same way as in the neural-based prediction method. Our empirical study shows the hybrid approach provides the strongest performance.
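To make the PMI-based transition concrete, the following is a minimal Python sketch of Eq. (1) and the score aggregation described above. The data structures and function names are illustrative assumptions, and normalizing the summed PMI scores with a softmax is our reading of "normalized to obtain a distribution", not necessarily the authors' exact scheme.

```python
import math
from collections import Counter, defaultdict

def build_pmi(turn_keyword_pairs):
    """turn_keyword_pairs: list of (current_turn_keywords, next_turn_keywords)."""
    next_counts = Counter()              # counts of w_i appearing as a next-turn keyword
    trans_counts = defaultdict(Counter)  # trans_counts[w_j][w_i]: observed w_j -> w_i transitions
    for cur_kws, next_kws in turn_keyword_pairs:
        next_counts.update(next_kws)
        for wj in cur_kws:
            trans_counts[wj].update(next_kws)
    total_next = sum(next_counts.values())

    def pmi(wi, wj):
        # PMI(w_i, w_j) = log p(w_i | w_j) / p(w_i), Eq. (1)
        joint = trans_counts[wj][wi]
        if joint == 0:
            return float("-inf")         # unseen transition pair (the sparsity issue noted above)
        p_wi_given_wj = joint / sum(trans_counts[wj].values())
        p_wi = next_counts[wi] / total_next
        return math.log(p_wi_given_wj / p_wi)

    return pmi, list(next_counts)

def next_keyword_distribution(current_keywords, pmi, vocab):
    # Sum PMI scores of the current keywords w.r.t. each candidate, then normalize (softmax-style).
    scores = {}
    for wi in vocab:
        vals = [pmi(wi, wj) for wj in current_keywords]
        vals = [v for v in vals if v != float("-inf")]
        scores[wi] = math.exp(sum(vals)) if vals else 0.0
    z = sum(scores.values()) or 1.0
    return {w: v / z for w, v in scores.items()}

# Toy usage:
pairs = [({"basketball"}, {"sport", "music"}), ({"music"}, {"dance"}), ({"sport"}, {"music"})]
pmi, vocab = build_pmi(pairs)
print(next_keyword_distribution({"basketball"}, pmi, vocab))
```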
4.2 Discourse-level Target-Guided Strategy

This module aims to fulfill the end target by proactively driving the discussion topic forward in the course of the conversation. As noted above, there is typically no data available for direct learning of such a strategy. Fortunately, the augmentation of interpretable coarse-grained keywords enables us to apply a simple yet effective rule to this end. We constrain that the keyword of each turn must move strictly closer to the end target compared to those of preceding turns. Figure 2, right part, illustrates the rule at a particular step. Given the keyword Basketball of the current turn and its closeness score (0.47) to the target Dance, the only valid candidate keywords for the next turn are those with higher target closeness, such as Party with a closeness score of 0.62. On the other hand, transitioning from Basketball to Sport is not allowed in this context, as it does not move towards the target.

More concretely, we use the cosine similarity between normalized word embeddings as the measure of keyword closeness. At each turn, when predicting the next keyword, the above constraint first collects a set of valid candidates, and the turn-level transition module samples or picks the most likely one from the set according to the keyword distribution. In this way, the predicted keyword for the next response can be both a smooth transition and an effective step towards the target.

4.3 Keyword-augmented Response Retrieval

The final module in the system aims to produce a response conditioned on both the conversation history and the predicted keyword. In this work, we use a retrieval-based approach, though a generation-based method can also be readily plugged in. The architecture of the module is adapted from previous work (Wu et al., 2016) with augmented keyword conditioning. More concretely, we use recurrent networks to encode the input conversation history and keyword, as well as each of the candidate responses from a database (e.g., all utterances in the training set). We then compute the element-wise product between the candidate feature and the history feature, and between the candidate feature and the keyword feature, respectively. The resulting two vectors are concatenated and fed to a final single-unit dense layer with a sigmoid to get the matching probability of the candidate response.

Same as the turn-level transition module, the conditional response retrieval module can also be learned with open-ended conversation data in a supervised manner. That is, we maximize the likelihood of the observed response given its history and predicted keyword, while minimizing the likelihood of randomly sampled negative responses. Section 5 presents more details of the data and negative responses.
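As a concrete illustration of the discourse-level rule in section 4.2, here is a minimal sketch of candidate filtering and keyword selection. The embedding lookup, the aggregation of the current turn's closeness (a max over its keywords), and the fallback when no candidate qualifies are our own assumptions for the sketch, not the authors' exact procedure.

```python
import numpy as np

def closeness(w1, w2, emb):
    """Cosine similarity between normalized word embeddings (the keyword closeness measure)."""
    v1, v2 = emb[w1], emb[w2]
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def choose_next_keyword(current_keywords, target, keyword_probs, emb):
    """keyword_probs: dict keyword -> probability from the turn-level transition module."""
    # Closeness of the current turn to the target: best score among its keywords.
    current_best = max(closeness(k, target, emb) for k in current_keywords)
    # Valid candidates must move strictly closer to the target than the current turn.
    valid = [k for k in keyword_probs if closeness(k, target, emb) > current_best]
    if not valid:                      # fall back to unconstrained transition
        valid = list(keyword_probs)
    # Pick the most likely valid keyword under the turn-level distribution.
    return max(valid, key=lambda k: keyword_probs[k])

# Toy usage with random embeddings (a real system would use pre-trained vectors):
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["basketball", "sport", "music", "party", "dance"]}
probs = {"sport": 0.4, "music": 0.3, "party": 0.2}
print(choose_next_keyword(["basketball"], "dance", probs, emb))
```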
5 Dataset

We next describe a large conversation dataset that can be useful for studying the task and has been used in our solution. The dataset is derived from the PersonaChat corpus (Zhang et al., 2018), where crowdworkers were asked to chat naturally with given personas. The conversations cover a broad range of topics such as work, family, and personal interests, and the discussion topics change frequently during the course of the conversations. These properties make the conversations particularly suitable for learning smooth, natural transitions at each turn. Note that, however, the conversations are not necessarily suitable for learning discourse-level strategies, as they were originally created without end targets and do not exhibit target-guided behaviors.

To adapt the corpus for turn-level keyword transition in our new setting, we obtain all conversations while discarding the associated persona information. We then augment the data by automatically extracting keywords of each utterance. Specifically, we apply a rule-based keyword extractor which combines TF-IDF and Part-of-Speech features for scoring word salience. More details are provided in the supplementary materials. We re-split the data into train/valid/test sets, where the test set contains 500 conversations with relatively frequent keywords. Table 1 lists the data statistics. An example conversation with the extracted keywords is shown in Table 2.

                   Train      Val     Test
#Conversations     8,939      500      500
#Utterances      101,935    5,602    5,317
#Keyword types     2,678    2,080    1,571
#Avg. keywords       2.1      2.1      1.9

Table 1: Data Statistics. The last row is the average number of keywords in each utterance. The vocabulary size is around 19K.

A: Hi ! I am from India . where are you from?
B: I'm from Portland. I just got back from a long walk.
A: I just got back from coaching swimming at the pool. Walking where ?
B: I like to walk in parks for good health. No soft drinks for me either!
... ...

Table 2: An Example Conversation. Only the first 4 utterances are shown. Keywords of each utterance are marked with underline.

The resulting dataset is used in our solution for training both the turn-level transition module (section 4.1) and the response retrieval module (section 4.3). We follow the retrieval-based chit-chat literature (Wu et al., 2016) and randomly sample 19 negative responses for each turn as the negative responses for training.

6 Experiments

6.1 Experimental Setup

Baselines and Comparison Systems. We evaluate a diverse set of approaches for comparison and ablation study.

Retrieval (Wu et al., 2016) is the conventional retrieval-based chit-chat system, which does not permit an end target and is not augmented with coarse-grained utterance keywords. The system thus cannot be deployed for target-guided conversation, and is used to provide reference performance in terms of the different metrics in the experiments. The model architecture is adapted from the prior work, the same as used in our full system except for the keyword conditioning part.

Retrieval-Stgy augments the above base retrieval system with the proposed target-guided strategy (section 4.2). Specifically, it first extracts the keywords of the current utterance with the extractor used in section 5, and applies the target-guided rule to obtain a set of candidate keywords. The base retrieval model is then used to retrieve a response containing at least one keyword from the keyword set. Such a pipeline approach achieves strong baseline performance, as shown in the following.

Ours. As in section 4.1, our full system has several variants of the turn-level keyword transition module, including the PMI, Neural, and Kernel methods. For comparison, we also use a Random method which randomly picks a keyword for the next response.

Training Details. We use the same configuration for the common parts of all agents. We apply a single-layer GRU (Chung et al., 2014) in all encoders. Both the word embedding and hidden dimensions are set to 200. We use GloVe (Pennington et al., 2014) to initialize word embeddings. We apply Adam optimization (Kingma and Ba, 2014) with an initial learning rate of 0.001, annealed to 0.0001 in 10 epochs. Systems are implemented with the text generation toolkit Texar (Hu et al., 2019).

6.2 Turn-level Evaluation

We first evaluate the performance of each conversation turn, in terms of both turn-level keyword prediction and response selection. That is, we disable the discourse-level target constraint, and focus on measuring how accurately the systems can predict the next keyword and retrieve the correct response on the test set of the conversation data.
The evaluation largely follows the protocol of previous chit-chat systems (e.g., Wu et al., 2016), and validates the effect of keyword-augmented conversation production.

Evaluation metrics. For the keyword prediction task, we measure three metrics: (1) Rw@K: keyword recall at position K (= 1, 3, 5) among all (over 2,600) possible keywords, (2) P@1: precision at the first position, and (3) Cor.: the word-embedding-based correlation score (Liu et al., 2016). For the response selection task, we randomly sample 19 negative responses for each test case, and calculate R20@K, i.e., recall at position K among the 20 candidate (positive and negative) responses, as well as MRR, the mean reciprocal rank.

Results. Table 3 shows the evaluation results. Our system with the Kernel transition module outperforms all other systems in terms of all metrics on both tasks, except for R20@3, where the system with PMI transition performs best. The Kernel approach can predict the next keywords more precisely. In the task of response selection, our systems that are augmented with predicted keywords significantly outperform the base Retrieval approach, showing that predicted keywords help retrieve better responses by capturing coarse-grained information about the next utterances. Interestingly, the system with Random transition has performance close to the base Retrieval model, indicating that erroneous keywords can be ignored by the system after training.

                      Keyword Prediction                       Response Retrieval
System        Rw@1    Rw@3    Rw@5    P@1     Cor.     R20@1   R20@3   R20@5   MRR
Retrieval     -       -       -       -       -        0.5196  0.7636  0.8622  0.6661
Ours-Random   0.0005  0.0015  0.0025  0.0009  0.4995   0.5187  0.7619  0.8631  0.6650
Ours-PMI      0.0585  0.1351  0.1872  0.0871  0.7974   0.5441  0.7839  0.8716  0.6847
Ours-Neural   0.0609  0.1324  0.1825  0.1006  0.8075   0.5395  0.7801  0.8790  0.6816
Ours-Kernel   0.0642  0.1431  0.1928  0.1191  0.8164   0.5486  0.7827  0.8845  0.6914

Table 3: Results of Turn-level Evaluation.

System          Succ. (%)   #Turns
Retrieval       9.8         3.26
Retrieval-Stgy  67.2        6.56
Ours-PMI        47.4        5.12
Ours-Neural     51.6        4.29
Ours-Kernel     75.0        4.20

Table 4: Results of Self-Play Evaluation.

6.3 Target-guided Conversation Evaluation

We next evaluate system performance in the proposed target-guided conversation setup, with both automatic simulation-based evaluation and human evaluation.

6.3.1 Self-Play Simulation

Following the experimental settings in prior work (Lewis et al., 2017; Li et al., 2016b), we developed a task simulator to automatically produce target-guided conversations. Specifically, we use the base Retrieval agent to play the role of the human, which retrieves a response without knowing the end target. The simulator randomly picks a keyword as the end target and an utterance as the starting point. Each agent then chats with the Retrieval system, trying to guide the conversation to the given target. To automatically evaluate whether the target is achieved, we use WordNet (Miller, 1998) to identify keywords that are semantically close to the end target. More concretely, if a keyword in an utterance (by either the agent under test or Retrieval) has a WordNet information content similarity score higher than 0.9 with the target, we consider the target successfully achieved. To avoid infinite conversations that never reach the target, we set a maximum allowed number of turns, which is 8 in our experiment. That is, an agent that does not achieve the target after producing 8 responses is considered to fail on the case.
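The target-achievement check can be sketched with NLTK's WordNet interface as below. Using Lin's information-content similarity as the score, restricting to noun synsets, and taking the maximum over synset pairs are our assumptions about the setup, not necessarily the authors' exact procedure.

```python
# Requires: pip install nltk, plus nltk.download('wordnet') and nltk.download('wordnet_ic')
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')  # information-content statistics

def target_achieved(keyword, target, threshold=0.9):
    """Return True if the keyword is semantically close enough to the end target."""
    best = 0.0
    for s1 in wn.synsets(keyword, pos=wn.NOUN):
        for s2 in wn.synsets(target, pos=wn.NOUN):
            try:
                best = max(best, s1.lin_similarity(s2, brown_ic))
            except Exception:  # similarity undefined for some synset pairs
                continue
    return best >= threshold

print(target_achieved("dance", "dancing"))      # likely True
print(target_achieved("basketball", "dance"))   # likely False
```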
System          Succ. (%)   Smoothness
Retrieval       18          3.26
Retrieval-Stgy  66          3.24
Ours-PMI        52          3.00
Ours-Neural     56          2.94
Ours-Kernel     76          3.40

Table 5: Results of the Human Rating.

                 Kernel Better (%)   No Prefer (%)   Kernel Worse (%)
Retrieval-Stgy   34                  40              26
Ours-PMI         52                  22              26
Ours-Neural      56                  28              16

Table 6: Results of Human Ranking.

We measure the success rate of achieving the targets (Succ.) and the average number of turns used to reach a target (#Turns). Table 4 shows the results of 500 simulations for each of the comparison systems. Our system with Kernel transition obtains the highest success rate, significantly improving over other approaches. The success rate of the base Retrieval agent is lower than 10%, which shows that a chit-chat agent without a target-guided strategy can hardly accomplish our task. The Retrieval-Stgy agent has a relatively high success rate, while taking more turns (6.56) to do so. This is partially due to the lack of coarse-grained keyword modeling and transition. We further note that, in the Kernel system, around 81% of predicted keywords eventually occur in the produced utterances, indicating that the predicted keywords have a great impact on the retrieval module.

6.3.2 Human Evaluation

We finally perform human evaluation for a more thorough system comparison in terms of different aspects. Specifically, we use the DialCrowd toolkit (Lee et al., 2018) to set up human evaluation interfaces, and undertake two types of human studies, as below.

Agent: Retrieval-Stgy, Target: Dance
A: Hi, how are you today?
H: Fine. Just finished riding in the countryside.
A: That sounds relaxing. I love the outdoors.
H: I rode along a long black river.
A: Nice, do you like the lake? I do I go often.
H: Yes. I like swimming in the lake.
A: I've never been surfing but used to do a lot of traveling across the us playing guitar.
H: Nice. I also like playing guitar and other musical instruments.
A: Oh that's nice, I'm more into classical music.
H: I am interested in pop music.
A: Oh cool! I really like music and singing.
H: I enjoy singing. Although I cannot sing well.
A: Music is life. Dancing to anything with a beat is the best.

Agent: Ours-Kernel, Target: Dance
A: Hi, how are you today?
H: Fine. Just finished riding in the countryside.
A: I love riding my bike. It is great exercise and fun. (bike)
H: I rode along a long black river.
A: That sounds very exciting. You can ride bikes, listen to musics there too. (music)
H: That's great. My favorite singer is Taylor Swift.
A: Yes I love to sing! I can sing a lot of songs, can you? (sing)
H: Yes, although I cannot sing well.
A: Dancing? I love dancing! Singing too!

Agent: Retrieval-Stgy, Target: McDonald's
A: Hello, how are you doing?
H: Hi, I am great just finishing up my homework. How are you?
A: Oh nothing much just reading.
H: Which kind of books are you reading?
A: Reading comic books, I love them. What about you?
H: I enjoy reading sci-fi and romantic stories.
A: That sounds helpful for writing.
H: Yeah, but I do not good at writing articles
A: Those were good books, what do you like to eat?
H: Vegetables and fruit salad are my favorite.
A: That reminds me of spicy food.
H: Oh spicy food is also very delicious.
A: Nothing better than bacon for breakfast.
H: I often eating eggs for breakfast.
A: I love cheese tacos too.
H: It's too late now, and I'll ride my bike to home.

Agent: Ours-Kernel, Target: McDonald's
A: Hello, how are you doing?
H: Hi, I am great just finishing up my homework. How are you?
A: Just got done eating some chicken and rice after I got done with my gym session. (eat)
H: I am gonna eat some vegetable for dinner.
A: So you are vegetarian? I am from los angeles so there are lots of vegetarian restaurants. (restaurant)
H: I am not a vegetarian. Fried chicken is my favorite.
A: Oh boy, when I go to McDonald's, I always get fries for my 2 dogs.

Table 7: Example conversations between human (H) and two different agents (A), with the same targets and starting utterances. Keywords selected or predicted by the agents are highlighted in red and italic, and keywords mentioned by the human are highlighted in blue and italic. As keywords predicted by the Kernel agent do not necessarily occur in the retrieved utterances, we put them at the end of each sentence. Targets achieved at the end of conversations are underlined. We present the examples in case-sensitive format for readability. All tokens are in lowercase in the program.

The first evaluation is to measure the system performance in terms of the two key desiderata, namely target achievement and transition smoothness, respectively. We first build 50 test cases, each of which has a target and a starting utterance. In each test case, a human turker is asked to converse with a randomly selected agent. The agent informs the turker when it thinks the target has been achieved or it has reached the maximum number of turns (which is set to 8). Then the turker is presented with the designated target, and is asked to judge whether the target has been achieved, as well as to rate the transition smoothness of the conversation with a score ranging from 1 (strongly bad) to 5 (strongly good). All agents are evaluated on all test cases.

Table 5 shows the results of the first evaluation. Our Kernel agent clearly outperforms all other comparison systems in terms of both success rate and transition smoothness. Note that the success rate results of all agents are consistent with those in simulation (Table 4). Comparing the base Retrieval agent and the augmented Retrieval-Stgy agent, we can see that Retrieval-Stgy has almost the same smoothness as Retrieval but achieves a much higher success rate. This validates that our discourse-level strategy (section 4.2) is indeed effective for target-guided conversations.

The second evaluation compares our best-performing Kernel agent with other agents side-by-side. Specifically, we ask a human turker to converse with the Kernel agent and a randomly selected comparison agent in the same test case. We then ask the turker to rank the two conversations by considering all the criteria. Turkers can also choose "no preference" if the conversations are equally good or bad. To avoid any bias, in each test case we randomly pick one of the pair of agents to converse first, and we let the turker decide when to stop to avoid revealing the target too early. As above, we evaluate on 50 test cases for each pair of agents. Table 6 shows the results of the second evaluation. We see that our Kernel system consistently outperforms the comparison methods with 30-50% wins.

Agent: Ours-Kernel, Target: Listen
A: Hi! How are you? I am very exciting about my new job!
H: Fine. What is your new job?
A: That sounds exciting. (sound)
H: I like reading books.
A: What kind of books do you read? (book)
H: I read comics.
A: I do not have TV. (TV)
H: Watching animation is fun.
A: What position d you play? (play)
H: I play a lot of sports.
A: Oh really? What kind of music? Listen to classical.

Table 8: An erroneous example.
6.4 Qualitative Study

We take a close look at the model performance by studying the conversation examples from different agents in the human evaluation. Table 7 shows the conversations between human and agents given targets dance and McDonald's, respectively. We can see that, in general, our Kernel agent can accomplish the task in fewer turns than the Retrieval-Stgy agent. In the first case, the Kernel agent guides the conversation from ride to the crucial topic music smoothly and quickly, and then achieves the target word dance naturally. In contrast, the Retrieval-Stgy agent is trapped in open-ended chats for the first three turns and does not reach the target until the 7th turn. In the second case, the target McDonald's is relatively uncommon in our dataset. The Kernel agent succeeded in achieving the target in the 4th turn, while the Retrieval-Stgy agent failed to reach the target within the maximally allowed number of turns.

Table 8 shows a failure case of our Kernel agent. Although the agent successfully achieved the target, it sometimes makes non-smooth keyword transitions without clear logic. For instance, the final utterance of the agent, though reaching the target listen, is not appropriate in the conversation context (e.g., in the presence of the human's preceding keyword sports).

7 Conclusions & Discussions

We have studied the problem of target-guided open-domain conversation, where an agent converses naturally with the human and proactively guides the conversation to a designated end target. We propose a modular solution with coarse-grained keywords as a logical backbone, and use partial supervision and heuristic rules to achieve the task. We also derive a dataset for the study. Quantitative and human evaluations demonstrate promising and improved results of our approach.

This work presents an initial attempt to bridge the gap between open-domain chit-chat and task-oriented dialogue. A target-guided agent can be deployed in practice to converse with users engagingly and guide the users to trigger task-oriented systems (e.g., reserving a restaurant) in the end. An open-domain agent with control over the conversation strategy and end target can also be useful in education, psychotherapy, and others as discussed in section 1. Our treatment of utterance action and conversation target through simple keywords can be preliminary in terms of complex real applications. It would be exciting to explore more sophisticated modeling to enable more fine-grained control on both the sentence (Hu et al., 2017) and discourse levels (Williams et al., 2017; Fang et al., 2018).

References

Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z Leibo, Karl Tuyls, and Stephen Clark. 2018. Emergent communication through negotiation. In ICLR.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.

Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22–29.

David DeVault, Johnathan Mell, and Jonathan Gratch. 2015. Toward natural turn-taking in a virtual human negotiation agent. In 2015 AAAI Spring Symposium Series.

Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In ACL.
Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A Smith, and Mari Ostendorf. 2018. Sounding board: A user-centric and content-driven social chatbot. In NAACL System Demonstrations.

He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In ACL.

He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. arXiv preprint arXiv:1808.09637.

Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, et al. 2019. Texar: A modularized, versatile, and extensible toolkit for text generation. In ACL System Demonstrations.

Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In ICML.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Kyusong Lee, Tiancheng Zhao, Alan W Black, and Maxine Eskenazi. 2018. DialCrowd: A toolkit for easy dialog system assessment. In SIGDIAL, pages 245–248.

Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155.

Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016b. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541.

Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. In NeurIPS, pages 9748–9758.

Shuai Lin, Wentao Wang, Zichao Yang, Haoran Shi, Frank Xu, Xiaodan Liang, Eric Xing, and Zhiting Hu. 2019. Toward unsupervised text content manipulation. arXiv preprint arXiv:1901.09501.

Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.

Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In ACL, volume 1, pages 1489–1498.

George Miller. 1998. WordNet: An electronic lexical database. MIT Press.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.

Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jianfeng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. In ACL.

Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational AI: The science behind the Alexa prize. arXiv preprint arXiv:1801.03604.

Antoine Raux, Brian Langner, Dan Bohus, Alan W Black, and Maxine Eskenazi. 2005. Let's go public! Taking a spoken dialog system to the real world. In Ninth European Conference on Speech Communication and Technology.
Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069.

Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.

Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NeurIPS.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714.

Joseph Weizenbaum et al. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45.

Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In ACL.

Hua Wu, Yi Liu, Ying Chen, Wayne Xin Zhao, Daxiang Dong, Dianhai Yu, Xiangyang Zhou, and Lu Li. 2018. Multi-turn response selection for chatbots with deep attention matching network. In ACL.

Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2016. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. arXiv preprint arXiv:1612.01627.

Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In SIGIR, pages 55–64. ACM.

Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In NeurIPS.

Denis Yarats and Mike Lewis. 2018. Hierarchical text generation and planning for strategic dialogue. In ICML.

Stephanie Young, Jost Schatzmann, Karl Weilhammer, and Hui Ye. 2007. The hidden information state approach to dialog management. In Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, volume 4, pages IV–149. IEEE.

Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.

Tiancheng Zhao, Kaige Xie, and Maxine Eskenazi. 2019. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models. arXiv preprint arXiv:1902.08858.

Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL.

Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2018. The design and implementation of xiaoice, an empathetic social chatbot. arXiv preprint arXiv:1812.08989.

Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In EMNLP.
Persuasion for Good: Towards a Personalized Persuasive Dialogue System for Social Good

Xuewei Wang∗1, Weiyan Shi∗2, Richard Kim2, Yoojung Oh2, Sijia Yang3, Jingwen Zhang2 and Zhou Yu2
1 Zhejiang University, 2 University of California, Davis, 3 University of Pennsylvania
[email protected], {wyshi, khgkim, yjeoh}@ucdavis.edu, [email protected], {jwzzhang, joyu}@ucdavis.edu

Abstract

Developing intelligent persuasive conversational agents to change people's opinions and actions for social good is the frontier in advancing the ethical development of automated dialogue systems. To do so, the first step is to understand the intricate organization of strategic disclosures and appeals employed in human persuasion conversations. We designed an online persuasion task where one participant was asked to persuade the other to donate to a specific charity. We collected a large dataset with 1,017 dialogues and annotated emerging persuasion strategies from a subset. Based on the annotation, we built a baseline classifier with context information and sentence-level features to predict the 10 persuasion strategies used in the corpus. Furthermore, to develop an understanding of personalized persuasion processes, we analyzed the relationships between individuals' demographic and psychological backgrounds, including personality, morality, and value systems, and their willingness to donate. Then, we analyzed which types of persuasion strategies led to a greater amount of donation depending on the individuals' personal backgrounds. This work lays the ground for developing a personalized persuasive dialogue system.1

1 Introduction

Persuasion aims to use conversational and messaging strategies to change one specific person's attitude or behavior. Moreover, personalized persuasion combines both strategies and user information related to the outcome of interest to achieve better persuasion results (Kreuter et al., 1999; Rimer and Kreuter, 2006). Simply put, the goal of personalized persuasion is to produce desired changes by making the information personally relevant and appealing. However, two questions about personalized persuasion remain unexplored. First, how does personal information affect persuasion outcomes? Second, which strategies are more effective given different user backgrounds and personalities?

∗ Equal contribution.
1 The dataset and code are released at https://gitlab.com/ucdavisnlp/persuasionforgood.

The past few years have witnessed the rapid development of conversational agents. The primary goal of these agents is to facilitate task completion and human engagement in practical contexts (Luger and Sellen, 2016; Bickmore et al., 2016; Graesser et al., 2014; Yu et al., 2016b). While persuasive technologies for behavior change have successfully leveraged other system features such as providing simulated experiences and behavior reminders (Orji and Moffatt, 2018; Fogg, 2002), the development of automated persuasive agents lags behind due to the lack of synergy between the social scientific research on persuasion and the computational development of conversational systems. In this work, we introduce the foundational work for building an automatic personalized persuasive dialogue system.
We first collected 1,017 human-human persuasion conversations (PERSUASIONFORGOOD) that involved real incentives to participants. Then we designed a persuasion strategy annotation scheme and annotated a subset of the collected conversations. In addition, we built a baseline classifier for the 10 different persuasion strategies using a recurrent convolutional neural network (RCNN) with sentence-level features and dialogue context information. We also analyzed the relations among participants' demographic backgrounds, personality traits, value systems, and their donation behaviors. Lastly, we analyzed which types of persuasion strategies worked more effectively for which types of personal backgrounds. These insights will serve as important elements in our design of the personalized persuasive dialogue systems in the next phase.

2 Related Work

In social psychology, the rationale for personalized persuasion comes from the Elaboration Likelihood Model (ELM) theory (Petty and Cacioppo, 1986). It argues that people are more likely to engage with persuasive messages when they have the motivation and ability to process the information. The core assumption is that persuasive messages need to be associated with the ways different individuals perceive and think about the world. Hence, personalized persuasion is not simply capitalizing on superficial personal information such as name and title in the communication; rather, it requires a certain degree of understanding of the individual to craft unique messages that can enhance his or her motivation to process and comply with the persuasive requests (Kreuter et al., 1999; Rimer and Kreuter, 2006; Dijkstra, 2008).

There has been increasing interest in persuasion detection and prediction recently. Hidey et al. (2017) presented a two-tiered annotation scheme to differentiate claims and premises, and the different persuasion strategies in each of them, in an online persuasive forum (Tan et al., 2016). Hidey and McKeown (2018) proposed to predict persuasiveness by modelling argument sequences in social media and showed promising results. Yang et al. (2019) proposed a hierarchical neural network model to identify persuasion strategies in a semi-supervised fashion. Inspired by this prior work on online forums, we present a persuasion dialogue dataset with user demographic and psychological attributes, and study personalized persuasion in a conversational setting.

In the past few years, personalized dialogue systems have come to people's attention because a user-targeted personalized dialogue system is able to achieve better user engagement (Yu et al., 2016a). For instance, Shi and Yu (2018) exploited user sentiment information to make a dialogue agent more user-adaptive and effective. But how to get access to users' personal information is a limiting factor in personalized dialogue system design. Zhang et al. (2018) introduced a human-human chit-chat dataset with a set of 1K+ personas. In this dataset, each participant was randomly assigned a persona that consists of a few descriptive sentences. However, the brief description of a user persona lacks quantitative analysis of users' sociodemographic backgrounds and psychological characteristics, and therefore is not sufficient for analyzing interaction effects between personalities and dialogue policy preferences.

Recent research has advanced dialogue system design on certain negotiation tasks such as bargaining over goods (He et al., 2018; Lewis et al., 2017). The difference between negotiation and persuasion lies in their ultimate goals.
Negotiation strives to reach an agreement between both sides, while persuasion aims to change one specific person's attitude and decision. Lewis et al. (2017) applied end-to-end neural models with self-play reinforcement learning to learn better negotiation strategies. In order to achieve different negotiation goals, He et al. (2018) decoupled the dialogue act and language generation, which helped control the strategy with more flexibility. Our work is different in that we focus on the domain of persuasion and the personalized persuasion procedure.

Traditional persuasive dialogue systems have been applied in different fields, such as law (Gordon, 1993), car sales (André et al., 2000), and intelligent tutoring (Yuan et al., 2008). However, most of them overlooked the power of personalized design and didn't leverage deep learning techniques. Recently, Lukin et al. (2017) considered personality traits in single-turn persuasion dialogues on social and political issues. They found that personality factors can affect belief change, with conscientious, open and agreeable people being more convinced by emotional arguments. However, it's difficult to utilize such a single-turn dataset in the design of multi-turn dialogue systems.

3 Data Collection

We designed an online persuasion task to collect emerging persuasion strategies from human-human conversations on the Amazon Mechanical Turk platform (AMT). We utilized ParlAI (Miller et al., 2017), a Python-based platform that enables dialogue AI research, to assist the data collection. We picked Save the Children2 as the charity to donate to, because it is one of the most well-known charity organizations around the world.

2 https://www.savethechildren.org/

Our task consisted of four parts: a pre-task survey, a persuasion dialogue, a donation confirmation, and a post-task survey. Before the conversation began, we asked the participants to complete a pre-task survey to assess their psychological profile variables.

ER: Hello, are you interested in protection of rights of children?  [Source-related inquiry]
EE: Yes, definitely. What do you have in mind?
ER: There is an organisation called Save the Children and donations are essential to ensure children's rights to health, education and safety.  [Credibility appeal]
EE: Is this the same group where people used to "sponsor" a child?
ER: Here is their website, https://www.savethechildren.org/.  [Credibility appeal]
ER: They help children all around the world.  [Credibility appeal]
ER: For instance, millions of Syrian children have grown up facing the daily threat of violence.  [Emotion appeal]
ER: In the first two months of 2018 alone, 1,000 children were reportedly killed or injured in intensifying violence.  [Emotion appeal]
EE: I can't imagine how terrible it must be for a child to grow up inside a war zone.
ER: As you mentioned, this organisation has different programs, and one of them is to "sponsor" child.  [Credibility appeal]
ER: You choose the location.  [Credibility appeal]
EE: Are you connected with the NGO yourself?
ER: No, but i want to donate some amount from this survey.  [Self-modeling]
ER: Research team will send money to this organisation.  [Donation information]
EE: That sounds great. Does it come from our reward/bonuses?
ER: Yes, the amount you want to donate is deducted from your reward.  [Donation information]
EE: What do you have in mind?
ER: I know that my small donation is not enough, so i am asking you to also donate some small percentage from reward.  [Proposition of donation]
EE: I am willing to match your donation.
ER: Well, if you go for full 0.30 i will have no moral right to donate less.  [Self-modeling]
EE: That is kind of you. My husband and I have a small NGO in Mindanao, Philippines, and it is amazing what a little bit of money can do to make things better.
ER: Agree, small amount of money can mean a lot for people in third world countries.  [Foot-in-the-door]
ER: So agreed? We donate full reward each??  [Donation confirmation]
EE: Yes, let's donate $0.30 each. That's a whole lot of rice and flour. Or a whole lot of bandages.

Table 1: An example persuasion dialogue. ER and EE refer to the persuader and the persuadee respectively.

There were four sub-questionnaires in our survey: the Big-Five personality traits (Goldberg, 1992) (25 questions), the Moral Foundations endorsement (Graham et al., 2011) (23 questions), the Schwartz Portrait Value (10 questions) (Cieciuch and Davidov, 2012), and the Decision-Making style (4 questions) (Hamilton and Mohammed, 2016). From the pre-task survey, we obtained a 23-dimension psychological feature vector where each element is the score of one characteristic, such as extrovert and agreeable.

Next, we randomly assigned the roles of persuader and persuadee to the two participants. The random assignment helped to eliminate the correlation between the persuader's persuasion strategies and the targeted persuadee's characteristics. In this task, the persuader needed to persuade the persuadee to donate part of his/her task earnings to the charity, and the persuader could also choose to donate. Please refer to Fig. 6 and 7 in the Appendix for the data collection interface. For persuaders, we provided tips on different persuasion strategies along with some example sentences. Persuadees only knew they would talk about a specific charity in the conversation. Participants were encouraged to continue the conversation until an agreement was reached. Each participant was required to complete at least 10 conversational turns, and multiple sentences in one turn were allowed. An example dialogue is shown in Table 1.

After completing the conversation, both the persuader and the persuadee were asked to input the intended donation amount privately through a text box. The maximum amount of donation was the task payment. After the conversation ended, all participants were required to finish a post-survey assessing their sociodemographic backgrounds such as age and income. We also included several questions about their engagement in this conversation.

Dataset Statistics
# Dialogues                        1,017
# Annotated Dialogues (ANNSET)       300
# Participants                     1,285
Avg. donation                      $0.35
Avg. turns per dialogue            10.43
Avg. words per utterance           19.36
Total unique tokens                8,141

Participants Statistics
Metric                     Persuader    Persuadee
Avg. words per utterance   22.96        15.65
Donated                    424 (42%)    545 (54%)
Not donated                593 (58%)    472 (46%)

Table 2: Statistics of PERSUASIONFORGOOD.

The data collection process lasted for two months, and the statistics of the collected dataset, named PERSUASIONFORGOOD, are presented in Table 2. We observed that on average persuaders chose to say longer utterances than persuadees (22.96 tokens compared to 15.65 tokens). During the data collection phase, we were glad to receive some positive comments from the workers. Some mentioned that it was one of the most meaningful tasks they had ever done on AMT, which shows an acknowledgement of our task design.
4 Annotation

Category                      Amount
Logical appeal                   325
Emotion appeal                   237
Credibility appeal               779
Foot-in-the-door                 134
Self-modeling                    150
Personal story                    91
Donation information             362
Source-related inquiry           167
Task-related inquiry             180
Personal-related inquiry         151
Non-strategy dialogue acts     1,737
Total                          4,313

Table 3: Statistics of persuasion strategies in the ANNSET.

After the data collection, we designed an annotation scheme to annotate the different persuasion strategies persuaders used. The content analysis method (Krippendorff, 2004) was employed to create the annotation scheme. Since our data came from typed conversations and the task was rather complicated, we observed that half of the conversation turns contained more than two sentences with different semantic meanings. So we chose to annotate each complete sentence instead of the whole conversation turn. We also designed a dialogue act annotation scheme for the persuadee's utterances, shown in Table 6 in the Appendix, to capture the persuadee's general conversation behaviors. We also recorded whether the persuadee agreed to donate, and the intended donation amount mentioned in the conversation.

We developed both the persuader's and the persuadee's annotation schemes using theories of persuasion and a preliminary examination of 10 random conversation samples. Four research assistants independently coded 10 conversations, discussed disagreements, and revised the scheme accordingly. The four coders conducted two iterations of coding exercises on five additional conversations and reached an inter-coder reliability of Krippendorff's alpha of above 0.70 for all categories. Once the scheme was finalized, each coder separately coded the rest of the conversations. We named the 300 annotated conversations the ANNSET.

Annotations for persuaders' utterances included diverse argument strategies and task-related non-persuasive dialogue acts. Specifically, we identified 10 persuasion strategy categories that can be divided into two types: 1) persuasive appeal and 2) persuasive inquiry. Non-persuasive dialogue acts included general ones such as greeting, and task-specific ones such as donation proposition and confirmation. Please refer to Table 7 in the Appendix for the persuader dialogue act scheme. The seven strategies below belong to persuasive appeal, which tries to change people's attitudes and decisions through different psychological mechanisms.

Logical appeal refers to the use of reasoning and evidence to convince others. For instance, a persuader can convince a persuadee that the donation will make a tangible positive impact for children using reasons and facts.

Emotion appeal refers to the elicitation of specific emotions to influence others. Specifically, we identified four emotional appeals: 1) telling stories to involve participants, 2) eliciting empathy, 3) eliciting anger, and 4) eliciting the feeling of guilt (Hibbert et al., 2007).

Credibility appeal refers to the use of credentials and citing organizational impacts to establish credibility and earn the persuadee's trust. The information usually comes from an objective source (e.g., the organization's website or other well-established websites).

Foot-in-the-door refers to the strategy of starting with small donation requests to facilitate compliance, followed by larger requests (Scott, 1977). For instance, a persuader first asks for a smaller donation and extends the request to a larger amount after the persuadee shows intention to donate.
Self-modeling refers to the strategy where the persuader first indicates his or her own intention to donate and chooses to act as a role model for the persuadee to follow.

Personal story refers to the strategy of using narrative exemplars to illustrate someone's donation experiences or the beneficiaries' positive outcomes, which can motivate others to follow the same actions.

Donation information refers to providing specific information about the donation task, such as the donation procedure, donation range, etc. By providing detailed action guidance, this strategy can enhance the persuadee's self-efficacy and facilitate behavior compliance.

The three strategies below belong to persuasive inquiry, which tries to facilitate more personalized persuasive appeals and to establish better interpersonal relationships by asking questions.

Source-related inquiry asks if the persuadee is aware of the organization (i.e., the source in our specific donation task).

Task-related inquiry asks about the persuadee's opinions and expectations related to the task, such as their interest in knowing more about the organization.

Personal-related inquiry asks about the persuadee's previous personal experiences relevant to charity donation.

The statistics of the ANNSET are shown in Table 3, where we list the number of times each persuasion strategy appears. Most of the following studies are conducted on the ANNSET. Example sentences for each persuasion strategy are shown in Table 4.

We first explored the distribution of different strategies across conversation turns. We present the number of different persuasion strategies at different conversation turn positions in Fig. 1 (for persuasive appeals) and Fig. 2 (for persuasive inquiries). As shown in Fig. 1, Credibility appeal occurred more at the beginning of the conversations. In contrast, Donation information occurred more in the latter part of the conversations. Logical appeal and Emotion appeal share a similar distribution and also frequently appeared in the middle of the conversations. The rest of the strategies, Personal story, Self-modeling and Foot-in-the-door, are spread out more evenly across the conversations, compared with the other strategies. For persuasive inquiries in Fig. 2, Source-related inquiry mainly appeared in the first three turns, and the other two kinds of inquiries have a similar distribution.

[Figure 1: Distributions of the seven persuasive appeals across turns.]
[Figure 2: Distributions of the three persuasive inquiries across turns.]

5 Donation Strategy Classification

[Figure 3: The hybrid RCNN model combines sentence embedding, context embedding and sentence-level features. "+" represents vector concatenation. The blue dotted box shows the sentence embedding part. The orange dotted box shows the context embedding part. The green dotted box shows the sentence-level features.]

In order to build a persuasive dialogue system, we need to first understand human persuasion patterns and differentiate various persuasion strategies. Therefore, we designed a classifier for the 10 persuasion strategies plus one additional "non-strategy" class for all the non-strategy dialogue acts in the ANNSET. We proposed a hybrid RCNN model which combined the following features for the classification: 1) sentence embedding, 2) context embedding, and 3) sentence-level features.
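To make the setup easier to picture before the detailed description that follows (Fig. 3 and the next paragraphs), here is a minimal PyTorch sketch of a classifier that combines an RCNN-style sentence encoding with concatenated sentence-level features. The dimensions, class names, and the omission of the context-conditioned initial hidden state are our own simplifying assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridRCNNClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=200,
                 extra_feat_dim=63, num_classes=11):
        # extra_feat_dim: e.g., 10-d turn position + 3-d sentiment + 50-d character feature.
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Recurrent part of the RCNN: a bidirectional LSTM over the sentence.
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Semantic transformation applied to [word embedding; BiLSTM states].
        self.semantic = nn.Linear(emb_dim + 2 * hidden_dim, hidden_dim)
        # Final layer over the max-pooled sentence vector plus the side features.
        self.out = nn.Linear(hidden_dim + extra_feat_dim, num_classes)

    def forward(self, token_ids, extra_feats):
        # token_ids: (batch, seq_len); extra_feats: (batch, extra_feat_dim)
        emb = self.embedding(token_ids)                                      # (B, T, E)
        states, _ = self.rnn(emb)                                            # (B, T, 2H)
        sent = torch.tanh(self.semantic(torch.cat([emb, states], dim=-1)))   # (B, T, H)
        pooled, _ = sent.max(dim=1)                                          # max-pooling over time
        logits = self.out(torch.cat([pooled, extra_feats], dim=-1))
        return F.log_softmax(logits, dim=-1)

# Example usage with random inputs:
model = HybridRCNNClassifier(vocab_size=8141)
tokens = torch.randint(0, 8141, (4, 20))
feats = torch.randn(4, 63)
print(model(tokens, feats).shape)  # torch.Size([4, 11])
```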
Persuasion Strategy        Example
Logical appeal             Your donation could possible go to this problem and help many young children. You should feel proud of the decision you have made today.
Emotion appeal             Millions of children in Syria grow up facing the daily threat of violence. This should make you mad and want to help.
Credibility appeal         And the charity is highly rated with many positive rewards. You can find reports associated with the financial information by visiting this link.
Foot-in-the-door           And sometimes even a small help is a lot, thinking many others will do the same. By people like you, making a a donation of just $1 a day, you can feed a child for a month.
Self-modeling              I will donate to Save the Children myself. I will match your donation.
Personal story             I like to give a little money to charity each month. My brother and I replaced birthday gifts with charity donations a few years ago.
Donation information       Your donation will be directly deducted from your task payment. The research team will collect all donations and send it to Save the Children.
Source-related inquiry     Have you heard of Save the Children? Are you familiar with the organization?
Task-related inquiry       Do you want to know the organization more? What do you think of the charity?
Personal-related inquiry   Do you have kids? Have you donated to charity before?

Table 4: Example sentences for the 10 persuasion strategies.

The model structure is shown in Fig. 3.

Sentence embedding used a recurrent convolutional neural network (RCNN), which combined a CNN and an RNN to extract both the global and local semantics, and the recurrent structure may reduce noise compared to a window-based neural network (Lai et al., 2015). We concatenated the word embedding and the hidden state of the LSTM as the sentence embedding $s_t$. Next, a linear semantic transformation was applied on $s_t$ to obtain the input to a max-pooling layer. Finally, the pooling layer was used to capture the effective information throughout the entire sentence.

Context embedding was composed of the persuadee's previous utterance. Considering the relatively long context, we used the last hidden state of the context LSTM as the initial hidden state of the RCNN. We also experimented with other methods to extract context and will detail them in Section 6.

We also designed three sentence-level features to capture meta information other than embeddings. We describe them below.

Turn position embedding. According to the previous analysis, different strategies have different distributions across conversation turns, so the turn position may help the strategy classification. We condensed the turn position information into a 10-dimension embedding vector.

Sentiment. We also extracted sentiment features for each sentence using VADER (Gilbert, 2014), a rule-based sentiment analyzer. It generates negative, positive, and neutral scores from zero to one. It is interesting to note that for Emotion appeal, the average negative sentiment score is 0.22, higher than the average positive sentiment score, 0.10. It seems negative sentiment words are used more frequently in Emotion appeal, because persuaders tend to describe sad facts to arouse empathy in Emotion appeal. In contrast, positive words are used more frequently in Logical appeal, because persuaders tend to describe more positive results from donation when using Logical appeal.
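For illustration, the 3-dimensional sentiment feature described above can be extracted as in this minimal sketch. We use the standalone vaderSentiment package here, which is an assumption about tooling rather than the authors' exact setup.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_feature(sentence):
    """Return the (negative, neutral, positive) scores for one sentence."""
    scores = analyzer.polarity_scores(sentence)
    return [scores["neg"], scores["neu"], scores["pos"]]

print(sentiment_feature("Millions of children face the daily threat of violence."))
print(sentiment_feature("Your donation will make a tangible positive impact for children."))
```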
Character embedding. For short text, character-level features can be helpful. Bothe et al. (2018) utilized character embeddings to improve dialogue act classification accuracy. Following Bothe et al. (2018), we chose the multiplicative LSTM (mLSTM) network pre-trained on 80 million Amazon product reviews to extract 4096-dimension character-level features (Radford et al., 2017).3 Given the output character embedding, we applied a linear transformation layer with output size 50 to obtain the final character embedding.

3 https://github.com/openai/generating-reviews-discovering-sentiment

6 Experiments

Because human-human typing conversations are complex, one sentence may belong to multiple strategy categories; out of concern for model simplicity, we chose to predict the most salient strategy for each sentence. Table 3 shows the dataset is highly imbalanced, so we used the macro F1 as the evaluation metric, in addition to accuracy. We conducted five-fold cross validation, and used the average scores across folds to compare the performance of different models. We set the initial learning rate to be 0.001 and applied exponential decay every 100 steps. The training batch size was 32 and all models were trained for 20 epochs. In addition, dropout (Srivastava et al., 2014) with a probability of 0.5 was applied to reduce over-fitting. We adopted the 300-dimension pre-trained FastText (Bojanowski et al., 2017) as word embedding. The RCNN model used a single-layer bidirectional LSTM with a hidden size of 200. We describe two baseline models below for comparison.

Self-attention BLSTM (BLSTM) only considers a single-layer bidirectional LSTM with a self-attention mechanism. After finetuning, we set the attention dimension to be 150.

Convolutional neural network (CNN) uses multiple convolution kernels to extract textual features. A softmax layer was applied in the end to generate the probability for each category. The hyperparameters in the original implementation (Kim, 2014) were used.

6.1 Experimental Results

Models                            Accuracy    Macro F1
Majority vote                     18.1%       5.21%
BLSTM + All features              73.4%       57.1%
CNN + All features                73.5%       58.0%
Hybrid RCNN with different features
Sentence only                     74.3%       59.0%
Sentence + Context CNN            72.5%       54.5%
Sentence + Context Mean           74.0%       58.5%
Sentence + Context RNN            74.4%       59.3%
Sentence + Context tf-idf         73.5%       57.6%
Sentence + Turn position          73.8%       59.4%
Sentence + Sentiment              73.6%       59.7%
Sentence + Character              74.5%       59.3%
All features                      74.8%       59.6%

Table 5: All the features include sentence embedding, context embedding, turn position embedding, sentiment and character embedding. The hybrid RCNN model with all the features performed the best on the ANNSET. Baseline models in the upper section also used all the features but did not perform as well as the hybrid RCNN.

As shown in Table 5, the hybrid RCNN with all the features (sentence embedding, context embedding, turn position embedding, sentiment and character embedding) reached the highest accuracy (74.8%) and F1 (59.6%). Baseline models in the upper section of Table 5 also used all the features but did not perform as well as the hybrid RCNN. We further performed an ablation study on the hybrid RCNN to discover different features' impact on the model's performance. We experimented with four different context embedding methods: 1) CNN, 2) the mean of word embeddings, 3) RNN (the output of the RNN was the RCNN's initial hidden state), and 4) tf-idf. We found the RNN achieved the best accuracy (74.4%) and F1 (59.3%). The experimental results suggest that incorporating context improved the model performance slightly but not significantly. This may be because in persuasion conversations, sentences are relatively long and contain complex semantic meanings, which makes it hard to encode the context information. This suggests we should develop better methods to extract important semantic meanings from the context in the future.
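The evaluation protocol above (five-fold cross validation with accuracy and macro F1) and the error analysis that follows can be reproduced with standard scikit-learn utilities. The following is a minimal sketch with a placeholder model and random data, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
from sklearn.linear_model import LogisticRegression  # stand-in for the hybrid RCNN

# Placeholder features and labels: 11 classes (10 strategies + "non-strategy").
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
y = rng.integers(0, 11, size=500)

accs, f1s = [], []
cm = np.zeros((11, 11), dtype=int)
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, average="macro"))  # macro F1 for imbalanced classes
    cm += confusion_matrix(y[test_idx], pred, labels=list(range(11)))

print(f"accuracy: {np.mean(accs):.3f}, macro F1: {np.mean(f1s):.3f}")
print(cm)  # aggregate confusion matrix for error analysis
```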
This may be because in persuasion conversations, sentences are relatively long and contain complex semantic meanings, which makes it hard to encode the context information. This suggests we develop better methods to extract important semantic meanings from the context in the future. Besides, all three sentence-level features improved the model’s F1. Although the sentiment feature only has three dimensions, it still increased the model’s F1 score. To further analyze the results, we plotted the confusion matrix for the best model in Fig. 5 in Appendix. We found the main error comes from the misclassification of Personal story. Sometimes sentences of Personal story were misclassified as Emotion appeal, because a subjective story can contain sentimental words, which may confuse the model. Besides, Task-related inquiry was hard to classify due to the diversity of inquiries. In addition, Foot-in-the-door strategy can be mistaken for Logical appeal, because when using Foot-inthe-door, people would sometimes make logical arguments about the small donation, such as describing the tangible effects of the small donation. For example, the sentence “Even five cents can help save children’s life.” also mentioned the benefits from the small donation. Besides, certain sentences of Logical appeal may contain emotional words, which led to the confusion between Logical appeal and Emotion appeal. In summary, due to the complex nature of human-human typing dialogues, one sentence may convey multiple meanings, which led to misclassifications. 7 Donation Outcome Analysis After identifying and categorizing the persuasion strategies, the next step is to analyze the factors that contribute to the final donation decision. Specifically, understanding the effects of the persuader’s strategies, the persuadee’s personal backgrounds, and their interactions on donation can greatly enhance the conversational agent’s capability to engage in personalized persuasion. Given the skewed distribution of intended donation amount from the persuadees, the outcome variable was dichotomized to indicate whether they donated or not (1 = making any amount of 5642 donation and 0 = none). Duplicate survey data from participants who did the task more than once were removed before the analysis, and for such duplicates, only data from the first completed task were retained. This pruning process resulted in an analytical sample of 252 unique persuadees in the ANNSET. All measured demographic variables and psychological profile variables were entered into logistic models. Results are presented in Section A.2 in Appendix. Our analysis consisted of three parts, including the effects of persuasion strategies on the donation outcome, the effects of persuadees’ psychological backgrounds on the donation outcome, and the interaction effects among all strategies and personal backgrounds. 7.1 Persuasion Strategies and Donation Overall, among the 10 persuasion strategies, Donation information showed a significant positive effect on the donation outcome (p < 0.05), as shown in Table 8 in Appendix. This confirms previous research which showed efficacy information increases persuasion. More specifically, because Donation information gives the persuadee step-by-step instructions on how to donate, which makes the donation procedure more accessible and as a result, increases the donation probability. 
An alternative explanation is that persuadees with a strong donation intention were more likely to ask about the donation procedure, and therefore Donation information appeared in most of the successful dialogues resulting in a donation. These compounding factors led us to further analyze the effects of psychological backgrounds on the donation outcome. 7.2 Psychological Backgrounds and Donation We collected data on demographics and four types of psychological characteristics, including moral foundation, decision style, Big-Five personality, and Schwartz Portrait Value, to analyze what types of people are more likely to donate and respond differently to different persuasive strategies. Results of the analysis on demographic characteristics in Table 11 show that the donation probability increases as the participant’s age increases (p < 0.05). This may be due to the fact that older participants may have more money and may have children themselves, and therefore are more willing to contribute to the children’s charity. The Big-Five personality analysis shows that more agreeable participants are more likely to donate (p < 0.001); the moral foundation analysis shows that participants who care for others more have a higher probability for donation (p < 0.001); the portrait value analysis shows that participants who endorse benevolence more are also more likely to donate (p < 0.05). These results suggest people who are more agreeable, caring about others, and endorsing benevolence are in general more likely to comply with the persuasive request (Hoover et al., 2018; Graham et al., 2013). On the decision style side, participants who are rational decision makers are more likely to donate (p < 0.05), whereas intuitive decision makers are less likely to donate. Another observation reveals participants’ inconsistent donation behaviors. We found that some participants promised to donate during the conversation but reduced the donation amount or didn’t donate at all in the end. In order to analyze these inconsistent behaviors, we selected the 236 persudees who agreed to donate in the ANNSET. Among these persuadees, 11% (22) individuals reduced the actual donation amount and 43% (88) individuals did not donate. Also, there are 3% (7) individuals donated more than they mentioned in the conversation. We fitted the Big-Five traits score and the inconsistent behavior with a logistic regression model. The results in Table 9 in Appendix show that people who are more agreeable are more likely to match their words with their donation behaviors. But since the dataset is relatively small, the result is not significant and we should caution against overinterpreting these effects until we obtain more annotated data. 7.3 Interaction Effects of Persuasion Strategies and Psychological Backgrounds To provide the necessary training data to build a personalized persuasion agent, we are interested in assessing not only the main effects of persuasion strategies employed by human persuaders, but more importantly, the presence of (or lack of) heterogeneity of such main effects on different individuals. In the case where the heterogeneous effects were absent, the task of building the persuasive agent would be simplified because it wouldn’t need to pay any attention to the targeted audience’s attribute. 
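Heterogeneous effects of this kind can be estimated by adding strategy-by-trait interaction terms to the logistic regression models used above. The sketch below is purely illustrative (it is not our analysis script); the file and column names — donated, source_inquiry, openness — are hypothetical stand-ins for the dichotomized outcome, a per-dialogue count of the Source-related inquiry strategy, and the persuadee's Big-Five openness score.

```python
# Illustrative sketch (not the analysis script used in the paper): testing
# whether a persuasion strategy's effect on donation varies with a
# persuadee trait, via an interaction term in a logistic regression.
# The file and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("annset_dialogue_level.csv")  # one row per persuadee/dialogue

# "donated" is the dichotomized outcome (1 = any donation, 0 = none);
# "source_inquiry * openness" expands to both main effects plus their
# interaction, whose coefficient captures the heterogeneous effect.
model = smf.logit("donated ~ source_inquiry * openness", data=df).fit()
print(model.summary())
```

Tables 10, 12 and 13 in the Appendix report interaction coefficients of this kind.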
Given the evidence shown in personalized persuasion, our expectation was to observe variations in the effects of persuasion strategies 5643 conditioned upon the persuadee’s personal traits, especially the four psychological profile variables identified in the previous analysis (i.e., agreeableness, endorsement of care and benevolence, and rational decision making style). Table 12, 13 and 10 present evidence for heterogeneity, conditioned upon the Big-Five personality traits, the moral foundation scores and the decision style. For example, although Sourcerelated inquiry does not show a significant main effect averaged across all participants, it showed a significant positive effect on the donation probability of participants who are more open (p < 0.05). This suggests when encountering more open persuadees, the agent can initiate Sourcerelated inquiry more. Besides, Personal-related inquiry significantly increases the donation probability of people who endorse freedom and care (p < 0.05), but is negatively associated with the donation probability of people who endorse fairness and authority. Given the relatively small dataset, we caution against overinterpreting these interaction effects until further confirmed after all the conversations in our dataset were content coded. With that said, the current set of evidence supports the presence of heterogeneity in the effects of persuasion strategies, which provide the basis for our next step to design a personalized persuasive system that aims to automatically identify and tailor persuasive messages to different individuals. 8 Ethical Considerations Persuasion is a double-edged sword and has been used for good or evil throughout the history. Given the fast development of automated dialogue systems, an ethical design principle must be in place throughout all stages of the development and evaluation. As the Roman rhetorician Quintilian defined a persuader as “a good man speaking well”, when developing persuasive agents, building an ethical and good intention that benefits the persuadees must come before designing and engineering the conversational capability to persuade. For instance, we choose to use the donation task as a first step to develop a persuasive dialogue system because the relatively simple task involves persuasion to benefit children. Other persuasive contexts can consider designing persuasive agents to help individuals fulfill their goals such as engaging in more exercises or sustaining environmentally friendly actions. Second, when deploying the persuasive agents in real conversations, it is important to keep the persuadees informed of the nature of the dialogue system so they are not deceived. By revealing the identity of the persuasive agent, the persuadees need to have options to communicate directly with the human team behind the system. Similarly, the purpose of the collection of persuadees personal information and analysis on their psychological traits must be clearly communicated to the persuadees and the use of their data requires active consent procedure. Lastly, the design needs to ensure that the generated responses are appropriate and nondiscriminative. This requires continuous monitoring of the conversations to make sure the conversations comply with both universal and local ethical standards. 9 Conclusions and Future Work A key challenge in persuasion study is the lack of high-quality data and the interdisciplinary research between computational linguistics and social science. 
We proposed a novel persuasion task, and collected a rich human-human persuasion dialogue dataset with comprehensive user psychological study and persuasion strategy annotation. We have also shown that a classifier with three types of features (sentence embedding, context embedding and sentence-level features) can reach good results on persuasion strategy prediction. However, much future work is still needed to further improve the performance of the classifier, such as including more annotations and more dialogue context into the classification. Moreover, we found evidence about the interaction effects between psychological backgrounds and persuasion strategies. For example, when facing participants who are more open, we can consider using the Source-related inquiry strategy. This project lays the groundwork for the next step, which is to design a useradaptive persuasive dialogue system that can effectively choose appropriate strategies based on user profile information to increase the persuasiveness of the conversational agent. Acknowledgments This work was supported by an Intel research gift. We thank Saurav Sahay, Eda Okur and Shachi Kumar for valuable discussions. 5644 References Elisabeth Andr´e, Thomas Rist, Susanne Van Mulken, Martin Klesen, and Stefan Baldes. 2000. The automated design of believable dialogues for animated presentation teams. Embodied conversational agents, pages 220–255. Timothy W Bickmore, Dina Utami, Robin Matsuyama, and Michael K Paasche-Orlow. 2016. Improving access to online health information with conversational agents: a randomized controlled experiment. Journal of medical Internet research, 18(1). Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Chandrakant Bothe, Sven Magg, Cornelius Weber, and Stefan Wermter. 2018. Conversational analysis using utterance-level attention-based bidirectional recurrent neural networks. Proc. Interspeech 2018, pages 996–1000. J. Cieciuch and E. Davidov. 2012. A comparison of the invariance properties of the pvq-40 and the pvq-21 to measure human values across german and polish samples. Survey Research Methods, 6(1):37–48. Arie Dijkstra. 2008. The psychology of tailoringingredients in computer-tailored persuasion. Social and personality psychology compass, 2(2):765–784. Brian J Fogg. 2002. Persuasive technology: using computers to change what we think and do. Ubiquity, 2002(December):5. CJ Hutto Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International Conference on Weblogs and Social Media (ICWSM-14). Available at (20/04/16) http://comp. social. gatech. edu/papers/icwsm14. vader. hutto. pdf. Lewis R. Goldberg. 1992. The development of markers for the big-five factor structure. Psychological Assessment, 4(1):26–42. Thomas F Gordon. 1993. The pleadings game. Artificial Intelligence and Law, 2(4):239–292. Arthur C Graesser, Haiying Li, and Carol Forsyth. 2014. Learning by communicating in natural language with conversational agents. Current Directions in Psychological Science, 23(5):374–380. J. Graham, B. A. Nosek, J. Haidt, R. Iyer, S. Koleva, and P. H. Ditto. 2011. Mapping the moral domain. Journal of Personality and Social Psychology, 101(2):366–385. Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and Peter H Ditto. 2013. 
Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology, volume 47, pages 55–130. Elsevier. Shih S. I. Hamilton, K. and S. Mohammed. 2016. The development and validation of the rational and intuitive decision styles scale. Journal of personality assessment, 98(5):523–535. He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2333–2343. Sally Hibbert, Andrew Smith, Andrea Davies, and Fiona Ireland. 2007. Guilt appeals: Persuasion knowledge and charitable giving. Psychology & Marketing, 24(8):723–742. Christopher Hidey, Elena Musi, Alyssa Hwang, Smaranda Muresan, and Kathy McKeown. 2017. Analyzing the semantic types of claims and premises in an online persuasive forum. In Proceedings of the 4th Workshop on Argument Mining, pages 11–21. Christopher Thomas Hidey and Kathleen McKeown. 2018. Persuasive influence detection: The role of argument sequencing. In Thirty-Second AAAI Conference on Artificial Intelligence. Joe Hoover, Kate Johnson, Reihane Boghrati, Jesse Graham, and Morteza Dehghani. 2018. Moral framing and charitable donation: Integrating exploratory social media analyses and confirmatory experimentation. Collabra: Psychology, 4(1). Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Matthew W Kreuter, Victor J Strecher, and Bernard Glassman. 1999. One size does not fit all: the case for tailoring print materials. Annals of behavioral medicine, 21(4):276. Klaus Krippendorff. 2004. Reliability in content analysis: Some common misconceptions and recommendations. Human communication research, 30(3):411–433. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of the TwentyNinth AAAI Conference on Artificial Intelligence, AAAI’15, pages 2267–2273. AAAI Press. Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443–2453. Ewa Luger and Abigail Sellen. 2016. Like having a really bad pa: the gulf between user expectation and experience of conversational agents. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 5286–5297. ACM. 5645 Stephanie Lukin, Pranav Anand, Marilyn Walker, and Steve Whittaker. 2017. Argument strength is in the eye of the beholder: Audience effects in persuasion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 742–753. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84. Rita Orji and Karyn Moffatt. 2018. Persuasive technology for health and wellness: State-of-the-art and emerging trends. Health informatics journal, 24(1):66–91. Richard E Petty and John T Cacioppo. 1986. The elaboration likelihood model of persuasion. In Communication and persuasion, pages 1–24. Springer. Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. 
Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444. Barbara K Rimer and Matthew W Kreuter. 2006. Advancing tailored health communication: A persuasion and message effects perspective. Journal of communication, 56:S184–S201. Carol A Scott. 1977. Modifying socially-conscious behavior: The foot-in-the-door technique. Journal of Consumer Research, 4(3):156–164. Weiyan Shi and Zhou Yu. 2018. Sentiment adaptive end-to-end dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1509–1519. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th international conference on world wide web, pages 613–624. International World Wide Web Conferences Steering Committee. Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, and Eduard Hovy. 2019. Lets make your request more persuasive: Modeling persuasive strategies via semi-supervised neural nets on crowdfunding platforms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3620–3630. Zhou Yu, Xinrui He, Alan W Black, and Alexander I Rudnicky. 2016a. User engagement study with virtual agents under different cultural contexts. In International Conference on Intelligent Virtual Agents, pages 364–368. Springer. Zhou Yu, Ziyu Xu, Alan W Black, and Alexander Rudnicky. 2016b. Strategy and policy learning for nontask-oriented conversational systems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 404– 412. Tangming Yuan, David Moore, and Alec Grierson. 2008. A human-computer dialogue system for educational debate: A computational dialectics approach. International Journal of Artificial Intelligence in Education, 18(1):3–26. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2204–2213. 5646 A Appendices A.1 Annotation Scheme Table 6 and 7 show the annotation schemes for selected persuadee acts and persuader acts respectively. For the full annotation scheme, please refer to https://gitlab.com/ucdavisnlp/ persuasionforgood. In the persuader’s annotation scheme, there is a series of acts related to persuasive proposition (proposition of donation, proposition of amount, proposition of confirmation, and proposition of more donation). In general, proposition is needed in persuasive requests because the persuader needs to clarify the suggested behavior changes. In our specific task, donation propositions have to happen in every conversation regardless of the donation outcome, and therefore is not influential on the final outcome. Further, its high frequency might dilute the results. Given these reasons, we didn’t consider propositions as a strategy in our specific context. 
Category Description Ask org info Ask questions about the charity Ask donation procedure Ask questions about how to donate Positive reaction Express opinions/thoughts that may lead to a donation Neutral reaction Express opinions/thoughts neutral towards a donation Negative reaction Express opinions/thoughts against a donation Agree donation Agree to donate Disagree donation Decline to donate Positive to inquiry Show positive responses to persuader’s inquiry Negative to inquiry Show negative responses to persuader’s inquiry Table 6: Descriptions of selected important persuadee dialogue acts. A.2 Donation Outcome Analysis Results We used ANNSET for the analysis except for Fig. 4 and Table 11. Estimated coefficients of the logistic regression models predicting the donation probability (1 = donation, 0 = no donation) with different variables are shown in Table 8, 9, 10, 11, Category Description Proposition of donation Propose donation Proposition of amount Ask the specific donation amount Proposition of confirmation Confirm donation Proposition of more donation Ask the persuadee to donate more Experience affirmation Comment on the persuadee’s statements Greeting Greet the persuadee Thank Thank the persuadee Table 7: Descriptions of selected important nonstrategy persuader dialogue acts. 12, and 13. Two-tailed tests are applied for statistical significance where *p < 0.05, **p < 0.01 and ***p < 0.001 . Persuasion Strategy Coefficient Logical appeal 0.06 Emotion appeal 0.03 Credibility appeal -0.11 Foot-in-the-door 0.06 Self-modeling -0.02 Personal story 0.36 Donation information 0.31* Source-related inquiry 0.11 Task-related inquiry -0.004 Personal-related inquiry 0.02 Table 8: Associations between the persuasion strategies and the donation (dichotomized). *p < 0.05. ANNSET was used for the analysis. Big-Five Coefficient extrovert 0.22 agreeable -0.34 conscientious -0.27 neurotic -0.11 open -0.19 Table 9: Associations between the Big-Five traits and the inconsistent donation behavior (dichotomized, 1 = inconsistent donation behavior, 0 = consistent behavior). *p < 0.05. ANNSET was used for the analysis. A.3 Classification Confusion Matrix Fig. 5 shows the classification confusion matrix. 5647 Figure 4: Big-Five traits score distribution for people who donated and didn’t donate. For all the 471 persuadees who did not donate in the PERSUASIONFORGOOD, we compared their personalities score with the other 546 persuadees who donated. The result shows that people who donated have a higher score on agreeableness and openness in the Big-Five analysis. Because strategy annotation was not involved in the psychological analysis, we used the whole dataset (1017 dialogues) for this analysis. Decision Style by Strategy Coefficient Rational by Logical appeal 0.01 Emotion appeal 0.08 Credibility appeal -0.01 Foot-in-the-door -0.25 Self-modeling 0.007 Personal story 0.26 Donation information 0.09 Source-related inquiry 0.33 Task-related inquiry -0.03 Personal-related inquiry -0.03 Intuitive by Logical appeal 0.04 Emotion appeal -0.07 Credibility appeal -0.02 Foot-in-the-door 0.37 Self-modeling 0.01 Personal story -0.27 Donation information -0.02 Source-related inquiry -0.43 Task-related inquiry 0.05 Personal-related inquiry 0.04 Table 10: Interaction effects between decision style and the donation (dichotomized). *p < 0.05 . Coefficients of the logistic regression predicting the donation probability (1 = donation, 0 = no donation) are shown here. ANNSET was used for the analysis. 
Predictor Coefficient Demographics Age 0.02* Sex: Male vs. Female -0.11 Sex: Other vs. Female -0.14 Race: White vs. Other 0.28 Less Than Four-Year College vs. 0.16 Four-Year College Postgraduate vs. Four-Year College -0.20 Marital: Unmarried vs. Married -0.21 Employment: Other vs. Employed 0.17 Income (continuous) -0.01 Religion: Catholic vs. Atheist 0.34 Religion: Other Religion vs. Atheist 0.21 Religion: Protestant vs. Atheist 0.15 Ideology: Liberal vs. Conservative 0.11 Ideology: Moderate vs. Conservative -0.04 Big-Five Personality Traits Extrovert -0.17 Agreeable 0.58*** Conscientious -0.15 Neurotic 0.09 Open -0.01 Moral Foundation Care/Harm 0.38*** Fairness/Cheating 0.08 Loyalty/Betrayal 0.09 Authority/Subversion 0.04 Purity/Degradation -0.02 Freedom/Suppression -0.13 Schwartz Portrait Value Conform -0.07 Tradition 0.06 Benevolence 0.18* Universalism 0.05 Self-Direction -0.06 Stimulation -0.08 Hedonism -0.10 Achievement -0.03 Power -0.05 Security 0.09 Decision-Making Style Rational 0.25* Intuitive -0.02 Table 11: Associations between the psychological profile and the donation (dichotomized). *p < 0.05, ***p < 0.001 . Estimated coefficients from a logistic regression predicting the donation probability ((1 = donation, 0 = no donation)) are shown here. Because strategy annotation is not involved in the demographical and psychological analysis, we used the whole dataset (1017 dialogues) for this analysis. A.4 Data Collection Interface Fig. 6 and 7 shows the data collection interface. 5648 Figure 5: Confusion matrix for the ten persuasion strategies and the non-strategy category on the ANNSET using the hybrid RCNN model with all the features. Figure 6: Screenshot of the persuader’s chat interface Figure 7: Screenshot of the persuadee’s chat interface 5649 Big-Five by Strategy Coefficient Extrovert by Logical appeal -0.06 Emotion appeal 0.15 Credibility appeal 0.07 Foot-in-the-door 0.21 Self-modeling -0.28 Personal story -0.18 Donation information -0.11 Source-related inquiry -0.02 Task-related inquiry -0.26 Personal-related inquiry 0.09 Agreeable by Logical appeal -0.11 Emotion appeal 0.25 Credibility appeal 0.25 Foot-in-the-door -0.02 Self-modeling -0.30 Personal story 0.77 Donation information 0.08 Source-related inquiry -0.84 Task-related inquiry -0.61 Personal-related inquiry -0.07 Neurotic by Logical appeal 0.12 Emotion appeal -0.14 Credibility appeal -0.03 Foot-in-the-door 0.05 Self-modeling -0.20 Personal story -0.22 Donation information 0.15 Source-related inquiry -0.22 Task-related inquiry 0.03 Personal-related inquiry 0.23 Open by Logical appeal 0.13 Emotion appeal 0.21 Credibility appeal -0.20 Foot-in-the-door -0.97 Self-modeling 0.38 Personal story -0.17 Donation information -0.33 Source-related inquiry 1.21* Task-related inquiry 0.63 Personal-related inquiry -0.21 Conscientious by Logical appeal -0.02 Emotion appeal -0.40 Credibility appeal -0.14 Foot-in-the-door 0.67 Self-modeling 0.34 Personal story -0.28 Donation information 0.33 Source-related inquiry -0.03 Task-related inquiry 0.21 Personal-related inquiry 0.06 Table 12: Interaction effects between Big-Five personality scores and the donation (dichotomized). *p < 0.05, **p < 0.01. Coefficients of the logistic regression predicting the donation probability (1 = donation, 0 = no donation) are shown here. ANNSET was used for the analysis. 
Moral Foundation by Strategy Coefficient Care by Logical appeal 0.05 Emotion appeal -0.19 Credibility appeal 0.21 Foot-in-the-door 0.03 Self-modeling 0.54 Personal story 0.12 Donation information -0.21 Source-related inquiry 0.14 Task-related inquiry 0.09 Personal-related inquiry 1.10* Fairness by Logical appeal 0.12 Emotion appeal 0.06 Credibility appeal -0.10 Foot-in-the-door -0.40 Self-modeling -0.09 Personal story -0.30 Donation information 0.06 Source-related inquiry 0.46 Task-related inquiry 0.41 Personal-related inquiry -1.15* Loyalty by Logical appeal -0.10 Emotion appeal -0.13 Credibility appeal 0.07 Foot-in-the-door 0.45 Self-modeling 0.04 Personal story -0.31 Donation information -0.25 Source-related inquiry 0.57 Task-related inquiry -0.26 Personal-related inquiry -0.04 Authority by Logical appeal 0.31 Emotion appeal -0.12 Credibility appeal 0.10 Foot-in-the-door -0.31 Self-modeling 0.08 Personal story -0.19 Donation information 0.03 Source-related inquiry -0.23 Task-related inquiry -0.14 Personal-related inquiry -0.86* Purity by Logical appeal -0.30 Emotion appeal 0.25 Credibility appeal -0.15 Foot-in-the-door -0.004 Self-modeling -0.21 Personal story 0.43 Donation information 0.30 Source-related inquiry -0.41 Task-related inquiry 0.31 Personal-related inquiry 0.44 Freedom by Logical appeal 0.10 Emotion appeal -0.05 Credibility appeal -0.16 Foot-in-the-door -0.50 Self-modeling -0.35 Personal story 0.32 Donation information 0.17 Source-related inquiry -0.13 Task-related inquiry -0.29 Personal-related inquiry 0.60* Table 13: Interaction effects between moral foundation and the donation (dichotomized). *p < 0.05.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5650–5669 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5650 Improving Neural Conversational Models with Entropy-Based Data Filtering Richard Csaky Department of Automation and Applied Informatics Budapest University of Technology and Economics [email protected] Patrik Purgai Department of Automation and Applied Informatics Budapest University of Technology and Economics [email protected] Gabor Recski Apollo.AI [email protected] Abstract Current neural network-based conversational models lack diversity and generate boring responses to open-ended utterances. Priors such as persona, emotion, or topic provide additional information to dialog models to aid response generation, but annotating a dataset with priors is expensive and such annotations are rarely available. While previous methods for improving the quality of open-domain response generation focused on either the underlying model or the training objective, we present a method of filtering dialog datasets by removing generic utterances from training data using a simple entropy-based approach that does not require human supervision. We conduct extensive experiments with different variations of our method, and compare dialog models across 17 evaluation metrics to show that training on datasets filtered this way results in better conversational quality as chatbots learn to output more diverse responses. 1 Introduction Current open-domain neural conversational models (NCM) are trained on pairs of source and target utterances in an effort to maximize the likelihood of each target given the source (Vinyals and Le, 2015). However, real-world conversations are much more complex, and a plethora of suitable targets (responses) can be adequate for a given input. We propose a data filtering approach where the “most open-ended” inputs - determined by calculating the entropy of the distribution over target utterances - are excluded from the training set. We show that dialog models can be improved using this simple unsupervised method which can be applied to any conversational dataset. We conduct several experiments to uncover how some of the current open-domain dialog evaluation methods behave with respect to overfitting and random data. Our software for filtering dialog data and automatic evaluation using 17 metrics is released on GitHub under an MIT license12. 2 Background Most open-domain NCMs are based on neural network architectures developed for machine translation (MT, Sutskever et al. (2014); Cho et al. (2014); Vaswani et al. (2017)). Conversational data differs from MT data in that targets to the same source may vary not only grammatically but also semantically (Wei et al., 2017; Tandon et al., 2017): consider plausible replies to the question What did you do today?. Dialog datasets also contain generic responses, e.g. yes, no and i don’t know, that appear in a large and diverse set of contexts (Mou et al., 2016; Wu et al., 2018). Following the approach of modeling conversation as a sequence to sequence (seq2seq, Sutskever et al. (2014)) transduction of single dialog turns, these issues can be referred to as the one-to-many, and many-to-one problem. seq2seq architectures are not suited to deal with the ambiguous nature of dialogs since they are inherently deterministic, meaning that once trained they cannot output different sequences to the same input. 
Consequently they tend to produce boring and generic responses 1https://github.com/ricsinaruto/ Seq2seqChatbots 2https://github.com/ricsinaruto/ dialog-eval 5651 (Li et al., 2016a; Wei et al., 2017; Shao et al., 2017; Zhang et al., 2018a; Wu et al., 2018). Previous approaches to the one-to-many, manyto-one problem can be grouped into three categories. One approach involves feeding extra information to the dialog model such as dialog history (Serban et al., 2016; Xing et al., 2018), categorical information like persona (Li et al., 2016b; Joshi et al., 2017; Zhang et al., 2018b), mood/emotion (Zhou et al., 2018; Li et al., 2017c), and topic (Xing et al., 2017; Liu et al., 2017; Baheti et al., 2018), or through knowledge-bases (Dinan et al., 2019; Ghazvininejad et al., 2018; Zhu et al., 2017; Moghe et al., 2018). A downside to these approaches is that they require annotated datasets which are not always available, or might be smaller in size. Augmenting the model itself, with e.g. latent variable sampling (Serban et al., 2017b; Zhao et al., 2017, 2018; Gu et al., 2019; Park et al., 2018; Shen et al., 2018b; Gao et al., 2019), or improving the decoding process (Shao et al., 2017; Kulikov et al., 2018; Mo et al., 2017; Wang et al., 2018) is also a popular approach. Sampling provides a way to generate more diverse responses, however such models are more likely to output ungrammatical or irrelevant responses. Finally, directly modifying the loss function (Li et al., 2016a), or training by reinforcement (Li et al., 2016d; Serban et al., 2017a; Li et al., 2016c; Lipton et al., 2018; Lewis et al., 2017) or adversarial learning (Li et al., 2017b; Ludwig, 2017; Olabiyi et al., 2018; Zhang et al., 2018c) has also been proposed, but this is still an open research problem, as it is far from trivial to construct objective functions that capture conversational goals better than cross-entropy loss. Improving dataset quality through filtering is frequently used in the machine learning literature (Sedoc et al., 2018; Ghazvininejad et al., 2018; Wojciechowski and Zakrzewicz, 2002) and data distillation methods in general are used both in machine translation and dialog systems (Axelrod et al., 2011; Li et al., 2017a). Xu et al. (2018b) introduced coherence for measuring the similarity between contexts and responses, and then filtered out pairs with low coherence. This improves datasets from a different aspect and could be combined with our present approach. However, natural conversations allow many adequate responses that are not similar to the context, thus it is not intuitively clear why filtering these should improve dialog models. Our experiments also further support that cross-entropy is not an adequate loss function (shown qualitatively by Csaky (2019) and Tandon et al. (2017)), by showing that many automatic metrics continue to improve after the validation loss reaches its minimum and starts increasing. However, we found that the metrics steadily improve even after we can be certain that the model overfitted (not just according to the loss function). Further research is required, to determine whether this indicates that overfitted model responses are truly better or if it’s a shortcoming of the metrics that they prefer such models. 
Currently, there is no well-defined automatic evaluation method (Liu et al., 2016), and while some metrics that correlate more with human judgment have been proposed recently (Li et al., 2017b; Lowe et al., 2017; Tao et al., 2018), they are harder to measure than simpler automatic metrics like perplexity or BLEU (Papineni et al., 2002). Furthermore, even human evaluation has its downsides, like high variance, high cost, and difficulty of replicating experimental setups (Zhang et al., 2018b; Tao et al., 2018). Some works resort to human evaluations (Krause et al., 2017; Fang et al., 2018), others use automatic metrics only (Olabiyi et al., 2018; Xing and Fern´andez, 2018; Kandasamy et al., 2017; Shalyminov et al., 2018; Xu et al., 2018b), and some use both (Shen et al., 2018a; Xu et al., 2018a; Baheti et al., 2018; Ram et al., 2018). While extensive human evaluation of the methods presented here is left for future work, we do conduct an especially thorough automatic evaluation both at the validation loss minimum and of overfitted models. We believe our experiments also shed light on the limitations of frequently used automatic metrics. 3 Methods 3.1 Intuition We approach the one-to-many, many-to-one problem from a relatively new perspective: instead of adding more complexity to NCMs, we reduce the complexity of the dataset by filtering out a fraction of utterance pairs that we assume are primarily responsible for generic/uninteresting responses. Of the 72 000 unique source utterances in the DailyDialog dataset (see Section 4.1 for details), 60 000 occur with a single target only. For these it seems straightforward to maximize the conditional probability P(T|S), S and T denoting a specific 5652 source and target utterance. However, in the case of sources that appear with multiple targets (oneto-many), models are forced to learn some “average” of observed responses (Wu et al., 2018). The entropy of response distribution of an utterance s is a natural measure of the amount of “confusion” introduced by s. For example, the context What did you do today? has high entropy, since it is paired with many different responses in the data, but What color is the sky? has low entropy since it’s observed with few responses. The many-toone scenario can be similarly formulated, where a diverse set of source utterances are observed with the same target (e.g. I don’t know has high entropy). While this may be a less prominent issue in training NCMs, we shall still experiment with excluding such generic targets, as dialog models tend to generate them frequently (see Section 2). 3.2 Clustering Methods and Filtering We refer with IDENTITY to the following entropy computation method. For each source utterance s in the dataset we calculate the entropy of the conditional distribution T|S = s, i.e. given a dataset D of source-target pairs, we define the target entropy of s as Htgt(s, D) = − X (s,ti)∈D p(ti|s) log2 p(ti|s) (1) Similarly, source entropy of a target utterance is Hsrc(t, D) = − X (si,t)∈D p(si|t) log2 p(si|t) (2) The probabilities are based on the observed relative frequency of utterance pairs in the data. For the purposes of this entropy-based filtering, we considered the possibility of also including some form of similarity measure between utterances that would allow us to detect whether a set of responses is truly diverse, as in the case of a question like What did you do today?, or diverse only on the surface, such as in the case of a question like How old are you? 
(since answers to the latter are semantically close). Measuring the entropy of semantic clusters as opposed to individual utterances may improve our method by reducing data sparsity. For example How are you? can appear in many forms, like How are you <name>? (see Section 4.2). While the individual forms have low entropy (because they have low frequency), we may decide to filter them all if together they form a high-entropy cluster. To this end we performed the filtering based not only on the set of all utterances, as in the case of IDENTITY, but also on clusters of utterances established by clustering their vector representations using the Mean Shift algorithm (Fukunaga and Hostetler, 1975). Source and target utterances are clustered separately. In the AVG-EMBEDDING setup the representation R(U) of utterance U is computed by taking the average word embedding weighted by the smooth inverse frequency R(U) = 1 |U| P w∈U E(w)·0.001 0.001+p(w) of words (Arora et al., 2017), where E(w) and p(w) are the embedding and the probability3 of word w respectively. We also experiment with SENT2VEC4, a more sophisticated sentence embedding approach, which can be thought of as an extension of word2vec to sentences (Pagliardini et al., 2018). The target entropy of a source cluster cs is Htgt(cs, C) = − X ci∈C p(ci|cs) log2 p(ci|cs) (3) where C is the set of all clusters and p(ci|cs) is the conditional probability of observing an utterance from cluster i after an utterance from cluster s. In the context of these methods, the entropy of an utterance will mean the entropy of its cluster. Note that IDENTITY is a special case of this cluster-based entropy computation method, since in IDENTITY a “cluster” is comprised of multiple examples of one unique utterance. Thus a target cluster’s entropy is computed similarly to Equation 2, but using clusters as in Equation 3. Entropy values obtained with each of these methods were used to filter dialog data in three ways. The SOURCE approach filters utterance pairs in which the source utterance has high entropy, TARGET filters those with a high entropy target, and finally the BOTH strategy filters all utterance pairs that are filtered by either SOURCE or TARGET. Some additional techniques did not yield meaningful improvement and were excluded from further evaluation. Clustering based on the Jaccard similarity of the bag of words of utterances only added noise to IDENTITY and resulted in much worse clusters than SENT2VEC. Clustering single occurrences of each unique utterance (as opposed to datasets with multiplicity) lead to less useful 3Based on the observed relative frequency in the data. 4https://github.com/epfml/sent2vec 5653 clusters than when clustering the whole dataset, probably because it resulted in less weight being given to the frequent utterances that we want to filter out. K-means proved inferior to the Mean Shift algorithm, which is a density-based clustering algorithm and seems to work better for clustering vectors of sentences. Filtering stop words before clustering did not improve the quality of clusters, probably because many utterances that we want to filter out contain a large number of stop words. 4 Data Analysis 4.1 Dataset With 90 000 utterances in 13 000 dialogs, DailyDialog (Li et al., 2017c), our primary dataset, is comparable in size with the Cornell MovieDialogs Corpus (Danescu-Niculescu-Mizil and Lee, 2011), but contains real-world conversations. Using the IDENTITY approach, about 87% of utterances have 0 entropy (i.e. 
they do not appear with more than one target), 5% have an entropy of 1 (e.g. they appear twice, with different targets), remaining values rise sharply to 7. This distribution is similar for source and target utterances. Figure 1: Entropy of source utterances (computed with IDENTITY) with respect to utterance frequency. Entropy is clearly proportional to utterance frequency (Figure 1), but has a wide range of values among utterances of equal frequency. For example, utterances with a frequency of 3 can have entropies ranging from 0 to log2 3 ≈1.58, the latter of which would be over our filtering threshold of 1 (see Section 5.1 for details on selecting thresholds). Since high-entropy utterances are relatively short, we also examined the relationship between entropy and utterance length (Figure 2). Given Figure 2: Entropy of source utterances (computed with IDENTITY) with respect to utterance length. the relationship between frequency and entropy, it comes as no surprise that longer utterances have lower entropy. 4.2 Clustering Results Compared to IDENTITY, both SENT2VEC and AVG-EMBEDDING produce a much lower number of clusters with 0 entropy, but also a huge cluster with more than 5000 elements (the size of the second largest cluster is below 500), which we didn’t filter since it clearly doesn’t group utterances with similar meaning. Generally, clusters were formed of similar utterances with the occasional exception of longer outlier utterances clustered together (instead of creating a separate cluster for each outlier), which can be attributed to the nature of the clustering algorithm. Overall, SENT2VEC appeared to produce better clusters than AVG-EMBEDDING, as reflected in the evaluation in Section 5. We experimented with different bandwidth values5 for the Mean Shift algorithm to produce clusters with as many elements as possible while also keeping the elements semantically similar. In an example cluster (Figure 3) we can see that the clustering was able to group together several variants of How are you?, in particular, those with different names. In general, we noticed that both in the case of IDENTITY and the clustering methods, utterances labeled with the highest entropy are indeed those generic sources and replies which we hoped to eliminate. See Appendix A.1 for a selection of high entropy utterances and clusters. 5Bandwidth is like a radius in the latent space of utterance representations (Fukunaga and Hostetler, 1975). 5654 Figure 3: A cluster produced by SENT2VEC. 5 Experiments In this section the model and parameter setups are presented along with 17 evaluation metrics. Limitations of these metrics are discussed and a comparison between our filtering methods is presented on DailyDialog (Section 5.3), and other datasets (Section 5.4). 5.1 Model and Parameters Dataset Type Th. SOURCE TARGET BOTH DailyDialog ID 1 5.64% 6.98% 12.2% AE 3.5 5.39% 7.06% 12.0% SC 3.5 6.53% 8.45% 14.3% Cornell ID 4 7.39% 14.1% Twitter ID 0.5 1.82% 9.96% Table 1: Entropy threshold (Th.) and amount of data filtered for all datasets in the 3 filtering scenarios. ID stands for IDENTITY, AE stands for AVG-EMBEDDING, and SC for SENT2VEC. We use transformer (Vaswani et al., 2017) as our dialog model, an encoder-decoder architecture relying solely on attention mechanisms (Bahdanau et al., 2015). transformer has already been applied to a plethora of natural language processing tasks, including dialog modeling (Dinan et al., 2019; Mazare et al., 2018; Devlin et al., 2018). 
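Before turning to the dialog model configuration, the IDENTITY filtering step itself (Eqs. 1 and 2, with the thresholds listed in Table 1) is compact enough to sketch directly. The snippet below is a minimal illustration rather than the released implementation (the full pipeline is available in the Seq2seqChatbots repository); it assumes a hypothetical list of (source, target) utterance pairs.

```python
# Minimal sketch (not the released code) of IDENTITY entropy-based filtering,
# following Eqs. (1)-(2): compute the entropy of the observed target
# distribution of every source utterance (and vice versa), then drop pairs
# whose entropy exceeds a threshold (e.g. 1.0 for DailyDialog, Table 1).
# `pairs` is a hypothetical list of (source, target) string tuples.
import math
from collections import Counter, defaultdict

def conditional_entropies(pairs):
    """Htgt(s): entropy of the targets observed after each source utterance s."""
    targets_per_source = defaultdict(list)
    for s, t in pairs:
        targets_per_source[s].append(t)
    entropy = {}
    for s, targets in targets_per_source.items():
        counts = Counter(targets)
        total = len(targets)
        entropy[s] = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy

def filter_pairs(pairs, threshold=1.0, mode="TARGET"):
    h_tgt = conditional_entropies(pairs)                       # entropy of each source (Eq. 1)
    h_src = conditional_entropies([(t, s) for s, t in pairs])  # entropy of each target (Eq. 2)
    kept = []
    for s, t in pairs:
        drop_source = h_tgt[s] > threshold   # SOURCE-side filtering
        drop_target = h_src[t] > threshold   # TARGET-side filtering
        if mode == "SOURCE" and drop_source:
            continue
        if mode == "TARGET" and drop_target:
            continue
        if mode == "BOTH" and (drop_source or drop_target):
            continue
        kept.append((s, t))
    return kept
```

The cluster-based variants (AVG-EMBEDDING and SENT2VEC) perform the same computation after mapping each utterance to its Mean Shift cluster, so an utterance inherits the entropy of its cluster.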
We used the official implementation6 (see Appendix A.2 for a report of hyperparameters). 6https://github.com/tensorflow/ tensor2tensor The vocabulary for DailyDialog was limited to the most frequent 16 384 words, and train / validation / test splits contained 71 517 / 9 027 / 9 318 examples, respectively. Clustering and Filtering. For AVGEMBEDDING fastText7 embeddings were used. The bandwidth of Mean Shift was set to 0.7 and 3.5 for AVG-EMBEDDING and SENT2VEC, which produced 40 135 and 23 616 clusters, respectively. Entropy thresholds and amount of data filtered can be found in Table 1. Generally we set the threshold so that filtered data amount is similar to the DailyDialog IDENTITY scenario. We also set a threshold for the maximum average utterance length (15 and 20 for AVG-EMBEDDING and SENT2VEC) in clusters that we considered for filtering, excluding outliers from the filtering process (see Section 4.2). Training and Decoding. Word embeddings of size 512 were randomly initialized, batch size was set to 2048 tokens, and we used the Adam optimizer (Kingma and Ba, 2014). We experimented with various beam sizes (Graves, 2012), but greedy decoding performed better according to all metrics, also observed previously (Asghar et al., 2017; Shao et al., 2017; Tandon et al., 2017). 5.2 Evaluation Metrics As mentioned in Section 2, automatic evaluation of chatbots is an open research problem. In order to get as complete a picture as possible, we use 17 metrics that have been applied to dialog models over the past years, briefly described below. These metrics assess different aspects of response quality, thus models should be compared on the whole set of metrics. Response length. Widely used as a simple engagement indicator (Serban et al., 2017b; Tandon et al., 2017; Baheti et al., 2018). Word and utterance entropy. The per-word entropy Hw = −1 |U| P w∈U log2 p(w) of responses is measured to determine their non-genericness (Serban et al., 2017b). Probabilities are calculated based on frequencies observed in the training data. We introduce the bigram version of this metric, to measure diversity at the bigram level as well. Utterance entropy is the product of Hw and |U|, also reported at the bigram level. 7https://fasttext.cc/ 5655 KL divergence. We use the KL divergence between model and ground truth (GT) response sets to measure how well a model can approximate the GT distribution of words. Specifically, we define distributions pgt and pm based on each set of responses and calculate the KL divergence Dkl = 1 |Ugt| P w∈Ugt log2 pgt(w) pm(w) for each GT response. The bigram version of this metric is also reported. Embedding metrics. Embedding average, extrema, and greedy are widely used metrics (Liu et al., 2016; Serban et al., 2017b; Zhang et al., 2018c). average measures the cosine similarity between the averages of word vectors of response and target utterances. extrema constructs a representation by taking the greatest absolute value for each dimension among the word vectors in the response and target utterances and measures the cosine similarity between them. Finally, greedy matches each response token to a target token (and vice versa) based on the cosine similarity between their embeddings and averages the total score across all words. For word embeddings and average word embedding representations, we used the same setup as in AVG-EMBEDDING. Coherence. We measure the cosine similarity between pairs of input and response (Xu et al., 2018b). 
Although a coherence value of 1 would indicate that input and response are the same, generally a higher value seems better as model responses tend to have lower coherence than targets. Distinct metrics. Distinct-1 and distinct-2 are widely used in the literature (Li et al., 2016a; Shen et al., 2018a; Xu et al., 2018b), measuring the ratio of unique unigrams/bigrams to the total number of unigrams/bigrams in a set of responses. However, they are very sensitive to the test data size, since increasing the number of examples in itself lowers their value. While the number of total words increases linearly, the number of unique words is limited by the vocabulary, and we found that the ratio decreases even in human data (see Appendix A.3 for details). It is therefore important to only compare distinct metrics computed on the same test data. Bleu. Measuring n-gram overlap between response and target is widely used in the machine learning and dialog literature (Shen et al., 2018a; Xu et al., 2018b). We report BLEU-1, BLUE2, BLEU-3, and BLEU-4 computed with the 4th smoothing algorithm described in Chen and Cherry (2014). Figure 4: Embedding metrics and coherence (on validation data) as a function of the training evolution of transformer on unfiltered data (DailyDialog). Figure 5: Training (bottom) and validation (top) loss with respect to training steps of transformer trained on unfiltered data (DailyDialog). Normally metrics are computed at the validation loss minimum of a model, however in the case of chatbot models loss may not be a good indicator of response quality (Section 2), thus we also looked at how our metrics progress during training. Figure 4 shows how coherence and the 3 embedding metrics saturate after about 80-100k steps, and never decrease (we ran the training for 300k steps, roughly 640 epochs). Most metrics show a similar trend of increasing until 100k steps, and then stagnating (see Appendix A.3 for more figures). In contrast, validation loss for the same training reaches its minimum after about 10-20k steps (Figure 5). This again suggests the inadequacy of 5656 |U| Hu w Hb w Hu u Hb u Du kl Db kl AVG EXT GRE COH d1 d2 b1 b2 b3 b4 TRF 8.6 7.30 12.2 63.6 93 .330 .85 .540 .497 .552 .538 .0290 .149 .142 .135 .130 .119 ID B 9.8 7.44 12.3 71.9 105 .315 .77 .559 .506 .555 .572 .0247 .138 .157 .151 .147 .136 T 10.9 7.67 12.7 83.2 121 .286 .72 .570 .507 .554 .584 .0266 .150 .161 .159 .156 .146 S 9.4 7.19 11.9 66.4 98 .462 1.08 .540 .495 .553 .538 .0262 .130 .139 .133 .128 .117 AE B 7.9 7.25 12.0 57.7 83 .447 1.05 .524 .486 .548 .524 .0283 .132 .128 .121 .115 .105 T 8.6 7.26 12.1 61.4 90 .425 1.12 .526 .492 .548 .529 .0236 .115 .133 .127 .121 .111 S 9.0 7.21 11.9 65.1 95 .496 1.16 .536 .490 .548 .538 .0232 .109 .134 .130 .126 .116 SC B 10.0 7.40 12.3 72.6 108 .383 .97 .544 .497 .549 .550 .0257 .131 .145 .142 .138 .128 T 11.2 7.49 12.4 82.2 122 .391 .97 .565 .500 .552 .572 .0250 .132 .153 .153 .152 .142 S 11.1 7.15 11.9 74.4 114 .534 1.27 .546 .501 .560 .544 .0213 .102 .144 .139 .135 .125 Table 2: Metrics computed at the minimum of the validation loss on the unfiltered test set (DailyDialog). TRF refers to transformer, ID to IDENTITY, AE to AVG-EMBEDDING, and SC to SENT2VEC. SOURCE-side, TARGET-side, and filtering BOTH sides are denoted by initials. Best results are highlighted with bold and best results separately for each entropy computing method are in italic (and those within a 95% confidence interval). 
|U| Hu w Hb w Hu u Hb u Du kl Db kl AVG EXT GRE COH d1 d2 b1 b2 b3 b4 TRF 11.5 7.98 13.4 95 142 .0360 .182 .655 .607 .640 .567 .0465 .297 .333 .333 .328 .315 ID B 13.1 8.08 13.6 107 162 .0473 .210 .668 .608 .638 .598 .0410 .275 .334 .340 .339 .328 T 12.2 8.04 13.6 100 150 .0335 .181 .665 .610 .640 .589 .0438 .289 .338 .341 .339 .328 S 12.3 7.99 13.5 101 153 .0406 .187 .662 .610 .641 .578 .0444 .286 .339 .342 .338 .326 AE B 11.9 7.98 13.5 98 147 .0395 .197 .649 .600 .628 .574 .0434 .286 .318 .321 .318 .306 T 12.5 7.99 13.5 102 155 .0436 .204 .656 .602 .634 .580 .0423 .279 .324 .327 .325 .313 S 12.1 7.93 13.4 99 148 .0368 .186 .658 .605 .636 .578 .0425 .278 .325 .328 .324 .311 SC B 12.8 8.07 13.6 105 159 .0461 .209 .655 .600 .629 .583 .0435 .282 .322 .328 .327 .316 T 13.0 8.06 13.6 107 162 .0477 .215 .657 .602 .632 .585 .0425 .279 .324 .330 .329 .318 S 12.1 7.96 13.4 100 150 .0353 .183 .657 .606 .638 .576 .0443 .286 .331 .333 .329 .317 RT 13.5 8.40 14.2 116 177 .0300 .151 .531 .452 .481 .530 .0577 .379 .090 .121 .130 .125 GT 14.1 8.39 13.9 122 165 0 0 1 1 1 .602 .0488 .362 1 1 1 1 Table 3: Metrics computed on the unfiltered test set (DailyDialog) after 150 epochs of training. TRF refers to transformer, ID to IDENTITY, AE to AVG-EMBEDDING, and SC to SENT2VEC. SOURCE-side, TARGET-side, and filtering BOTH sides are denoted by initials. Best results are highlighted with bold and best results separately for each entropy computing method are in italic (and those within a 95% confidence interval). GT refers to ground truth responses and RT refers to randomly selected responses from the training set. the loss function, but it also questions the validity of these metrics, as they seem to favor a model that overfitted the training data, which we can assume after 640 epochs. This could be due to the many identical inputs in train and test splits, because of the nature of dialog data. Most interesting are embedding metrics and BLEU scores (Section 5.3), since they show that even after overfitting responses do not get farther from targets. This is in line with other findings reporting that qualitatively responses are better after overfitting (Csaky, 2019; Tandon et al., 2017), however occasionally they also tend to be too specific and irrelevant. We leave it for future work to conduct human evaluation between non-overfitted and overfitted models to solidify these claims. In light of these issues, we compare trainings on the DailyDialog dataset both at the validation loss minimum and at an overfitted point (150 epochs). 5.3 DailyDialog Results We compute metrics on the unfiltered test set to show that filtered trainings perform better even on utterances that would have been filtered from the training data. TRF, the baseline transformer model trained on unfiltered data is compared to the 9 trainings on filtered data. In all tables the 17 metrics from left to right are: response length, unigram and bigram entropy, unigram and bigram utterance entropy, unigram and bigram KL divergence, embedding average, extrema and greedy, coherence, distinct-1 and distinct-2, and finally, BLEU-1, BLEU-2, BLEU-3 and BLEU-4 (see Section 5.2). Evaluating at the minimum validation loss (Ta5657 Input Response your starting salary is 2500 yuan a month and after you become a permanent employee it will be higher . BASE: i can tell you what is the best way to find a job . BASE-O: do you know what it is ? TARGET: i ’m very interested in the position . TARGET-O: that ’s very nice of you . i ’ll have to think about it . 
you can greatly improve your grades by always reflecting on what you can improve on . BASE: i want to be a great singer . BASE-O: i really appreciate it . thanks . TARGET: i think i am experienced in that position . i think i would like to make a good impression . TARGET-O: i always liked it . thank you . umm i think i would be hard to find something in that range in this area . you know the environment in the south district is the nicest . BASE: what about the kitchen ? BASE-O: what about the kitchen ? TARGET: what about the kitchen ? TARGET-O: what about the kitchen ? Table 4: Example inputs and responses from DailyDialog. BASE is trained on unfiltered data, and TARGET is the model trained on IDENTITY, TARGET filtered data. Models marked with O are evaluated at an overfitted point. ble 2) clearly shows that models trained on data filtered by IDENTITY and SENT2VEC are better than the baseline. IDENTITY performs best among the three methods, surpassing the baseline on all but the distinct-1 metric. SENT2VEC is a close second, getting higher values on fewer metrics than IDENTITY, but mostly improving on the baseline. Finally, AVG-EMBEDDING is inferior to the baseline, as it didn’t produce clusters as meaningful as SENT2VEC, and thus produced a lower quality training set. It seems like filtering high entropy targets (both in the case of IDENTITY and SENT2VEC) is more beneficial than filtering sources, and BOTH falls mostly in the middle as expected, since it combines the two filtering types. By removing example responses that are boring and generic from the dataset the model learns to improve response quality. Finding such utterances is useful for a number of purposes, but earlier it has been done mainly manually (Li et al., 2016d; Shen et al., 2017), whereas we provide an automatic, unsupervised method of detecting them based on entropy. Every value is higher after 150 epochs of training than at the validation loss minimum (Table 3). The most striking change is in the unigram KL divergence, which is now an order of magnitude lower. IDENTITY still performs best, falling behind the baseline on only the two distinct metrics. Interestingly this time BOTH filtering was better than the TARGET filtering. SENT2VEC still mostly improves the baseline and AVG-EMBEDDING now also performs better or at least as good as the baseline on most metrics. In some cases the best performing model gets quite close to the ground truth performance. On metrics that evaluate utterances without context (i.e. entropy, divergence, distinct), randomly selected responses achieve similar values as the ground truth, which is expected. However, on embedding metrics, coherence, and BLEU, random responses are significantly worse than those of any model evaluated. Computing the unigram and bigram KL divergence with a uniform distribution instead of the model yields a value of 4.35 and 1.87, respectively. Thus, all models learned a much better distribution, suggesting that this is indeed a useful metric. We believe the main reason that clustering methods perform worse than IDENTITY is that clustering adds some noise to the filtering process. Conducting a good clustering of sentence vectors is a hard task. This could be remedied by filtering only utterances instead of whole clusters, thus combining IDENTITY and the clustering methods. In this scenario, the entropy of individual utterances is computed based on the clustered data. 
The intuition behind this approach would be that the noise in the clusters based on which we compute entropy is less harmful than the noise in clusters which we consider for filtering. Finally, Table 4 shows responses from the baseline and the best performing model to 3 randomly selected inputs from the test set (which we made sure are not present in the training set) to show that training on filtered data does not degrade response quality. We show more example responses in Appendix A.3. 5.4 Cornell and Twitter Results To further solidify our claims we tested the two best performing variants of IDENTITY (BOTH and TARGET) on the Cornell Movie-Dialogs Corpus and on a subset of 220k examples from the Twit5658 |U| Hu w Hb w Hu u Hb u Du kl Db kl AVG EXT GRE COH d1 d2 b1 b2 b3 b4 TRF 8.1 6.55 10.4 54 75 2.29 3.40 .667 .451 .635 .671 4.7e-4 1.0e-3 .108 .120 .120 .112 ID B 7.4 6.67 10.8 50 69 1.96 2.91 .627 .455 .633 .637 2.1e-3 7.7e-3 .106 .113 .111 .103 T 12.0 6.44 10.4 74 106 2.53 3.79 .646 .456 .637 .651 9.8e-4 3.2e-3 .108 .123 .125 .118 RT 13.4 8.26 14.2 113 170 .03 .12 .623 .386 .601 .622 4.6e-2 3.2e-1 .079 .102 .109 .105 GT 13.1 8.18 13.8 110 149 0 0 1 1 1 .655 4.0e-2 3.1e-1 1 1 1 1 Table 5: Metrics on the unfiltered test set (Cornell) at the validation loss minimum. TRF refers to transformer, ID to IDENTITY. TARGET-side, and filtering BOTH sides are denoted by initials. Best results are highlighted with bold. GT refers to ground truth responses and RT refers to randomly selected responses from the training set. |U| Hu w Hb w Hu u Hb u Du kl Db kl AVG EXT GRE COH d1 d2 b1 b2 b3 b4 TRF 20.6 6.89 11.4 121 177 2.28 3.40 .643 .395 .591 .659 2.1e-3 6.2e-3 .0519 .0666 .0715 .0693 ID B 20.3 6.95 11.4 119 171 2.36 3.41 .657 .394 .595 .673 1.2e-3 3.4e-3 .0563 .0736 .0795 .0774 T 29.0 6.48 10.7 157 226 2.68 3.69 .644 .403 .602 .660 1.4e-3 4.6e-3 .0550 .0740 .0819 .0810 RT 14.0 9.81 15.9 136 171 .05 .19 .681 .334 .543 .695 8.5e-2 5.4e-1 .0444 .0751 .0852 .0840 GT 14.0 9.78 15.8 135 167 0 0 1 1 1 .734 8.1e-2 5.3e-1 1 1 1 1 Table 6: Metrics on the unfiltered test set (Twitter) at the validation loss minimum. TRF refers to transformer, ID to IDENTITY. TARGET-side, and filtering BOTH sides are denoted by initials. Best results are highlighted with bold. GT refers to ground truth responses and RT refers to randomly selected responses from the training set. ter corpus8. Entropy thresholds were selected to be similar to the DailyDialog experiments (Table 1). Evaluation results at the validation loss minimum on the Cornell corpus and the Twitter dataset are presented in Table 5 and Table 6, respectively. On these noisier datasets our simple IDENTITY method still managed to improve over the baseline, but the impact is not as pronounced and in contrast to DailyDialog, BOTH and TARGET perform best on nearly the same number of metrics. On these noisier datasets the clustering methods might work better, this is left for future work. Compared to DailyDialog there are some important distinctions that also underline that these datasets are of lesser quality. The COHERENCE metric is worse on the ground truth responses than on model responses (Table 5, and some embedding metrics and BLEU scores are better on randomly selected responses than on model responses (Table 6). 6 Conclusion We proposed a simple unsupervised entropy-based approach that can be applied to any conversational dataset for filtering generic sources/targets that cause “confusion” during the training of opendomain dialog models. 
We compared various setups in an extensive quantitative evaluation, and showed that the best approach is measuring the 8https://github.com/Marsan-Ma/chat_ corpus/ entropy of individual utterances and filtering pairs based on the entropy of target, but not source utterances. Some limitations of current automatic metrics and the loss function have also been shown, by examining their behavior on random data and with overfitting. In the future, we plan to explore several additional ideas. As mentioned in Section 5.3, we want to extend our clustering experiments combining the ideas behind IDENTITY and the clustering methods to make them more robust to noise. We wish to conduct clustering experiments on noisier datasets and try other sentence representations (Devlin et al., 2018). We also plan to combine our method with coherence-based filtering (Xu et al., 2018b). Furthermore, we intend to perform a direct quantitative evaluation of our method based on human evaluation. Finally, we believe our method is general enough that it could also be applied to datasets in other similar NLP tasks, such as machine translation, which could open another interesting line of future research. Acknowledgments We wish to thank Evelin ´Acs, P´eter Ih´asz, M´arton Makrai, Luca Szegletes, and all anonymous reviewers for their thoughtful feedback. Work partially supported by Project FIEK 16-1-2016-0007, financed by the FIEK 16 funding scheme of the Hungarian National Research, Development and Innovation Office (NKFIH). 5659 References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In International Conference on Learning Representations. Nabiha Asghar, Pascal Poupart, Xin Jiang, and Hang Li. 2017. Deep active learning for dialogue generation. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 78–83. Association for Computational Linguistics. Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355–362, Edinburgh, Scotland, UK. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR 2015). Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3970–3980. Association for Computational Linguistics. Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentencelevel BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362–367, Baltimore, Maryland, USA. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merri¨eenboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Richard Csaky. 2019. Deep learning based chatbot models. National Scientific Students’ Associations Conference. Https://tdk.bme.hu/VIK/DownloadPaper/asdad. 
Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 76–87. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations. Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A. Smith, and Mari Ostendorf. 2018. Sounding board: A user-centric and content-driven social chatbot. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 96–100. Association for Computational Linguistics. Keinosuke Fukunaga and Larry Hostetler. 1975. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on information theory, 21(1):32–40. Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Jointly optimizing diversity and relevance in neural response generation. arXiv preprint arXiv:1902.11205. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence. Alex Graves. 2012. Sequence transduction with recurrent neural networks. In Representation Learning Workshop, ICML 2012, Edinburgh, Scotland. Xiaodong Gu, Kyunghyun Cho, Jung-Woo Ha, and Sunghun Kim. 2019. DialogWAE: Multimodal response generation with conditional wasserstein auto-encoder. In International Conference on Learning Representations. Chaitanya K Joshi, Fei Mi, and Boi Faltings. 2017. Personalization in goal-oriented dialog. arXiv preprint arXiv:1706.07503. Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, and David Carter. 2017. Batch policy gradient methods for improving neural conversation models. arXiv preprint arXiv:1702.03334. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ben Krause, Marco Damonte, Mihai Dobre, Daniel Duma, Joachim Fainberg, Federico Fancellu, Emmanuel Kahembwe, Jianpeng Cheng, and Bonnie Webber. 2017. Edina: Building an open domain socialbot with self-dialogues. In 1st Proceedings of Alexa Prize (Alexa Prize 2017). 5660 Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv preprint arXiv:1811.00907. Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443–2453. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT 2016, pages 110–119. Association for Computational Linguistics. 
Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 994– 1003. Association for Computational Linguistics. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016c. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823. Jiwei Li, Will Monroe, and Dan Jurafsky. 2017a. Data distillation for controlling specificity in dialogue generation. arXiv preprint arXiv:1702.06703. Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016d. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192– 1202. Association for Computational Linguistics. Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017b. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169. Association for Computational Linguistics. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017c. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the The 8th International Joint Conference on Natural Language Processing, pages 986– 995. AFNLP. Zachary Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed, and Li Deng. 2018. Bbq-networks: Efficient exploration in deep reinforcement learning for task-oriented dialogue systems. In The ThirtySecond AAAI Conference on Artificial Intelligence (AAAI-18). Association for the Advancement of Artificial Intelligence. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Association for Computational Linguistics. Huiting Liu, Tao Lin, Hanfei Sun, Weijian Lin, ChihWei Chang, Teng Zhong, and Alexander Rudnicky. 2017. Rubystar: A non-task-oriented mixture model dialog system. In 1st Proceedings of Alexa Prize (Alexa Prize 2017). Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116–1126. Association for Computational Linguistics. Oswaldo Ludwig. 2017. End-to-end adversarial learning for generative conversational agents. arXiv preprint arXiv:1711.10122. Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779. Association for Computational Linguistics. Kaixiang Mo, Yu Zhang, Qiang Yang, and Pascale Fung. 2017. Fine grained knowledge transfer for personalized task-oriented dialogue systems. arXiv preprint arXiv:1711.04079. Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322–2332, Brussels, Belgium. Association for Computational Linguistics. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3349–3358. The COLING 2016 Organizing Committee. Oluwatobi Olabiyi, Alan Salimov, Anish Khazane, and Erik Mueller. 2018. Multi-turn dialogue response generation in an adversarial learning framework. arXiv preprint arXiv:1805.11752. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational 5661 Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 528–540. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia. Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In Proceedings of NAACL-HLT 2018, pages 1792–1801. Association for Computational Linguistics. Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational ai: The science behind the alexa prize. arXiv preprint arXiv:1801.03604. Joao Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2018. Chateval: A tool for the systematic evaluation of chatbots. In Proceedings of the Workshop on Intelligent Interactive Systems and Language Generation (2IS&NLG), pages 42–44. Association for Computational Linguistics. Iulian V Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, et al. 2017a. A deep reinforcement learning chatbot. arXiv preprint arXiv:1709.02349. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776–3784. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence. Igor Shalyminov, Ondˇrej Duˇsek, and Oliver Lemon. 2018. Neural response ranking for social conversation: A data-efficient approach. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 1–8. Association for Computational Linguistics. Yuanlong Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating high-quality and informative conversation responses with sequence-to-sequence models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2210–2219. Association for Computational Linguistics. 
Xiaoyu Shen, Hui Su, Wenjie Li, and Dietrich Klakow. 2018a. Nexus network: Connecting the preceding and the following in dialogue generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4316– 4327. Association for Computational Linguistics. Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 504–509. Association for Computational Linguistics. Xiaoyu Shen, Hui Su, Shuzi Niu, and Vera Demberg. 2018b. Improving variational encoder-decoders in dialogue generation. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). Association for the Advancement of Artificial Intelligence. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proc. NIPS, pages 3104–3112, Montreal, CA. Shubhangi Tandon, Ryan Bauer, et al. 2017. A dual encoder sequence to sequence model for open-domain dialogue modeling. arXiv preprint arXiv:1710.10520. Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. Ruber: An unsupervised method for automatic evaluation of open-domain dialog systems. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). Association for the Advancement of Artificial Intelligence. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. In Proceedings of the 31st International Conference on Machine Learning. Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in opendomain conversational systems with typed decoders. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2193–2203. Association for Computational Linguistics. Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, and Zhi Jin. 2017. Why do neural dialog systems generate short and meaningless 5662 replies? a comparison between dialog and translation. arXiv preprint arXiv:1712.02250. Marek Wojciechowski and Maciej Zakrzewicz. 2002. Dataset filtering techniques in constraint-based frequent pattern mining. In Pattern detection and discovery, pages 77–91. Springer. Bowen Wu, Nan Jiang, Zhifeng Gao, Suke Li, Wenge Rong, and Baoxun Wang. 2018. Why do neural response generation models prefer universal replies? arXiv preprint arXiv:1808.09187. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17). Association for the Advancement of Artificial Intelligence. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI18). Association for the Advancement of Artificial Intelligence. Yujie Xing and Raquel Fern´andez. 2018. Automatic evaluation of neural personality-based chatbots. 
In Proceedings of The 11th International Natural Language Generation Conference, pages 189–194. Association for Computational Linguistics. Can Xu, Wei Wu, and Yu Wu. 2018a. Towards explainable and controllable open domain dialogue generation with dialogue acts. arXiv preprint arXiv:1807.07255. Xinnuo Xu, Ondˇrej Duˇsek, Ioannis Konstas, and Verena Rieser. 2018b. Better conversations by modeling, filtering, and optimizing for coherence and diversity. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3981–3991. Association for Computational Linguistics. Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2018a. Reinforcing coherence for sequence to sequence model in dialogue generation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI18), pages 4567–4573. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018b. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2204–2213. Association for Computational Linguistics. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018c. Generating informative and diverse conversational responses via adversarial information maximization. In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018). Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1098–1107. Association for Computational Linguistics. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664. Association for Computational Linguistics. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI18). Association for the Advancement of Artificial Intelligence. Wenya Zhu, Kaixiang Mo, Yu Zhang, Zhangbin Zhu, Xuezheng Peng, and Qiang Yang. 2017. Flexible end-to-end dialogue system for knowledge grounded conversation. arXiv preprint arXiv:1709.04264. 5663 A Appendix A.1 High Entropy Utterances A.1.1 Top 20 high entropy utterances Utterance Frequency Entropy yes . 173 7.06 thank you . 141 6.57 why ? 104 6.33 here you are . 99 6.10 ok . 75 6.00 what do you mean ? 77 5.97 may i help you ? 72 5.96 can i help you ? 80 5.93 really ? 74 5.91 sure . 66 5.66 what can i do for you ? 51 5.63 why not ? 61 5.42 what ? 48 5.27 what happened ? 44 5.18 anything else ? 43 5.17 thank you very much . 72 5.14 what is it ? 41 5.06 i see . 42 5.05 no . 42 5.04 thanks . 50 5.03 Table 7: Top 20 source utterances (from DailyDialog) sorted by entropy. The entropy was calculated with IDENTITY. A.1.2 High Entropy Clusters Figure 6: A high entropy cluster from DailyDialog. Figure 7: A high entropy cluster from DailyDialog. Figure 8: A high entropy cluster from DailyDialog. 
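As an illustration of how the entropies in Table 7 might be obtained, the sketch below computes, for each source utterance, the entropy of the empirical distribution of its observed target responses. Grouping sources by exact string identity and using base-2 logarithms reflect our reading of the IDENTITY setup; this is an illustrative sketch rather than a reproduction of the original code.

```python
import math
from collections import Counter, defaultdict


def source_entropies(pairs):
    """Entropy (in bits) of the empirical target distribution for each source utterance.

    `pairs` is a list of (source, target) utterance strings; sources are grouped
    by exact string identity (our reading of the IDENTITY setup).
    """
    targets_per_source = defaultdict(Counter)
    for src, tgt in pairs:
        targets_per_source[src][tgt] += 1

    entropies = {}
    for src, counts in targets_per_source.items():
        total = sum(counts.values())
        entropies[src] = -sum((c / total) * math.log2(c / total)
                              for c in counts.values())
    return entropies


# Toy usage: generic sources such as "yes ." receive many different targets,
# so they end up with high entropy and would be filtered.
pairs = [("yes .", "great !"), ("yes .", "ok then ."), ("yes .", "see you ."),
         ("how old are you ?", "i am ten .")]
print(source_entropies(pairs))
```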
5664 A.2 Model Parameters Name Value Hidden size 512 Number of hidden layers 6 Label smoothing 0.1 Filter size 2048 Number of attention heads 8 Layer dropout 0.2 Relu dropout 0.1 Attention dropout 0.1 Learning rate 0.2 Learning rate warmup steps 8000 Table 8: Transformer hyperparameters. A.3 Evaluation Metrics and Examples Figure 9: Distinct-1 metric with respect to number of test examples (on DailyDialog). Model responses were evaluated on 9000 examples only, since the rest were training examples. Figure 10: Distinct-2 metric with respect to number of test examples (on DailyDialog). Model responses were evaluated on 9000 examples only, since the rest were training examples. 5665 Figure 11: Average length of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog). Figure 12: Word entropy of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog). 5666 Figure 13: Utterance entropy of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog). Figure 14: KL divergence of responses (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog). 5667 Figure 15: Distinct-1 and distinct-2 metrics (computed on the validation set) with respect to the number of training steps of the transformer trained on unfiltered data (DailyDialog). 5668 Input Unfiltered Unfiltered-Overfitted IDENTITY TARGET filtered IDENTITY TARGET filtered-overfitted some high speed chase scene i think . i ’m not sure . but many things are went wrong ! i ’m so sorry . enough ! how long is this bus ride ? about twelve hours . it ’s about fortyfive minutes to an hour . it ’s about 20 minutes . it ’s only about fortyfive minutes to an hour . is that yellow bus it ? no it ’s not far . it ’s a bit cold . yeah . there s a new one in america . no it ’s a yellow one . no it ’s not very expensive these days . hi mary ! what do you like to do in your spare time ? i like playing basketball . well i like to play the piano . i like playing basketball . i like playing chess . let me see . we can visit the country . we have n’t done that for such a long time . i hope so . yes there are many famous paintings on show . i hope it ’s the nicest city in the united states . i ’m glad to hear that . where are your parents ? thirty u . s . dollars . . . and it ’s only been used for six months . that ’s fine . i see . is it a long time to come to the station ? how many months did you get there ? what ’re two weeks ? five hundred dollars altogether . here you are . here you are . here is your bill . here you are . thank you byebye . look at this bridge . we will soon be crossing the thames . yes i agree . oh yeah . how far is it ? well there ’s a spanish teacher . so i ’ll often go back to the spanish museum . is the furniture crafted well ? yes it is . yes it ’s fully furnished . yes the furniture is right over there . yes it ’s a furnished eatin kitchen . the furniture and the icebox are all pretty outdated . where do they go ? they ’re on the bottom floor . i have no idea . they re in the cabinet next to the library . go to them department please . let me pay for it . ok . here you are . cash credit card or debit card ? ok . here is the money . cash credit card or debit card ? 
he never does and in fact he seldom communicates with him . he is a hard job . he can also learn a lot from the government . that ’s right . he has no children and his wife experience too . he is a very kind of man . yeah . he ’s a very easy teacher . so you must have had a busy and enjoyable day yesterday . yes i am . i really enjoyed it . yes . i am very sorry i can t make it . i will . thanks for coming . yes . let ’s go . yes . the reservation under mr . watson . ok . could you spell that please ? i ’m afraid there ’s no seat in the middle of the table . ok . would you please fill out the table ? there are so many to choose from . i did n’t know you could do all of this using a card . i see . what ’s the best part ? i ’m not sure . i ’ll go to the doctor and get my card right away . you mean you do n’t have to tell me the truth ? it was great ! how was your year ? it was great . i had a great time . my mother had a birthday party for me . it was great . i ’m glad i did n’t like it . it was ok . there was a lot of turbulence though . the worst thing was n’t too good ! tomorrow ? no i am afraid i won t be able to . i m going on business to birmingham . i think you are looking for a .<unk><unk>in the future . the best reason why are you want to see the results . well there is a problem with the exchange rate . it s the same size that you have in the u . s . you should go to the dentist . you might want to change your plans . with the cold and wet weather that we re supposed to have tomorrow the roads are going to be rather slippery . Table 9: Responses to randomly selected test inputs which we made sure were not in the training data (DailyDialog). Unfiltered is the model trained on unfiltered data, and IDENTITY TARGET is the model trained on IDENTITY, TARGET filtered data. Overfitted means that the respective model is evaluated at an overfitted point. 5669 Input Unfiltered Unfiltered-Overfitted IDENTITY TARGET filtered IDENTITY TARGET filtered-overfitted you ’re flattering me . no . no really . no but you can t . no really . i can only hear a little accent . three bags and a suitcase . this is my luggage to check . ok . i will take one . what ’s this ? essential oil ? wow ! this is a small bag . i ’ll put you in the bag . ok . is this your luggage properly labelled sir ? i like orange better . what ’s your favorite flavor ? how about drinks ? i ’ll tell you what you ’re doing . could i have some water ? i like to watch barbie sticking her tongue out . what ’s her figure ? oh she ’s a pretty woman . you are not supposed to be serious . oh shut up ! she ’s not what you are thinking about . if you give us your inquiry i shall go very carefully into the price and try my best to put you on the best of the terms . ok . i will take it . thank you very much . we will try our best to do our best to help you . i understand . i understand that you are right . ok i will . i look forward to it . ok . any time on friday will be ok with me . ok . see you then . i think so . i ’ll be expecting you then . great ! no wonder you can control your voice so well . you are a professional singer . i am not a singer . i would n’t mind a position that would be great . i am a singer . i think i can be a singer in the lead . i would n’t like to be a singer but i love to sing that dream . when can i get high speed internet installed ? you can make a list of the company and make sure that you do n’t have to use the machine . you ’ll need to make an appointment before the company starts . 
you can get up at the airport and you ’ll have to make an appointment . you ’ll have to make an appointment before we can come in and install it . i like those kinds of programmes too . they re very informative . i think that many people underrate the education value of tv . i agree . people often criticize tv for showing too much sex and violence . i agree . people often criticize tv for showing too much sex and violence . i think that teenagers should be often become addicted and violence . i agree . people often criticize tv for showing too much sex and violence . can you tell that i m excited ? i m afraid i m not a child . i m a fashion designer . next month i have a fashion show . i m not sure . what do you want to know ? i m sure that i ll cook the same for you . would you like to have a shampoo sir ? yes i ’d like a glass of iced tea . no thank you . no i ’d rather have it with me . yes with the lux dandruff shampoo . it helps a lot to get rid of my dandruff . what else would i prepare sir ? you can bring your beef with your chicken and a bottle of wine . oh good ! can you make some recommendation for me ? well the interviewers are not interested in the position . let me see . everything looks fine . he always says i am a hard worker with consciousness of responsibility sufficient education and enough experience . i think that maybe you are right . what s the matter with you ? how many years of experience will you earn ? would you like to work in the finance department ? what made you think that ? i like it a lot . i love that shirt on you . i think it ’s great . i think i ve learned a lot from different countries . i will care for it . i can tell you what bus to catch but you have to walk a little bit . i do n’t know . tell me how to get a ticket and a student bus station . i ’m sorry but i do n’t have to wait . you ’re going to have a car . Table 10: Responses to randomly selected test inputs which we made sure were not in the training data (DailyDialog). Unfiltered is the model trained on unfiltered data, and IDENTITY TARGET is the model trained on IDENTITY, TARGET filtered data. Overfitted means that the respective model is evaluated at an overfitted point.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5670–5681 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5670 Zero-shot Word Sense Disambiguation using Sense Definition Embeddings Sawan Kumar1 Sharmistha Jat1 Karan Saxena2,∗ Partha Talukdar1 1 Indian Institute of Science, Bangalore 2 Carnegie Mellon University, Pittsburgh {sawankumar,sharmisthaj,ppt}@iisc.ac.in, [email protected] Abstract Word Sense Disambiguation (WSD) is a longstanding but open problem in Natural Language Processing (NLP). WSD corpora are typically small in size, owing to an expensive annotation process. Current supervised WSD methods treat senses as discrete labels and also resort to predicting the Most-FrequentSense (MFS) for words unseen during training. This leads to poor performance on rare and unseen senses. To overcome this challenge, we propose Extended WSD Incorporating Sense Embeddings (EWISE), a supervised model to perform WSD by predicting over a continuous sense embedding space as opposed to a discrete label space. This allows EWISE to generalize over both seen and unseen senses, thus achieving generalized zeroshot learning. To obtain target sense embeddings, EWISE utilizes sense definitions. EWISE learns a novel sentence encoder for sense definitions by using WordNet relations and also ConvE, a recently proposed knowledge graph embedding method. We also compare EWISE against other sentence encoders pretrained on large corpora to generate definition embeddings. EWISE achieves new stateof-the-art WSD performance. 1 Introduction Word Sense Disambiguation (WSD) is an important task in Natural Language Processing (NLP) (Navigli, 2009). The task is to associate a word in text to its correct sense, where the set of possible senses for the word is assumed to be known a priori. Consider the noun “tie” and the following examples of its usage (Miller, 1995). • “he wore a vest and tie” • “their record was 3 wins, 6 losses and a tie” ∗Work done as a Research Assistant at Indian Institute of Science, Bangalore. It is clear that the implied sense of the word “tie” is very different in the two cases. The word is associated with “neckwear consisting of a long narrow piece of material” in the first example, and with “the finish of a contest in which the winner is undecided” in the second. The goal of WSD is to predict the right sense, given a word and its context. WSD has been shown to be useful for popular NLP tasks such as machine translation (Neale et al., 2016; Pu et al., 2018), information extraction (Zhong and Ng, 2012; Delli Bovi et al., 2015) and question answering (Ramakrishnan et al., 2003). The task of WSD can also be viewed as an intrinsic evaluation benchmark for the semantics learned by sentence comprehension models. WSD remains an open problem despite a long history of research. In this work, we study the all-words WSD task, where the goal is to disambiguate all ambiguous words in a corpus. Supervised (Zhong and Ng, 2010; Iacobacci et al., 2016; Melamud et al., 2016) and semisupervised approaches (Taghipour and Ng, 2015; Yuan et al., 2016) to WSD treat the target senses as discrete labels. Treating senses as discrete labels limits the generalization capability of these models for senses which occur infrequently in the training data. 
Further, for disambiguation of words not seen during training, these methods fall back on using a Most-Frequent-Sense (MFS) strategy, obtained from an external resource such as WordNet (Miller, 1995). To address these concerns, unsupervised knowledge-based (KB) approaches have been introduced, which rely solely on lexical resources (e.g., WordNet). KB methods include approaches based on context-definition overlap (Lesk, 1986; Basile et al., 2014), or on the structural properties of the lexical resource (Moro et al., 2014; Weissenborn et al., 2015; Chaplot et al., 2015; Chaplot and Salakhutdinov, 2018; 5671 Scores Sense Labels 0 1 0 Sense Embeddings necktie.n.01 link.n.02 cat.n.01 !"#$%&'#( necktie.n.01 fastener.n.02 neckwear.n.01 Tail Entities Head Entity Triplet Scores 0 1 Knowledge Graph Embedding Training Labels neckwear consisting of long a narrow piece Definition Encoder Max Pooling necktie.n.01 embedding neckwear.n.01, hypernym of, necktie.n.01 fastener.n.01, hypernym of, necktie.n.01   he wore a tie BiLSTM Selfattention Linear Sense Embedding Prediction Context Embedding Attentive Context Encoder Natural Language Text BiLSTM .. . . Figure 1: Overview of WSD in EWISE: A sequence of input tokens is encoded into context-aware embeddings using a BiLSTM and a self-attention layer (⊕indicates concatenation). The context-aware embeddings are then projected on to the space of sense embeddings. The score for each sense in the sense inventory is obtained using a dot product (indicated by ⊙) of the sense embedding with the projected word embedding. Please see Section 4.2 for details on the context encoding and training of the context encoder. The sense embedding for each sense in the inventory is generated using a BiLSTM-Max definition encoder. The encoder is learnt using the training signal present in WordNet Graph. An example signal with hypernym relation is depicted. Please see Section 4.3 for details on learning sense embeddings. Tripodi and Pelillo, 2017). While knowledge-based approaches offer a way to disambiguate rare and unseen words into potentially rare senses, supervised methods consistently outperform these methods in the general setting where inference is to be carried over both frequently occurring and rare words. Recently, Raganato et al. (2017b) posed WSD as a neural sequence labeling task, further improving the stateof-the-art. Yet, owing to an expensive annotation process (Lopez de Lacalle and Agirre, 2015), there is a scarcity of sense-annotated data thereby limiting the generalization ability of supervised methods. While there has been recent interest in incorporating definitions (glosses) to overcome the supervision bottleneck for WSD (Luo et al., 2018b,a), these methods are still limited due to their treatment of senses as discrete labels. Our hypothesis is that supervised methods can leverage lexical resources to improve on WSD for both observed and unobserved words and senses. We propose Extended WSD Incorporating Sense Embeddings (EWISE). Instead of learning a model to choose between discrete labels, EWISE learns a continuous space of sense embeddings as target. This enables generalized zero-shot learning, i.e., the ability to recognize instances of seen as well as unseen senses. EWISE utilizes sense definitions and additional information from lexical resources. We believe that natural language information manually encoded into definitions contains a rich source of information for representation learning of senses. 
To obtain definition embeddings, we propose a novel learning framework which leverages recently successful Knowledge Graph (KG) embedding methods (Bordes et al., 2013; Dettmers et al., 2018). We also compare against sentence encoders pretrained on large corpora. In summary, we make the following contributions in this work. • We propose EWISE, a principled framework to learn from a combination of senseannotated data, dictionary definitions and lexical knowledge bases. • We propose the use of sense embeddings instead of discrete labels as the targets for supervised WSD, enabling generalized zeroshot learning. • Through extensive evaluation, we demonstrate the effectiveness of EWISE over stateof-the-art baselines. EWISE source code is available at https:// github.com/malllabiisc/EWISE 2 Related Work Classical approaches to supervised WSD relied on extracting potentially relevant features and learning classifiers independently for each word 5672 (Zhong and Ng, 2010). Extensions to use distributional word representations have been proposed (Iacobacci et al., 2016). Semi-supervised approaches learn context representations from unlabeled data, followed by a nearest neighbour classification (Melamud et al., 2016) or label propagation (Yuan et al., 2016). Recently, Raganato et al. (2017b) introduced neural sequence models for joint disambiguation of words in a sentence. All of these methods rely on sense-annotated data and, optionally, additional unlabeled corpora. Lexical resources provide an important source of knowledge about words and their meanings. Recent work has shown that neural networks can extract semantic information from dictionary definitions (Bahdanau et al., 2017; Bosc and Vincent, 2018). In this work, we use dictionary definitions to get representations of word meanings. Dictionary definitions have been used for WSD, motivated by the classical method of Lesk (Lesk, 1986). The original as well as subsequent modifications of the algorithm (Banerjee and Pedersen, 2003), including using word embeddings (Basile et al., 2014), operate on the hypothesis that the definition of the correct sense has a high overlap with the context in which a word is used. These methods tend to rely on heuristics based on insights about natural language text and their definitions. More recently, gloss (definition)-augmented neural approaches have been proposed which integrate a module to score definition-context similarity (Luo et al., 2018b,a), and achieve state-ofthe-art results. We differ from these works in that we use the embeddings of definitions as the target space of a neural model, while learning in a supervised setup. Also, we don’t rely on any overlap heuristics, and use a single definition for a given sense as provided by WordNet. One approach for obtaining continuous representations for definitions is to use Universal Sentence Representations, which have been explored to allow transfer learning from large unlabeled as well as labeled data (Conneau et al., 2017; Cer et al., 2018). There has also been interest in learning deep contextualized word representations (Peters et al., 2018; Devlin et al., 2019). In this work, we evaluate definition embeddings obtained using these methods. Structural Knowledge available in lexical resources such as WordNet has motivated several unsupervised knowledge-based approaches for WSD. 
Graph based techniques have been used to match words to the most relevant sense (Navigli and Lapata, 2010; Sinha and Mihalcea, 2007; Agirre et al., 2014; Moro et al., 2014; Chaplot and Salakhutdinov, 2018). Our work differs from these methods in that we use structural knowledge to learn better representations of definitions, which are then used as targets for the WSD model. To learn a meaningful encoder for definitions we rely on knowledge graph embedding methods, where we represent an entity by the encoding of its definition. TransE (Bordes et al., 2013) models relations between entities as translations operating on the embeddings of the corresponding entities. ConvE (Dettmers et al., 2018), a more recent method, utilizes a multi-layer convolutional network, allowing it to learn more expressive features. Predicting in an embedding space is key to our methods, allowing generalized zero shot learning capability, as well as incorporating definitions and structural knowledge. The idea has been explored in the context of zero-shot learning (Xian et al., 2018). Tying the input and output embeddings of language models (Press and Wolf, 2017) resembles our approach. 3 Background In this work, we propose to use the training signal present in WordNet relations to learn encoders for definitions (Section 4.3.2). To learn from WordNet relations, we employ recently popular Knowledge Graph (KG) Embedding learning methods. In Section 3.1, we briefly introduce the framework for KG Embedding learning, and present the specific formulations for TransE and ConvE. 3.1 Knowledge Graph Embeddings Knowledge Graphs, a set of relations defined over a set of entities, provide an important field of research for representation learning. Methods for learning representations for both entities and relations have been explored (Wang et al., 2017) with an aim to represent graphical knowledge. Of particular significance is the task of link prediction, i.e., predicting missing links (edges) in the graph. A Knowledge Graph is typically comprised of a set K of N triples (h, l, t), where head h and tail t are entities, and l denotes a relation. TransE defines a scoring function for a triple (h, l, t), as the dissimilarity between the head em5673 bedding, translated by the relation embedding, and the tail embedding: dh,l,t = ||eh + el −et||2 2, (1) where, eh, et and el are parameters to be learnt. A margin based criterion, with margin γ, can then be formulated as: LT = X (h,l,t)∈K X (h′,l,t′)∈K′ [γ + dh,l,t −dh′,l,t′]+, (2) where K′ is a set of corrupted triples (Bordes et al., 2013), and [x]+ refers to the positive part of x. ConvE formulates the scoring function ψl(eh, et) for a triple (h, l, t) as: ψl(eh, et) = f(vec(f([eh; el] ∗w))W)et, (3) where eh and et are entity parameters, el is a relation parameter, x denotes a 2D reshaping of x, w denotes the filters for 2D convolution, vec(x) denotes the vectorization of x, W represents a linear transformation, and f denotes a rectified linear unit. For a given head entity h, the score ψl(eh, et) is computed with each entity in the graph as a tail. Probability estimates for the validity of a triple are obtained by applying a logistic sigmoid function to the scores: p = σ(ψl(eh, et)). (4) The model is then trained using a binary cross entropy loss: LC = −1 N X i (ti.log(pi) + (1 −ti).log(1 −pi)), (5) where ti is 1 when (h, l, t) ∈K and 0, otherwise. 
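To make the TransE objective in Equations 1 and 2 concrete, the sketch below scores triples against randomly initialized entity and relation embedding tables. The dimensionalities, the single corrupted tail per triple, and the NumPy implementation are illustrative choices of ours; in practice the embeddings would be trained by minimizing this loss with gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim, margin = 100, 18, 50, 1.0

# Entity and relation embeddings, randomly initialized for the sketch.
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))


def dissimilarity(h, l, t):
    """d(h, l, t) = ||e_h + e_l - e_t||_2^2  (Equation 1)."""
    diff = E[h] + R[l] - E[t]
    return float(diff @ diff)


def margin_loss(triples):
    """Margin-based ranking loss (Equation 2) with one corrupted tail per triple."""
    loss = 0.0
    for h, l, t in triples:
        t_neg = rng.integers(n_entities)          # naive negative sampling
        loss += max(0.0, margin + dissimilarity(h, l, t)
                    - dissimilarity(h, l, t_neg))
    return loss / len(triples)


print(margin_loss([(0, 1, 2), (3, 0, 4)]))
```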
4 EWISE EWISE is a general WSD framework for learning from sense-annotated data, dictionary definitions and lexical knowledge bases (Figure 1). EWISE addresses a key issue with existing supervised WSD systems. Existing systems use discrete sense labels as targets for WSD. This limits the generalization capability to only the set of annotated words in the corpus, with reliable learning only for the word-senses which occur with high relative frequency. In this work, we propose using continuous space embeddings of senses as targets for WSD, to overcome the aforementioned supervision bottleneck. To ensure generalized zero-shot learning capability, it is important that the target sense embeddings be obtained independent of the WSD task learning. We use definitions of senses available in WordNet to obtain sense embeddings. Using Dictionary Definitions to obtain the representation for a sense enables us to benefit from the semantic overlap between definitions of different senses, while also providing a natural way to handle unseen senses. In Section 4.1, we state the task of WSD formally. We then describe the components of EWISE in detail. Here, we briefly discuss the components: • Attentive Context Encoder: EWISE uses a Bi-directional LSTM (BiLSTM) encoder to convert the sequence of tokens in the input sentence into context-aware embeddings. Self-attention is used to enhance the context for disambiguating the current word, followed by a projection layer to produce sense embeddings for each input token. The architecture is detailed in Section 4.2. • Definition Encoder: In EWISE, definition embeddings are learnt independent of the WSD task. In Section 4.3.1, we detail the usage of pretrained sentence encoders as baseline models for encoding definitions. In Section 4.3.2, we detail our proposed method to learn an encoder for definitions using structural knowledge in WordNet. 4.1 The WSD Task WSD is a classification problem for a word w (e.g., bank) in a context c, with class labels being the word senses (e.g., financial institution). We consider the all-words WSD task, where all content words - nouns, verbs, adjectives, adverbs need to be disambiguated (Raganato et al., 2017a). The set of all possible senses for a word is given by a predefined sense inventory, such as WordNet. In this work, we use sense candidates as provided in the evaluation framework of (Raganato et al., 2017a) which has been created using WordNet. More precisely, given a variable-length sequence of words x =< x1 . . . xT >, we need to predict a sequence of word senses y =< 5674 y1 . . . yT >. Output word sense yi comes from a predefined sense inventory S. During inference, the set of candidate senses Sw for input word w is assumed to be known a priori. 4.2 Attentive Context Encoder In this section, we detail how EWISE encodes the context of a word to be disambiguated using BiLSTMs (Hochreiter and Schmidhuber, 1997). BiLSTMs have been shown to be successful for generating effective context dependent representations for words. Following Raganato et al. (2017b), we use a BiLSTM with a self-attention layer to obtain sense-aware context specific representations of words. The sense embedding for a word is obtained through a projection of the context embedding. We then train the model with independently trained sense embeddings (Section 4.3) as target embeddings. Our model architecture is shown in Figure 1. 
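Before the formal description that follows, the sketch below gives a simplified rendering of such a context encoder: a BiLSTM, scaled dot-product self-attention, and a linear projection onto the sense embedding space. The hyperparameters are placeholders, and the bias term and the restriction to candidate senses used at inference are omitted; the exact formulation is given in Equations 6–11 below.

```python
import torch
import torch.nn as nn


class ContextEncoder(nn.Module):
    """BiLSTM + scaled dot-product self-attention + projection to sense space.

    A sketch with placeholder sizes; the bias term over senses is omitted.
    """

    def __init__(self, vocab_size, emb_dim, hidden_dim, sense_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.q = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.k = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.v = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.proj = nn.Linear(4 * hidden_dim, sense_dim)

    def forward(self, tokens):                       # tokens: (batch, T)
        u, _ = self.bilstm(self.embed(tokens))       # (batch, T, 2*hidden)
        q, k, v = self.q(u), self.k(u), self.v(u)
        scores = q @ k.transpose(1, 2) / k.size(-1) ** 0.5
        c = torch.softmax(scores, dim=-1) @ v        # attended context
        return self.proj(torch.cat([u, c], dim=-1))  # (batch, T, sense_dim)


# Toy usage: sense scores are dot products with (independently learnt) sense embeddings.
enc = ContextEncoder(vocab_size=1000, emb_dim=64, hidden_dim=64, sense_dim=128)
sense_embeddings = torch.randn(50, 128)              # stand-in for definition embeddings
v = enc(torch.randint(0, 1000, (1, 7)))               # one sentence of 7 tokens
scores = v @ sense_embeddings.t()                     # (1, 7, 50)
```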
The model processes a sequence of tokens xi, i ∈ [T] in a given sentence input by first representing each token with a real-valued vector representation, ei, via an embedding matrix We ∈R|V |∗d, where V is the vocabulary size and d is the size of the embeddings. The vector representations are then input to a 2 layer bidirectional LSTM encoder. Each word is represented by concatenating the forward hi f and backward hi b hidden state vectors of the second LSTM layer. ui = [hi f, hi b] (6) Following Vaswani et al. (2017), we use a scaled dot-product attention mechanism to get context information at each timestep t. Attention queries, keys and values are obtained using projection matrices Wq, Wk and Wv respectively, while the size of the projected key (dk) is used to scale the dotproduct between queries and values. ei t = dot(Wqui, Wkut); t ∈[1, T] ai = softmax( ei √dk ) ci = X t∈[1,T] ai t.Wvut ri = [ui, ci] (7) A projection layer (fully connected linear layer) maps this context-aware word representation ri to vi in the space of sense embeddings. vi = Wlri (8) During training, we multiply this with the sense embeddings of all senses in the inventory, to obtain a score for each output sense. A bias term is added to this score, where the bias is obtained as the dot product between the sense embedding and a learned parameter b. A softmax layer then generates probability estimates for each output sense. ˆpi j = softmax(dot(vi, ρj) + dot(b, ρj)); ρj ∈S (9) The cross entropy loss for annotated word xi is given by: Li wsd = − X j (zi j log(ˆpi j)), (10) where zi is the one-hot representation of the target sense yi in the sense inventory S. The network parameters are learnt by minimizing the average cross entropy loss over all annotated words in a batch. During inference, for each word xi, we select the candidate sense with the highest score. ˆyi = argmaxj(dot(vi, ρj) + dot(b, ρj)); ρj ∈Sxi (11) 4.3 Definition Encoder In this section, we detail how target sense embeddings are obtained in EWISE. 4.3.1 Pretrained Sentence Encoders We use pretrained sentence representation models, InferSent (Conneau et al., 2017) and USE (Cer et al., 2018) to encode definitions, producing sense embeddings of sizes 4096 and 512, respectively. We also experiment with deep context encoders, ELMO (Peters et al., 2018) and BERT (Devlin et al., 2019) to obtain embeddings for definitions. In each case, we encode a definition using the available pretrained models, producing a context embedding for each word in the definition. A fixed length representation is then obtained by averaging over the context embeddings of the words in the definition, from the final layer. This produces sense embeddings of sizes 1024 with both ELMO and BERT. 4.3.2 Knowledge Graph Embedding WordNet contains a knowledge graph, where the entities of the graph are senses (synsets), and re5675 Dev Test Datasets Concatenation of All Test Datasets SE7 SE2 SE3 SE13 SE15 Nouns Verbs Adj. Adv. 
ALL WordNet S1 55.2 66.8 66.2 63.0 67.8 67.6 50.3 74.3 80.9 65.2 Non-neural baselines MFS (Using training data) 54.5 65.6 66.0 63.8 67.1 67.7 49.8 73.1 80.5 65.5 IMS+emb (2016)ˆ 62.6 72.2 70.4 65.9 71.5 71.9 56.6 75.9 84.7 70.1 Leskext+emb (2014)* 56.7 63.0 63.7 66.2 64.6 70.0 51.1 51.7 80.6 64.2 UKBgloss+w2w (2014)* 42.9 63.5 55.4 62.9 63.3 64.9 41.4 69.5 69.7 61.1 Babelfy (2014) 51.6 67.0 63.5 66.4 70.3 68.9 50.7 73.2 79.8 66.4 Context2Vec (2016) ˆ 61.3 71.8 69.1 65.6 71.9 71.2 57.4 75.2 82.7 69.6 WSD-TM (2018) 55.6 69.0 66.9 65.3 69.6 69.7 51.2 76.0 80.9 66.9 Neural baselines BiLSTM+att+LEX (2017b) 63.7 72.0 69.4 66.4 70.8 71.6 57.1 75.6 83.2 69.7 BiLSTM+att+LEX+POS (2017b) 64.8 72.0 69.1 66.9 71.5 71.5 57.5 75.0 83.8 69.9 GASext (Linear) (2018b)* – 72.4 70.1 67.1 72.1 71.9 58.1 76.4 84.7 70.4 GASext (Concatenation) (2018b)* – 72.2 70.5 67.2 72.6 72.2 57.7 76.6 85.0 70.6 CANs (2018a)* – 72.2 70.2 69.1 72.2 73.5 56.5 76.6 83.3 70.9 HCAN (2018a)* – 72.8 70.3 68.5 72.8 72.7 58.2 77.4 84.1 71.1 EWISE (ConvE)* 67.3 73.8 71.1 69.4 74.5 74.0 60.2 78.0 82.1 71.8 Table 1: Comparison of F1-scores for fine-grained all-words WSD on Senseval and SemEval datasets in the framework of Raganato et al. (2017a). The F1 scores on different POS tags (Nouns, Verbs, Adjectives, and Adverbs) are also reported. WordNet S1 and MFS provide most-frequent-sense baselines. * represents models which access definitions, while ˆ indicates models which don’t access any external knowledge. EWISE (ConvE) is the proposed approach, where the ConvE method was used to generate the definition embeddings. Both the non-neural and neural supervised baselines presented here rely on a back-off mechanism, using WordNet S1 for words unseen during training. For each dataset, the highest score among existing systems with a statistically significant difference (unpaired t-test, p < 0.05) from EWISE is underlined. EWISE, which is capable of generalizing to unseen words and senses, doesn’t use any back-off. EWISE consistently outperforms all supervised and knowledge-based systems, except for adverbs. Please see Section 6.1 for details. While the overall performance of EWISE is comparable to the neural baselines in terms of statistical significance, the value of EWISE lies in its ability to handle unseen and rare words and senses (See Section 6.3). Further, among the models compared, EWISE is the only system which is statistically significant (unpaired t-test, p < 0.01) with respect to the WordNet S1 baseline across all test datasets. lations are defined over these senses. Example relations include hypernym and part of. With each entity (sense), there is an associated text definition. We propose to use WordNet relations as the training signal for learning definition encoders. The training set K is comprised of triples (h, l, t), where head h and tail t are senses, and l is a relation. Also, gx denotes the definition of entity x, as provided by WordNet. The dataset contains 18 WordNet relations (Bordes et al., 2013). The goal is to learn a sentence encoder for definitions and we select the BiLSTM-Max encoder architecture due to its recent success in sentence representation (Conneau et al., 2017). The words in the definition are encoded by a 2-layer BiLSTM to obtain context-aware embeddings for each word. A fixed length representation is then obtained by Max Pooling, i.e., selecting the maximum over each dimension. We denote this definition encoder by q(.). 
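A minimal sketch of this definition encoder q(·) is given below, together with the cosine-based dissimilarity it is plugged into by the modified TransE objective described next. Again, this is an illustrative re-implementation under assumed sizes, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DefinitionEncoder(nn.Module):
    """Sketch of q(.): a 2-layer BiLSTM over the gloss tokens followed by max pooling (BiLSTM-Max)."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)        # e.g. initialized from GloVe
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                              bidirectional=True, batch_first=True)

    def forward(self, gloss_ids):
        # gloss_ids: (batch, T)  ->  fixed-length definition embeddings: (batch, 2*hidden_dim)
        h, _ = self.bilstm(self.embed(gloss_ids))             # context-aware gloss word embeddings
        return h.max(dim=1).values                            # maximum over each dimension

def transe_dissimilarity(q_head, rel_emb, q_tail):
    """d(h, l, t) = -cosine(q(h) + e_l, q(t)), i.e. the modified TransE dissimilarity given next."""
    return -F.cosine_similarity(q_head + rel_emb, q_tail, dim=-1)
```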
TransE We modify the dissimilarity measure in TransE (Equation 1) to represent both head (h) and tail (t) entities by an encoding of their definitions. dh,l,t = −cosine(q(h) + el, q(t)) (12) The parameters of the BiLSTM model q and the relation embeddings el are then learnt by minimizing the loss function in Equation 2. ConvE We modify the scoring function of ConvE (Equation 3), to represent a head entity by the encoding of its definition. ψl(eh, et) = f(vec(f([q(h); el] ∗w))W)et (13) Note that we represent only the head entity with an encoding of its definition while the tail entity t is still represented by parameter et. This helps restrict the size of the computation graph. The parameters of the model q, el and et are then learnt by minimizing the binary cross-entropy loss function in Equation 5. 5 Experimental Setup In this section, we provide details on the training and evaluation datasets. The training details are 5676 captured in Appendix A. 5.1 Data We use the English all-words WSD benchmarks for evaluating our models: 1. SensEval-2 (Palmer et al., 2001) 2. SensEval-3 (Snyder and Palmer, 2004) 3. SemEval-2013 (Navigli et al., 2013) 4. SemEval-2015 (Moro and Navigli, 2015) 5. ALL (Raganato et al., 2017a) Following (Raganato et al., 2017b), we use SemEval-2007 (Pradhan et al., 2007) as our development set. We use SemCor 3.0 (Miller et al., 1993) as our training set. To enable a fair comparison, we used the dataset versions provided by (Raganato et al., 2017a). For our experiments, we used the definitions available in WordNet 3.0. 6 Evaluation In this section, we aim to answer the following questions: • Q1: How does EWISE compare to stateof-the-art methods on standardized test sets? (Section 6.1) • Q2: What is the effect of ablating key components from EWISE? (Section 6.2) • Q3: Does EWISE generalize to rare and unseen words (Section 6.3.1) and senses (Section 6.3.2)? • Q4: Can EWISE learn with less annotated data? (Section 6.4) 6.1 Overall Results In this section, we report the performance of EWISE on the fine-grained all-words WSD task, using the standardized benchmarks and evaluation methodology introduced in Raganato et al. (2017a). In Table 1, we report the F1 scores for EWISE, and compare against the best reported supervised and knowledge-based methods. WordNet S1 is a strong baseline obtained by using the most frequent sense of a word as listed in WordNet. MFS is a most-frequent-sense baseline obtained through the sense frequencies in the training corpus. Context2Vec (Melamud et al., 2016), an unsupervised model for learning generic context embeddings, enables a strong baseline for supervised WSD while using a simplistic approach (nearestneighbour algorithm). IMS+emb (Iacobacci et al., 2016) takes the classical approach of extracting relevant features and learning an SVM for WSD. Leskext+emb (Basile et al., 2014) relies on definition-context overlap heuristics. UKBglossw2w (Agirre et al., 2014), Babelfy (Moro et al., 2014) and WSD-TM (Chaplot and Salakhutdinov, 2018) provide unsupervised knowledge-based methods. Among neural baselines, we compare against the neural sequence modeling approach in BiLSTM+att+LEX(+POS) (Raganato et al., 2017b). GAS (Luo et al., 2018b) and HCAN (Luo et al., 2018a) are recent neural models which exploit sense definitions. EWISE consistently outperforms all supervised and knowledge-based methods, improving upon the state-of-the-art by 0.7 point in F1 on the ALL dataset. 
Further, EWISE improves WSD performance across all POS tags (Table 1) except adverbs.

Back-off: Traditional supervised approaches can't handle unseen words, so WordNet S1 is used as a back-off strategy for words unseen during training. EWISE is capable of generalizing to unseen words and senses and doesn't use any back-off.

6.2 Ablation Study for EWISE

Ablation on ALL dataset
EWISE (ConvE)                              71.8
 - w/o Sense embeddings (with back-off)    69.3
 - w/o Sense embeddings (w/o back-off)     61.8
WordNet S1                                 65.2

Table 2: Ablation study for EWISE (ConvE) on the ALL dataset. Removal of sense embeddings (rows 2 and 3) results in significant performance degradation, establishing their importance in WSD. Please see Section 6.2 for details.

We provide an ablation study of EWISE on the ALL dataset in Table 2. To investigate the effect of using definition embeddings in EWISE, we trained a BiLSTM model without any externally obtained sense embeddings. This model can make predictions only on words seen during training, and is evaluated with or without a back-off strategy (WordNet S1) for unseen words (rows 2 and 3). The results demonstrate that incorporating sense embeddings is key to EWISE's performance. Further, the generalization capability of EWISE is illustrated by the improvement in F1 in the absence of a back-off strategy (10.0 points).

             Test Datasets
             SE2     SE3     SE13    SE15    ALL
USE          73.0    70.6    70.9    73.7    71.5
InferSent    72.7    70.2    69.9    73.7    71.2
ELMO         72.5    70.7    68.6    72.6    70.8
BERT         73.0    69.7    70.0    73.7    71.2
DeConf       71.3    67.0    67.9    73.0    69.3
TransE       72.8    71.4    70.5    73.1    71.6
ConvE        73.8    71.1    69.4    74.5    71.8

Table 3: Comparison of F1 scores with different sense embeddings as targets for EWISE. While pre-trained embedding methods (USE, InferSent, ELMO, BERT) and DeConf provide impressive results, the KG embedding methods (TransE and ConvE) perform competitively or better by learning to encode definitions using WordNet alone. Please see Section 6.2 for details.

Next, we investigate the impact of the choice of sense embeddings used as the target for EWISE (Table 3), on the ALL dataset. We compare definition embeddings learnt using structural knowledge (TransE, ConvE; see Section 4.3.2) against definition embeddings obtained from pre-trained sentence and context encoders (USE, InferSent, ELMO, BERT; see Section 4.3.1). We also compared with off-the-shelf sense embeddings (DeConf) (Pilehvar and Collier, 2016), where definitions are not used. The results justify the choice of learning definition embeddings to represent senses.

6.3 Detailed Results

We provide detailed results for EWISE on the ALL dataset, compared against the BiLSTM-A (BiLSTM+attention) baseline, which is trained to predict in the discrete label space (Raganato et al., 2017b). We also compare against WordNet S1 and the knowledge-based methods Leskext+emb and Babelfy, available in the evaluation framework of Raganato et al. (2017a).

6.3.1 WSD on Rare Words

In this section, we investigate a key claim of EWISE - the ability to disambiguate unseen and rare words. We evaluate WSD models based on different frequencies of annotated words in the training set in Figure 2. EWISE outperforms the supervised as well as knowledge-based baselines for rare as well as frequent words.

[Figure 2: bar chart of F1 scores (y-axis, 45-95) for WordNet S1, Lesk(ext)+emb, Babelfy, BiLSTM-A and EWISE, grouped by the frequency of annotated words in the train set (0, 1-10, 11-50, >50).]

Figure 2: Comparison of F1 scores for different frequencies of annotated words in the train set. EWISE provides significant gains for unseen, rare as well as frequently observed annotated words. Please see Section 6.3.1 for details.

The bar plot on the left (frequency=0) indicates the zero-shot learning capability of EWISE. While traditional supervised systems are limited to WordNet S1 performance (by using it as back-off for words with no annotations in the training set), EWISE provides a significant boost over both WordNet S1 as well as knowledge-based systems.

6.3.2 WSD on Rare Senses

                 MFS      LFS
WordNet S1       100.0    0.0
Lesk(ext)+emb    92.7     9.4
Babelfy          93.9     12.2
BiLSTM-A         93.4     22.9
EWISE            93.5     31.2

Table 4: Comparison of F1 scores on different sense frequencies. EWISE outperforms baselines on infrequent senses, without sacrificing the performance on the most frequent sense examples. Please see Section 6.3.2 for details.

To investigate the ability to generalize to rare senses, we partition the ALL test set into two parts - the set of instances labeled with the most frequent sense of the corresponding word (MFS), and the set of remaining instances (LFS: Least Frequent Senses). Postma et al. (2016) note that existing methods learn well on the MFS set, while doing poorly (∼20%) on the LFS set. In Table 4, we evaluate the performance of EWISE and baseline models on the MFS and LFS sets. We note that EWISE provides significant gains over a neural baseline (BiLSTM-A), as well as knowledge-based methods, on the LFS set, while maintaining high accuracy on the MFS set. The gain obtained on the LFS set is consistent with our hypothesis that predicting over sense embeddings enables generalization to rare senses.

6.4 Size of Training Data

Size of training data       F1
                            Without back-off    With back-off
WordNet S1                  65.2
EWISE, 20%                  66.8                67.0
EWISE, 50%                  70.1                69.2
EWISE, 100%                 71.8                71.0

Table 5: Performance of EWISE with varying sizes of training data. With only 20% of training data, EWISE is able to outperform the most-frequent-sense baseline of WordNet S1. Please see Section 6.4 for details.

In this section, we investigate if EWISE can learn efficiently from less training data, given its increased supervision bandwidth (sense embeddings instead of sense labels). In Table 5, we report the performance of EWISE on the ALL dataset with varying sizes of the training data. We note that with only 50% of the training data, EWISE already competes with several supervised approaches (Table 1), while with just 20% of the training data, EWISE is able to outperform the strong WordNet S1 baseline. For reference, we also present the performance of EWISE when we use back-off (WordNet S1) for words unseen during training.

7 Conclusion and Future Work

We have introduced EWISE, a general framework for learning WSD from a combination of sense-annotated data, dictionary definitions and Lexical Knowledge Bases. EWISE uses sense embeddings as targets instead of discrete sense labels. This helps the model gain zero-shot learning capabilities, demonstrated through ablation and detailed analysis. EWISE improves state-of-the-art results on standardized benchmarks for WSD. We are releasing the EWISE code to promote reproducible research.

This paper should serve as a starting point to better investigate WSD on out-of-vocabulary words. Our modular architecture opens up various avenues for improvements in few-shot learning for WSD, viz., the context encoder, the definition encoder, and the way structural knowledge is leveraged. Another potential direction for future work would be to explore other ways of providing rich supervision from textual descriptions as targets.
Acknowledgments We thank the anonymous reviewers for their constructive comments. This work is supported in part by the Ministry of Human Resource Development (Government of India), and by a travel grant from Microsoft Research India. References Eneko Agirre, Oier L´opez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57–84. Dzmitry Bahdanau, Tom Bosc, Stanisaw Jastrzebski, Edward Grefenstette, Pascal Vincent, and Yoshua Bengio. 2017. Learning to compute word embeddings on the fly. arXiv preprint arXiv:1706.00286. Satanjeev Banerjee and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Ijcai, volume 3, pages 805–810. Pierpaolo Basile, Annalina Caputo, and Giovanni Semeraro. 2014. An enhanced Lesk word sense disambiguation algorithm through a distributional semantic model. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1591–1600, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. Tom Bosc and Pascal Vincent. 2018. Auto-encoding dictionary definitions into consistent word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1522–1532, Brussels, Belgium. Association for Computational Linguistics. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics. Devendra Singh Chaplot, Pushpak Bhattacharyya, and Ashwin Paranjape. 2015. Unsupervised word sense disambiguation using markov random field and dependency parser. In AAAI, pages 2217–2223. 5679 Devendra Singh Chaplot and Ruslan Salakhutdinov. 2018. Knowledge-based word sense disambiguation using topic models. In Thirty-Second AAAI Conference on Artificial Intelligence. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics. Claudio Delli Bovi, Luis Espinosa-Anke, and Roberto Navigli. 2015. Knowledge base unification via sense embeddings and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 726– 736, Lisbon, Portugal. Association for Computational Linguistics. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 897–907, Berlin, Germany. Association for Computational Linguistics. Oier Lopez de Lacalle and Eneko Agirre. 2015. A methodology for word sense disambiguation at 90% based on large-scale CrowdSourcing. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 61–70, Denver, Colorado. Association for Computational Linguistics. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th annual international conference on Systems documentation, pages 24–26. ACM. Fuli Luo, Tianyu Liu, Zexue He, Qiaolin Xia, Zhifang Sui, and Baobao Chang. 2018a. Leveraging gloss knowledge in neural word sense disambiguation by hierarchical co-attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1402–1411, Brussels, Belgium. Association for Computational Linguistics. Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, and Zhifang Sui. 2018b. Incorporating glosses into neural word sense disambiguation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2473–2482, Melbourne, Australia. Association for Computational Linguistics. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61, Berlin, Germany. Association for Computational Linguistics. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In HUMAN LANGUAGE TECHNOLOGY: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993. Andrea Moro and Roberto Navigli. 2015. SemEval2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 288–297, Denver, Colorado. Association for Computational Linguistics. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231– 244. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10. Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 task 12: Multilingual word sense disambiguation. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 222–231, Atlanta, Georgia, USA. Association for Computational Linguistics. Roberto Navigli and Mirella Lapata. 2010. 
An experimental study of graph connectivity for unsupervised word sense disambiguation. IEEE transactions on pattern analysis and machine intelligence, 32(4):678–692. 5680 Steven Neale, Lu´ıs Gomes, Eneko Agirre, Oier Lopez de Lacalle, and Ant´onio Branco. 2016. Word senseaware machine translation: Including senses as contextual features for improved translation models. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 2777–2783, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Martha Palmer, Christiane Fellbaum, Scott Cotton, Lauren Delfs, and Hoa Trang Dang. 2001. English tasks: All-words and verb lexical sample. In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 21–24, Toulouse, France. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680–1690, Austin, Texas. Association for Computational Linguistics. Marten Postma, Ruben Izquierdo Bevia, and Piek Vossen. 2016. More is not always better: balancing sense distributions for all-words word sense disambiguation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3496–3506, Osaka, Japan. The COLING 2016 Organizing Committee. Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 87–92, Prague, Czech Republic. Association for Computational Linguistics. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Xiao Pu, Nikolaos Pappas, James Henderson, and Andrei Popescu-Belis. 2018. Integrating weakly supervised word sense disambiguation into neural machine translation. Transactions of the Association for Computational Linguistics, 6:635–649. Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017a. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99–110, Valencia, Spain. Association for Computational Linguistics. Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017b. Neural sequence learning models for word sense disambiguation. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1156–1167, Copenhagen, Denmark. Association for Computational Linguistics. Ganesh Ramakrishnan, Apurva Jadhav, Ashutosh Joshi, Soumen Chakrabarti, and Pushpak Bhattacharyya. 2003. Question answering via Bayesian inference on lexical relations. In Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering, pages 1–10, Sapporo, Japan. Association for Computational Linguistics. Ravi Sinha and Rada Mihalcea. 2007. Unsupervised graph-basedword sense disambiguation using measures of word semantic similarity. In Semantic Computing, 2007. ICSC 2007. International Conference on, pages 363–369. IEEE. Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 41–43, Barcelona, Spain. Association for Computational Linguistics. Kaveh Taghipour and Hwee Tou Ng. 2015. Semisupervised word sense disambiguation using word embeddings in general and specific domains. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 314–323, Denver, Colorado. Association for Computational Linguistics. Rocco Tripodi and Marcello Pelillo. 2017. A gametheoretic approach to word sense disambiguation. Computational Linguistics, 43(1):31–70. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions 5681 on Knowledge and Data Engineering, 29(12):2724– 2743. Dirk Weissenborn, Leonhard Hennig, Feiyu Xu, and Hans Uszkoreit. 2015. Multi-objective optimization for the joint disambiguation of nouns and named entities. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 596–605, Beijing, China. Association for Computational Linguistics. Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. 2018. Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence. Dayu Yuan, Julian Richardson, Ryan Doherty, Colin Evans, and Eric Altendorf. 2016. Semi-supervised word sense disambiguation with neural models. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1374–1385, Osaka, Japan. The COLING 2016 Organizing Committee. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 System Demonstrations, pages 78–83, Uppsala, Sweden. Association for Computational Linguistics. Zhi Zhong and Hwee Tou Ng. 2012. Word sense disambiguation improves information retrieval. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 273–282, Jeju Island, Korea. Association for Computational Linguistics. A Training Details For both context and definition encoding, we used BiLSTMs of hidden size 2048. 
The input embeddings for the BiLSTMs were initialized with GloVe1 (Pennington et al., 2014) embeddings and kept fixed during training. We used the Adam optimizer for learning all our models.

WSD: We used an initial learning rate of 0.0001, a batch size of 32, and trained our models for a maximum of 200 epochs. For each run, we select the model with the best F1 score on the development set (SemEval-2007). During training, we consider the entire sense inventory (the global pool of candidate senses of all words) for learning. During inference, for fair comparison with baselines, we disambiguate between the candidate senses of a word as provided in WordNet.

TransE: We use training data from Bordes et al. (2013)2. We used an initial learning rate of 0.001, a batch size of 32, and trained for a maximum of 1000 epochs. The embedding size was fixed to 4096.

ConvE: We use the learning framework of Dettmers et al. (2018), and learned the model with an initial learning rate of 0.0001, a batch size of 128, label smoothing of 0.1, and a maximum of 500 epochs. We found that the best results were obtained by pretraining the entity and relation embeddings using Equation 3 and then training the definition encoder using Equation 13 while allowing all parameters to train. The embedding size was fixed to 4096.

1 http://nlp.stanford.edu/data/glove.840B.300d.zip
2 https://everest.hds.utc.fr/lib/exe/fetch.php?media=en:wordnet-mlj12.tar.gz
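For reference, the hyper-parameters listed in this appendix can be collected into a single configuration block; the grouping and field names below are ours, while the values are the ones stated above.

```python
# Hyper-parameters reported in Appendix A, gathered into one (hypothetical) configuration.
EWISE_TRAINING_CONFIG = {
    "encoders": {"type": "2-layer BiLSTM", "hidden_size": 2048,
                 "input_embeddings": "GloVe 840B.300d (frozen)"},
    "optimizer": "Adam",
    "wsd": {"lr": 1e-4, "batch_size": 32, "max_epochs": 200,
            "model_selection": "best F1 on the SemEval-2007 development set"},
    "transe": {"lr": 1e-3, "batch_size": 32, "max_epochs": 1000, "embedding_size": 4096},
    "conve": {"lr": 1e-4, "batch_size": 128, "label_smoothing": 0.1,
              "max_epochs": 500, "embedding_size": 4096},
}
```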
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5682–5691 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5682 Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation Daniel Loureiro, Al´ıpio M´ario Jorge LIAAD - INESC TEC Faculty of Sciences - University of Porto, Portugal [email protected], [email protected] Abstract Contextual embeddings represent a new generation of semantic representations learned from Neural Language Modelling (NLM) that addresses the issue of meaning conflation hampering traditional word embeddings. In this work, we show that contextual embeddings can be used to achieve unprecedented gains in Word Sense Disambiguation (WSD) tasks. Our approach focuses on creating sense-level embeddings with full-coverage of WordNet, and without recourse to explicit knowledge of sense distributions or task-specific modelling. As a result, a simple Nearest Neighbors (kNN) method using our representations is able to consistently surpass the performance of previous systems using powerful neural sequencing models. We also analyse the robustness of our approach when ignoring part-of-speech and lemma features, requiring disambiguation against the full sense inventory, and revealing shortcomings to be improved. Finally, we explore applications of our sense embeddings for concept-level analyses of contextual embeddings and their respective NLMs. 1 Introduction Word Sense Disambiguation (WSD) is a core task of Natural Language Processing (NLP) which consists in assigning the correct sense to a word in a given context, and has many potential applications (Navigli, 2009). Despite breakthroughs in distributed semantic representations (i.e. word embeddings), resolving lexical ambiguity has remained a long-standing challenge in the field. Systems using non-distributional features, such as It Makes Sense (IMS, Zhong and Ng, 2010), remain surprisingly competitive against neural sequence models trained end-to-end. A baseline that simply chooses the most frequent sense (MFS) has also proven to be notoriously difficult to surpass. Several factors have contributed to this limited progress over the last decade, including lack of standardized evaluation, and restricted amounts of sense annotated corpora. Addressing the evaluation issue, Raganato et al. (2017a) has introduced a unified evaluation framework that has already been adopted by the latest works in WSD. Also, even though SemCor (Miller et al., 1994) still remains the largest manually annotated corpus, supervised methods have successfully used label propagation (Yuan et al., 2016), semantic networks (Vial et al., 2018) and glosses (Luo et al., 2018b) in combination with annotations to advance the state-of-the-art. Meanwhile, taskspecific sequence modelling architectures based on BiLSTMs or Seq2Seq (Raganato et al., 2017b) haven’t yet proven as advantageous for WSD. Until recently, the best semantic representations at our disposal, such as word2vec (Mikolov et al., 2013) and fastText (Bojanowski et al., 2017), were bound to word types (i.e. distinct tokens), converging information from different senses into the same representations (e.g. ‘play song’ and ‘play tennis’ share the same representation of ‘play’). These word embeddings were learned from unsupervised Neural Language Modelling (NLM) trained on fixed-length contexts. 
However, by recasting the same word types across different sense-inducing contexts, these representations became insensitive to the different senses of polysemous words. Camacho-Collados and Pilehvar (2018) refer to this issue as the meaning conflation deficiency and explore it more thoroughly in their work. Recent improvements to NLM have allowed for learning representations that are context-specific and detached from word types. While word embedding methods reduced NLMs to fixed representations after pretraining, this new generation of contextual embeddings employs the pretrained 5683 NLM to infer different representations induced by arbitrarily long contexts. Contextual embeddings have already had a major impact on the field, driving progress on numerous downstream tasks. This success has also motivated a number of iterations on embedding models in a short timespan, from context2vec (Melamud et al., 2016), to GPT (Radford et al., 2018), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2019). Being context-sensitive by design, contextual embeddings are particularly well-suited for WSD. In fact, Melamud et al. (2016) and Peters et al. (2018) produced contextual embeddings from the SemCor dataset and showed competitive results on Raganato et al. (2017a)’s WSD evaluation framework, with a surprisingly simple approach based on Nearest Neighbors (k-NN). These results were promising, but those works only produced sense embeddings for the small fraction of WordNet (Fellbaum, 1998) senses covered by SemCor, resorting to the MFS approach for a large number of instances. Lack of high coverage annotations is one of the most pressing issues for supervised WSD approaches (Le et al., 2018). Our experiments show that the simple k-NN w/MFS approach using BERT embeddings suffices to surpass the performance of all previous systems. Most importantly, in this work we introduce a method for generating sense embeddings with full-coverage of WordNet, which further improves results (additional 1.9% F1) while forgoing MFS fallbacks. To better evaluate the fitness of our sense embeddings, we also analyse their performance without access to lemma or part-ofspeech features typically used to restrict candidate senses. Representing sense embeddings in the same space as any contextual embeddings generated from the same pretrained NLM eases introspections of those NLMs, and enables token-level intrinsic evaluations based on k-NN WSD performance. We summarize our contributions1 below: • A method for creating sense embeddings for all senses in WordNet, allowing for WSD based on k-NN without MFS fallbacks. • Major improvement over the state-of-the-art on cross-domain WSD tasks, while exploring the strengths and weaknesses of our method. • Applications of our sense embeddings for concept-level analyses of NLMs. 1Code and data: github.com/danlou/lmms 2 Language Modelling Representations Distributional semantic representations learned from Unsupervised Neural Language Modelling (NLM) are currently used for most NLP tasks. In this section we cover aspects of word and contextual embeddings, learned from from NLMs, that are particularly relevant for our work. 2.1 Static Word Embeddings Word embeddings are distributional semantic representations usually learned from NLM under one of two possible objectives: predict context words given a target word (Skip-Gram), or the inverse (CBOW) (word2vec, Mikolov et al., 2013). 
In both cases, context corresponds to a fixed-length window sliding over tokenized text, with the target word at the center. These modelling objectives are enough to produce dense vector-based representations of words that are widely used as powerful initializations on neural modelling architectures for NLP. As we explained in the introduction, word embeddings are limited by meaning conflation around word types, and reduce NLM to fixed representations that are insensitive to contexts. However, with fastText (Bojanowski et al., 2017) we’re not restricted to a finite set of representations and can compositionally derive representations for word types unseen during training. 2.2 Contextual Embeddings The key differentiation of contextual embeddings is that they are context-sensitive, allowing the same word types to be represented differently according to the contexts in which they occurr. In order to be able to produce new representations induced by different contexts, contextual embeddings employ the pretrained NLM for inferences. Also, the NLM objective for contextual embeddings is usually directional, predicting the previous and/or next tokens in arbitrarily long contexts (usually sentences). ELMo (Peters et al., 2018) was the first implementation of contextual embeddings to gain wide adoption, but it was shortly after followed by BERT (Devlin et al., 2019) which achieved new state-of-art results on 11 NLP tasks. Interestingly, BERT’s impressive results were obtained from task-specific fine-tuning of pretrained NLMs, instead of using them as features in more complex models, emphasizing the quality of these representations. 5684 3 Word Sense Disambiguation (WSD) There are several lines of research exploring different approaches for WSD (Navigli, 2009). Supervised methods have traditionally performed best, though this distinction is becoming increasingly blurred as works in supervised WSD start exploiting resources used by knowledge-based approaches (e.g. Luo et al., 2018a; Vial et al., 2018). We relate our work to the best-performing WSD methods, regardless of approach, as well as methods that may not perform as well but involve producing sense embeddings. In this section we introduce the components and related works that are most relevant for our approach. 3.1 Sense Inventory, Attributes and Relations The most popular sense inventory is WordNet, a semantic network of general domain concepts linked by a few relations, such as synonymy and hypernymy. WordNet is organized at different abstraction levels, which we describe below. Following the notation used in related works, we represent the main structure of WordNet, called synset, with lemma# POS, where lemma corresponds to the canonical form of a word, POS corresponds to the sense’s part-of-speech (noun, verb, adjective or adverb), and # further specifies this entry. • Synsets: groups of synonymous words that correspond to the same sense, e.g. dog1 n. • Lemmas: canonical forms of words, may belong to multiple synsets, e.g. dog is a lemma for dog1 n and chase1 v, among others. • Senses: lemmas specifed by sense (i.e. sensekeys), e.g. dog%1:05:00::, and domestic dog%1:05:00:: are senses of dog1 n. Each synset has a number of attributes, of which the most relevant for this work are: • Glosses: dictionary definitions, e.g. dog1 n has the definition ‘a member of the genus Ca...’. • Hypernyms: ‘type of’ relations between synsets, e.g. dog1 n is a hypernym of pug1 n. • Lexnames: syntactical and logical groupings, e.g. 
the lexname for dog1 n is noun.animal. In this work we’re using WordNet 3.0, which contains 117,659 synsets, 206,949 unique senses, 147,306 lemmas, and 45 lexnames. 3.2 WSD State-of-the-Art While non-distributional methods, such as Zhong and Ng (2010)’s IMS, still perform competitively, there are have been several noteworthy advancements in the last decade using distributional representations from NLMs. Iacobacci et al. (2016) improved on IMS’s performance by introducing word embeddings as additional features. Yuan et al. (2016) achieved significantly improved results by leveraging massive corpora to train a NLM based on an LSTM architecture. This work is contemporaneous with Melamud et al. (2016), and also uses a very similar approach for generating sense embeddings and relying on k-NN w/MFS for predictions. Although most performance gains stemmed from their powerful NLM, they also introduced a label propagation method that further improved results in some cases. Curiously, the objective Yuan et al. (2016) used for NLM (predicting held-out words) is very evocative of the cloze-style Masked Language Model introduced by Devlin et al. (2019). Le et al. (2018) replicated this work and offers additional insights. Raganato et al. (2017b) trained neural sequencing models for end-to-end WSD. This work reframes WSD as a translation task where sequences of words are translated into sequences of senses. The best result was obtained with a BiLSTM trained with auxilliary losses specific to parts-ofspeech and lexnames. Despite the sophisticated modelling architecture, it still performed on par with Iacobacci et al. (2016). The works of Melamud et al. (2016) and Peters et al. (2018) using contextual embeddings for WSD showed the potential of these representations, but still performed comparably to IMS. Addressing the issue of scarce annotations, recent works have proposed methods for using resources from knowledge-based approaches. Luo et al. (2018a) and Luo et al. (2018b) combine information from glosses present in WordNet, with NLMs based on BiLSTMs, through memory networks and co-attention mechanisms, respectively. Vial et al. (2018) follows Raganato et al. (2017b)’s BiLSTM method, but leverages the semantic network to strategically reduce the set of senses required for disambiguating words. All of these works rely on MFS fallback. Additionally, to our knowledge, all also perform disambiguation only against the set of admissible senses given the word’s lemma and part-of-speech. 5685 3.3 Other methods with Sense Embeddings Some works may no longer be competitive with the state-of-the-art, but nevertheless remain relevant for the development of sense embeddings. We recommend the recent survey of CamachoCollados and Pilehvar (2018) for a thorough overview of this topic, and highlight a few of the most relevant methods. Chen et al. (2014) initializes sense embeddings using glosses and adapts the Skip-Gram objective of word2vec to learn and improve sense embeddings jointly with word embeddings. Rothe and Sch¨utze (2015)’s AutoExtend method uses pretrained word2vec embeddings to compose sense embeddings from sets of synonymous words. Camacho-Collados et al. (2016) creates the NASARI sense embeddings using structural knowledge from large multilingual semantic networks. These methods represent sense embeddings in the same space as the pretrained word embeddings, however, being based on fixed embedding spaces, they are much more limited in their ability to generate contextual representations to match against. 
Furthermore, none of these methods (or those in §3.2) achieve full-coverage of the +200K senses in WordNet. 4 Method Figure 1: Illustration of our k-NN approach for WSD, which relies on full-coverage sense embeddings represented in the same space as contextualized embeddings. For simplification, we label senses as synsets. Grey nodes belong to different lemmas (see §5.3). Our WSD approach is strictly based on k-NN (see Figure 1), unlike any of the works referred previously. We avoid relying on MFS for lemmas that do not occur in annotated corpora by generating sense embeddings with full-coverage of WordNet. Our method starts by generating sense embeddings from annotations, as done by other works, and then introduces several enhancements towards full-coverage, better performance and increased robustness. In this section, we cover each of these techniques. 4.1 Embeddings from Annotations Our set of full-coverage sense embeddings is bootstrapped from sense-annotated corpora. Sentences containing sense-annotated tokens (or spans) are processed by a NLM in order to obtain contextual embeddings for those tokens. After collecting all sense-labeled contextual embeddings, each sense embedding is determined by averaging its corresponding contextual embeddings. Formally, given n contextual embeddings ⃗c for some sense s: ⃗vs = 1 n n X i=1 ⃗ci, dim(⃗vs) = 1024 In this work we use pretrained ELMo and BERT models to generate contextual embeddings. These models can be identified and replicated with the following details: • ELMo: 1024 (2x512) embedding dimensions, 93.6M parameters. Embeddings from top layer (2). • BERT: 1024 embedding dimensions, 340M parameters, cased. Embeddings from sum of top 4 layers ([-1,-4])2. BERT uses WordPiece tokenization that doesn’t always map to token-level annotations (e.g. ‘multiplication’ becomes ‘multi’, ‘##plication’). We use the average of subtoken embeddings as the token-level embedding. Unless specified otherwise, our LMMS method uses BERT. 4.2 Extending Annotation Coverage As many have emphasized before (Navigli, 2009; Camacho-Collados and Pilehvar, 2018; Le et al., 2018), the lack of sense annotations is a major limitation of supervised approaches for WSD. We address this issue by taking advantage of the semantic relations in WordNet to extend the annotated signal to other senses. Semantic networks are often explored by knowledge-based approaches, and some recent works in supervised approaches as well (Luo et al., 2018a; Vial et al., 2018). The 2This was the configuration that performed best out of the ones on Table 7 of Devlin et al. (2018). 5686 guiding principle behind these approaches is that sense-level representations can be imputed (or improved) from other representations that are known to correspond to generalizations due to the network’s taxonomical structure. Vial et al. (2018) leverages relations in WordNet to reduce the sense inventory to a minimal set of entries, making the task easier to model while maintaining the ability to distinguish senses. We take the inverse path of leveraging relations to produce representations for additional senses. On §3.1 we covered synsets, hypernyms and lexnames, which correspond to increasingly abstract generalizations. Missing sense embeddings are imputed from the aggregation of sense embeddings at each of these abstraction levels. In order to get embeddings that are representative of higher-level abstractions, we simply average the embeddings of all lower-level constituents. 
Thus, a synset embedding corresponds to the average of all of its sense embeddings, a hypernym embedding corresponds to the average of all of its synset embeddings, and a lexname embedding corresponds to the average of a larger set of synset embeddings. All lower abstraction representations are created before next-level abstractions to ensure that higher abstractions make use of lower generalizations. More formally, given all missing senses in WordNet ˆs ∈W, their synset-specific sense embeddings Sˆs, hypernym-specific synset embeddings Hˆs, and lexname-specific synset embeddings Lˆs, the procedure has the following stages: (1) if|Sˆs| > 0, ⃗vˆs = 1 |Sˆs| P⃗vs, ∀⃗vs ∈Sˆs (2) if|Hˆs| > 0, ⃗vˆs = 1 |Hˆs| P⃗vsyn, ∀⃗vsyn ∈Hˆs (3) if|Lˆs| > 0, ⃗vˆs = 1 |Lˆs| P⃗vsyn, ∀⃗vsyn ∈Lˆs In Table 1 we show how much coverage extends while improving both recall and precision. F1 / P / R (without MFS) Source Coverage BERT ELMo SemCor 16.11% 68.9 / 72.4 / 65.7 63.0 / 66.2 / 60.1 + synset 26.97% 70.0 / 72.6 / 70.0 63.9 / 66.3 / 61.7 + hypernym 74.70% 73.0 / 73.6 / 72.4 67.2 / 67.7 / 66.6 + lexname 100% 73.8 / 73.8 / 73.8 68.1 / 68.1 / 68.1 Table 1: Coverage of WordNet when extending to increasingly abstract representations along with performance on the ALL test set of Raganato et al. (2017a). 4.3 Improving Senses using the Dictionary There’s a long tradition of using glosses for WSD, perhaps starting with the popular work of Lesk (1986), which has since been adapted to use distributional representations (Basile et al., 2014). As a sequence of words, the information contained in glosses can be easily represented in semantic spaces through approaches used for generating sentence embeddings. There are many methods for generating sentence embeddings, but it’s been shown that a simple weighted average of word embeddings performs well (Arora et al., 2017). Our contextual embeddings are produced from NLMs using attention mechanisms, assigning more importance to some tokens over others, so they already come ‘pre-weighted’ and we embed glosses simply as the average of all of their contextual embeddings (without preprocessing). We’ve also found that introducing synset lemmas alongside the words in the gloss helps induce better contextualized embeddings (specially when glosses are short). Finally, we make our dictionary embeddings (⃗vd) sense-specific, rather than synsetspecific, by repeating the lemma that’s specific to the sense, alongside the synset’s lemmas and gloss words. The result is a sense-level embedding, determined without annotations, that is represented in the same space as the sense embeddings we described in the previous section, and can be trivially combined through concatenation or average for improved performance (see Table 2). Our empirical results show improved performance by concatenation, which we attribute to preserving complementary information from glosses. Both averaging and concatenating representations (previously L2 normalized) also serves to smooth possible biases that may have been learned from the SemCor annotations. Note that while concatenation effectively doubles the size of our embeddings, this doesn’t equal doubling the expressiveness of the distributional space, since they’re two representations from the same NLM. This property also allows us to make predictions for contextual embeddings (from the same NLM) by simply repeating those embeddings twice, aligning contextual features against sense and dictionary features when computing cosine similarity. 
Thus, our sense embeddings become: ⃗vs = ||⃗vs||2 ||⃗vd||2  , dim(⃗vs) = 2048 5687 Configurations LMMS1024 LMMS2048 LMMS2348 Embeddings Contextual (d=1024)      Dictionary (d=1024)      Static (d=300)    Operation Average  Concatenation     Perf. (F1 on ALL) Lemma & POS 73.8 58.7 75.0 75.4 73.9 58.7 75.4 Token (Uninformed) 42.7 6.1 36.5 35.1 64.4 45.0 66.0 Table 2: Overview of the different performance of various setups regarding choice of embeddings and combination strategy. All results are for the 1-NN approach on the ALL test set of Raganato et al. (2017a). We also show results that ignore the lemma and part-of-speech features of the test sets to show that the inclusion of static embeddings makes the method significantly more robust to real-world scenarios where such gold features may not be available. 4.4 Morphological Robustness WSD is expected to be performed only against the set of candidate senses that are specific to a target word’s lemma. However, as we’ll explain in §5.3, there are cases where it’s undesirable to restrict the WSD process. We leverage word embeddings specialized for morphological representations to make our sense embeddings more resilient to the absence of lemma features, achieving increased robustness. This addresses a problem arising from the susceptibility of contextual embeddings to become entirely detached from the morphology of their corresponding tokens, due to interactions with other tokens in the sentence. We choose fastText (Bojanowski et al., 2017) embeddings (pretrained on CommonCrawl), which are biased towards morphology, and avoid Out-of-Vocabulary issues as explained in §2.1. We use fastText to generate static word embeddings for the lemmas (⃗vl) corresponding to all senses, and concatenate these word embeddings to our previous embeddings. When making predictions, we also compute fastText embeddings for tokens, allowing for the same alignment explained in the previous section. This technique effectively makes sense embeddings of morphologically related lemmas more similar. Empirical results (see Table 2) show that introducing these static embeddings is crucial for achieving satisfactory performance when not filtering candidate senses. Our final, most robust, sense embeddings are thus: ⃗vs =   ||⃗vs||2 ||⃗vd||2 ||⃗vl||2  , dim(⃗vs) = 2348 5 Experiments Our experiments centered on evaluating our solution on Raganato et al. (2017a)’s set of crossdomain WSD tasks. In this section we compare our results to the current state-of-the-art, and provide results for our solution when disambiguating against the full set of possible senses in WordNet, revealing shortcomings to be improved. 5.1 All-Words Disambiguation In Table 3 we show our results for all tasks of Raganato et al. (2017a)’s evaluation framework. We used the framework’s scoring scripts to avoid any discrepancies in the scoring methodology. Note that the k-NN referred in Table 3 always refers to the closest neighbor, and relies on MFS fallbacks. The first noteworthy result we obtained was that simply replicating Peters et al. (2018)’s method for WSD using BERT instead of ELMo, we were able to significantly, and consistently, surpass the performance of all previous works. When using our method (LMMS), performance still improves significantly over the previous impressive results (+1.9 F1 on ALL, +3.4 F1 on SemEval 2013). 
Interestingly, we found that our method using ELMo embeddings didn’t outperform ELMo k-NN with MFS fallback, suggesting that it’s necessary to achieve a minimum competence level of embeddings from sense annotations (and glosses) before the inferred sense embeddings become more useful than MFS. In Figure 2 we show results when considering additional neighbors as valid predictions, together with a random baseline considering that some target words may have less senses than the number of accepted neighbors (always correct). 5688 Model Senseval2 Senseval3 SemEval2007 SemEval2013 SemEval2015 ALL (n=2,282) (n=1,850) (n=455) (n=1,644) (n=1,022) (n=7,253) MFS† (Most Frequent Sense) 65.6 66.0 54.5 63.8 67.1 64.8 IMS† (2010) 70.9 69.3 61.3 65.3 69.5 68.4 IMS + embeddings† (2016) 72.2 70.4 62.6 65.9 71.5 69.6 context2vec k-NN† (2016) 71.8 69.1 61.3 65.6 71.9 69.0 word2vec k-NN (2016) 67.8 62.1 58.5 66.1 66.7 LSTM-LP (Label Prop.) (2016) 73.8 71.8 63.5 69.5 72.6 Seq2Seq (Task Modelling) (2017b) 70.1 68.5 63.1* 66.5 69.2 68.6* BiLSTM (Task Modelling) (2017b) 72.0 69.1 64.8* 66.9 71.5 69.9* ELMo k-NN (2018) 71.5 67.5 57.1 65.3 69.9 67.9 HCAN (Hier. Co-Attention) (2018a) 72.8 70.3 -* 68.5 72.8 -* BiLSTM w/Vocab. Reduction (2018) 72.6 70.4 61.5 70.8 71.3 70.8 BERT k-NN 76.3 73.2 66.2 71.7 74.1 73.5 LMMS2348 (ELMo) 68.1 64.7 53.8 66.9 69.0 66.2 LMMS2348 (BERT) 76.3 75.6 68.1 75.1 77.0 75.4 Table 3: Comparison with other works on the test sets of Raganato et al. (2017a). All works used sense annotations from SemCor as supervision, although often different pretrained embeddings. † - reproduced from Raganato et al. (2017a); * - used as a development set; bold - new state-of-the-art (SOTA); underlined - previous SOTA. 1 2 3 4 5 Neighbors 10 20 30 40 50 60 70 80 90 100 F1 (ALL) LMMS (WSD) LMMS (USM) RAND (WSD) Figure 2: Performance gains with LMMS2348 when accepting additional neighbors as valid predictions. 5.2 Part-of-Speech Mismatches The solution we introduced in §4.4 addressed missing lemmas, but we didn’t propose a solution that addressed missing POS information. Indeed, the confusion matrix in Table 4 shows that a large number of target words corresponding to verbs are wrongly assigned senses that correspond to adjectives or nouns. We believe this result can help motivate the design of new NLM tasks that are more capable of distinguishing between verbs and nonverbs. WN-POS NOUN VERB ADJ ADV NOUN 96.95% 1.86% 0.86% 0.33% VERB 9.08% 70.82% 19.98% 0.12% ADJ 4.50% 0% 92.27% 2.93% ADV 2.02% 0.29% 2.60% 95.09% Table 4: POS Confusion Matrix for Uninformed Sense Matching on the ALL testset using LMMS2348. 5.3 Uninformed Sense Matching WSD tasks are usually accompanied by auxilliary parts-of-speech (POSs) and lemma features for restricting the number of possible senses to those that are specific to a given lemma and POS. Even if those features aren’t provided (e.g. real-world applications), it’s sensible to use lemmatizers or POS taggers to extract them for use in WSD. However, as is the case with using MFS fallbacks, this filtering step obscures the true impact of NLM representations on k-NN solutions. Consequently, we introduce a variation on WSD, called Uninformed Sense Matching (USM), where disambiguation is always performed against the full set of sense embeddings (i.e. +200K vs. a maximum of 59). This change makes the task much harder (results on Table 2), but offers some insights into NLMs, which we cover briefly in §5.4. 
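To make the two settings concrete, the sketch below performs 1-NN matching with the concatenated LMMS-style vectors: the query repeats the token's contextual embedding (so it aligns with both the sense and dictionary components) and appends a static fastText vector for the token, and the candidate pool is either restricted to the word's WordNet senses (standard WSD) or left as the full inventory (USM). It is an illustrative sketch; `sense_vecs` is assumed to hold the 2348-dimensional vectors built as in Section 4.

```python
import numpy as np

def l2(v):
    return v / (np.linalg.norm(v) + 1e-12)

def query_vector(context_vec, fasttext_vec):
    """Align a token with LMMS2348 sense vectors: [ctx; ctx; static], each part L2-normalized."""
    return np.concatenate([l2(context_vec), l2(context_vec), l2(fasttext_vec)])

def nearest_sense(query, sense_vecs, candidates=None):
    """1-NN by cosine similarity. candidates=None corresponds to Uninformed Sense Matching,
    i.e. disambiguating against the full inventory instead of a lemma/POS-filtered subset."""
    pool = sense_vecs if candidates is None else {sk: sense_vecs[sk]
                                                  for sk in candidates if sk in sense_vecs}
    scores = {sk: float(np.dot(l2(query), l2(vec))) for sk, vec in pool.items()}
    return max(scores, key=scores.get)
```

In the standard setting, `candidates` would be the sensekeys WordNet lists for the target word's lemma and part of speech; omitting it yields the much harder USM setting discussed above.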
5.4 Use of World Knowledge It’s well known that WSD relies on various types of knowledge, including commonsense and selectional preferences (Lenat et al., 1986; Resnik, 1997), for example. Using our sense embeddings for Uninformed Sense Matching allows us to glimpse into how NLMs may be interpreting contextual information with regards to the knowledge represented in WordNet. In Table 5 we show a few examples of senses matched at the tokenlevel, suggesting that entities were topically understood and this information was useful to disambiguate verbs. These results would be less conclusive without full-coverage of WordNet. 5689 Marlon⋆ Brando⋆ played Corleone⋆ in Godfather⋆ person1 n person1 n act3 v syndicate1 n movie1 n location1 n womanizer1 n group1 n make42 v mafia1 n telefilm1 n here1 n bustle1 n location1 n emote1 v person1 n final cut1 n there1 n act3 v: play a role or part; make42 v : represent fictiously, as in a play, or pretend to be or act like; emote1 v: give expression or emotion to, in a stage or movie role. Serena⋆ Williams played Kerber⋆ in Wimbledon⋆ person1 n professional tennis1 n play1 v person1 n win1 v tournament1 n therefore1 r tennis1 n line up6 v group1 n romp3 v world cup1 n reef 1 n singles1 n curl5 v take orders2 v carry38 v elimination tournament1 n play1 v: participate in games or sport; line up6 v: take one’s position before a kick-off; curl5 v: play the Scottish game of curling. David Bowie⋆ played Warszawa⋆ in Tokyo person1 n person1 n play14 v poland1 n originate in1 n tokyo1 n amati2 n folk song1 n play6 v location1 n in1 r japan1 n guarnerius3 n fado1 n riff 2 v here1 n take the field2 v japanese1 a play14 v : perform on a certain location; play6 v: replay (as a melody); riff2 v: play riffs. Table 5: Examples controlled for syntactical changes to show how the correct sense for ‘played’ can be induced accordingly with the mentioned entities, suggesting that disambiguation is supported by world knowledge learned during LM pretraining. Words with ⋆never occurred in SemCor. Senses shown correspond to the top 3 matches in LMMS1024 for each token’s contextual embedding (uninformed). For clarification, below each set of matches are the WordNet definitions for the top disambiguated senses of ‘played’. 6 Other Applications Analyses of conventional word embeddings have revealed gender or stereotype biases (Bolukbasi et al., 2016; Caliskan et al., 2017) that may have unintended consequences in downstream applications. With contextual embeddings we don’t have sets of concept-level representations for performing similar analyses. Word representations can naturally be derived from averaging their contextual embeddings occurring in corpora, but then we’re back to the meaning conflation issue described earlier. We believe that our sense embeddings can be used as representations for more easily making such analyses of NLMs. In Figure 3 we provide an example that showcases meaningful differences in gender bias, including for lemmas shared by different senses (doctor: PhD vs. medic, and counselor: therapist vs. summer camp supervisor). The bias score for a given synset s was calculated as following: bias(s) = sim(⃗vman1n,⃗vs) −sim(⃗vwoman1n,⃗vs) Besides concept-level analyses, these sense embeddings can also be useful in applications that don’t rely on a particular inventory of senses. 
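The bias score defined above can be computed directly from the sense vectors; the sketch below assumes cosine similarity for sim(·, ·), which the text does not pin down explicitly, and uses random stand-ins for the embeddings of the man and woman senses and of the probed synsets.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def gender_bias(sense_vec, man_vec, woman_vec):
    """bias(s) = sim(v_man, v_s) - sim(v_woman, v_s): positive values indicate a
    lean towards the 'man' sense vector, negative values towards 'woman'."""
    return cosine(man_vec, sense_vec) - cosine(woman_vec, sense_vec)

# Random stand-ins for the actual sense embeddings being probed.
rng = np.random.default_rng(2)
man, woman = rng.normal(size=2048), rng.normal(size=2048)
for key in ["doctor(PhD)", "doctor(medic)", "nurse", "programmer"]:
    print(key, round(gender_bias(rng.normal(size=2048), man, woman), 3))
```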
In Loureiro and Jorge (2019), we show how similarities between matched sense embeddings and contextual embeddings are used for training a classifier that determines whether a word that occurs in two different sentences shares the same meaning. −0.050 −0.025 0.000 0.025 0.050 doctor4 n programmer1 n counselor2 n doctor1 n teacher1 n florist1 n counselor1 n receptionist1 n nurse1 n LMMS1024 LMMS2048 Figure 3: Examples of gender bias found in the sense vectors. Positive values quantify bias towards man1 n, while negative values quantify bias towards woman1 n. 7 Future Work In future work we plan to use multilingual resources (i.e. embeddings and glosses) for improving our sense embeddings and evaluating on multilingual WSD. We’re also considering exploring a semi-supervised approach where our best embeddings would be employed to automatically annotate corpora, and repeat the process described on this paper until convergence, iteratively fine-tuning sense embeddings. We expect our sense embeddings to be particularly useful in downstream tasks that may benefit from relational knowledge made accessible through linking words (or spans) to commonsense-level concepts in WordNet, such as Natural Language Inference. 5690 8 Conclusion This paper introduces a method for generating sense embeddings that allows a clear improvement of the current state-of-the-art on cross-domain WSD tasks. We leverage contextual embeddings, semantic networks and glosses to achieve fullcoverage of all WordNet senses. Consequently, we’re able to perform WSD with a simple 1-NN, without recourse to MFS fallbacks or task-specific modelling. Furthermore, we introduce a variant on WSD for matching contextual embeddings to all WordNet senses, offering a better understanding of the strengths and weaknesses of representations from NLM. Finally, we explore applications of our sense embeddings beyond WSD, such as gender bias analyses. 9 Acknowledgements This work is financed by National Funds through the Portuguese funding agency, FCT - Fundac¸˜ao para a Ciˆencia e a Tecnologia within project: UID/EEA/50014/2019. References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In International Conference on Learning Representations (ICLR). Pierpaolo Basile, Annalina Caputo, and Giovanni Semeraro. 2014. An enhanced Lesk word sense disambiguation algorithm through a distributional semantic model. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1591–1600, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, pages 4356–4364, USA. Curran Associates Inc. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Jose Camacho-Collados and Mohammad Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. J. Artif. Int. Res., 63(1):743–788. 
Jose Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. Artificial Intelligence, 240:36 – 64. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025–1035, Doha, Qatar. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805v1. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Christiane Fellbaum. 1998. In WordNet : an electronic lexical database. MIT Press. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 897–907, Berlin, Germany. Association for Computational Linguistics. Minh Le, Marten Postma, Jacopo Urbani, and Piek Vossen. 2018. A deep dive into word sense disambiguation with LSTM. In Proceedings of the 27th International Conference on Computational Linguistics, pages 354–365, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Doug Lenat, Mayank Prakash, and Mary Shepherd. 1986. Cyc: Using common sense knowledge to overcome brittleness and knowledge acquistion bottlenecks. AI Mag., 6(4):65–85. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 5th Annual International Conference on Systems Documentation, SIGDOC ’86, pages 24–26, New York, NY, USA. ACM. Daniel Loureiro and Al´ıpio M´ario Jorge. 2019. Liaad at semdeep-5 challenge: Word-in-context (wic). In SemDeep-5@IJCAI 2019, page forthcoming. 5691 Fuli Luo, Tianyu Liu, Zexue He, Qiaolin Xia, Zhifang Sui, and Baobao Chang. 2018a. Leveraging gloss knowledge in neural word sense disambiguation by hierarchical co-attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1402–1411, Brussels, Belgium. Association for Computational Linguistics. Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, and Zhifang Sui. 2018b. Incorporating glosses into neural word sense disambiguation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2473–2482, Melbourne, Australia. Association for Computational Linguistics. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61, Berlin, Germany. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. 
In Proceedings of the 26th International Conference on Neural Information Processing Systems Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identification. In HUMAN LANGUAGE TECHNOLOGY: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2):10:1– 10:69. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017a. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99–110, Valencia, Spain. Association for Computational Linguistics. Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017b. Neural sequence learning models for word sense disambiguation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1156–1167, Copenhagen, Denmark. Association for Computational Linguistics. Philip Resnik. 1997. Selectional preference and sense disambiguation. In Tagging Text with Lexical Semantics: Why, What, and How? Sascha Rothe and Hinrich Sch¨utze. 2015. AutoExtend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1793–1803, Beijing, China. Association for Computational Linguistics. Lo¨ıc Vial, Benjamin Lecouteux, and Didier Schwab. 2018. Improving the coverage and the generalization ability of neural word sense disambiguation through hypernymy and hyponymy relationships. CoRR, abs/1811.00960. Dayu Yuan, Julian Richardson, Ryan Doherty, Colin Evans, and Eric Altendorf. 2016. Semi-supervised word sense disambiguation with neural models. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1374–1385, Osaka, Japan. The COLING 2016 Organizing Committee. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 System Demonstrations, pages 78–83, Uppsala, Sweden. Association for Computational Linguistics.
2019
569
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 602–607 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 602 A Corpus for Modeling User and Language Effects in Argumentation on Online Debating Esin Durmus Cornell University [email protected] Claire Cardie Cornell University [email protected] Abstract Existing argumentation datasets have succeeded in allowing researchers to develop computational methods for analyzing the content, structure and linguistic features of argumentative text. They have been much less successful in fostering studies of the effect of “user” traits — characteristics and beliefs of the participants — on the debate/argument outcome as this type of user information is generally not available. This paper presents a dataset of 78, 376 debates generated over a 10-year period along with surprisingly comprehensive participant profiles. We also complete an example study using the dataset to analyze the effect of selected user traits on the debate outcome in comparison to the linguistic features typically employed in studies of this kind. 1 Introduction Previous work from Natural Language Processing (NLP) and Computational Social Science (CSS) that studies argumentative text and its persuasive effects has mainly focused on identifying the content and structure of an argument (e.g. Feng and Hirst (2011)) and the linguistic features that are indicative of effective argumentation strategies (e.g. Tan et al. (2016)). The effectiveness of an argument, however, cannot be determined solely by its textual content; rather, it is important to consider characteristics of the reader, listener or participants in the debate or discussion. Does the reader already agree with the argument’s stance? Is she predisposed to changing her mind on the particular topic of the debate? Is the style of the argument appropriate for the individual? To date, existing argumentation datasets have permitted only limited assessment of such “user” traits because information on the background of users is generally unavailable. In this paper, we present a dataset of 78, 376 debates from October of 2007 until November of 2017 drawn from debate.org along with quite comprehensive user profile information — for debate participants as well as users voting on the debate quality and outcome. Background information on users includes demographics (e.g. education, income, religion) and stance on a variety of controversial debate topics as well as a record of user activity on the debate platform (e.g. debates won and lost). We view this new dataset as a resource that affords the NLP and CSS communities the opportunity to understand the effect of audience characteristics on the efficacy of different debating and persuasion strategies as well as to model changes in user’s opinions and activities on a debate platform over time. (To date, part of our debate.org dataset has been used in one such study to understand the effect of prior beliefs in persuasion1 (Durmus and Cardie, 2018). Here, we focus on the properties of the dataset itself and study a different task.) In the next section, we describe the dataset in the context of existing argumentation datasets. We then provide statistics on key aspects of the collected debates and user profiles (Section 3). 
Section 4 reports a study in which we investigate the predictive effect of selected user traits (namely, the debaters’ and audience’s experience, prior debate success, social interactions, and demographic information) vs. standard linguistic features. Experimental results show that features of the user traits are significantly more predictive of a debater’s success than the linguistic features that are shown to be predictive of debater success by the previous work (Zhang et al., 2016). This suggests that user traits are important to take into account in studying success in online debating. 1That study is distinct from those presented here. See Section 4 for details. 603 The dataset will be made publicly available2. 2 Related Work and Datasets There has been a tremendous amount of research effort to understand the important linguistic features for identifying argument structure and determining effective argumentation strategies in monologic text (Mochales and Moens, 2011; Feng and Hirst, 2011; Stab and Gurevych, 2014; Guerini et al., 2015). For example, Habernal and Gurevych (2016) has experimented with different machine learning models to predict which of two arguments is more convincing. To understand what kind of persuasive strategies are effective, Hidey et al. (2017) has further annotated different modes of persuasion (ethos, logos, pathos) and looked at which combinations appear most often in more persuasive arguments. Understanding argumentation strategies in conversations and the effect of interplay between the language of the participants has also been an important avenue of research. Tan et al. (2016), for example, has examined the effectiveness of arguments on ChangeMyView3, a debate forum website in which people invite others to challenge their opinions. They found that the interplay between the language of the opinion holder and that of the counterargument provides highly predictive cues of persuasiveness. Zhang et al. (2016) has examined the effect of conversational style in Oxford-style debates and found that the side that can best adapt in response to opponents’ discussion points over the course of the debate is more likely to be more persuasive. Although research on computational argumentation has mainly focused on identifying important linguistic features of the text, there is also evidence that it is important to model the debaters themselves and the people who are judging the quality of the arguments: multiple studies show that people perceive arguments from different perspectives depending on their backgrounds and experiences (Correll et al., 2004; Hullett, 2005; Petty et al., 1981; Lord et al., 1979; Vallone et al., 1985; Chambliss and Garner, 1996). As a result, we introduce data from a social media debate site that also includes substantial information about its users and their activity and interaction on the website. This is in contrast 2Link to the dataset: http://www.cs.cornell.edu/ esindurmus/. 3https://www.reddit.com/r/changemyview/. to the datasets commonly employed in studies of argument strategies (Johnson and Goldman, 2009; Walker et al., 2012; Zhang et al., 2016; Wang et al., 2017; Cano-Basave and He, 2016; Al Khatib et al., 2016). Lukin et al. (2017) is the closest work to ours as it studies the effect of OCEAN personality traits (Roccas et al., 2002; T. Norman, 1963) of the audience on how they perceive the persuasiveness of monologic arguments. 
Note that, in our dataset, we do not have information about users’ personality traits; however, we have extensive information about their demographics, social interactions, beliefs and language use. 3 Dataset4 Debates. The dataset includes 78, 376 debates from 23 different topic categories including Politics, Religion, Technology, Movies, Music, PlacesTravel. Each debate consists of different rounds in which opposing sides provide their arguments. An example debate along with the user information for PRO and CON debaters and corresponding comments and votes are shown in Figure 1. The majority of debates have three or more rounds; Politics, Religion, and Society are the most common debate categories. Each debate includes comments as well as the votes provided by other users in the community. We collected all the comments and votes for each debate with 606,102 comments and 199,210 votes in total. Voters evaluate each debater along diverse set of criteria such as convincingness, conduct during the debate, reliability of resources cited, spelling and grammar. With this fine-grained evaluation scheme, we can study the quality of arguments from different perspectives. User Information. The dataset also includes self-identified information for 45, 348 users participating in the debates or voting for the debates: demographic information such as age, gender, education, ethnicity; prior belief and personal information such as political, religious ideology, income, occupation and the user’s stance on a set of 48 controversial topics chosen by the website. The controversial debate topics5 include ABORTION, DEATH PENALTY, GAY MARRIAGE, and AFFIRMATIVE ACTION. Information about user’s activity is also provided and includes their debates, votes, comments, opinion questions they ask, poll 4Data is crawled in accordance to the terms and conditions of the website. 5Full list of topics: https://www.debate.org/big-issues/. 604 Figure 1: Example debate along with the user profile information for PRO and CON debaters and the corresponding comments and votes. The full information for this debate can be found at https://www.debate.org/debates/Lateterm-abortion-is-morally-correct-in-every-situation/1/. votes they participated in, overall success in winning debates as well as their social network information. 4 Task: What makes a debater successful? To understand the effect of user characteristics vs. language features, and staying consistent with majority of previous work, we conduct the task of predicting the winner of a debate by looking at accumulated scores from the voters. We model this as a binary classification task and experiment with a logistic regression model, optimizing the regularizer (ℓ1 or ℓ2) and the regularization parameter C (between 10−5 and 105) with 3-fold cross validation. 4.1 Data preprocessing Controlling for the debate text. We eliminate debates where a debater forfeits before the debate ends. From the remaining debates, we keep only the ones with three or more rounds with at least 20 sentences by each debater in each round to be able to study the important linguistic features 6. Determining the winner. For this particular dataset, the winning debater is determined by the votes of other users on different aspects of the arguments as outlined in Section 3, and the debaters are scored accordingly7. We determine the winner by the total number of points the debaters get from 6After all the eliminations, we have 1635 debates in our dataset. 
7Having better conduct: 1 point, having better spelling and grammar: 1 point, making more convincing arguments: 3 points, using the most reliable sources: 2 points. the voters. We consider the debates with at least 5 voters and remove the debates resulting in a tie. 4.2 Features Experience and Success Prior. We define the experience of a user during a debate dt at time t as the total number of debates participated as a debater by the user before time t. The success prior is defined as the ratio of the number of debates the user won before time t to the total number of debates before time t. Similarity with audience’s user profile. We encode the similarity of each of the debaters and the voters by comparing each debaters’ opinions on controversial topics, religious ideology, genders, political ideology, ethnicity and education level to same of the audience. We include the features that encode the similarity by counting number of voters having the same values as each of the debaters for each of these characteristics. We also include features that corresponds to cosine distance between the vectors of each debater and each voter where the user vector is one-hot representation for each user characteristic. Social Network. We extract features that represent the debaters’ social interactions before a particular debate by creating the network for their commenting and voting activity before that debate. We then computed the degree, centrality, hub and authority scores from these graphs and include them as features in our model. Linguistic features of the debate. We perform ablation analysis with various linguistic features shown to be effective in determining 605 Accuracy Majority baseline 57.23 User features Debate experience 63.54 Success prior 65.78 Overall similarity with audience 62.52 Social network features 62.93 All user features 68.43 Linguistic features Length 58.45 Flow features 58.66 All linguistic features 60.28 User+Linguistic Features 71.35 Table 1: Ablation tests for the features. persuasive arguments including argument lexicon features (Somasundaran et al., 2007), politeness marks (Danescu-Niculescu-Mizil et al., 2013), sentiment, connotation (Feng and Hirst, 2011), subjectivity (Wilson et al., 2005), modal verbs, evidence (marks of showing evidence including words and phrases like “evidence” ,“show”, “according to”, links, and numbers), hedge words (Tan and Lee, 2016), positive words, negative words, swear words, personal pronouns, typetoken ratio, tf-idf, and punctuation. To get a text representation for the debate, we concatenated all the turns of each of the participants, extracted features for each and finally concatenated the feature representation of each participant’s text. We also experimented with conversational flow features shown to be effective in determining the successful debaters by (Zhang et al., 2016) to track how ideas flow between debaters throughout a debate. Consistent with (Zhang et al., 2016), to extract these features, we determine the talking points that are most discriminating words for each side from the first round of the debate applying the method introduced by (Monroe et al.) which estimates the divergence between the two sides word-usage. 4.3 Results and Analysis Table 1 shows the results for the user and linguistic features. 
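The classification setup behind these numbers can be sketched as follows (a hedged reconstruction, not the released pipeline): a logistic regression tuned by 3-fold grid search over the regularizer (ℓ1 or ℓ2) and C between 10^-5 and 10^5, fed with concatenated user-trait features for the two debaters. The feature extractor, its field names, and the random stand-in data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def user_features(debater, voters):
    """Illustrative user-trait features for one side of a debate; the dictionary
    fields are hypothetical names, not the dataset's actual schema."""
    experience = debater["n_prior_debates"]
    success_prior = debater["n_prior_wins"] / max(debater["n_prior_debates"], 1)
    shared_ideology = sum(v["ideology"] == debater["ideology"] for v in voters)
    return [experience, success_prior, shared_ideology]

# Random stand-ins for the 1,635 preprocessed debates (PRO and CON features
# concatenated); y = 1 when the PRO side collects more voter points.
rng = np.random.default_rng(3)
X = rng.normal(size=(1635, 6))
y = rng.integers(0, 2, size=1635)

param_grid = {"penalty": ["l1", "l2"], "C": np.logspace(-5, 5, 11)}
clf = GridSearchCV(LogisticRegression(solver="liblinear", max_iter=1000),
                   param_grid, cv=3, scoring="accuracy")
clf.fit(X, y)
print(clf.best_params_, round(clf.best_score_, 3))
```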
We find that combination of the debater experience, debater success prior, audience similarity features and debaters’ social network features performs significantly better8 than the major8We measure the significance performing t-test. ity baseline and linguistic features achieving the best accuracy (68.43%). We observe that experience and social interactions are positively correlated with success. It suggests that as debaters spend more time on the platform, they probably learn strategies and adjust to the norms of the platform and this helps them to be more successful. We also find that success prior is positively correlated with success in a particular debate. In general, the debaters who win the majority of the debates when first join the platform, tend to be successful in debating through their lifetime. This may imply that some users may already are good at debating or develop strategies to win the debates when they first join to the platform. Moreover, we find that similarity with audience is positively correlated with success which shows that accounting for the characteristics of the audience is important in persuasion studies (Lukin et al., 2017). Although the linguistic features perform better than the majority baseline, they are not able to achieve as high performance as the features encoding debater and audience characteristics. This suggest that success in online debating may be more related to the users’ characteristics and social interactions than the linguistic characteristics of the debates. We find that use of argument lexicon features and subjectivity are the most important features and positively correlated with success whereas conversational flow features do not perform significantly better than length. This may be because debates in social media are much more informal compare to Oxford style debates and therefore, in the first round, the debaters may not necessarily present an overview of their arguments (talking points) they make through the debate. We observe that (44%) of the mistakes made by the model with user features are classified correctly by the linguistic model. This motivated us to combine the user features with linguistic features which gives the best overall performance (71.35%). This suggests that user aspects and linguistic characteristics are both important components to consider in persuasion studies. We believe that these aspects complement each other and it is crucial to account for them to understand the actual effect of each of these components. For future work, it may be interesting to understand the role of these components in persuasion further and think about the best ways to combine the information from these two components to better represent 606 a user. Acknowledgments This work was supported in part by NSF grants IIS-1815455 and SES-1741441. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government. References Khalid Al Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016. A news editorial corpus for mining argumentation strategies. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3433–3443. The COLING 2016 Organizing Committee. Amparo Elizabeth Cano-Basave and Yulan He. 2016. A study of the impact of persuasive argumentation in political debates. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1405–1413. Association for Computational Linguistics. Marilyn J. Chambliss and Ruth Garner. 1996. Do adults change their minds after reading persuasive text? Written Communication, 13(3):291–313. Joshua Correll, Steven J Spencer, and Mark P Zanna. 2004. An affirmed self and an open mind: Self-affirmation and sensitivity to argument strength. Journal of Experimental Social Psychology, 40(3):350–356. Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 250–259. Association for Computational Linguistics. Esin Durmus and Claire Cardie. 2018. Exploring the role of prior beliefs for argument persuasion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1035–1045, New Orleans, Louisiana. Association for Computational Linguistics. Vanessa Wei Feng and Graeme Hirst. 2011. Classifying arguments by scheme. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 987–996. Association for Computational Linguistics. Marco Guerini, Gozde Ozbal, and Carlo Strapparava. 2015. Echoes of persuasion: The effect of euphony in persuasive communication. Ivan Habernal and Iryna Gurevych. 2016. What makes a convincing argument? empirical analysis and detecting attributes of convincingness in web argumentation. In EMNLP, pages 1214–1223. Christopher Hidey, Elena Musi, Alyssa Hwang, Smaranda Muresan, and Kathy McKeown. 2017. Analyzing the semantic types of claims and premises in an online persuasive forum. In Proceedings of the 4th Workshop on Argument Mining, pages 11–21. Association for Computational Linguistics. Craig R Hullett. 2005. The impact of mood on persuasion: A meta-analysis. Communication Research, 32(4):423–442. Timothy R. Johnson and Jerry Goldman. 2009. A good quarrel: America’s top legal reporters share stories from inside the supreme court. University of Michigan Press. Charles G Lord, Lee Ross, and Mark R Lepper. 1979. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of personality and social psychology, 37(11):2098. Stephanie M Lukin, Pranav Anand, Marilyn Walker, and Steve Whittaker. 2017. Argument strength is in the eye of the beholder : Audience effects in persuasion. arXiv preprint arXiv:1708.09085. Raquel Mochales and Marie-Francine Moens. 2011. Argumentation mining. 19:1–22. Burt L. Monroe, Michael P. Colaresi, and Kevin M. Quinn. Fightin’ words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis, 16(4):372403. Richard E Petty, John T Cacioppo, and Rachel Goldman. 1981. Personal involvement as a determinant of argument-based persuasion. Journal of personality and social psychology, 41(5):847. Sonia Roccas, Lilach Sagiv, Shalom H. Schwartz, and Ariel Knafo. 2002. The big five personality factors and personal values. Personality and Social Psychology Bulletin, 28(6):789–801. Swapna Somasundaran, Josef Ruppenhofer, and Janyce Wiebe. 2007. 
Detecting arguing and sentiment in meetings. In Proceedings of the SIGdial Workshop on Discourse and Dialogue, volume 6. Christian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive essays. In EMNLP. 607 Warren T. Norman. 1963. Toward an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. Journal of abnormal and social psychology, 66:574– 83. Chenhao Tan and Lillian Lee. 2016. Talk it up or play it down? (un)expected correlations between (de-)emphasis and recurrence of discussion points in consequential u.s. economic policy meetings. Presented in Text as Data. Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. CoRR, abs/1602.01103. Robert P Vallone, Lee Ross, and Mark R Lepper. 1985. The hostile media phenomenon: biased perception and perceptions of media bias in coverage of the beirut massacre. Journal of personality and social psychology, 49(3):577. Marilyn Walker, Jean Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012. A corpus for research on deliberation and debate. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 812–817, Istanbul, Turkey. European Language Resources Association (ELRA). ACL Anthology Identifier: L12-1643. Lu Wang, Nick Beauchamp, Sarah Shugars, and Kechen Qin. 2017. Winning on the merits: The joint effects of content and style on debate outcomes. TACL, 5:219–232. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 347–354. Association for Computational Linguistics. Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. 2016. Conversational flow in oxford-style debates. arXiv preprint arXiv:1604.03114.
2019
57
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5692–5705 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5692 Word2Sense : Sparse Interpretable Word Embeddings Abhishek Panigrahi Microsoft Research India [email protected] Harsha Vardhan Simhadri Microsoft Research India [email protected] Chiranjib Bhattacharyya Microsoft Research India, and Indian Institute of Science [email protected] Abstract We present an unsupervised method to generate Word2Sense word embeddings that are interpretable — each dimension of the embedding space corresponds to a fine-grained sense, and the non-negative value of the embedding along the j-th dimension represents the relevance of the j-th sense to the word. The underlying LDA-based generative model can be extended to refine the representation of a polysemous word in a short context, allowing us to use the embeddings in contextual tasks. On computational NLP tasks, Word2Sense embeddings compare well with other word embeddings generated by unsupervised methods. Across tasks such as word similarity, entailment, sense induction, and contextual interpretation, Word2Sense is competitive with the state-of-the-art method for that task. Word2Sense embeddings are at least as sparse and fast to compute as prior art. 1 Introduction Several unsupervised methods such as SkipGram (Mikolov et al., 2013) and Glove (Pennington et al., 2014) have demonstrated that co-occurrence data from large corpora can be used to compute low-dimensional representations of words (a.k.a. embeddings) that are useful in computational NLP tasks. While not as accurate as semi-supervised methods such as BERT (Devlin et al., 2018) and ELMO (Peters et al., 2018) that are trained on various downstream tasks, they do not require massive amounts of compute unaccessible to all but few. Nearly all such methods produce dense representations for words whose coordinates in themselves have no meaningful interpretation. The numerical values of a word’s embedding are meaningful only in relation to representations of other words. A unitary rotation can be applied to many of these embeddings retaining their utility for computational tasks, and yet completely changing the values of individual coordinates. Can we design an interpretable embedding whose coordinates have a clear meaning to humans? Ideally such an embedding would capture the multiple senses of a word, while being effective at computational tasks that use inter-word spacing of embeddings. Loosely, a sense is a set of semantically similar words that collectively evoke a bigger picture than individual words in the reader’s mind. In this work, we mathematically define a sense to be a probability distribution over the vocabulary, just as topics in topic models. A human can relate to a sense through the words with maximum probability in the sense’s probability distribution. Table 1 presents the top 10 words for a few senses. We describe precisely such an embedding of words in a space where each dimension corresponds to a sense. Words are represented as probability distributions over senses so that the magnitude of each coordinate represents the relative importance of the corresponding sense to the word. Such embeddings would naturally capture the polysemous nature of words. For instance, the embedding for a word such as cell with many senses – e.g. 
“biological entity”, “mobile phones”, “excel sheet”, “blocks”, “prison” and “battery” (see Table 1) – will have support over all such senses. To recover senses from a corpus and to represent word embeddings as (sparse) probability distributions over senses, we propose a generative model (Figure 1) for the co-occurrence matrix: (1) associate with each word w a sense distribution θw with Dirichlet prior; (2) form a context around a target word w by sampling senses z according to θw, and sample words from the distribution of sense z. This allows us to use fast inference tools such as WarpLDA (Chen et al., 2016) to recover few thousand fine-grained senses from large cor5693 Word Rank Top 10 words with the highest probability in the sense’s distribution Tie 1 hitch, tying, magnet, tied, knots, tie, loops, rope, knot, loop 2 shirts, wore, shoes, jacket, trousers, worn, shirt, dress, wearing, wear 3 against, scored, round, 2-1, champions, match, finals, final, win, cup Bat 1 species, myotis, roosts, pipistrelle, reservoir, roost, dam, horseshoe, bats, bat 2 smuggling, smoked, cigars, smokers, cigar, smoke, smoking, cigarette, cigarettes, tobacco 3 bowled, bowler, first-class, bowling, batsman, wicket, overs, innings, cricket, wickets Apple 1 player, micro, zen, portable, shuffle, mini, nano, mp3, apple, ipod 2 graphics, g4, pc, hardware, pci, macintosh, intel, os, apple, mac 3 vegetables, lemon, grapes, citrus, orange, apple, apples, fruits, juice, fruit Star 1 vulcan, archer, picard, enterprise, voyager, starship, spock, kirk, star trek 2 obi-wan, luke, anakin, skywalker, sith, vader, darth, star, jedi, wars 3 cluster, nebula, dwarf, magnitude, ngc, constellation, star, stars, galaxies, galaxy 4 inn, guest, star, b&b, rooms, bed, accommodation, breakfast, hotels, hotel Cell 1 plasma, cellular, membranes, molecular, cells, molecules, cell, protein, membrane, proteins 2 kinase, immune, gene, activation, proteins, receptors, protein, receptor, cell, cells 3 transfusion, renal, liver, donor, transplantation, bone, kidney, marrow, transplant, blood 8 top, squares, stack, bottom, the, table, columns, row, column, rows 10 inmate, correctional, workhouse, jail, prisoner, hmp, inmates, prisons, prisoners, prison 12 aaa, powered, nimh, mains, lithium, aa, rechargeable, charger, batteries, battery 15 handset, bluetooth, ericsson, ringtones, samsung, mobile, phones, phone, motorola, nokia Table 1: Top senses of polysemous words as identified Word2Sense embeddings. Each row lists the rank of the sense in terms of its weight in the word’s embedding, and the top 10 words in the senses’ probability distribution. pora and construct the embeddings. Word2Sense embeddings are extremely sparse despite residing in a higher dimensional space (few thousand), and the number of nonzeros in the embeddings is no more than 100. In comparison, Word2vec performs best on most tasks when computed in 500 dimensions. These sparse single prototype embeddings effectively capture the senses a word can take in the corpus, and can outperform probabilistic embeddings (Athiwaratkun and Wilson, 2017) at tasks such as word entailment, and compete with Word2vec embeddings and multi-prototype embeddings (Neelakantan et al., 2015) in similarity and relatedness tasks. Unlike prior work such as Word2vec and GloVe, our generative model has a natural extension for disambiguating the senses of a polysemous word in a short context. 
This allows the refinement of the embedding of a polysemous word to a WordCtx2Sense embedding that better reflects the senses of the word relevant in the context. This is useful for tasks such as Stanford contextual word similarity (Huang et al., 2012) and word sense induction (Manandhar et al., 2010). Our methodology does not suffer from computational constraints unlike Word2GM (Athiwaratkun and Wilson, 2017) and MSSG (Neelakantan et al., 2015) which are constrained to learning 2-3 senses for a word. The key idea that gives us this advantage is that rather than constructing a per-word representation of senses, we construct a global pool of senses from which the senses a word takes in the corpus are inferred. Our methodology takes just 5 hours on one multicore processor to recover senses and embeddings from a concatenation of UKWAC (2.5B tokens) and Wackypedia (1B tokens) co-occurrence matrices (Baroni et al., 2009) with a vocabulary of 255434 words that occur at least 100 times. Our major contributions include: • A single prototype word embedding that encodes information about the senses a word takes in the training corpus in a human interpretable way. This embedding outperforms Word2vec in rare word similarity task and word relatedness task and is within 2% in other similarity and relatedness tasks; and outperforms Word2GM on the entailment task of (Baroni et al., 2012). • A generative model that allows for disambiguating the sense of a polysemous word in a short context that outperforms the state-of-the-art unsupervised methods on Word Sense Induction for Semeval-2010 (Manandhar et al., 2010) and MakeSense-2016 (Mu et al., 2017) datasets and is within 1% of the best models for the contextual word similarity task of (Huang et al., 2012). 5694 2 Related Work Several unsupervised methods generate dense single prototype word embeddings. These include Word2vec (Mikolov et al., 2013), which learns embeddings that maximize the cosine similarity of embeddings of co-occurring words, and Glove (Pennington et al., 2014) and Swivel (Shazeer et al., 2016) that learn embeddings by factorizing the word co-occurrence matrix. (Dhillon et al., 2015; Stratos et al., 2015) use canonical correlation analysis (CCA) to learn word embeddings that maximize correlation with context. (Levy and Goldberg, 2014; Levy et al., 2015) showed that SVD based methods can compete with neural embeddings. (Lebret and Collobert, 2013) use Hellinger PCA, and claim that Hellinger distance is a better metric than Euclidean distance in discrete probability space. Multiple works have considered converting the existing embeddings to interpretable ones. Murphy et al. (2012) use non-negative matrix factorization of the word-word co-occurrence matrix to derive interpretable word embeddings. (Sun et al., 2016; Han et al., 2012) change the loss function in Glove to incorporate sparsity and non negativity respectively to capture interpretability. (Faruqui et al., 2015) propose Sparse Overcomplete Word Vectors (SPOWV ), by solving an optimization problem in dictionary learning setting to produce sparse non-negative high dimensional projection of word embeddings. (Subramanian et al., 2018) use a k-sparse denoising autoencoder to produce sparse non-negative high dimensional projection of word embeddings, which they called SParse Interpretable Neural Embeddings (SPINE). However, all these methods lack a natural extension for disambiguating the sense of a word in a context. 
In a different line of work, Vilnis and McCallum (2015) proposed representing words as Gaussian distributions to embed uncertainty in dimensions of the embedding to better capture concepts like entailment. However, Athiwaratkun and Wilson (2017) argued that such a single prototype model can’t capture multiple distinct meanings and proposed Word2GM to learn multiple Gaussian embeddings per word. The prototypes were generalized to ellipical distributions in (Muzellec and Cuturi, 2018). A major limitation with such an approach is the restriction on the number of prototypes per word that can be learned, which is limited to 2 or 3 due to computational constraints. Many words such as ‘Cell’ can have more than 5 senses. Another open issue is that of disambiguating senses of a polysemous word in a context – there is no obvious way to embed phrases and sentences with such embeddings. Multiple works have proposed multi-prototype embeddings to capture the senses of a polysemous word. For example, Neelakantan et al. (2015) extends the skipgram model to learn multiple embeddings of a word, where the number of senses of a word is either fixed or is learned through a non-parametric approach. Huang et al. (2012) learns multi-prototype embeddings by clustering the context window features of a word. However, these methods can’t capture concepts like entailment. Tian et al. (2014) learns a probabilistic version of skipgram for learning multi-sense embeddings and hence, can capture entailment. However, all these models suffer from computational constraints and either restrict the number of prototypes learned for each word to 2-3 or restrict the words for which multiple prototypes are learned to the top k frequent words in the vocabulary. Prior attempts at representing polysemy include (Pantel and Lin, 2002), who generate global senses by figuring out the best representative words for each sense from co-occurrence graph, and (Reisinger and Mooney, 2010), who generate senses for each word by clustering the context vectors of the occurrences of the word. Further attempts include Arora et al. (2018), who express single prototype dense embeddings, such as Word2vec and Glove, as linear combinations of sense vectors. However, their underlying linearity assumption breaks down in real data, as shown by Mu et al. (2017). Further, the linear coefficients can be negative and have values far greater than 1 in magnitude, making them difficult to interpret. Neelakantan et al. (2015) and Huang et al. (2012) represent a context by the average of the embeddings of the words to disambiguate the sense of a target word present in the context. On the other hand, Mu et al. (2017) suggest representing sentences as a hyperspace, rather than a single vector, and represent words by the intersection of the hyperspaces representing the sentences it occurs in. A number of works use na¨ıve Bayesian method (Charniak et al., 2013) and topic models (Brody and Lapata, 2009; Yao and Van Durme, 2011; Pedersen, 2000; Lau et al., 2012, 2013, 2014) to learn senses from local contexts, treating each in5695 Figure 1: Generative model for co-occurrence matrix. Dirichlet prior γ is used in WarpLDA. stance of a word within a context as a pseudodocument, and achieve state of the art results in WSI task (Manandhar et al., 2010). Since this approach requires training a single topic model per target word, it does not scale to all the words in the vocabulary. 
In a different line of work, (Tang et al., 2014; Guo and Diab, 2011; Wang et al., 2015; Tang et al., 2015; Xun et al., 2017) transform topic models to learn local context level information through sense latent variable, in addition to the document level information through topic latent variable, for producing more fine grained topics from the corpus. 3 Notation Let V = {w1, w2, ..w|V |} denote the set of unique tokens in corpus (vocabulary). Let C denote the word-word co-occurrence matrix constructed from the corpus, i.e., Cij is the number of times wj has occurred in the context of wi. We define a context around a token w as the set of n words to the left and n words to the right of w. We denote the size of context window by n. Typically n = 5. Our algorithm uses LDA to infer a sense model β – essentially a set of k probability distributions over V – from the corpus. It then uses the sense model to encode a word w as a k′-dimensional µ-sparse vector θw. Here, we use α and γ, respectively, to denote the Dirichlet priors of θw, the sense distribution of a word w, and βz, context word distribution in a sense z. JS is a k × k matrix that measures the similarity between senses. We denote the zth row of a matrix M by Mz. 4 Recovering senses To recover senses, we suppose the following generative model for generating words in a context of size n (see Figure 1). 1. For each word w ∈V , generate a distribution over senses θw from the Dirichlet distribution with prior α. 2. For each context cw around target word w, and for each of the 2n tokens ∈cw, do (a) Sample sense z ∼Multinomial(θw). (b) Sample token c ∼Multinomial(βzn). Such a generative model will generate a cooccurrence matrix C that can also be generated by another model. C is a matrix whose columns Cw are interpreted as a document formed from the count of all the tokens that have occurred in a context centered at w. Given a Dirichlet prior of parameter α on sense distribution of Cw and β, the distribution over context words for each sense, document Cw (and thus the co-occurrence matrix C) is generated as follows: 1. Generate θw ∼Dirichlet(α). 2. Repeat N times to generate Cw: (a) Sample sense z ∼Multinomial(θw). (b) Sample token c ∼Multinomial(βz). Based on this generative model, given the cooccurrence matrix C, we infer the matrix β and the maximum aposteriori estimate θw for each word using a fast variational inference tool such as WarpLDA (Chen et al., 2016). 5 Word2Sense embeddings Word2Sense embeddings are probability distributions over senses. We discuss how to use the senses recovered by inference on the generative model in section 4 to construct word embeddings. We demonstrate that the embeddings so computed are competitive with various multi-modal embeddings in semantic similarity and entailment tasks. 5.1 Computing Word2Sense embeddings Denote the probability of occurrence of a word in the corpus by p(w). We approximate the probability of the word p(w) by its empirical estimate ∥Cw∥1/ P w′∈V ∥Cw′∥1. We define the global probability pZ(z) of a sense z as the probability that a randomly picked token in the corpus has that sense in it’s context window. We approximate the global distribution of generated senses using the following formulation. pZ(z) = X w∈V θw[z]p(w) ∀z ∈{1..k}. 5696 Then, for each word w ∈V , we compute pc(w), its sense distribution (when acting as a context word) as follows: pc(z|w) = p(w|z)pZ(z) p(w) = βw,zpZ(z) p(w) . Eliminating redundant senses. 
LDA returns a number of topics that are very similar to each other. Examples of such topics are given in Table 11 in appendix. These topics need to be merged, since inferring two similar words against such senses can cause them to be (predominantly) assigned to two different topic ids, causing them to look more dissimilar than they actually are. In order to eliminate redundant senses, we use the similarity of topics according to the Jensen Shannon (JS) divergence. We construct the topic similarity matrix JS ∈Rk×k, whose [i, j]−th entry JS[i, j] is the JS divergence between senses βi and βj. Recall that JS divergence JSdiv(p, q) between two multinomial distributions p, q ∈Rk is given by k X i=1 −pi log 2pi pi + qi −qi log 2qi pi + qi . (1) We run agglomerative clustering on the JS matrix to merge similar topics. We use the following distance metric to merge two clusters Di and Dj: d(Di, Dj) = 1 |Di||Dj| X a∈Di, b∈Dj JS[a, b]0.5 Let Di=1..k′ denote the final set of k′ clusters obtained after clustering. We approximate the occurrence probability of the merged cluster of senses Di by pD(Di) = P a∈Di pZ(a). Table 11 in appendix shows some clusters formed after clustering. Using the merged senses, we compute the embedding vw of word w — a distribution over senses indexed by z ∈{1..k} — as follows: ˆvw[z] = pc(z|w) + θw[z] v′ w = Truncateµ(Project(ˆvw) ⊘pD(.)) vw = v′ w/||v′ w||1. (2) Project is the function that maps v ∈Rk to v′ ∈Rk′ by merging the coordinates corresponding to the merged senses: v′[i] = P a∈Di v[a]. Truncateµ sparsifies the input by truncating it to the µ highest non-zeros in the vector. 5.2 Evaluation We compare Word2Sense embeddings with the state-of-the-art on word similarity and entailment tasks as well as on benchmark downstream tasks. 5.2.1 Hyperparameters We train Word2vec Skip-Gram embeddings with 10 passes over the data, using separate embeddings for the input and output contexts, 5 negative samples per positive example, window size n = 2 and the same sub-sampling and dynamic window procedure as in (Mikolov et al., 2013). For Word2GM, we make 5 passes over the data (due to very long training time of the published code 1), using 2 modes per word, 1 negative sample per positive example, spherical covariance model, window size n = 10 and the same sub-sampling and dynamic window procedure as in (Athiwaratkun and Wilson, 2017). Since there is no recommended dimension in these papers, we report the numbers for the best performing embedding size. We report the performance of Word2vec and Word2GM at dimension 500 and 400 respectively2. We report the performance of SPOWV and SPINE in benchmark downstream tasks, that use Word2vec as base embeddings, using the recommended settings as given in (Faruqui et al., 2015) and (Subramanian et al., 2018) respectively3. For Multi-Sense Skip-Gram model (MSSG) (Neelakantan et al., 2015), we use pre-trained word and sense representations 4. We found k = 3000, α = 0.1 and γ = 0.001 to be good hyperparamters for WarpLDA to recover fine-grained senses from the corpus. A choice of k′ ≈ 3 4k that merges k/4 senses improved results. We use a context window size n = 5 and truncation parameter µ = 75. We think µ = 75 works best because we found the average sparsity of pc(.|w) to be around 100. Since we decrease the number of senses by 1/4th after post-processing, the average sparsity reduces to close to 75. If a word is not present in the vocabulary, we take an embedding on the unit simplex, that contains equal values in all the dimensions. 
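Putting §5.1 together, the sketch below is an illustrative reimplementation (not the authors' code) of the post-processing pipeline: it computes p_Z and p_c(·|w) from the inferred θ and β, merges redundant senses by average-linkage agglomerative clustering on pairwise Jensen-Shannon distances, and applies the projection, division by p_D, truncation to µ non-zeros, and renormalization of Equation 2. Note that scipy's jensenshannon returns the square root of the divergence, which, up to a constant factor, is the quantity averaged in the clustering metric.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import AgglomerativeClustering

def word2sense(theta, beta, counts, k_prime, mu=75):
    """theta: |V| x k MAP sense distributions per word; beta: k x |V| sense-to-word
    distributions; counts: ||C_w||_1 per word. Returns |V| x k' sparse embeddings."""
    p_w = counts / counts.sum()                       # empirical p(w)
    p_z = theta.T @ p_w                               # global sense probabilities p_Z(z)
    p_c = (beta.T * p_z[None, :]) / p_w[:, None]      # p_c(z|w) = beta[z, w] p_Z(z) / p(w)
    v_hat = p_c + theta                               # \hat v_w

    # Pairwise sense distances; for k = 3000 senses this loop is the slow step.
    k = beta.shape[0]
    D = np.array([[jensenshannon(beta[i], beta[j], base=2) for j in range(k)]
                  for i in range(k)])
    # sklearn >= 1.2; older releases name the `metric` argument `affinity`.
    labels = AgglomerativeClustering(n_clusters=k_prime, metric="precomputed",
                                     linkage="average").fit_predict(D)

    # Project onto merged senses, divide by p_D, truncate to mu non-zeros, renormalize.
    proj = np.zeros((theta.shape[0], k_prime))
    p_d = np.zeros(k_prime)
    for z in range(k):
        proj[:, labels[z]] += v_hat[:, z]
        p_d[labels[z]] += p_z[z]
    v = proj / p_d[None, :]
    if mu < k_prime:
        cut = np.partition(v, -mu, axis=1)[:, -mu][:, None]
        v = np.where(v >= cut, v, 0.0)
    return v / v.sum(axis=1, keepdims=True)

# Tiny random instance (30 "words", 12 senses merged to 9, mu = 5).
rng = np.random.default_rng(4)
theta = rng.dirichlet(np.full(12, 0.1), size=30)
beta = rng.dirichlet(np.full(30, 0.01), size=12)
counts = rng.integers(100, 1000, size=30).astype(float)
emb = word2sense(theta, beta, counts, k_prime=9, mu=5)
print(emb.shape, np.count_nonzero(emb, axis=1)[:5])
```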
5.2.2 Word Similarity We evaluate our embeddings at scoring the similarity or relatedness of pairs of words on several 1https://github.com/benathi/word2gm 2We tried 100, 200, 300, 400, 500 dimensions for Word2vec, and 50, 100, 200, 400 dims for Word2GM 3The two models don’t perform better than Word2vec in similarity tasks and don’t show performance in entailment. 4bitbucket.org/jeevan shankar/multi-sense-skipgram 5697 Dataset Word Word Word2GM MSSG 300-dim 2Sense 2Vec 30K 6K WS353-S 0.747 0.769 0.756(0.767) 0.753 0.761 WS353-R 0.708 0.703 0.609(0.717) 0.598 0.607 WS353 0.723 0.732 0.669(0.734) 0.685 0.694 Simlex-999 0.388 0.393 0.399(0.293) 0.350 0.351 MT-771 0.685 0.688 0.686(0.608) 0.646 0.645 MEN 0.772 0.780 0.740(0.736) 0.665 0.675 RG 0.790 0.824 0.755(0.745) 0.719 0.714 MC 0.806 0.827 0.819(0.791) 0.684 0.763 RW 0.374 0.365 0.339(0.286a) 0.15 0.15 Table 2: Comparison of word embeddings on word similarity evaluation datasets. For MSSG learned for top 30K and 6k words, we report the similarity of the global vectors of word, which we find to be better than comparing all the local vectors of words. For Word2GM, we report numbers from our tuning as well as from the paper (in paranthesis). Note that we report higher numbers in all cases, except on WS353-S and WS353-R datasets. We attribute this to fewer passes over the data and possibly different pre-processing. a 0.353 with a different metric. Method Best AP Best F1 (Baroni et al., 2012) 0.751 Word2GM(10)-Cos 0.729 0.757 Word2GM(10)-KL 0.747 0.763 Word2Sense 0.751 0.761 Word2Sense -full 0.791 0.798 Table 3: Comparison of embeddings on word entailment. The number reported for (Baroni et al., 2012) has been taken from original paper and uses the balAPinc metric. For Word2GM, we were able to reproduce results in the original paper; we report results using both Cosine and KL divergence metrics. For Word2Sense , we use KL divergence. datasets annotated with human scores: Simlex999 (Hill et al., 2015), WS353-S and WS353R (Finkelstein et al., 2002), MC (Miller and Charles, 1991), RG (Rubenstein and Goodenough, 1965), MEN (Bruni et al., 2014), RW (Luong et al., 2013) and MT-771 (Radinsky et al., 2011; Halawi et al., 2012). We predict similarity/relatedness score of a pair of words {w1, w2} by computing the JS divergence (see Equation 1) between the embeddings {vw1, vw2} as computed in Equation 2. For other embeddings, we use cosine similarity metric to measure similarity between embeddings. The final prediction effectiveness of an embedding is given by computing Spearman correlation between the predicted scores and the human annotated scores. Table 2 compares our embeddings to multimodal Gaussian mixture (Word2GM) model (Athiwaratkun and Wilson, 2017) and Word2vec (Mikolov et al., 2013). We extensively tune hyperparameters of prior work, often achieving better results than previously reported. We concluded from this exercise that SkipGram (Word2vec) is the best among all the unsupervised embeddings at similarity and relatedness tasks. We see that while being interpretable and sparser than the 500-dimensional Word2vec, Word2Sense embeddings is competitive with Word2vec on all the datasets. 5.2.3 Word entailment Given two words w1 and w2, w2 entails w1 (denoted by w1 |= w2) if all instances of w1 are w2. We compare Word2Sense embeddings with Word2GM on the entailment dataset provided by (Baroni et al., 2012). We use KL divergence to generate entailment scores between words w1 and w2. 
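For concreteness, the two divergence-based scoring functions used in these evaluations can be written down directly. In the hedged sketch below, `emb` is a hypothetical dictionary from word to its Word2Sense distribution; the direction of the KL divergence for entailment is our assumption (the text only states that KL is used), and the Spearman step follows the evaluation protocol described above.

```python
import numpy as np
from scipy.stats import spearmanr

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity_score(emb, w1, w2):
    return -js(emb[w1], emb[w2])              # smaller divergence -> higher predicted similarity

def entailment_score(emb, w1, w2):
    # One plausible direction: low KL(v_w1 || v_w2) when w2's senses cover w1's senses.
    # The argument order is an assumption on our part.
    return -kl(emb[w1], emb[w2])

def spearman_with_humans(emb, pairs, human_scores):
    predicted = [similarity_score(emb, a, b) for a, b in pairs]
    return spearmanr(predicted, human_scores).correlation
```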
For Word2GM, we use both cosine similarity and KL divergence, as used in the original paper. We report the F1 scores and Average Precision(AP) scores for reporting the quality of prediction. Table 3 compares the performance of our embedding with Word2GM. We notice that Word2Sense embeddings with µ = k′ (denoted Word2Sense -full in the table), i.e., with no truncation, yields the best results. We do not compare with hyperbolic embeddings (Tifrea et al., 2019; Dhingra et al., 2018) because these embeddings are designed mainly to perform well on entailment tasks, but are far off from the performance of Euclidean embeddings on similarity tasks. 5.2.4 Downstream tasks We compare the performance of Word2Sense with Word2vec, SPINE and SPOWV embeddings on the following downstream classification tasks: sentiment analysis (Socher et al., 2013), news classification5, noun phrase chunking (Lazaridou et al., 2013) and question classification (Li and Roth, 2006). We do not compare with Word2GM and MSSG as there is no obvious way to compute sentence embeddings from multi-modal word embeddings. The sentence embedding needed for text classification is the average of the embeddings of words in the sentence, 5http://qwone.com/ jason/20Newsgroups/ 5698 Task Word Word SPOWV SPINE 2Sense 2Vec Sports news 0.865 0.826 0.834 0.810 Computer news 0.861 0.838 0.862 0.856 Religion news 0.965 0.975 0.966 0.936 NP Bracketing 0.693 0.686 0.687 0.665 Sentiment analysis 0.815 0.812 0.816 0.778 Question clf. 0.970 0.969 0.980 0.940 Table 4: Comparison on benchmark downstream tasks. as in (Subramanian et al., 2018). We pick the best among SVMs, logistic regression and random forest classifier to classify the sentence embeddings based on accuracy on the development set. Table 4 reports the accuracies on the test set. More details of the tasks are provided in Appendix E. 6 Interpretability We evaluate the interpretability of the Word2Sense embeddings against Word2vec, SPINE and SPOWV models using the word intrusion test following the procedure in (Subramanian et al., 2018). We select the 15k most frequent words in the intersection of our vocabulary and the Leipzig corpus (Goldhahn et al., 2012). We select a set H of 300 random dimensions or senses from 2250 senses. For each dimension h ∈H, we sort the words in the 15k vocabulary based on their weight in dimension h. We pick the top 4 words in the dimension and add to this set a random intruder word that lies in the bottom half of the dimension h and in the top 10 percentile of some other dimension h′ ∈H \ {h} (Fyshe et al., 2014; Faruqui et al., 2015). For the dimension h to be claimed interpretable, independent judges must be able to easily separate the intruder word from the top 4 words. We split the 300 senses into ten sets of 30 senses, and assigned 3 judges to annotate the intruder in each of the 30 senses in a set (we used a total of 30 judges). For each question, we take the majority voted word as the predicted intruder. If a question has 3 different annotations, we count that dimension as non interpretable6. Since, we followed the procedure as in (Subramanian et al., 2018), we compare our performance with the results reported in their paper. Table 5 shows that Word2Sense is competitive with the best interpretable embeddings. 6(Subramanian et al., 2018) used a randomly picked intruder in this case. 
Method Agreement Precision Word2vec 0.77/0.18 0.261 SPOWV 0.79/0.28 0.418 SPINE 0.91/0.48 0.748 Word2Sense 0.891/0.589 0.753 Table 5: Comparison of embeddings on for Word Intrusion tasks. The second column indicates the inter annotator agreement – the first number is the fraction of questions for which at least 2 annotators agreed and the second indicates the fraction on which all three agreed. The last column is the precision of the majority vote. 6.1 Qualitative evaluation We show the effectiveness of our embeddings at capturing multiple senses of a polysemous word in Table 1. For e.g. ”tie” can be used as a verb to mean tying a rope, or drawing a match, or as a noun to mean clothing material. These three senses are captured in the top 3 dimensions of Word2Sense embedding for ”tie”. Similarly, the embedding for ”cell” captures the 5 senses discussed in section 1 within the top 15 dimensions of the embedding. The remaining top senses capture fine grained senses such as different kinds of biological cells – e.g. bone marrow cell, liver cell, neuron – that a subject expert might relate to. 7 WordCtx2Sense embeddings A word with several senses in the training corpus, when used in a context, would have a narrower set of senses. It is therefore important to be able to refine the representation of a word according to its usage in a context. Note that Word2vec and Word2GM models do not have such a mechanism. Here, we present an algorithm that generates an embedding for a target word ˆw in a short context T = {w1, .., wN} that reflects the sense in which the target word was used in the context. For this, we suppose that the senses of the word ˆw in context T are an intersection of the senses of ˆw and T. We therefore infer the sense distribution of T by restricting the support of the distribution to those senses ˆw can take. 7.1 Methodology We suppose that the words in the context T were picked from a mixture of a small number of senses. Let Sk = {ψ = (ψ1, ψ2, ..., ψk) : ψz ≥ 0; P z ψz = 1} be the unit positive simplex. The generative model is as follows. Pick a ψ ∈Sk, and let P = βψ, where β is the collection of sense probability distributions recovered by LDA from 5699 the corpus. Pick N words from P independently. Let A ∼P = βψ, ψ ∈Sk, (3) where A is a vocabulary-sized vector containing the count of each word, normalized to sum 1. We do not use the Dirichlet prior over sense distribution as in the generative model in section 4, as we found its omission to be better at inferring the sense distribution of contexts. Given A and β, we want to infer the sense distribution ψ ∈Sk that minimizes the log perplexity f(ψ; A, β) = −P|V | i Ailog(βψ)i according to the generative model in Equation 3. The MWU – multiplicative weight update – algorithm (See Appendix A for details) is a natural choice to find such a distribution ψ, and has an added advantage. The MWU algorithm’s estimate of a variable ψ w.r.t. a function f after t iterations (denoted ψ(t)) satisfies ψ(t)[i] = 0, if ψ(0)[i] = 0 ∀i ∈{1..k} and ∀t ≥0. Therefore, to limit the set of possible senses in the inference of ψ to the µ senses that ˆw can take, we initialize ψ(0) to the embedding v ˆw. We used the embedding obtained in Equation 2 without the Project operator that adds probabilities of similar senses, to correspond with the use of the original matrix β for MWU. Further, to keep iterates close to the initial ψ(0), we add a regularizer to log perplexity. 
This is necessary to bias the final inference towards the senses that the target word has higher weights on. Thus the loss function on which we run MWU with starting point ψ(0) = v ˆw is f(ψ; A, β) = − |V | X i=1 Ailog(βψ)i +λKL(ψ, ψ(0)) (4) where the second term is the KL divergence between two distributions scaled by a hyperparameter λ. Recall that KL(p, q) = −Pk i=1 pi log(pi/qi) for two distributions p, q ∈ Rk. We use the final estimate ψ(t) as the WordCtx2Sense distribution of a word in the context. 7.2 Evaluation We demonstrate that the above construction of a word’s representation disambiguated in a context is useful by comparing with state-of-the-art unsupervised methods for polysemy disambiguation on two tasks: Word Sense Induction and contextual similarity. Specifically, we compare with MSSG, the K-Grassmeans model of (Mu et al., 2017), and the sparse coding method of (Arora et al., 2018). 7 7.2.1 Hyperparameters We use the same hyperparameter values for α, β, k and n as in section 5.2.1. We use µ = 100 since we do not merge senses in this construction. We tune the hyperparameter λ to the task at hand. 7.2.2 Word Sense Induction The WSI task requires clustering a collection of (say 40) short texts, all of which share a common polysemous word, in such a way that each cluster uses the common word in the same sense. Two datasets for this task are Semeval-2010 (Manandhar et al., 2010) and MakeSense-2016 (Mu et al., 2017). The evaluation criteria are F-score (Artiles et al., 2009) and V-Measure (Rosenberg and Hirschberg, 2007). V-measure measures the quality of a cluster as the harmonic mean of homogeneity and coverage, where homogeneity checks if all the data-points that belong to a cluster belong to the same class and coverage checks if all the data-points of the same class belong to a single cluster. F-score is the harmonic mean of precision and recall on the task of classifying whether the instances in a pair belong to the same cluster or not. F-score tends to be higher with a smaller number of clusters and the V-Measure tends to be higher with a larger number of clusters, and it is important to show performance in both metrics. For each text corresponding to a polysemous word, we learn a sense distribution ψ using the steps in section 7.1. We tuned the parameter λ and found the best performance at λ = 10−2. We use hard decoding to assign a cluster label to each text, i.e., we assign a label k⋆= argmaxk ψk to a text with inferred sense vector ψk. Suppose that this yields ˆk distinct clusters for the instances corresponding to a polysemous word. We cluster them using agglomerative clustering into a final set of K clusters. The distance metric used to group two clusters Di and Dj is d(Di, Dj) = maxa∈Di,b∈Dj(JS[a, b])0.5 7 Note that we report baseline numbers from the original papers. These papers have trained their models on newer versions of Wikipedia dump that contain more than 3 billion tokens (MSSG uses a 1 billion token corpus). However, our model has been trained on a combined dataset of wiki-2009 dump and ukWaC, which contains around 3B tokens. Hence, there might be minor differences in comparing our model to the baseline models. 
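A compact way to see how the contextual representation is obtained is to write the regularised inference of Equation (4) as exponentiated-gradient (multiplicative weight) updates on the simplex, in the spirit of Appendix A. The sketch below runs on toy random inputs; the step-size schedule, the sign conventions for minimisation and all variable names are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def infer_context_senses(beta, A, psi0, lam=1e-2, iters=200, eps=1e-12):
    """beta: (k, |V|) sense-word distributions; A: length-|V| normalised context word counts;
    psi0: initial mu-sparse sense distribution of the target word (its Word2Sense embedding)."""
    psi = psi0.copy()
    support = psi0 > 0                               # coordinates outside the support stay zero under MWU
    for t in range(1, iters + 1):
        P = beta.T @ psi + eps                       # P_i = (beta psi)_i, word probabilities under psi
        grad = -(beta @ (A / P))                     # gradient of the log-perplexity term -sum_i A_i log P_i
        grad[support] += lam * (np.log((psi[support] + eps) / (psi0[support] + eps)) + 1.0)  # KL regulariser
        grad = np.where(support, grad, 0.0)
        eta = np.sqrt(2.0 * np.log(len(psi)) / t) / len(psi)   # a diminishing step size (cf. Appendix A)
        step = -eta * grad
        psi = psi * np.exp(step - step.max())        # multiplicative update, shifted for numerical stability
        psi = psi / psi.sum()
    return psi

# Toy usage with random stand-ins for the sense model and a context.
rng = np.random.default_rng(1)
k, V = 20, 100
beta = rng.dirichlet(np.full(V, 0.05), size=k)
A = rng.multinomial(10, np.full(V, 1.0 / V)).astype(float)
A = A / A.sum()
psi0 = np.zeros(k)
psi0[rng.choice(k, 5, replace=False)] = 0.2
print(infer_context_senses(beta, A, psi0).round(3))
```

The returned distribution is the WordCtx2Sense representation used for clustering and contextual similarity in the experiments below.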
5700 MakeSense-2016 SemEval-2010 Method K F-scr V-msr F-scr V-msr (Huang et al., 2012) 47.40 15.50 38.05 10.60 (Neelakantan et al., 2015) 300D.30K.key 54.49 19.40 47.26 9.00 300D.6K.key 57.91 14.40 48.43 6.90 (Mu et al., 2017) 2 64.66 28.80 57.14 7.10 5 58.25 34.30 44.07 14.50 (Arora et al., 2018) 2 58.55 6.1 5 46.38 11.5 WordCtx2Sense 2 63.71 22.20 59.38 6.80 λ = 0.0 5 59.75 32.90 46.47 13.20 6 59.13 34.20 44.04 14.30 WordCtx2Sense 2 65.27 24.40 59.15 6.70 λ = 10−2 5 62.88 35.00 47.34 13.70 6 61.43 35.30 44.70 15.00 Table 6: Comparison of WordCtx2Sense with the state-of-the-art methods for Word Sense Induction on MakeSense-2016 and SemEval-2010 dataset. We report Fscore and V-measure scores multiplied by 100. Method Pearson-coefficient WordCtx2Sense (a) 0.666 WordCtx2Sense (b) 0.670 Word2Sense 0.644 Word2vec 0.651 (Mu et al., 2017) 0.637 (Huang et al., 2012) 0.657 (Arora et al., 2018) 0.652 Word2GM 0.655 MSSG.300D.30K 0.679a MSSG.300D.6K 0.678a Table 7: Comparison on the SCWS task. Setting (a) for WordCtx2Sense uses λ = 0.1 for all pairs, and setting (b) uses λ = 10−3 for pairs containing same target words and λ = 0.1 for all other pairs. Word2Sense, Word2V ec and Word2GM neglect context and compare target words. a numbers reported from (Mu et al., 2017) whose experimental setup we could replicate. where JS is the similarity matrix defined in section 5. Results Table 6 shows the results of clustering on WSI SemEval-2010 dataset. WordCtx2Sense outperforms (Arora et al., 2018) and (Mu et al., 2017) on both F-score and V-measure scores by a considerable margin. We observe similar improvements on the MakeSense-2016 dataset. 7.2.3 Word Similarity in Context The Stanford Contextual Word Similarity task (Huang et al., 2012) consists of 2000 pairs of words, along with the contexts the words occur in. Ten human raters were asked to rate the similarity of each pair words according to their use in the corresponding contexts, and their average score (on a 1 to 10 scale) is provided as the ground-truth similarity score. The goal of a contextual embedding would be to score these examples to maximize the correlation with this ground-truth. We compute the WordCtx2Sense of each word in its respective context as in section 7.1. For comparing the meaning of two words in context, we use the JS divergence between their WordCtx2Sense embeddings. We report the coefficient between the ground-truth and WordCtx2Sense according to two different settings of λ. (a) λ = 0.1, and b) λ = 10−3 for inferring the contextual embedding of a word in those pairs that contain same target words, and λ = 0.1 for all other pairs. The main idea is to reduce unnecessary bias for comparing sense of a polysemous word in two different contexts. Results Table 7 shows that sense embeddings using context information perform better than all the existing models, except MSSG models (Neelakantan et al., 2015). Also, computing the embeddings of a word using the contextual information improves results by aprox. 0.025, compared to the case when words embeddings are used directly. 8 Conclusion and future work We motivated an efficient unsupervised method to embed words, in and out of context, in a way that captures their multiple senses in a corpus in an interpretable manner. We demonstrated that such interpretable embeddings can be competitive with dense embeddings like Word2vec on similarity tasks and can capture entailment effectively. 
Further, the construction provides a natural mechanism to refine the representation of a word in a short context by disambiguating its senses. We have demonstrated the effectiveness of such contextual representations. A natural extension to this work would be to capture the sense distribution of sentences using the same framework. This will make our model more comprehensive by enabling the embedding of words and short texts in the same space. 9 Acknowledgements We thank Monojit Choudhury, Ravindran Kannan, Adithya Pratapa and Anshul Bawa for many helpful discussions. We thank all anonymous reviewers for their constructive comments. 5701 References Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association of Computational Linguistics, 6:483–495. Javier Artiles, Enrique Amig´o, and Julio Gonzalo. 2009. The role of named entities in web people search. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 534–542. Association for Computational Linguistics. Ben Athiwaratkun and Andrew Gordon Wilson. 2017. Multimodal word distributions. arXiv preprint arXiv:1704.08424. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 23–32. Association for Computational Linguistics. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209–226. Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 103–111. Association for Computational Linguistics. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1–47. Eugene Charniak et al. 2013. Naive Bayes word sense induction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1433–1437. Jianfei Chen, Kaiwei Li, Jun Zhu, and Wenguang Chen. 2016. WarpLDA: a cache efficient O(1) algorithm for Latent Dirichlet Allocation. Proceedings of the VLDB Endowment, 9(10):744–755. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Paramveer S Dhillon, Dean P Foster, and Lyle H Ungar. 2015. Eigenwords: Spectral word embeddings. The Journal of Machine Learning Research, 16(1):3035–3078. Bhuwan Dhingra, Christopher J Shallue, Mohammad Norouzi, Andrew M Dai, and George E Dahl. 2018. Embedding text in hyperbolic spaces. arXiv preprint arXiv:1806.04313. Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. 2015. Sparse overcomplete word vector representations. arXiv preprint arXiv:1506.02004. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on information systems, 20(1):116–131. Alona Fyshe, Partha P Talukdar, Brian Murphy, and Tom M Mitchell. 2014. 
Interpretable semantic vectors from a joint model of brain-and text-based meaning. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2014, page 489. NIH Public Access. Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages. In LREC, volume 29, pages 31–43. Weiwei Guo and Mona Diab. 2011. Semantic topic models: Combining word distributional statistics and dictionary definitions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 552–561. Association for Computational Linguistics. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1406– 1414. ACM. Lushan Han, Tim Finin, Paul McNamee, Anupam Joshi, and Yelena Yesha. 2012. Improving word similarity by augmenting PMI with estimates of word polysemy. IEEE Transactions on Knowledge and Data Engineering, 25(6):1307–1322. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 873–882. Association for Computational Linguistics. Jey Han Lau, Paul Cook, and Timothy Baldwin. 2013. unimelb: Topic modelling-based word sense induction for web snippet clustering. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), volume 2, pages 217–221. 5702 Jey Han Lau, Paul Cook, Diana McCarthy, Spandana Gella, and Timothy Baldwin. 2014. Learning word sense distributions, detecting unattested senses and identifying novel senses using topic models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 259–270. Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591–601. Association for Computational Linguistics. Angeliki Lazaridou, Eva Maria Vecchi, and Marco Baroni. 2013. Fish transporters and miracle homes: How compositional distributional semantics can help NP parsing. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1908–1913. R´emi Lebret and Ronan Collobert. 2013. Word emdeddings through Hellinger PCA. arXiv preprint arXiv:1312.5542. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pages 2177–2185. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Xin Li and Dan Roth. 2006. Learning question classifiers: the role of semantic information. Natural Language Engineering, 12(3):229–249. Thang Luong, Richard Socher, and Christopher Manning. 2013. 
Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104–113. Suresh Manandhar, Ioannis P Klapaftis, Dmitriy Dligach, and Sameer S Pradhan. 2010. Semeval-2010 task 14: Word sense induction & disambiguation. In Proceedings of the 5th international workshop on semantic evaluation, pages 63–68. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6(1):1–28. Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. Geometry of polysemy. In Proceedings of the 5th International Conference on Learning Representations. OpenReview.net. Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embedding. Proceedings of COLING 2012, pages 1933–1950. Boris Muzellec and Marco Cuturi. 2018. Generalizing point embeddings using the wasserstein space of elliptical distributions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 10237–10248. Curran Associates, Inc. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2015. Efficient non-parametric estimation of multiple embeddings per word in vector space. arXiv preprint arXiv:1504.06654. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 613–619. ACM. Ted Pedersen. 2000. A simple approach to building ensembles of Naive Bayesian classifiers for word sense disambiguation. arXiv preprint cs/0005006. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web, pages 337–346. ACM. Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 109–117. Association for Computational Linguistics. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL). 5703 Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Noam Shazeer, Ryan Doherty, Colin Evans, and Chris Waterson. 2016. Swivel: Improving embeddings by noticing what’s missing. 
arXiv preprint arXiv:1602.02215. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Karl Stratos, Michael Collins, and Daniel Hsu. 2015. Model-based word embeddings from decompositions of count matrices. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1282–1291. Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, and Eduard Hovy. 2018. Spine: Sparse interpretable neural embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence. Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2016. Sparse word embeddings using L1 regularized online learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 2915–2921. AAAI Press. Guoyu Tang, Yunqing Xia, Jun Sun, Min Zhang, and Thomas Fang Zheng. 2014. Topic models incorporating statistical word senses. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 151–162. Springer. Guoyu Tang, Yunqing Xia, Jun Sun, Min Zhang, and Thomas Fang Zheng. 2015. Statistical word sense aware topic models. Soft Computing, 19(1):13–27. Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 151–160. Alexandru Tifrea, Gary B´ecigneul, and OctavianEugen Ganea. 2019. Poincar´e GloVe: Hyperbolic word embeddings. In Proceedings of the 7th International Conference on Learning Representations. Luke Vilnis and Andrew McCallum. 2015. Word representations via Gaussian embedding. In Proceedings of the 3rd International Conference on Learning Representations. Jing Wang, Mohit Bansal, Kevin Gimpel, Brian D Ziebart, and Clement T Yu. 2015. A sense-topic model for word sense induction with unsupervised data enrichment. Transactions of the Association for Computational Linguistics, 3:59–71. Guangxu Xun, Yaliang Li, Jing Gao, and Aidong Zhang. 2017. Collaboratively improving topic discovery and word embeddings by coordinating global and local contexts. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 535–543. ACM. Xuchen Yao and Benjamin Van Durme. 2011. Nonparametric bayesian word sense induction. In Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing, pages 10–14. Association for Computational Linguistics. 5704 A Multiplicate Weight Update Algorithm 1 Multiplicative Weight update 1: function MWU(k, Lf, f, θ(0), ITER) ▷k denotes dimension of variable θ, f denotes a function of θ, Lf if lipschitz constant of f, θ0 denotes initial starting point of θ, ITER denotes the number of iterations to run 2: for t do = 1 .. ITER 3: η = 1 k p 2 log(k/t) 4: ˆθ(t) = ˆθ(t−1) exp (η∇f(θ)|θ=θ(t−1))) 5: θt = ˆθ(t)/∥ˆθ(t)∥1 6: end for 7: end function B Hyper-parameter tuning for Word2vec We use the default hyperparameters for training Word2vec, as given in Mikolov et al. (2013). 
We tuned the embedding size, to see if the performance improves with increasing number of dimensions. Table 8 shows that there is minor improvement in performance in different similarity and relatedness tasks as the embedding size is increased from 100 to 500. C Hyper-parameter tuning for Word2GM We use the default hyperparameters for training Word2GM, as given in Athiwaratkun and Wilson (2017). We tuned the embedding size, to see if the performance improves with increasing number of dimensions. Table 9 shows that there is minor improvement in performance of Word2GM, when the embedding size is increased from 100 to 400. D Hyper-parameter tuning for Word2Sense For generating senses, we use WarpLDA that has 3 different hyperparameters, a) Number of topics k b) α, the dirichlet prior of sense distribution of each word and c) γ, the dirichlet prior of word distribution of each sense. We keep k fixed at 3000 and vary α and β. We show a small subset of the hyperparameter space searched for α and β. We report the performance of word embeddings computed by Equation 3, without the Project step, in different similarity tasks. Table 10 shows that the performance slowly decreases as we increase β and somewhat stays constant with α. Hence, we choose α = 0.1 amd γ = 0.001 for carrying out our experiments. E Benchmark downstream tasks In this section, we discuss about the different downstream tasks considered. We follow the same procedure as (Faruqui et al., 2015) and (Subramanian et al., 2018)8. • Sentiment analysis This is a binary classification task on Sentiment Treebank dataset (Socher et al., 2013). The task is to give a sentence a positive or a negative sentiment label. We used the provided train, dev. and test splits of sizes 6920, 872 and 1821 sentences respectively. • Noun phrase bracketing NP bracketing task (Lazaridou et al., 2013) involves classifying a noun phrase of 3 words as left bracketed or right bracketed. The dataset contains 2,227 noun phrases split into 10 folds. We append the word vectors of three words to get feature representation (Faruqui et al., 2015). We report 10-fold cross validation accuracy. • Question classification Question classification task (Li and Roth, 2006) involves classifying a question into six different types, e.g., whether the question is about a location, about a person or about some numeric information. The training dataset consists of 5452 labeled questions, and the test dataset consists of 500 questions. • News classification We consider three binary categorization tasks from the 20 Newsgroups dataset. Each task involves categorizing a document according to two related categories (a) Sports: baseball vs. hockey (958/239/796) (b) Comp.: IBM vs. Mac (929/239/777) (c) Religion: atheism vs. christian (870/209/717), where the brackets show training/dev./test splits. 8We use the evaluation code given in https://github.com/harsh19/SPINE 5705 Dataset Word2vec −100 Word2vec −200 Word2vec −300 Word2vec −400 Word2vec −500 SCWS 0.638 0.646 0.648 0.649 0.651 Simlex-999 0.365 0.388 0.387 0.393 0.393 MEN 0.749 0.760 0.763 0.767 0.780 RW 0.361 0.361 0.363 0.365 0.365 MT-771 0.684 0.685 0.681 0.681 0.688 WS353 0.705 0.719 0.721 0.733 0.732 WS353-S 0.744 0.766 0.768 0.768 0.769 WS353-R 0.669 0.679 0.670 0.696 0.703 Table 8: Performance of Word2vec at different embedding size, in similarity tasks. 
Dataset Word2GM −100 Word2GM −200 Word2GM −400 SL 0.345 0.385 0.398 WS353 0.664 0.672 0.669 WS353-S 0.727 0.735 0.751 WS353-R 0.626 0.625 0.607 MEN 0.740 0.755 0.761 MC 0.812 0.802 0.826 RG 0.730 0.772 0.750 MT-771 0.638 0.664 0.682 RW 0.303 0.338 0.338 Table 9: Performance of Word2GM, with spherical covariance matrix for each embeddding, at different embedding sizes in similarity tasks α, γ SCWS MT-771 WS353 RG MC WS353-S WS353-R MEN 0.1, 0.001 0.596 0.623 0.654 0.794 0.767 0.685 0.662 0.754 0.1, 0.005 0.595 0.625 0.647 0.809 0.758 0.699 0.638 0.748 0.1, 0.1 0.584 0.609 0.601 0.733 0.671 0.618 0.626 0.738 1.0, 0.001 0.596 0.607 0.651 0.815 0.700 0.692 0.658 0.743 1.0, 0.005 0.613 0.620 0.640 0.792 0.691 0.676 0.653 0.749 1.0, 0.05 0.559 0.562 0.583 0.730 0.742 0.609 0.581 0.711 1.0, 0.1 0.587 0.602 0.602 0.755 0.720 0.641 0.605 0.727 10.0, 0.001 0.595 0.610 0.628 0.822 0.772 0.664 0.639 0.747 10.0, 0.005 0.608 0.635 0.657 0.808 0.826 0.708 0.648 0.739 10.0, 0.05 0.562 0.539 0.544 0.786 0.710 0.573 0.551 0.717 10.0, 0.1 0.573 0.606 0.570 0.773 0.696 0.612 0.593 0.724 Table 10: Performance of Word2Sense as computed in eq. 3 without the Project step in similarity tasks, at different hyperparameter settings. Cluster size Top 10 words with the highest probability in the sense’s distribution 6 tennessee, kentucky, alabama, mississippi, georgia, arkansas, nashville, memphis, louisville, atlanta state, idaho, oregon, montana, wisconsin, utah, nevada, wyoming, states, california illinois, chicago, wisconsin, michigan, milwaukee, rapids, madison, detroit, iowa, grand 19 lol, im, thats, dont, mrplow, yeah, cant, it, ive, ur im, dont, ive, cant, didnt, thats, lol, ur, my, cos my, ve, have, it, me, n’t, ll, just, blog, but 5 lgame, games, adventure, gameplay, 3d, players, play, arcade, of, fun game, multiplayer, games, gameplay, gaming, xbox, shooter, gamers, mode, halo cheats, mario, super, game, arcade, unlock, mode, nintendo, cheat, bros 7 charlton, striker, midfield, defender, leeds, midfielder, darren, goal, bowyer, danny swansea, derby, leicester, city, wolves, watford, burnley, boss, stoke, swans manager, albion, club, coach, season, football, boss, fa, robson, gary Table 11: Examples of clusters formed after agglomerative clustering. Each group of rows shows a randomly picked cluster, it’s size and top 10 words of 3 randomly picked senses from the cluster. The clusters represent U.S. states, generic words, video games, and soccer respectively.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5706–5715 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5706 Modeling Semantic Compositionality with Sememe Knowledge Fanchao Qi1∗, Junjie Huang2∗†, Chenghao Yang3†, Zhiyuan Liu1, Xiao Chen4, Qun Liu4, Maosong Sun1‡ 1Department of Computer Science and Technology, Tsinghua University Institute for Artificial Intelligence, Tsinghua University State Key Lab on Intelligent Technology and Systems, Tsinghua University 2School of ASEE, Beihang University 3Software College, Beihang Unviersity 4Huawei Noah’s Ark Lab [email protected], {hjj1997,alanyang}@buaa.edu.cn {liuzy,sms}@tsinghua.edu.cn, {chen.xiao2,qun.liu}@huawei.com Abstract Semantic compositionality (SC) refers to the phenomenon that the meaning of a complex linguistic unit can be composed of the meanings of its constituents. Most related works focus on using complicated compositionality functions to model SC while few works consider external knowledge in models. In this paper, we verify the effectiveness of sememes, the minimum semantic units of human languages, in modeling SC by a confirmatory experiment. Furthermore, we make the first attempt to incorporate sememe knowledge into SC models, and employ the sememeincorporated models in learning representations of multiword expressions, a typical task of SC. In experiments, we implement our models by incorporating knowledge from a famous sememe knowledge base HowNet and perform both intrinsic and extrinsic evaluations. Experimental results show that our models achieve significant performance boost as compared to the baseline methods without considering sememe knowledge. We further conduct quantitative analysis and case studies to demonstrate the effectiveness of applying sememe knowledge in modeling SC. All the code and data of this paper can be obtained on https: //github.com/thunlp/Sememe-SC. 1 Introduction Semantic compositionality (SC) is defined as the linguistic phenomenon that the meaning of a syntactically complex unit is a function of meanings of the complex unit’s constituents and their combination rule (Pelletier, 1994). Some linguists regard SC as the fundamental truth of semantics (Pelletier, 2016). In the field of NLP, SC has proved effective in many tasks including language model∗Indicates equal contribution †Work done during internship at Tsinghua University ‡Corresponding author ing (Mitchell and Lapata, 2009), sentiment analysis (Maas et al., 2011; Socher et al., 2013b), syntactic parsing (Socher et al., 2013a), etc. Most literature on SC pays attention to using vector-based distributional models of semantics to learn representations of multiword expressions (MWEs), i.e., embeddings of phrases or compounds. Mitchell and Lapata (2008) conduct a pioneering work in which they introduce a general framework to formulate this task: p = f(w1, w2, R, K)1, (1) where f is the compositionality function, p denotes the embedding of an MWE, w1 and w2 represent the embeddings of the MWE’s two constituents, R stands for the combination rule and K refers to the additional knowledge which is needed to construct the semantics of the MWE. Among the proposed approaches for this task, most of them ignore R and K, centering on reforming compositionality function f (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012, 2013b). 
Some try to integrate combination rule R into SC models (Blacoe and Lapata, 2012; Zhao et al., 2015; Weir et al., 2016; Kober et al., 2016). Few works consider external knowledge K. Zhu et al. (2016) try to incorporate task-specific knowledge into an LSTM model for sentence-level SC. As far as we know, however, no previous work attempts to use general knowledge in modeling SC. In fact, there exists general linguistic knowledge which can be used in modeling SC, e.g., sememes. Sememes are defined as the minimum semantic units of human languages (Bloomfield, 1926). It is believed that the meanings of all the 1This formula only applies to two-word MWEs but can be easily extended to longer MWEs. In fact, we also focus on modeling SC for two-word MWEs in this paper because they are the most common. 5707 SCD Our Computation Formulae Examples MWEs and Constituents Sememes 3 Sp = Sw1 ∪Sw2 农民起义(peasant uprising) 事情|fact,职位|occupation,政|politics,暴动|uprise,人|human,农|agricultural 农民 (peasant) 职 职 职位 位 位|occupation,人 人 人|human,农 农 农|agricultural 起义(uprising) 暴 暴 暴动 动 动|uprise,事 事 事情 情 情|fact,政 政 政|politics 2 Sp ⊊(Sw1 ∪Sw2) 几何图形(geometric figure) 数学|math,图像|image 几何 (geometry; how much) 数 数 数学 学 学|math,知识|knowledge,疑问|question,功能词|funcword 图形(figure) 图 图 图像 像 像|image 1 Sp ∩(Sw1 ∪Sw2) ̸= ∅ ∧Sp ̸⊂(Sw1 ∪Sw2) 应考(engage a test) 考试|exam,从事|engage 应 (deal with; echo; agree) 处理|handle,回应|respond,同意|agree,遵循|obey,功能词|funcword,姓|surname 考(quiz; check) 考 考 考试 试 试|exam,查|check 0 Sp ∩(Sw1 ∪Sw2) = ∅ 画句号(end) 完毕|finish 画 (draw) 画|draw,部件|part,图像|image, 文字|character,表示|express 句号(period) 符号|symbol,语文|text Table 1: Sememe-based semantic compositionality degree computation formulae and examples. Bold sememes of constituents are shared with the constituents’ corresponding MWE. words can be composed of a limited set of sememes, which is similar to the idea of semantic primes (Wierzbicka, 1996). HowNet (Dong and Dong, 2003) is a widely acknowledged sememe knowledge base (KB), which defines about 2,000 sememes and uses them to annotate over 100,000 Chinese words together with their English translations. Sememes and HowNet have been successfully utilized in a variety of NLP tasks including sentiment analysis (Dang and Zhang, 2010), word representation learning (Niu et al., 2017), language modeling (Gu et al., 2018), etc. In this paper, we argue that sememes are beneficial to modeling SC2. To verify this, we first design a simple SC degree (SCD) measurement experiment and find that the SCDs of MWEs computed by simple sememe-based formulae are highly correlated with human judgment. This result shows that sememes can finely depict meanings of MWEs and their constituents, and capture the semantic relations between the two sides. Therefore, we believe that sememes are appropriate for modeling SC and can improve the performance of SC-related tasks like MWE representation learning. We propose two sememe-incorporated SC models for learning embeddings of MWEs, namely Semantic Compositionality with Aggregated Sememe (SCAS) model and Semantic Compositionality with Mutual Sememe Attention (SCMSA) model. When learning the embedding of an MWE, SCAS model concatenates the embeddings of the MWE’s constituents and their sememes, while SCMSA model considers the mutual attention be2Since HowNet mainly annotates Chinese words with sememes, we experiment on Chinese MWEs in this paper. But our methods and findings are also applicable to other languages. tween a constituent’s sememes and the other constituent. 
We also integrate the combination rule, i.e., R in Eq. (1), into the two models. We evaluate our models on the task of MWE similarity computation, finding our models obtain significant performance improvement as compared to baseline methods. Furthermore, we propose to evaluate SC models on a downstream task sememe prediction, and our models also exhibit favorable outcomes. 2 Measuring SC Degree with Sememes In this section, we conduct a confirmatory SCD measurement experiment to present evidence that sememes are appropriate for modeling SC. 2.1 Sememe-based SCD Computation Formulae Although SC widely exists in MWEs, not every MWE is fully semantically compositional. In fact, different MWEs show different degrees of SC. We believe that sememes can be used to measure SCD conveniently. To this end, based on the assumption that all the sememes of a word accurately depict the word’s meaning, we intuitively design a set of SCD computation formulae, which we believe are consistent with the principle of SCD. The formulae are illustrated in Table 1. We define four SCDs denoted by number 3, 2, 1 and 0, where larger numbers mean higher SCDs. Sp, Sw1 and Sw2 represent the sememe sets of an MWE, its first and second constituent respectively. Next, we give a brief explanation for these SCD computation formulae: (1) For SCD 3, the sememe set of an MWE is identical to the union of the two constituents’ sememe sets, which means the meaning of the MWE is exactly the same as the combination of the constituents’ meanings. 5708 Therefore, the MWE is fully semantically compositional and should have the highest SCD. (2) For SCD 0, an MWE has totally different sememes from its constituents, which means the MWE’s meaning cannot be derived from its constituents’ meanings. Hence the MWE is completely noncompositional, and its SCD should be the lowest. (3) As for SCD 2, the sememe set of an MWE is a proper subset of the union of its constituents’ sememe sets, which means the meanings of the constituents cover the MWE’s meaning but cannot precisely infer the MWE’s meaning. (4) Finally, for SCD 1, an MWE shares some sememes with its constituents, but both the MWE itself and its constituents have some unique sememes. In Table 1, we also show an example for each SCD, including a Chinese MWE, its two constituents and their sememes3. 2.2 Evaluating SCD Computation Formulae To evaluate our sememe-based SCD computation formulae, we construct a human-annotated SCD dataset. We ask several native speakers to label SCDs for 500 Chinese MWEs, where there are also four degrees to choose. Before labeling an MWE, they are shown the dictionary definitions of both the MWE and its constituents. Each MWE is labeled by 3 annotators, and the average of the 3 SCDs given by them is the MWE’s final SCD. Eventually, we obtain a dataset containing 500 Chinese MWEs together with their humanannotated SCDs. Then we evaluate the correlativity between SCDs of the MWEs in the dataset computed by sememe-based rules and those given by humans. We find Pearson’s correlation coefficient is up to 0.75, and Spearman’s rank correlation coefficient is 0.74. These results manifest remarkable capability of sememes to compute SCDs of MWEs and provide proof that sememes of a word can finely represent the word’s meaning. Accordingly, we believe that this characteristic of sememes can also be exploited in modeling SC. 
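Because the SCD rules in Table 1 are plain set comparisons, they can be transcribed into code almost literally. The sketch below uses only the English glosses of the sememe names for readability; the function name and data layout are illustrative assumptions, not part of the released resources.

```python
def scd(mwe_sememes, s1, s2):
    """Semantic compositionality degree of an MWE given the sememe sets of its two constituents (Table 1)."""
    union = s1 | s2
    if mwe_sememes == union:
        return 3          # identical to the union of the constituents' sememes
    if mwe_sememes < union:
        return 2          # proper subset of the union
    if mwe_sememes & union:
        return 1          # partial overlap, with sememes unique to both sides
    return 0              # no shared sememes at all

# The "peasant uprising" row of Table 1, using the English halves of the sememe names.
peasant_uprising = {"fact", "occupation", "politics", "uprise", "human", "agricultural"}
peasant = {"occupation", "human", "agricultural"}
uprising = {"uprise", "fact", "politics"}
print(scd(peasant_uprising, peasant, uprising))   # -> 3
```

Evaluating such rule-based SCDs against the human-annotated dataset then reduces to computing Pearson's and Spearman's correlations over the 500 labeled MWEs.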
3 Sememe-incorporated SC Models In this section, we first introduce our two basic sememe-incorporated SC models in detail, namely Semantic Compositionality with Aggregated Sememe (SCAS) and Semantic Compositionality 3In Chinese, most MWEs are words consisting of more than two characters which are actually single-morpheme words. Figure 1: Semantic Compositionality with Aggregated Sememe (SCAS) model. with Mutual Sememe Attention (SCMSA). SCAS model simply concatenates the embeddings of the MWE’s constituents and their sememes, while SCMSA model takes account of the mutual attention between a constituent’s sememes and the other constituent. Then we describe how to integrate combination rules into the two basic models. Finally, we present the training strategies and losses for two different tasks. 3.1 Incorporating Sememes Only Following the notations in Eq. (1), for an MWE p = {w1, w2}, its embedding can be represented as: p = f(w1, w2, K), (2) where p, w1, w2 ∈Rd and d is the dimension of embeddings. K denotes the sememe knowledge here, and we assume that we only know the sememes of w1 and w2, considering that MWEs are normally not in the sememe KBs. We use S to indicate the set of all the sememes and Sw = {s1, ..., s|Sw|} ⊂S to signify the sememe set of w, where | · | represents the cardinality of a set. In addition, s ∈Rd denotes the embedding of sememe s. SCAS Model The first model we propose is SCAS model, which is illustrated in Figure 1. The idea of SCAS model is straightforward, i.e., simply concatenating word embedding of a constituent and the aggregation of its sememes’ embeddings. Formally, we have: w ′ 1 = X si∈Sw1 si, w ′ 2 = X sj∈Sw2 sj, (3) where w ′ 1 and w ′ 2 represent the aggregated sememe embeddings of w1 and w2 respectively. Then p can be obtained by: p = tanh(Wc[w1 + w2;w ′ 1 + w ′ 2] + bc), (4) 5709 where Wc ∈Rd×2d is the composition matrix and bc ∈Rd is a bias vector. SCMSA Model The SCAS model simply uses the sum of all the sememes’ embeddings of a constituent as the external information. However, a constituent’s meaning may vary with the other constituent, and accordingly, the sememes of a constituent should have different weights when the constituent is combined with different constituents (we show an example in later case study). Correspondingly, we propose SCMSA model (Figure 2), which adopts the mutual attention mechanism to dynamically endow sememes with weights. Formally, we have: e1 = tanh(Waw1 + ba), a2,i = exp (si · e1) P sj∈Sw2 exp (sj · e1), w ′ 2 = X si∈Sw2 a2,isi, (5) where Wa ∈Rd×d is the weight matrix and ba ∈ Rd is a bias vector. Similarly, we can calculate w′ 1. Then we still use Eq. (4) to obtain p. 3.2 Integrating Combination Rules In this section, we further integrate combination rules into our sememe-incorporated SC models. In other words, p = f(w1, w2, K, R). (6) We can use totally different composition matrices for MWEs with different combination rules: Wc = Wr c, r ∈Rs (7) where Wr c ∈Rd×2d and Rs refers to combination rule set containing syntax rules of MWEs, e.g., adjective-noun and noun-noun. However, there are many different combination rules and some rules have sparse instances which are not enough to train the corresponding composition matrices with d×2d parameters. In addition, we believe that the composition matrix should contain common compositionality information except the combination rule-specific compositionality information. 
Hence we let composition matrix Wc be the sum of a low-rank matrix containing Figure 2: Semantic Compositionality with Mutual Sememe Attention (SCMSA) model. combination rule information and a matrix containing common compositionality information: Wc = UrVr + Wc c, (8) where Ur ∈Rd×hr, Vr ∈Rhr×2d, hr ∈N+ is a hyper-parameter and may vary with the combination rule, and Wc c ∈Rd×2d. 3.3 Training We use the MWE embeddings obtained by abovementioned SC models in downstream tasks. For different tasks, we adopt different training strategies and loss functions. Training for MWE Similarity Computation For the task of MWE similarity computation, we use the squared Euclidean distance loss following Luong et al. (2013). For an MWE p, its training loss is: Lp = ∥pc −pr∥2 2 , (9) where pc ∈Rd is the embedding of p obtained by our SC models , i.e., previous p, and pr ∈Rd is the corresponding reference embedding, which might be obtained by regarding the MWE as a whole and applying word representation learning methods. And the overall loss function is as follows: L = X p∈Pt Lp + λ 2 X θ∈Θ ∥θ∥2 2 , (10) where Pt is the training set, Θ refers to the parameter set including Wc and Wa, and λ is the regularization parameter. 5710 Training for MWE Sememe Prediction Sememe prediction is a well-defined task (Xie et al., 2017; Jin et al., 2018; Qi et al., 2018), aimed at selecting appropriate sememes for unannotated words or phrases from the set of all the sememes. Existing works model sememe prediction as a multi-label classification problem, where sememes are regarded as the labels of words and phrases. For doing MWE sememe prediction, we employ a single-layer perceptron as the classifier: ˆyp = σ(Ws · p), (11) where ˆyp ∈R|S|, Ws ∈R|S|×d and σ is the sigmoid function. [ˆyp]i, the i-th element of ˆyp, denotes the predicted score of i-th sememe, where the higher the score is, the more probable the sememe is selected. And Ws = [s1, · · · , s|S|]⊤is made up of the embeddings of all the sememes. As for the training loss of the classifier, considering the distribution of sememes over words is quite imbalanced, we adopt the weighted crossentropy loss: L = X p∈Pt |S| X i=1 k × [yp]i log[ˆyp]i + (1 −[yp]i) log(1 −[ˆyp]i)  , (12) where [yp]i ∈{0, 1} is the i-th element of yp, which is the true sememe label of p, and k stands for the weight parameter. 4 Experiments We evaluate our sememe-incorporated SC models on two tasks including MWE similarity computation and MWE sememe prediction. For the latter, we also conduct further quantitative analysis and case study. 4.1 Dataset We choose HowNet as the source of sememe knowledge. In HowNet, there are 118,346 Chinese words annotated with 2,138 sememes in total. Following previous work (Xie et al., 2017; Jin et al., 2018), we filter out the low-frequency sememes, which are considered unimportant. The final number of sememes we use is 1,335. We use pretrained word embeddings of MWEs (needed for training in the MWE similarity task) and constituents, which are trained using GloVe (Pennington et al., 2014) on the Sogou-T corpus4. We also utilize pretrained sememe embeddings obtained from the results of a sememe-based word representation learning model5 (Niu et al., 2017). And we build a dataset consisting of 51,034 Chinese MWEs, each of which and its two constituents are annotated with sememes in HowNet and have pretrained word embeddings simultaneously. We randomly split the dataset into training, validation and test sets in the ratio of 8 : 1 : 1. 
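Before turning to the experimental settings, the two composition functions of Section 3.1 and the sememe-prediction head of Equation (11) can be summarised in a short forward pass. The NumPy sketch below uses randomly initialised stand-in parameters and toy dimensions of our choosing; it is an illustration, not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_sememes = 8, 30
Wc, bc = rng.normal(0, 0.1, (d, 2 * d)), np.zeros(d)    # composition matrix and bias, Eq. (4)
Wa, ba = rng.normal(0, 0.1, (d, d)), np.zeros(d)        # attention matrix and bias, Eq. (5)
Ws = rng.normal(0, 0.1, (n_sememes, d))                 # sememe embeddings stacked as classifier rows, Eq. (11)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def compose(w1, w2, w1_prime, w2_prime):
    """Equation (4): p = tanh(Wc [w1 + w2 ; w1' + w2'] + bc)."""
    return np.tanh(Wc @ np.concatenate([w1 + w2, w1_prime + w2_prime]) + bc)

def scas(w1, w2, S1, S2):
    """SCAS: aggregate each constituent's sememe embeddings by summation (Eq. 3)."""
    return compose(w1, w2, S1.sum(axis=0), S2.sum(axis=0))

def scmsa(w1, w2, S1, S2):
    """SCMSA: weight one constituent's sememes by attention from the other constituent (Eq. 5)."""
    e1, e2 = np.tanh(Wa @ w1 + ba), np.tanh(Wa @ w2 + ba)
    w2_prime = softmax(S2 @ e1) @ S2     # sememes of w2, attended by w1
    w1_prime = softmax(S1 @ e2) @ S1     # sememes of w1, attended by w2
    return compose(w1, w2, w1_prime, w2_prime)

def predict_sememes(p, delta=0.5):
    """Equation (11): sigmoid scores over the whole sememe inventory, thresholded at delta."""
    scores = 1.0 / (1.0 + np.exp(-(Ws @ p)))
    return scores, np.nonzero(scores > delta)[0]

w1, w2 = rng.normal(size=d), rng.normal(size=d)              # constituent embeddings
S1, S2 = rng.normal(size=(3, d)), rng.normal(size=(4, d))    # one row per sememe embedding
print(predict_sememes(scmsa(w1, w2, S1, S2))[1])
```

In training, Wc, Wa and the sememe embeddings would be learned with the losses in Equations (9), (10) and (12).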
4.2 Experimental Settings Baseline Methods We choose several typical SC models as the baseline methods, including: (1) ADD and MUL, the simple additive and elementwise multiplicative models (Mitchell and Lapata, 2008); (2) RAE, the recursive autoencoder model (Socher et al., 2011); (3) RNTN, the recursive neural tensor network (Socher et al., 2013b); (4) TIM, the tensor index model (Zhao et al., 2015); and (5) SCAS-S, the ablated version of our SCAS model which removes sememe knowledge6. These baseline methods range from the simplest additive model to complicated tensor-based model, all of which take no knowledge into consideration. Combination Rules For simplicity, we divide all the MWEs in our dataset into four combination types, i.e., adjective-noun (Adj-N), noun-noun (NN), verb-noun (V-N) and other (Other), whose instance numbers are 1302, 8276, 4242 and 37214 respectively. And we use the suffix +R to signify integrating combination rules into the model. Hyper-parameters and Training The dimension of word and sememe embeddings d is empirically set to 200. hr in Eq. (8) is simply set to 5 for all the four combination types. The regularization parameter λ is 10−4 , and k in Eq. (12) is 100. As for training, we use Stochastic Gradient Descent (SGD) for optimization. The learning rate is initialized to 0.01 and 0.2 for the two tasks respectively, and decays by 1% every iteration. During training, word embeddings of an MWE’s constituents are frozen while the sememe embeddings are fine-tuned. For the baseline methods, they all use the same pre-trained word embeddings as our 4Sogou-T is a corpus of web pages containing 2.7 billion words. https://www.sogou.com/labs/ resource/t.php 5https://github.com/thunlp/SE-WRL-SAT 6SCAS-S is very similar to RAE, and the only difference between them is that the former concatenates the embeddings of two constituents while the latter chooses addition. 5711 model and their hyper-parameters are tuned to the best on the validation set. We also use SGD to train them. 4.3 MWE Similarity Computation In this subsection, we evaluate our sememeincorporated SC models and baseline methods on an intrinsic task, MWE similarity computation. Evaluation Datasets and Protocol We use two popular Chinese word similarity datasets, namely WordSim-240 (WS240) and WordSim-297 (WS297) (Chen et al., 2015), and a newly built one, COS960 (Huang et al., 2019), all of which consist of word pairs together with human-assigned similarity scores. The first two datasets have 86 and 97 word pairs appearing in our MWE dataset respectively, and their humanassigned similarity scores are based on relatedness. On the other hand, COS960 has 960 word pairs and all of them are in our MWE dataset. Moreover, its similarity scores are based on similarity. We calculate the Spearman’s rank correlation coefficient between cosine similarities of word pairs computed by word embeddings of SC models and human-annotated scores. Experimental Results Framework Method WS240 WS297 COS960 f(w1, w2) ADD 50.8 53.1 49.1 MUL 19.6 21.6 −3.9 TIM 47.4 54.2 50.5 RNTN 42.5 53.6 55.8 RAE 61.3 59.9 59.6 SCAS-S 61.4 57.0 60.1 f(w1, w2, K) SCAS 60.2 60.5 61.4 SCMSA 61.9 58.7 60.5 f(w1, w2, K, R) SCAS+R 59.0 60.8 61.8 SCMSA+R 61.4 61.2 60.4 Table 2: Spearman’s rank correlation coefficient (ρ × 100) between similarity scores assigned by compositional models with human ratings. The experimental results of MWE similarity computation7 are listed in Table 2. 
We can find that: (1) By incorporating sememe knowledge, our two SC models SCAS and SCMSA both achieve overall performance enhancement, especially on the COS960 dataset which has the largest size and 7Before training, we remove the MWEs which are in these three datasets from the training set. reflects true word similarity. This result can prove the effectiveness of sememe knowledge in modeling SC. Although SCAS-S even performs better than SCAS on WS240, which is presumably because too few word pairs are used, SCAS significantly outperforms SCAS-S on the other two datasets. (2) After further integrating combination rules, our two SC models basically produce better performance except on WS240, which can demonstrate the usefulness of combination rules to some extent. (3) By comparing our two models SCAS and SCMSA, as well as their variants SCAS+R and SCMSA+R, we find no apparent advantage of attention-considered SCMSA over simple SCAS. We attribute it to insufficient training because SCMSA has more parameters. (4) Among the baseline methods, MUL performs particularly poorly on all the three datasets. Although Mitchell and Lapata (2008) report that multiplicative model yields better results than additive model based on distributional semantic space (SDS) word embeddings, we find it cannot fit the word embeddings obtained by currently popular methods like GloVe, which is consistent with the findings of previous work (Zhao et al., 2015). 4.4 MWE Sememe Prediction According to the conclusion of the confirmatory experiment in Sec. 2, the sememes of a word (or an MWE) can finely depict the semantics of the word (MWE). On the other hand, the highquality embedding of a word (MWE) is also supposed to accurately represent the meaning of the word (MWE). Therefore, we believe that the better the embedding is, the better sememes it can predict. More specifically, whether an SC model can predict correct sememes for MWEs reflects the SC model’s ability to learn the representations of MWEs. Correspondingly, we regard MWE sememe prediction as a credible extrinsic evaluation of SC models. Evaluation Dataset and Protocol We use the above-mentioned test set for evaluation. As for the evaluation protocol, we adopt mean average precision (MAP) and F1 score following previous sememe prediction works (Xie et al., 2017; Qi et al., 2018). Since our SC models and baseline methods yield a score for each se5712 meme in the whole sememe set, we pick the sememes with scores higher than δ to compute F1 score, where δ is a hyper-parameter and also tuned to the best on the validation set. Overall Results Framework Method Sememe Prediction MAP F1 Score f(w1, w2) ADD 40.7 23.2 MUL 11.2 0.3 TIM 46.8 35.3 RNTN 47.7 35.3 RAE 44.0 30.8 SCAS-S 39.0 27.9 f(w1, w2, K) SCAS 52.2 41.3 SCMSA 55.1 43.4 f(w1, w2, K, R) SCAS+R 56.8 46.1 SCMSA+R 58.3 46.0 Table 3: Overall MWE sememe prediction results of all the models. The overall sememe prediction results are exhibited in Table 3. We can observe that: (1) The effectiveness of sememe knowledge in modeling SC is definitively proved again by comparing our sememe-incorporated SC models with baseline methods, especially by the comparison of SCAS and its sememe-ablated version SCAS-S. Besides, the combination rule-integrated variants of our models perform better than corresponding original models, which makes the role of combination rules recognized more obviously. 
(2) Our two models considering mutual attention, namely SCMSA and SCMSA+R models, produce considerable improvement by comparison with SCAS and SCAS+R models, which manifests the benefit of mutual attention mechanism. (3) MUL still performs the worst, which is consistent with the results of the last experiment. Effect of SCD In this experiment, we explore the effect of SCD (in Sec. 2) on sememe prediction performance. We split the test set into four subsets according to MWE’s SCD, which is computed by the sememebased SCD methods in Table 1. Then we evaluate sememe prediction performance of our models on the four subsets. From the results shown in Table 4, we find that: (1) MWEs with higher SCDs have better sememe prediction performance, which is easy to explain. MWEs with higher SCDs possess more Method SCD 3 2 1 0 SCAS 88.4 63.8 46.9 13.3 SCAS+R 95.9 69.8 50.6 14.3 SCMSA 85.3 66.1 51.5 16.1 SCMSA+R 91.2 71.2 53.3 14.5 Table 4: Sememe prediction MAP of our models on MWEs with different SCDs. The numbers of MWEs with the four SCDs are 180, 2540, 1686 and 698 respectively. meanings from their constituents, and consequently, SC models can better capture the meanings of these MWEs. (2) No matter integrating combination rules or not, our mutual attention models perform better than the aggregated sememe models, other than on the subset of SCD 3. According to previous SCD formulae, an MWE whose SCD is 3 has totally the same sememes as its constituents. That means in sememe prediction, each sememe of its constituents is equally important and should be recommended to the MWE. SCAS model simply adds all the sememes of constituents, which fits the characteristics of MWEs whose SCDs are 3. Thus, SCAS model yields better performance on these MWEs. Effect of Combination Rules In this experiment, we investigate the effect of combination rules on sememe prediction performance. Table 5 shows the MAPs of our models on MWEs with different combination rules. Adj-N N-N V-N Other Average SCD 1.52 1.65 1.37 1.38 SCAS 61.4 64.9 55.5 48.2 SCAS+R 63.1 68.7 61.0 53.0 SCMSA 59.6 66.2 58.8 51.8 SCMSA+R 62.1 69.4 60.7 55.0 Table 5: Sememe prediction MAP of our models on MWEs with different combination rules and average SCDs of the four subsets. The numbers of MWEs with the four combination rules are 157, 893, 443 and 3,611 respectively. We find that integrating combination rules into SC models is beneficial to sememe prediction of MWEs with whichever combination rule. In addition, sememe prediction performance varies with the combination rule. To explain this, we calculate the average SCDs of the four subsets with different 5713 Words Sememes 参(join; ginseng; impeach) 从事|engage,纳入|include,花草|FlowerGrass,药物|medicine,控告|accuse,警|police,政|politics 参战(enter a war) 争 争 争斗 斗 斗|fight, 军 军 军|military, 事 事 事情 情 情|fact, 从 从 从事 事 事|engage, 政|politics 丹参(red salvia) 药 药 药物 物 物|medicine, 花 花 花草 草 草|FlowerGrass, 红|red, 生殖|reproduce, 中国|China Table 6: An example of sememe prediction when two MWEs share the same constituent 参. Top5 predicted sememes are presented in the second and third lines. Bold sememes are correct. combination rules, and find that their sememe prediction performance is positively correlated with their average SCDs basically (the average Pearson’s correlation coefficient of different models is up to 0.87). This conforms to the conclusion of the last experiment. 
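The subset analyses in Tables 4 and 5 amount to computing MAP separately for MWEs grouped by SCD or by combination rule. A minimal sketch of this breakdown is given below; the data structures (predictions, gold_sememes, groups) are our own placeholders, and the F1-with-threshold metric from Sec. 4.4 is omitted for brevity.

```python
import numpy as np
from collections import defaultdict

def average_precision(scores: np.ndarray, gold: set) -> float:
    """AP of the gold sememes in the ranking induced by the predicted scores."""
    ranking = np.argsort(-scores)
    hits, precisions = 0, []
    for rank, sememe_id in enumerate(ranking, start=1):
        if int(sememe_id) in gold:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def map_by_group(predictions, gold_sememes, groups):
    """Mean average precision per subset (e.g., an SCD of 0-3 or a rule such as 'N-N').

    predictions:  {mwe: score vector over all sememes}
    gold_sememes: {mwe: set of annotated sememe ids}
    groups:       {mwe: group key used to split the test set}
    """
    per_group = defaultdict(list)
    for mwe, scores in predictions.items():
        per_group[groups[mwe]].append(average_precision(scores, gold_sememes[mwe]))
    return {g: float(np.mean(aps)) for g, aps in per_group.items()}
```

The per-group MAPs can then be correlated with the average SCD of each group (e.g., with scipy.stats.pearsonr) to reproduce the kind of analysis reported above.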
Case Study Here, we give an example of sememe prediction for MWEs comprising polysemous constituents, to show that our model can capture the correct meanings of constituents in SC. As shown in Table 6, the Chinese word 参has three senses including “join”, “ginseng” and “impeach”, and these meanings are represented by their different sememes. For the MWE 参战, whose meaning is “enter a war”, 参expresses its first sense “join”. In the top 5 predicted sememes of our SC model, the first four are the sememes annotated in HowNet, including the sememe “从事|engage” from 参. In addition, the fifth sememe “politics” is also related to the meaning of the MWE. For another MWE 丹参, which means “red salvia”, a kind of red Chinese herbal medicine resembling ginseng, the meaning of 参 here is “ginseng”. Our model also correctly predicts the two sememes “药物|medicine” and “花 草|FlowerGrass”, which are both annotated to 参in HowNet. In addition, other predicted sememes given by our model like “红|red” and “中 国|China” are also reasonable. This case demonstrates that our sememeincorporated SC model can capture the correct meanings of an MWE’s constituents, especially the polysemous constituents. And going further, sememe knowledge is beneficial to SC and our SC model can take advantage of sememes. 5 Related Work 5.1 Semantic Compositionality Based on the development of distributional semantics, vector-based SC modeling has been extensively studied in recent years. Most existing work concentrates on using better compositionality functions. Mitchell and Lapata (2008) first make a detailed comparison of several simple compositionality functions including addition and element-wise multiplication. Then various complicated models are proposed in succession, such as vector-matrix models (Baroni and Zamparelli, 2010; Socher et al., 2012), matrix-space models (Yessenalina and Cardie, 2011; Grefenstette and Sadrzadeh, 2011) and tensor-based models (Grefenstette et al., 2013; Van de Cruys et al., 2013; Socher et al., 2013b). There are also some works trying to integrate combination rules into semantic composition models (Blacoe and Lapata, 2012; Zhao et al., 2015; Kober et al., 2016; Weir et al., 2016). But few works explore the role of external knowledge in SC. Zhu et al. (2016) incorporate prior sentimental knowledge into LSTM models, aiming to improve sentiment analysis performance of sentences. To the best our knowledge, there is no work trying to take account of general linguistic knowledge in SC, especially for the MWE representation learning task. 5.2 Sememes and HowNet HowNet, as the most well-known sememe KB, has attracted wide research attention. Previous work applies the sememe knowledge of HowNet to various NLP applications, such as word similarity computation (Liu and Li, 2002), word sense disambiguation (Gan and Wong, 2000; Zhang et al., 2005; Duan et al., 2007), sentiment analysis (Zhu et al., 2006; Dang and Zhang, 2010; Fu et al., 2013), word representation learning (Niu et al., 2017), language modeling (Gu et al., 2018), lexicon expansion (Zeng et al., 2018) and semantic rationality evaluation (Liu et al., 2018). To tackle the challenge of high cost of annotating sememes for new words, Xie et al. (2017) propose the task of automatic sememe prediction to facilitate sememe annotation. And they also propose two simple but effective models. Jin et al. 5714 (2018) further incorporate Chinese character information into their sememe prediction model and achieve performance boost. Li et al. 
(2018) explore the effectiveness of words’ descriptive text in sememe prediction task. In addition, Qi et al. (2018) make the first attempt to use cross-lingual sememe prediction to construct sememe KBs for other languages. 6 Conclusion and Future Work In this paper, we focus on utilizing sememes to model semantic compositionality (SC). We first design an SC degree (SCD) measurement experiment to preliminarily prove the usefulness of sememes in modeling SC. Then we make the first attempt to employ sememes in a typical SC task, namely MWE representation learning. In experiments, our proposed sememe-incorporated models achieve impressive performance gain on both intrinsic and extrinsic evaluations in comparison with baseline methods without considering external knowledge. In the future, we will explore the following directions: (1) context information is also essential to MWE representation learning, and we will try to combine both internal information and external context information to learn better MWE representations; (2) many MWEs lack sememe annotation and we will seek to calculate an MWE’s SCD when we only know the sememes of the MWE’s constituents; (3) our proposed models are also applicable to the MWEs with more than two constituents and we will extend our models to longer MWEs; (4) sememe is universal linguistic knowledge and we will explore to generalize our methods to other languages. Acknowledgments This research is jointly supported by the Natural Science Foundation of China (NSFC) project under the grant No. 61661146007 and the NExT++ project, the National Research Foundation, Prime Minister’s Office, Singapore under its IRC@Singapore Funding Initiative. Moreover, it is also funded by the NSFC and the German Research Foundation (DFG) in Project Crossmodal Learning, NSFC 61621136008 / DFG TRR-169, as well as the NSFC project under the grant No. 61572273. Furthermore, we thank the anonymous reviewers for their valuable comments and suggestions. References Marco Baroni and Roberto Zamparelli. 2010. Nouns are Vectors, Adjectives are Matrices: Representing Adjective-noun Constructions in Semantic Space. In Proceedings of EMNLP. William Blacoe and Mirella Lapata. 2012. A Comparison of Vector-based Representations for Semantic Composition. In Proceedings of EMNLP-CoNLL. Leonard Bloomfield. 1926. A Set of Postulates for the Science of Language. Language, 2(3):153–164. Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huanbo Luan. 2015. Joint Learning of Character and Word Embeddings. In Proceedings of IJCAI. Lei Dang and Lei Zhang. 2010. Method of Discriminant for Chinese Sentence Sentiment Orientation Based on HowNet. Application Research of Computers, 4:43. Zhendong Dong and Qiang Dong. 2003. HowNet-a Hybrid Language and Knowledge Resource. In Proceedings of NLP-KE. Xiangyu Duan, Jun Zhao, and Bo Xu. 2007. Word Sense Disambiguation through Sememe Labeling. In Proceedings of IJCAI. Xianghua Fu, Guo Liu, Yanyan Guo, and Zhiqiang Wang. 2013. Multi-aspect Sentiment Analysis for Chinese Online Social Reviews Based on Topic Modeling and HowNet Lexicon. Knowledge-Based Systems, 37:186–195. Kok Wee Gan and Ping Wai Wong. 2000. Annotating Information Structures in Chinese Texts Using HowNet. In Proceedings of Second Chinese Language Processing Workshop. Edward Grefenstette, Georgiana Dinu, Mehrnoosh Sadrzadeh, Marco Baroni, et al. 2013. Multi-step Regression Learning for Compositional Distributional Semantics. In Proceedings of IWCS. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. 
Experimental Support for a Categorical Compositional Distributional Model of Meaning. In Proceedings of EMNLP. Yihong Gu, Jun Yan, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, and Leyu Lin. 2018. Language Modeling with Sparse Product of Sememe Experts. In Proceedings of EMNLP. Junjie Huang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, and Maosong Sun. 2019. COS960: A Chinese Word Similarity Dataset of 960 Word Pairs. arXiv preprint arXiv:1906.00247. Huiming Jin, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, and Leyu Lin. 2018. Incorporating Chinese Characters of Words for Lexical Sememe Prediction. In Proceedings of ACL. 5715 Thomas Kober, Julie Weeds, Jeremy Reffin, and David Weir. 2016. Improving Sparse Word Representations with Distributional Inference for Semantic Composition. In Proceedings of EMNLP. Wei Li, Xuancheng Ren, Damai Dai, Yunfang Wu, Houfeng Wang, and Xu Sun. 2018. Sememe Prediction: Learning Semantic Knowledge from Unstructured TYextual Wiki Descriptions. arXiv preprint arXiv:1808.05437. Qun Liu and Sujian Li. 2002. Word Similarity Computing Based on HowNet. International Journal of Computational Linguistics & Chinese Language Processing, 7(2):59–76. Shu Liu, Jingjing Xu, Xuancheng Ren, and Xu Sun. 2018. Evaluating Semantic Rationality of a Sentence: A Sememe-Word-Matching Neural Network Based on HowNet. arXiv preprint arXiv:1809.03999. Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better Word Representations with Recursive Neural Networks for Morphology. In Proceedings of CoNLL. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of ACL. Jeff Mitchell and Mirella Lapata. 2008. Vector-based Models of Semantic Composition. In Proceedings of ACL. Jeff Mitchell and Mirella Lapata. 2009. Language Models Based on Semantic Composition. In Proceedings of EMNLP. Yilin Niu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Improved Word Representation Learning with Sememes. In Proceedings of ACL. Francis Jeffry Pelletier. 1994. The Principle of Semantic Compositionality. Topoi, 13(1):11–24. Francis Jeffry Pelletier. 2016. Semantic Compositionality, volume 1. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of EMNLP. Fanchao Qi, Yankai Lin, Maosong Sun, Hao Zhu, Ruobing Xie, and Zhiyuan Liu. 2018. Cross-lingual Lexical Sememe Prediction. In Proceedings of EMNLP. Richard Socher, John Bauer, Christopher D Manning, et al. 2013a. Parsing with Compositional Vector Grammars. In Proceedings of ACL. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic Compositionality through Recursive Matrix-vector Spaces. In Proceedings of EMNLP-CoNLL. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. In Proceedings of EMNLP. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D. Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. In Proceedings of EMNLP. Tim Van de Cruys, Thierry Poibeau, and Anna Korhonen. 2013. A Tensor-based Factorization Model of Semantic Compositionality. In Proceedings of NAACL-HLT. David Weir, Julie Weeds, Jeremy Reffin, and Thomas Kober. 2016. 
Aligning Packed Dependency Trees: a Theory of Composition for Distributional Semantics. Computational Linguistics, special issue on Formal Distributional Semantics, 42(4):727–761. Anna Wierzbicka. 1996. Semantics: Primes and Universals: Primes and Universals. Oxford University Press, UK. Ruobing Xie, Xingchi Yuan, Zhiyuan Liu, and Maosong Sun. 2017. Lexical Sememe Prediction via Word Embeddings and Matrix Factorization. In Proceedings of IJCAI. Ainur Yessenalina and Claire Cardie. 2011. Compositional Matrix-space Models for Sentiment Analysis. In Proceedings of EMNLP. Xiangkai Zeng, Cheng Yang, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Chinese LIWC Lexicon Expansion via Hierarchical Classification of Word Embeddings with Sememe Attention. In Proceedings of AAAI. Yuntao Zhang, Ling Gong, and Yongcheng Wang. 2005. Chinese Word Sense Disambiguation Using HowNet. In Proceedings of International Conference on Natural Computation. Yu Zhao, Zhiyuan Liu, and Maosong Sun. 2015. Phrase Type Sensitive Tensor Indexing Model for Semantic Composition. In Proceedings of AAAI. Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2016. DAG-Structured Long Short-Term Memory for Semantic Compositionality. In Proceedings of NAACL-HLT. Yan-Lan Zhu, Jin Min, Ya-qian Zhou, Xuan-jing Huang, and Li-De Wu. 2006. Semantic Orientation Computing Based on HowNet. Journal of Chinese information processing, 20(1):14–20.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5716–5728 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5716 Predicting Humorousness and Metaphor Novelty with Gaussian Process Preference Learning Edwin Simpson* and Erik-Lân Do Dinh* and Tristan Miller*† and Iryna Gurevych* *Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universität Darmstadt https://www.ukp.tu-darmstadt.de/ †Austrian Research Institute for Artificial Intelligence (OFAI) Freyung 6, 1010 Vienna, Austria http://www.ofai.at/ Abstract The inability to quantify key aspects of creative language is a frequent obstacle to natural language understanding. To address this, we introduce novel tasks for evaluating the creativeness of language—namely, scoring and ranking text by humorousness and metaphor novelty. To sidestep the difficulty of assigning discrete labels ornumeric scores, we learnfrom pairwise comparisons between texts. We introduce a Bayesian approach for predicting humorousness and metaphor novelty using Gaussian process preference learning (GPPL), which achieves a Spearman’s ρ of 0.56 against gold using word embeddings and linguistic features. Our experimentsshow that given sparse, crowdsourced annotation data, ranking using GPPL outperforms best–worst scaling. We release a new dataset for evaluating humour containing 28,210 pairwise comparisons of 4030 texts, and make our software freely available. 1 Introduction Creative language, such as humour and metaphor, is an essential part of everyday communication, yet remains a challenge for computational methods. Unlike much literal language, humour and figurative language require complex linguistic and background knowledge to understand, which are difficult to integrate with NLP methods (Hempelmann, 2008; Shutova, 2010). An important step in processing creative language is to recognise its presence in a piece of text. Humour and metaphors are two of the most frequently used types of creative language whose use most obscures the true meaning of a piece of text from its surface interpretation (Raskin, 1985, pp. 1– 5, 100–104; Black, 1955) and whose attributes, such as funniness and novelty, may be present or perceived to varying degrees (Bell, 2017; Dunn, 2010). For example, the level of appreciation (i.e., humorousness or equivalently funniness) of jokes can vary according to their content and structural features, such as nonsense or disparagement (Carretero-Dios et al., 2010) or, in the case of puns, contextual coherence (Lippman and Dunn, 2000) and the cognitive effort required to recover the target word (Hempelmann, 2003, pp. 123–124). With metaphors, the literal meaning of frequently used metaphors can drop out of everyday usage, leaving the metaphorical sense as the expected one (Shutova, 2015). For such conventionalised metaphors, NLP methods may identify the metaphorical sense from training data or resources such as WordNet, whereas novel metaphors require the ability to recognise the analogy being made. While previous work (see §2) has considered mainly binary classification approaches to humour or metaphor recognition, this paper focuses on quantifying humorousness and metaphor novelty. These tasks are important for downstream applications such as conversational agents or machine translation, which must choose the correct tone in response to humour, or find appropriate metaphors or wordplay in a target language. 
The degree of creativeness may also inform an application whether the semantics of a metaphor or joke can be inferred from similar examples. The examples in Tables 1 and 2 illustrate the difficulty of classifying text as humorous or metaphorical: in both cases, the examples are at least somewhat humorous or somewhat metaphorical, which makes it harder to assign discrete labels such as “funny”/“not funny” or “metaphor”/“literal”. Alternatively, we could assign numerical scores to quantify the humorousness or novelty. However, this can present problems for establishing a gold standard, as human annotators can assign scores inconsistently over time or interpret scores differently to one another (Ovadia, 2004; Yannakakis and Hallam, 2011; Kiritchenko and Mohammad, 2017). For example, if assigning scores between zero and 5717 Money is the Root of All Evil. For more info, send $10. “Have you seen my collection of ancient Chinese artifacts?” asked Tom charmingly. Table 1: Examples from the SemEval-2017 Task 7 dataset (Miller et al., 2017). The upper example was among those rated funniest by our annotators, while the lower example was among those rated least funny (presumably due to its very tortured pun on “Ming”). girls often produce responses like ‘often go through a bad patch for a year’ ‘when you tried to read the book, there was nothing there, because the words started as a coat-hanger to hang pictures on.’ Table 2: Examples of statements from the Metaphor Novelty dataset (Do Dinh et al., 2018) containing highlighted metaphors. The upper example is highly conventionalised, while the lower is more novel and creative. ten, some annotators may choose middling values while others may prefer extremes. To improve the reliability of annotations, we ask annotators to compare pairs of texts and choose the funniest or most metaphorically novel of the two. Unlike categorical labels, pairwise labels allow a total sorting of the texts since they avoid items having the same value, and can reduce the time taken to label a dataset (Yang and Chen, 2011; Kingsley and Brown, 2010; Kendall, 1948). Pairwise labels can be used to infer scores or rankings using techniques such as learning-to-rank (Joachims, 2002), preference learning (Thurstone, 1927), or best–worst scaling (Flynn and Marley, 2014). A drawback of pairwise labelling is that the number of possible pairs scales with O(n2), which becomes impractical for large datasets. To reduce annotation costs and enable quicker learning in new domains, it is therefore desirable to learn from sparse datasets rather than exhaustive pairwise labels. We establish four new tasks for scoring and ranking texts with both sparse and extensive sets of pairwise training labels. We apply these tasks to datasets for humorousness and metaphor novelty, which extend the datasets of Miller et al. (2017) and Do Dinh et al. (2018), respectively, and contain crowdsourced pairwise labels. As a baseline scoring method, we employ the scoring technique for best–worst scaling (BWS; Flynn and Marley, 2014), an established method that can also be applied to pairwise labels to estimate scores very efficiently. Our use of sparse, unreliable crowdsourced data motivates a second, Bayesian approach: Gaussian process preference learning (GPPL; Simpson and Gurevych, 2018), which exploits text features to boost performance when labels are sparse and make predictions for items not compared in the training set. 
Our main contributions are (1) four novel tasks for quantifying aspects of creative language, (2) an annotated dataset containing pairwise comparisons of humorousness between sentences, (3) a Bayesian approach for scoring short texts by humorousness and metaphor novelty given sparse pairwise annotations, and (4) an empirical investigation showing that word embeddings and linguistic features can be used to predict humorousness and metaphor novelty, and that GPPL outperforms BWS when faced with sparse data. We publish the datasets and software1 to encourage further research on these tasks, and to serve the needs of qualitative humanities research into humour and metaphor. 2 Background and Related Work 2.1 Humorousness The automatic processing of verbal humour has applications in human–computer interaction, machine and machine-assisted translation, and the digital humanities (Miller et al., 2017). To give just one example, an intelligent conversational agent should ideally detect and respond appropriately to comments made in jest. The vast majority of past approaches to the automatic recognition of humour (e.g., Mihalcea and Strapparava, 2006; Purandare and Litman, 2006; Sjöbergh and Araki, 2007; Mihalcea et al., 2010; Zhang and Liu, 2014; Yang et al., 2015; Miller et al., 2017; Mikhalkova and Karyakin, 2017; Chen and Soo, 2018) have framed the problem as a binary classification task, which is sufficient for the detection step of our example. However, the ability to assess the degree of humour embodied in an utterance may be necessary for the agent to make a contextually appropriate, humanlike response – for example, a groan for a terrible joke, a chuckle for a middling one, or uproarious laughter for a clever one. Only a few studies have dealt with determining the (relative) funniness of texts. Shahaf et al. (2015) presented a supervised system for determining which of a given pair of cartoon captions is funnier, using features such as sentiment, perplex1 https://github.com/ukplab/ acl2019-GPPL-humour-metaphor 5718 ity, readability, and keyword descriptions of the cartoon image and its anomalies. While the method achieves promising results (64% accuracy, versus 55% for a bag-of-words baseline), it cannot quantify humorousness on a continuum; multiple captions can be ranked only tournament-style. Moreover, the keyword features are specific to visual rather than verbal humour, and must be manually sourced at great expense, making the method unsuitable for classifying unseen examples. In parallel work, Radev et al. (2016) tested various heuristics for ranking pairs or sets of the same captions by funniness. Such heuristics included tf–idf, n-gram frequency, syntactic complexity, and references to objects in the cartoon (which, again, is specific to this multimodal form of humour and depends on manual annotation). The heuristics were evaluated in isolation, rather than as part of a supervised or ensemble classifier. This, combined with the study’s unusual evaluation metrics, precludes a meaningful comparison with Shahaf et al. (2015). More recently, the #HashtagWars evaluation campaign (Potash et al., 2017) defined two humour ranking tasks for Twitter data. The organisers compiled data from a TV game show whose producers solicit funny tweets for a given hashtag and then partition them into three sets: the funniest tweet, nine runners-up, and the remainder. 
The campaign had two computational tasks: (a) given a pair of tweets from different sets, determine which tweet is funnier; and (b) classify all tweets according to their set. As with Shahaf et al. (2015), the determination of humour here was coarse-grained, with no attempt to quantify it. A similar corpus (but no classification experiment) was presented by Castro et al. (2018b) and later developed into a shared task (Castro et al., 2018a). The dataset’s crowd annotators were asked to classify the humorousness of tweets on a Likert scale, grouping them into five sets versus Potash et al.’s (2017) three. Mindful of psychological studies on subjective evaluations (Thurstone, 1927), Shahaf et al. (2015) reject the idea that such ordinal rating data can be treated as interval data, and argue that direct comparisons are preferable for humour judgements. 2.2 Metaphor Novelty Most previous work on metaphor detection has been conducted with a binary classification in mind (metaphor vs. literal). This dichotomy is reflected in more widely used datasets, such as the VU Amsterdam Metaphor Corpus (VUAMC; Steen et al., 2010) or the datasets in multiple languages created by Tsvetkov et al. (2014). Advantages include the wide variety of approaches that can be (and have been) employed for automatic detection and a rather straightforward annotation process. This usually also entails a high interannotator agreement, meaning that the annotations are reliable. In the case of VUAMC, this amounts to a Cohen’s κ of 0.80. However, the two-class modelling of metaphor has certain limits. These become obvious when looking at examples from the aforementioned datasets (see Table 2, which includes an example from VUAMC). In particular, many metaphors annotated in the binary datasets differ widely in their metaphoricity – i.e., their degree of being a metaphor. Thus, while the annotations might be reliable, they might not be very meaningful. A graded approach to metaphor better accommodates its subjective and fuzzy nature, but previous work taking such a fine-grained approach is less common. Dunn (2014) conducted experiments regarding the notion of metaphoricity on a sentence basis. Using crowdsourcing, he obtained a small corpus of 60 sentences with metaphoricity scores between 0 (non-metaphoric) and 1 (highly metaphoric). This dataset was then used to determine various features from which a metaphoricity measure could be computed. Due to the lack of a large, graded evaluation corpus, the measure was tested on VUAMC along with a threshold relative to the number of contained metaphors. Haagsma and Bjerva (2016) employed clustering and neural network approaches using selectional preferences to detect novel metaphors. While the violation of selectional preferences had been used in general metaphor detection before, Haagsma and Bjerva (2016) argue that they are specifically indicative of novel metaphors as opposed to conventionalised ones. However, the authors also struggled with the lack of graded annotations to test their approach. More recently, Parde and Nielsen (2018) and Do Dinh et al. (2018) created graded metaphoricity layers for VUAMC using crowdsourcing, with the former approach labelling grammatical constructions and the latter labelling tokens. However, manually labelling larger amounts of data is costly, even with crowdsourcing. Further, while VUAMC covers multiple domains, it is still limited in scope, size, and language. 
Thus, an approach is needed to generalise from few graded or ranked metaphor 5719 annotations to a larger corpus or different domains. 2.3 Learning from Pairwise Comparisons Pairwise comparisons can be used to infer rankings or ratings by assuming a random utility model (Thurstone, 1927), meaning that the annotator chooses an instance with probability p, where p is a function of the utility of the instance. Therefore, when instances in a pair have similar utilities, the annotator selects one with a probability close to 0.5, while for instances with very different utilities, the instance with higher utility will be chosen consistently. The random utility model forms the core of two popular preference learning models, the Bradley–Terry model (Bradley and Terry, 1952; Luce, 1959; Plackett, 1975), and the Thurstone–Mosteller model (Thurstone, 1927; Mosteller, 1951). Given this model and a set of pairwise annotations, probabilistic inference can be used to retrieve the latent utilities of the instances. Besides pairwise comparisons, a random utility model is also employed by MaxDiff(Marley and Louviere, 2005), a model for best–worst scaling (BWS), in which the annotator chooses the best and worst instances from a set. While the term “best–worst scaling” originally applied to the data collection technique (Finn and Louviere, 1992), it now also refers to models such as MaxDiffthat describe how annotators make discrete choices. Empirical work on BWS has shown that MaxDiff scores (instance utilities) can be inferred using either maximum likelihood or a simple counting procedure that produces linearly scaled approximations of the maximum likelihood scores (Flynn and Marley, 2014). The counting procedure defines the score for an instance as the fraction of times the instance was chosen as best, minus the fraction of times the instance was chosen as worst, out of all comparisons including that instance (Kiritchenko and Mohammad, 2016). From this point on, we refer to the counting procedure as BWS, and apply it to the tasks of inferring scores from both best– worst scaling annotations for metaphor novelty and pairwise annotations for funniness. To make predictions for unlabelled instances and cope better with sparse pairwise labels, Chu and Ghahramani (2005) proposed Gaussian process preference learning (GPPL), a Thurstone– Mosteller–based model that accounts for the features of the instances when inferring their scores. GPPL uses Bayesian inference, which has been shown to cope better with sparse and noisy data (Xiong et al., 2011; Titov and Klementiev, 2012; Beck et al., 2014; Lampos et al., 2014), including disagreements between multiple annotators (Cohn and Specia, 2013; Simpson et al., 2015; Felt et al., 2016; Kido and Okamoto, 2017). Through the random utility model, GPPL is able to handle disagreements between annotators as noise, since no label has a probability of one of being selected. Given a set of pairwise labels, and the features of labelled instances, GPPL can estimate the posterior distribution over the utilities of any instances given their features. Relationships between instances are modelled by a Gaussian process (GP), which computes the covariance between instance utilities as a function of their features (see Rasmussen and Williams, 2006). 
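For concreteness, the two scoring ideas just introduced can be sketched as follows: the BWS counting procedure applied to pairwise labels, and the probit choice probability underlying a Thurstone–Mosteller-style random utility model. This is an illustrative sketch under our own variable names, not the BWS or GPPL implementations used later in the paper; the input is assumed to contain only decided comparisons (ties are ignored).

```python
from collections import defaultdict
from scipy.stats import norm

def bws_counting_scores(pairwise_labels):
    """BWS counting procedure: fraction of comparisons an instance wins minus the
    fraction it loses, out of all comparisons involving that instance.

    pairwise_labels: iterable of (winner_id, loser_id) annotations.
    """
    wins, losses = defaultdict(int), defaultdict(int)
    for winner, loser in pairwise_labels:
        wins[winner] += 1
        losses[loser] += 1
    instances = set(wins) | set(losses)
    return {i: (wins[i] - losses[i]) / (wins[i] + losses[i]) for i in instances}

def thurstone_mosteller_prob(utility_a: float, utility_b: float, noise_scale: float = 1.0) -> float:
    """Random utility model: probability that a is preferred over b is close to 0.5
    when the utilities are similar and close to 1 when a's utility is much higher."""
    return float(norm.cdf((utility_a - utility_b) / noise_scale))
```

In the pairwise case, each annotation contributes one "best" and one "worst" choice, so the two fractions share the same denominator and the counting score reduces to (wins − losses) / (wins + losses).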
Since typical methods for posterior inference (Nickisch and Rasmussen, 2008) are not scalable (O(n3), where n is the number of instances), Simpson and Gurevych (2018) introduced a scalable method for GPPL that permits arbitrarily large numbers of instances and pairs. This method uses stochastic variational inference (Hoffman et al., 2013), which limits computational complexity by substituting the instances for a fixed number of inducing points during inference. Simpson and Gurevych (2018) applied GPPL to ranking arguments by convincingness, which, like funniness and metaphor novelty, is an abstract linguistic property that is hard to quantify directly. They found that GPPL outperformed SVM and BiLSTM regression models that were trained directly on gold-standard scores. Regression approaches are also unsuitable for our scenario, since utilities for training the regression model would first need to be estimated from pairwise labels using, for example, BWS. This type of pipeline approach often suffers from error propagation, which integrated methods such as GPPL avoid (Finkel et al., 2006). We therefore propose the use of GPPL for our creative language tasks to provide a strong baseline that, unlike BWS, can exploit textual features as well as pairwise labels. 3 Data Humour dataset. Our humour dataset is an extension of the data provided for the SemEval-2017 pun recognition challenge (Miller et al., 2017). Several factors motivated our selection of this dataset: (1) Unlike the multimodal datasets of Shahaf et al. 5720 (2015) and Radev et al. (2016), the humour in Miller et al. (2017) is purely verbal. (2) Unlike the cartoon caption and Twitter datasets used in previous studies, the SemEval-2017 jokes were sourced largely from professional humorists and curated joke collections, providing a better a priori expectation of their quality and use of standard language. (3) The dataset has seen use even outside the original shared task (e.g., Mikhalkova and Karyakin, 2017; Cai et al., 2018; Poliak et al., 2018). (4) The jokes have been pre-classified according to their type (homographic puns, heterographic puns, and non-puns), so our extension of it could serve the needs of future qualitative research into humour. The original dataset consists of 4030 short texts averaging about 11 words in length. Of the texts, 3398 contain humour (mostly, but not exclusively, punning jokes) and 632 do not (proverbs and aphorisms). Our examination of the data revealed three duplicate instances in the humour class; to preserve the size of the dataset, we replaced these with three new punning jokes provided to us by the dataset’s original compilers. We applied humorousness annotations using a crowdsourcing setup. First, we randomly paired the texts such that each text appeared in exactly 14 unique pairs. Each of these 28,210 unique pairs was then presented to five annotators who were asked to judge which text (if either) was funnier. Annotators were recruited from American users of the Amazon Mechanical Turk crowdsourcing platform and paid at a rate commensurate with the US federal minimum wage. To generate gold-standard scores, we apply BWS to the complete dataset. To evaluate whether the number of annotations is sufficient to produce a reliable gold standard, we randomly subsampled the annotations to produce subsamples with one to four annotators per pair. We then computed Spearman’s rank correlation coefficient, ρ, between the gold-standard ranking and BWS scores computed for each subsample. 
The results averaged over ten random repeats (see Table 3) show that the rankings are very similar even when fewer annotators label each pair. We also computed the mean interannotator agreement (Krippendorff’s α) across instances. The result, 0.80, indicates a satisfactory level of agreement among the crowd workers (Artstein and Poesio, 2008). Taken together, these results suggest that five annotators per pair is more than sufficient to reach a consensus ranking using BWS. # annotators 1 2 3 4 Spearman’s ρ 0.81 0.92 0.97 0.99 Table 3: Agreement measures for the humour dataset. humour metaphor # instances 4,030 15,181 # unique pairs 28,210 65,323 # unique pairs for each instance 14 (avg) 8.6 annotations/pair 5 (avg) 1.55 Table 4: Statistics for the humour and metaphor novelty datasets. Metaphor Novelty Dataset. We use the metaphor novelty dataset of Do Dinh et al. (2018), which contains novelty scores for metaphors (i.e., metaphoric tokens) from the VU Amsterdam Metaphor Corpus (Steen et al., 2010) across four genres: news, fiction, conversation transcripts, and academic texts. The metaphors were compared by crowd workers using best–worst scaling tuples of four randomly chosen metaphors – that is to say, annotators were presented with random selections of four sentences with the metaphoric tokens highlighted, and they selected the most novel and most conventionalised metaphors from this set. The tuples were chosen such that each metaphor appeared in six different comparisons, and each comparison was labelled by three annotators. For the new tasks proposed in this paper, we extract from each of these four-tuples, for each annotator, the pair comparing the most novel to the most conventionalised metaphor token in context. Since we create only those pairs containing the most and least novel instances in each tuple, each tuple generates only one pairwise comparison per worker. Because not all pairs are unique, and different pairs were extracted for different annotators, the number of unique pairs decreases, and the number of annotations per unique pair is less than three. We also use the gold standard provided by Do Dinh et al. (2018), which was obtained by applying BWS to the complete dataset. Table 4 presents some statistics on the humour and metaphor novelty datasets. 4 Task Definitions We introduce tasks to evaluate models for ranking instances by humorousness and metaphor novelty given pairwise comparisons. For the humorousness dataset, an instance is represented by a short text 5721 (typically 1–2 sentences) that possibly forms a joke. For the metaphor novelty dataset, an instance is represented by a metaphoric token and its sentential context. The tasks are designed to test the following hypotheses regarding our proposed Bayesian approach, GPPL, and other ranking models proposed in future: (a) given a sufficient number of pairwise labels, the proposed model converges close to the gold standard; (b) the proposed model is able to generalise to unseen instances using a combination of embeddings and linguistic features; (c) with a sparser set of pairwise training labels, the proposed model can exploit feature data to produce more accurate predictions than BWS; and (d) obtaining the same number of annotations for each pair to mitigate annotator disagreement is less effective than randomly choosing pairs to be annotated. To test these hypotheses, we devise a number of tasks that can be tested on both datasets. Task 1: Test (a) the convergence of the proposed model to the gold standard. 
First, train the model on all available annotations without using any feature data – that is, learn a ranking from pairwise comparisons only. Using this model, estimate scores for all instances and rank the instances according to these scores. Compare this ranking to the gold BWS ranking using Spearman’s rank correlation coefficient (ρ). Task 2: Evaluate (b) the predictive ability of the proposed model. Randomly select 60% of the instances as a training set. Train the model on only those annotations that compare instances in the training set, then predict scores for instances in the test set (20%). Rank the test instances according to those scores and evaluate the ranking against BWS gold using ρ. Task 3: Test (c) predictions for test instances when annotation data is sparse. Subsample the training set from Task 2 by randomly selecting 5%, 10%, 20%, 33%, and 66% of the original training annotations. To test hypothesis (d), we compare two subsampling methods: annotation subsampling (choose a random subset of pairwise annotations) and pair subsampling (first choose unique random pairs of instances, then take all annotations associated with those pairs). Pair sampling ensures that all selected pairs have multiple annotations from different annotators, which may help to mitigate noise, while annotation subsampling provides a more diverse coverage of possible pairs of instances. For each subsample, train the model and rank the instances in the test set. Evaluate against the gold-standard ranking using ρ. Task 4: Test (c) the estimated scores for training instances when the pairwise annotation data is sparse. Repeat the same setup as Task 3, but evaluate the rankings for instances in the training set. This allows us to evaluate how many annotations are required to reliably rank a set of instances with each scoring method and subsampling method (d). 5 Experiments 5.1 Experimental Setup We use the tasks defined in the previous section to evaluate the suitability of our proposed Bayesian approach, GPPL. For both datasets, the GPPL model is tested with 300-dimensional average word embeddings, using the word2vec model trained on Google News (Mikolov et al., 2013). For the metaphor task, the embedding for the token used metaphorically is concatenated with the average word embeddings that represent the subsuming context sentence. For Task 2 on both datasets, we augment the average word embeddings with linguistic features: average token frequency (taken from a 2017 Wikipedia dump), a polysemy measure represented by the average number of synsets (taken from WordNet 3.0), and average bigram frequency (taken from Google Books Ngrams). Again for the metaphor task, we additionally append the metaphor token frequency if the frequency feature is selected. We repeat Task 2 with different subsets of these features to determine the most effective combination. The token frequency feature has previously been shown to distinguish between metaphoric and literal use (Beigman Klebanov et al., 2014), but also to be indicative of metaphor novelty (Do Dinh et al., 2018). By incorporating the polysemy feature we seek to increase performance especially for the funniness dataset, which includes many puns. The bigram feature reinforces the frequency feature by highlighting instances that include rare bigrams. For best–worst scaling, we use the implementation provided by Kiritchenko and Mohammad (2016). We use the GPPL implementation provided by Simpson and Gurevych (2018). 
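A simplified sketch of this feature construction is shown below. The gensim and NLTK WordNet calls, the embedding file path, and the token_freq/bigram_freq lookup tables are assumptions standing in for the Wikipedia, WordNet 3.0 and Google Books resources described above; this is not the authors' code.

```python
import numpy as np
from gensim.models import KeyedVectors
from nltk.corpus import wordnet as wn

# Placeholder path for the Google News word2vec vectors.
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def avg_embedding(tokens):
    vecs = [w2v[t] for t in tokens if t in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

def instance_features(tokens, token_freq, bigram_freq, metaphor_token=None):
    """300-d average embedding plus average token frequency, polysemy (mean WordNet
    synset count) and average bigram frequency; for metaphor instances, the embedding
    of the metaphorically used token is concatenated with the sentence average."""
    emb = avg_embedding(tokens)
    if metaphor_token is not None and metaphor_token in w2v:
        emb = np.concatenate([w2v[metaphor_token], emb])
    freq = np.mean([token_freq.get(t, 0.0) for t in tokens])
    polysemy = np.mean([len(wn.synsets(t)) for t in tokens])
    bigrams = list(zip(tokens[:-1], tokens[1:]))
    bg_freq = np.mean([bigram_freq.get(b, 0.0) for b in bigrams]) if bigrams else 0.0
    return np.concatenate([emb, [freq, polysemy, bg_freq]])
```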
To ensure a reasonable computation time, we follow the authors’ recommendations for hyperparameters and set the number of inducing points to M = 500 and the length-scales using the median heuristic. In future work, it may be possible to tune these hyperparameters further; however, M is a trade-off 5722 instances humour metaphor all 0.917 0.736 no tied BWS scores 0.951 0.737 Table 5: Task 1. Spearman’s ρ between GPPL and gold-standard scores produced by BWS when trained without features. between computation time and model accuracy, as the training time scales with O(M3) computational cost. With our current setup, the combined training and prediction time was approximately 2 hours for the metaphor novelty dataset and 2.5 hours for the funniness dataset running on a 24-core cluster with 2 GHz CPU cores. 5.2 Results Task 1. We compare the BWS gold-standard ranking to the GPPL ranking produced when trained on all available pairwise annotations. We ignore feature data, representing instances solely by an ID instead of a feature vector. This is feasible because we train and test on the same instances, and so do not need features to generalise from training to test instances. The resulting correlations are shown in the first line of Table 5. While the rankings for the humorousness dataset have high correlation, there is still some discrepancy for metaphor novelty. We note that the BWS scoring method means that multiple instances receive the same scores, while GPPL assigns unique values to all instances. To investigate whether these ties affect the rank correlations, we computed new rankings without ties by randomly sampling one instance for each tie, then computing Spearman’s ρ for the subsampled instances. The mean over ten subsamples is shown in the second row of Table 5. For the humorousness dataset, the correlation increases when ties are excluded, suggesting that ties contribute to the difference between the BWS and GPPL rankings. The differences caused by tied BWS scores do not indicate errors but show a small difference due to the nature of BWS and GPPL scores. However, for metaphor novelty, the difference when tied scores are removed is negligible. Instead, the lower correlation compared to the humour dataset hints at the more uneven annotation of the metaphors – that is, there are many very conventionalised instances, so each one was chosen less frequently as the least novel instance in a four-tuple, whereas the smaller number of novel metaphors means that each one is selected multiple times as the most novel instance in a four-tuple. This results in few pairs containing the highly-conventionalised instances, which introduces noise into the BWS and GPPL rankings. In contrast to the humour dataset, which is roughly balanced between funny and non-funny texts, the metaphor dataset is much more skewed towards one class, the conventionalised metaphors. Unlike GPPL, the BWS score for a given instance does not take into account the scores of the instances that it was compared against. We investigate this effect by computing, for each instance s, the total rank cs of instances compared against s, where cs is the sum of GPPL ranks of instances that were annotated as funnier or more novel than s, minus the sum of ranks of instances that were annotated as less funny or novel than s. We then compute correlations between cs and the difference in ranking between GPPL and BWS, obtaining both Spearman’s ρ and Pearson’s r = 0.21 for the humorousness dataset, and ρ and r = 0.22 for metaphor novelty. 
This indicates that the choice of instances to compare against contributed to the difference between GPPL and BWS rankings: the GPPL score for an instance is estimated relative to the scores of instances that it was compared against, while BWS scores are not. This difference may be greater for the metaphor dataset, since there are fewer pairs per instance and hence potentially noisier rankings. The distributions of differences between rankings are shown in Figure 1, showing that the majority of differences are small for both datasets. This indicates that our proposed GPPL model can capture the gold-standard ranking adequately given a sufficient amount of pairwise training data. For the humour dataset, we also used the original classifications from Miller et al. (2017) to evaluate how well the BWS and GPPL rankings separate nonpun instances from puns using the area under the receiver operating characteristic curve (AUROC; Fawcett, 2006). This area represents the probability that a randomly chosen pun will be ranked higher than a randomly chosen non-pun. Note, however, that some non-puns may contain other types of humour, so we do not expect to achieve a perfect score. We find that both BWS and GPPL achieve AUROC = 0.8, which reflects a good separation of the two classes. Task 2. The results for predicting unseen instances in Task 2 are shown in Table 6. For both datasets, the combination of word2vec embeddings 5723 # Sentences 2000 0 2000 Rank difference 0 200 400 No. sentences 10000 0 10000 Rank difference 0 500 1000 Rank difference Figure 1: Task 1. Distribution of rank differences between BWS and GPPL scores for humorousness (left) and metaphor novelty (right). features humour metaphor w2v 0.531 0.551 w2v, freq., polysemy 0.552 0.540 w2v, freq., bigrams 0.561 0.562 w2v, polysemy, bigrams 0.537 0.523 w2v, freq., polysemy, bigrams 0.542 0.516 Table 6: Task 2. Predicting rankings on unseen test instances: Spearman’s ρ against BWS gold standard (p ≪0.01). (w2v), average token frequency (freq.), and average bigram frequency performs best. Additionally including the polysemy feature generally decreased performance for the metaphor novelty dataset, but improved performance on the funniness dataset when compared to the word2vec-only experiment. The improvement due to token and bigram frequency suggests that the average word embeddings do not capture all word-level information. We compare the scores produced by BWS and GPPL for the best feature combination in Figures 2 and 3. In the metaphor novelty dataset, the GPPL scores are contained mainly in the range −2 to GPPL score Gold (BWS) scores Figure 2: Gold vs. GPPL scores for the best Task 2 model for humour. GPPL score Gold (BWS) scores Figure 3: Gold vs. GPPL scores for the best Task 2 model for metaphor novelty. Humour Metaphor ρ 0 20 40 1000 pairwise training labels 0.40 0.45 0.50 0.55 Humor annotation pair 0 10 20 30 1000 pairwise training labels 0.48 0.50 0.52 0.54 0.56 Metaphor annotation pair 1000s pairwise training labels Figure 4: Task 3. Spearman’s ρ for rank prediction on test instances (subsampled by pair or by annotation) with decreasing data sparsity (p ≪0.01). 2, with a few extreme outliers. In contrast, the BWS scores are all between −0.8 and 0.8. 
The ten largest outliers include two occurrences each of the metaphor tokens “fit” and “let”, which are both rated correctly as highly conventionalised (e.g., in the sentence “How many times must I tell you that if you let things go too far, nobody can stop what will undoubtedly happen?”). The extreme outliers for GPPL scores are, however, not present in the humorousness dataset. In GPPL, the scores reflect confidence: the larger number of pairwise annotations in the metaphor dataset may increase the range of scores; smaller values may also correspond to noisier or more contradicting annotations. Task 3. Figure 4 shows the results of Task 3, with the rightmost points corresponding to the Task 2 results. The results show that GPPL handles smaller training set sizes down to 5% with a much smaller decrease in performance compared to BWS. The annotation sampling strategy appears to be 5724 Humour Metaphor ρ 10 20 1000 pairwise training labels 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Humor GPPL, annotation GPPL, pair BWS, annotation BWS, pair 5 10 15 20 25 1000 pairwise training labels 0.2 0.3 0.4 0.5 0.6 0.7 Metaphor GPPL, annotation GPPL, pair BWS, annotation BWS, pair 1000s pairwise training labels Figure 5: Task 4. Spearman’s ρ for rank prediction on training instances (subsampled by pair or by annotation) with decreasing data sparsity (p ≪0.01). beneficial when data is sparse: it provides a greater diversity of pairs, so may provide better coverage over the set of instances, and therefore the feature space. Task 4. In Figure 5, we show the results for Task 4, comparing GPPL against BWS for instances in the training set. Gold-standard rankings were not used in training, and the ranks were inferred by BWS and GPPL from the pairwise labels; hence, reducing the amount of pairwise data available reduces the quality of the rankings. For GPPL, we see that the ranking performance with sparse data is substantially higher than BWS. This is particularly notable for metaphor novelty, while for funniness, using the annotation strategy, the performance of BWS converges to that of GPPL as the dataset is increased. While GPPL performance with the pair strategy is highest with the small training set size for humour, it falls below that of BWS as the dataset increases. The results further suggest that the annotation strategy is preferable, which may inform future crowdsourcing efforts, and that while GPPL performs best with small training data, there are situations where BWS may have an advantage. 6 Conclusion This paper has introduced new tasks for evaluating the degree of humorousness of a short text and the novelty of a metaphor within a short text. For humorousness, we have provided a new set of crowdsourced pairwise comparisons, while for metaphor novelty we extracted pairwise labels from existing best–worst scaling data. We have introduced a Bayesian approach, Gaussian process preference learning, that can use sparse pairwise annotations to estimate humorousness or novelty scores given word embeddings and linguistic features. Our experiments showed that GPPL outperforms BWS at ranking instances in the training set when few pairwise labels are available, and generalises well to ranking test instances that were not compared in the training set. 
Given that our model achieves good results with rudimentary, task-agnostic linguistic features, in future work we plan to investigate the use of humourand metaphor-specific features, including some of those used in past work (see §2) as well as those inspired by the prevailing linguistic theories of humour (Attardo, 1994) and metaphor (Black, 1955; Lakoffand Johnson, 1980). The benefits of including word and bigram frequency also point to possible further improvements using n-grams, tf–idf, or other task-agnostic linguistic features. Finally, we plan to further extend and use the humour dataset to investigate open questions on the linguistics of humour, such as what relationships hold between a pun’s phonology and its “successfulness” or humorousness (Lagerquist, 1980; Hempelmann and Miller, 2017). Acknowledgments This work has been supported by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01UG1816B (CEDIFOR), by the German Research Foundation (DFG) as part of the QA-EduInf project (grants GU 798/18-1 and RI 803/12-1), by the DFG-funded research training group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES; GRK 1994/1), and by the Austrian Science Fund (FWF) under project M 2625-N31. The Austrian Research Institute for Artificial Intelligence is supported by the Austrian Federal Ministry for Science, Research and Economy. References Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Salvatore Attardo. 1994. Linguistic Theories of Humor. Mouton de Gruyter, Berlin. Daniel Beck, Trevor Cohn, and Lucia Specia. 2014. Joint emotion analysis via multi-task Gaussian processes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 5725 pages 1798–1803. Association for Computational Linguistics. Beata Beigman Klebanov, Chee Wee Leong, Michael Heilman, and Michael Flor. 2014. Different texts, same metaphors: Unigrams and beyond. In Proceedings of the Second Workshop on Metaphor in NLP, pages 11–17. Association for Computational Linguistics. Nancy D. Bell. 2017. Failed humor. In Salvatore Attardo, editor, The Routledge Handbook of Language and Humor, Routledge Handbooks in Linguistics, pages 356–370. Routledge, New York. Max Black. 1955. Metaphor. Proceedings of the Aristotelian Society, 55(1):273–294. Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324– 345. Yitao Cai, Yin Li, and Xiaojun Wan. 2018. Sense-aware neural models for pun location in texts. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 546– 551. Association for Computational Linguistics. Hugo Carretero-Dios, Cristino Pérez, and Gualberto Buela-Casal. 2010. Assessing the appreciation of the content and structure of humor: Construction of a new scale. Humor: International Journal of Humor Research, 23(3):307–325. Santiago Castro, Luis Chiruzzo, and Aiala Rosá. 2018a. Overview of the HAHA task: Humor analysis based on human annotation at IberEval 2018. In Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages, volume 2150 of CEUR Workshop Proceedings, pages 187–194. Spanish Society for Natural Language Processing. Santiago Castro, Luis Chiruzzo, Aiala Rosá, Diego Garat, and Guillermo Moncecchi. 2018b. 
A crowdannotated Spanish corpus for humor analysis. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 7–11. Association for Computational Linguistics. Peng-Yu Chen and Von-Wun Soo. 2018. Humor recognition using deep learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2, pages 113– 117. Association for Computational Linguistics. Wei Chu and Zoubin Ghahramani. 2005. Preference learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, pages 137–144. ACM. Trevor Cohn and Lucia Specia. 2013. Modelling annotator bias with multi-task Gaussian processes: An application to machine translation quality estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, volume 1, pages 32–42. Association for Computational Linguistics. Erik-Lân Do Dinh, Hannah Wieland, and Iryna Gurevych. 2018. Weeding out conventionalized metaphors: A corpus of novel metaphor annotations. In 2018 Conference on Empirical Methods in Natural Language Processing, pages 1412–1424. Association for Computational Linguistics. Jonathan Dunn. 2010. Gradient semantic intuitions of metaphoric expressions. Metaphor and Symbol, 26(1):53–67. Jonathan Dunn. 2014. Measuring metaphoricity. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 745–751. Association for Computational Linguistics. Tom Fawcett. 2006. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861–874. Paul Felt, Eric K. Ringger, and Kevin D. Seppi. 2016. Semantic annotation aggregation with conditional crowdsourcing models and word embeddings. In Proceedings of the 26th International Conference on Computational Linguistics, pages 1787–1796. Jenny Rose Finkel, Christopher D. Manning, and Andrew Y. Ng. 2006. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 618–626. Association for Computational Linguistics. Adam Finn and Jordan J. Louviere. 1992. Determining the appropriate response to evidence of public concern: The case of food safety. Journal of Public Policy & Marketing, 11(2):12–25. Terry N. Flynn and A. A. J. Marley. 2014. Best–worst scaling: Theory and methods. In Stephane Hess and Andrew Daly, editors, Handbook of Choice Modelling, pages 178–201. Edward Elgar Publishing, Cheltenham, UK. Hessel Haagsma and Johannes Bjerva. 2016. Detecting novel metaphor using selectional preference information. In Proceedings of the Fourth Workshop on Metaphor in NLP, pages 10–17. Association for Computational Linguistics. Christian F. Hempelmann. 2003. Paronomasic Puns: Target Recoverability Towards Automatic Generation. Ph.D. thesis, Purdue University, West Lafayette, IN, USA. Christian F. Hempelmann. 2008. Computational humor: Beyond the pun? In Victor Raskin, editor, The Primer of Humor Research, number 8 in Humor Research, pages 333–360. Mouton de Gruyter, Berlin. 5726 Christian F. Hempelmann and Tristan Miller. 2017. Puns: Taxonomy and phonology. In Salvatore Attardo, editor, The Routledge Handbook of Language and Humor, Routledge Handbooks in Linguistics, pages 95–108. Routledge, New York. Matthew D. Hoffman, David M. Blei, Chong Wang, and John William Paisley. 2013. 
Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 133–142. ACM. Maurice George Kendall. 1948. Rank Correlation Methods. Griffin, Oxford, UK. Hiroyuki Kido and Keishi Okamoto. 2017. A Bayesian approach to argument-based reasoning for attack estimation. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 249–255. International Joint Conferences on Artificial Intelligence. David C. Kingsley and Thomas C. Brown. 2010. Preference uncertainty, preference refinement and paired comparison experiments. Land Economics, 86(3):530–544. Svetlana Kiritchenko and Saif M. Mohammad. 2016. Capturing reliable fine-grained sentiment associations by crowdsourcing and best–worst scaling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 811–817. Association for Computational Linguistics. Svetlana Kiritchenko and Saif M. Mohammad. 2017. Best–worst scaling more reliable than rating scales: A case study on sentiment intensity annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 465–470. Association for Computational Linguistics. Linnea M. Lagerquist. 1980. Linguistic evidence from paronomasia. In Papers from the Sixteenth Regional Meeting Chicago Linguistic Society, pages 185–191. University of Chicago. George Lakoffand Mark Johnson. 1980. Metaphors We Live By. Chicago University Press, Chicago, IL, USA. Vasileios Lampos, Nikolaos Aletras, Daniel PreoţiucPietro, and Trevor Cohn. 2014. Predicting and characterising user impact on Twitter. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 405–413. Association for Computational Linguistics. Louis G. Lippman and Mara L. Dunn. 2000. Contextual connections within puns: Effects on perceived humor and memory. Journal of General Psychology, 127(2):185–197. R. Duncan Luce. 1959. On the possible psychophysical laws. Psychological Review, 66(2):81–95. Anthony A. J. Marley and Jordan J. Louviere. 2005. Some probabilistic models of best, worst, and best– worst choices. Journal of Mathematical Psychology, 49(6):464–480. Rada Mihalcea and Carlo Strapparava. 2006. Learning to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126–142. Rada Mihalcea, Carlo Strapparava, and Stephen Pulman. 2010. Computational models for incongruity detection in humour. In Computational Linguistics and Intelligent Text Processing: 11th International Conference, Cicling 2010, number 6008 in Theoretical Computer Science and General Issues, pages 364–374, Berlin/Heidelberg. Springer. Elena Mikhalkova and Yuri Karyakin. 2017. Detecting intentional lexical ambiguity in English puns. In Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference “Dialogue” (2017), volume 1, pages 167– 178. HSE – Higher School of Economics National Research University. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and JeffDean. 2013. Distributed representations of words and phrases and their compositionality. 
In Proceedings of the 26th International Conference on Neural Information Processing Systems, volume 2, pages 3111–3119. Tristan Miller, Christian F. Hempelmann, and Iryna Gurevych. 2017. SemEval-2017 Task 7: Detection and interpretation of English puns. In Proceedings of the 11th International Workshop on Semantic Evaluation, pages 58–68. Association for Computational Linguistics. Frederick Mosteller. 1951. Remarks on the method of paired comparisons: I. The least squares solution assuming equal standard deviations and equal correlations. Psychometrika, 16(1):3–9. Hannes Nickisch and Carl Edward Rasmussen. 2008. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9:2035–2078. Seth Ovadia. 2004. Ratings and rankings: Reconsidering the structure of values and their measurement. International Journal of Social Research Methodology, 7(5):403–414. Nathalie Parde and Rodney D. Nielsen. 2018. A corpus of metaphor novelty scores for syntactically-related word pairs. In Proceedings of the 11th International 5727 Conference on Language Resources and Evaluation, pages 1535–1540. European Language Resources Association. R. L. Plackett. 1975. The analysis of permutations. Journal of the Royal Statistical Society, Series C (Applied Statistics), 24(2):193–202. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 337–340. Association for Computational Linguistics. Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. SemEval-2017 Task 6: #HashtagWars: Learning a sense of humor. In Proceedings of the 11th International Workshop on Semantic Evaluation, pages 49–57. Association for Computational Linguistics. Amruta Purandare and Diane Litman. 2006. Humor: Prosody analysis and automatic recognition for Friends. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 208–215. Association for Computational Linguistics. Dragomir Radev, Amanda Stent, Joel Tetreault, Aasish Pappu, Aikaterini Iliakopoulou, Agustin Chanfreau, Paloma de Juan, Jordi Vallmitjana, Alejandro Jaimes, Rahul Jha, and Robert Mankoff. 2016. Humor in collective discourse: Unsupervised funniness detection in the New Yorker cartoon caption contest. In Proceedings of the Tenth International Conference on Language Resources and Evaluation. European Language Resources Association. Victor Raskin. 1985. Semantic Mechanisms of Humor, volume 24 of Synthese Language Library: Texts and Studies in Linguistics and Philosophy. D. Reidel Publishing, Dordrecht, Netherlands. Carl E. Rasmussen and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, USA. Dafna Shahaf, Eric Horvitz, and Robert Mankoff. 2015. Inside jokes: Identifying humorous cartoon captions. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1065–1074. ACM. Ekaterina Shutova. 2010. Models of metaphor in NLP. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 688– 697. Association for Computational Linguistics. Ekaterina Shutova. 2015. Design and evaluation of metaphor processing systems. Computational Linguistics, 41(4):579–623. 
Edwin Simpson and Iryna Gurevych. 2018. Finding convincing arguments using scalable Bayesian preference learning. Transactions of the Association for Computational Linguistics, 6:357–371. Edwin D. Simpson, Matteo Venanzi, Steven Reece, Pushmeet Kohli, John Guiver, Stephen J. Roberts, and Nicholas R. Jennings. 2015. Language understanding in the wild: Combining crowdsourcing and machine learning. In Proceedings of the 24th International Conference on World Wide Web, pages 992–1002. International World Wide Web Conferences Steering Committee. Jonas Sjöbergh and Kenji Araki. 2007. Recognizing humor without recognizing meaning. In Applications of Fuzzy Sets Theory: 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Camogli, Italy, July 7–10, 2007. Proceedings, number 4578 in Lecture Notes in Artificial Intelligence, pages 469– 476, Berlin/Heidelberg. Springer. Gerard J Steen, Aletta G Dorst, J Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A Method for Linguistic Metaphor Identification: From MIP to MIPVU, volume 14 of Converging Evidence in Language and Communication Research. John Benjamins Publishing, Amsterdam. Louis L. Thurstone. 1927. A law of comparative judgment. Psychological Review, 34(4):273–286. Ivan Titov and Alexandre Klementiev. 2012. A Bayesian approach to unsupervised semantic role induction. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 12–22. Association for Computational Linguistics. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 248– 258. Association for Computational Linguistics. Hui Yuan Xiong, Yoseph Barash, and Brendan J. Frey. 2011. Bayesian prediction of tissue-regulated splicing using RNA sequence and cellular context. Bioinformatics, 27(18):2554–2562. Diyi Yang, Alon Lavie, Chris Dyer, and Eduard Hovy. 2015. Humor recognition and humor anchor extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2367–2376. Association for Computational Linguistics. Yi-Hsuan Yang and Homer H. Chen. 2011. Rankingbased emotion recognition for music organization and retrieval. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):762–774. Georgios N. Yannakakis and John Hallam. 2011. Ranking vs. preference: A comparative study of selfreporting. In Affective Computing and Intelligent Interaction: 4th International Conference, ACII 2011, 5728 Memphis, TN, USA, October 9–12, 2011, Proceedings, Part I, volume 6974 of Lecture Notes in Computer Science, pages 437–446, Berlin/Heidelberg. Springer. Renxian Zhang and Naishi Liu. 2014. Recognizing humor on Twitter. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 889–898. ACM.
2019
572
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5729–5739 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5729 Empirical Linguistic Study of Sentence Embeddings Katarzyna Krasnowska-Kiera´s Alina Wróblewska Institute of Computer Science, Polish Academy of Sciences ul. Jana Kazimierza 5, 01-248 Warsaw, Poland [email protected] [email protected] Abstract The purpose of the research is to answer the question whether linguistic information is retained in vector representations of sentences. We introduce a method of analysing the content of sentence embeddings based on universal probing tasks, along with the classification datasets for two contrasting languages. We perform a series of probing and downstream experiments with different types of sentence embeddings, followed by a thorough analysis of the experimental results. Aside from dependency parser-based embeddings, linguistic information is retained best in the recently proposed LASER sentence embeddings. 1 Introduction Modelling natural language with neural networks has been an extensively researched area for several years now. On the one hand, deep learning enormously reduced the cost of feature engineering. On the other hand, we are largely unaware of features that are used in estimating a neural model and, therefore, kinds of information that a trained neural model relies most heavily on. Since neural network-based models work very well in many NLP tasks and often provide state-of-the-art results, it is extremely interesting and desirable to understand which properties of words, phrases or sentences are retained in their embeddings. An approach to investigate whether linguistic properties of English sentences are encoded in their embeddings is proposed by Shi et al. (2016), Adi et al. (2017), and Conneau et al. (2018). It consists in designing a series of classification problems focusing on linguistic properties of sentences, so called probing tasks (Conneau et al., 2018). In a probing task, sentences are labelled according to a particular linguistic property. Given a model that generates an embedding vector for any sentence, the model is applied to the probing sentences. A classifier is then trained with the resulting embeddings as inputs and probing labels as targets. The performance of the resulting classifier is considered a proxy for how well the probing property is retained in the sentence embeddings. We propose an extension and generalisation of the methodology of the probing tasks-based experiments. First, the current experiments are conducted on two typologically and genetically different languages: English, which is an isolating Germanic language and Polish, which is a fusional Slavic one. Our motivation for conducting experiments on two contrasting languages is as follows. English is undoubtedly the most prominent language with multiple resources and tools. However, English language processing is only a part of NLP in general. Methods designed for English are not guaranteed to be universal. In order to verify whether an NLP algorithm is powerful, it is not enough to evaluate it solely on English. Evaluation on additional languages can shed light on an investigated method. We select Polish as our contrasting language for pragmatic reasons, i.e. there is a Polish dataset – CDSCorpus (Wróblewska and Krasnowska-Kiera´s, 2017) – which is comparable to the SICK relatedness/entailment corpus (Bentivogli et al., 2014). 
Both datasets are used in downstream evaluation. Second, the designed probing tests are universal for both tested languages. For syntactic processing of both languages, we use the Universal Dependencies schema (UD, Nivre et al., 2016).1 Since we use automatically parsed UD trees for generating probing datasets, analogous tests can be generated for any language with a UD treebank on which a parser can be trained. 1The Universal Dependencies initiative aims at developing a cross-linguistically consistent morphosyntactic annotation schema and at building a large multilingual collection of treebanks annotated according to this schema. It is worth nothing that the UD schema has become the de facto standard for syntactic annotation in the recent years. 5730 The contributions of this work are twofold. (1) We introduce a method of analysing the content of sentence embeddings based on universal probing tasks, along with the classification datasets for two contrasting languages. (2) We carry out a series of empirical experiments based on publicly released probing datasets2 created within the described work and the obtainable downstream task datasets with different types of sentence embeddings, followed by a thorough analysis of the experimental results. We test sentence embeddings obtained with maxpooling and mean-pooling operations over word embeddings or contextualised word embeddings, sentence embeddings estimated on small corpora, and sentence embeddings estimated on large monolingual or multilingual corpora. 2 Experimental Methodology The purpose of the research is to answer the question whether linguistic information is retained in vector representations of sentences. Assessment of the linguistic content in sentence embeddings is not a trivial task and we verify whether it is possible with a probing task-based method (see Section 2.1). Probing sentence embeddings for individual linguistic properties do not examine the overall performance of embeddings in composing the meaning of the represented sentence. We therefore provide two downstream tasks for a general evaluation (see Section 2.2). 2.1 Probing Task-based Method A probing task can be defined as “a classification problem that focuses on simple linguistic properties of sentences” (Conneau et al., 2018). A probing dataset contains the pairs of sentences and their categories. For example, the dataset for the Passive probing task (the binary classification) consists of two types of the pairs: ⟨a passive voice sentence, 1⟩and ⟨a non-passive (active) voice sentence, 0⟩. The sentence–category pairs are automatically extracted from a corpus of dependency parsed sentences. The extraction procedure is based on a set of rules compatible with the Universal Dependencies annotation schema. The proposed rules of creating the probing task datasets are thus universal for languages with the UD style dependency treebanks. A classifier is trained and tested on vector representations of the probing sentences generated with 2http://git.nlp.ipipan.waw.pl/Scwad/ SCWAD-probing-data a sentence embedding model. If a linguistic property is encoded in the sentence embeddings and the classifier learns how this property is encoded, it will correctly classify the test sentence embeddings. The efficiency of the classifiers for each probing task is measured with accuracy. The probing tasks are described in Section 3. 2.2 Downstream Task-based Method Two downstream tasks are proposed in our experiments: Relatedness and Entailment. 
The semantic relatedness3 task is to measure the degree of any kind of lexical or functional association between two terms, phrases or sentences. The efficiency of the classifier for semantic relatedness is measured with Pearson’s r and Spearman’s ρ coefficients. The textual entailment task is to assess whether the meaning of one sentence is entailed by the meaning of another sentence. There are three entailment classes: entailment, contradiction, and neutral. The efficiency of the classifier for entailment, in turn, is measured with accuracy. 3Semantic relatedness is not equivalent to semantic similarity. Semantic similarity is only a special case of semantic relatedness, e.g. CAR and AUTO are similar terms and CAR and GARAGE are related terms.

3 Probing Tasks

The point of reference for designing our probing tasks is the work by Conneau et al. (2018). The authors propose several probing tasks and divide them into those pertaining to surface, syntactic and semantic phenomena. However, we decide to discard the ‘syntactic versus semantic’ distinction and consider all tasks either surface (see Section 3.1) or compositional (see Section 3.2). This decision is motivated by the fact that both syntactic and semantic principles are undoubtedly compositional by their nature. The syntax admitting well-formed expressions on the basis of the lexicon works in tandem with the semantics. According to Jacobson’s notion of Direct Compositionality (Jacobson, 2014, 43), “each syntactic rule which predicts the existence of some well-formed expression (as output) is paired with a semantic rule which gives the meaning of the output expression in terms of the meaning(s) of the input expressions”.

[Figure 1: An example UD tree of the sentence “She has starred with many leading actors.”, with the dependency relations root, nsubj, aux, obl, case, amod, amod and punct.]

3.1 Tests on Surface Properties

The tests investigate whether surface properties of sentences (i.e. sentence length and lexical content) are retained in their embeddings. We follow the definition of surface probing tasks and the procedure of preparing training data as described by Conneau et al. (2018). SentLen (sentence length) This task consists in classifying sentences by their length. There are 7 sentence length classes with the following token intervals: 0: (3, 5), 1: (6, 8), 2: (9, 11), 3: (12, 14), 4: (15, 17), 5: (18, 20), 6: (21, 23). Example: The sentence from Figure 1 has the category 1, since it contains 8 tokens. WC (word content) This task consists in a 750-way classification of sentences containing exactly one of the 750 pre-selected target words (i.e. the categories correspond to the 750 words). The words are selected based on their frequency ranking in the corpus from which the probing datasets were extracted: the top 2000 words are discarded and the next 750 words are used as task categories.4

3.2 Compositional Tests

The tests on compositional principles are significantly modified (e.g. TreeDepth, TopDeps, Tense) with respect to Conneau et al. (2018) or designed anew (i.e. Passive and SentType), because the basis for preparing probing datasets is constituted by dependency trees.5 4Conneau et al. (2018) use 1000 target words selected in a similar manner, but since our datasets are smaller, we proportionally decreased this number in order to maintain the same number of training/validation/testing instances per target word.
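As a concrete illustration of how the two surface tasks can be instantiated, here is a small sketch, under the assumption that sentences are available as plain token lists; the bin boundaries and the “skip the top 2000, keep the next 750” rule mirror the description above, while the helper names are made up for the example.

```python
from collections import Counter

# Token-count intervals for the SentLen classes listed above.
SENTLEN_BINS = [(3, 5), (6, 8), (9, 11), (12, 14), (15, 17), (18, 20), (21, 23)]

def sentlen_class(tokens):
    """Return the SentLen class index of a tokenised sentence, or None if its length
    falls outside all intervals (such sentences are simply not used for the task)."""
    n = len(tokens)
    for label, (lo, hi) in enumerate(SENTLEN_BINS):
        if lo <= n <= hi:
            return label
    return None

def wc_targets(tokenised_corpus, skip=2000, keep=750):
    """Select WC target words by corpus frequency: discard the `skip` most frequent
    words and use the next `keep` words as the class inventory."""
    freq = Counter(tok.lower() for sent in tokenised_corpus for tok in sent)
    ranked = [w for w, _ in freq.most_common()]
    return set(ranked[skip:skip + keep])

def wc_class(tokens, targets):
    """A sentence is usable for WC only if it contains exactly one target word;
    that word is then the sentence's class."""
    hits = [t.lower() for t in tokens if t.lower() in targets]
    return hits[0] if len(hits) == 1 else None
```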
5We reject the bigram shift task (BShift) as it is applicable only to isolating languages and practically useless for fusional languages with relatively free word order. This task consists in detecting sentences with two random, adjacent words switched. According to Conneau et al. (2018), such a shift generally leads to an erroneous utterance (acceptable sentences can be generated accidentally). However, given a language with less strict word order, the intuition is that the BShift procedure could produce too many correct sentences. A very preliminary case study involving several shift strategies and one sentence (Autorka we wszystkich książkach każe bohaterom szukać tożsamości. ‘The author tells the characters in all her books to look for identity.’, lit. ‘The author in her all books tells the characters to look for identity.’) confirmed this intuition, as most of the BShift-modified sentences were accepted by Polish speakers.

TreeDepth (dependency tree depth) This task consists in classifying sentences based on the depth of the corresponding dependency trees. The task is defined similarly to Conneau et al. (2018), but dependency trees are used instead of constituent trees. Similarly to the original TreeDepth task, the data is decorrelated with respect to sentence length. Example: The dependency tree in Figure 1 has a depth of 3, because the path from the root node to any token node contains 3 tokens at most.

TopDeps (top dependency schema) The idea of this task is based on the TopConst task6 (Conneau et al., 2018), but adapted to dependency trees. The task consists in predicting a multiset of the dependency types labelling the relations between the top-most node (the ROOT’s only dependent) and all its children, barring punct relations. The position of a phrase in an English sentence largely determines its grammatical function. In Polish, in turn, word order is relatively free and therefore not a strong determinant of grammatical functions. We thus extract multisets of dependency types, not taking into account the text order of their respective phrases. The extracted multisets roughly correspond to predicate-argument structures. There are 20 classes for each language: the 19 most common top dependency schemata and the class {OTHER}. Example: The TopDeps class of the sentence in Figure 1 is {aux nsubj obl}. 6In the original TopConst task, the classifier learns to detect one of the 19 most common top constructions or <OTHER>, e.g. the top construction sequence of the tree for [Then][very dark gray letters on a black screen][appeared][.] consists of four constituent labels: <ADVP NP VP .>.

Passive (passive voice) This is a binary classification task where the goal is to predict whether a sentence embedding represents a passive voice sentence (the class 1) or an active sentence (the class 0). In case of complex sentences, only the voice of the matrix (main) clause is detected.7 In order to identify passive voice sentences, we adhere to the following procedure: the predicate of a passive voice sentence governs an auxiliary verb and the relation is labelled aux:pass. Furthermore, the predicate (part-of-speech VERB or ADJ) has the features Voice=Pass and VerbForm=Part. The dependency nsubj:pass (passive nominal subject) can be helpful, but as the subject may be dropped in Polish, it is not sufficient. Example: The active voice sentence in Figure 1 is classified as 0.

Tense (grammatical tense) This is a binary classification of sentences by the grammatical tense of their main predicates.
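The dependency-based definitions above (TreeDepth, TopDeps and the passive-voice rule) can be sketched as follows, assuming each parsed sentence is a list of token dictionaries with id, head, deprel, upos and feats fields, as produced by a typical CoNLL-U reader; this is an illustrative reading of the rules, not the authors’ extraction code.

```python
from collections import Counter

def tree_depth(tokens):
    """Depth of a dependency tree: the maximum number of tokens on the path from the
    top-most node down to any token, counting both ends (the example tree has depth 3)."""
    heads = {t["id"]: t["head"] for t in tokens}
    def depth(i):
        d = 1
        while heads[i] != 0:
            i = heads[i]
            d += 1
        return d
    return max(depth(t["id"]) for t in tokens)

def top_deps(tokens):
    """Multiset of dependency labels between the top-most node and its children,
    barring punct relations."""
    top_id = next(t["id"] for t in tokens if t["deprel"] == "root")
    return Counter(t["deprel"] for t in tokens
                   if t["head"] == top_id and t["deprel"] != "punct")

def is_passive(tokens):
    """Passive-voice rule described above: the main predicate governs an aux:pass
    dependent and itself carries Voice=Pass and VerbForm=Part."""
    top = next(t for t in tokens if t["deprel"] == "root")
    has_aux_pass = any(t["head"] == top["id"] and t["deprel"] == "aux:pass"
                       for t in tokens)
    feats = top.get("feats") or {}
    return (has_aux_pass
            and top["upos"] in {"VERB", "ADJ"}
            and feats.get("Voice") == "Pass"
            and feats.get("VerbForm") == "Part")
```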
The sentence predicates can be marked for the present (the pres class) or past (the past class) grammatical tense. The present tense predicates have the following properties: the UD POS tag VERB and the feature Tense=Pres. The past tense predicates have the following properties: the UD POS tag VERB and the feature Tense=Past. Example: The sentence in Figure 1 is classified as past. SubjNum (grammatical number of subjects) In this binary classification task, sentences are classified by the grammatical number of nominal subjects (marked with the UD label nsubj) of main predicates. There are two classes: sing (the UD POS tag NOUN and the feature Number=Sing) and plur (the UD POS tag NOUN and the feature Number=Plur). Example: The sentence in Figure 1 is categorised as sing. ObjNum (grammatical number of objects) This binary classification task is analogous to the one above, but this time sentences are classified by the grammatical number of direct objects of main predicates. The classes are again sing to represent the singular nominal objects (the obj label, the NOUN tag, and the feature Number=Sing), and plur for the plural/mass ones (the obj label, the NOUN tag, and the feature Number=Plur). 7The sentence Although the announcement was probably made to show progress in identifying and breaking up terror cells, I don’t find the news that the Baathists continue to penetrate the Iraqi government very hopeful. is classified as 0, even if it contains the passive voice subordinate clause. SentType (sentence type) This is a new probing task consisting in classifying sentences by their types. There are three classes: inter for interrogatve sentences (e.g. Do you like him?), imper for imperative sentences (e.g. Get out of here!), and other for declarative sentences (e.g. He likes her.) and exclamatory sentences (e.g. What a liar!). 4 Experiments 4.1 SentEval Toolkit We use the SentEval toolkit (Conneau and Kiela, 2018) in our experiments. The toolkit provides utility for testing any vector representation of sentences in probing and downstream scenarios. Given a function f mapping a list of sentences to a list of vectors (serving as an interface to the tested sentence embedding model), a task and a dataset (with sentences or pairs of sentences as input data), SentEval performs evaluation in the context of the task. More specifically, it generates vectors for the dataset sentences using f, trains a classifier with vectors as inputs and task-specific labels as outputs, and evaluates it. Applying an identical evaluation procedure with the same dataset to different sentence embedding models provides the meaningful comparison of the models. For the purpose of our tests, the probing datasets provided with the toolkit are replaced with our own, the CDS downstream task dataset is added and the SICK dataset is retained. Other SentEval downstream tasks are not used, having no Polish counterparts. In all experiments we use SentEval’s Multilayer Perceptron classifier.8 4.2 Probing Datasets For English and Polish, 9 probing datasets are extracted from Paralela9 (P˛ezik, 2016), the largest Polish-English parallel corpus with nearly 4M sentence pairs. An important objective is to make the probing datasets in both languages maximally similar. The choice of a parallel corpus as their source allows to draw probing sentences from collections of texts that have analogous distributions of genre, style, sentence complexity etc. 
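To make the SentEval-style evaluation loop explicit, here is a minimal, self-contained sketch in which a scikit-learn logistic-regression probe stands in for SentEval’s MLP classifier; the embedding interface and all names are assumptions for illustration, not the toolkit’s actual API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def probe(embed, train, dev, test):
    """Train a classifier on sentence embeddings and report probing accuracy.

    `embed` maps a list of tokenised sentences to a 2-D numpy array (one row per
    sentence); `train`, `dev` and `test` are lists of (sentence, label) pairs.
    """
    X_tr, y_tr = embed([s for s, _ in train]), [y for _, y in train]
    X_de, y_de = embed([s for s, _ in dev]), [y for _, y in dev]
    X_te, y_te = embed([s for s, _ in test]), [y for _, y in test]

    best_clf, best_dev = None, -1.0
    for c in (0.01, 0.1, 1.0, 10.0):  # a small grid in place of SentEval's MLP tuning
        clf = LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
        dev_acc = accuracy_score(y_de, clf.predict(X_de))
        if dev_acc > best_dev:
            best_clf, best_dev = clf, dev_acc
    return accuracy_score(y_te, best_clf.predict(X_te))

def mean_pool_embed(word_vectors, dim=300):
    """Example embedding interface: mean pooling over pre-computed word vectors."""
    def embed(sentences):
        out = np.zeros((len(sentences), dim))
        for i, sent in enumerate(sentences):
            vecs = [word_vectors[w] for w in sent if w in word_vectors]
            if vecs:
                out[i] = np.mean(vecs, axis=0)
        return out
    return embed
```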
Note that we do not extract parallel sentence pairs (sharing common target classes) for individual probing datasets (sentences are often not translated literally), but we construct English and Polish datasets separately. 8With parameters as follows: kfold=10, batch_size=128, nhid=50, optim=adam, tenacity=5, epoch_size=4. 9http://paralela.clarin-pl.eu 5733 The sentences are tokenised with UDPipe10 (Straka and Straková, 2017) and POS-tagged and dependency parsed with COMBO11 (Rybak and Wróblewska, 2018). The UDPipe and COMBO models are trained on the UD English-EWT treebank12 (Silveira et al., 2014) with 16k trees (254k tokens) and on the Polish PDB-UD treebank13 (Wróblewska, 2018) with 22k trees (351k tokens). The set of UD-based rules is applied to dependency-parsed sentences to extract the final probing datasets for both languages. Following Conneau et al. (2018), for the probing tasks constructed by determining selected properties of a certain dependency tree node (e.g. main predicate’s tense, direct object’s number, etc.), the division into training, validation and test sets ensures that all data instances, where the relevant token of the sentence (target token) bears the same word form, are not distributed into different sets. For example, all SubjNum instances, where the subject phrase is headed by the token cats (and the plur class is determined based on the features of this token), are assigned into the same set. For each probing dataset, only relevant sentences are included (sentences with no subject are irrelevant for SubjNum, utterances with no main predicate in present/past tense are irrelevant for Tense etc.). Moreover, the target tokens are filtered based on their frequency (most and least frequent are discarded) and the number of occurrences of any target token is limited (to prevent the more frequent ones from dominating the datasets). Finally, the datasets are balanced with relation to the target class. With the above restrictions implemented, we are able to extract datasets consisting of 90k examples each (75k for training, 7.5k for validation and testing). The dataset sizes are smaller than 120k examples proposed by Conneau et al. (2018), but remain in the same order of magnitude. The lower number of examples per dataset is due to the fact that we strive to build comparable datasets for both investigated languages based on the parallel corpus. 4.3 Downstream Datasets Two datasets for evaluation of compositional distributional semantic models are used in our experi10https://github.com/ufal/udpipe/releases/tag/v1.2.0 11https://github.com/360er0/COMBO 12https://github.com/UniversalDependencies/UD_ English-EWT 13http://git.nlp.ipipan.waw.pl/alina/PDBUD ments. The SICK corpus14 (Bentivogli et al., 2014) consists of 10k pairs of English sentences. Each sentence pair is human-annotated for relatedness in meaning and entailment. The relatedness score indicates the extent to which meanings of two sentences are related and is calculated as the average of ten human ratings collected for this sentence pair on the 5-point Likert scale. The entailment relation between two sentences, in turn, is labelled with entailment, contradiction, or neutral, selected by the majority of human annotators. CDSCorpus15 (Wróblewska and KrasnowskaKiera´s, 2017) is a comparable corpus of 10k pairs of Polish sentences human-annotated for relatedness and entailment. The degree of semantic relatedness between two sentences is calculated as the average of six human ratings on the 0-5-point scale. 
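One simple way to realise the split constraint mentioned above (all instances whose target token shares a word form must end up in the same training, validation or test portion) is to assign splits by hashing the word form. This is an assumption about how such a split could be implemented, not the authors’ procedure; the proportions roughly correspond to the 75k/7.5k/7.5k dataset sizes.

```python
import hashlib

def split_for(target_form, train_frac=0.8334, dev_frac=0.0833):
    """Deterministically assign an instance to train/dev/test based only on the word
    form of its target token, so that all instances sharing a form stay together."""
    digest = hashlib.md5(target_form.lower().encode("utf-8")).hexdigest()
    r = (int(digest, 16) % 10_000) / 10_000.0
    if r < train_frac:
        return "train"
    if r < train_frac + dev_frac:
        return "dev"
    return "test"

def split_dataset(instances):
    """`instances` is a list of (sentence, target_form, label) triples."""
    buckets = {"train": [], "dev": [], "test": []}
    for sentence, form, label in instances:
        buckets[split_for(form)].append((sentence, label))
    return buckets
```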
As an entailment relation between two sentences doesn’t have to be symmetric, sentence pairs are annotated with bi-directional entailment labels, i.e. pairs of entailment, contradiction, and neutral. 4.4 Sentence Embeddings Three types of sentence embeddings are tested in our experiments: (1) sentence embeddings obtained with max-pooling and mean-pooling over pre-trained word embeddings or contextualised word embeddings, (2) sentence embeddings estimated on small comparable corpora, and (3) pretrained sentence embeddings estimated on large monolingual or multilingual corpora. Max/Mean-pool Sentence Embeddings Words can be represented as continuous vectors in a lowdimensional space, i.e. word embeddings. Word embeddings are assumed to capture linguistic (e.g. morphological, syntactic, semantic) properties of words. Recently, they are often learnt as part of a neural network trained on an unsupervised or semi-supervised objective task using massive amounts of data (e.g. Mikolov et al., 2013; Grave et al., 2018).16 In our experiments, we test FASTTEXT embeddings17 (Grave et al., 2018) and contextualised word embeddings provided with the multi-layer 14http://clic.cimec.unitn.it/composes/materials/SICK. zip 15http://git.nlp.ipipan.waw.pl/Scwad/SCWAD-CDSCorpus 16Embeddings can also be estimated by dimensionality reduction on a co-occurrence counts matrix (e.g. Pennington et al., 2014). 17Pre-trained models from https://fasttext.cc. 5734 bidirectional transformer encoder BERT18 (Devlin et al., 2018) for English and Polish.19 Apart from the FASTTEXT and BERT models, we use parts of the dependency parsing models of COMBO to generate sentence embeddings. COMBO has a BiLSTMbased module that produces contextualised word embeddings based on concatenations of word level embeddings and character level embeddings. As the contextualised word embeddings are originally used to predict dependency trees, they should be linguistic information-rich. Since there is some overlap between the PDB-UD treebank (used to train COMBO parsing model for Polish) and CDSCorpus (source of downstream datasets for Polish), a separate COMBO model20 is trained on PDB-UD data without the overlapping sentences. The model is used to obtain the embeddings for both probing and downstream evaluations.21 For all three models listed above, sentence embeddings are obtained by mean or max pooling over individual word embeddings. For FASTTEXT and COMBO, the UDPipe tokenisation of the probing sentences is used and a sequence of embedding vectors is obtained by model lookup and reading the outputs of the parser’s BiLSTM module respectively. In the case of BERT (which uses its own tokenisation mechanism), whole sentences are passed to the module and outputs of its penultimate layer are treated as token embeddings. Small Corpora-based Sentence Embeddings English and Polish sentence embeddings are estimated on Paralela corpus. The sentences that are included in any probing dataset are to be excluded from any data used for training sentence embeddings. Furthermore, Paralela corpus contains not only 1-to-1 sentence alignments, but also 1-tomany or even many-to-many. As we aim at estimating sentence embedding models, only proper sentences are selected from the corpus. English and Polish sentence embedding models are trained 18Pre-trained language model from https://storage. googleapis.com/bert_models/2018_11_23/multi_cased_ L-12_H-768_A-12.zip. 19We also tested BPEmb embeddings (Heinzerling and Strube, 2018) from https://nlp.h-its.org/bpemb. 
Sentence embeddings estimated on these word embeddings were of a comparable or worse quality, so we do not give the results. 20http://mozart.ipipan.waw.pl/~alina/Polish_ dependency_parsing_models/190520_COMBO_PDBUD_noCDS_ nosem.pkl 21This overlap is in fact only relevant for downstream tasks evaluation. Therefore, for creating the probing datasets, a model based on full PDB-UD treebank is used. on 3M sentences with the SENT2VEC library22 (Pagliardini et al., 2018). The SENT2VEC models are estimated with a neural architecture which resembles the CBOW model architecture by Mikolov et al. (2013). The tested models (SENT2VECNS) are estimated on unigrams and bigrams with the loss function coupled with negative sampling, to improve training efficiency. Pre-trained Sentence Embeddings We test English sentence embeddings provided by the pretrained SENT2VEC and USE models, and multilingual sentence embeddings generated by the LASER model. The SENT2VECORIG model23 trained on the Toronto Book corpus24 (70M sentences) outputs 700dimensional sentence embeddings. The Universal Sentence Encoder model25 (USE, Cer et al., 2018) was estimated in a multi-task learning scenario on a variety of data sources26 with a Transformer encoder. It takes a variable length English text (e.g. sentence, phrase, or short paragraph) as input and produces a 512-dimensional vector. The Language-Agnostic SEntence Representations model27 (LASER, Artetxe and Schwenk, 2018) was trained on 223M parallel sentences (93 languages) from various sources. The encoder is implemented as a 5-layer BiLSTM network that represents a sentence as a 1,024-dimensional vector (max-pooling over the last hidden states of the BiLSTM). 5 Results Results reported by SentEval are summarised in Table 1. The best result for each task in each language is highlighted in grey. For almost all probing tasks, the most accurate embedding is one of the two COMBO-based representations. This is not surprising as the contextualised vector representations produced by COMBO are learnt in the context of dependency parsing. Moreover, the target classes in the probing tasks are derived from trees produced by a parser that uses virtually the same neural model, which can be considered a kind of 22https://github.com/epfml/sent2vec 23https://drive.google.com/file/d/ 0B6VhzidiLvjSdENLSEhrdWprQ0k 24http://www.cs.toronto.edu/~mbweb/ 25https://tfhub.dev/google/ universal-sentence-encoder-large/3 26Estimated on Wikipedia, web news, web question-answer pages, discussion forums, and the Stanford Natural Language Inference corpus (SNLI, Bowman et al., 2015). 
27https://github.com/facebookresearch/LASER

Task | Lang. | Measure | FASTTEXTMAX | FASTTEXTMEAN | BERTMAX | BERTMEAN | COMBOMAX | COMBOMEAN | SENT2VECNS | SENT2VECORIG | LASER | USE
SentLen | E | a | 52.55 | 72.27 | 72.66 | 82.13 | 85.03 | 87.38 | 71.56 | 64.76 | 85.98 | 60.00
SentLen | P | a | 52.63 | 67.44 | 70.79 | 82.19 | 84.46 | 86.31 | 65.15 | — | 86.73 | —
WC | E | a | 24.44 | 46.73 | 35.24 | 45.53 | 9.39 | 11.05 | 59.96 | 79.23 | 59.79 | 43.11
WC | P | a | 19.83 | 45.84 | 38.56 | 43.60 | 23.04 | 26.23 | 63.85 | — | 49.03 | —
TreeDepth | E | a | 29.91 | 33.00 | 33.97 | 38.20 | 49.08 | 51.87 | 33.92 | 31.03 | 39.48 | 31.09
TreeDepth | P | a | 26.99 | 30.12 | 34.43 | 37.81 | 44.96 | 47.35 | 32.84 | — | 40.04 | —
TopDeps | E | a | 60.49 | 71.11 | 78.20 | 79.33 | 93.99 | 93.87 | 75.77 | 65.31 | 83.33 | 63.88
TopDeps | P | a | 65.45 | 70.67 | 71.68 | 75.28 | 88.16 | 88.53 | 73.44 | — | 78.84 | —
Passive | E | a | 84.13 | 89.47 | 89.77 | 92.40 | 98.48 | 98.41 | 88.73 | 89.04 | 92.85 | 86.61
Passive | P | a | 85.19 | 91.92 | 92.16 | 94.77 | 98.41 | 98.71 | 92.44 | — | 95.37 | —
Tense | E | a | 75.04 | 84.47 | 89.32 | 90.89 | 96.65 | 96.64 | 83.19 | 85.25 | 92.19 | 85.64
Tense | P | a | 81.56 | 88.89 | 93.73 | 96.09 | 97.35 | 97.47 | 87.36 | — | 96.87 | —
SubjNum | E | a | 73.87 | 81.43 | 88.43 | 90.75 | 93.19 | 93.37 | 82.27 | 80.88 | 94.21 | 81.65
SubjNum | P | a | 76.73 | 87.01 | 89.89 | 91.51 | 94.20 | 95.03 | 87.84 | — | 93.79 | —
ObjNum | E | a | 71.75 | 79.24 | 85.16 | 86.89 | 93.23 | 94.71 | 77.23 | 80.12 | 89.33 | 79.61
ObjNum | P | a | 69.41 | 76.05 | 80.24 | 82.64 | 90.27 | 90.31 | 74.77 | — | 82.53 | —
SentType | E | a | 96.23 | 96.20 | 97.39 | 97.76 | 96.85 | 96.04 | 97.17 | 93.76 | 97.84 | 85.25
SentType | P | a | 90.61 | 96.09 | 98.36 | 98.57 | 98.53 | 98.56 | 98.09 | — | 98.39 | —
Relatedness | E | p | 75.71 | 76.02 | 74.23 | 76.54 | 58.94 | 59.38 | 73.43 | 79.81 | 84.54 | 86.86
Relatedness | E | s | 69.35 | 69.20 | 68.61 | 69.54 | 58.35 | 58.59 | 67.97 | 70.64 | 79.03 | 80.80
Relatedness | P | p | 76.10 | 78.06 | 78.46 | 83.08 | 77.40 | 77.44 | 76.53 | — | 88.09 | —
Relatedness | P | s | 77.01 | 79.31 | 78.91 | 83.65 | 77.81 | 77.98 | 76.72 | — | 89.30 | —
Entailment | E | a | 76.72 | 76.86 | 77.71 | 77.11 | 72.82 | 72.58 | 78.59 | 78.26 | 83.26 | 81.77
Entailment | P | a | 86.10 | 87.40 | 86.70 | 83.90 | 84.70 | 86.10 | 83.80 | — | 87.80 | —

Table 1: Probing and downstream task results. Languages: P=Polish, E=English, measures: a=accuracy, p=Pearson’s r, s=Spearman’s ρ. All measures are expressed in %.

information leak. With COMBO models excluded from the ranking due to their obvious handicap, the best-performing sentence embeddings (shown in boldface) for 17 task-language pairs in 22 are yielded by LASER. The exceptions are ObjNum and SentType for Polish (where the advantage of BERTMEAN is so small it might be insignificant), Relatedness for English (suggesting that a comparable USE model could beat LASER in the Polish version of the task as well) and WC (where SENT2VEC performs visibly better than all other, even if it is trained on a relatively small corpus). An interesting observation is that among the pooled embeddings, the MEAN variants quite consistently outperform their MAX counterparts. Figure 2 visualises the results yielded by selected models in the particular tasks. The models shown are BERTMEAN (the best pooled model), SENT2VECNS (trained on Paralela corpus) and LASER (best-performing apart from COMBO, pretrained on massive multilingual data). The plots are very similar in shape, the only striking difference being the discrepancy in WC results, with LASER and SENT2VECNS faring similarly (and better than BERTMEAN) for English and SENT2VECNS yielding visibly best results for Polish. We also measure the correlations between results for Polish and English in two ways. First, for each embedding model we compare the results it yielded in all Polish tasks and all English tasks. Second, for each task type we compare the results obtained using all models in the Polish and English variant of the task.28 The corresponding correlation coefficients are plotted in Figure 3.
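The two correlation analyses just described can be reproduced from Table 1 with a few lines of code; the sketch below assumes the scores have been loaded into a nested dictionary keyed by model and by (task, language), which is an illustrative data layout rather than anything released with the paper.

```python
from scipy.stats import pearsonr, spearmanr

# results[model][(task, lang)] holds one score, e.g. results["LASER"][("SentLen", "E")] = 85.98.
def per_model_correlation(results, tasks):
    """Correlate each model's English scores with its Polish scores across tasks."""
    out = {}
    for model, scores in results.items():
        en = [scores[(t, "E")] for t in tasks]
        pl = [scores[(t, "P")] for t in tasks]
        rho, _ = spearmanr(en, pl)
        r, _ = pearsonr(en, pl)
        out[model] = (rho, r)
    return out

def per_task_correlation(results, tasks, models):
    """Correlate English and Polish scores of all models for each task."""
    out = {}
    for t in tasks:
        en = [results[m][(t, "E")] for m in models]
        pl = [results[m][(t, "P")] for m in models]
        rho, _ = spearmanr(en, pl)
        r, _ = pearsonr(en, pl)
        out[t] = (rho, r)
    return out
```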
All the per-model correlations are high, which strongly suggests that given embeddings encode a given property similarly well (or poorly) relative to other properties regardless of the language. In the case of per-task correlations, there are three tasks with visibly lower correlations: SentType and the two downstream tasks. Therefore, for these tasks, the relative performance of individual models differs more between languages. For the downstream tasks this might be partially due to the fact that their respective datasets were created entirely independently and are expected to differ more. As far as SentType is concerned, the accuracies obtained for this task are generally very high and most of them fit within a small range. 28SENT2VECORIG and USE models are excluded from both calculations as they were only tested for English.

[Figure 2: Results in probing and downstream tasks for 3 selected embedding models, BERTMEAN, SENT2VECNS and LASER, across all probing and downstream tasks (left: English, right: Polish). The measure is accuracy (except for Relatedness, where Spearman’s ρ is shown). All measures are expressed in %.]

[Figure 3: Correlation (measured by Spearman’s ρ and Pearson’s r) between results for Polish and English (left: per model, right: per task).]

6 Related Work

Our study follows a research trend in exploring sentence embeddings by means of probing methods, initiated by Shi et al. (2016) and Adi et al. (2017), and continued by Conneau et al. (2018). Investigating NMT systems, Shi et al. (2016) found out that LSTM-based encoders can learn source-language syntax, storing different syntactic properties (e.g. voice, tense, top-level constituents, part-of-speech tags) in different layers of NMT models. Adi et al. (2017) designed probing tasks for surface properties of sentences (i.e. sentence length, word content, and word order). Two types of sentence embeddings were tested: averaging of CBOW word embeddings and sentence representations output by an LSTM encoder. Conneau et al. (2018) carried out a series of large-scale experiments on understanding English sentence embeddings with human-validated upper bounds for all probing tasks. They designed 10 probing tasks capturing simple linguistic properties of sentences, tested various
This study has drawn our attention to the compositional dimension of our probing tasks. Related works by Linzen et al. (2016) and Warstadt and Bowman (2019) proposed evaluation of sentence encoders (e.g. LSTM, transformers) in terms of their ability to learn grammatical information, e.g. to assess sentences as grammatically correct or not (i.e. acceptability judgments). Finally, several studies were devoted to exploring morphosyntactic properties of sentence embeddings in neural machine translation systems (e.g. Shi et al., 2016; Belinkov et al., 2017). 7 Conclusion We presented a methodology of empirical research on retention of linguistic information in sentence embeddings using probing and downstream tasks. In the probing-based scenario, a set of language-independent tests was designed and probing datasets were generated for two contrasting languages – English and Polish. The procedure of generating probing datasets is based on the Universal Dependency schema. It is thereby universal for all languages with a UD treebank on which a natural language pre-processing system can be trained. In the downstream-based scenario, the publicly available datasets for semantic relatedness and entailment were used. We performed a series of probing and downstream experiments with three types of sentence embeddings in the SentEval environment, followed by a thorough analysis of the linguistic content of sentence embeddings. We found out that the COMBO-based embeddings designed to convey morphosyntax encode linguistic information in the most accurate way. Aside from COMBO embeddings, linguistic information is retained most exactly in the recently proposed LASER sentence embeddings, provided by an encoder designed with a relatively simple BiLSTM architecture, but estimated on tremendous multilingual data. Further research is required to find out in what lies the success of LASER embeddings: in the embedding size, in the magnitude of training data, or maybe in the multitude of used languages. Acknowledgments The research presented in this paper was supported by SONATA 8 grant no 2014/15/D/HS2/03486 from the National Science Centre Poland. References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of International Conference on Learning Representations (ICLR 2017). Mikel Artetxe and Holger Schwenk. 2018. Massively Multilingual Sentence Embeddings for ZeroShot Cross-Lingual Transfer and Beyond. CoRR, abs/1812.10464. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do Neural Machine Translation Models Learn about Morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872. Association for Computational Linguistics. Luisa Bentivogli, Raffaella Bernardi, Marco Marelli, Stefano Menini, Marco Baroni, and Roberto Zamparelli. 2014. SICK through the SemEval Glasses. Lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. Journal of Language Resources and Evaluation, 50:95–124. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. 
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, 5738 Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174. Association for Computational Linguistics. Alexis Conneau and Douwe Kiela. 2018. SentEval: An Evaluation Toolkit for Universal Sentence Representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), pages 1699–1704. European Language Resource Association. Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR, abs/1810.04805. Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing Composition in Sentence Vector Representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790–1801. Association for Computational Linguistics. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning Word Vectors for 157 Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), pages 3483–3487. European Language Resource Association. Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), pages 2989–2993. European Language Resource Association. Pauline Jacobson. 2014. Compositional Semantics. An Introduction to the Syntax/Semantics Interface. Oxford Textbooks in Linguistics. Oxford University Press. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Neural and Information Processing System (NIPS). Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016, pages 1659– 1666. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised Learning of Sentence Embeddings Using Compositional n-Gram Features. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 528–540. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Piotr P˛ezik. 2016. Exploring Phraseological Equivalence with Paralela. In Polish-Language Parallel Corpora, page 67–81. Instytut Lingwistyki Stosowanej UW, Warsaw. Piotr Rybak and Alina Wróblewska. 2018. SemiSupervised Neural System for Tagging, Parsing and Lematization. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 45–54. Association for Computational Linguistics. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-Based Neural MT Learn Source Syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526–1534. Association for Computational Linguistics. Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A Gold Standard Dependency Corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC2014), pages 2897–2904. European Language Resource Association. Milan Straka and Jana Straková. 2017. Tokenizing, POS Tagging, Lemmatizing and Parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88–99, Vancouver, Canada. Association for Computational Linguistics. 5739 Alex Warstadt and Samuel R. Bowman. 2019. Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments. CoRR, abs/1901.03438. Alina Wróblewska. 2018. Extended and Enhanced Polish Dependency Bank in Universal Dependencies Format. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 173– 182. Association for Computational Linguistics. Alina Wróblewska and Katarzyna Krasnowska-Kiera´s. 2017. Polish evaluation dataset for compositional distributional semantics models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 784–792. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5740–5753 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5740 Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings Yadollah Yaghoobzadeh1 Katharina Kann2 Timothy J. Hazen1 Eneko Agirre3 Hinrich Sch¨utze4 1Microsoft Research Montr´eal 2Center for Data Science, New York University 3IXA NLP Group, University of the Basque Country 4CIS, LMU Munich [email protected] Abstract Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. This is the basis for novel diagnostic tests for an embedding’s content: we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes. Our main findings are: (i) Information about a sense is generally represented well in a single-vector embedding – if the sense is frequent. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) Although rare senses are not well represented in single-vector embeddings, this does not have negative impact on an NLP application whose performance depends on frequent senses. 1 Introduction Word embeddings learned by methods like Word2vec (Mikolov et al., 2013) and Glove (Pennington et al., 2014) have had a big impact on natural language processing (NLP) and information retrieval (IR). They are effective and efficient for many tasks. More recently, contextualized embeddings like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have further improved performance. To understand both word and contextualized embeddings, which still rely on word/subword embeddings at their lowest layer, we must peek inside the blackbox embeddings. Given the importance of word embeddings, attempts have been made to construct diagnostic tools to analyze them. However, the main tool for analyzing their semantic content is still looking at nearest neighbors of embeddings. Nearest neighbors are based on full-space similarity neglecting the multifacetedness property of words (Gladkova and Drozd, 2016) and making them unstable (Wendlandt et al., 2018). As an alternative, we propose diagnostic classification of embeddings into semantic classes as a probing task to reveal their meaning content. We will refer to semantic classes as Sclasses. We use S-classes such as food, drug and living-thing to define word senses. Sclasses are frequently used for semantic analysis, e.g., by Kohomban and Lee (2005), Ciaramita and Altun (2006) and Izquierdo et al. (2009) for word sense disambiguation, but have not been used for analyzing embeddings. Analysis based on S-classes is only promising if we have high-quality S-class annotations. Existing datasets are either too small to train embeddings, e.g., SemCor (Miller et al., 1993), or artificially generated (Yaghoobzadeh and Sch¨utze, 2016). Therefore, we build WIKI-PSE, a WIKIpediabased resource for Probing Semantics in word Embeddings. We focus on common and proper nouns, and use their S-classes as proxies for senses. 
For example, “lamb” has the senses food and living-thing. Embeddings do not explicitly address ambiguity; multiple senses of a word are crammed into a single vector. This is not a problem in some applications (Li and Jurafsky, 2015); one possible explanation is that this is an effect of sparse coding that supports the recovery of individual meanings from a single vector (Arora et al., 2018). But ambiguity has an adverse effect in other scenarios, e.g., Xiao and Guo (2014) see the need of filtering out embeddings of ambiguous words in dependency parsing. 5741 We present the first comprehensive empirical analysis of ambiguity in word embeddings. Our resource, WIKI-PSE, enables novel diagnostic tests that help explain how (and how well) embeddings represent multiple meanings.1 Our diagnostic tests show: (i) Single-vector embeddings can represent many non-rare senses well. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) In experiments with five common datasets for mention, sentence and sentencepair classification tasks, the lack of representation of rare senses in single-vector embeddings has little negative impact – this indicates that for many common NLP benchmarks only frequent senses are needed. 2 Related Work S-classes (semantic classes) are a central concept in semantics and in the analysis of semantic phenomena (Yarowsky, 1992; Ciaramita and Johnson, 2003; Senel et al., 2018). They have been used for analyzing ambiguity by Kohomban and Lee (2005), Ciaramita and Altun (2006), and Izquierdo et al. (2009), inter alia. There are some datasets designed for interpreting word embedding dimensions using S-classes, e.g., SEMCAT (Senel et al., 2018) and HyperLex (Vulic et al., 2017). The main differentiator of our work is our probing approach using supervised classification of word embeddings. Also, we do not use WordNet senses but Wikipedia entity annotations since WordNettagged corpora are small. In this paper, we probe word embeddings with supervised classification. Probing the layers of neural networks has become very popular. Conneau et al. (2018) probe sentence embeddings on how well they predict linguistically motivated classes. Hupkes et al. (2018) apply diagnostic classifiers to test hypotheses about the hidden states of RNNs. Focusing on embeddings, Kann et al. (2019) investigate how well sentence and word representations encode information necessary for inferring the idiosyncratic frame-selectional properties of verbs. Similar to our work, they employ supervised classification. Tenney et al. (2019) probe syntactic and semantic information learned by contextual embeddings (Melamud et al., 2016; McCann et al., 2017; Pe1WIKI-PSE is available publicly at https: //github.com/yyaghoobzadeh/WIKI-PSE. ters et al., 2018; Devlin et al., 2018) compared to non-contextualized embeddings. They do not, however, address ambiguity, a key phenomenon of language. While the terms “probing” and “diagnosing” come from this literature, similar probing experiments were used in earlier work, e.g., Yaghoobzadeh and Sch¨utze (2016) probe for linguistic properties in word embeddings using synthetic data and also the task of corpus-level finegrained entity typing (Yaghoobzadeh and Sch¨utze, 2015). We use our new resource WIKI-PSE for analyzing ambiguity in the word embedding space. 
Word sense disambiguation (WSD) (Agirre and Edmonds, 2007; Navigli, 2009) and entity linking (EL) (Bagga and Baldwin, 1998; Mihalcea and Csomai, 2007) are related to ambiguity in that they predict the context-dependent sense of an ambiguous word or entity. In our complementary approach, we analyze directly how multiple senses are represented in embeddings. While WSD and EL are important, they conflate (a) the evaluation of the information content of an embedding with (b) a model’s ability to extract that information based on contextual clues. We mostly focus on (a) here. Also, in contrast to WSD datasets, WIKI-PSE is not based on inferred sense tags and not based on artificial ambiguity, i.e., pseudowords (Gale et al., 1992; Sch¨utze, 1992), but on real senses marked by Wikipedia hyperlinks. There has been work in generating dictionary definitions from word embeddings (Noraset et al., 2017; Bosc and Vincent, 2018; Gadetsky et al., 2018). Gadetsky et al. (2018) explicitly adress ambiguity and generate definitions for words conditioned on their embeddings and selected contexts. This also conflates (a) and (b). Some prior work also looks at how ambiguity affects word embeddings. Arora et al. (2018) posit that a word embedding is a linear combination of its sense embeddings and that senses can be extracted via sparse coding. Mu et al. (2017) argue that sense and word vectors are linearly related and show that word embeddings are intersections of sense subspaces. Working with synthetic data, Yaghoobzadeh and Sch¨utze (2016) evaluate embedding models on how robustly they represent two senses for low vs. high skewedness of senses. Our analysis framework is novel and complementary, with several new findings. Some believe that ambiguity should be elimi5742 Figure 1: Example of how we build WIKI-PSE. There are three sentences linking “apple” to different entities. There are two mentions (m2,m3) with the organization sense (S-class) and one mention (m1) with the food sense (S-class). nated from embeddings, i.e., that a separate embedding is needed for each sense (Sch¨utze, 1998; Huang et al., 2012; Neelakantan et al., 2014; Li and Jurafsky, 2015; Camacho-Collados and Pilehvar, 2018). This can improve performance on contextual word similarity, but a recent study (Dubossarsky et al., 2018) questions this finding. WIKI-PSE allows us to compute sense embeddings; we will analyze their effect on word embeddings in our diagnostic classifications. 3 WIKI-PSE Resource We want to create a resource that allows us to probe embeddings for S-classes. Specifically, we have the following desiderata: (i) We need a corpus that is S-class-annotated at the token level, so that we can train sense embeddings as well as conventional word embeddings. (ii) We need a dictionary of the corpus vocabulary that is S-class-annotated at the type level. This gives us a gold standard for probing embeddings for S-classes. (iii) The resource must be large so that we have a training set of sufficient size that lets us compare different embedding learners and train complex models for probing. We now describe WIKI-PSE, a Wikipediadriven resource for Probing Semantics in Embeddings, that satisfies our desiderata. 
WIKI-PSE consists of a corpus and a corpusbased dataset of word/S-class pairs: an S-class is assigned to a word if the word occurs with that Slocation, person, organization, art, event, broadcast program, title, product, living thing, peopleethnicity, language, broadcast network, time, religion-religion, award, internet-website, god, education-educational degree, food, computerprogramming language, metropolitan transittransit line, transit, finance-currency, disease, chemistry, body part, finance-stock exchange, law, medicine-medical treatment, medicinedrug, broadcast-tv channel, medicine-symptom, biology, visual art-color Table 1: S-classes in WIKI-PSE sorted by frequency. class in the corpus. There exist sense annotated corpora like SemCor (Miller et al., 1993), but due to the cost of annotation, those corpora are usually limited in size, which can hurt the quality of the trained word embeddings – an important factor for our analysis. In this work, we propose a novel and scalable approach to building a corpus without depending on manual annotation except in the form of Wikipedia anchor links. WIKI-PSE is based on the English Wikipedia (2014-07-07). Wikipedia is suitable for our purposes since it contains nouns – proper and common nouns – disambiguated and linked to Wikipedia pages via anchor links. To find more abstract meanings than Wikipedia pages, we annotate the nouns with S-classes. We make use of the 113 FIGER types2 (Ling and Weld, 2012), e.g., person and person/author. Since we use distant supervision from knowledge base entities to their mentions in Wikipedia, the annotation contains noise. For example, “Karl Marx” is annotated with person/author, person/politician and person and so is every mention of him based on distant supervision which is unlikely to be true. To reduce noise, we sacrifice some granularity in the Sclasses. We only use the 34 parent S-classes in the FIGER hierarchy that have instances in WIKIPSE; see Table 1. For example, we leave out person/author and person/politician and just use person. By doing so, mentions of nouns are rarely ambiguous with respect to S-class and we still have a reasonable number of S-classes (i.e., 34). The next step is to aggregate all S-classes a surface form is annotated with. Many surface forms 2We follow the mappings in https://github.com/ xiaoling/figer to first find the corresponding Freebase topic of a Wikipedia page and then map it to FIGER types. 5743 are used for referring to more than one Wikipedia page and, therefore, possibly to more than one Sclass. So, by using these surface forms of nouns3, and their aggregated derived S-classes, we build our dataset of words and S-classes. See Figure 1 for “apple” as an example. We differentiate linked mentions by enclosing them with “@”, e.g., “apple” →“@apple@”. If the mention of a noun is not linked to a Wikipedia page, then it is not changed, e.g., its surface form remains “apple”. This prevents conflation of Sclass-annotated mentions with unlinked mentions. For the corpus, we include only sentences with at least one annotated mention resulting in 550 million tokens – an appropriate size for embedding learning. By lowercasing the corpus and setting the minimum frequency to 20, the vocabulary size is ≈500,000. There are ≈276,000 annotated words in the vocabulary, each with >= 1 Sclasses. In total, there are ≈343,000 word/S-class pairs, i.e., words have 1.24 S-classes on average. For efficiency, we select a subset of words for WIKI-PSE. 
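Before turning to how this subset is selected, the following minimal sketch illustrates the aggregation step just described: linked mentions are rewritten as "@word@" tokens, and the S-classes of all pages a surface form links to are collected per word. The mapping entity_to_sclasses is a purely hypothetical stand-in for the Wikipedia-to-Freebase-to-FIGER distant-supervision step; this is an illustration of the idea, not the pipeline actually used to build WIKI-PSE.

from collections import defaultdict

# Hypothetical stand-in for the distant-supervision table that maps a linked
# Wikipedia page to its parent FIGER S-classes (34 classes in WIKI-PSE).
entity_to_sclasses = {
    "Apple_Inc.": {"organization"},
    "Apple": {"food"},          # the fruit
    "Karl_Marx": {"person"},
}

def mark_mention(surface):
    # Enclose linked mentions in "@" so they are not conflated with unlinked uses.
    return "@" + surface.lower() + "@"

def build_word_sclass_pairs(linked_mentions):
    # linked_mentions: iterable of (surface form, linked Wikipedia page) pairs.
    word_to_sclasses = defaultdict(set)
    for surface, page in linked_mentions:
        word_to_sclasses[mark_mention(surface)] |= entity_to_sclasses.get(page, set())
    return word_to_sclasses

mentions = [("apple", "Apple_Inc."), ("apple", "Apple"), ("Apple", "Apple_Inc.")]
print(build_word_sclass_pairs(mentions))  # e.g. {'@apple@': {'food', 'organization'}}

The same mapping also yields the word/S-class tokens used later for sense embeddings (e.g. "@apple@-food"), by appending a mention's S-class to its marked surface form.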
We first add all multiclass words (those with more than one S-class) to the dataset, divided randomly into train and test (same size). Then, we add a random set with the same size from single-class words, divided randomly into train and test (same size). The resulting train and test sets have the size of 44,250 each, with an equal number of single and multiclass words. The average number of S-classes per word is 1.75. 4 Probing for Semantic Classes in Word Embeddings We investigate embeddings by probing: Is the information we care about available in a word w’s embedding? Specifically, we probe for S-classes: Can the information whether w belongs to a specific S-class be obtained from its embedding? The probing method we use should be: (i) simple with only the word embedding as input, so that we do not conflate the quality of embeddings with other confounding factors like quality of context representation (as in WSD); (ii) supervised with enough training data so that we can learn strong and nonlinear classifiers to extract meanings from embeddings; (iii) agnostic to the model architecture that the word embeddings are trained with. WIKI-PSE, introduced in §3, provides a text corpus and annotations for setting up probing 3Linked multiwords are treated as single tokens. methods satisfying (i) – (iii). We now describe the other elements of our experimental setup: word and sense representations, probing tasks and classification models. 4.1 Representations of Words and Senses We run word embedding models like WORD2VEC on WIKI-PSE to get embeddings for all words in the corpus, including special common and proper nouns like “@apple@”. We also learn an embedding for each S-class of a word, e.g., one embedding for “@apple@food” and one for “@apple@-organization”. To do this, each annotated mention of a noun (e.g., “@apple@”) is replaced with a word/S-class token corresponding to its annotation (e.g., with “@apple@-food” or “@apple@-organization”). These word/S-class embeddings correspond to sense embeddings in other work. Finally, we create an alternative word embedding for an ambiguous word like “@apple@” by aggregrating its word/S-class embeddings by summing them: ⃗w = P i αi ⃗ wci where ⃗w is the aggregated word embedding and the ⃗ wci are the word/Sclass embeddings. We consider two aggregations: • For uniform sum, written as unifΣ, we set αi = 1. So a word is represented as the sum of its sense (or S-class) embeddings; e.g., the representation of “apple” is the sum of its organization and food S-class vectors. • For weighted sum, written as wghtΣ, we set αi = freq(wci)/ P j freq(wcj), i.e., the relative frequency of word/S-class wci in mentions of the word w. So a word is represented as the weighted sum of its sense (or S-class) embeddings; e.g., the representation of “apple” is the weighted sum of its organization and food S-class vectors where the organization vector receives a higher weight since it is more frequent in our corpus. unifΣ is common in multi-prototype embeddings, cf. (Rothe and Sch¨utze, 2017). wghtΣ is also motivated by prior work (Arora et al., 2018). Aggregation allows us to investigate the reason for poor performance of single-vector embeddings. Is it a problem that a single-vector representation is used as the multi-prototype literature claims? 
Or are single-vectors in principle sufficient, but the way sense embeddings are aggregated in a single5744   R1 event organization  food   R2   R3 R4 R5 R6 R7 + + + Figure 2: A 2D embedding space with three S-classes (food, organization and event). A line divides positive and negative regions of each S-class. Each of the seven Ri regions corresponds to a subset of S-classes. vector representation (through an embedding algorithm, through unifΣ or through wghtΣ) is critical. 4.2 Probing Tasks The first task is to probe for S-classes. We train, for each S-class, a binary classifier that takes an embedding as input and predicts membership in the S-class. An ambiguous word like “@apple@” belongs to multiple S-classes, so each of several different binary classifiers should diagnose it as being in its S-class. How well this type of probing for S-classes works in practice is one of our key questions: can S-classes be correctly encoded in embedding space? Figure 2 shows a 2D embedding space: each point is assigned to a subset of the three S-classes, e.g., “@apple@” is in the region “+food ∩+organization ∩-event” and “@google@” in the region “-food ∩+organization ∩-event”. The second probing task predicts whether an embedding represents an unambiguous (i.e., one S-class) or an ambiguous (i.e., multiple S-classes) word. Here, we do not look for any specific meaning in an embedding, but assess whether it is an encoding of multiple different meanings or not. High accuracy of this classifier would imply that ambiguous and unambiguous words are distinguishable in the embedding space. 4.3 Classification Models Ideally, we would like to have linearly separable spaces with respect to S-classes – presumably embeddings from which information can be effectively extracted by such a simple mechanism are better. However, this might not be the case considering the complexity of the space: non-linear models may detect S-classes more accurately. Nearest neighbors computed by cosine similarity are frequently used to classify and analyze embeddings, so we consider them as well. Accordingly, we experiment with three classifiers: (i) logistic regression (LR); (ii) multi-layer perceptron (MLP) with one hidden and a final ReLU layer; and (iii) KNN: K-nearest neighbors. 5 Experiments Learning embeddings. Our method is agnostic to the word embedding model. Therefore, we experiment with two popular similar embedding models: (i) SkipGram (henceforth SKIP) (Mikolov et al., 2013), and (ii) Structured SkipGram (henceforth SSKIP) (Ling et al., 2015). SSKIP models word order while SKIP is a bag-of-words model. We use WANG2VEC (Ling et al., 2015) with negative sampling for training both models on WIKI-PSE. For each model, we try four embedding sizes: {100, 200, 300, 400} using identical hyperparameters: negatives=10, iterations=5, window=5. emb size ln LR KNN MLP SKIP word 100 1 .723 .738 .773 200 2 .740 .734 .786 300 3 .745 .730 .787 400 4 .747 .727 .786 SKIP wghtΣ 100 5 .681 .727 .752 200 6 .695 .721 .756 300 7 .699 .728 .752 400 8 .702 .711 .753 SKIP unifΣ 100 9 .787 .783 .830 200 10 .797 .773 .833 300 11 .800 .765 .832 400 12 .801 .758 .834 SSKIP word 100 13 .737 .749 .785 200 14 .754 .745 .793 300 15 .760 .741 .797 400 16 .762 .737 .790 SSKIP wghtΣ 100 17 .699 .733 .762 200 18 .710 .726 .764 300 19 .714 .718 .767 400 20 .717 .712 .763 SSKIP unifΣ 100 21 .801 .783 .834 200 22 .809 .767 .840 300 23 .812 .755 .842 400 24 .814 .747 .844 random – – .273 – – Table 2: F1 for S-class prediction. emb: embedding, unifΣ (resp. 
wghtΣ): uniform (resp. weighted) sum of word/S-classes. ln: line number. Bold: best F1 result per column and embedding model (SKIP and SSKIP). 5.1 S-class Prediction Table 2 shows results on S-class prediction for word, unifΣ and wghtΣ embeddings trained using SKIP and SSKIP. Random is a simple baseline that randomly assigns to a test example each S-class 5745 0.2 0.4 0.6 0.8 1.0 dominance-level 0.4 0.5 0.6 0.7 0.8 0.9 1.0 R word unif wght random (a) 2 4 6 8 10 12 number of S-classes 0.2 0.4 0.6 0.8 1.0 R word unif wght random (b) Figure 3: Results of S-class prediction as a function of two important factors: dominance-level and number of S-classes according to its prior probability (i.e., proportion in train). We train classifiers with Scikit-learn (Pedregosa et al., 2011). Each classifier is an independent binary predictor for one S-class. We use the global metric of micro F1 over all test examples and over all S-class predictions. We see the following trends in our results. MLP is consistently better than LR or KNN. Comparing MLP and LR reveals that the space is not linearly separable with respect to the S-classes. This means that linear classifiers are insufficient for semantic probing: we should use models for probing that are more powerful than linear. Higher dimensional embeddings perform better for MLP and LR, but worse for KNN. We do further analysis by counting the number k of unique S-classes in the top 5 nearest neighbors for word embeddings; k is 1.42 times larger for embeddings of dimensionality 400 than 200. Thus, more dimensions results in more diverse neighborhoods and more randomness. We explain this by the increased degrees of freedom in a higher dimensional space: idiosyncratic properties of words can also be represented given higher capacity and so similarity in the space is more influenced by idiosyncracies, not by general properties like semantic classes. Similarity datasets tend to only test the majority sense of words (Gladkova and Drozd, 2016), and that is perhaps why similarity results usually do not follow the same trend (i.e., higher dimensions improve results). See Table 6 in Appendix for results on selected similarity datasets. SSKIP performs better than SKIP. The difference between the two is that SSKIP models word order. Thus, we conclude that modeling word order is important for a robust representation. This is in line with the more recent FASTTEXT model with word order that outperforms prior work (Mikolov et al., 2017). We now compare word embeddings, unifΣ, and wghtΣ. Recall that the sense vectors of a word have equal weight in unifΣ and are weighted according to their frequency in wghtΣ. The results for word embeddings (e.g., line 1) are between those of unifΣ (e.g., line 9) and wghtΣ (e.g., line 5). This indicates that their weighting of sense vectors is somewhere between the two extremes of unifΣ and wghtΣ. Of course, word embeddings are not computed as an explicit weighted sum of sense vectors, but there is evidence that they are implicit frequency-based weighted sums of meanings or concepts (Arora et al., 2018). The ranking unifΣ > word embeddings > wghtΣ indicates how well individual sense vectors are represented in the aggregate word vectors and how well they can be “extracted” by a classifier in these three representations. Our prediction task is designed to find all meanings of a word, including rare senses. unifΣ is designed to give relatively high weight to rare senses, so it does well on the prediction task. 
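Before continuing the comparison with wghtΣ and plain word embeddings, the sketch below spells out the setup behind Table 2: the two aggregation schemes over a word's sense (word/S-class) vectors, and one independent binary probe per S-class evaluated with micro-F1 over all test examples and all S-class predictions. The arrays are hypothetical toy placeholders and the classifier hyperparameters are illustrative; scikit-learn appears here only because the paper reports using it, and this is not the exact experimental code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

def unif_sum(sense_vecs):
    # unifΣ: unweighted sum of a word's sense (word/S-class) vectors.
    return np.sum(np.asarray(sense_vecs), axis=0)

def wght_sum(sense_vecs, sense_freqs):
    # wghtΣ: sense vectors weighted by their relative mention frequency.
    w = np.asarray(sense_freqs, dtype=float)
    w /= w.sum()
    return (w[:, None] * np.asarray(sense_vecs)).sum(axis=0)

def probe_sclasses(X_train, Y_train, X_test, Y_test, use_mlp=True):
    # One independent binary classifier per S-class; Y_* are binary indicator
    # matrices of shape (n_words, n_sclasses). Returns global micro-F1.
    preds = np.zeros_like(Y_test)
    for c in range(Y_train.shape[1]):
        clf = (MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
               if use_mlp else LogisticRegression(max_iter=1000))
        clf.fit(X_train, Y_train[:, c])
        preds[:, c] = clf.predict(X_test)
    return f1_score(Y_test, preds, average="micro")

Passing plain word vectors, unif_sum vectors, or wght_sum vectors as X_train/X_test corresponds to the word, unifΣ and wghtΣ rows of Table 2; swapping in KNeighborsClassifier would give the KNN column.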
wghtΣ and word embeddings give low weights to rare senses and very high weights to frequent senses, so the rare senses can be “swamped” and difficult to extract by classifiers from the embeddings. Public embeddings. To give a sense on how well public embeddings, trained on much larger data, do on S-class prediction in WIKI-PSE, we use 300d GLOVE embeddings trained on 6B to5746 emb LR KNN MLP word .711 .605 .715 wghtΣ .652 .640 .667 unifΣ .766 .709 .767 GLOVE(6B) .667 .638 .685 FASTTEXT(Wiki) .699 .599 .697 Table 3: F1 for S-class prediction on the subset of WIKI-PSE whose vocabulary is shared with GLOVE and FASTTEXT. Apart from using a subset of WIKIPSE, this is the same setup as in Table 2, but here we compare word, wghtΣ, and unifΣ with public GLOVE and FASTTEXT. kens4 from Wikipedia and Gigaword and FASTTEXT Wikipedia word embeddings.5 We create a subset of the WIKI-PSE dataset by keeping only single-token words that exist in the two embedding vocabularies. The size of the resulting dataset is 13,000 for train and test each; the average number of S-classes per word is 2.67. Table 3 shows results and compares with our different SSKIP 300d embeddings. There is a clear performance gap between the two off-theshelf embedding models and unifΣ, indicating that training on larger text does not necessarily help for prediction of rare meanings. This table also confirms Table 2 results with respect to comparison of learning model (MLP, LR, KNN) and embedding model (word, wghtΣ, unifΣ). Overall, the performance drops compared to the results in Table 2. Compared to the WIKI-PSE dataset, this subset has fewer (13,000 vs. 44,250) training examples, and a larger number of labels per example (2.67 vs. 1.75). Therefore, it is a harder task. 5.1.1 Analysis of Important Factors We analyze the performance with respect to multiple factors that can influence the quality of the representation of S-class s in the embedding of word w: dominance, number of S-classes, frequency and typicality. We discuss the first two here and the latter two in the Appendix §A. These factors are similar to those affecting WSD systems (Pilehvar and Navigli, 2014). We perform this analysis for MLP classifier on SSKIP 400d embeddings. We compute the recall for various conditions.6 Dominance of the S-class s for word w is defined as the percentage of the occurrences of w where its labeled S-class is s. Figure 3a shows 4https://nlp.stanford.edu/projects/glove/ 5https://fasttext.cc/docs/en/pretrained-vectors.html 6Precision for these cases is not defined. This is similarly applied in WSD (Pilehvar and Navigli, 2014). for each dominance level what percentage of Sclasses of that level were correctly recognized by their binary classifier. For example, 0.9 or 90% of S-classes of words with dominance level 0.3 were correctly recognized by the corresponding Sclass’s binary classifier for unifΣ ((a), red curve). Not surprisingly, more dominant meanings are represented and recognized better. We also see that word embeddings represent non-dominant meanings better than wghtΣ, but worse than unifΣ. For word embeddings, the performance drops sharply for dominance <0.3. For wghtΣ, the sharp drops happens earlier, at dominance <0.4. Even for unifΣ, there is a (less sharp) drop – this is due to other factors like frequency and not due to poor representation of less dominant S-classes (which all receive equal weight for unifΣ). The number of S-classes of a word can influence the quality of meaning extraction from its embedding. 
Figure 3b confirms our expectation: It is easier to extract a meaning from a word embedding that encodes fewer meanings. For words with only one S-class, the result is best. For ambiguous words, performance drops but this is less of an issue for unifΣ. For word embeddings (word), performance remains in the range 0.6-0.7 for more than 3 S-classes which is lower than unifΣ but higher than wghtΣ by around 0.1. 5.2 Ambiguity Prediction We now investigate if a classifier can predict whether a word is ambiguous or not, based on the word’s embedding. We divide the WIKI-PSE dataset into two groups: unambiguous (i.e., one S-class) and ambiguous (i.e., multiple S-classes). LR, KNN and MLP are trained on the training set and applied to the words in test. The only input to a classifier is the embedding; the output is binary: one S-class or multiple S-classes. We use SSKIP word embeddings (dimensionality 400) and L2-normalize all vectors before classification. As a baseline, we use the word frequency as single feature (FREQUENCY) for LR classifier. model LR KNN MLP FREQUENCY 64.8 word 77.9 72.1 81.2 wghtΣ 76.9 69.2 81.1 unifΣ 96.2 72.2 97.1 Table 4: Accuracy for predicting ambiguity 5747 1 2 3 4 5 6 7 8 number of S-classes 0.0 0.2 0.4 0.6 0.8 1.0 ACC word FREQUENCY Figure 4: Accuracy of word embedding and FREQUENCY for predicting ambiguity as a function of number of S-classes, using MLP classifier. Table 4 shows overall accuracy and Figure 4 accuracy as a function of number of S-classes. Accuracy of standard word embeddings is clearly above the baselines, e.g., 81.2% for MLP and 77.9% for LR compared to 64.8% for FREQUENCY. The figure shows that the decision becomes easier with increased ambiguity (e.g., ≈100% for 6 or more S-classes). It makes sense that a highly ambiguous word is more easily identifiable than a twoway ambiguous word. MLP accuracy for unifΣ is close to 100%. We can again attribute this to the fact that rare senses are better represented in unifΣ than in regular word embeddings, so the ambiguity classification is easier. KNN results are worse than LR and MLP. This indicates that similarity is not a good indicator of degree of ambiguity: words with similar degrees of ambiguity do not seem to be neighbors of each other. This observation also points to an explanation for why the classifiers achieve such high accuracy. We saw before that S-classes can be identified with high accuracy. Imagine a multilayer architecture that performs binary classification for each S-class in the first layer and, based on that, makes the ambiguity decision based on the number of S-classes found. LR and MLP seem to approximate this architecture. Note that this can only work if the individual S-classes are recognizable, which is not the case for rare senses in regular word embeddings. In Appendix §C, we show top predictions for ambiguous and unambiguous words. 5.3 NLP Application Experiments Our primary goal is to probe meanings in word embeddings without confounding factors like contextual usage. However, to give insights on how our probing results relate to NLP tasks, we evaluate our embeddings when used to represent word tokens.7 Note that our objective here is not to improve over other baselines, but to perform analysis. We select mention, sentence and sentence-pair classification datasets. For mention classification, we adapt Shimaoka et al. (2017)’s setup:8 training, evaluation (FIGER dataset) and implementation. The task is to predict the contextual fine-grained types of entity mentions. 
We lowercase the dataset to match the vocabularies of GLOVE(6B), FASTTEXT(Wiki) and our embeddings. For sentence and sentence-pair classifications, we use the SentEval9 (Conneau and Kiela, 2018) setup for four datasets: MR (Pang and Lee, 2005) (positive/negative sentiment prediction for movie reviews) , CR (Hu and Liu, 2004) (positive/negative sentiment prediction for product reviews), SUBJ (Pang and Lee, 2004) (subjectivity/objectivity prediction) and MRPC (Dolan et al., 2004) (paraphrase detection). We average embeddings to encode a sentence. emb MC CR MR SUBJ MRPC word 64.6 70.4 71.4 89.2 71.3 wghtΣ 65.4 72.3 72.0 89.4 71.5 unifΣ 61.6 69.1 68.8 87.9 71.3 GLOVE(6B) 58.1 75.7 75.2 91.3 72.5 FASTTEXT(Wiki) 55.5 76.7 75.2 91.2 71.6 Table 5: Performance of the embedding models on five NLP tasks Table 5 shows results. For MC, performance of embeddings is ordered: wghtΣ > word > unifΣ. This is the opposite of the ordering in Table 2 where unifΣ was the best and wghtΣ the worst. The models with more weight on frequent meanings perform better in this task, likely because the dominant S-class is mostly what is needed. In an error analysis, we found many cases where mentions have one major sense and some minor senses; e.g., unifΣ predicts “Friday” to be “location” in the context “the U.S. Attorney’s Of7For the embeddings used in this experiment, if there are versions with and without “@”s, then we average the two; e.g., “apple” is the average of “apple” and “@apple@”. 8https://github.com/shimaokasonse/NFGEC 9https://github.com/facebookresearch/SentEval 5748 fice announced Friday”. Apart from the major Sclass “time”, “Friday” is also a mountain (“Friday Mountain”). unifΣ puts the same weight on “location” and “time”. wghtΣ puts almost no weight on “location” and correctly predicts “time”. Results for the four other datasets are consistent: the ordering is the same as for MC. 6 Discussion and Conclusion We quantified how well multiple meanings are represented in word embeddings. We did so by designing two probing tasks, S-class prediction and ambiguity prediction. We applied these probing tasks on WIKI-PSE, a large new resource for analysis of ambiguity and word embeddings. We used S-classes of Wikipedia anchors to build our dataset of word/S-class pairs. We view S-classes as corresponding to senses. A summary of our findings is as follows. (i) We can build a classifier that, with high accuracy, correctly predicts whether an embedding represents an ambiguous or an unambiguous word. (ii) We show that semantic classes are recognizable in embedding space – a novel result as far as we know for a real-world dataset – and much better with a nonlinear classifier than a linear one. (iii) The standard word embedding models learn embeddings that capture multiple meanings in a single vector well – if the meanings are frequent enough. (iv) Difficult cases of ambiguity – rare word senses or words with numerous senses – are better captured when the dimensionality of the embedding space is increased. But this comes at a cost – specifically, cosine similarity of embeddings (as, e.g., used by KNN, §5.2) becomes less predictive of S-class. (v) Our diagnostic tests show that a uniform-weighted sum of the senses of a word w (i.e., unifΣ) is a high-quality representation of all senses of w – even if the word embedding of w is not. This suggests again that the main problem is not ambiguity per se, but rare senses. 
(vi) Rare senses are badly represented if we use explicit frequency-based weighting of meanings (i.e., wghtΣ) compared to word embedding learning models like SkipGram. To relate these findings to sentence-based applications, we experimented with a number of public classification datasets. Results suggest that embeddings with frequency-based weighting of meanings work better for these tasks. Weighting all meanings equally means that a highly dominant sense (like “time” for “Friday”) is severely downweighted. This indicates that currently used tasks rarely need rare senses – they do fine if they have only access to frequent senses. However, to achieve high-performance natural language understanding at the human level, our models also need to be able to have access to rare senses – just like humans do. We conclude that we need harder NLP tasks for which performance depends on rare as well as frequent senses. Only then will we be able to show the benefit of word representations that represent rare senses accurately. Acknowledgments We are grateful for the support of the European Research Council (ERC #740516) and UPV/EHU (excellence research group) for this work. Next, we thank all the anonymous reviewers their detailed assessment and helpful comments. We also appreciate the insightful discussion with Geoffrey J. Gordon, Tong Wang, and other members of Microsoft Research Montr´eal. References Eneko Agirre and Philip Edmonds. 2007. Word Sense Disambiguation: Algorithms and Applications, 1st edition. Springer Publishing Company, Incorporated. Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. Linear algebraic structure of word senses, with applications to polysemy. TACL, 6:483–495. Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563–566. Tom Bosc and Pascal Vincent. 2018. Auto-encoding dictionary definitions into consistent word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1522–1532. Jos´e Camacho-Collados and Mohammad Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. Journal of Artifical Intelligence, 63:743–788. Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In EMNLP, pages 594–602. 5749 Massimiliano Ciaramita and Mark Johnson. 2003. Supersense tagging of unknown nouns in wordnet. In EMNLP, pages 168–175. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In ACL 2018. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics. Haim Dubossarsky, Eitan Grossman, and Daphna Weinshall. 2018. Coming to your senses: on controls and evaluation sets in polysemy research. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1732–1740. Lucie Flekova and Iryna Gurevych. 2016. Supersense embeddings: A unified model for supersense interpretation, prediction, and utilization. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2029–2041. Artyom Gadetsky, Ilya Yakubovskiy, and Dmitry Vetrov. 2018. Conditional generators of words definitions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 266–271. William A Gale, Kenneth W Church, and David Yarowsky. 1992. Work on statistical methods for word sense disambiguation. In Working Notes of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language, volume 54, page 60. Anna Gladkova and Aleksandr Drozd. 2016. Intrinsic evaluations of word embeddings: What can we do better? In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, RepEval@ACL 2016, pages 36–42. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168–177. Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873–882. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and’diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926. Rub´en Izquierdo, Armando Su´arez, and German Rigau. 2009. An empirical study on class-based word sense disambiguation. In EACL, pages 389–397. Stanislaw Jastrzebski, Damian Lesniak, and Wojciech Marian Czarnecki. 2017. How to evaluate word embeddings? on importance of data efficiency and simple supervised tasks. CoRR, abs/1702.02170. Katharina Kann, Alex Warstadt, Adina Williams, and Samuel R Bowman. 2019. Verb argument structure alternations in word and sentence embeddings. In Proceedings of the Society for Computation in Linguistics. Upali S. Kohomban and Wee Sun Lee. 2005. Learning semantic classes for word sense disambiguation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 34–41. Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In EMNLP, pages 1722–1732. Wang Ling, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1299–1304. Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the 16th AAAI Conference on Artificial Intelligence. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Rada Mihalcea and Andras Csomai. 2007. 
Wikify!: linking documents to encyclopedic knowledge. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, pages 233– 242. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2017. Advances in pre-training distributed word representations. arXiv preprint arXiv:1712.09405. 5750 George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proceedings of the Workshop on Human Language Technology, pages 303–308. Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. Geometry of polysemy. In Proceedings of the 5th International Conference on Learning Representations. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Comput. Surv., 41(2):10:1–10:69. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In EMNLP, pages 1059–1069. Thanapon Noraset, Chen Liang, Larry Birnbaum, and Doug Downey. 2017. Definition modeling: Learning to define word embeddings in natural language. In Thirty-First AAAI Conference on Artificial Intelligence. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 271–278. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 115–124. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Mohammad Taher Pilehvar and Roberto Navigli. 2014. A large-scale pseudoword-based evaluation framework for state-of-the-art word sense disambiguation. Computational Linguistics, 40(4):837–881. Sascha Rothe and Hinrich Sch¨utze. 2017. Autoextend: Combining word embeddings with semantic resources. Computational Linguistics, 43(3):593– 617. Hinrich Sch¨utze. 1992. Dimensions of meaning. In Proceedings Supercomputing ’92, Minneapolis, MN, USA, November 16-20, 1992, pages 787–796. Hinrich Sch¨utze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97–123. LutfiKerem Senel, Ihsan Utlu, Veysel Y¨ucesoy, Aykut Koc, and Tolga C¸ ukur. 2018. Semantic structure and interpretability of word embeddings. IEEE/ACM Trans. Audio, Speech & Language Processing, 26(10):1769–1779. 
Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 1271–1280. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, et al. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In ICLR. Ivan Vulic, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. Hyperlex: A large-scale evaluation of graded lexical entailment. Computational Linguistics, 43(4). Laura Wendlandt, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors influencing the surprising instability of word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2092–2102. Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 119–129. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2015. Corpus-level fine-grained entity typing using contextual information. In EMNLP, pages 715–725. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2016. Intrinsic subspace evaluation of word embedding representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 236–246. David Yarowsky. 1992. Word-sense disambiguation using statistical models of roget’s categories trained on large corpora. In 14th International Conference on Computational Linguistics, pages 454–460. 5751 0 50 100 150 200 frequency-level 0.4 0.5 0.6 0.7 0.8 0.9 1.0 R word unif wght random (a) 0.2 0.0 0.2 0.4 typicality-level 0.0 0.2 0.4 0.6 0.8 1.0 R word unif wght random (b) Figure 5: Results of word, uniform and weighted word/S-class embeddings for two other important factors: frequency and typicality of S-class. A Analysis of important factor: more analysis Frequency is defined as the absolute frequency of s in occurrences of w. Frequency is important to get good representations and the assumption is that more frequency means better results. In Figure 5a, prediction performance is shown for a varying frequency-level. Due to rounding, each level in x includes frequencies [x −5, x + 5]. As expected higher frequency means better results. All embeddings have high performance when frequency is more than 20, emphasizing that embeddings can indeed represent a meaning well if it is not too rare. For low frequency word/S-class es, the uniform sum performs clearly better than the other models. This shows that word and weighted word/S-class embeddings are not good encodings for rare meanings. Typicality of a meaning for a word is important. We define the typicality of S-class s for word w as its average compatibility level with other classes of w. We use Pearson correlation between Sclasses in the training words and assign the compatibility level of S-classes based on that. In Figure 5b, we see that more positive typicality leads to better results in general. Each level in x axis represents [x −0.05, x + 0.05]. The S-classes that have negative typicality are often the frequent ones like “person” and “location” and that is why the performance is relatively good for them. B What does happen when classes of a word become balanced? 
Here, we analyze the space of word embeddings with multiple semantic classes as the class disFigure 6: The average number of unique semantic classes in the nearest neighbors of words with two classes, in different dominance level. tribution gets more balanced. In Figure 6, we show that for two-class words, the average number of unique classes in the top five nearest neighbors increases as the dominance level increases. The dominance-level of 0.4 is basically where the two classes are almost equally frequent. As the two classes move towards equal importance, their word embeddings move towards a space with more diversity. C Ambiguity prediction examples In Table 7, we show some example predicted ambiguous and unambiguous words based on the word embeddings. D Supersense experiment To confirm our results in another dataset, we try supersense annotated Wikipedia of UKP (Flekova and Gurevych, 2016). We use their published 200dimensional word embeddings. A similar process 5752 model size MEN MTurk RW SimLex999 WS353 Google MSR SKIP 100 0.633 0.589 0.283 0.276 0.585 0.386 0.317 SKIP 200 0.675 0.613 0.286 0.306 0.595 0.473 0.382 SKIP 300 0.695 0.624 0.279 0.325 0.626 0.495 0.405 SKIP 400 0.708 0.630 0.268 0.334 0.633 0.506 0.416 SSKIP 100 0.598 0.555 0.313 0.272 0.559 0.375 0.349 SSKIP 200 0.629 0.574 0.310 0.306 0.592 0.464 0.413 SSKIP 300 0.645 0.588 0.300 0.324 0.606 0.486 0.430 SSKIP 400 0.655 0.576 0.291 0.340 0.616 0.491 0.431 Table 6: Similarity and analogy results of our word embeddings on a set of datasets (Jastrzebski et al., 2017). The table shows the Spearmans correlation between the models similarities and human judgments. Size is the dimensionality of the embeddings. Except for RW dataset, results improve by increasing embeddings size. word frequency senses likelihood @liberty@ 554 event, organization, location, product, art, person 1.0 @aurora@ 879 organization, location, product, god, art, person, broadcast program 1.0 @arcadia@ 331 event, organization, location, product, art, person, living thing 1.0 @brown@ 590 food, event, title, organization, visual art-color, person, art, location, people-ethnicity, living thing 1.0 @marshall@ 1070 art, location, title, organization, person 1.0 @green@ 783 food, art, organization, visual art-color, location, internet-website, metropolitan transit-transit line, religion-religion, person, living thing 1.0 @howard@ 351 person, title, organization, location 1.0 @lucas@ 216 art, person, organization, location 1.0 @smith@ 355 title, organization, person, product, art, location, broadcast program 1.0 @taylor@ 367 art, location, product, organization, person 1.0 ... ... @tom cibulec@ 47 person 0.0 @judd winick@ 113 person 0.0 @roger reijners@ 26 person 0.0 @patrick rafter@ 175 person 0.0 @nasser hussain@ 82 person 0.0 @sam wyche@ 76 person, event 0.0 @lovie smith@ 116 person 0.0 @calliostomatidae@ 431 living thing 0.0 @joe girardi@ 147 person 0.0 @old world@ 91 location, living thing 0.0 Table 7: The top ten ambiguous words followed by the top unambiguous words based on our model prediction in Section 5.3. Each line is a word followed by its frequency in the corpus, its dataset senses and finally our ambiguity prediction likelihood to be ambiguous. 5753 model norm? LR KNN MLP MAJORITY 50.0 FREQUENCY 67.3 word embedding yes 70.1 65.4 72.4 word embedding no 72.3 65.4 73.0 Table 8: Ambiguity prediction accuracy for the supersense dataset. Norm: L2-normalizing the vectors. as our WIKI-PSE is applied on the annotated corpus to build word/S-class dataset. 
Here, the S-classes are the supersenses. We consider the NOUN categories of words and build datasets for our analysis by aggregating the supersenses a word is annotated with in the corpus. There are 26 supersenses, and the train and test size is 27,874. Table 8 shows the results of ambiguity prediction: word embeddings predict ambiguity with an accuracy of 73%.
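To make the ambiguity-prediction setup of Section 5.2 and of this appendix concrete, the sketch below contrasts the FREQUENCY baseline (logistic regression with word frequency as the single feature) with a probe over L2-normalized word embeddings that predicts whether a word has one or several S-classes. The arrays are hypothetical placeholders and the MLP configuration is illustrative, not the exact one behind Tables 4 and 8.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import normalize

def ambiguity_labels(word_to_sclasses, words):
    # 1 if a word has more than one S-class (ambiguous), else 0.
    return np.array([int(len(word_to_sclasses[w]) > 1) for w in words])

def frequency_baseline(train_freqs, y_train, test_freqs, y_test):
    # FREQUENCY: word frequency is the only input feature.
    f_tr = np.asarray(train_freqs, dtype=float).reshape(-1, 1)
    f_te = np.asarray(test_freqs, dtype=float).reshape(-1, 1)
    clf = LogisticRegression().fit(f_tr, y_train)
    return accuracy_score(y_test, clf.predict(f_te))

def embedding_probe(X_train, y_train, X_test, y_test):
    # Ambiguity probe on L2-normalized word embeddings.
    X_train, X_test = normalize(X_train), normalize(X_test)
    clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500).fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

Bucketing the test words by their number of S-classes and computing accuracy per bucket reproduces the kind of analysis shown in Figure 4.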
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5754–5764 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5754 Deep Neural Model Inspection and Comparison via Functional Neuron Pathways James Fiacco Language Technologies Institute Carnegie Mellon University [email protected] Samridhi Choudhary∗ Alexa Machine Learning Amazon [email protected] Carolyn P. Ros´e Language Technologies Institute Carnegie Mellon University [email protected] Abstract We introduce a general method for the interpretation and comparison of neural models. The method is used to factor a complex neural model into its functional components, which are comprised of sets of co-firing neurons that cut across layers of the network architecture, and which we call neural pathways. The function of these pathways can be understood by identifying correlated task level and linguistic heuristics in such a way that this knowledge acts as a lens for approximating what the network has learned to apply to its intended task. As a case study for investigating the utility of these pathways, we present an examination of pathways identified in models trained for two standard tasks, namely Named Entity Recognition and Recognizing Textual Entailment. 1 Introduction Interpretation of neural models is a difficult task because the knowledge learned within neural networks is distributed across hundreds of thousands of parameters. Interpreting the significance of any individual neuron is tantamount to reconstructing a forest based on a single pine needle. More specifically, the contribution of each individual neuron is a minuscule part in the overall representation of the learned solution, and the mapping between neurons and function may be many-to-many (Goodfellow et al., 2016). As a response to this, the contribution of this paper is a new method of network interpretation that enables a more abstract view of what a network has learned, which we refer to as neural pathways. In this approach, inspired by the concept of biological neural pathways used in neuroscience research to understand physical brain function (Kennedy et al., 1975), a network is factored into functional groups of co-firing neurons ∗Work was done as a graduate student at Carnegie Mellon University. that cut across layers in a complex network architecture. Rather than attempt interpretation of the activation pattern through a single neuron at a time, we instead attempt interpretation of a functional group of neurons where the activation pattern of the group can then be more effectively associated with task and linguistic knowledge. This enables understanding the neuron groups as working together to accomplish a comprehensible sub-task. These pathways help conceptualize what task and linguistic knowledge a model may be using in an approximate way, the benefit of which is that it does not depend on an isomorphism between network architectures. This method, which can be applied simply in a purely post-hoc analysis, independent of the training process, can enable both understanding of individual models and comparison across models. The interpretation process enables investigation of which identified functional groups correspond to linguistic or task level heuristics that may be employed in well understood non-neural methods for performing the task. 
Furthermore, it enables comparison across very different architectures in terms of the extent and the manner in which each architecture has approximated use of such knowledge. In so doing, the method can also be used to formulate explanations for differences in performance between models based on relevant linguistic or task knowledge that is identified as learned or not learned by the models. This approach builds on and extends prior work using linguistic and task knowledge to understand the behavior and the results of modern neural models (Shi et al., 2016b; Adi et al., 2016; Conneau et al., 2018). In the remainder of the paper we review common techniques for network interpretation followed by a detailed description of the neural pathways approach. Next, we apply the neural pathways approach to previously published neural models, 5755 namely models for the task of named entity recognition (NER) (Ma and Hovy, 2016) on CoNLL 2003 data for English (Sang and Meulder, 2003) and recognizing textual entailment (Dagan and Glickman, 2004). We compare across different neural architectures through a shared lens comprising linguistic and task-level heuristics for the two target tasks and draw conclusions about learning outcomes on those tasks. 2 Related Work Our work falls under the broad topic of neural network interpretation. Recently, in this area of research a wide variety of models have been the target of investigation, including additive classifiers (Poulin et al., 2006), kernel-based classifiers (Baehrens et al., 2010), hierarchical networks (Landecker et al., 2013), and many others that are too numerous to list. As our work focuses on interpretation, we are not presenting new state-of-theart performance on a given task, but rather a new method to understand and compare neural models. Our evaluation is a demonstration that focuses on models trained for the Named Entity Recognition and Recognizing Textual Entailment tasks. The specific goal of our evaluation will be to demonstrate the broad applicability of the approach, and position it as building on and extending the existing body of work exploring interpretability of previously defined neural models (Glockner et al., 2018; Mudrakarta et al., 2018). We observe that neural interpretation approaches fall within several broad categories: visualizations and heatmaps (Karpathy et al., 2015; Strobelt et al., 2016), gradient-based analyses (Potapenko et al., 2017; Samek et al., 2017b; Bach et al., 2015; Arras et al., 2017), learning disentangled representations during training (Whitney, 2016; Siddharth et al., 2017; Esmaeili et al., 2018), and model probes (Shi et al., 2016a; Adi et al., 2016; Conneau et al., 2018; Zhu et al., 2018; Kuncoro et al., 2018; Khandelwal et al., 2018). Our work uses linear probes as a method to identify the function of groups of neurons that are correlated with linguistic and tasklevel features, rather than for interpretation of individual neurons. Through correlation with the pathway analysis, we can furthermore reason about the role that those linguistic and task-level features have in the network’s predictions. Recent attempts to understand the functioning of trained neural models have limited themselves to investigations of the function of individual neurons or individual architectural components. An early way to probe the function of target components, as Karpathy et al. (2015) and Strobelt et al. 
(2016) have each proposed, is by visualizing patterns of activation through the target components, for example using heatmaps. However, making meaningful patterns apparent in these visualizations can be highly dependent on the artful arrangement of the data presented within them, and it is easy to overlook patterns that are not immediately obvious. There have also been approaches that made use of simpler classifiers to predict and then explain mistakes made by more complex models (Ribeiro et al., 2016; Krishnan and Wu, 2017). In a similar vein, linear classifier probes have been used by Alain and Bengio (2016) to co-train simple linear models to illustrate functions performed by particular layers in arbitrarily deep models, and then later by associating the learned patterns in the linear models with task or linguistic knowledge determined by hand or through some other means to be relevant or not instance-by-instance. More recently, Montavon et al. (2017) published a detailed tutorial on the recent approaches and techniques of interpreting deep neural networks. They identified cross-cutting techniques that have been applied to explain the behavior of a wide range of models. A notable contribution of this tutorial is an approach for sensitivity analysis capable of identifying important input features to a network. The technique observes the magnitude of the gradient for each input feature for each data point, giving relevance scores per data point for each feature. Analogous methods for accomplishing similar goals include layer-wise relevance propagation (Bach et al., 2015) and its derivatives (Samek et al., 2017a; Arras et al., 2017). While these approaches have mainly focused on explaining the predictions and performances of a single network at a time, few if any prior attempts have been made to use these techniques for comparison across different network architectures, as we do in this paper. 3 Methodology Many previous approaches have analyzed individual neurons or architectures of specific neural networks with gradient methods (Karpathy et al., 2015; Bach et al., 2015; Arras et al., 2017). However, we propose an approach that enables abstraction above 5756 Figure 1: Flowchart representation of neural pathway based model interpretation. the surface structure of a network architecture, enabling a relaxation of the assumption of an direct link between structure and function. To accomplish this abstraction, we employ a simple approach to identify what we conceptualize as emergent neural pathways, which are specific sets of co-firing neurons that work together as the model makes predictions on the data. To understand the specifics of the function performed by the functional group, we align activation patterns through the group per instance with patterns of relevance for task and linguistic knowledge. 3.1 Prerequisites As this is an interpretation method, there is an assumed set of information about the model, the dataset, and the task that must be known in order to apply the techniques effectively. Namely, there should be a reference set of heuristic knowledge, either at the linguistic or task level, that is associated with the dataset on an instance-by-instance level for at least some subset of the data. Metrics of Interest: As our approach can be generalized across many tasks, the metrics that will be used to identify the salient pathways must be defined before the interpretation process. 
Section 4.1 and 4.2 provide specific examples of these metrics as applied to the entailment and NER models. Metrics are chosen to be able to be easily computed and will provide the target values for the statistical analysis outlined in Section 3.3, Linear Comparisons. Example metrics include disagreement between models, incorrectly predicted values, or other task specific metrics. Model and Data: The proposed neural pathways method is a post-training analytic approach, and thus it requires the existence of pretrained models, that will be the target of the interpretation process. This stands in contrast to previous co-training approaches, where the mechanism for interpretation is trained simultaneously with the networks that are of interest. Task Knowledge: Our interpretation method is built on the assumption that the researcher has external knowledge of the task that their model is being applied to. This can be as straightforward as simply having a feature engineered baseline, as with our named entity recognition example (Section 4.2). However, it can also be as nuanced as having access to an analysis of the types of required knowledge to accurately predict certain instances in the data, as in our recognizing textual entailment example where we use an alternate validation set for the MultiNLI corpus where subsets have been earmarked as of interest for specific kinds of task and linguistic knowledge (Section 4.1). The external knowledge that is brought to the interpretation process will directly affect what conclusions can be drawn from the neural model as this method does not generate new knowledge, but validates the relevance of external knowledge for explaining network function. If the knowledge brought to the process is only partial, then only partial understanding of network function will be possible. However, as one iterates through the interpretation process, the potential relevance of additional knowledge may emerge, and the process can be repeated with the expanded set. This is an advantage of not requiring the interpretation mechanism to be trained along side the model in question. Extracting Activations: As a preparatory step for the interpretation process, an activation matrix is 5757 constructed where the columns represent individual neurons, the rows represent instances, and the value of each cell is the activation of the associated neuron in the associated instance. Part of this method’s flexibility is that the set of probed neurons can be arbitrarily large or small. This way, the sets can be specified to analyze the pathways within certain subsections of the model or in the model as a whole. This flexibility allows researchers to ignore parts of the model that may already be well explained by other neural interpretation techniques (e.g. low-level feature extraction in convolutional neural networks in image recognition, or attention heatmaps). 3.2 Identifying Pathways Neural pathways are a distinct (though related) phenomenon from interconnectivity of a given network based on individual connection weights. While the weights describe the strength of connectivity between individual pairs of neurons, co-activation is an emergent property that arises through sets of connected neurons, and because of this, pathways can not be constructed through a simple graph partitioning of the network structure based on weights apart from the observation of the network in use. 
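Before pathways can be identified, the activation matrix of Section 3.1 has to be materialized by observing the network in use. A minimal sketch of that step is given below; the helper get_activations is hypothetical and stands in for whatever read-out mechanism the underlying toolkit (e.g., Keras or DyNet) provides for the neurons selected for analysis.

```python
import numpy as np

def build_activation_matrix(model, instances, get_activations):
    """Stack per-instance activation vectors into an (instances x neurons) matrix.

    `get_activations` is a hypothetical helper that runs `model` on one instance
    and returns a 1-D array with the activations of the probed neurons, which may
    come from several different layers of the architecture.
    """
    rows = [np.asarray(get_activations(model, x)).ravel() for x in instances]
    return np.vstack(rows)  # shape: (n_instances, n_probed_neurons)
```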
Dimensionality Reduction: A dimensionality reduction is applied to the activation matrix to get a set of factors that will correspond to our neural pathways. While in principle, any form of dimensionality reduction can be used, Principal Component Analysis (PCA) (Hotelling, 1933) is used in this work for the dimensionality reduction for its simplicity and transparency. Different methods for dimensionality reduction may prove better or worse for interpreting certain models for certain tasks, but the question of which specific dimensionality reduction technique works best is not of interest in this foundational work. Finding Active Pathways: For each data instance in the validation set, the pathways that are activated to produce the model predictions are identified. This is done by constructing an activation matrix, as explained above (Section 3.1), and applying PCA to it in order to define functional groups of neurons based on their coordinated behavior. The factors identified become the neural pathways and the factor loadings (DeCoster, 1998) become a means for understanding the activity of the pathways. These factor loadings are later used along with the weights learned by linear probes to align the extracted pathways with interpretable task information. 3.3 Evaluating Pathway Effects With an approach similar to Radford et al. (2017), where it was found in a specific case that sentimentrelated activations were encoded within single neurons, we abstract the concept of single neuron prediction up a level to examine single pathway prediction. Rather than operating at the level of a single neuron, where neurons typically play a minuscule part in many different functions, we operate at the level of a pathway, where a pathway represents neurons that demonstrate their relatedness through their coordinated behavior. Linear Comparisons: This refers to the correlation between the activities associated with each pathway per instance to the pattern of relevance per instance of each metric of interest (e.g. each piece of linguistic or task knowledge). This yields a set of correlation coefficients which represent the importance of each PCA dimension (pathway) for explaining the use of each of the metrics of interest by the learned network. 3.4 Associating Task Knowledge with Pathways Neural pathways are a way to abstract the problem of interpreting single neurons in a neural model to interpreting the functional groups of neurons. In isolation, the pathways are not meaningful, though grounded to task-related information via linear probes and rank correlation, the learned representations within the neural model can be evaluated. Linear Probes: Like Conneau et al. (2018), a series of logistic regression models are trained to map a neural representation to a given linguistic phenomenon, though all of the neurons from parts of the network that are to be analyzed are included whether or not they come from the same layer. Logistic regression probes were used as opposed to the MLP probes in Conneau et al. (2018) to avoid the problem of attempting to interpret a model with another model that is comparably difficult to interpret. Additionally, concepts beyond surface features may also be used as the targets for the probes. This is demonstrated in Section 4.1, where we explore the types of knowledge required to solve a task rather than the surface features of the input. 
From each of the linear models, we store the weight 5758 vector, which represents the importance of each neuron for predicting the types of task-specific phenomena learned by the linear model and the performance of the linear model which indicates the degree to which that information is embedded in the neural model. Rank Correlation: Using both the factor loadings of the neurons from Section 3.2 and the weights from the linear probes discussed above, we can connect the pathways to known task information. Intuitively, if a neural pathway was approximating a function similar to one of the phenomena examined by the linear probes, then the loadings of each neuron in the pathway would be similar in relative shape to the weights of the relevant linear probe. That is, if the pathway and the probe are viewing the same phenomenon, the neurons with stronger weights in the probe should have higher loadings in the pathway and vice versa. To measure the relatedness of each pathway’s loadings to each linear model’s weights, we use Spearman’s rank correlation coefficient (ρ) (Spearman, 1904), which assesses the monotonicity of two data sets giving a numerical comparison of the relative shapes of the weights and loadings. 3.5 Interpretation The above methods provide the foundation for a quantitatively backed interpretation of a neural model. With this foundation, inferences can be made about the model with a statistical indicator of the confidence or utility of the pathways. Function Inference: From pathways that have high rank correlation with the linear probes, it can be inferred that the model contains a set of neurons in those pathways that perform the tasks provided to the probe. It is also known what metrics of interest that pathway has influence over from the linear comparisons. It is then possible to extrapolate whether the model has learned to use the knowledge examined by the probes in such a way that it can influence those metrics. This directly provides an insight into what knowledge the model has learned and in what cases it has learned to apply it. Confidence: The confidence of the claim that the model has learned such information can be assessed by using the rank correlation coefficient and the performance metrics of the linear probe and the linear comparisons. The rank correlation coefficient measures how well the knowledge stored within the network aligns with the function that the pathway a1 a2 am ....... b1 b2 bn ....... F(a) F(b) G(a, ) β G(b, ) α ( a, ), b, )) H ∑G( β ∑G( α yˆ a b Figure 2: Decomposable Attention Model. Dotted arrows indicate networks with shared weights. is performing. The linear probe and linear comparison performance are likewise related to how likely the information is stored within the pathway and how influential that pathway is on the metric respectively. 4 Experiments To evaluate our interpretation technique on real world data, we applied our method on four trained models over two tasks: recognizing textual entailment using the Multi-genre Natural Language Inference corpus (Williams et al., 2018) and named entity recognition using the CoNLL 2003 data (Sang and Meulder, 2003) for English NER. The analysis was implemented using Scikit-Learn (Pedregosa et al., 2011) and SciPy (Jones et al., 2001–) and unless otherwise noted used default hyperparameters. 
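A compact sketch of this pipeline, written with the Scikit-Learn and SciPy components mentioned above, is shown below. The function names, the use of Pearson correlation for the linear comparisons, and the probe settings are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def identify_pathways(activations, n_components=0.75):
    """Factor the (instances x neurons) activation matrix into pathways.

    A float `n_components` (e.g. 0.75) lets Scikit-Learn keep enough components
    to explain that share of the variance, mirroring the ~75% target of Section 5.1.
    """
    pca = PCA(n_components=n_components).fit(activations)
    pathway_activities = pca.transform(activations)  # per-instance activity of each pathway
    pathway_loadings = pca.components_               # (n_pathways x n_neurons) factor loadings
    return pathway_activities, pathway_loadings

def linear_comparisons(pathway_activities, metric_values):
    """Correlate each pathway's per-instance activity with a metric of interest
    (e.g. a 0/1 vector marking disagreement between two models)."""
    return [pearsonr(pathway_activities[:, k], metric_values)
            for k in range(pathway_activities.shape[1])]

def linear_probe(activations, feature_labels):
    """Train a logistic-regression probe for one piece of task knowledge
    (binary presence/absence labels assumed) and return its weight vector
    together with its F1 score on the probing task."""
    probe = LogisticRegression(max_iter=1000).fit(activations, feature_labels)
    f1 = f1_score(feature_labels, probe.predict(activations))
    return probe.coef_.ravel(), f1

def rank_correlation(pathway_loadings, probe_weights):
    """Spearman's rho between each pathway's neuron loadings and the probe weights."""
    return [spearmanr(pathway_loadings[k], probe_weights)
            for k in range(pathway_loadings.shape[0])]
```

Pathways whose loadings correlate strongly with a probe's weights, and whose activities correlate with a metric of interest, are the ones carrying the corresponding piece of task knowledge.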
4.1 Recognizing Textual Entailment Recognizing textual entailment is a task comprised of deciding whether the concepts presented in one text can be determined to be true given some context or premise in a different text (Dagan and Glickman, 2004). The Multi-genre Natural Language Inference (MultiNLI) corpus (Williams et al., 2018) follows this definition and contains annotated pairs of sentences which are labeled as entailment if the hypothesis sentence is definitely true given the premise sentence, contradiction if the hypothesis is definitely false given the premise, and neutral if the hypothesis could be true, but is not guaranteed to be given the premise. 5759 Models and Data: We implemented two neural models for this task: a bidirectional version of the simple LSTM classifier from Bowman et al. (2015) and the decomposable attention model (DAM) (Figure 2) from Parikh et al. (2016). We use Keras (Chollet et al., 2015) with the TensorFlow (Abadi et al., 2015) backend for our implementations of both of the entailment models. Metrics of Interest: For purposes of this work, the metric of interest used is simply the class value for each data instance. For this task, the activations in the representations for each text segment learned by the model just prior to the classification step are used in the analysis. Task Knowledge: Our external knowledge for this task comes from a stress test dataset developed for models trained on the MultiNLI corpus (Naik et al., 2018). There are nine categories and subcategories, each of which contains data instances that require a specific type or reasoning to correctly identify the entailment relationship. We combine all of the data instances in the stress test and tag each with the category or subcategory it belongs to. The entailment models’ representations are analyzed in terms of the type of reasoning they can perform. While we acknowledge that recent work by Liu et al. (2019) has found limitations in this dataset with respect to the reasoning that is required for the models to achieve, we use it as a foundation for interpretation that can be expanded as new resources become available. 4.2 Named Entity Recognition Given an input sequence, the NER task involves predicting a tag for each token in the sequence that denotes whether the token is an entity or not, as well as what type of entity it is. An example of such a tag might be PER for a “person” entity or ORG for an “organization” entity. Models and Data: We implemented two neural models for our experiments: the first (Figure 3) is a well performing neural model that uses a CNN over characters, word embeddings, a Bidirectional LSTM, and a CRF layer for decoding (Ma and Hovy, 2016). Our second model has the same architecture as above only with a BiLSTM over the characters instead of a CNN. The neurons chosen for analysis were the resulting activations for each character encoding sub-network, the word embeddings, and the resulting activations from the sentence level BiLSTM. Implementations of each of Figure 3: End-to-end model architecture for neural SOTA described in Ma and Hovy (2016). The character representation is computed by a CNN over the characters of the word. This is concatenated with the word embedding (initialized with GloVe) and fed into a BiLSTM. A CRF layer does a sequential decoding to predict the NER tags using the BiLSTM hidden layer vector. the NER models was done using DyNet (Neubig et al., 2017). We used the CoNLL 2003 dataset (Sang and Meulder, 2003) for training. 
For the analysis we sampled the data to get a dataset with a balanced number of classes. The sampling procedure is inexpensive and can be repeated to maintain statistical power. Metrics of Interest: The differences in predictions for the task are used as the metric of interest. This is a binary value for each data instance where it is 1 if the two models did not produce the same response and 0 otherwise (correct or not). Neurons from across layers were used for the NER task analysis. Task Knowledge: For our external knowledge, we use a set of features inspired by Tkachenko and Simanovsky (2012) who describe a comprehensive set of traditionally used and linguistically informed features for the NER task. These can be sorted into three categories: ‘Local Knowledge Features’ that refer to the features that can be extracted from a particular word; ‘External Knowledge Features’ are those that use external information such as part5760 Task Model Dev F1 ENTAILMENT BILSTM ENCODER 57.4 DECOMPOSABLE ATTENTION 72.8 NER BILSTM-BILSTM-CRF 83.7 CNN-BILSTM-CRF 94.4 Table 1: F1 score for each model on the development set for the entailment task and the NER task. of-speech tags (extracted using nltk1); and Other which includes miscellaneous features like Endof-Sentence markers, hyphenated words, among others. 5 Results Table 1 shows the F1 score on the validation set for the models on both tasks. These models were not tuned to obtain the highest performance possible as they are simply the subject of the interpretation techniques, but their relative performance on the tasks provides some context for further analysis. 5.1 Identifying Pathways For our analysis, we selected the number of pathways for each model so that they explain ≈75% of the total variance in the model. This number was chosen arbitrarily as a balance between the total variance explained by the dimensionality reduction and the quantity of pathways required. Further experimentation may reveal an optimal balance. For the entailment models, the total variance explained for the decomposable attention model was 76.9% over 15 pathways and for the BiLSTM encoder model variance explained was 76.5% over 175 pathways. This result clearly shows that the representation learned by the decomposable attention model has significantly more internal coherence as compared to the BiLSTM encoder. For the NER models, 74.5% of the variance was explained for the CNN-BiLSTM-CRF with 40 pathways and 75.1% of the variance was explained by 35 pathways in the BiLSTM-BiLSTM-CRF. This shows a that both models have similar amounts of observable structure within them. 5.2 Evaluating Pathway Effects Entailment: From the linear comparisons for the decomposable attention model, three pathways had a correlation coefficient greater than 0.25 (p < 0.001). However, in the LSTM model, there were 1http://www.nltk.org/api/nltk.tag.html Instance Type DAM BiLSTM Difference ANTONYM 0.93 0.38 0.55 LENGTH.DIFFERENCE 0.98 0.98 0.00 NEGATION 1.00 0.93 0.07 NUMERIC 0.99 0.96 0.03 WORD.OVERLAP 1.00 0.94 0.06 CONTENT.WORD.SWAP 0.69 0.47 0.22 FUNCTION.WORD.SWAP 0.56 0.47 0.09 KEYBOARD.SWAP 0.59 0.50 0.09 SPELLING.SWAP 0.62 0.59 0.03 Feature CNN BiLSTM Difference WORD.CONTAINSCAPITAL 0.98 0.98 0.01 WORD.HYPEN 0.80 0.83 -0.03 WORD.ISDIGIT 1.00 0.99 0.01 WORD.ISTITLE 1.00 1.00 0.00 WORD.UPPER 0.92 0.93 -0.01 WORD.LOWER 0.73 0.71 0.01 WORD.POSTAG-( 0.94 0.95 -0.00 WORD.POSTAG-) 0.58 0.38 0.20 WORD.POSTAG-, 1.00 1.00 0.00 WORD.POSTAG-. 
0.59 0.59 -0.00 WORD.POSTAG-IN 1.00 1.00 0.00 WORD.POSTAG-JJR 1.00 1.00 0.00 WORD.POSTAG-JJS 0.55 0.66 -0.11 WORD.POSTAG-MD 0.90 0.98 -0.08 WORD.POSTAG-NN 0.95 0.95 -0.00 WORD.POSTAG-NNP 0.95 0.95 -0.00 WORD.POSTAG-NNPS 0.11 0.21 -0.10 WORD.POSTAG-NNS 0.24 0.41 -0.17 WORD.POSTAG-PRP 0.44 0.62 -0.18 WORD.POSTAG-VB 0.17 0.21 -0.04 WORD.POSTAG-VBD 0.99 0.98 0.01 WORD.POSTAG-VBG 0.13 0.19 -0.06 WORD.POSTAG-VBN 0.98 0.98 -0.00 WORD.POSTAG-VBP 0.64 0.59 0.05 WORD.POSTAG-VBZ 0.56 0.64 -0.08 Table 2: Linear probe F1 score for the presence of provided external task knowledge given the neural activations and the difference between the two models. Top: entailment stress test data instance categories. Bottom: NER surface features. All performance metrics have p < 0.05. 14 pathways that correlated with the model prediction, but none of them individually had a correlation coefficient greater than 0.2 (p < 0.05). Higher coefficient indicate the pathways that have stronger effect on the model prediction. It also indicates that individual pathways in the decomposable attention model are more informative for understanding why the model makes certain predictions than the LSTM model. NER: Similarly, for the NER task, the differences in predictions for the CNN based character encoder model and the BiLSTM based character encoder via the linear comparisons, were explained by several pathways. For the CNN-BiLSTM-CRF, the top 5 predictive pathways for the differences be5761 tween the two models’ predictions have an average of 0.025 higher correlation coefficient (p < 0.001) than the BiLSTM-BiLSTM-CRF. 5.3 Associating Pathways With Task Knowledge Linear Probes: The results from the linear probes are presented in Table 2 with the F1 score of each probe on the given piece of external task information. For the entailment task, 55% of the instance types can be predicted with high precision and recall for the decomposable attention model, though only 44% with the BiLSTM encoder. There are two stand-out instance types that have major differences between models: Antonyms and Swapped Content Words. Both of these are related to word meanings indicating that the decomposable attention model may be storing more information about meaning than the BiLSTM encoder. For the NER task, 13 out of 50 features are almost perfectly predicted by the activation probes (i.e. greater than 0.90 F1) and there are no significant differences between higher performing probes for the BiLSTM-CRF with the CNN character encoder versus the BiLSTM character encoder. The main difference seen in the results is that the CNN trades off storing information about plural nouns and adjectives for storing clearer representations for parentheses and digits. Rank Correlation: Presented in Table 3 are the results for correlating the neural pathways with the information extracted via the linear probes. The pathway numbers are ordered by variance explained, with lower pathway indexes indicating that the pathway explains more variance in the activations. For the entailment task, the largest difference between the models is that the decomposable attention model has pathways which are correlated well with antonyms and numeric types of data instances even where the antonym pathway represents a relatively small amount of the model variance. Contrasted to this, the BiLSTM encoder model has the best correlations with data instances that display large length differences between the hypothesis and premise sentences. 
Despite having well over 100 different pathways to explain the variance in the model, the pathways that correlate well with high level instance types also explain more variance on average. For the NER analysis, the pathways that correspond with the surface features represent a very Instance Type DAM BiLSTM Pathway ρ Pathway ρ ANTONYM 12 0.19 16 0.10 LENGTH.DIFFERENCE 0 0.10 17 0.23 NEGATION 1 0.08 1 0.18 NUMERIC 2 0.29 4 0.13 WORD.OVERLAP 3 0.15 10 0.16 CONTENT.WORD.SWAP 8 0.08 32 0.11 FUNCTION.WORD.SWAP 8 0.11 31 0.11 KEYBOARD.SWAP 4 0.09 31 0.13 SPELLING.SWAP 8 0.10 12 0.09 Feature CNN BiLSTM Pathway ρ Pathway ρ WORD.CONTAINSCAPITAL 35 0.11 30 0.11 WORD.HYPEN 38 0.09 26 0.07 WORD.ISDIGIT 18 0.11 6 0.16 WORD.ISTITLE 30 0.14 28 0.23 WORD.UPPER 38 0.12 0 0.14 WORD.LOWER 15 0.05 28 0.05 WORD.POSTAG-( 4 0.12 10 0.07 WORD.POSTAG-) 27 0.09 0 0.08 WORD.POSTAG-, 31 0.15 32 0.18 WORD.POSTAG-. 28 0.09 23 0.06 WORD.POSTAG-IN 27 0.13 22 0.15 WORD.POSTAG-JJR 13 0.11 34 0.18 WORD.POSTAG-JJS 0 0.11 8 0.07 WORD.POSTAG-MD 37 0.11 16 0.08 WORD.POSTAG-NN 0 0.07 22 0.06 WORD.POSTAG-NNP 35 0.10 3 0.09 WORD.POSTAG-NNPS 39 0.13 33 0.08 WORD.POSTAG-NNS 26 0.04 8 0.07 WORD.POSTAG-PRP 18 0.06 8 0.14 WORD.POSTAG-VB 0 0.10 25 0.07 WORD.POSTAG-VBD 25 0.08 34 0.13 WORD.POSTAG-VBG 39 0.06 14 0.04 WORD.POSTAG-VBN 38 0.07 17 0.12 WORD.POSTAG-VBP 17 0.05 24 0.10 Table 3: Most correlated neural pathway along with the rank correlation coefficient for each model for each task studied. Top: entailment stress test data instance categories. Bottom: NER surface features. All rank correlations have p < 0.001. small amount of the variance within the model (with few exceptions). A notable difference between the two models is that the BiLSTM character encoder seems to have a considerably more organized pathway corresponding to title case than the CNN based character encoder. 5.4 Interpretation For the entailment models, the experiment was designed to explore the predictive behavior of each model for the task. The linear probes indicate that the information about what type of reasoning is required for a task, which is hypothesized to be encoded in the models, was distinctly encoded in each model, but to a greater extent in the decomposable 5762 attention model. The connection between the pathways and the linear probes was less strong, however. This indicates that despite the models having an encoding of the knowledge observed by the probe, it is likely a byproduct of a different function that is being approximated by the neural network. The pathways were created by analyzing which neurons behave cohesively, indicating a subprocess within the network. However, these subprocesses do not correspond strongly to any of the tested features. Consequences of this finding could be an indication that the model is ‘cheating’ on the task and has some inductive bias that is beneficial to the task independent from the task as envisioned by the creators. Otherwise, if many models demonstrate this behavior, the task or dataset may be insufficient to induce the desired learning behavior in neural models. This is consistent with recent highly domain specific analyses of this task (Gururangan et al., 2018; Glockner et al., 2018; Poliak et al., 2018). The NER model analysis was set up to understand the factors contributing to the differences between the two models rather than the factors influencing the prediction accuracy. 
Many of the surface features that were tested were present in the models, although there were not significant differences as to which of these features were encoded in one model or the other. Examination of the correlation of each pathway to the prediction differences between the models indicate that the differences were primarily explained by pathways that had high amounts of explained variance. Strong linear probe results, in conjunction with a mismatch between which pathways correlated to the metric of interest and which pathways correlated well to each surface feature that was probed, indicate that each of the models learned the surface features from the data and that other functions are responsible for differences. This can guide future examination of these models to pinpoint exactly what knowledge the model is using for the task. For example, a high variance pathway for the CNN-BiLSTM-CRF included some neurons from the CNN and some from the LSTMs and was typically activated by words with capital letters. However, it also activated on notable exceptions such as “van” and “de” that serve as a lowercase part of some names indicated that it had memorized those exceptions to the broader heuristic. No such pathway was identified in the BiLSTM-BiLSTM-CRF model. 6 Conclusions In this paper, we have demonstrated an approach for neural interpretation using neural pathways on recognizing textual entailment and named entity recognition. By abstracting away from individual neurons and combining linear probes, task knowledge, and correlation techniques, insight into the knowledge learned by the neural models have been made more transparent. This general interpretation method draws similar conclusions to highly domain-specific analyses, and while it will not replace the need for deep analysis, it provides a much simpler starting point for a broad class of models. Future work can improve this method further by examining the effects of different dimensionality reduction methods with varying properties on extracting the most informative pathways from the activations. Acknowledgements This work was funded in part by NSF grant IIS 1822831. We would also like to thank Shruti Rijhwani for her help with the implementations for the NER models. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. arXiv preprint arXiv:1608.04207. Guillaume Alain and Yoshua Bengio. 2016. Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644. Leila Arras, Franziska Horn, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2017. ” what is relevant in a text document?”: An interpretable machine learning approach. PloS one, 12(8):e0181142. 
5763 Sebastian Bach, Alexander Binder, Gr´egoire Montavon, Frederick Klauschen, Klaus-Robert M¨uller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140. David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and KlausRobert M ˜Aˇzller. 2010. How to explain individual classification decisions. Journal of Machine Learning Research, 11(Jun):1803–1831. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Franc¸ois Chollet et al. 2015. Keras. https:// keras.io. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $ &!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136. Ido Dagan and Oren Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of language variability. Learning Methods for Text Understanding and Mining, 2004:26–29. Jamie DeCoster. 1998. Overview of factor analysis. Babak Esmaeili, Hao Wu, Sarthak Jain, Alican Bozkurt, N Siddharth, Brooks Paige, Dana H Brooks, Jennifer Dy, and Jan-Willem van de Meent. 2018. Structured disentangled representations. stat, 1050:12. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning (adaptive computation and machine learning series). Adaptive Computation and Machine Learning series, page 800. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112. Harold Hotelling. 1933. Analysis of a complex of statistical variables into principal components. Journal of educational psychology, 24(6):417. Eric Jones, Travis Oliphant, Pearu Peterson, et al. 2001–. SciPy: Open source scientific tools for Python. [Online; accessed ¡today¿]. Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078. C Kennedy, MH Des Rosiers, JW Jehle, M Reivich, F Sharpe, and L Sokoloff. 1975. Mapping of functional neural pathways by autoradiographic survey of local metabolic rate with (14c) deoxyglucose. Science, 187(4179):850–853. Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. arXiv preprint arXiv:1805.04623. Sanjay Krishnan and Eugene Wu. 2017. Palm: Machine learning explanations for iterative debugging. In Proceedings of the 2nd Workshop on Human-Inthe-Loop Data Analytics, page 4. ACM. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. Lstms can learn syntax-sensitive dependencies well, but modeling structure makes them better. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1426–1436. Will Landecker, Michael D Thomure, Lu´ıs MA Bettencourt, Melanie Mitchell, Garrett T Kenyon, and Steven P Brumby. 2013. Interpreting individual classifications of hierarchical networks. In Computational Intelligence and Data Mining (CIDM), 2013 IEEE Symposium on, pages 32–38. IEEE. Nelson F Liu, Roy Schwartz, and Noah A Smith. 2019. Inoculation by fine-tuning: A method for analyzing challenge datasets. arXiv preprint arXiv:1904.02668. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1064–1074. Gr´egoire Montavon, Wojciech Samek, and KlausRobert M¨uller. 2017. Methods for interpreting and understanding deep neural networks. Digital Signal Processing. Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the model understand the question? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1896–1906. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353. 5764 Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191. Anna Potapenko, Artem Popov, and Konstantin Vorontsov. 2017. Interpretable probabilistic embeddings: bridging the gap between topic models and neural networks. In Conference on Artificial Intelligence and Natural Language, pages 167–180. Springer. Brett Poulin, Roman Eisner, Duane Szafron, Paul Lu, Russell Greiner, David S Wishart, Alona Fyshe, Brandon Pearcy, Cam MacDonell, and John Anvik. 2006. Visual explanation of evidence with additive classifiers. In Proceedings Of The National Conference On Artificial Intelligence, volume 21, page 1822. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999. Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444. 
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM. Wojciech Samek, Alexander Binder, Gr´egoire Montavon, Sebastian Lapuschkin, and Klaus-Robert M¨uller. 2017a. Evaluating the visualization of what a deep neural network has learned. IEEE transactions on neural networks and learning systems, 28(11):2660–2673. Wojciech Samek, Thomas Wiegand, and Klaus-Robert M¨uller. 2017b. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142–147. Association for Computational Linguistics. Xing Shi, Kevin Knight, and Deniz Yuret. 2016a. Why neural translations are the right length. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2278–2282. Xing Shi, Inkit Padhi, and Kevin Knight. 2016b. Does string-based neural mt learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526– 1534. Narayanaswamy Siddharth, T Brooks Paige, JanWillem Van de Meent, Alban Desmaison, Noah Goodman, Pushmeet Kohli, Frank Wood, and Philip Torr. 2017. Learning disentangled representations with semi-supervised deep generative models. In Advances in Neural Information Processing Systems, pages 5925–5935. Charles Spearman. 1904. The proof and measurement of association between two things. The American journal of psychology, 15(1):72–101. Hendrik Strobelt, Sebastian Gehrmann, Bernd Huber, Hanspeter Pfister, and Alexander M Rush. 2016. Visual analysis of hidden state dynamics in recurrent neural networks. Technical report, Harvard University OpenScholar. Maksim Tkachenko and Andrey Simanovsky. 2012. Named entity recognition: Exploring features. In KONVENS, pages 118–127. William Whitney. 2016. Disentangled representations in neural models. Ph.D. thesis, Massachusetts Institute of Technology. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1112–1122. Xunjie Zhu, Tingfeng Li, and Gerard Melo. 2018. Exploring semantic properties of sentence embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 632–637.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5765–5772 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5765 Collocation Classification with Unsupervised Relation Vectors Luis Espinosa-Anke1, Leo Wanner2,3, and Steven Schockaert1 1School of Computer Science, CardiffUniversity, United Kingdom 2ICREA and 3NLP Group, Universitat Pompeu Fabra, Barcelona, Spain {espinosa-ankel,schockaerts1}@cardiff.ac.uk [email protected] Abstract Lexical relation classification is the task of predicting whether a certain relation holds between a given pair of words. In this paper, we explore to which extent the current distributional landscape based on word embeddings provides a suitable basis for classification of collocations, i.e., pairs of words between which idiosyncratic lexical relations hold. First, we introduce a novel dataset with collocations categorized according to lexical functions. Second, we conduct experiments on a subset of this benchmark, comparing it in particular to the well known DiffVec dataset. In these experiments, in addition to simple word vector arithmetic operations, we also investigate the role of unsupervised relation vectors as a complementary input. While these relation vectors indeed help, we also show that lexical function classification poses a greater challenge than the syntactic and semantic relations that are typically used for benchmarks in the literature. 1 Introduction Relation classification is the task of predicting whether between a given pair of words or phrases, a certain lexical, semantic or morphosyntactic relation holds. This task has direct impact in downstream NLP tasks such as machine translation, paraphrase identification (Etzioni et al., 2005), named entity recognition (Socher et al., 2012), or knowledge base completion (Socher et al., 2013). The currently standard approach to relation classification is to combine the embeddings corresponding to the arguments of a given relation into a meaningful representation, which is then passed to a classifier. As for which relations have been targeted so far, the landscape is considerably more varied, although we may safely group them into morphosyntactic and semantic relations. Morphosyntactic relations have been the focus of work on unsupervised relational similarity, as it has been shown that verb conjugation or nominalization patterns are relatively well preserved in vector spaces (Mikolov et al., 2013; Pennington et al., 2014a). Semantic relations pose a greater challenge (Vylomova et al., 2016), however. In fact, as of today, it is unclear which operation performs best (and why) for the recognition of individual lexico-semantic relations (e.g., hyperonymy or meronymy, as opposed to cause, location or action). Still, a number of works address this challenge. For instance, hypernymy has been modeled using vector concatenation (Baroni et al., 2012), vector difference and component-wise squared difference (Roller et al., 2014) as input to linear regression models (Fu et al., 2014; Espinosa-Anke et al., 2016); cf. also a sizable number of neural approaches (Shwartz et al., 2016; Anh et al., 2016). Furthermore, several high quality semantic relation datasets are available, ranging from wellknown resources such as WordNet (Miller, 1995), Yago (Suchanek et al., 2007), BLESS (Baroni and Lenci, 2011), several SemEval datasets (Jurgens et al., 2012; Camacho-Collados et al., 2018) or DiffVec (Vylomova et al., 2016). 
But there is a surprising gap regarding collocation modeling. Collocations, which are semi-compositional in their nature in that they are situated between fixed multiword expressions (MWEs) and free (semantic) word combinations, are of relevance to second language (henceforth, L2) learners and NLP applications alike. In what follows, we investigate whether collocations can be modeled along the same lines as semantic relations between pairs of words. For this purpose, we introduce LexFunC, a newly created dataset, in which collocations are annotated with respect to the semantic typology of lexical functions (LFs) (Mel’ˇcuk, 1996). We use LexFunC to train linear SVMs on top of 5766 different word and relation embedding composition. We show that the recognition of the semantics of a collocation, i.e., its classification with respect to the LF-typology, is a more challenging problem than the recognition of standard lexicosemantic relations, although incorporating distributional relational information brings a significant increase in performance. 2 Collocations and LexFunC We first introduce the notion of collocation and LF and then present the LexFunC dataset.1 2.1 The phenomenon of collocation Collocations such as make [a] suggestion, attend [a] lecture, heavy rain, deep thought or strong tea, to name a few, are described by Kilgarriff(2006) as restricted lexical co-occurrences of two syntactically bound lexical items. Due to their idiosyncrasy, collocations tend to be language-specific. For instance, in English or Norwegian we take [a] nap, whereas in Spanish we throw it, and in French, Catalan, German and Italian we make it. However, they are compositionally less rigid than some other types of multiword expressions such as, e.g., idioms (as, e.g., [to] kick the bucket) or multiword lexical units (as, e.g., President of the United States or chief inspector). Specifically, they are formed by a freely chosen word (the base), which restricts the selection of its collocate (e.g., rain restricts us to use heavy in English to express intensity).2 Recovery of collocations from corpora plays a major role in improving L2 resources, in addition to obvious advantages in NLP applications such as natural language analysis and generation, text paraphrasing / simplification, or machine translation (Hausmann, 1984; Bahns and Eldaw, 1993; Granger, 1998; Lewis and Conzett, 2000; Nesselhauf, 2005; Alonso Ramos et al., 2010). Starting with the seminal work by Church and Hanks (1989), an extensive body of work has been produced on the detection of collocations in 1Data and code are available at bitbucket.org/ luisespinosa/lexfunc. LexFunC is a continuously growing project. At the time of publication, the full set (available at https://www.upf.edu/web/taln/resources) contains around 10,000 collocations collected and manually categorized in terms of lexical functions by I. Mel’ˇcuk. 2In our interpretation of the notion of collocation, we thus follow the lexicographic tradition Benson (1989); Cowie (1994); Mel’ˇcuk (1995); Binon and Verlinde (2013), which differs from a purely statistical interpretation based exclusively on relative co-occurrence frequency measures. text corpora; cf., e.g., (Evert and Kermes, 2013; Evert, 2007; Pecina, 2008; Bouma, 2010; Garcia et al., 2017), as well as the Shared Task of the PARSEME European Cost Action on automatic recognition of verbal MWEs.3 However, mere lists of collocations are often insufficient for both L2 acquisition and NLP. 
Thus, a language learner may not know the difference between, e.g., come to fruition and bring to fruition or between have [an] approach and take [an] approach, etc. Semantic labeling is required. The failure to identify the semantics of collocations also led, e.g., in earlier machine translation systems, to the necessity of the definition of collocation-specific crosslanguage transfer rules (Dorr, 1994; Orliac and Dillinger, 2003). The above motivates us to consider in this paper collocations and their classification in terms of LFs (Mel’ˇcuk, 1996), their most fine-grained semantic typology (see Section 2.2). Especially because, so far, this is only discussed in a reduced number of works, and typically on a smaller scale (Wanner et al., 2006; Gelbukh and Kolesnikova., 2012). 2.2 LFs and the LexFunc dataset An LF can be viewed as a function f(·) that associates, with a given base L (which is the argument or keyword of f), a set of (more or less) “synonymous collocates that are selected contingent on L to manifest the meaning corresponding to f” (Mel’ˇcuk, 1996). The name of an LF is a Latin abbreviation of this meaning. For example, Oper for oper¯ari (‘do’, ‘carry out’), Magn for magnus (‘great’, ‘intense’), and so forth. The LexFunc dataset consists of collocations categorized in terms of LFs. Table 1 lists the ten LFs used in this paper, along with a definition, example and frequency. The LFs have been selected so as to cover the most prominent syntactic patterns of collocations (verb+direct object, adjective+noun, and noun+noun). 3 Experimental Setup In our experiments, we want to assess whether different LFs (i.e., semantically different collocational relations) can be captured using standard relation classification models, despite the acknowledged idiosyncratic and language-specific 3https://typo.uni-konstanz.de/parseme/index.php/2general/202-parseme-shared-task-on-automaticidentification-of-verbal-mwes-edition-1-1 5767 LF definition example freq. magn ‘very’, ‘intense’ strong accent 2,491 oper1 ‘do’, ‘carry out’, ‘participate’ engage [in an] activity 1,036 real1 ‘realize’, ‘accomplish’, ‘apply according to purpose’ drop [a] bomb 316 antimagn ‘weak’, ‘little intense’ imperceptible accent 301 antibon ‘negative’, ‘not as expected’ lame attempt 201 causfunc0 ‘cause sth to be materialized / to function’ lay [an] egg 150 bon ‘positive’, ‘as expected’ impressive figure 142 liqufunc0 ‘eliminate’, ‘make sth. not function’ resolve ambiguity 118 sing ‘single item or quantum of a collection or a mass’ clove [of] garlic 72 mult ‘multitude or collection of a given item or quantum’ bunch [of] keys 56 total 4,914 Table 1: Statistics, definitions and examples of the LexFunc dataset. The indices indicate the argument structure of the LF: ‘1‘stands for “first actant is the grammatical subject”; ‘0’ for “the base is the grammatical subject”. (but still semi-compositional) “collocationality” between a collocation’s base and collocate. To this end, we benchmark standard relation classification baselines in the task of LF classification. Furthermore, we also explore an explicit encoding of relational properties by distributional relation vectors (see Section 3.2). Moreover, to contrast the LF categories in our LexFunc dataset with others typically found in the relation classification literature, we use ten categories from DiffVec (Vylomova et al., 2016), a dataset which was particularly designed to explore the role of vector difference in supervised relation classification. 
The rationale for this being that, by subtraction, the features that are common to both words are known to be “cancelled out”. For instance, for madrid −spain, this operation can be expected to capture that the first word is a capital city and the second word is a country, and “remove” the fact that both words are related to Spain (Levy et al., 2014). Both for DiffVec and LexFunc, we run experiments on those categories for which we have at least 99 instances. We cast the relation classification task as a multi-class classification problem and use a stratified 2 3 portion of the data for training and the rest for evaluation. We consider each of the datasets in isolation, as well as a concatenation of both (referred to in Table 2 as DiffVec+LexFunc). The model we use is a Linear SVM,4, trained on a suite of vector composition 4Implemented in scikit-learn (http://scikit-learn. operations (Section 3.1). 3.1 Modeling relations using word vectors Let w1 and w2 be the vector representations of two words w1 and w2. We experiment with the following word-level operations: diff (w2 −w1), concat (w1 ⊕w2), sum (w1 + w2), mult (w1 ◦w2)), and leftw (w1), the latter operation being included to explore the degree to which the data can be lexically memorized (Levy et al., 2015)5. 3.2 Relation vectors Because word embeddings are limited in the amount of relational information they can capture, a number of complementary approaches have emerged which directly learn vectors that capture the relation between concepts, typically using distributional statistics from sentences mentioning both words (Espinosa-Anke and Schockaert, 2018; Washio and Kato, 2018; Joshi et al., 2018; Jameel et al., 2018). Below we explore the potential of such relation vectors for semantic relation classification. Specifically, we trained them for all word pairs from DiffVec and LexFunc using two different variants of the SeVeN model (EspinosaAnke and Schockaert, 2018). The corpus for training these vectors is a Wikipedia dump from Janorg. 5In fact, prototypicality may be a strong indicator for capturing some LFs. Heavy, for instance, may be considered as a prototypical collocate of ‘magn’. However, ‘heavy rain’ is a more restricted English combination than ‘heavy laptop’, for example, but less frozen than ‘heavy artillery’. 
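The composition operations of Section 3.1 and the classifier can be sketched as follows; the estimator settings and the pairs/labels containers are illustrative assumptions rather than the released code.

```python
import numpy as np
from sklearn.svm import LinearSVC

def compose(w1, w2, op, rel_vec=None):
    """Build the feature vector for a word pair (w1, w2) under one of the
    composition operations of Section 3.1; optionally append a relation vector."""
    ops = {
        "diff":   w2 - w1,
        "concat": np.concatenate([w1, w2]),
        "sum":    w1 + w2,
        "mult":   w1 * w2,   # component-wise product
        "leftw":  w1,        # left word only (lexical memorization check)
    }
    feats = ops[op]
    if rel_vec is not None:  # e.g. an rvAvg6 or rvAE relation vector
        feats = np.concatenate([feats, rel_vec])
    return feats

# Hypothetical training loop: `pairs` holds (w1_vec, w2_vec, relation_vec) triples
# and `labels` the lexical-function (or DiffVec) class of each pair.
def train_classifier(pairs, labels, op="concat"):
    X = np.vstack([compose(w1, w2, op, rv) for w1, w2, rv in pairs])
    return LinearSVC().fit(X, labels)
```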
5768 DiffVec DiffVec+LexFunc LexFunc P R F A P R F A P R F A diff 78.94 80.91 78.00 90.00 62.58 62.00 61.84 79.80 54.54 51.44 52.64 73.98 sum 63.34 60.80 61.77 84.74 57.70 55.01 55.64 77.78 56.11 56.92 56.37 75.00 mult 49.56 41.74 44.45 66.99 41.25 61.44 33.55 58.17 39.51 32.39 34.68 59.62 leftw 49.74 50.50 49.49 80.76 38.89 36.14 36.49 67.35 63.07 60.65 61.64 80.30 concat 81.70 84.89 83.11 93.34 71.07 71.10 70.89 87.12 64.68 62.39 63.20 80.54 diff+rvAE 82.58 85.35 83.71 94.78 68.25 67.24 67.18 86.43 52.38 50.85 51.49 74.27 diff+rvAvg6 84.07 85.20 84.42 94.44 67.62 67.64 67.16 85.55 59.52 59.83 59.45 77.92 sum+rvAE 73.14 70.60 71.49 90.25 65.56 60.67 62.82 84.67 61.54 57.31 59.13 78.21 sum+rvAvg6 71.62 69.06 70.15 90.64 67.27 65.02 65.58 85.28 61.63 59.83 60.58 78.71 mult+rvAE 65.27 53.57 57.59 82.95 53.86 44.42 47.15 75.28 44.43 37.22 39.36 68.22 mult+rvAvg6 69.72 57.84 62.18 85.94 55.23 50.02 52.10 79.04 54.21 49.16 51.07 71.94 leftw+rvAE 65.16 61.51 62.64 90.88 50.37 46.35 47.51 80.38 45.65 39.45 41.69 66.96 leftw+rvAvg6 72.30 65.23 67.62 91.34 57.14 54.23 55.21 83.02 62.71 59.42 60.84 79.84 concat+rvAE 86.23 87.78 86.58 95.79 72.12 72.51 72.08 89.33 64.43 60.14 61.84 80.83 concat+rvAvg6 88.09 88.27 88.09 95.89 73.65 74.26 73.73 89.88 67.23 68.30 67.70 81.92 Table 2: Experimental results of several baselines on different multiclass settings for relation classification. uary 2018, with GloVe (Pennington et al., 2014b) 300d pre-trained embeddings. The first variant, referred to as rvAvg6, is based on averaging the words that appear in sentences mentioning the two given target words. Since this approach differentiates between words that appear before the first word, after the second word, or in between the two words, and takes into account the order in which the words appear, it results in relation vectors with a dimensionality which is six times the dimensionality of the considered word vectors. The second variant, referred to as rvAE, starts from the same high-dimensional relation vector, but then uses a conditional autoencoder to obtain a lower-dimensional and potentially higher-quality 300d vector6. 4 Results Table 2 shows the experimental results for DiffVec, LexFunc and both datasets together. The first five rows show the performance of word embedding operations, whereas the configurations for remaining rows also include a relation vector. 4.1 Discussion We highlight two major conclusions. First, despite vector difference and component-wise multiplication being the most popular vector operations for encoding relations between words, also 6We used the code available at bitbucket.com/ luisespinosa/seven for obtaining both representations. in more expensive neural architectures for relation modeling (Washio and Kato, 2018; Joshi et al., 2018), vector concatenation alone proves to be a highly performing baseline. Moreover, the overall best method (concat+rvAvg6), obtains performance gains when compared with the standard diff method ranging from +5.89% in DiffVec to +7.98% in LexFunc and +10.08% in the combined dataset. This suggests that while vector differences may encode relational properties, important information is lost when only this operation is considered. Second, despite being a well-studied topic, recognizing lexical functions emerges as a challenging problem. They seem difficult to classify, not only between themselves, but also when coupled with other lexical semantic relations. 
This may be due to the fact that collocations are idiosyncratic lexical co-occurrences which are syntactically bound. The base and collocate embeddings should account for these properties, rather than over-relying on the contexts in which they appear. In the following section we present an analysis on the main sources of confusion in the LexFunc and DiffVec+LexFunc settings. 4.2 Problematic LFs We aim to gain an understanding of recurrent errors made both by the best performing model (concat+rvAvg6) and diff. Figure 1 shows confusion matrices for the two datasets involving LFs, 5769 Figure 1: Confusion matrices on the best performing model (concat+rvAvg6): (1a) and (1b) show performance on the LexFunc experiment, and (1c) and (1d) on DiffVec+LexFunc. namely LexFunc and DiffVec+LexFunc. We are particularly interested in pinpointing which LFs are most difficult to classify, and whether there is any particular label that agglutinates most predictions. For example, in (Fig. 1a) we see a strong source of confusion in the diff model between the ‘bon’ and ‘magn’ labels. Both are noun-adjective combinations and both are used as intensifiers, but they subtly differ in that only one enforces a perceived degree of positiveness (e.g., resounding vs. crushing victory). Thus, combining their vectors produces clearly similar representations that confuse the classifier, a scenario which is only partly alleviated by the use of relation vectors (Fig. 1b). The case of ‘oper1’ (perform) and ‘real1’ (accomplish) also proves problematic. The number of light verbs as collocates of these LFs is notably high in the former, amounting to 48%; ‘real1’ is more semantic, with almost 11% light verbs. Interestingly, however, these labels are almost never confused with the ‘event’ label from DiffVec (Figs. 1c and 1d), even if it also contains relations with light verbs such as break or pay. Finally, one last source of confusion that warrants discussion involves ‘magn’ and ‘antimagn’, two noun-advective collocations which are different in that the former conveys a notion of intensity, whereas the latter is about weakness (e.g., ‘faint admiration’ or ‘slight advantage’). These two LFs typically include antonymic collocates (e.g., ‘weak’ and ‘strong’ as collocates for the base ‘argument’), and these are known to have similar distributional vectors (Mrkˇsi´c et al., 2016; Nguyen et al., 2016), which in high likelihood constitutes a source of confusion. 5 Conclusions and Future Work In this paper, we have discussed the task of distributional collocation classification. We have used a set of collocations categorized by lexical functions, as introduced in the Meaning Text Theory (Mel’ˇcuk, 1996), and evaluated a wide range of vector representations of relations. In addition, we have used the DiffVec (Vylomova et al., 2016) dataset to provide a frame of reference, as this dataset has been extensively studied in the distributional semantics literature, mostly for evaluating the role of vector difference. We found that, despite this operation being the go-to representation for lexical relation modeling, concatenation works as well or better, and clear improvements can be obtained by incorporating explicitly learned relation vectors. However, even with these improvements, categorizing LFs proves to be a difficult task. In the future, we would like to experiment with more data, so that enough training data can be obtained for less frequent LFs. 
To this end, we could benefit from the supervised approach proposed in (Rodr´ıguez-Fern´andez et al., 2016), and then filter by pairwise correlation strength metrics such as PMI. Another exciting avenue would involve exploring cross-lingual transfer of LFs, taking advantage of recent development in unsupervised cross-lingual embedding learning (Artetxe et al., 2017; Conneau et al., 2017). Acknowledgements We would like to thank the reviewers for their helpful comments. We also owe a special thanks to Igor Mel’ˇcuk for providing his collection of LF samples to us. The second author was supported by the European Commission under the contract numbers H2020-825079-STARTS, H2020786731-RIA, H2020-779962-RIA, and H20207000024-RIA. The last author was supported by ERC Starting Grant 637277. 5770 References M. Alonso Ramos, L. Wanner, O. Vincze, G. Casamayor, N. V´azquez, E. Mosqueira, and S. Prieto. 2010. Towards a Motivated Annotation Schema of Collocation Errors in Learner Corpora. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC), pages 3209–3214, La Valetta, Malta. Tuan Luu Anh, Yi Tay, Siu Cheung Hui, and See Kiong Ng. 2016. Learning term embeddings for taxonomic relation identification using dynamic weighting neural network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 403–413. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041. J. Bahns and M. Eldaw. 1993. Should we teach EFL students collocations? System, 21(1):101–114. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of EACL, pages 23–32. Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 1–10. Association for Computational Linguistics. M. Benson. 1989. The structure of the collocational dictionary. International Journal of Lexicography, 2(1):1–13. J. Binon and S. Verlinde. 2013. Electronic pedagogical dictionaries. In R. Gouws, U. Heid, W. Schweickard, and H.E. Wiegand, editors, Dictionaries. An International Encyclopedia of Lexicography, pages 1035–1046. De Gruyter Mouton, Berlin. G. Bouma. 2010. Collocation extraction beyond the independence assumption. In Proceedings of the ACL 2010, Short paper track, Uppsala. Jose Camacho-Collados, Claudio Delli Bovi, Luis Espinosa Anke, Sergio Oramas, Tommaso Pasini, Enrico Santus, Vered Shwartz, Roberto Navigli, and Horacio Saggion. 2018. Semeval-2018 task 9: hypernym discovery. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 712–724. K. Church and P. Hanks. 1989. Word Association Norms, Mutual Information, and Lexicography. In Proceedings of the 27th Annual Meeting of the ACL, pages 76–83. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. A. Cowie. 1994. Phraseology. In R.E. Asher and J.M.Y. Simpson, editors, The Encyclopedia of Language and Linguistics, Vol. 6, pages 3168–3171. Pergamon, Oxford. B.J. Dorr. 1994. Machine translation divergences: A formal description and proposed solution. Computational linguistics, 20:579–634. Luis Espinosa-Anke, Jose Camacho-Collados, Claudio Delli Bovi, and Horacio Saggion. 2016. 
Supervised distributional hypernym discovery via domain adaptation. In Conference on Empirical Methods in Natural Language Processing; 2016 Nov 1-5; Austin, TX. Red Hook (NY): ACL; 2016. p. 424-35. ACL (Association for Computational Linguistics). Luis Espinosa-Anke and Steven Schockaert. 2018. Seven: Augmenting word embeddings with unsupervised relation vectors. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2653–2665. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial intelligence, 165(1):91–134. S. Evert. 2007. Corpora and collocations. In A. L¨udeling and M. Kyt¨o, editors, Corpus Linguistics. An International Handbook. Mouton de Gruyter, Berlin. S. Evert and H. Kermes. 2013. Experiments on candidate data for collo- cation extraction. In Companion Volume to the Proceedings of the 10th Conference of the EACL, pages 472–487. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In Proceedings of ACL, volume 1. Marcos Garcia, Marcos Garc´ıa-Salido, and Margarita Alonso-Ramos. 2017. Using bilingual wordembeddings for multilingual collocation extraction. In Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), pages 21–30. A. Gelbukh and O. Kolesnikova. 2012. Semantic Analysis of Verbal Collocations with Lexical Functions. Springer, Heidelberg. S. Granger. 1998. Prefabricated patterns in advanced EFL writing: Collocations and Formulae. In A. Cowie, editor, Phraseology: Theory, Analysis and Applications, pages 145–160. Oxford University Press, Oxford. F.J. Hausmann. 1984. Wortschatzlernen ist Kollokationslernen. Zum Lehren und Lernen franz¨osischer Wortwendungen. Praxis des neusprachlichen Unterrichts, 31(1):395–406. 5771 Shoaib Jameel, Zied Bouraoui, and Steven Schockaert. 2018. Unsupervised learning of distributional relation vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 23–33. Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S Weld, and Luke Zettlemoyer. 2018. pair2vec: Compositional word-pair embeddings for cross-sentence inference. arXiv preprint arXiv:1810.08854. David A Jurgens, Peter D Turney, Saif M Mohammad, and Keith J Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 356–364. Association for Computational Linguistics. A. Kilgarriff. 2006. Collocationality (and how to measure it). In Proceedings of the Euralex Conference, pages 997–1004, Turin, Italy. Springer-Verlag. Omer Levy, Yoav Goldberg, and Israel Ramat-Gan. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of CoNLL, pages 171–180. Omer Levy, Steffen Remus, Chris Biemann, Ido Dagan, and Israel Ramat-Gan. 2015. Do supervised distributional methods really learn lexical inference relations? In Proceedings of NAACL 2015, Denver, Colorado, USA. M. Lewis and J. Conzett. 2000. Teaching Collocation. Further Developments in the Lexical Approach. LTP, London. I. Mel’ˇcuk. 1995. Phrasemes in Language and Phraseology in Linguistics. In M. 
Everaert, E.J. van der Linden, A. Schenk, and R. Schreuder, editors, Idioms: Structural and Psychological Perspectives, pages 167–232. Lawrence Erlbaum Associates, Hillsdale. I.A. Mel’ˇcuk. 1996. Lexical functions: A tool for the description of lexical relations in the lexicon. In L. Wanner, editor, Lexical Functions in Lexicography and Natural Language Processing, pages 37– 102. Benjamins Academic Publishers, Amsterdam. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and JeffDean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. George A Miller. 1995. WordNet: a lexical database for english. Communications of the ACM, 38(11):39–41. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. arXiv preprint arXiv:1603.00892. N. Nesselhauf. 2005. Collocations in a Learner Corpus. Benjamins Academic Publishers, Amsterdam. Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym-synonym distinction. arXiv preprint arXiv:1605.07766. Brigitte Orliac and Mike Dillinger. 2003. Collocation extraction for machine translation. In Proceedings of Machine Translation Summit IX, pages 292–298. P. Pecina. 2008. A machine learning approach to multiword expression extraction. In Proceedings of the LREC 2008 Workshop Towards a Shared Task for Multiword Expressions (MWE 2008), pages 54–57, Marrakech. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014a. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014b. Glove: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12. Sara Rodr´ıguez-Fern´andez, Luis Espinosa Anke, Roberto Carlini, and Leo Wanner. 2016. Semanticsdriven recognition of collocations using word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 499–505. Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hypernymy detection. In Proceedings of COLING, pages 1025–1036. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2389–2398. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in neural information processing systems, pages 926–934. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 joint conference on empirical 5772 methods in natural language processing and computational natural language learning, pages 1201– 1211. Association for Computational Linguistics. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowledge. unifying WordNet and Wikipedia. pages 697– 706. Ekaterina Vylomova, Laura Rimell, Trevor Cohn, and Timothy Baldwin. 2016. 
Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning. In Proceedings of ACL. L. Wanner, B. Bohnet, and M. Giereth. 2006. Making sense of collocations. Computer Speech and Language, 20(4):609–624. Koki Washio and Tsuneaki Kato. 2018. Neural latent relational analysis to capture lexical semantic relations in a vector space. arXiv preprint arXiv:1809.03401.
2019
576
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5773–5779 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5773 . Abstract In this paper we discuss the usefulness of applying a checking procedure to existing thesauri. The procedure is based on the analysis of discrepancies of corpus-based and thesaurus-based word similarities. We applied the procedure to more than 30 thousand words of the Russian wordnet and found some serious errors in word sense description, including inaccurate relationships and missing senses of ambiguous words. 1 Introduction Large thesauri such as Princeton WordNet (Fellbaum, 1998) and wordnets created for other languages (Bond and Foster, 2013) are important instruments for natural language processing. Developing and maintaining such resources is a very expensive and time-consuming procedure. At the same time, contemporary computational systems, which can translate texts with almost human quality (Castilho et al., 2017), cannot automatically create such thesauri from scratch providing a structure somehow similar to resources created by professionals (CamachoCollados, 2017; Camacho-Collados et al., 2018). But if such a thesaurus exists, the developers should have approaches to maintain and improve it. In previous works, various methods on lexical enrichment of thesauri have been studied (Snow et al., 2006; Navigli and Ponzetto, 2012). But another issue was not practically discussed: how to find mistakes in existing thesaurus descriptions: incorrect relations or missed significant senses of ambiguous words, which were not included accidentally or appeared recently. In fact, it is much more difficult to reveal missed and novel senses or wrong relations, if compared to detect novel words (Frermann and Lapata, 2016; Lau et al., 2014). So it is known that such missed senses are often found during semantic annotation of a corpus and this is an additional problem for such annotation (Snyder and Palmer, 2004; Bond and Wang, 2014). In this paper, we consider an approach that uses embedding models to reveal problems in a thesaurus. Previously, distributional and embedding methods were evaluated in comparison with manual data (Baroni and Lenci, 2011; Panchenko et al., 2015). But we can use them in the opposite way: to utilize embeddingbased similarities and try to detect some problems in a thesaurus. We study such similarities for more than 30 thousand words presented in Russian wordnet RuWordNet (Loukachevitch et al., 2018)1. RuWordNet was created on the basis of another Russian thesaurus RuThes in 2016, which was developed as a tool for natural language processing during more than 20 years (Loukachevitch and Dobrov, 2002). Currently, the published version of RuWordNet includes 110 thousand Russian words and expressions. 2 Related Work Word sense induction approaches (Agirre and Soroa, 2007; Navigli, 2009; Lau et al., 2014; Panchenko et al., 2018) try to induce senses of ambiguous words from their contexts in a large corpus. Sometimes such approaches can find new senses not described in any lexical resources. But the results of these methods are rarely intended to 1http://ruwordnet.ru/en/ Corpus-based Check-up for Thesaurus Natalia Loukachevitch Research Computing Center Lomonosov Moscow State University Leninskie Gory, 1/4, Moscow, Russia [email protected] 5774 . improve the sense representation in a specific semantic resource. Lau et al. 
(2014) study the task of finding unattested senses in a dictionary is studied. At first, they apply the method of word sense induction based on LDA topic modeling. Each extracted sense is represented as top-N words in the constructed topics. To compute the similarity between a sense and a topic, the words in the definition are converted into the probability distribution. Then two probability distributions (gloss-based and topic-based) are compared using the Jensen-Shannon divergence. It was found that the proposed novelty measure could identify target lemmas with high- and medium-frequency novel senses. But the authors evaluated their method using word sense definitions in the Macmillan dictionary2 and did not check the quality of relations presented in a thesaurus. A series of works was devoted to studies of semantic changes in word senses (Gulordava and Baroni, 2011; Mitra et al., 2015; Frermann and Lapata, 2016), Gulordava and Baroni, 2011) study semantic change of words using Google n-gram corpus. They compared frequencies and distributional models based on word bigrams in 60s and 90s. They found that significant growth in frequency often reveals the appearance of a novel sense. Also it was found that sometimes the senses of words do not change but the context of their use changed significantly. In (Mitra et al., 2015), the authors study the detection of word sense changes by analyzing digitized books archives. They constructed networks based on a distributional thesaurus over eight different time windows, clustered these networks and compared these clusters to identify the emergence of novel senses. The performance of the method has been evaluated manually as well as by comparison with WordNet and a list of slang words. But Mitra et al. (2015) did not check if WordNet misses some senses. 3 Comparison of Distributional and Thesaurus Similarities To compare distributional and thesaurus similarities for Russian according to RuWordNet, we used a collection of 1 million news articles as a reference collection. The collection was lemmatized. For our study, we took thesaurus 2 https://www.macmillandictionary.com/ words with frequency more than 100 in the corpus. We obtained 32,596 words (nouns, adjectives, and verbs). For each of these words, all words located in the three-step relation paths (including synonyms, hyponyms, hypernyms, cohyponyms, indirect hyponyms and hypernyms, cross-categorial synonyms, and some others) were considered as related words according to the thesaurus. For ambiguous words, all sense-related paths were considered and collected together. In such a way, for each word, we collected the thesaurus-based "bag" of similar words (TBag). Then we calculated embeddings according to word2vec model with the context window of 3 words, planning to study paradigmatic relations (synonyms, hypernyms, hyponyms, cohyponyms). Using this model, we extracted the twenty most similar words wi to the initial word w0. Each wi should also be from the thesaurus. In such a way, we obtained the distributional (word2vec) "bag" of similar words for w0 (DBag) with their calculated word2vec similarities to w0. Now we can calculate the intersection between TBag and DBag and sum up the word2vec similarities in the intersection. Figure 1 shows the distribution of words according to the similarity score of the TBag-DBag intersection. 
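A minimal sketch of this check is given below, assuming a gensim word2vec model trained on the lemmatized news corpus and a helper thesaurus_bag() that returns all words reachable within three relation steps in the thesaurus; both the model path and the helper are hypothetical placeholders, not part of the thesaurus release.

```python
from gensim.models import Word2Vec

def dbag(word, w2v, thesaurus_vocab, topn=20):
    """Distributional bag: top-n most similar words that also occur in the thesaurus."""
    candidates = w2v.wv.most_similar(word, topn=topn * 5)   # over-generate, then filter
    return [(w, sim) for w, sim in candidates if w in thesaurus_vocab][:topn]

def is_problematic(word, w2v, thesaurus_bag, thesaurus_vocab, threshold=0.5):
    """Flag a word whose TBag-DBag intersection is empty or contains a single
    word whose word2vec similarity stays below the threshold."""
    tbag = thesaurus_bag(word)                               # thesaurus-based bag
    shared = [(w, sim) for w, sim in dbag(word, w2v, thesaurus_vocab) if w in tbag]
    return not shared or (len(shared) == 1 and shared[0][1] < threshold)

w2v = Word2Vec.load("news_1m_lemmatized.w2v")   # hypothetical path to the trained model
```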
The axis X denotes the total similarity in the TBag-DBag intersection: it can achieve more than 17 for some words, denoting high correspondence between corpus-based and thesaurus-based similarities. Relative adjectives corresponding to geographical names have the highest similarity values in the TBag-DBag intersection, for example, samarskii (related to Samara city), vologodskii (related to Vologda city), etc. Also nouns denoting cities, citizens, nationalities, nations have very high similarity values in the TBag-DBag intersection. Figure 1. Distribution of thesaurus words according to the total similarity in the TBag-Dbag intersection 5775 . Among verbs, verbs of thinking, movement (drive  fly), informing (say  inform  warn), value changing (decrease  increase), belonging to large semantic fields, have the highest similarity values (more than 13). At the same time, the rise of the curve in the low similarity values reveals the segment of problematic words. 4 Analyzing Discrepancies between Distributional and Thesaurus Similarities We are interested in cases when the TBagDBag intersection is absent or contains only 1 word with small word2vec similarity (less than the threshold (0.5)). We consider such a difference in the similarity bags as a problem, which should be explained. We obtained 2343 such problematic "words". Table 1 shows the distribution of these words according to the part of speech. It can be seen that verbs have a very low share in this group of words. It can be explained that in Russian, most verbs have two aspect forms (Perfective and Imperfective) and also frequently have sense-related reflexive verbs. All these verb variants (perfective, imperfective, reflexive) are presented as different entries in RuWordNet. Therefore, in most cases altogether they should easily overcome the established threshold of discrepancies. In the same time, if some verbs are found in the list of problematic words, they have real problems of their description in the thesaurus. Part of speech Number Nouns 1240 Adjectives 877 Verbs 226 Total 2343 Table 1. Distribution of parts of speech among problematic words To classify the causes of discrepancies, we ordered the list of problematic words in decreasing similarity of their first most similar word from the thesaurus, that is in the beginning words with the most discrepancies are gathered (further, ProblemList). Table 2 shows the share of found problems in the first 100 words of this list. In the subsections, we consider specific reasons, which can explain discrepancies between thesaurus and corpus-based similarities. 4.1 Morphological Ambiguity and Misprints The most evident source of the discrepancies is morphological ambiguity when two different words w1 and w2 have the same wordform and words from DBag of w1 in fact are semantically related to w2 (usually w2 has larger frequency). For example, in Russian there are two words bank (financial organization) and banka (a kind of container). All similar words from Dbag to banka are from the financial domain: gosbank (state bank), sberbank (saving bank), bankir (banker), etc. The analyzed list of problematic words includes about 90 such words. 32 of such words are located in the top of ProblemList. The technical reasons of some discrepancies are frequent misprints. For example, frequent Russian word zayavit (to proclaim) is often erroneously written as zavit (to curl). Therefore the DBag of word zavit includes many words similar to zayavit such as soobshchit' (to inform), or otmetit (to remark). 
Another example is a pair words statistka (showgirl) and statistika (statistics). In the top-100 of ProblemList, two such words were found. Such cases can be easily excluded from further analysis. 4.2 Named Entities and Multiword Expressions The natural reason of discrepancies are named entities, whose names coincide with ordinary words, they are not described in the thesaurus, and are frequent in the corpus under analysis. For example, mistral is described in RuWordNet as a specific wind, but in the current corpus French helicopter carrier Mistral is actively discussed. Frequent examples of such named entities are names of football, hockey and other teams popular in Russia coinciding with ordinary Russian words or geographical names (Zenith, Dynamo, etc.). Some teams can have nicknames, which are written with lowercase letters in Russian and cannot be revealed as named entities. For example, Russian word iriska means a kind of candy. In the same time, it is nickname of Everton Football Club (The Toffees). Some discrepancies can be based on frequent multiword expressions, which can be present or absent in the thesaurus. A component w1 of multiword expression w2 can be distributionally similar to other words frequently met with w2 or it 5776 . can be similar to words related to the whole phrase w1 w2. For example, word toplenyi (rendered) occurs in the phrase toplenoe maslo (rendered butter) 78 times of 112 of its total frequency. Because of this, this word is the most similar to word mindalnyi (adjective to almond), which is met in the phrase mindalnoe maslo (almond oil) 57 of 180 times. But two words toplenyi and mindalnyi cannot be considered as sense-related words. Explanation Number of words Morphological ambiguity 32 Misprints 2 Unknown names, including 11 - Sports teams names 6 - Sports teams nick names 2 Multiword expression 5 Incorrect relations 6 Lost Senses 10 Table 2. Explanations of discrepancies between thesaurus and distributional similarities for Top-100 of ProblemList 4.3 Correcting Thesaurus Relations In some cases, the idea of distributional similarity is clear, but the revision cannot be made in the thesaurus. We found two types of such cases. First, such epithet as gigant (giant) in the current corpus is applied mainly to large companies (IT-giant, cosmetics giant, etc.). But it can be strange to provide the relations between words giant and company in a thesaurus. The second case can be seen on the similarity row to word massazhistka (female masseur), comprising such words as hairdresser, housekeeper, etc. This is a kind of specialists in specific personal services but it seems that an appropriate word or expression does not exist in Russian. So, we do not have any language means to create a more detailed classification of such specialists. Another interesting example of a similarity grouping is the group of “flaws in the appearance”: word tsellyulit (cellulite)3 is most similar to words: morshchina (crease of the skin), perkhot' (dandruff), kariyes (dental caries), oblyseniye (balding), vesnushki (freckles). It can be noted that a bald head or freckles are not necessary flaws of a specific person, but on 3 https://en.wikipedia.org/wiki/Cellulite average they are considered as flaws. On the other hand, such a phrase as nedostatki vneshnosti (flaws in the appearance) is quite frequent in Internet pages according to global search engines. 
Therefore maybe it could be useful to introduce the corresponding synset for correct describing the conceptual system of the modern personality. But also real problems of thesaurus descriptions were found. They included word relations, which could be presented more accurately (6 cases in Top-100). For example, word tamada (toastmaster) was linked to a more general word, not to veduschii (master of ceremonies), and it was revealed from the ProblemList analysis. 4.4 Senses Unattested in Thesaurus Also significant missed senses including serious errors for verbs were found. As it was mentioned before, in Russian there are groups of related verbs: perfective, imperfective, and reflexive. These verbs usually have a set of related senses, and also can have their own separate senses. In the comparison of discrepancies between TBag and Dbag of verbs, it was found that at least for 25 verbs some of senses were unattested in the current version of the thesaurus, which can be considered as evident mistakes. For example, the imperfective sense of verb otpravlyatsya (depart) was not presented in the thesaurus. Several dozens of novel senses, which are the most frequent senses in the current collection, were identified. Most such senses are jargon (sports or journalism) senses, i.e. derbi (derby as a game between main regional teams) or naves as a type of a pass in football (high-cross pass). Also several novel senses that belong to information technologies were detected: proshivka (firmware), socset’ (abbreviation from sotsial'naya set' ‒ social network). Several colloquial (but well-known) word senses absent in RuWordNet were found. For example, verb obzech’sya in the literary sense means ‘burn oneself’. In Dbag the colloquial sense ‘make a mistake’ is clearly seen. For word korrektor (corrector), two most frequent unattested senses were revealed. The Dbag of this word looks as a mixture of cosmetics and stationary terms: guash' (gouache), kistochka (tassel), tonal'nyy (tonal), chernila (ink), tipografskiy (typographic), etc. 5777 . Word Absent senses Type and Domain Distributional Similarity to Frequ- ency otpravlyatsya Missed imperfective to Perfective otpravit'sya Mistake, General otpravit'sya 0.85 10712 oblachnyy (adjective for oblako – cloud) As in cloud computing, cloud service, etc. Newly appeared, Computer geterogennyy (heterogenous) 0.5 4662 konyushnya Formula-1 team Newly appeared, Sport, Jargon gonshchik (racer) 0.63 3854 derbi (derby) Derby as a game between main regional teams Sport, Jargon, match (match as a competition) 0.62 3743 leibl (label) As a record company Newly appeared, Journalism, Jargon, plastinka (vinyl disk) 0.56 2147 proshivka (firmware) As firmware (kind of software) Newly appeared, Computer, updeit (update), 0.67 1311 korrektor (corrector) Two senses 1. as correction fluid 2. as a cosmetic preparation (skin corrector) Newly appeared, 1. Stationary, 2. Cosmetics guash' (gouache) 0.49 pomada (lipstick) 0.44 237 perkussiya (percussion) As percussion musical instrument Newly appeared, Borrowing from English, Music klavishniy (key-based) 0.73 146 Table 3. Examples of found ambiguous words with missed senses Currently, about 90 evident missed senses (different from named entities), which are most frequent senses of the word in the collection, are identified. Among them, 10 words are in the Top100 of the ProblemList. Table 3 presents the examples of found ambiguous words with missed senses that should be added to RuWordNet. 
4.5 Other Cases In some cases, paths longer than 3 should be used to provide better correspondence between thesaurus-based and corpus-based similar words (10 words in the top 100 words of ProblemList), for example, such 4-step paths as two hypernyms, then two hyponyms. Four words in the top-100 have strange corpusbased similarities. We suppose that it is because of the presence of some news articles in Ukrainian. 5 Conclusion In this paper we discuss the usefulness of applying a checking procedure to existing thesauri. The procedure is based on the analysis of discrepancies between corpus-based and thesaurus-based word similarities. We applied the procedure to more than 30 thousand words of Russian wordnet RuWordNet, classified sources of differences between word similarities and found some serious errors in word sense description including inaccurate relationships and missing senses for ambiguous words. We highly recommend using this procedure for checking wordnets. It is possible to find a lot of unexpected knowledge about the language and the thesaurus. In future, we plan to develop an automatic procedure of finding thesaurus regularities in DBag of problematic words, which can make more evident what kind of relations or senses are missed in the thesaurus. Acknowledgments The reported study was funded by RFBR according to the research project N 18-0001226 (18-00-01240). The author is grateful to Ekaterina Parkhomenko for help with programming the approach. 5778 . References Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and discrimination systems. In Proceedings of the 4th International Workshop on Semantic Evaluations Association for Computational Linguistics, pages 7-12. http://www.aclweb.org/anthology/S07-1002. Marco Baroni and Alessandro Lenci. 2011. How we BLESSed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, Edinburgh, Scotland, pages 1–11. http://www.aclweb.org/anthology/W11-2501. Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual wordnet. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages. 1352-1362. http://www.aclweb.org/anthology/P13-1133. Francis Bond and Shan Wang. 2014. Issues in building English-Chinese parallel corpora with WordNets. In Proceedings of the Seventh Global Wordnet Conference, pages 391-399. http://www.aclweb.org/anthology/W14-0154. Jose Camacho-Collados, Claudio Bovi, Luis Espinosa-Anke, Siergio Oramas, Tomasso Pasini, Enriko Santus, Vered Schartz, Roberto Navigli and Horacio Saggion. 2018. SemEval-2018 Task 9: hypernym discovery. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 712-724. http://www.aclweb.org/anthology/S18-1115. Jose Camacho-Collados. 2017. Why we have switched from building full-fledged taxonomies to simply detecting hypernymy relations. arXiv preprint arXiv:1703.04178 Sheila Castilho, Joss Moorkens, Federico Gaspari, Iacer Calixto, John Tinsley, and Andy Way. 2017. Is neural machine translation the new state of the art? The Prague Bulletin of Mathematical Linguistics, 108(1), pages 109-120. https://doi: 10.1515/pralin-2017-0013. Paul Cook and Graeme Hirst. 2011. Automatic identification of words with novel but infrequent senses. In Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation. https://www.aclweb.org/anthology/Y11-1028 Christiane Fellbaum. 1998. 
WordNet: An electronic lexical database. MIT press. Lea Frermann and Mirella Lapata. 2016. Bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics. V. 4. pages 31-45. https://www.mitpressjournals.org/doi/pdfplus/10.1 162/tacl_a_00081 Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics. Association for Computational Linguistics, pages 67-71. http://www.aclweb.org/anthology/W11-2508. Jey Han Lau, Paul Cook, Diana McCarthy, Spandana Gella and Timothy Baldwin. 2014. Learning word sense distributions, detecting unattested senses and identifying novel senses using topic models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages. 259-270. http://www.aclweb.org/anthology/P14-1025 Natalia Loukachevitch and Boris Dobrov. 2002. Development and Use of Thesaurus of Russian Language RuThes. In Proceedings of workshop on WordNet Structures and Standartisation, and How These Affect WordNet Applications and Evaluation.(LREC 2002), pages 65-70. Natalia Loukachevitch, German Lashevich and Boris Dobrov, Boris. 2018. Comparing Two Thesaurus Representations for Russian. In Proceedings of Global WordNet Conference GWC-2018, pages 35-44. Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An automatic approach to identify word sense changes in text media across timescales. Natural Language Engineering, 21(5), 773-798. https:// doi:10.1017/S135132491500011X Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM computing surveys (CSUR). V. 41, №. 2, pages 10. http://doi.acm.org/10.1145/1459352.1459355 Roberto Navigli and Simone Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193, pages 217-250. https://doi.org/10.1016/j.artint.2012.07.001 Alexander Panchenko, Natalia Loukachevitch, Dmitrii Ustalov, Denis Paperno, Christian Meyer, and Natali Konstantinova. 2015. Russe: The first workshop on russian semantic similarity. In Proceeding of the Dialogue 2015 Conference, pages 89-105. Alexander Panchenko, Anastasiya Lopukhina, Dmitry Ustalov, Konstantin Lopukhin, Nikolay Arefyev, Alexey Leontyev, and Natalia Loukachevitch. 5779 . 2018. RUSSE'2018: A Shared Task on Word Sense Induction for the Russian Language. In Proceedings of Intern. conference Dialogue-2018, pages 547--564. Rion Snow, Daniel Jurafsky, and Andrew Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, Association for Computational Linguistics, pages 801-808. http://www.aclweb.org/anthology/P06-1101. Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text. http://www.aclweb.org/anthology/W04-0811
2019
577
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5780–5785 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5780 Confusionset-guided Pointer Networks for Chinese Spelling Check Dingmin Wang♣, Yi Tay♠, Li Zhong♣ ♣Tencent Cloud AI ♠Nanyang Technological University, Singapore [email protected] [email protected] [email protected] Abstract This paper proposes Confusionset-guided Pointer Networks for Chinese Spell Check (CSC) task. More concretely, our approach utilizes the off-the-shelf confusionset for guiding the character generation. To this end, our novel Seq2Seq model jointly learns to copy a correct character from an input sentence through a pointer network, or generate a character from the confusionset rather than the entire vocabulary. We conduct experiments on three human-annotated datasets, and results demonstrate that our proposed generative model outperforms all competitor models by a large margin of up to 20% F1 score, achieving state-of-the-art performance on three datasets. 1 Introduction In our everyday writing, there exists different types of errors, one of which that frequently occurs is misspelling a character due to the characters’ similarity in terms of sound, shape, and/or meaning. Spelling check is a task to detect and correct such problematic usage of language. Although these tools been useful, detecting and fixing errors in natural language, especially in Chinese, remains far from solved. Notably, Chinese is very different from other alphabetical languages (e.g., English). First, there are no word delimiters between the Chinese words. Second, the error detection task is difficult due to its context-sensitive nature, i.e., errors can be only often determined at phrase/sentence level and not at character-level. In this paper, we propose a novel neural architecture for the Chinese Spelling Check (CSC) task. For the task at hand, it is intuitive that the generated sentence and the input sentence would usually share most characters, along with same sentence structure with a slight exception for several incorrect characters. This is unlike other generative tasks (e.g., neural machine translation or dialog translation) in which the output would differ greatly from the input. To this end, this paper proposes a novel Confusionset-guided copy mechanism which achieves significant performance gain over competitor approaches. Copy mechanisms (Gulcehre et al., 2016), enable the copying of words directly from the input via pointing, providing an extremely appropriate inductive bias for the CSC task. More concretely, our model jointly learns the selection of appropriate characters to copy or to generate a correct character from the vocabulary when an incorrect character occurs. The clear novelty of our work, however, is the infusion of Confusionsets1 with Pointer Networks, which help reduce the search space and vastly improve the probability of generating correct characters. Experimental results on three benchmark datasets demonstrate that our model outperforms all competitor models, obtaining performance gains of up to 20%. 2 Our Proposed Model Given an input, we represent the input sentence as X = {cs 1, cs 2, · · · , cs n}, where ci is a Chinese character2 and n is the number of characters. We map X to an output sentence Y = {ct 1, ct 2, · · · , ct n}, namely maximizing the probability P(Y |X). 
Our model consists of an encoder and a decoder similar to (Sutskever et al., 2014), as shown in Figure 1. The encoder maps X to a higher-level representation with a bidirectional BiLSTM architecture similar to that of (Hochreiter and Schmidhuber, 1997). The decoder is also a recurrent neural 1Confusionsets are a lexicon of commonly confused characters. Details are deferred to later sections. 2In Chinese, there is no explicit delimiter between words and one word usually consists of two or more characters, e.g., 中国(China) as a word consists of two characters: 中and 国. In this paper, we use c and w to denote Chinese word and Chinese character, respectively. 5781 Figure 1: Structure of Confusionset-guided Pointer Network with for Chinese Spelling Check. network with the attention mechanism (Bahdanau et al., 2014) to attend to the encoded representation and generate Y one character at a time. In our setting, the length of Y is limited to be equal to the length of X. Confusionset M Confusionset, a prepared set which consists of commonly confused characters plays a key role in spelling error detection and correction. Most Chinese characters have similar characters in shape or pronunciation. According to the statistic result of incorrect Chinese characters collected from the Internet (Liu et al., 2010), 83% of these errors were related to phonological similarity, and 48% of them were related to visual similarity between the involved characters. To reduce the searching space while ensuring that the target characters are not excluded, we build a confusionset matrix M ∈Rn∗w, where w is the size of the vocabulary, n corresponds to the number of characters in X, in which each element is 0 or 1. Take an input “这使我永生难望” as an example, the 7-th character “望” is a spelling error and its confusion set 3 is “汪圣忘晚往完万网· · · ”. In M[7], the locations these confusion words occur in will be set to be 1 and the left are set to be 0. 2.1 Encoder Before diving into the model, we first give a character-level reasoning. Consider the charac3Confusionset is downloaded from https://github. com/wdimmy/Automatic-Corpus-Generation, and this confusionset claims to cover most of spelling errors (Wang et al., 2018). teristic of Chinese characters, in which there is no explicit delimiter between words like some alphabetic-based languages, i.e., English, so our neural network model operates at the character level. One of reasons is that even for the stateof-the-art word segmenter, there exists some segmenting errors , and texts with spelling errors will exacerbate this phenomenon. Incorrectly segmented results might influence the capture of semantic representation in X for the encoder. The encoder reads X and outputs a sequence of vectors, associated with each word in the sentence, which will be selectively accessed during decoding via a soft attentional mechanism. We use a bidirectional LSTM network to obtain the hidden states hs i for each time step i, hs i = BiLSTM(hs i−1, es i) (1) where hs i is the concatenation of the forward hidden state ←− hs i and the backward hidden state −→ hs i , and es i is the character embedding4 for cs i in X. 
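The construction of the confusionset matrix M described above can be sketched as follows; the confusionset dictionary and the character-to-id vocabulary are assumed to be loaded elsewhere (placeholders here). At decoding step j, the generation distribution is later restricted by multiplying Pvocab element-wise with M[j] (Eq. 11 in Section 2.2).

```python
import numpy as np

def build_confusion_mask(sentence, confusionset, vocab):
    """M has shape (len(sentence), |vocab|); M[i, v] = 1 iff vocabulary item v
    belongs to the confusionset of the i-th input character."""
    M = np.zeros((len(sentence), len(vocab)), dtype=np.float32)
    for i, ch in enumerate(sentence):
        for cand in confusionset.get(ch, []):
            if cand in vocab:
                M[i, vocab[cand]] = 1.0
    return M
```

For the example above, the row of M corresponding to the erroneous character 望 has ones exactly at the vocabulary indices of its confusion characters (汪, 圣, 忘, 晚, 往, 完, 万, 网, and so on) and zeros elsewhere.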
2.2 Decoder The decoder utilizes another LSTM that produces a distribution over the next target character given the source vectors [hs 1, hs 2, · · · , hs n], the previously generated target characters ˆY<j = [ˆct 1, ˆct 2, · · · , ˆct j], and M ∈Rn∗w, mathematically, ht j = LSTM(ht j−1, et j−1) (2) 4We pretrain the Chinese character embedding based on the large quantities of online Chinese corpus via using the method proposed in (Sun et al., 2014). 5782 where ht j is the summary of the target sentence up to the j-th word, where et j is the word embedding for ct j−1. Note that during training the ground truth ct j−1 is fed into the network to predict ct j, while at test time the most probable ˆct j−1 is used. We extend this decoder with an attention based model (Bahdanau et al., 2014; Luong et al., 2015), where, at every time step t, an attention score as i is computed for each hidden state hs i of the encoder, using the attention mechanism of (Vinyals et al., 2015). Mathematically, ui = vT tanh(W1ht j + W2hs i) (3) ai = softmax(ui) (4) ht j ′ = n X i=0 aihs i (5) The source vectors are multiplied with the respective attention weights, and summed to a new vector as the summary of the source vectors, ht j ′. ht j ′ is then interacted with the current decoder hidden state ht j to produce a context vector Cj: Cj = tanh(W(ht j; ht j ′) (6) where U, W1, W2, and W are trainable parameters of the model. Cj is then used for generating two distributions: one is over the vocabulary, which is given by applying an affine transformation to Cj followed by a softmax, Pvocab = softmax(WvocabCj) (7) and the other is over the input sentence, in which we use the copy mechanism. Additionally, we add the location information of the corresponding character cs j in X, Locj, and this allows the decoder to have knowledge of previous (soft) alignments at each time step. Locj is a vector of length n initialized by 0, and at the timestep j, the j-th element in Locj is set to be 1 and the other is kept to be 0. The hidden state for generating the distribution over the input sentence is as follows, Lj = softmax(Wi[WgCj; Locj]) (8) where ·; · denotes the concatenation operation. To train the pointer networks, we define the position label at the decoding time step j as, Lloc j = ( max(z), if ∃z s.t. ct j = X[z] n + 1, otherwise (9) The position n+1 is a sentinel token deliberately concatenated to the end of X that allows us to calculate loss function even if ct j does not exist in the input sentence. Then, the loss between Lt and Lloc t is defined as, Lossl = m X i −log Lj[Lloc j ] (10) During the inference time, ˆct j is defined as, ˆct j = ( arg max(Lj), if arg max(Lj) ! = n + 1 arg max(Pvocab ⊙M[j]), otherwise (11) where ⊙is the element-wise multiplication, and M[j] is utilized to limit the scope of generated words based on the assumption that the correct character is contained in the corresponding confusionset of the erroneous character. 3 Experiments Train data We use the large annotated corpus which contains spelling errors, either visually or phonologically resembled characters, by an automatic approach proposed in (Wang et al., 2018). In addition, a small fraction of three humanannotated training datasets provided in (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015) are also included in our training data. Test data To evaluate the effectiveness of our proposed model, we test our trained model on benchmark datasets from three shared tasks of CSC (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015). 
Since these testing datasets are written in traditional Chinese, we convert them into simplified Chinese characters using OpenCC5. Details of experimental data statistics information, including the training datasets, the testing datasets and the Confusionsets used in our model, are shown in Table 1. Evaluation metrics We adopt precision, recall and F1 scores as our evaluation metrics, which are widely used as evaluation metrics in CSC tasks. Baseline models We compare our model with two baseline methods for CSC: one is N-gram language modeling with a pre-constructed confusionset (LMC), and for its simplicity and power, it is widely used in CSC (Liu et al., 2013; Yu 5https://github.com/BYVoid/ 5783 Name Data Size(lines) Avg. Sentence Length # of Errors Train Data (Wang et al., 2018) 271,329 44.4 382,704 SIGHAN 2013(train) 350 49.2 350 SIGHAN 2014(train) 6,526 49.7 10,087 SIGHAN 2015(train) 3,174 30.0 4,237 Total 281,379 44.4 397,378 Test Data SIGHAN 2013(test) 974 74.1 1,227 SIGHAN 2014(test) 526 50.1 782 SIGHAN 2015(test) 550 30.5 715 Name # of Characters Avg. # of confusionset Confusionsets 4,922 7.8 Table 1: Experimental Data Statistics Information. Methods Detection-level Correction-level Test13 Test14 Test15 Test13 Test14 Test15 P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 LMC 79.8 50.0 61.5 56.4 34.8 43.0 83.8 26.2 40.0 77.6 22.7 35.1 71.1 50.2 58.8 67.6 31.8 43.2 SL 54.0 69.3 60.7 51.9 66.2 58.2 56.6 69.4 62.3 \ \ \ \ \ \ \ \ \ Ours− 40.7 84.3 54.8 51.1 72.3 59.9 58.7 61.7 60.2 67.1 31.9 43.2 51.6 64.7 57.4 46.7 43.9 45.3 Ours+ 56.8 91.4 70.1 63.2 82.5 71.6 66.8 73.1 69.8 79.7 59.4 68.1 79.3 68.9 73.7 71.5 59.5 64.9 Table 2: Experimental results of detection-level and correction-level performance on three testing datasets (%). + and - denote using Confusionsets and not using Confusionsets, respectively. and Li, 2014; Xie et al., 2015). By utilizing the confusionset to replace characters in a sentence, the sentence probability is calculated after and before the replacement, which is then used to determine whether the sentence contains spelling errors. We re-implement the pipline proposed in (Xie et al., 2015); Another is the sequence labeling method (SL), which casts Chinese spelling error detection into a sequence tagging problem on characters, in which the correct and incorrect characters are tagged as 1 and 0, respectively. We follow the baseline model (Wang et al., 2018) that implements a LSTM based sequence tagging model. Model Hyperparameters The training hyperparameters are selected based on the results of the validation set. The dimension of word embedding is set to 300 and the hidden vector is set to 512 in both the encoder and decoder. The dimension of the attention vector is also set to 512 and the dropout rate is set to 0.5 for regularization. The mini-batched Adam (Kingma and Ba, 2014) algorithm is used to optimize the objective function. The batch size and base learning rates are set to 64 and 0.001, respectively. Results As shown in Table 2, we compare our confusionset-guided pointer networks with two baseline methods. Not to our surprise, except for two precision results lower than LMC, our model consistently improves performance over other models for both detection-level and correctionlevel evaluation. 
One reason might be that compared with SL, which considers the spelling check as a classification task at the character-level, and the information available for the current timpstep is somewhat constrained while our generative model can utilize both the location information and the whole input information by an attention mechanism, and the copy mechanism also make the decoding more effective. As for LMC, how to set a threshold probability for judging whether a given sentence is correct remain explored, and there exists great trade-off between the precision and the recall as reported in (Jia et al., 2013). Utility of M Specifically, by comparing the experimental results of Ours−and Ours+, we can observe that the latter achieves better performance, 5784 which validates the effectiveness of utilizing Confusionsets that can help improve the probability of generating correct target characters. 4 Discussion and Future Work In our everyday Chinese writing, there exist a variety of problematic usage of language, one of which is the spelling error referred in this paper. Such spelling errors are mainly generated due to the similarity of Chinese characters in terms of sound, shape, and/or meaning, and the task is to detect the misspelled words and then replace them with their corresponding correct ones. Besides the spelling errors mentioned above, grammar errors are also common in our Chinese writing, which requires us to correct the erroneous sentence by insertion, deletion and even re-ordering. Take as an example “我真不不明白,为啥他要自 杀。” (Translation: I really don’t understand why he committed suicide.), we need to delete the character in red in order to guarantee the correctness of the sentence. However, our model is unable to handle such errors in that we limit the length of the generated sentence to be same to that of the input sentence in order to incorporate Confusionsets into our model as a guiding resource. For the future work, we hope to extend this idea proposed in this paper to train a model capable of handling different types of errors through the generative model since it can generate different lengths of results. One concern is that we need to reconsider how to incorporate Confusionsets into the encoder-decoder architecture. 5 Related Work Most CSC related studies have emerged as a result of a series of shared tasks (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015; Fung et al., 2017; Gaoqi et al., 2018), which involve automatic detection and correction of spelling errors for a given sentence. Earlier work in CSC focus mainly on unsupervised methods such as language model with a pre-constructed confusionset (Liu et al., 2013; Yu and Li, 2014). Subsequently, some work cast CSC as a sequential labeling problem, in which conditional random fields (CRF) (Lafferty et al., 2001), gated recurrent networks (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) have been employed to model the problem (Zheng et al., 2016; Xie et al., 2017; Wu et al., 2018). More recently, motivated by a serials of remarkable success achieved by neural network-based sequenceto-sequence learning (Seq2Seq) in various natural language processing (NLP) tasks (Sutskever et al., 2014; Cho et al., 2014), generative models have also been applied to the spelling check task by considering it as an encoder-decoder (Xie et al., 2016; Ge et al., 2018). 6 Conclusion and Future Work We proposed a novel end-to-end confusionsetguided encoder-decoder model for the Chinese Spelling Check (CSC) task. 
By the infusion of Confusionsets with copy mechanism, our proposed approach achieves a huge performance gain over competitive baselines, demonstrating its effectiveness on the CSC task. Acknowledgements The authors want to express special thanks to all anonymous reviewers for their insightful and valuable comments and suggestions on various aspects of this work. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations Using RNN Encoderdecoder for Statistical Machine Translation. arXiv preprint arXiv:1406.1078. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv preprint arXiv:1412.3555. Gabriel Fung, Maxime Debosschere, Dingmin Wang, Bo Li, Jia Zhu, and Kam-Fai Wong. 2017. Nlptea 2017 shared task–chinese spelling check. In Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA 2017), pages 29–34. RAO Gaoqi, Qi Gong, Baolin Zhang, and Endong Xun. 2018. Overview of nlptea-2018 share task chinese grammatical error diagnosis. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 42–51. 5785 Tao Ge, Furu Wei, and Ming Zhou. 2018. Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study. arXiv preprint arXiv:1807.01270. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the Unknown Words. arXiv preprint arXiv:1603.08148. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-term Memory. Neural computation, 9(8):1735–1780. Zhongye Jia, Peilu Wang, and Hai Zhao. 2013. Graph Model for Chinese Spell Checking. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 88–92. Diederik P Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. Chao-Lin Liu, Min-Hua Lai, Yi-Hsuan Chuang, and Chia-Ying Lee. 2010. Visually and Phonologically Similar Characters in Incorrect Simplified Chinese Words. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 739–747. Association for Computational Linguistics. Xiaodong Liu, Kevin Cheng, Yanyan Luo, Kevin Duh, and Yuji Matsumoto. 2013. A hybrid Chinese Spelling Correction Using Language Model and Statistical Machine Translation with Reranking. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 54–58. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. arXiv preprint arXiv:1508.04025. Yaming Sun, Lei Lin, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2014. Radical-enhanced Chinese Character Embedding. In International Conference on Neural Information Processing, pages 279–286. Springer. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in neural information processing systems, pages 3104–3112. Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 
2015. Introduction to Sighan 2015 Bake-off for Chinese Spelling Check. ACL-IJCNLP 2015, page 32. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar As a Foreign Language. In Advances in neural information processing systems, pages 2773–2781. Dingmin Wang, Yan Song, Jing Li, Jialong Han, and Haisong Zhang. 2018. A hybrid approach to automatic corpus generation for chinese spelling check. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2517–2527. Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee. 2013. Chinese Spelling Check Evaluation at SIGHAN Bake-off 2013. Shih-Hung Wu, Jun-Wei Wang, Liang-Pu Chen, and Ping-Che Yang. 2018. CYUT-III Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2018 CGED Shared Task. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 199–202. Pengjun Xie et al. 2017. Alibaba at ijcnlp-2017 task 1: Embedding grammatical features into lstms for chinese grammatical error diagnosis task. Proceedings of the IJCNLP 2017, Shared Tasks, pages 41–46. Weijian Xie, Peijie Huang, Xinrui Zhang, Kaiduo Hong, Qiang Huang, Bingzhou Chen, and Lei Huang. 2015. Chinese Spelling Check System Based on N-gram Model. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, pages 128–136. Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y Ng. 2016. Neural language correction with character-based attention. arXiv preprint arXiv:1603.09727. Junjie Yu and Zhenghua Li. 2014. Chinese Spelling Error Detection and Correction Based on Language Model, Pronunciation, and Shape. In Proceedings of The Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, pages 220–223. Liang-Chih Yu, Lung-Hao Lee, Yuen-Hsien Tseng, Hsin-Hsi Chen, et al. 2014. Overview of SIGHAN 2014 Bake-off for Chinese Spelling Check. In Proceedings of the 3rd CIPSSIGHAN Joint Conference on Chinese Language Processing (CLP’14), pages 126–132. Bo Zheng, Wanxiang Che, Jiang Guo, and Ting Liu. 2016. Chinese Grammatical Error Diagnosis with Long Short-term Memory Networks. In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016), pages 49–56.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5786–5796 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5786

Generalized Data Augmentation for Low-Resource Translation
Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, Graham Neubig
Language Technologies Institute, Carnegie Mellon University
{mengzhox, xiangk, aanastas, gneubig}@andrew.cmu.edu

Abstract
Translation to or from low-resource languages (LRLs) poses challenges for machine translation in terms of both adequacy and fluency. Data augmentation utilizing large amounts of monolingual data is regarded as an effective way to alleviate these problems. In this paper, we propose a general framework for data augmentation in low-resource machine translation that not only uses target-side monolingual data, but also pivots through a related high-resource language (HRL). Specifically, we experiment with a two-step pivoting method to convert high-resource data to the LRL, making use of available resources to better approximate the true data distribution of the LRL. First, we inject LRL words into HRL sentences through an induced bilingual dictionary. Second, we further edit these modified sentences using a modified unsupervised machine translation framework. Extensive experiments on four low-resource datasets show that under extreme low-resource settings, our data augmentation techniques improve translation quality by up to 1.5 to 8 BLEU points compared to supervised back-translation baselines.1

1 Introduction
The task of Machine Translation (MT) for low-resource languages (LRLs) is notoriously hard due to the lack of the large parallel corpora needed to achieve adequate performance with current Neural Machine Translation (NMT) systems (Koehn and Knowles, 2017). A standard practice to improve training of models for an LRL of interest (e.g. Azerbaijani) is utilizing data from a related high-resource language (HRL, e.g. Turkish). Both transferring from HRL to LRL (Zoph et al., 2016; Nguyen and Chiang, 2017; Gu et al., 2018) and joint training on HRL and LRL parallel data (Johnson et al., 2017; Neubig and Hu, 2018) have been shown to be effective techniques for low-resource NMT.

1Code is available at https://github.com/xiamengzhou/DataAugForLRL

Figure 1: With a low-resource language (LRL) and a related high-resource language (HRL), typical data augmentation scenarios use any available parallel data [b] and [c] to back-translate English monolingual data [a] and generate parallel resources ([1] and [2]). We additionally propose scenarios [3] and [4], where we pivot through HRL in order to generate a LRL–ENG resource.

Incorporating data from other languages can be viewed as one form of data augmentation, and particularly large improvements can be expected when the HRL shares vocabulary or is syntactically similar to the LRL (Lin et al., 2019). Simple joint training is still not ideal, though, considering that there will still be many words and possibly even syntactic structures that will not be shared between the most highly related languages. There are model-based methods that ameliorate the problem through more expressive source-side representations conducive to sharing (Gu et al., 2018; Wang et al., 2019), but they add significant computational and implementation complexity.
In this paper, we examine how to better share information between related LRL and HRLs through a framework of generalized data augmentation for low-resource MT. In our basic setting, we have 5787 access to parallel or monolingual data of an LRL of interest, its HRL, and the target language, which we will assume is English. We propose methods to create pseudo-parallel LRL data in this setting. As illustrated in Figure 1, we augment parallel data via two main methods: 1) back-translating from ENG to LRL or HRL; 2) converting the HRL-ENG dataset to a pseudo LRL-ENG dataset. In the first thread, we focus on creating new parallel sentences through back-translation. Backtranslating from the target language to the source (Sennrich et al., 2016) is a common practice in data augmentation, but has also been shown to be less effective in low-resource settings where it is hard to train a good back-translation model (Currey et al., 2017). As a way to ameliorate this problem, we examine methods to instead translate from the target language to a highly-related HRL, which remains unexplored in the context of low-resource NMT. This pseudo-HRL-ENG dataset can then be used for joint training with the LRL-ENG dataset. In the second thread, we focus on converting an HRL-ENG dataset to a pseudo-LRL-to-ENG dataset that better approximates the true LRL data. Converting between HRLs and LRLs also suffers from lack of resources, but because the LRL and HRL are related, this is an easier task that we argue can be done to some extent by simple (or unsupervised) methods.2 In our proposed method, for the first step, we substitute HRL words on the source side of HRL parallel datasets with corresponding LRL words from an induced bilingual dictionary generated by mapping word embedding spaces (Xing et al., 2015; Lample et al., 2018b). In the second step, we further attempt translate the pseudo-LRL sentences to be closer to LRL ones utilizing an unsupervised machine translation framework. In sum, our contributions are four fold: 1. We conduct a thorough empirical evaluation of data augmentation methods for lowresource translation that take advantage of all accessible data, across four language pairs. 2. We explore two methods for translating between related languages: word-by-word substitution using an induced dictionary, and unsupervised machine translation that further uses this word-by-word substituted data as 2This sort of pseudo-corpus creation was examined in a different context of pivoting for SMT (De Gispert and Marino, 2006), but this was usually done with low-resource sourcetarget language pairs with English as the pivot. input. These methods improve over simple unsupervised translation from HRL to LRL by more than 2 to 10 BLEU points. 3. Our proposed data augmentation methods improve over standard supervised backtranslation by 1.5 to 8 BLEU points, across all datasets, and an additional improvement of up to 1.1 BLEU points by augmenting from both ENG monolingual data, as well as HRL-ENG parallel data. 2 A Generalized Framework for Data Augmentation In this section, we outline a generalized data augmentation framework for low-resource NMT. 
2.1 Datasets and Notations Given an LRL of interest and its corresponding HRL, with the goal of translating the LRL to English, we usually have access to 1) a limited-sized LRL-ENG parallel dataset {SLE, TLE}; 2) a relatively highresource HRL-ENG parallel dataset {SHE, THE}; 3) a limited-sized LRL-HRL parallel dataset {SHL, THL}; 4) large monolingual datasets in LRL ML, HRL MH and English ME. To clarify notation, we use S and T to denote the source and target sides of parallel datasets, and M for monolingual data. Created data will be referred to as ˆSm A )B. The superscript m denotes a particular augmentation approach (specified in Section 3). The subscripts denote the translation direction that is used to create the data, with the LRL, HRL, and ENG denoted with ‘L’, ‘H’, and ‘E’ respectively. 2.2 Augmentation from English The first two options for data augmentation that we explore are typical back-translation approaches: 1. ENG-LRL We train an ENG-LRL system and back-translate English monolingual data to LRL, denoted by { ˆSE )L, ME}. 2. ENG-HRL We train an ENG-HRL system and back-translate English monolingual data to HRL, denoted by { ˆSE )H, ME}. Since we have access to LRL-ENG and HRLENG parallel datasets, we can train these backtranslation systems (Sennrich et al., 2016) in a supervised fashion. The first option is the common practice for data augmentation. However, in a low-resource scenario, the created LRL data can 5788 be of very low quality due to the limited size of training data, which in turn could deteriorate the LRL)ENG translation performance. As we show in Section 5, this is indeed the case. The second direction, using HRL back-translated data for LRL)ENG translation, has not been explored in previous work. However, we suggest that in low-resource scenarios it has potential to be more effective than the first option because the quality of the generated HRL data will be higher, and the HRL is close enough to the LRL that joint training of a model on both languages will likely have a positive effect. 2.3 Augmentation via Pivoting Using HRL-ENG data improves LRL-ENG translation because (1) adding extra ENG data improves the target-side language model, (2) it is possible to share vocabulary (or subwords) between languages, and (3) because the syntactically similar HRL and LRL can jointly learn parameters of the encoder. However, regardless of how close these related languages might be, there still is a mismatch between the vocabulary, and perhaps syntax, of the HRL and LRL. However, translating between HRL and LRL should be an easier task than translating from English, and we argue that this can be achieved by simple methods. Hence, we propose “Augmentation via Pivoting" where we create an LRL-ENG dataset by translating the source side of HRL-ENG data, into the LRL. There are again two ways in which we can construct a new LRL-ENG dataset: 3. HRL-LRL We assume access to an HRL-ENG dataset. We then train an HRL-LRL system and convert the HRL side of SHE to LRL, creating a { ˆSH )L, THE} dataset. 4. ENG-HRL-LRL Exactly as before, except that the HRL-ENG dataset is the result of backtranslation. That means that we have first converted English monolingual data ME to ˆSE )H, and then we convert those to the LRL, creating a dataset { ˆSE )H )L, ME}. Given a LRL-HRL dataset {SLH, TLH} one could also train supervised back-translation systems. But we still face the same problem of data scarcity, leading to poor quality of the augmented datasets. 
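Before turning to the specific HRL-to-LRL conversion methods, the four augmentation options above can be summarized schematically. In the sketch below, `train_nmt`, `translate` and `convert_hrl_to_lrl` are purely illustrative placeholders (not functions from the paper's released code); Section 3 instantiates the conversion step via word substitution and modified UMT.

```python
# Schematic view of the four augmentation options [1]-[4] described above.
# train_nmt, translate and convert_hrl_to_lrl are illustrative placeholders only.

def augment(S_LE, T_LE, S_HE, T_HE, M_E):
    """Return the four pseudo-parallel corpora used to augment LRL->ENG training."""
    # [1] ENG->LRL back-translation (hard: very little LRL-ENG data to train on).
    S_E2L = translate(train_nmt(src=T_LE, tgt=S_LE), M_E)
    # [2] ENG->HRL back-translation (easier: more HRL-ENG data is available).
    S_E2H = translate(train_nmt(src=T_HE, tgt=S_HE), M_E)
    # [3] Convert the HRL side of the HRL-ENG corpus into pseudo-LRL.
    S_H2L = [convert_hrl_to_lrl(s) for s in S_HE]
    # [4] Two-step pivot: back-translated HRL (from [2]) converted into pseudo-LRL.
    S_E2H2L = [convert_hrl_to_lrl(s) for s in S_E2H]
    return [(S_E2L, M_E), (S_E2H, M_E), (S_H2L, T_HE), (S_E2H2L, M_E)]
```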
Based on the fact that an LRL and its corresponding HRL can be similar in morphology and word order, in the following sections, we propose methods to convert HRL to LRL for data augmentation in a more reliable way.

3 LRL-HRL Translation Methods
In this section, we introduce two methods for converting HRL to LRL for data augmentation.

3.1 Augmentation with Word Substitution
Mikolov et al. (2013) show that word embedding spaces share similar innate structure across different languages, making it possible to induce bilingual dictionaries with a limited amount of or even without parallel data (Xing et al., 2015; Zhang et al., 2017; Lample et al., 2018b). Although the capacity of these methods is naturally constrained by the intrinsic properties of the two mapped languages, it is more likely that a high-quality bilingual dictionary can be created for two highly related languages. Given the induced dictionary, we can substitute HRL words with LRL ones and construct a word-by-word translated pseudo-LRL corpus.

Dictionary Induction We use a supervised method to obtain a bilingual dictionary between the two highly related languages. Following Xing et al. (2015), we formulate the task of finding the optimal mapping between the source and target word embedding spaces as the Procrustes problem (Schönemann, 1966), which can be solved by singular value decomposition (SVD):

min_W ||WX − Y||_F^2   s.t.   W^T W = I,

where X and Y are the source and target word embedding spaces respectively. As a seed dictionary to provide supervision, we simply exploit identical words from the two languages. With the learned mapping W, we compute the distance between mapped source and target words with the CSLS similarity measure (Lample et al., 2018b). Moreover, to ensure the quality of the dictionary, a word pair is only added to the dictionary if both words are each other’s closest neighbors. Adding an LRL word to the dictionary for every HRL word results in relatively poor performance due to noise, as shown in Section 5.3.

Corpus Construction Given an HRL-ENG {SHE, THE} or a back-translated { ˆSE )H, ME} dataset, we substitute the words in SHE with the corresponding LRL ones using our induced dictionary. Words not in the dictionary are left untouched. By injecting LRL words, we convert the original or augmented HRL data into pseudo-LRL, which explicitly increases lexical overlap between the concatenated LRL and HRL data. The created datasets are denoted by { ˆSw H )L, THE} and { ˆSw E )H )L, ME}, where w denotes augmentation with word substitution.

3.2 Augmentation with Unsupervised MT
Although we assume LRL and HRL to be similar with regards to word morphology and word order, the simple word-by-word augmentation process will almost certainly be insufficient to completely replicate actual LRL data. A natural next step is to further convert the pseudo-LRL data into a version closer to the real LRL. In order to achieve this in our limited-resource setting, we propose to use unsupervised machine translation (UMT).

UMT Unsupervised Neural Machine Translation (Artetxe et al., 2018; Lample et al., 2018a,c) makes it possible to translate between languages without parallel data. This is done by coupling denoising auto-encoding, iterative back-translation, and shared representations of both encoders and decoders, making it possible for the model to extend the initial naive word-to-word mapping into learning to translate longer sentences.
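Before continuing with UMT, here is a minimal sketch of the dictionary induction and word-by-word substitution step from §3.1: an orthogonal Procrustes mapping obtained via SVD, a mutual-nearest-neighbor filter (shown here with plain cosine similarity rather than the CSLS measure used in the paper), and the substitution itself. Variable names and the cosine simplification are ours, not the paper's implementation.

```python
import numpy as np

def learn_mapping(X_seed, Y_seed):
    """Orthogonal Procrustes: find W minimizing ||X_seed W^T - Y_seed||_F with
    W^T W = I, where rows of X_seed/Y_seed are embeddings of aligned seed pairs."""
    U, _, Vt = np.linalg.svd(Y_seed.T @ X_seed)
    return U @ Vt

def induce_dictionary(X, Y, hrl_words, lrl_words, W):
    """X: (n_h, d) HRL embeddings, Y: (n_l, d) LRL embeddings.
    Keeps only mutual nearest neighbours (cosine here; the paper uses CSLS)."""
    a = X @ W.T
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sim = a @ b.T                       # dense similarity matrix (fine for a sketch)
    h2l, l2h = sim.argmax(axis=1), sim.argmax(axis=0)
    return {hrl_words[i]: lrl_words[j] for i, j in enumerate(h2l) if l2h[j] == i}

def substitute(sentence, dictionary):
    """Word-by-word pseudo-LRL conversion; out-of-dictionary words are left untouched."""
    return " ".join(dictionary.get(w, w) for w in sentence.split())
```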
Initial studies of UMT have focused on data-rich, morphologically simple languages like English and French. Applying the UMT framework to low-resource and morphologically rich languages is largely unexplored, with the exception of Neubig and Hu (2018) and Guzmán et al. (2019), who show that UMT performs exceptionally poorly between dissimilar language pairs, with BLEU scores lower than 1. The problem is naturally harder for morphologically rich LRLs for two reasons. First, morphologically rich languages have higher proportions of infrequent words (Chahuneau et al., 2013). Second, even though still larger than the respective parallel datasets, the size of monolingual datasets in these languages is much smaller compared to HRLs.

Modified Initialization As pointed out in Lample et al. (2018c), a good initialization plays a critical role in training NMT in an unsupervised fashion. Previously explored initialization methods include: 1) word-for-word translation with an induced dictionary to create synthetic sentence pairs for initial training (Lample et al., 2018a; Artetxe et al., 2018); 2) joint Byte-Pair-Encoding (BPE) for both the source and target corpus sides as a pre-processing step. While the first method intends to give a reasonable prior for parameter search, the second method simply forces the source and target languages to share the same subword vocabulary, which has been shown to be effective for translation between highly related languages. Inspired by these two methods, we propose a new initialization method that uses our word substitution strategy (§3.1). Our initialization comprises a sequence of three steps:
1. First, we use an induced dictionary to substitute HRL words in MH with LRL ones, producing a pseudo-LRL monolingual dataset M̂L.
2. Second, we learn a joint word segmentation model on both ML and M̂L and apply it to both datasets.
3. Third, we train an NMT model in an unsupervised fashion between ML and M̂L.
The training objective L is a weighted sum of two loss terms for denoising auto-encoding and iterative back-translation:

L = λ1 ( E_{x∼ML}[−log P_{s→s}(x | C(x))] + E_{y∼M̂L}[−log P_{t→t}(y | C(y))] )
  + λ2 ( E_{x∼ML}[−log P_{t→s}(x | u*(y|x))] + E_{y∼M̂L}[−log P_{s→t}(y | u*(x|y))] )

where u* denotes translations obtained with greedy decoding, C denotes a noisy manipulation over the input, including dropping and swapping words randomly, and λ1 and λ2 denote the weights of language modeling and back-translation respectively. In our method, we do not use any synthetic parallel data for initialization, expecting the model to learn the mappings between a true LRL distribution and a pseudo-LRL distribution. This takes advantage of the fact that the pseudo-LRL is naturally closer to the true LRL than the HRL is, as the injected LRL words increase vocabulary overlap.

Corpus Construction Given the word-level augmented datasets { ˆSw H )L, THE} and { ˆSw E )H )L, ME}, we use the UMT model trained with this method to translate the pseudo-LRL data from ˆSw H )L and from ˆSw E )H )L. We obtain new parallel datasets { ˆSm H )L, THE} and { ˆSm E )H )L, ME}, with superscript m denoting Modified UMT (M-UMT). We use superscript u for un-modified standard UMT.

Datasets        AZE (TUR)   BEL (RUS)   GLG (POR)   SLK (CES)
SLE, TLE        5.9K        4.5K        10K         61K
SHE, THE        182K        208K        185K        103K
SLH, TLH        5.7K        4.2K        3.8K        44K
ML              2.02M       1.95M       1.98M       2M
MH              2M          2M          2M          2M
ME              2M / 200K (shared across languages)
Table 1: Statistics (number of sentences) of all datasets.

3.3 Why Pivot for Back-Translation?
Pivoting through an HRL in order to convert English to LRL will be a better option compared to directly translating ENG to LRL under the following three conditions: 1) HRL and LRL are related enough to allow for the induction of a high-quality bilingual dictionary; 2) There exists a relatively high-resource HRL-ENG dataset; 3) A high-quality LRL-ENG dictionary is hard to acquire due to data scarcity or morphological distance. Essentially, the direct ENG)LRL back-translation may suffer from both data scarcity and morphological differences between the two languages. Our proposal breaks the process into two easier steps: ENG)HRL translation is easier due to the availability of data, and HRL)LRL translation is easier because the two languages are related. A good example is the agglutinative language of Azerbaijiani, where each word may consist of several morphemes and each morpheme could possibly map to an English word itself. Correspondences to (also agglutinative) Turkish, however, are easier to uncover. To give a concrete example, the Azerbijiani word “dü¸süncәlәrim” can be fairly easily aligned to the Turkish word “dü¸süncelerim” while in English it corresponds to the phrase “my thoughts”, which is unlikely to be perfectly aligned. 4 Experimental Setup 4.1 Data We use the multilingual TED corpus (Qi et al., 2018) as a test-bed for evaluating the efficacy of each augmentation method. We conduct extensive experiments over four low-resource languages: Azerbaijani (AZE), Belarusian (BEL), Galician (GLG), and Slovak (SLK), along with their highly related languages Turkish (TUR), Russian (RUS), Portuguese (POR), and Czech (CES) respectively. We also have small-sized LRL-HRL parallel datasets, and we download Wikipedia dumps to acquire monolingual datasets for all languages. The statistics of the parallel datasets are shown in Table 1. For AZE, BEL and GLG, we use all available Wikipedia data, while for the rest of the languages we sample a similar-sized corpus. We sample 2M/200K English sentences from Wikipedia data, which are used for baseline UMT training and augmentation from English respectively. 4.2 Pre-processing We train a joint sentencepiece3 model for each LRL-HRL pair by concatenating the monolingual corpora of the two languages. The segmentation model for English is trained on English monolingual data only. We set the vocabulary size for each model to 20K. All data are then segmented by their respective segmentation model. We use FastText4 to train word embeddings using ML and MH with a dimension of 256 (used for the dictionary induction step). We also pre-train subword level embeddings on the segmented ML, ˆ ML and MH with the same dimension. 4.3 Model Architecture Supervised NMT We use the self-attention Transformer model (Vaswani et al., 2017). We adapt the implementation from the open-source translation toolkit OpenNMT (Klein et al., 2017). Both encoder and decoder consist of 4 layers, with the word embedding and hidden unit dimensions set to 256. 5 We use a batch size of 8096 tokens. Unsupervised NMT We train unsupervised Transformer models with the UnsupervisedMT toolkit.6 Layer sizes and dimensions are the same as in the supervised NMT model. The parameters of the first three layers of the encoder and the decoder are shared. The embedding layers are initialized with the pre-trained subword embeddings from monolingual data. We set the weight parameters for autodenoising language modeling and iterative back translation as λ1 = 1 and λ2 = 1. 
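The pre-processing described in §4.2 can be reproduced with off-the-shelf tools; the sketch below shows one possible way using recent versions of the sentencepiece and fasttext Python bindings. File paths and the example sentence are placeholders, and the exact options shown are our choices rather than the paper's released configuration.

```python
import sentencepiece as spm
import fasttext

# Joint subword model over the concatenated LRL + HRL monolingual data (20K vocabulary).
spm.SentencePieceTrainer.train(
    input="lrl_plus_hrl_mono.txt",      # placeholder path: M_L and M_H concatenated
    model_prefix="joint_spm",
    vocab_size=20000,
)
sp = spm.SentencePieceProcessor(model_file="joint_spm.model")
pieces = sp.encode("bir örnek cümle", out_type=str)    # segment a sentence into subwords

# Word-level embeddings for dictionary induction (dimension 256, as in §4.2).
word_model = fasttext.train_unsupervised("lrl_mono.txt", model="skipgram", dim=256)
vec = word_model.get_word_vector("cümle")
```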
4.4 Training and Model Selection After data augmentation, we follow the pre-train and fine tune paradigm for learning (Zoph et al., 3https://github.com/google/sentencepiece 4https://github.com/facebookresearch/fastText 5We tuned on multiple settings to find the optimal parameters for our datasets. 6https://github.com/facebookresearch/UnsupervisedMT 5791 Training Data BLEU for X)ENG AZE BEL GLG SLK (TUR) (RUS) (POR) (CES) Results from Literature SDE (Wang et al., 2019) 12.89 18.71 31.16 29.16 many-to-many (Aharoni et al., 2019) 12.78 21.73 30.65 29.54 Standard NMT 1 {SLESHE , TLETHE} (supervised MT) 11.83 16.34 29.51 28.12 2 {ML, ME} (unsupervised MT) 0.47 0.18 1.15 0.75 Standard Supervised Back-translation 3 + { ˆSs E )L , ME} 11.84 15.72 29.19 29.79 4 + { ˆSs E )H , ME} 12.46 16.40 30.07 30.60 Augmentation from HRL-ENG 5 + { ˆSs H )L , THE} (supervised MT) 11.92 15.79 29.91 28.52 6 + { ˆSu H )L , THE} (unsupervised MT) 11.86 13.83 29.80 28.69 7 + { ˆSw H )L , THE} (word subst.) 14.87 23.56 32.02 29.60 8 + { ˆSm H )L , THE} (modified UMT) 14.72 23.31 32.27 29.55 9 + { ˆSw H )L ˆSm H )L , THETHE} 15.24 24.25 32.30 30.00 Augmention from ENG by pivoting 10 + { ˆSw E )H )L , ME} (word subst.) 14.18 21.74 31.72 30.90 11 + { ˆSm E )H )L , ME} (modified UMT) 13.71 19.94 31.39 30.22 Combinations 12 + { ˆSw H )L ˆSw E )H )L , THEME} (word subst.) 15.74 24.51 33.16 32.07 13 + { ˆSw H )L ˆSm H )L , THETHE} 15.91 23.69 32.55 31.58 + { ˆSw E )H )L ˆSm E )H )L , MEME} Table 2: Evaluation of translation performance over four language pairs. Rows 1 and 2 show pre-training BLEU scores. Rows 3–13 show scores after fine tuning. Statistically significantly best scores are highlighted (p < 0.05). 2016; Nguyen and Chiang, 2017). We first train a base NMT model on the concatenation of {SLE, TLE} and {SHE, THE}. Then we adopt the mixed fine-tuning strategy of Chu et al. (2017), fine-tuning the base model on the concatenation of the base and augmented datasets. For each setting, we perform a sufficient number of updates to reach convergence in terms of development perplexity. We use the performance on the development sets (as provided by the TED corpus) as our criterion for selecting the best model, both for augmentation and final model training. 5 Results and Analysis A collection of our results with the baseline and our proposed methods is shown in Table 2. 5.1 Baselines The performance of the base supervised model (row 1) varies from 11.8 to 29.5 BLEU points. Generally, the more distant the source language is from English, the worse the performance. A standard unsupervised MT model (row 2) achieves extremely low scores, confirming the results of Guzmán et al. (2019), indicating the difficulties of directly translating between LRLand ENG in an unsupervised fashion. Rows 3 and 4 show that standard supervised back-translation from English at best yields very modest improvements. Notable is the exception of SLK-ENG, which has more parallel data for training than other settings. In the case of BEL and GLG, it even leads to worse performance. Across all four languages, supervised back-translation into the HRL helps more than into the LRL; data is insufficient for training a good LRL-ENG MT model. 5.2 Back-translation from HRL HRL-LRL Rows 5–9 show the results when we create data using the HRL side of an HRL-ENG dataset. Both the low-resource supervised (row 5) and vanilla unsupervised (row 6) HRL)ENG translation do not lead to significant improvements. 
Figure 2: Correlation between HRL-LRL (augmentation) pivot BLEU and LRL-ENG translation BLEU.

On the other hand, our simple word substitution approach (row 7) and the modified UMT approach (row 8) lead to improvements across the board: +3.0 BLEU points in AZE, +7.8 for BEL, +2.3 for GLG, +1.1 for SLK. These results are significant, demonstrating that the quality of the back-translated data is indeed important. In addition, we find that combining the datasets produced by our word substitution and UMT models provides an additional improvement in all cases (row 9). Interestingly, this happens despite the fact that the ENG data are exactly the same between rows 5–9.

ENG-HRL-LRL We also show that even in the absence of parallel HRL-LRL data, our pivoting method is still valuable. Rows 10 and 11 in Table 2 show the translation accuracy when the augmented data are the result of our two-step pivot back-translation. In both cases, monolingual ENG is first translated into HRL and then into LRL with either just word substitution (row 10) or modified UMT (row 11). Although these results are slightly worse than our one-step augmentation of a parallel HRL-LRL dataset, they still outperform the baseline standard back-translation (rows 3 and 4). An interesting note is that in this setting, word substitution is clearly preferable to UMT for the second translation pivoting step, which we explain in §5.3.

Combinations We obtain our best results by combining the two sources of data augmentation. Row 12 shows the result of using our simple word substitution technique on the HRL side of both a parallel and an artificially created (back-translated) HRL-ENG dataset. In this setting, we further improve not only the encoder side of our model, as before, but we also aid the decoder’s language modeling capabilities by providing ENG data from two distinct resources. This leads to improvements of 3.6 to 8.2 BLEU points over the base model and 0.3 to 2.1 over our best results from HRL-ENG augmentation. Finally, row 13 shows our attempt to obtain further gains by combining the datasets from both word substitution and UMT, as we did in setting 7. This leads to a small improvement of 0.2 BLEU points in AZE, but also to a slight degradation on the other three datasets. We also compare the results of our augmentation methods with other state-of-the-art methods that either modify the model to enable better parameter sharing (Wang et al., 2019), or train on many different target languages simultaneously (Aharoni et al., 2019). The results demonstrate that the simple data augmentation strategies presented here improve significantly over these previous methods.

Figure 3: Rare word address rate (bars) and LRL-ENG BLEU scores (line plot) for each data augmentation method. The number in each upper left corner is the Pearson correlation coefficient.

5.3 Analysis
In this section we focus on the quality of HRL)LRL translation, showing that our better M-UMT initialization method leads to significant improvements compared to standard UMT. We use the dev sets of the HRL-LRL datasets to examine the performance of M-UMT between related languages.
We calculate the pivot BLEU7 score on the LRL side of each created dataset (SHL, ˆSw H )L, ˆSu H )L, ˆSm H )L). In Figure 2 we plot pivot HRLLRL BLEU scores against the translation LRL-ENG BLEU ones. First, we observe that across all 7We will refer to pivot BLEU in order to avoid confusion with translation BLEU scores from the previous sections. 5793 Data Example Sentence Pivot BLEU SLE (GLG) Pero con todo, veste obrigado a agardar nas mans dunha serie de estraños moi profesionais. SHE (POR) Em vez disso, somos obrigados a esperar nas mãos de uma série de estranhos muito profissionais. 0.09 ˆSw H )L En vez disso, somos obrigados a esperar nas mans de unha serie de estraños moito profesionais. 0.18 ˆSm H )L En vez diso, somos obrigados a esperar nas mans dunha serie de estraños moi profesionais. 0.54 TLE But instead, you are forced there to wait in the hands of a series of very professional strangers. Table 3: A POR-GLG pivoting example with corresponding pivot BLEU scores. Edits by word substitution or M-UMT are highlighted. datasets, the pivot BLEU of our M-UMT method is higher than standard UMT (the squares are all further right than their corresponding stars). Vanilla UMT’s scores are 2 to 10 BLEU points worse than the M-UMT ones. This means that UMT across related languages significantly benefits from initializing with our simple word substitution method. Second, as illustrated in Figure 2, the pivot BLEU score and the translation BLEU are imperfectly correlated; even though M-UMT reaches the highest pivot BLEU, the resulting translation BLEU is comparable to using the simple word substitution method (rows 7 and 8 in Table 2). The reason is that the quality of { ˆSm H )L , THE} is naturally restricted by the { ˆSw H )L , THE}, whose quality is in turn restricted by the induced dictionary. However, by combining the augmented datasets from these two methods, we consistently improve the translation performance over using only word substitution augmentation (compare Table 2 rows 7 and 9). This suggests that the two augmented sets improve LRL-ENG translation in an orthogonal way. Additionally, we observe that augmentation from back-translated HRL data leads to generally worse results than augmentation from original HRL data (compare rows 7,8 with rows 10,11 in Table 2). We believe this to be the result of noise in the back-translated HRL, which is then compounded by further errors from the induced dictionary. Therefore, we suggest that the simple word substitution method should be preferred for the second pivoting step when augmenting back-translated HRL data. Table 3 provides an example conversion of an HRL sentence to pseudo-LRL with the word substitution strategy, and its translation with M-UMT. From SHE to ˆSw H )L, the word substitution strategy achieves very high unigram scores (0.50 in this case), largely narrowing the gap between two languages. The M-UMT model then edits the pseudoLRL sentence to convert all its words to LRL. AZE BEL GLG SLK (TUR) (RUS) (POR) (CES) WT-Bi 35K 42K 34K 51K WT-Uni 211K 179K 89K 117K WN-Bi 1.6M 2.5M 3.1M 2.0M WN-Uni 2.9M 3.8M 3.8M 2.9M BLEU-Bi 14.33 21.55 31.72 29.09 BLEU-Uni 14.10 21.86 30.51 28.58 Table 4: Injected word type (WT), injected word number (WN) and BLEU score (BLEU) on low-resource translation with different induced dictionaries. Bi denotes bidirectional and Uni denotes unidirectional word induction. 
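The pivot BLEU scores used in the comparison above can be computed against the true LRL side of the dev sets with any standard BLEU implementation; a minimal example with the sacrebleu package (our choice of scorer, not necessarily the one used in the paper) is shown below. The sentences are placeholder data.

```python
import sacrebleu

# Hypotheses: pseudo-LRL sentences produced by word substitution or M-UMT.
# References: the true LRL side of the HRL-LRL dev set.
hypotheses = ["en vez diso , somos obrigados a esperar nas mans dunha serie de estraños"]
references = [["pero con todo , veste obrigado a agardar nas mans dunha serie de estraños"]]

pivot_bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"pivot BLEU: {pivot_bleu.score:.2f}")
```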
Rare Word Coverage Next, we quantitatively evaluate how our pivoting augmentation methods increase rare word coverage and the correlation with LRL-ENG translation quality. For each word in the tested set, we define a word as “rare” if it is in the training set’s lowest 10th frequency percentile. This is particularly true for LRL test set words when using concatenated HRL-LRL training data, as the LRL data will be smaller. We further define rare words to be “addressed” if after adding augmented data the rare word is not in the lowest 10th frequency percentile anymore. Then, we define the “address rate” of a test dataset as the ratio of the number of addressed words to the number of rare words. The address rate of each method, along with the corresponding translation BLEU score is shown in Figure 3. As indicated by the Pearson correlation coefficients, these two metrics are highly correlated, indicating that our augmentation methods significantly mitigate problems caused by rare words, improving MT quality as a result. Dictionary Induction We conduct experiments to compare two methods of dictionary induction from the mapped word embedding spaces: 1) Uni5794 directional: For each HRL word, we collect its closest LRL word to be added to the dictionary; 2) Bidirectional: We only add word pairs the two words of which are each other’s closest neighbor to the dictionary. In order to know how many LRL words are injected into the HRL corpus, we show the number of injected unique word types, number of injected words, and the corresponding BLEU score of models trained with bidirectional and unidirectional word induction in Table 4. It can be seen that the ratio of word numbers is higher than that of word types between bidirectional and unidirectional word induction, indicating that the injected words using the bidirectional method are of relatively high frequency. The BLEU scores show that bidirectional word induction performs better than unidirectional induction in most cases (except BEL). One explanation could be that adding each word’s closest neighbor as a pair into the dictionary introduces additional noise that might harm the low-resource translation to some extent. 6 Related Work Our work is related to multilingual and unsupervised translation, bilingual dictionary induction, as well as approaches for triangulation (pivoting). In a low-resource MT scenario, multilingual training that aims at sharing parameters by leveraging parallel datasets of multiple languages is a common practice. Some works target learning a universal representation for all languages either by leveraging semantic sharing between mapped word embeddings (Gu et al., 2018) or by using character n-gram embeddings (Wang et al., 2019) optimizing subword sharing. More related with data augmentation, Nishimura et al. (2018) fill in missing data with a multi-source setting to boost multilingual translation. Unsupervised machine translation enables training NMT models without parallel data (Artetxe et al., 2018; Lample et al., 2018a,c). Recently, multiple methods have been proposed to further improve the framework. By incorporating a statistical MT system as posterior regularization, Ren et al. (2019) achieved state-of-the-art for en-fr and en-de MT. Besides MT, the framework has also been applied to other unsupervised tasks like nonparallel style transfer (Subramanian et al., 2018; Zhang et al., 2018). 
Bilingual dictionaries learned in both supervised and unsupervised ways have been used in lowresource settings for tasks such as named entity recognition (Xie et al., 2018) or information retrieval (Litschko et al., 2018). Hassan et al. (2017) synthesized data with word embeddings for spoken dialect translation, with a process that requires a LRL-ENG as well as a HRL-LRL dictionary, while our work only uses a HRL-LRL dictionary. Bridging source and target languages through a pivot language was originally proposed for phrasebased MT (De Gispert and Marino, 2006; Cohn and Lapata, 2007). It was later adapted for Neural MT (Levinboim and Chiang, 2015), and Cheng et al. (2017) proposed joint training for pivot-based NMT. Chen et al. (2017) proposed to use an existing pivottarget NMT model to guide the training of sourcetarget model. Lakew et al. (2018) proposed an iterative procedure to realize zero-shot translation by pivoting on a third language. 7 Conclusion We propose a generalized data augmentation framework for low-resource translation, making best use of all available resources. We propose an effective two-step pivoting augmentation method to convert HRL parallel data to LRL. In future work, we will explore methods for controlling the induced dictionary quality to improve word substitution as well as M-UMT. We will also attempt to create an end-toend framework by jointly training M-UMT pivoting system and low-resource translation system in an iterative fashion in order to leverage more versions of augmented data. Acknowledgements The authors thank Junjie Hu and Xinyi Wang for discussions on the paper. This material is based upon work supported in part by the Defense Advanced Research Projects Agency Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114 and the National Science Foundation under grant 1761548. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 5795 References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In International Conference on Learning Representations. Victor Chahuneau, Eva Schlinger, Noah A Smith, and Chris Dyer. 2013. Translating into morphologically rich languages with synthetic phrases. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1677–1687. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zeroresource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1925–1935. Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3974–3980. Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. 
An empirical comparison of domain adaptation methods for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 385–391. Trevor Cohn and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 728–735. Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied monolingual data improves low-resource neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 148–156. Adrià De Gispert and Jose B Marino. 2006. Catalanenglish statistical machine translation without parallel corpus: bridging through spanish. In Proc. of 5th International Conference on Language Resources and Evaluation (LREC), pages 65–68. Citeseer. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor OK Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 344–354. Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. arXiv preprint arXiv:1902.01382. Hany Hassan, Mostafa Elaraby, and Ahmed Tawfik. 2017. Synthetic data for neural machine translation of spoken-dialects. arXiv preprint arXiv:1707.00079. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Opensource toolkit for neural machine translation. pages 67–72. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39. Surafel M Lakew, Quintino F Lotito, Matteo Negri, Marco Turchi, and Marcello Federico. 2018. Improving zero-shot translation of low-resource languages. arXiv preprint arXiv:1811.01389. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018b. Word translation without parallel data. In International Conference on Learning Representations. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018c. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Tomer Levinboim and David Chiang. 2015. Supervised phrase table triangulation with neural word embeddings for low-resource languages. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1079– 1083, Lisbon, Portugal. Association for Computational Linguistics. 
Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual 5796 learning. In The 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy. Robert Litschko, Goran Glavaš, Simone Paolo Ponzetto, and Ivan Vuli´c. 2018. Unsupervised crosslingual information retrieval using monolingual data only. arXiv preprint arXiv:1805.00879. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium. Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proc. IJCNLP, volume 2, pages 296–301. Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, and Satoshi Nakamura. 2018. Multi-source neural machine translation with data augmentation. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 529–535. Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with SMT as posterior regularization. arXiv preprint arXiv:1901.04112. Peter H Schönemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86–96. Sandeep Subramanian, Guillaume Lample, Eric Michael Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and Y-Lan Boureau. 2018. Multiple-attribute text style transfer. CoRR, abs/1811.00552. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019. Multilingual neural machine translation with soft decoupled encoding. In International Conference on Learning Representations. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A Smith, and Jaime Carbonell. 2018. Neural crosslingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369–379. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, Denver, Colorado. Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934– 1945. Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018. Style transfer as unsupervised machine translation. arXiv preprint arXiv:1808.07894. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568–1575.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 608–618 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 608 Topic Tensor Network for Implicit Discourse Relation Recognition in Chinese Sheng Xu, Peifeng Li, Fang Kong, Qiaoming Zhu and Guodong Zhou Institute of Artificial Intelligence, School of Computer Science and Technology, Soochow University, China [email protected];{pfli,kongfang,qmzhu,gdzhou}@suda.edu.cn Abstract In the literature, most of the previous studies on English implicit discourse relation recognition only use sentence-level representations, which cannot provide enough semantic information in Chinese due to its unique paratactic characteristics. In this paper, we propose a topic tensor network to recognize Chinese implicit discourse relations with both sentencelevel and topic-level representations. In particular, besides encoding arguments (discourse units) using a gated convolutional network to obtain sentence-level representations, we train a simplified topic model to infer the latent topic-level representations. Moreover, we feed the two pairs of representations to two factored tensor networks, respectively, to capture both the sentence-level interactions and topiclevel relevance using multi-slice tensors. Experimentation on CDTB, a Chinese discourse corpus, shows that our proposed model significantly outperforms several state-of-the-art baselines in both micro and macro F1-scores. 1 Introduction As a critical component of discourse parsing, discourse relation recognition focuses on determining how two adjacent discourse units (e.g., clauses, sentences, and sentence groups), called arguments, semantically connect to one another. Obviously, identifying discourse relations can help many downstream NLP applications, such as automatic summarization, information extraction and question answering. In principle, the discourse connectives between two arguments are important for recognizing the relationship between them. For explicit discourse relation recognition where the discourse connectives explicitly exist in the text, a simple frequency-based mapping table can achieve high performance due to the critical role of a connective in determining the discourse relations (Xue et al., 2016). For implicit discourse relation recognition, it is much more challenging due to missing an exact connective and it normally depends on the understanding of the whole text (Pitler et al., 2009). This paper focuses on recognizing implicit discourse relations in Chinese. In contrast to English, which is a hypotactic language (formal cohesion), Chinese is a paratactic language (semantic cohesion) that tends to pro-drop clause connectives. Our statistics indicate that the implicit relations in the Chinese CDTB corpus account for 75.2%, while the proportion in the English PDTB corpus declines to only 40%. Hence, recognizing implicit discourse relations in Chinese becomes more crucial than in English. In the literature, most of previous studies focused on English, with only a few on Chinese. Compared with traditional feature-based methods (Pitler et al., 2009; Lin et al., 2009; Wang et al., 2017; Kong and Zhou, 2017) that directly rely on feature engineering, recent neural network models (Liu et al., 2017; Qin et al., 2017; Guo et al., 2018; Bai and Zhao, 2018) can capture deeper semantic cues and learn better representations (Zhang et al., 2015). 
In particular, most neural network-based methods encode arguments using variants of BiLSTM or CNN (Qin et al., 2016; Guo et al., 2018) and propose various models (e.g., the gated relevance network, the encoder-decoder model, and interactive attention) to measure the semantic relevance (Chen et al., 2016; Cianflone and Kosseim, 2018; Guo et al., 2018) Due to the large differences between the hypotactic English language and the paratactic Chinese language, English-based models, which rely heavily on sentence-level representations, may not function well on Chinese. Due to its paratactic nature, Chinese is flooded with a broad range of flexible sentence structures and semantic cohesion, such as ellipses, references, substitutions, and con609 junctions. Therefore, Chinese discourse parsing relies heavily on the deep semantics of arguments, especially topic continuity (Lei et al., 2018). In many cases, considering only the sentence-level representation is not enough for Chinese implicit discourse relation recognition, and we need various semantic clues beyond the sentence-level, e.g., at the topic level. Take the following two arguments as examples: [一九九一年至一九九五年,中国的对 外开放以高速向前推进(From 1991 to 1995, China’s opening was moving forward at a high speed)]Arg1 [国民经济更加广泛地参与国际 分工与国际交换,中外经济技术合作与交流 已渗入到中国经济生活的各个领域(the national economy is more widely involved in the international division of labor and international exchange, and the economic and technological cooperation and exchanges between China and foreign countries had penetrated into various fields of China’s economic life)]Arg2 Although there is an Elaboration relation between the above two arguments, it is difficult to obtain sufficient information for identifying this potential association by directly matching the words in Arg1 (e.g., “speed” and “moving”) and those in Arg2 (e.g., “economic” and “exchanges”). To identify their Elaboration relation, the most crucial clue may be the fact that they belong to the same topic, i.e., China’s opening is an international economic event. Therefore, it is critical for implicit discourse relation recognition to capture such topic information as an important clue. In this paper, we propose a Topic Tensor Network (TTN) to recognize implicit discourse relations in Chinese using both sentence-level and topic-level representations. First, we introduce a GCN-based (Gated Convolutional Network) encoder to learn the sentence-level representations. Then, we train a Simplified Topic Model (STM) to infer the latent topic-level representations to provide additional semantic clues. Finally, we feed the two pairs of representations to two Factored Tensor Networks (FTNs) to model both the sentence-level interactions and topic-level relevance using multi-slice tensors. We summarize the contributions of our work as follows: • Compared with previous works that were focused on sentence-level representations, we incorporate additional topic-level representations to capture the deep semantic interactions among arguments. • We introduce the simplified topic model STM to infer the latent topic-level representations and employ such topic-level relevance to recognize Chinese implicit discourse relations. • We propose the factored tensor network FTN to model the complex semantic interactions, and it has the advantage of significantly reducing the complexity of the original model (Guo et al., 2018). 
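As a structural illustration of the pipeline outlined above (a GCN-based encoder, the STM, two FTNs and an MLP classifier), a skeleton of the forward pass might look as follows. All module internals are placeholders injected from outside; they are detailed in Section 3 and are not reproduced here.

```python
import torch
import torch.nn as nn

class TTNSkeleton(nn.Module):
    """Structural sketch only: gcn_encoder, stm, ftn_sent and ftn_topic stand in for
    the components described in Section 3 and are supplied by the caller."""
    def __init__(self, gcn_encoder, stm, ftn_sent, ftn_topic, feat_dim, num_relations):
        super().__init__()
        self.gcn, self.stm = gcn_encoder, stm
        self.ftn_sent, self.ftn_topic = ftn_sent, ftn_topic
        self.classifier = nn.Linear(feat_dim, num_relations)      # stands in for the MLP

    def forward(self, arg1_seq, arg2_seq, arg1_bow, arg2_bow):
        s1, s2 = self.gcn(arg1_seq), self.gcn(arg2_seq)            # sentence-level reps
        z1, z2 = self.stm(arg1_bow), self.stm(arg2_bow)            # topic-level reps
        feats = torch.cat([self.ftn_sent(s1, s2),                  # sentence-level interactions
                           self.ftn_topic(z1, z2)], dim=-1)        # topic-level relevance
        return self.classifier(feats)                              # relation label scores
```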
2 Related Work
Most previous studies evaluated their models on PDTB (Prasad et al., 2008) and RST-DT (Carlson et al., 2003), the two English discourse corpora available to date. PDTB is the largest English discourse corpus, with 2312 annotated documents from the Wall Street Journal using the PTB-style predicate-argument structure. RST-DT is another popular English discourse corpus, which annotates 385 documents from the Wall Street Journal using the RST tree scheme. Basically, previous studies can be categorized into traditional models that focus on linguistically informed features (Pitler et al., 2009; Lin et al., 2009; Feng and Hirst, 2014; Wang et al., 2017), and neural network methods (Liu and Li, 2016; Chen et al., 2016; Guo et al., 2018; Bai and Zhao, 2018). In particular, Zhou et al. (2010) attempted to predict implicit connectives. Qin et al. (2017), Shi et al. (2017) and Xu et al. (2018) attempted to leverage explicit examples for data augmentation. Other studies resorted to unlabeled data to perform multi-task or unsupervised learning (Liu et al., 2016; Lan et al., 2017). Since discourse relation recognition is essentially a classification problem, what those neural network methods need to consider is how to model the arguments and how to incorporate their semantic interactions. In this regard, most of them focused on improving representations or incorporating the complex interactions. Bai and Zhao (2018) proposed a deep enhanced representation to represent arguments at the character, subword, word, and sentence levels. Chen et al. (2016) introduced a gated relevance network to model both the linear and nonlinear correlations between two arguments. Guo et al. (2018) used a neural tensor network to capture the interactive features with a multi-slice tensor. Among others, Qin et al. (2017) applied an adversarial method to transfer the discriminability of connectives to implicit features through competition, while Xu et al. (2018) expanded the training set by combining active learning with explicit-to-implicit relation transformation. In comparison, previous studies on Chinese implicit discourse relation recognition were mainly carried out on CDTB (Li et al., 2014) and CDTB-ZX (Zhou and Xue, 2015). CDTB includes 500 newswire documents annotated with a connective-driven dependency tree scheme, while CDTB-ZX only contains 164 documents from Xinhua Newswire annotated with PDTB-style discourse relations. Basically, most of the previous studies followed the English studies. Kong and Zhou (2017) constructed an end-to-end Chinese discourse parser, which used contextual features, lexical features and dependency tree features to recognize discourse relations with a maximum entropy classifier. Rönnqvist et al. (2017) proposed a Bi-LSTM model with an attention mechanism to link two arguments by inserting special labels. Liu et al. (2017) provided a memory augmented attention model that used memory slots to store the interactions between two input arguments.

3 Topic Tensor Network for Implicit Discourse Relation Recognition
In this section, we describe our topic tensor network TTN with the overall architecture as shown in Figure 1.

Figure 1: The overall framework of our Topic Tensor Network.
TTN has four major components: (1) a simplified topic model (STM) to infer the latent topic distributions of arguments as topic-level representations; (2) a GCN-based encoder to generate sentence-level representations; (3) two factored tensor networks (FTNs) to jointly model the sentence-level interactions and the topic-level relevance; and (4) an MLP classifier, which produces the final discourse relation labels. In particular, the GCN-based encoder extracts hierarchical features from the long text of arguments by stacking multiple gated convolution layers, and fully represents the sentence-level semantic information. STM provides additional topic information for the MLP classifier to recognize discourse relations at a higher level. On this basis, the two pairs of representations are fed into two FTNs, respectively, which use multi-slice tensors to jointly model the sentence-level interactions and the topic-level relevance. Compared with the neural tensor network used in Guo et al. (2018), our FTN greatly reduces the computational complexity due to the tensor factorization. Hence, we can set more tensor slices to capture more complex interaction features. Formally, the word sequence Ek = {w1, w2, ..., wL} and the BoW (Bag-of-Words) representation Bk ∈RV of arguments are the input of our model, where L is the sequence length and V is the vocabulary size. Each word wi in an argument is represented as the combination of its word embedding ei and POS (Part-Of-Speech) embedding pi. The two word 611 sequences E1 and E2 of the two arguments are fed into the GCN-based encoder to obtain the sentence-level representations, and the BoW representations B1 and B2 are sent to STM to infer the latent topic-level representations. On this basis, two FTNs are applied to capture the interactive features between two arguments based on the above representations. Finally, the MLP classifier concatenates all of the features produced by FTNs to predict the discourse relation label y. 3.1 Simplified Topic Model on Topic-level Representation Similar to the LDA-style topic models, we believe that there is an association between the word distribution Bk of an argument and its topic distribution Zk. For each Bk, we can infer a latent topic distribution Zk ∈RK through our topic model, where K denotes the number of topics. Inspired by the Neural Topic Model (NTM) (Zeng et al., 2018; Miao et al., 2016), we propose a simplified topic model STM based on the Variational AutoEncoder (VAE) (Kingma and Welling, 2013). Unlike NTM, our model does not attempt to reconstruct the document during the decoding phase, and it only restores the word distributions. Although STM cannot learn the semantic word embeddings, it significantly reduces the training parameters to perform unsupervised training on the discourse corpus with a small sample size. Similar to NTM, we can interpret our STM as a VAE: a neural network encoder p(Z|B) first compresses the BoW representation Bk into a continuous hidden vector Zk, and then an MLP decoder g(Z) restores Zk to Bk. Since STM is an unsupervised model, we can only use the existing BoW representation Bk to learn the latent topic distribution Zk ∼N(µ, σ2). The inference network p(Z|B) is defined as follows: µ = fµ(fh(B)) (1) log σ2 = fσ(fh(B)) (2) where fh(·) is a single layer neural network with ReLU as the activation function, and fµ(·), fσ(·) are simple linear transformations. 
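To make the shape of this inference network concrete, here is a minimal PyTorch sketch (written by us for illustration; the class and variable names are not from the paper, and the layer sizes follow the settings reported later in Section 4.1, i.e. V = 5000, a 512-unit hidden layer and K = 256 topics):

import torch
import torch.nn as nn

class STMEncoder(nn.Module):
    """Inference network p(Z|B) of the simplified topic model (Eqs. 1-2)."""
    def __init__(self, vocab_size=5000, hidden_size=512, num_topics=256):
        super().__init__()
        # f_h: single-layer network with ReLU activation
        self.f_h = nn.Sequential(nn.Linear(vocab_size, hidden_size), nn.ReLU())
        # f_mu, f_sigma: simple linear transformations
        self.f_mu = nn.Linear(hidden_size, num_topics)
        self.f_sigma = nn.Linear(hidden_size, num_topics)

    def forward(self, bow):
        h = self.f_h(bow)             # f_h(B)
        mu = self.f_mu(h)             # mu = f_mu(f_h(B))
        log_sigma2 = self.f_sigma(h)  # log sigma^2 = f_sigma(f_h(B))
        return mu, log_sigma2

Sampling Z from these parameters and the decoder that reconstructs the BoW vector are described next.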
For the BoW representation $B_k$ of an argument, the inference network generates its own parameters $\mu_k, \sigma^2_k$ that parameterize the normal distribution $\mathcal{N}(\mu_k, \sigma^2_k)$, from which we can sample the latent topic distribution $Z_k$ corresponding to that argument. To reduce the variance of the stochastic estimation, we follow Rezende et al. (2014) and sample $Z$ with the reparameterization method, drawing $\epsilon \sim \mathcal{N}(0, I)$:

$$Z = \mu + \epsilon \cdot \sigma \quad (3)$$

We want STM to reconstruct the original input $B$ as faithfully as possible from the topic distribution $Z$, while the Gaussian noise added to the encoder output increases the robustness of the decoder. The loss function of STM is therefore defined as:

$$\mathcal{L}_{STM} = \mathbb{E}_{Z \sim p(Z|B)}[-\log q(B|Z)] + \mathrm{KL}(q(Z) \,\|\, p(Z|B)) \quad (4)$$

where $q(Z)$ is the standard normal distribution $\mathcal{N}(0, I)$. Reducing the reconstruction loss is what gives the decoder its generative ability; we compute it as the binary cross-entropy between the BoW representation $B_k$ and the reconstruction $\hat{B}_k$ produced by the decoder. Since decreasing the KL (Kullback–Leibler) divergence pushes every $p(Z|B)$ towards the standard normal distribution, the noise is prevented from collapsing to zero. The KL term has the closed form:

$$\mathrm{KL}(q(Z) \,\|\, p(Z|B)) = \frac{1}{2}(-\log \sigma^2 + \mu^2 + \sigma^2 - 1) \quad (5)$$

Given the BoW representation $B_k$, our STM can thus infer its latent topic distribution $Z_k$ to provide topic-level representations.

3.2 GCN-based Encoder on Sentence-level Representation

Most previous studies used Bi-LSTM or 1D CNN to encode input sequences. However, a CNN has difficulty capturing global information because of the limited receptive field of its convolution kernels, while Bi-LSTM training is time-consuming due to its recurrent structure, especially for long texts such as arguments. To address these issues, Dauphin et al. (2017) proposed the Gated Convolutional Network (GCN), which extracts hierarchical features from long texts by stacking multiple gated convolutional layers and mitigates the vanishing gradient problem with gate units. In this paper, we choose GCN as our text encoder. Following He et al. (2016), we introduce residual connections into GCN by adding the input of each layer to its output, so that the original input information can be passed on to the later layers. Specifically, for an input sequence of $N$ words $E \in \mathbb{R}^{N \times D}$, where $D$ is the sum of the word embedding and POS embedding sizes, each gated convolutional layer $h_l$ is computed as:

$$h_l(X) = (X \cdot W + b) \otimes \sigma(X \cdot V + c) + X \quad (6)$$

where $X \in \mathbb{R}^{N \times D}$ is the input of layer $h_l$ (either the input sequence $E$ or the output of the previous layer), $W \in \mathbb{R}^{C \times D \times D}$, $b \in \mathbb{R}^D$, $V \in \mathbb{R}^{C \times D \times D}$, $c \in \mathbb{R}^D$ are model parameters, and $C$ is the convolution kernel size. $\sigma(\cdot)$ is the sigmoid function and $\otimes$ is the element-wise product between matrices. After stacking $L$ layers on top of the input, we obtain the semantic representation sequence of the argument $H = h_L \circ \ldots \circ h_1(E) \in \mathbb{R}^{N \times D}$.
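For illustration, a minimal PyTorch sketch of one such gated convolutional layer with its residual connection (our own code, not the authors'; the kernel size C = 3 and the input dimension D = 350, i.e. 300-dimensional word plus 50-dimensional POS embeddings, follow the settings in Section 4.1):

import torch
import torch.nn as nn

class GatedConvLayer(nn.Module):
    """One gated convolutional layer with a residual connection (Eq. 6):
    h_l(X) = (X*W + b) (x) sigmoid(X*V + c) + X."""
    def __init__(self, dim=350, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2  # "same" padding keeps the sequence length N
        self.conv_w = nn.Conv1d(dim, dim, kernel_size, padding=pad)  # X*W + b
        self.conv_v = nn.Conv1d(dim, dim, kernel_size, padding=pad)  # X*V + c (gate)

    def forward(self, x):
        # x: (batch, N, D); Conv1d expects (batch, D, N)
        h = x.transpose(1, 2)
        gated = self.conv_w(h) * torch.sigmoid(self.conv_v(h))
        return gated.transpose(1, 2) + x  # residual connection

# stacking L = 3 layers gives H = h_3(h_2(h_1(E)))
gcn_encoder = nn.Sequential(*[GatedConvLayer() for _ in range(3)])

Mean pooling over the resulting sequence, described next, then produces the argument representation.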
Finally, a mean pooling operation over the sequences $H_1 = \{h^1_{L_1}, \ldots, h^N_{L_1}\}$ and $H_2 = \{h^1_{L_2}, \ldots, h^N_{L_2}\}$ of the two arguments yields the respective argument representations:

$$R_1 = \frac{1}{N}\sum_{i=1}^{N} h^i_{L_1}, \qquad R_2 = \frac{1}{N}\sum_{i=1}^{N} h^i_{L_2} \quad (7)$$

In summary, the GCN-based encoder stacks multiple gated convolution layers with residual connections to learn the sentence-level representations; it benefits from the parallel computation of convolutional networks while controlling the information flow through gate units, similar to an LSTM.

3.3 Factored Tensor Network on Joint Representations

Traditional methods for modeling the semantic relevance between two arguments capture linear and nonlinear interactions with various text matching models, such as the Bilinear model (Jenatton et al., 2012) and the Single Layer Network (Collobert and Weston, 2008). Building on these, Socher et al. (2013) proposed the Neural Tensor Network (NTN), which combines the advantages of both and showed the ability of tensors to model complex interactions in knowledge graphs. Following Guo et al. (2018), we use two NTNs to capture the interactive features between the semantic representations $R_1, R_2$ and between the topic distributions $Z_1, Z_2$:

$$T(x, y) = f_n\left(x^\top M^{[1:m]} y + U \begin{bmatrix} x \\ y \end{bmatrix} + s\right) \quad (8)$$

where $f_n(\cdot)$ is a standard nonlinear function, $M \in \mathbb{R}^{d \times d \times m}$ is a 3rd-order transformation tensor, and $U \in \mathbb{R}^{m \times 2d}$ and $s \in \mathbb{R}^m$ are parameters. The tensor product $x^\top M^{[1:m]} y$ results in a vector $c \in \mathbb{R}^m$, where each entry is computed by slice $i$ of the tensor $M$ as $c_i = x^\top M^{[i]} y$; this is equivalent to $m$ Bilinear models that simultaneously capture multiple linear interactions between the vectors. However, it increases the number of parameters and the computational complexity of the model. We therefore adopt tensor factorization (Pei et al., 2014), which approximates each tensor slice $M^{[i]}$ with two low-rank matrices:

$$M^{[i]} \approx J^{[i]} K^{[i]} \quad (9)$$

where $J^{[i]} \in \mathbb{R}^{d \times r}$, $K^{[i]} \in \mathbb{R}^{r \times d}$ and $r \ll d$. We name this model FTN (Factored Tensor Network). Compared with the original NTN (Guo et al., 2018), FTN greatly reduces the number of parameters; hence, we can use more tensor slices and make training easier. In particular, for the semantic representations $R_1, R_2 \in \mathbb{R}^D$ the parameter $d$ in FTN is set to $D$, and for the topic distributions $Z_1, Z_2 \in \mathbb{R}^K$ it is set to $K$. FTN can model not only the sentence-level interactions between argument representations but also the relevance between topic-level representations, which can be regarded as topic-level interactions. Finally, we concatenate the sentence-level interactions $T(R_1, R_2)$ and the topic-level relevance $T(Z_1, Z_2)$ and feed them to a two-layer neural network classifier, which first applies a nonlinear transformation and then computes the probability of each relation with a softmax layer.

3.4 Joint Learning

To update the parameters of all TTN components simultaneously, we tackle topic modeling and classification jointly and define the overall loss function as:

$$\mathcal{L} = \mathcal{L}_{STM} + \lambda \mathcal{L}_{MLP} \quad (10)$$

where $\mathcal{L}_{STM}$ is the loss of STM, $\mathcal{L}_{MLP}$ is the cross-entropy loss of the classifier, and $\lambda$ is the trade-off parameter controlling the balance between the topic model and the MLP classifier. To prevent overfitting, dropout is applied to the vector fed into the softmax layer.
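Before moving to the experiments, here is a sketch of how the factored bilinear interaction of Eqs. (8)-(9) can be implemented (our own illustrative code; the class name and the choice of tanh as the nonlinearity f_n are ours, the slice count m = 128 and rank r = 10 match Section 4.1, and the dimensions 350 and 256 are derived from the embedding and topic settings there):

import torch
import torch.nn as nn

class FactoredTensorNetwork(nn.Module):
    """T(x, y) of Eq. (8), with every tensor slice M^[i] approximated by
    the low-rank product J^[i] K^[i] of Eq. (9)."""
    def __init__(self, dim, num_slices=128, rank=10):
        super().__init__()
        self.J = nn.Parameter(0.01 * torch.randn(num_slices, dim, rank))
        self.K = nn.Parameter(0.01 * torch.randn(num_slices, rank, dim))
        self.U = nn.Linear(2 * dim, num_slices)  # linear term U[x; y] + s (bias plays the role of s)

    def forward(self, x, y):
        # bilinear term: c_i = x^T J^[i] K^[i] y for each slice i
        xj = torch.einsum('bd,mdr->bmr', x, self.J)
        ky = torch.einsum('mrd,bd->bmr', self.K, y)
        bilinear = (xj * ky).sum(dim=-1)          # (batch, m)
        return torch.tanh(bilinear + self.U(torch.cat([x, y], dim=-1)))

# one FTN over sentence-level representations (d = D) and one over topic
# distributions (d = K); their outputs are concatenated for the classifier
ftn_sent, ftn_topic = FactoredTensorNetwork(350), FactoredTensorNetwork(256)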
4 Experimentation 4.1 Experiment Settings Due to the small number of documents in CDTBZX, we evaluate our model on CDTB (Li et al., 2014) with 500 annotated newswire articles from CTB (Xue et al., 2005). CDTB contains 7310 annotated relations (implicit: 5496) which can be divided into 4 classes and 17 categories. To make full use of this corpus, we erase the existing connectives information and treat all samples as implicit discourse relation samples. Following previous work (Kong and Zhou, 2017), we choose the same 450 documents as the training set and the remaining 50 documents as the testing set. We also evaluate TTN on the four toplevel classes in CDTB, and transform all of the non-binary trees into left binary trees. Table 1 summarizes the statistics of the four CDTB relations, i.e., Causality, Coordination, Elaboration, and Transition. Relation Train Test Causality 1213 119 Coordination 4618 515 Elaboration 1465 151 Transition 205 11 Table 1: Statistics of the discourse relations in CDTB. We use HanLP1 as the NLP tool for word segmentation and POS tagging, and use the Keras2 library to implement our model. We selected 10% of the samples from the training set as the development set to fine-tune the hyper-parameters, and only give their final settings due to space limitation. The 300-dimensional pre-trained word embeddings are provided by Word2Vec (Mikolov et al., 2013), and the dimension of the POS embeddings is set to 50. The trade-off parameter λ in Equ. (10) is set to 1.0. To alleviate the data sparseness of the input BoW representations, we limit the vocabulary to the top 5000 most frequent words, i.e., V = 5000. 1https://github.com/hankcs/HanLP 2https://keras.io/ In STM, the number of topics is set to 256, and the number of neurons in the single-layer networks fh(·), fµ(·), fσ(·) are set to 512, 256 and 256, respectively. In addition, the generator g is implemented by a two-layer network with a hidden layer size of 512. In the GCN-based text encoder, the number of layers L is set to 3, and the convolution kernel size C is set to 3. In FTN, the number of tensor slices m is set to 128, and r of the tensor factorization is set to 10. The size of the nonlinear transformation layer in the MLP classifier and the droupout rate are set to 64 and 0.5, respectively. 4.2 Experimental Results To exhibit the effectiveness of our TTN model, we selected Bi-LSTM, CNN and GCN (Dauphin et al., 2017) as baselines in addition to three stateof-the-art models proposed in previous works: (1) Liu&Li (Liu and Li, 2016): a multi-level attention model that simulates the repeated reading process by stacking multiple attention layers with external memory; (2) R¨onnqvist (R¨onnqvist et al., 2017): a Bi-LSTM model with attention mechanism that first links argument pairs by inserting special labels; and (3) Guo (Guo et al., 2018): a neural tensor network that encodes the arguments by BiLSTM and interactive attention. Among them, GCN uses the same settings as our model. Following Liu and Li (2016), the hidden size for each direction of Bi-LSTM is set to 350, the same as the dimension of the word embeddings. Following Qin et al. (2016), the convolution kernel size and the number in CNN are set to 2 and 1024, respectively. The three state-of-the-art models are reproduced following their corresponding work. The experimental results on CDTB are illustrated in Table 2. It shows that our TTN model outperforms the other baselines in both the micro and macro F1-scores. 
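As a side note on the metrics reported in Table 2, the per-relation, micro and macro F1-scores can be computed with scikit-learn; this is only an illustrative sketch, not the authors' evaluation script:

from sklearn.metrics import f1_score

RELATIONS = ["Causality", "Coordination", "Elaboration", "Transition"]

def report_scores(y_true, y_pred):
    # per-relation F1 plus the micro and macro averages used in Table 2
    per_class = f1_score(y_true, y_pred, labels=RELATIONS, average=None)
    micro = f1_score(y_true, y_pred, average="micro")
    macro = f1_score(y_true, y_pred, average="macro")
    return dict(zip(RELATIONS, per_class)), micro, macro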
This indicated that topiclevel information is a vital evidence to reveal the relationships among arguments and justify the effectiveness of our TTN model. Compared with the basic recurrent neural network Bi-LSTM, the CNN and GCN significantly improve the micro and macro F1-scores due to the powerful capabilities of convolution kernels to capture features. Especially, GCN is better than CNN because it can control the information flow in the convolutional network using gate units and extract hierarchical features by stacking multiple layers. In addition, Liu&Li and Guo, two state-of614 Model Caus. Coor. Elab. Tran. Micro-F1 Macro-F1 Bi-LSTM 37.4 79.8 51.8 73.7 68.7 61.1 CNN 41.2 81.5 52.5 80.0 71.4 64.4 GCN 46.2 82.4 51.4 76.2 71.5 64.6 Liu&Li 42.8 81.4 54.6 85.7 71.1 66.2 R¨onnqvist 39.2 81.6 57.1 78.3 71.1 64.3 Guo 42.4 80.1 60.0 80.0 70.7 65.8 TTN 40.6 83.1 60.7 84.2 73.6 67.8 Table 2: Performance of six baselines and TNN with F1-scores. the-art models on English implicit discourse relation recognition, and R¨onnqvist, a state-of-the-art model on Chinese, focus on extracting sentencelevel features from arguments and achieve similar performance. Our TTN model outperforms all of the baselines with large gains from 2.1 to 4.9 in the micro F1-score and significant gains from 1.6 to 6.7 in the macro F1-score. Compared with the baselines, TTN not only captures the interactive features at sentence-level, but also considers the topic-level relevance among arguments. This result shows that TTN can recognize the discourse relations at a higher level to improve the performance of Chinese implicit discourse relation recognition. Different from Liu&Li, TTN not only learns the argument representations by stacking multiple layers with residuals to simulate the repeated reading, but also models the deep semantic interactions through factored tensor networks. Different from Guo, TTN not only reduces the complexity of the tensor network using tensor factorization, but also models the sentence-level and topic-level interactions together. 5 Analysis and Discussion 5.1 Impact on Different Relations Table 2 also compares the F1-scores on different relations. We can find that our TTN achieves the highest F1-scores in the Elaboration and Coordination relations, and it achieves a comparable performance in the Transition relation. However, it reduces the F1-score in the Causality relation by 5.6, compared with GCN. To explain the reasons behind this, we conduct experiments on some variants of TTN with the results shown in Table 3. We choose the gated convolutional network (GCN) as the Base model with its parameters being set the same as our model. To analyze the contribution of the topic-level representation and the factored tensor modeling method separately, we add our simplified topic model (STM) and our factored tensor network (FTN) to the Base model, respectively. The results shows that STM gives the latent topic distributions of arguments and there is a significant improvement (+8.6) in recognizing the Elaboration relation. The existence of an Elaboration relation between two arguments means that the content of one argument is a further explanation of the other, and these arguments usually have similar topic distributions. Hence, STM essentially provides additional topic distribution features to TNN, which help in recognizing the Elaboration relation. 
Equally, STM can also improve the performance of recognizing the Coordination relation because two arguments with the Coordination relation are equally important at the semantic level, and their contents describe different aspects of one thing or different parts of a certain behavior; hence, they are also similar at the topic level in most cases. However, this does not apply to the Causality relation and there is a large drop (-9.8) with the lowest F1-score among all four relations. The reason behind this may be due to the fact that the recognition of the Causality relation relies more on the logical connection, and arguments with the Causality relation are not similar at the topic level in most cases. Hence, STM, which simply introduces topical information to the Base model, does not help and even may harm the recognition. Take the following two arguments as examples: [出口快速增长,(Exports have grown rapidly,)]Arg1 [成为推动经济增长的重要力 量。(become an important force driving economic growth.)]Arg2 Arg1 is the reason for Arg2, and hence the relation between them is Causality. However, 615 Model Caus. Coor. Elab. Tran. Micro-F1 Macro-F1 Base(GCN) 46.2 82.4 51.4 76.2 71.5 64.6 +STM 36.4 82.9 60.0 73.7 73.1 64.1 +FTN 41.3 82.7 55.3 84.2 72.5 66.4 Table 3: Comparison of Base, STM and FTN on the F1-score. Model Caus. Coor. Elab. Tran. Micro-F1 Macro-F1 TTN 40.6 83.1 60.7 84.2 73.6 67.8 NTN(Guo) 39.6 82.1 56.2 84.2 72.6 66.4 Table 4: Comparison of TTN and NTN(Guo) on the F1-score. from the perspective of the topic, the words in the two arguments revolve around the same topic of “economic growth”. Therefore, our STM will directly infers the similar topic distribution from the words of these two arguments and interfere with the recognition of the Causality relation. Our neural factored tensor networks (FTNs) are capable of modeling complex semantic interactions between two arguments using multiple Bilinear models and single layer neural network. Therefore, after the addition, a certain improvement has been achieved in recognizing most relations (except for Causality). Especially, it improves the F1-scores of the Elaboration and Transition relations by 3.9 and 8.0, respectively. 5.2 Impact of Tensor Factorization To further verify the impact of tensor factorization, we compare it with Guo et al. (2018). Table 4 illustrates the results, where NTN(Guo) is a modified version of our TTN, which uses the NTN model proposed by Guo et al. (2018) to replace our FTN. Since NTN(Guo) does not use the tensor factorization operation, its parameter number and computational complexity increase greatly. The parameters of factored tensor network in our model are reduced by approximately 20 times, compared with NTN(Guo). If it directly adopts our parameter settings, the model will have serious overfitting, and it will not even recognize the Transition relation, which is only a small proportion of the training set. Therefore, following (Guo et al., 2018), we set the tensor number to a very small value. It shows that NTN(Guo) has a performance degradation of 1.0 and 1.4 in micro and macro F1scores, respectively, indicating that the tensor factorization operation in our model is very effective. In addition, our neural tensor network can set more tensor slices to model the complex interactions between two arguments. 5.3 Error Analysis Table 5 illustrates the error statistics of our TTN model. 
It shows that 51.3% of the Causality samples, 33.8% of the Elaboration samples, and 18.2% of the Transition samples are incorrectly identified as Coordination. This indicates that the error mainly occurs when judging whether a sample is Coordination. This may be due to two reasons, which are that the number of Coordination samples accounts for more than half of the training set (61.6%) and that many argument pairs with non-Coordination relations are similar at both the text level and the topic level. Take the following two arguments as examples: Model Caus. Coor. Elab. Tran. Caus. 51.3% 15.1% 0% Coor. 5.4% 7.8% 0% Elab. 6.0% 33.8% 0% Tran. 9.1% 18.2% 0% Table 5: Percentages of misclassified samples. [甘肃省积极实施科技兴农战略,推广增产 措施(Gansu Province promotes various agricultural applicable technologies and production increase measures)]Arg1 [农业获得较好收成,全 年粮食总产量达七十六点六亿公斤(Agriculture has achieved a good harvest, and the annual total grain output reached 7.66 billion kg)]Arg2 In above samples, since Arg1 is the reason for Arg2, the discourse relation between them is Causality. However, there is a strong sentencelevel correlation between the words in Arg1 (e.g., 616 “agricultural” and “production”) and those in Arg2 (e.g., “harvests”, “gain”, and “output”). Moreover, these two arguments are all about agriculture. Therefore, there is a strong similarity in the topic distribution, too. 6 Conclusion In this paper, we propose a topic tensor network TTN to recognize implicit discourse relations in Chinese with both the sentence-level and topiclevel representations. In addition to using a GCNbased encoder to obtain the sentence-level argument representations, we train a STM to infer the latent topic distribution as the topic-level representations. Moreover, we feed the two pairs of representations to two FTNs, respectively, to model the sentence-level interactions and topic-level relevance among arguments. Evaluation on CTDB shows that our proposed TTN model significantly outperforms several state-of-the-art baselines in both micro and macro F1-scores. In the future work, we will focus on how to mine different representations for different discourse relation types and apply the topic information to other languages. Acknowledgments The authors would like to thank four anonymous reviewers for their comments on this paper. This research was supported by the National Natural Science Foundation of China under Grant Nos. 61836007, 61772354 and 61773276. References Hongxiao Bai and Hai Zhao. 2018. Deep enhanced representation for implicit discourse relation recognition. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 571–583. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Current and new directions in discourse and dialogue, pages 85–112. Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Implicit discourse relation detection via a deep architecture with gated relevance network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1726–1735. National Institute of Child Health and Human Development. 2000. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Andre Cianflone and Leila Kosseim. 2018. Attention for implicit discourse relation recognition. 
In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC), pages 1946–1951. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning (ICML), pages 160–167. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 933–941. Vanessa Wei Feng and Graeme Hirst. 2014. A lineartime bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 511–521. Fengyu Guo, Ruifang He, Di Jin, Jianwu Dang, Longbiao Wang, and Xiangang Li. 2018. Implicit discourse relation recognition using neural tensor network with interactive attention and sparse learning. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 547–558. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 770–778. Rodolphe Jenatton, Nicolas L Roux, Antoine Bordes, and Guillaume R Obozinski. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems (NIPS), pages 3167–3175. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Fang Kong and Guodong Zhou. 2017. A CDT-styled end-to-end Chinese discourse parser. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 16(4):26. Man Lan, Jianxiang Wang, Yuanbin Wu, Zheng-Yu Niu, and Haifeng Wang. 2017. Multi-task attentionbased neural networks for implicit discourse relationship representation and identification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1299–1308. 617 Wenqiang Lei, Yuanxin Xiang, Yuwei Wang, Qian Zhong, Meichun Liu, and Min-Yen Kan. 2018. Linguistic properties matter for implicit discourse relation recognition: Combining semantic interaction, topic continuity and attribution. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), pages 4848–4855. Yancui Li, Fang Kong, and Guodong Zhou. 2014. Building Chinese discourse corpus with connectivedriven dependency tree structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2105– 2114. Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 343–351. Yang Liu and Sujian Li. 2016. Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1224– 1233. Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), pages 2750–2756. Yang Liu, Jiajun Zhang, and Chengqing Zong. 2017. Memory augmented attention model for Chinese implicit discourse relation recognition. 
In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data (CCL), pages 411–423. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1727–1736. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), pages 3111–3119. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for Chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 293–303. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP), pages 683–691. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn Discourse TreeBank 2.0. In The 6th international conference on Language Resources and Evaluation (LREC), pages 2961—-2968. Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. A stacking gated neural architecture for implicit discourse relation classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2263–2270. Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric Xing. 2017. Adversarial connectiveexploiting networks for implicit discourse relation classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1006–1017. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 1278–1286. Samuel R¨onnqvist, Niko Schenk, and Christian Chiarcos. 2017. A recurrent neural model with attention for the recognition of Chinese implicit discourse relations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 256–262. Wei Shi, Frances Yung, Raphael Rubino, and Vera Demberg. 2017. Using explicit discourse connectives in translation for implicit discourse relation classification. In Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP), pages 484–495. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in neural information processing systems (NIPS), pages 926–934. Yizhong Wang, Sujian Li, and Houfeng Wang. 2017. A two-stage parsing method for text-level discourse analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 184–188. Yang Xu, Yu Hong, Huibin Ruan, Jianmin Yao, Min Zhang, and Guodong Zhou. 2018. Using active learning to expand training data for implicit discourse relation recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 725–731. Naiwen Xue, Fei Xia, Fudong Chiou, and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. 
Natural Language Engineering, 11(2):207–238. 618 Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Bonnie Webber, Attapol Rutherford, Chuan Wang, and Hongmin Wang. 2016. The CoNLL-2016 shared task on shallow discourse parsing. In Proceedings of the 20th Conference on Computational Natural Language Learning - Shared Task (CoNLL), pages 1–19. Jichuan Zeng, Jing Li, Yan Song, Cuiyun Gao, Michael R Lyu, and Irwin King. 2018. Topic memory networks for short text classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3120–3131. Biao Zhang, Jinsong Su, Deyi Xiong, Yaojie Lu, Hong Duan, and Junfeng Yao. 2015. Shallow convolutional neural network for implicit discourse relation recognition. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2230–2235. Yuping Zhou and Nianwen Xue. 2015. The Chinese Discourse TreeBank: a Chinese corpus annotated with discourse relations. Language Resources and Evaluation, 49(2):397–431. Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recognition. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 1507–1514.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5797 Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned Elena Voita1,2 David Talbot1 Fedor Moiseev1,5 Rico Sennrich3,4 Ivan Titov3,2 1Yandex, Russia 2University of Amsterdam, Netherlands 3University of Edinburgh, Scotland 4University of Zurich, Switzerland 5Moscow Institute of Physics and Technology, Russia {lena-voita, talbot, femoiseev}@yandex-team.ru [email protected] [email protected] Abstract Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads in the encoder to the overall performance of the model and analyze the roles played by them. We find that the most important and confident heads play consistent and often linguistically-interpretable roles. When pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, we observe that specialized heads are last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.1 1 Introduction The Transformer (Vaswani et al., 2017) has become the dominant modeling paradigm in neural machine translation. It follows the encoderdecoder framework using stacked multi-head selfattention and fully connected layers. Multi-head attention was shown to make more efficient use of the model’s capacity: performance of the model with 8 heads is almost 1 BLEU point higher than that of a model of the same size with single-head attention (Vaswani et al., 2017). The Transformer achieved state-of-the-art results in recent shared translation tasks (Bojar et al., 2018; Niehues et al., 2018). Despite the model’s widespread adoption and recent attempts to investigate the kinds of information learned by the model’s encoder (Raganato and Tiedemann, 2018), the analysis of multi-head attention and its importance 1We release code at https://github.com/ lena-voita/the-story-of-heads. for translation is challenging. Previous analysis of multi-head attention considered the average of attention weights over all heads at a given position or focused only on the maximum attention weights (Voita et al., 2018; Tang et al., 2018), but neither method explicitly takes into account the varying importance of different heads. Also, this obscures the roles played by individual heads which, as we show, influence the generated translations to differing extents. We attempt to answer the following questions: • To what extent does translation quality depend on individual encoder heads? • Do individual encoder heads play consistent and interpretable roles? If so, which are the most important ones for translation quality? • Which types of model attention (encoder self-attention, decoder self-attention or decoder-encoder attention) are most sensitive to the number of attention heads and on which layers? • Can we significantly reduce the number of attention heads while preserving translation quality? We start by identifying the most important heads in each encoder layer using layer-wise relevance propagation (Ding et al., 2017). 
For heads judged to be important, we then attempt to characterize the roles they perform. We observe the following types of role: positional (heads attending to an adjacent token), syntactic (heads attending to tokens in a specific syntactic dependency relation) and attention to rare words (heads pointing to the least frequent tokens in the sentence). To understand whether the remaining heads perform vital but less easily defined roles, or are simply redundant to the performance of the model as 5798 measured by translation quality, we introduce a method for pruning heads based on Louizos et al. (2018). While we cannot easily incorporate the number of active heads as a penalty term in our learning objective (i.e. the L0 regularizer), we can use a differentiable relaxation. We prune attention heads in a continuous learning scenario starting from the converged full model and identify the roles of those which remain in the model. These experiments corroborate the findings of layer-wise relevance propagation; in particular, heads with clearly identifiable positional and syntactic functions are pruned last and hence shown to be most important for the translation task. Our key findings are as follows: • Only a small subset of heads are important for translation; • Important heads have one or more specialized and interpretable functions in the model; • The functions correspond to attention to neighbouring words and to tokens in specific syntactic dependency relations. 2 Transformer Architecture In this section, we briefly describe the Transformer architecture (Vaswani et al., 2017) introducing the terminology used in the rest of the paper. The Transformer is an encoder-decoder model that uses stacked self-attention and fully connected layers for both the encoder and decoder. The encoder consists of N layers, each containing two sub-layers: (a) a multi-head self-attention mechanism, and (b) a feed-forward network. The multi-head attention mechanism relies on scaled dot-product attention, which operates on a query Q, a key K and a value V : Attention(Q, K, V ) = softmax QKT √dk  V (1) where dk is the key dimensionality. In selfattention, queries, keys and values come from the output of the previous layer. The multi-head attention mechanism obtains h (i.e. one per head) different representations of (Q, K, V ), computes scaled dot-product attention for each representation, concatenates the results, and projects the concatenation through a feed-forward layer. This can be expressed in the same notation as Equation (1): headi = Attention(QW Q i , KW K i , V W V i ) (2) MultiHead(Q, K, V ) = Concati(headi)W O (3) where the Wi and W O are parameter matrices. The second component of each layer of the Transformer network is a feed-forward network. The authors propose using a two-layer network with a ReLU activation. Analogously, each layer of the decoder contains the two sub-layers mentioned above as well as an additional multi-head attention sub-layer. This additional sub-layer receives the output of the encoder as its keys and values. The Transformer uses multi-head attention in three different ways: encoder self-attention, decoder self-attention and decoder-encoder attention. In this work, we concentrate primarily on encoder self-attention. 3 Data and setting We focus on English as a source language and consider three target languages: Russian, German and French. 
For each language pair, we use the same number of sentence pairs from WMT data to control for the amount of training data and train Transformer models with the same numbers of parameters. We use 2.5m sentence pairs, corresponding to the amount of English–Russian parallel training data (excluding UN and Paracrawl). In Section 5.2 we use the same held-out data for all language pairs; these are 50k English sentences taken from the WMT EN-FR data not used in training. For English-Russian, we perform additional experiments using the publicly available OpenSubtitles2018 corpus (Lison et al., 2018) to evaluate the impact of domains on our results. In Section 6 we concentrate on English-Russian and two domains: WMT and OpenSubtitles. Model hyperparameters, preprocessing and training details are provided in appendix B. 4 Identifying Important Heads Previous work analyzing how representations are formed by the Transformer’s multi-head attention mechanism focused on either the average or the maximum attention weights over all heads (Voita et al., 2018; Tang et al., 2018), but neither method explicitly takes into account the varying importance of different heads. Also, this obscures the roles played by individual heads which, as we will show, influence the generated translations to differing extents. 5799 (a) LRP (b) confidence (c) head functions Figure 1: Importance (according to LRP), confidence, and function of self-attention heads. In each layer, heads are sorted by their relevance according to LRP. Model trained on 6m OpenSubtitles EN-RU data. (a) LRP (EN-DE) (b) head functions (c) LRP (EN-FR) (d) head functions Figure 2: Importance (according to LRP) and function of self-attention heads. In each layer, heads are sorted by their relevance according to LRP. Models trained on 2.5m WMT EN-DE (a, b) and EN-FR (c, d). We define the “confidence” of a head as the average of its maximum attention weight excluding the end of sentence symbol,2 where average is taken over tokens in a set of sentences used for evaluation (development set). A confident head is one that usually assigns a high proportion of its attention to a single token. Intuitively, we might expect confident heads to be important to the translation task. Layer-wise relevance propagation (LRP) (Ding et al., 2017) is a method for computing the relative contribution of neurons at one point in a network to neurons at another.3 Here we propose to use LRP to evaluate the degree to which different heads at each layer contribute to the top-1 logit predicted by the model. Heads whose outputs have a higher relevance value may be judged to be more important to the model’s predictions. 2We exclude EOS on the grounds that it is not a real token. 3A detailed description of LRP is provided in appendix A. The results of LRP are shown in Figures 1a, 2a, 2c. In each layer, LRP ranks a small number of heads as much more important than all others. The confidence for each head is shown in Figure 1b. We can observe that the relevance of a head as computed by LRP agrees to a reasonable extent with its confidence. The only clear exception to this pattern is the head judged by LRP to be the most important in the first layer. It is the most relevant head in the first layer but its average maximum attention weight is low. We will discuss this head further in Section 5.3. 5 Characterizing heads We now turn to investigating whether heads play consistent and interpretable roles within the model. 
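Before examining individual heads, note that the per-head "confidence" defined in Section 4 can be computed directly from recorded attention maps. A small NumPy sketch (our code; the data layout, one square attention matrix per sentence with the EOS column already removed, is an assumption, not the authors' implementation):

import numpy as np

def head_confidence(attn_per_sentence):
    """attn_per_sentence: list of (len_i, len_i) attention matrices of one head,
    with the EOS position excluded. Confidence = average over all query tokens
    of the maximum attention weight that token assigns."""
    max_weights = [a.max(axis=-1) for a in attn_per_sentence]
    return float(np.mean(np.concatenate(max_weights)))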
We examined some attention matrices paying particular attention to heads ranked highly by LRP and identified three functions which heads might be playing: 1. positional: the head points to an adjacent token, 2. syntactic: the head points to tokens in a specific syntactic relation, 3. rare words: the head points to the least frequent tokens in a sentence. Now we discuss the criteria used to determine if a head is performing one of these functions and examine properties of the corresponding heads. 5.1 Positional heads We refer to a head as “positional” if at least 90% of the time its maximum attention weight is assigned to a specific relative position (in practice either -1 or +1, i.e. attention to adjacent tokens). Such heads are shown in purple in Figures 1c for 5800 English-Russian, 2b for English-German, 2d for English-French and marked with the relative position. As can be seen, the positional heads correspond to a large extent to the most confident heads and the most important heads as ranked by LRP. In fact, the average maximum attention weight exceeds 0.8 for every positional head for all language pairs considered here. 5.2 Syntactic heads We hypothesize that, when used to perform translation, the Transformer’s encoder may be responsible for disambiguating the syntactic structure of the source sentence. We therefore wish to know whether a head attends to tokens corresponding to any of the major syntactic relations in a sentence. In our analysis, we looked at the following dependency relations: nominal subject (nsubj), direct object (dobj), adjectival modifier (amod) and adverbial modifier (advmod). These include the main verbal arguments of a sentence and some other common relations. They also include those relations which might inform morphological agreement or government in one or more of the target languages considered here. 5.2.1 Methodology We evaluate to what extent each head in the Transformer’s encoder accounts for a specific dependency relation by comparing its attention weights to a predicted dependency structure generated using CoreNLP (Manning et al., 2014) on a large number of held-out sentences. We calculate for each head how often it assigns its maximum attention weight (excluding EOS) to a token with which it is in one of the aforementioned dependency relations. We count each relation separately and allow the relation to hold in either direction between the two tokens. We refer to this relative frequency as the “accuracy” of head on a specific dependency relation in a specific direction. Note that under this definition, we may evaluate the accuracy of a head for multiple dependency relations. Many dependency relations are frequently observed in specific relative positions (for example, often they hold between adjacent tokens, see Figure 3). We say that a head is “syntactic” if its accuracy is at least 10% higher than the baseline that looks at the most frequent relative position for this dependency relation. Figure 3: Distribution of the relative position of dependent for different dependency relations (WMT). dep. direction best head / baseline accuracy WMT OpenSubtitles nsubj v →s 45 / 35 77 / 45 s →v 52 / 35 70 / 45 dobj v →o 78 / 41 61 / 46 o →v 73 / 41 84 / 46 amod noun →adj.m. 74 / 72 81 / 80 adj.m. →noun 82 / 72 81 / 80 advmod v →adv.m. 48 / 46 38 / 33 adv.m. →v 52 / 46 42 / 33 Table 1: Dependency scores for EN-RU, comparing the best self-attention head to a positional baseline. Models trained on 2.5m WMT data and 6m OpenSubtitles data. 
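Both criteria above are easy to check once the position of each head's maximum attention weight has been extracted on held-out data. A hedged sketch (our code; the offsets and the dependency lookup structure are assumptions about the data layout, not the authors' implementation):

import numpy as np

def is_positional(argmax_offsets, threshold=0.9):
    """argmax_offsets: relative positions (e.g. -1 or +1) of a head's maximum
    attention weight, pooled over many tokens. A head is 'positional' if one
    relative position accounts for at least 90% of the cases."""
    _, counts = np.unique(argmax_offsets, return_counts=True)
    return counts.max() / counts.sum() >= threshold

def dependency_accuracy(argmax_targets, dep_pairs):
    """argmax_targets: (token_index, attended_index) pairs for one head.
    dep_pairs: maps a token index to the indices it is linked to by a given
    CoreNLP dependency relation. Returns the 'accuracy' of Section 5.2.1."""
    hits = sum(1 for tok, tgt in argmax_targets if tgt in dep_pairs.get(tok, ()))
    return hits / max(len(argmax_targets), 1)

A head would then be labelled syntactic if this accuracy exceeds the most-frequent-relative-position baseline for the relation by at least 10%.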
Figure 4: Dependency scores for EN-RU, EN-DE, ENFR each trained on 2.5m WMT data. 5.2.2 Results Table 1 shows the accuracy of the most accurate head for each of the considered dependency relations on the two domains for English-Russian. Figure 4 compares the scores of the models trained on WMT with different target languages. Clearly certain heads learn to detect syntactic relations with accuracies significantly higher than the positional baseline. This supports the hypoth5801 (a) (b) (c) Figure 5: Attention maps of the rare words head. Models trained on WMT: (a) EN-RU, (b) EN-DE, (c) EN-FR esis that the encoder does indeed perform some amount of syntactic disambiguation of the source sentence. Several heads appear to be responsible for the same dependency relation. These heads are shown in green in Figures 1c, 2b, 2d. Unfortunately, it is not possible to draw any strong conclusions from these results regarding the impact of target language morphology on the accuracy of the syntactic attention heads although relations with strong target morphology are among those that are most accurately learned. Note the difference in accuracy of the verbsubject relation heads across the two domains for English-Russian. We hypothesize that this is due to the greater variety of grammatical person present4 in the Subtitles data which requires more attention to this relation. However, we leave proper analysis of this to future work. 5.3 Rare words In all models (EN-RU, EN-DE, EN-FR on WMT and EN-RU on OpenSubtitles), we find that one head in the first layer is judged to be much more important to the model’s predictions than any other heads in this layer. We find that this head points to the least frequent tokens in a sentence. For models trained on OpenSubtitles, among sentences where the least frequent token in a sentence is not in the top500 most frequent tokens, this head points to the rarest token in 66% of cases, and to one of the two least frequent tokens in 83% of cases. For models trained on WMT, this head points to one of the two least frequent tokens in more than 50% of such cases. This head is shown in orange in Fig4First, second and third person subjects are encountered in approximately 6%, 3% and 91% of cases in WMT data and in 32%, 21% and 47% of cases in OpenSubtitles data. ures 1c, 2b, 2d. Examples of attention maps for this head for models trained on WMT data with different target languages are shown in Figure 5. 6 Pruning Attention Heads We have identified certain functions of the most relevant heads at each layer and showed that to a large extent they are interpretable. What of the remaining heads? Are they redundant to translation quality or do they play equally vital but simply less easily defined roles? We introduce a method for pruning attention heads to try to answer these questions. Our method is based on Louizos et al. (2018). Whereas they pruned individual neural network weights, we prune entire model components (i.e. heads). We start by describing our method and then examine how performance changes as we remove heads, identifying the functions of heads retained in the sparsified models. 6.1 Method We modify the original Transformer architecture by multiplying the representation computed by each headi by a scalar gate gi. Equation (3) turns into MultiHead(Q, K, V )=Concati(gi·headi)W O. Unlike usual gates, gi are parameters specific to heads and are independent of the input (i.e. the sentence). 
As we would like to disable less important heads completely rather than simply downweighting them, we would ideally apply L0 regularization to the scalars gi. The L0 norm equals the number of non-zero components and would push the model to switch off less important heads: L0(g1, . . . , gh) = h X i=1 (1 −[[gi = 0]]), 5802 where h is the number of heads, and [[ ]] denotes the indicator function. Unfortunately, the L0 norm is nondifferentiable and so cannot be directly incorporated as a regularization term in the objective function. Instead, we use a stochastic relaxation: each gate gi is now a random variable drawn independently from a head-specific distribution.5 We use the Hard Concrete distributions (Louizos et al., 2018), a parameterized family of mixed discrete-continuous distributions over the closed interval [0, 1], see Figure 6a. The distributions have non-zero probability mass at 0 and 1, P(gi = 0|φi) and P(gi = 1|φi), where φi are the distribution parameters. Intuitively, the Hard Concrete distribution is obtained by stretching the binary version of the Concrete (aka Gumbel softmax) distribution (Maddison et al., 2017; Jang et al., 2017) from the original support of (0, 1) to (−ϵ, 1 + ϵ) and then collapsing the probability mass assigned to (−ϵ, 1] and [1, 1 + ϵ) to single points, 0 and 1, respectively. These stretching and rectification operations yield a mixed discretecontinuous distribution over [0, 1]. Now the sum of the probabilities of heads being non-zero can be used as a relaxation of the L0 norm: LC(φ) = h X i=1 (1 −P(gi = 0|φi)). The new training objective is L(θ, φ) = Lxent(θ, φ) + λLC(φ), where θ are the parameters of the original Transformer, Lxent(θ, φ) is cross-entropy loss for the translation model, and LC(φ) is the regularizer described above. The objective is easy to optimize: the reparameterization trick (Kingma and Welling, 2014; Rezende et al., 2014) can be used to backpropagate through the sampling process for each gi, whereas the regularizer and its gradients are available in the closed form. Interestingly, we observe that the model converges to solutions where gates are either almost completely closed (i.e. the head is pruned, P(gi = 0|φi) ≈1) or completely open (P(gi = 1|φi) ≈1), the latter not being explicitly encouraged.6 This means that at test time we can treat the model as a standard Transformer and use only a subset of heads.7 5In training, we resample gate values gi for each batch. 6The ‘noise’ pushes the network not to use middle values. The combination of noise and rectification has been previ(a) (b) Figure 6: Concrete distribution: (a) Concrete and its stretched and rectified version (Hard Concrete); (b) Hard Concrete distributions with different parameters. When applying this regularizer, we start from the converged model trained without the LC penalty (i.e. parameters θ are initialized with the parameters of the converged model) and then add the gates and continue training the full objective. By varying the coefficient λ in the optimized objective, we obtain models with different numbers of heads retained. 6.2 Pruning encoder heads To determine which head functions are most important in the encoder and how many heads the model needs, we conduct a series of experiments with gates applied only to encoder self-attention. Here we prune a model by fine-tuning a trained model with the regularized objective.8 During pruning, the parameters of the decoder are fixed and only the encoder parameters and head gates are fine-tuned. 
By not fine-tuning the decoder, we ensure that the functions of the pruned encoder heads do not migrate to the decoder. 6.2.1 Quantitative results: BLEU score BLEU scores are provided in Figure 7. Surprisingly, for OpenSubtitles, we lose only 0.25 BLEU when we prune all but 4 heads out of 48.9 For the more complex WMT task, 10 heads in the encoder are sufficient to stay within 0.15 BLEU of the full model. ously used to achieve discretization (e.g., Kaiser and Bengio (2018)). 7At test time, gate values are either 0 or 1 depending on which of the values P(gi = 0|φi), P(gi = 1|φi) is larger. 8In preliminary experiments, we observed that fine-tuning a trained model gives slightly better results (0.2–0.6 BLEU) than applying the regularized objective, or training a model with the same number of self-attention heads, from scratch. 9If all heads in a layer are pruned, the only remaining connection to the previous layer is the residual connection. 5803 Figure 7: BLEU score as a function of number of retained encoder heads (EN-RU). Regularization applied by fine-tuning trained model. Figure 8: Functions of encoder heads retained after pruning. Each column represents all remaining heads after varying amount of pruning (EN-RU; Subtitles). 6.2.2 Functions of retained heads Results in Figure 7 suggest that the encoder remains effective even with only a few heads. In this section, we investigate the function of those heads that remain in the encoder during pruning. Figure 8 shows all heads color-coded for their function in a pruned model. Each column corresponds to a model with a particular number of heads retained after pruning. Heads from all layers are ordered by their function. Some heads can perform several functions (e.g., s →v and v →o); in this case the number of functions is shown. First, we note that the model with 17 heads retains heads with all the functions that we identified in Section 5, even though 2⁄3 of the heads have been pruned. This indicates that these functions are indeed the most important. Furthermore, when we have fewer heads in the model, some functions “drift” to other heads: for example, we see positional heads starting to track syntactic dependencies; hence some heads are assigned more than one color at certain stages in Figure 8. attention BLEU heads from from (e/d/d-e) trained scratch WMT, 2.5m baseline 48/48/48 29.6 sparse heads 14/31/30 29.62 29.47 12/21/25 29.36 28.95 8/13/15 29.06 28.56 5/9/12 28.90 28.41 OpenSubtitles, 6m baseline 48/48/48 32.4 sparse heads 27/31/46 32.24 32.23 13/17/31 32.23 31.98 6/9/13 32.27 31.84 Table 2: BLEU scores for gates in all attentions, ENRU. Number of attention heads is provided in the following order: encoder self-attention, decoder selfattention, decoder-encoder attention. 6.3 Pruning all types of attention heads We found our pruning technique to be efficient at reducing the number of heads in the encoder without a major drop in translation quality. Now we investigate the effect of pruning all types of attention heads in the model (not just in the encoder). This allows us to evaluate the importance of different types of attention in the model for the task of translation. In these experiments, we add gates to all multi-head attention heads in the Transformer, i.e. encoder and decoder self-attention and attention from the decoder to the encoder. 6.3.1 Quantitative results: BLEU score Results of experiments pruning heads in all attention layers are provided in Table 2. 
For models trained on WMT data, we are able to prune almost 3⁄4 of encoder heads and more than 1⁄3 of heads in decoder self-attention and decoder-encoder attention without any noticeable loss in translation quality (sparse heads, row 1). We can also prune more than half of all heads in the model and lose no more than 0.25 BLEU. While these results show clearly that the majority of attention heads can be removed from the fully trained model without significant loss in translation quality, it is not clear whether a model can be trained from scratch with such a small number of heads. In the rightmost column in Ta5804 Figure 9: Number of active heads of different attention type for models with different sparsity rate ble 2 we provide BLEU scores for models trained with exactly the same number and configuration of heads in each layer as the corresponding pruned models but starting from a random initialization of parameters. Here the degradation in translation quality is more significant than for pruned models with the same number of heads. This agrees with the observations made in works on model compression: sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization (Zhu and Gupta, 2017; Gale et al., 2019). In our case, attention heads are less likely to learn important roles when a model is retrained from scratch with a small number of heads. 6.3.2 Heads importance Figure 9 shows the number of retained heads for each attention type at different pruning rates. We can see that the model prefers to prune encoder self-attention heads first, while decoder-encoder attention heads appear to be the most important for both datasets. Obviously, without decoderencoder attention no translation can happen. The importance of decoder self-attention heads, which function primarily as a target side language model, varies across domains. These heads appear to be almost as important as decoder-encoder attention heads for WMT data with its long sentences (24 tokens on average), and slightly more important than encoder self-attention heads for OpenSubtitles dataset where sentences are shorter (8 tokens on average). Figure 10 shows the number of active selfattention and decoder-encoder attention heads at different layers in the decoder for models with different sparsity rate (to reduce noise, we plot the sum of heads remaining in pairs of adjacent layers). It can be seen that self-attention heads are Figure 10: Number of active heads in different layers of the decoder for models with different sparsity rate (EN-RU, WMT) retained more readily in the lower layers, while decoder-encoder attention heads are retained in the higher layers. This suggests that lower layers of the Transformer’s decoder are mostly responsible for language modeling, while higher layers are mostly responsible for conditioning on the source sentence. These observations are similar for both datasets we use. 7 Related work One popular approach to the analysis of NMT representations is to evaluate how informative they are for various linguistic tasks. Different levels of linguistic analysis have been considered including morphology (Belinkov et al., 2017a; Dalvi et al., 2017; Bisazza and Tump, 2018), syntax (Shi et al., 2016) and semantics (Hill et al., 2017; Belinkov et al., 2017b; Raganato and Tiedemann, 2018). Bisazza and Tump (2018) showed that the target language determines which information gets encoded. 
This agrees with our results for different domains on the English-Russian translation task in Section 5.2.2. There we observed that attention heads are more likely to track syntactic relations requiring more complex agreement in the target language (in this case the subject-verb relation). An alternative method to study the ability of language models and machine translation models to capture hierarchical information is to test their sensitivity to specific grammatical errors (Linzen et al., 2016; Gulordava et al., 2018; Tran et al., 2018; Sennrich, 2017; Tang et al., 2018). While this line of work has shown that NMT models, including the Transformer, do learn some syntactic structures, our work provides further insight into the role of multi-head attention. There are several works analyzing attention weights of different NMT models (Ghader and Monz, 2017; Voita et al., 2018; Tang et al., 2018; 5805 Raganato and Tiedemann, 2018). Raganato and Tiedemann (2018) use the self-attention weights of the Transformer’s encoder to induce a tree structure for each sentence and compute the unlabeled attachment score of these trees. However they do not evaluate specific syntactic relations (i.e. labeled attachment scores) or consider how different heads specialize to specific dependency relations. Recently Bau et al. (2019) proposed a method for identifying important individual neurons in NMT models. They show that similar important neurons emerge in different models. Rather than verifying the importance of individual neurons, we identify the importance of entire attention heads using layer-wise relevance propagation and verify our findings by observing which heads are retained when pruning the model. 8 Conclusions We evaluate the contribution made by individual attention heads to Transformer model performance on translation. We use layer-wise relevance propagation to show that the relative contribution of heads varies: only a small subset of heads appear to be important for the translation task. Important heads have one or more interpretable functions in the model, including attending to adjacent words and tracking specific syntactic relations. To determine if the remaining less-interpretable heads are crucial to the model’s performance, we introduce a new approach to pruning attention heads. We observe that specialized heads are the last to be pruned, confirming their importance directly. Moreover, the vast majority of heads, especially the encoder self-attention heads, can be removed without seriously affecting performance. In future work, we would like to investigate how our pruning method compares to alternative methods of model compression in NMT. Acknowledgments We would like to thank anonymous reviewers for their comments. We thank Wilker Aziz, Joost Bastings for their helpful suggestions. The authors also thank Yandex Machine Translation team for helpful discussions and inspiration. Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). References Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140. Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2019. Identifying and controlling important neurons in neural machine translation. 
In International Conference on Learning Representations, New Orleans. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017a. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872. Association for Computational Linguistics. Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10. Asian Federation of Natural Language Processing. Arianna Bisazza and Clara Tump. 2018. The lazy encoder: A fine-grained analysis of the role of morphology in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2871–2876, Brussels, Belgium. Association for Computational Linguistics. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 272–307, Belgium, Brussels. Association for Computational Linguistics. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, and Stephan Vogel. 2017. Understanding and improving morphological learning in the neural machine translation decoder. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 142–151. Asian Federation of Natural Language Processing. Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150– 1159, Vancouver, Canada. Association for Computational Linguistics. 5806 Trevor Gale, Erich Elsen, and Sara Hooker. 2019. The state of sparsity in deep neural networks. arXiv preprint. Hamidreza Ghader and Christof Monz. 2017. What does attention in neural machine translation pay attention to? In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 30–39. Asian Federation of Natural Language Processing. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205. Association for Computational Linguistics. Felix Hill, Kyunghyun Cho, Sébastien Jean, and Y Bengio. 2017. The representational geometry of word meanings acquired by neural machine translation models. Machine Translation, 31. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations, Toulon, France. Łukasz Kaiser and Samy Bengio. 2018. Discrete autoencoders for sequence models. arXiv preprint arXiv:1801.09797. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representation (ICLR 2015). 
Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In International Conference on Learning Representations, Banff, Canada. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521–535. Pierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018. OpenSubtitles2018: Statistical Rescoring of Sentence Alignments in Large, Noisy Parallel Corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through l_0 regularization. In International Conference on Learning Representations, Vancouver, Canada. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In International Conference on Learning Representations, Toulon, France. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics. Jan Niehues, Ronaldo Cattoni, Sebastian Stüker, Mauro Cettolo, Marco Turchi, and Marcello Federico. 2018. The IWSLT 2018 Evaluation Campaign. In Proceedings of the 15th International Workshop on Spoken Language Translation, pages 118–123, Bruges, Belgium. Martin Popel and Ondrej Bojar. 2018. Training Tips for the Transformer Model. pages 43–70. Alessandro Raganato and Jörg Tiedemann. 2018. An analysis of encoder representations in transformerbased machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287–297, Brussels, Belgium. Association for Computational Linguistics. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1278–1286, Bejing, China. PMLR. Rico Sennrich. 2017. How Grammatical is Characterlevel Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376–382, Valencia, Spain. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526– 1534. Association for Computational Linguistics. Gongbo Tang, Mathias Müller, Annette Rios, and Rico Sennrich. 2018. Why self-attention? a targeted evaluation of neural machine translation architectures. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4263–4272. 
Association for Computational Linguistics. 5807 Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2018. An analysis of attention mechanisms: The case of word sense disambiguation in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 26– 35, Belgium, Brussels. Association for Computational Linguistics. Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hierarchical structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731–4736. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, Los Angeles. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Michael Zhu and Suyog Gupta. 2017. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878. A Layer-wise Relevance Propagation Layer-wise relevance propagation (LRP) was originally designed to compute the contributions of single pixels to predictions of image classifiers (Bach et al., 2015). LRP back-propagates relevance recursively from the output layer to the input layer. We adapt LRP to the Transformer model to calculate relevance that measures the association degree between two arbitrary neurons in neural networks. In the following, we describe the general idea of the LRP method, give the formal definition used in our experiments and describe how to compute a head relevance. A.1 General idea Layer-wise relevance propagation in its general form assumes that the model can be decomposed into several layers of computation. The first layer are the inputs (for example, the pixels of an image or tokens of a sentence), the last layer is the realvalued prediction output of the model f. The l-th layer is modeled as a vector z = (z(l) d )V (l) d=1 with dimensionality V (l). Layer-wise relevance propagation assumes that we have a Relevance score R(l+1) d for each dimension z(l+1) d of the vector z at layer l + 1. The idea is to find a Relevance score R(l) d for each dimension z(l) d of the vector z at the next layer l which is closer to the input layer such that the following equation holds: f =. . .= X d∈l+1 R(l+1) d = X d∈l R(l) d = · · · = X d R(1) d . This equation represents a conservation principle, on which LRP relies to propagate the prediction back without using gradients. Intuitively, this means that total contribution of neurons at each layer is constant. Since we are interested only in heads relevance, we do not propagate till input variables and stop at the neurons of the encoder layer of interest. A.2 Formal rules In this section, we provide formal rules for propagating relevance. Here we follow the approach by Ding et al. (2017) previously used for neural machine translation. Let ru←v denote relevance of neuron u for neuron v. Definition 1 Given a neuron u, its incoming neuron set IN(u) comprises all its direct connected preceding neurons in the network. Definition 2 Given a neuron u, its outcoming neuron set OUT(u) comprises all its direct connected descendant neurons in the network. 
Definition 3 Given a neuron v and its incoming neurons u ∈IN(v), the weight ratio measures the contribution of u to v. It is calculated as wu→v = Wu,vu P u′∈IN(v) Wu′,vu′ if v = X u′∈IN(v) Wu′,vu′, wu→v = u P u′∈IN(v) u′ if v = Y u′∈IN(v) u′. These equations define weight ratio for matrix multiplication and element-wise multiplication operations. Redistribution rule for LRP Relevance is propagated using the local redistribution rule as follows: ru←v = X z∈OUT(u) wu→zrz←v. The provided equations for computing weights ratio and the redistribution rule allow to compute the relative contribution of neurons at one point in a network to neurons at another. Note that we follow Ding et al. (2017) and ignore non-linear activation functions. 5808 A.3 Head relevance In our experiments, we compute relative contribution of each head to the network predictions. For this, we evaluate contribution of neurons in headi (see equation 1) to the top-1 logit predicted by the model. Head relevance for a given prediction is computed as the sum of relevances of its neurons, normalized over heads in a layer. The final relevance of a head is its average relevance, where average is taken over all generation steps for a development set. B Experimental setup B.1 Data preprocessing Sentences were encoded using byte-pair encoding (Sennrich et al., 2016), with source and target vocabularies of about 32000 tokens. For OpenSubtitles data, we pick only sentence pairs with a relative time overlap of subtitle frames between source and target language subtitles of at least 0.9 to reduce noise in the data. Translation pairs were batched together by approximate sequence length. Each training batch contained a set of translation pairs containing approximately 1600010 source tokens. It has been shown that Transformer’s performance depends heavily on a batch size (Popel and Bojar, 2018), and we chose a large value of batch size to ensure that models show their best performance. B.2 Model parameters We follow the setup of Transformer base model (Vaswani et al., 2017). More precisely, the number of layers in the encoder and in the decoder is N = 6. We employ h = 8 parallel attention layers, or heads. The dimensionality of input and output is dmodel = 512, and the inner-layer of a feedforward networks has dimensionality dff = 2048. We use regularization as described in (Vaswani et al., 2017). B.3 Optimizer The optimizer we use is the same as in (Vaswani et al., 2017). We use the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98 and ε = 10−9. We vary the learning rate over the course of 10This can be reached by using several of GPUs or by accumulating the gradients for several batches and then making an update. training, according to the formula: lrate = scale · min(step_num−0.5, step_num · warmup_steps−1.5) We use warmup_steps = 16000, scale = 4.
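Written out as code, the schedule above is simply the following; this is a sketch of the formula with the constants from this section, not the authors' implementation.

def learning_rate(step_num, warmup_steps=16000, scale=4.0):
    # lrate = scale * min(step_num^(-0.5), step_num * warmup_steps^(-1.5))
    step_num = max(step_num, 1)  # guard against step 0
    return scale * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)

With these constants the rate grows roughly linearly over the first warmup_steps updates and then decays with the inverse square root of the step number.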
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5809–5815 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5809 Better OOV Translation with Bilingual Terminology Mining Matthias Huck, Viktor Hangya and Alexander Fraser Center for Information and Language Processing LMU Munich, Germany {mhuck, hangyav, fraser}@cis.lmu.de Abstract Unseen words, also called out-of-vocabulary words (OOVs), are difficult for machine translation. In neural machine translation, byte-pair encoding can be used to represent OOVs, but they are still often incorrectly translated. We improve the translation of OOVs in NMT using easy-to-obtain monolingual data. We look for OOVs in the text to be translated and translate them using simple-to-construct bilingual word embeddings (BWEs). In our MT experiments we take the 5−best candidates, which is motivated by intrinsic mining experiments. Using all five of the proposed target language words as queries we mine target-language sentences. We then back-translate, forcing the back-translation of each of the five proposed target-language OOV-translation-candidates to be the original source-language OOV. We show that by using this synthetic data to finetune our system the translation of OOVs can be dramatically improved. In our experiments we use a system trained on Europarl and mine sentences containing medical terms from monolingual data. 1 Introduction Neural machine translation (NMT) systems achieved a breakthrough in translation quality recently, by learning an end-to-end system (Sutskever et al., 2014; Bahdanau et al., 2015). However, NMT systems have low quality when translating out-of-vocabulary words (OOVs), especially because they have a fixed modest sized vocabulary due to memory limitations. By splitting words into subword units the problem of representing OOVs can be solved (Sennrich et al., 2016b) but their translation is still problematic because by definition source-side OOVs were not seen in the training parallel data together with their translations. In this work, we evaluate a simple approach for improving the translation of OOVs using bilingual word embeddings (BWEs), which we hope will trigger more research on this interesting problem. In previous approaches, to include words in the target sentence for which the translation is unknown the token unk is often used which can be handled by later steps. In many cases, such as named entities, it is possible to just copy the source token to the target side instead of translating it. Gulcehre et al. (2016) proposed a pointer network based (Vinyals et al., 2015) system which can learn when to translate and when to copy. On the other hand, it is not possible to always copy when the translation is unknown. If the alignment of the unk tokens to the source are known it is possible to translate source words using a large dictionary as a post-processing step. Although NMT systems do not rely on word alignments explicitly, it is possible to learn and output word alignments (Luong et al., 2015). It is also possible to use lexically-constrained decoders (Post and Vilar, 2018; Hasler et al., 2018) in order to force the network to output certain words or sequences. This way alignments are not needed and the system can decide the position of the constraints in the output. 
The disadvantage of the above methods is that the translation of words needed to be decided either as a pre- or post-processing step without the context which makes the translation of some words, such as polysemous words, difficult. In addition, lexically-constrained decoders require the target words to be observed in context at training time, or they will usually not be placed properly. In contrast, we fine-tune NMT systems for better translation of problematic words on the sentence level and are thus able to exploit the context instead of handling the problem on the word level. In our approach, we rely on bilingual word embeddings (BWEs) which can be built using large 5810 monolingual data and a cheap bilingual signal. BWEs can easily cover a very large vocabulary. Given the sentences to translate we look for source language words not included in the parallel training set of our MT system (OOVs). We translate OOVs using BWE based dictionaries taking nbest candidates as opposed to previous work (e.g., (Luong et al., 2015)) where only the best translation is used during post-processing. In our experiments we take the 5−best predictions of our BWEs, and retrieve sentences containing these target-language predictions from a monolingual corpus. As was shown before, NMT systems can be quickly and effectively fine-tuned using just a few sentences (Farajian et al., 2017, 2018; Wuebker et al., 2018). Based on the 5−best translations of OOVs we mine sentences from target language monolingual data and generate a synthetic parallel corpus using back-translation (Sennrich et al., 2016a). We force the source-language translation of each OOV-translation-candidate to be the original OOV. We show that by using this synthetic data to fine-tune our system the translation of unseen words can be dramatically improved, despite the presence of wrong translations of each OOV in the synthetic data. We test our system on the translation of English medical terms to German and show significant improvements using our approach. In this paper, we study a domain adaptation task in order to show the advantages clearly, but our approach does not focus on this domain adaptation and it can also be directly applied generally with no modification (e.g., to an in-domain task). 2 Approach In order to fine-tune an NMT system we aim to generate a synthetic parallel corpus containing the translations of source OOVs on the target side. Our approach relies on a dictionary containing source-target word translations. We mine target language sentences using the n−best translations of OOVs from topic specific monolingual data. We back-translate these sentences and run a (finetuning) training step of the NMT system on the generated corpus. Even though many word translation candidates in the dictionary are incorrect, we show in our experiments that the NMT system can effectively filter out the noise in the synthetic corpus using the context. 2.1 Word Translation To translate source language words we use a combination of BWE based cosine and orthographic similarity. BWEs represent source and target language words in a joint space and can be built by training monolingual spaces and projecting them to the common space. Initially, a small seed lexicon was used as the bilingual signal to learn a linear mapping (Mikolov et al., 2013) which was further improved by applying orthogonal transformations only (Xing et al., 2015). 
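For illustration, the orthogonal variant of such a mapping has a closed-form solution (orthogonal Procrustes) given a seed lexicon. The sketch below is background for this supervised case and is not the unsupervised MUSE procedure used later in this paper; X_src and Y_tgt are assumed to hold the embeddings of the lexicon's word pairs in corresponding rows.

import numpy as np

def orthogonal_mapping(X_src, Y_tgt):
    # X_src, Y_tgt: arrays of shape (num_pairs, dim); row i of X_src is the
    # source embedding whose translation has the embedding in row i of Y_tgt.
    # The orthogonal W minimizing ||X_src @ W - Y_tgt||_F is U @ Vt,
    # where U, S, Vt is the SVD of X_src.T @ Y_tgt.
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt  # map a source vector v into the target space as v @ W

Applied to all source embeddings, this places both vocabularies in a shared space in which nearest neighbours can be read off as translation candidates.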
Recently, various techniques were developed to build BWEs without any bilingual signal (Conneau et al., 2018; Artetxe et al., 2018). In the work of Conneau et al. (2018) adversarial training is employed to generate an initial seed lexicon of frequent words which is then used for orthogonal mapping. Even though BWEs in general are of good quality the translation of various words types, such as named entities and rare words, could be further improved by using orthographic similarity (Braune et al., 2018; Riley and Gildea, 2018; Artetxe et al., 2019). Similarly to (Braune et al., 2018), we combine the BWE based cosine and orthographic similarity of word pairs to get the translations of source words. We generate a dictionary of source-target word pairs by taking the top n most similar target words for each source using both similarity measures. We define orthographic similarity as one minus normalized Levenshtein distance. Since orthographic similarity of close words are higher than their cosine, we weight the former with 0.2 (we found this value to work well on a different task and did not tune it further). To build monolingual embeddings we use fastText’s skipgram model (Bojanowski et al., 2017) with dimension size 300 and minimum word frequency 3. For building unsupervised BWEs we use MUSE as the implementation of (Conneau et al., 2018). Note that we use unsupervised BWEs due to their good performance on the En-De language pair (see (Conneau et al., 2018)). But acquiring a small lexicon including frequent words is cheap for language pairs where unsupervised mapping has a lower performance than supervised mapping, and could be considered in future work. 2.2 NMT Fine-Tuning We mine target language sentences from a monolingual corpus which contains the translations of source OOVs. Since the source sentences needed 5811 UFAL UFAL+orth EU+UFAL EU+UFAL+orth n P@n R@n F1@n P@n R@n F1@n P@n R@n F1@n P@n R@n F1@n 1 58.19 13.58 22.02 58.13 25.28 35.24 68.65 37.56 48.55 69.59 41.87 52.28 5 44.46 26.10 32.89 50.05 43.82 46.73 54.33 48.46 51.22 51.13 51.71 51.41 10 35.80 29.84 32.55 41.04 47.64 44.09 42.94 53.41 47.61 44.45 56.34 49.70 20 29.54 33.58 31.43 34.43 50.16 40.83 36.42 58.78 44.98 37.42 61.30 46.47 Table 1: Quality of the mining procedure using different sizes of n−best translations. We use only sentences from UFAL or both EU and UFAL to build BWEs. We compare cosine only and cosine combined with orthography. to be translated are available before running the decoder, it is possible to get a list of OOVs from them by using the word vocabulary of the parallel training data. We translate the OOVs of our development and test data using the dictionaries described above by taking n−best translations. We present experiments with different n values in our intrinsic experiments. These source words tend to be noisy, especially in the medical domain, thus we apply a filtering step by ignoring those words containing non-letter characters as more than one third of their characters. In addition, we also filter out translations that are stopwords. We then use the set of target language words to mine all sentences that contain any of them from the monolingual data. We filter out sentences longer than 50 tokens, since they tend to be listings of medical terms, and back-translate the rest to generate synthetic parallel data. We force the back-translation of each of the proposed target-language OOVtranslation-candidates to be the original sourcelanguage OOV. 
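Putting the pieces together, a minimal sketch of the OOV detection and the combined-similarity lookup might look as follows. The normalization of the Levenshtein distance by the longer word and the dictionary data structures are assumptions; the 0.2 weight, the 1 − normalized-Levenshtein definition, and the n-best retrieval follow the text.

import numpy as np

def find_oovs(test_sentences, train_vocab):
    # OOVs are word types of the text to be translated that never occur
    # in the parallel training data.
    return {w for sent in test_sentences for w in sent if w not in train_vocab}

def levenshtein(a, b):
    # standard edit distance via dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def combined_similarity(src_word, tgt_word, src_vec, tgt_vec, orth_weight=0.2):
    cos = float(src_vec @ tgt_vec) / (np.linalg.norm(src_vec) * np.linalg.norm(tgt_vec))
    orth = 1.0 - levenshtein(src_word, tgt_word) / max(len(src_word), len(tgt_word), 1)
    return cos + orth_weight * orth  # orthographic similarity is down-weighted by 0.2

def n_best_translations(oov, src_emb, tgt_emb, n=5):
    scores = {t: combined_similarity(oov, t, src_emb[oov], v) for t, v in tgt_emb.items()}
    return sorted(scores, key=scores.get, reverse=True)[:n]

In practice the filtering steps described above (words with many non-letter characters, stopword translations) would be applied on top of this.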
In our experiments we use an encoder-decoder NMT system (Sennrich et al., 2017) with attention, 500 dimensional embedding layer, 1024 dimensional GRU layer and we use Adam with a learning rate of 0.0001 to train the network. We apply word segmentation with BPE using 50K merge operations to the English text, and a linguistically informed pipeline to the target-side German text (Huck et al., 2017b). It is important to understand that OOVs for us are words, and we handle both the dictionary based OOV translation and sentence mining on the word level. BPEs are only used when using NMT to translate. We train two systems, one each for the forward and backward directions. We describe the used data in Section 3. During back-translation we force the OOV-translation-candidates to be back-translated to the original source-language OOV by changing the OOV-translation-candidate to a special token on the target side before translation and then substituting the special token in the source-language back-translated output with the original OOV. This way, we make sure the MT system sees the OOV and each of its OOV-translation-candidates in the correct target-language context for the particular OOV-translation-candidate being considered. Finally, to improve the OOV translation of the forward system, we fine-tune it on the generated parallel data. We run only one training step over the whole synthetic corpus similarly to (Farajian et al., 2018), which makes the system learn newly seen words while not overwriting important knowledge previously learned from the truly parallel data the system was originally trained on. Since we mine target sentences based on multiple OOV-translation-candidates for each given OOV the system is tuned on different translations and their relevant contexts. This helps the network to correctly translate polysemous words, because the input context (which often disambiguates a polysemous word) will usually be most similar to the target-language context of the correct OOV-translation-candidate. Furthermore, this also makes our approach robust against incorrect OOV-translation-candidates in the used dictionary, since they are often used in very different contexts compared to the context of the source OOV we are translating. 3 Experiments We translate medical English sentences to German. To train the baseline NMT system we used the Europarl v7 (EU) parallel dataset containing 1.9M sentence pairs (Koehn, 2005). As medical data, we took 3.1M sentences from titles of medical Wikipedia articles, medical termpairs, patents and documents from the European Medicines Agency which are part of the UFAL Medical Corpus (UFAL). Since the corpus is parallel, we split it and used even sentences for English and odd ones for German. We built BWEs not only on the monolingual medical data but on 5812 Acc1 Acc5 freq (Braune et al., 2018) 38.6 47.4 EU+UFAL+orth 25.9 40.6 rare (Braune et al., 2018) 26.3 28.2 EU+UFAL+orth 17.5 28.8 Table 2: Medical bilingual lexicon induction results showing the quality of the BWE based dictionaries using 1-best and 5-best translations. the concatenation of all Europarl data and the monolingual medical data to improve the quality of BWEs (Hangya et al., 2018). We only mined sentences from the monolingual medical German corpus. The testing of our approach was done on the medical Health In My Language (HimL) corpora (Haddow et al., 2017) containing 1.9K sentence pairs in both development and test sets. 
All corpora were tokenized and truecased using Moses scripts (Koehn et al., 2007). We ran two sets of experiments. First we show the translation quality of our dictionaries by looking at the OOVs and their translations using HimL development data. Then we show translation quality improvements on the HimL test data. 3.1 OOV Translation The quality of our proposed method is highly dependent on that of the used dictionaries, since in order to mine useful sentences OOVs first needed to be translated correctly. Since we lack the gold translations of the OOVs, we measure the quality of the mined target language sentences using parallel data by following the approach presented for the fine-tuning of the NMT system. We translate source OOVs from the HimL development data using the n−best translations resulting a set of target language words. We mine sentences from the target side containing any of these translations. For each mined sentence we check if its source side pair contains the corresponding OOV, meaning that the correct translation of the OOV was contained by translation candidates, or not which means that the sentence was mined due to the translation of a different OOV. In addition, we also measure the number of missed sentences, i.e., in case a source sentence contains an OOV but its target reference was not mined due to no correct translation of the OOV in the candidates. We show precision, recall and F1 scores indicating how precisely would our system mine sentences from the target side for the OOVs and the ratio of Cochrane NHS24 baseline 22.4 20.2 copy 23.4 20.5 fine-tuned 27.2 22.5 Table 3: BLEU scores on the HimL test sets comparing the baseline systems and our OOV specific fine-tuning. OOVs covered. We use dictionaries with different number of n−best translations built using only the medical sentences of UFAL or both Europarl and medical sentences in case of EU+UFAL. We also compare dictionaries using only cosine similarity with combined cosine and orthography (+orth). We present results in Table 1. By comparing dictionaries it can be seen that by using the additional EU data to build embeddings the translation performance could be improved. As it was shown in (Hangya et al., 2018) as well, the use of additional general knowledge monolingual embeddings have higher quality. In addition, although the parallelism in the EU data is not exploited explicitly, it effects mapping due to higher monolingual space isomorphism (Søgaard et al., 2018). Using orthographic similarity in addition to cosine further improves quality since a lot of medical terms have similar surface forms across languages. The precision using the most similar translation of OOVs indicates good dictionary quality for all setups. On the other hand, it misses a lot of OOVs. By increasing translation candidates recall could be improved to the detriment of precision. Looking at F1 scores we found that 5−best translations gives best results 3 out of 4 times, thus we chose this value for the MT experiments. We also compare the quality of our best dictionary (EU+UFAL+orth) to previous work by running bilingual lexicon induction using the test lexicons of Braune et al. (2018) containing frequent and rare medical words respectively. Accuracies of 1-best and 5-best translations in Table 2 show comparable word translation quality to previous work, although we do not employ any task specific steps in contrast to Braune et al. (2018). 
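For reference, the Acc@1 and Acc@5 numbers in Table 2 correspond to a simple hit-rate computation over the test lexicons; the sketch below assumes each test-lexicon entry maps a source word to a set of acceptable translations.

def accuracy_at_n(test_lexicon, dictionary, n):
    # test_lexicon: source word -> set of gold target translations
    # dictionary:   source word -> ranked list of predicted translations
    # Source words absent from the dictionary are skipped (see the note below).
    covered = [w for w in test_lexicon if w in dictionary]
    hits = sum(bool(set(dictionary[w][:n]) & test_lexicon[w]) for w in covered)
    return hits / len(covered) if covered else 0.0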
Note that our dictionary does not contain some of the rare words of the test lexicons which we ignore during evaluation. 3.2 Machine Translation We present the improvements of our approach in terms of translation quality in the following. As the baseline, we used the English to German NMT 5813 source regular nosebleeds reference regelmäßige Nasenbluten baseline Regelmäßige Misskredite (discredits) fine-tuned Regelmäßige Nasenbluten source dizziness or lightheadedness reference Schwindel oder Benommenheit baseline Schwindelerregend (dizzying) oder zurückhaltend (reluctant). fine-tuned Schwindel oder Schwächegefühl (feeling of faintness) source A coronary angioplasty may not be technically possible [. . . ] reference Eine Koronarangioplastie ist wahrscheinlich technisch nicht möglich [. . . ] baseline Ein Herzinfarkt (heart attack) ist vielleicht technisch nicht möglich [. . . ] fine-tuned Eine koronare Angioplastie ist möglicherweise nicht technisch möglich [. . . ] source Four different alpha blockers were tested (alfuzosin, tamsulosin, doxazosin and silodosin). reference Vier verschiedene Alphablocker wurden getestet (Alfuzosin, Tamsulosin, Doxazosin und Silodosin). baseline Vier verschiedene Alphablocker wurden getestet (alfuzos, tasuloin, doxasa und silodosin). fine-tuned Vier unterschiedliche Alphablocker wurden untersucht (Alfuzosin, Tamsulosin, Doxazosin und Tigecyclin). Table 4: Example translations comparing the baseline with our fine-tuned model. OOVs and their translations are highlighted in bold. For convenience, we provide the English meaning of a selected set of German translations (small font in parentheses). system detailed earlier without fine-tuning, i.e., trained only on Europarl data. We also compare our system to an approach which simply copies source OOVs to the target side. Similarly to our back-translation approach, we change OOVs to a special token on the source side before translation which we substitute with the original OOV on the target side. If multiple OOVs appear in a sentence we use the order as they appear on the source side. Based on the experiments in the previous section, we used the EU+UFAL+orth dictionary with 5−best translations resulting in 95K mined target sentences from the monolingual corpus. We present case-sensitive BLEU scores calculated with the mteval-v13a.pl script from the Moses toolkit on the two parts of HimL test set separately: Cochrane and NHS24. Results are in Table 3. The performance of the baseline system is poor on both parts of the test set due to the many OOVs in the source sentences which were not seen in the parallel Europarl. The system is also out of domain which causes an additional detriment. (Cf. Huck et al. (2017a, 2018) for descriptions of state-of-the-art health domain translation systems that are trained on large indomain parallel data.) A simple source-to-target OOV token copying strategy improves over the baseline, but not by a large margin. The finetuned system, by contrast, performs considerably better, achieving an increase of +4.8 and +2.3 BLEU points on Cochrane and NHS24, respectively. By looking at examples (Table 4) we see that, on top of the domain adaptation effect of the back-translated data, the translation of OOVs is improved, especially of medical terminology, showing the effectiveness of the approach. 4 Conclusions Although OOVs can be represented in NMT systems, translation is difficult. In this paper we proposed a method for better translation of OOVs. 
Our approach relies on bilingual word embeddings based dictionaries which are simple to construct but cover a large vocabulary. We mine targetlanguage sentences containing the 5−best translations of OOVs according to our BWEs. We then back-translate. Using this noisy synthetic parallel data we fine-tune the initial NMT system. We showed the performance of our approach on the translation of medical terms using a system trained on Europarl parallel data. Our results showed that having both source OOVs and their translations in the sentence pairs results in improvements in BLEU. Our method of term mining followed by back-translation and fine-tuning can easily be applied to any NMT task including non-domainadaptation tasks. Acknowledgments We would like to thank the anonymous reviewers for their valuable input. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement №640550). 5814 References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proc. ACL. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An Effective Approach to Unsupervised Machine Translation. CoRR, abs/1902.01313. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation By Jointly Learning To Align and Translate. In Proc. ICLR. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5. Fabienne Braune, Viktor Hangya, Tobias Eder, and Alexander Fraser. 2018. Evaluating bilingual word embeddings on the long tail. In Proc. NAACL-HLT. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word Translation Without Parallel Data. In Proc. ICLR. M. Amin Farajian, Nicola Bertoldi, Matteo Negri, Marco Turchi, Marcello Federico, and Fondazione Bruno Kessler. 2018. Evaluation of Terminology Translation in Instance-Based Neural MT Adaptation. In Proc. EAMT. M. Amin Farajian, Marco Turchi, Matteo Negri, Marcello Federico, and Fondazione Bruno Kessler. 2017. Multi-Domain Neural Machine Translation through Unsupervised Adaptation. In Proc. WMT. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the Unknown Words. In Proc. ACL. Barry Haddow, Alexandra Birch, Ondrej Bojar, Fabienne Braune, Colin Davenport, Alexander Fraser, Matthias Huck, Michal Kaspar, Kvetoslava Kovaríková, Josef Plch, Anita Ramm, Juliane Ried, James Sheary, Ales Tamchyna, Dusan Varis, Marion Weller, and Phil Williams. 2017. HimL: Health in my Language. In Proc. EAMT. Viktor Hangya, Fabienne Braune, Alexander Fraser, and Hinrich Schütze. 2018. Two Methods for Domain Adaptation of Bilingual Tasks : Delightfully Simple and Broadly Applicable. In Proc. ACL. Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural Machine Translation Decoding with Terminology Constraints. In Proc. NAACL-HLT. Matthias Huck, Fabienne Braune, and Alexander Fraser. 2017a. LMU Munich’s Neural Machine Translation Systems for News Articles and Health Information Texts. In Proc. WMT. Matthias Huck, Simon Riess, and Alexander Fraser. 2017b. Target-side Word Segmentation Strategies for Neural Machine Translation. In Proc. WMT. Matthias Huck, Dario Stojanovski, Viktor Hangya, and Alexander Fraser. 2018. 
LMU Munich’s Neural Machine Translation Systems at WMT 2018. In Proc. WMT. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proc. MT Summit. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcelo Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL: Interactive Poster and Demonstration Sessions. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the Rare Word Problem in Neural Machine Translation. In Proc. ACL. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Matt Post and David Vilar. 2018. Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation. In Proc. NAACLHLT. Parker Riley and Daniel Gildea. 2018. Orthographic Features for Bilingual Lexicon Induction. In Proc. ACL. Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a Toolkit for Neural Machine Translation. In Proc. EACL, Software Demonstrations. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation Models with Monolingual Data. In Proc. ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proc. ACL. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the Limitations of Unsupervised Bilingual Dictionary Induction. In Proc. ACL. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proc. NIPS. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proc. NIPS. 5815 Joern Wuebker, Patrick Simianer, and John DeNero. 2018. Compact Personalized Models for Neural Machine Translation. In Proc. EMNLP. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation. In Proc. NAACL-HLT.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5816–5822 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5816 Simultaneous Translation with Flexible Policy via Restricted Imitation Learning Baigong Zheng 1,∗Renjie Zheng 2,∗Mingbo Ma 1,∗Liang Huang 1,2 1Baidu Research, Sunnyvale, CA, USA 2Oregon State University, Corvallis, OR, USA {baigongzheng, mingboma}@baidu.com [email protected] Abstract Simultaneous translation is widely useful but remains one of the most difficult tasks in NLP. Previous work either uses fixed-latency policies, or train a complicated two-staged model using reinforcement learning. We propose a much simpler single model that adds a “delay” token to the target vocabulary, and design a restricted dynamic oracle to greatly simplify training. Experiments on Chinese↔English simultaneous translation show that our work leads to flexible policies that achieve better BLEU scores and lower latencies compared to both fixed and RL-learned policies. 1 Introduction Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios such as international conferences, summits, and negotiations. However, it is widely considered one of the most challenging tasks in NLP, and one of the holy grails of AI (Grissom II et al., 2014). A major challenge in simultaneous translation is the word order difference between the source and target languages, e.g., between SOV languages (German, Japanese, etc.) and SVO languages (English, Chinese, etc.). Simultaneous translation is previously studied as a part of real-time speech recognition system (Yarmohammadi et al., 2013; Bangalore et al., 2012; F¨ugen et al., 2007; Sridhar et al., 2013; Jaitly et al., 2016; Graves et al., 2013). Recently, there have been two encouraging efforts in this problem with promising but limited success. Gu et al. (2017) propose a complicated two-stage model that is also trained in two stages. The base model, responsible for producing target words, is a conventional full-sentence seq2seq model, and on top of that, the READ/WRITE (R/W) model decides, at every step, whether to wait for another source word (READ) or to emit a target word ∗These authors contributed equally. Chinese 我得到 有关 方面 的 回应 pinyin wˇo d´ed`ao yˇougu¯an f¯angmi`an de hu´ıy`ıng gloss I receive relevant party ’s response wait-1 policy I received thanks from relevant parties wait-5 policy I received responses from relevant parties adaptive policy I received responses from relevant parties Table 1: A Chinese-to-English translation example. Wait-1 policy makes a mistake on guessing thanks from while wait-5 policy has high latency. The adaptive policy can wait for more information to avoid guesses while maintaining low latency. (WRITE) using the pretrained base model. This R/W model is trained by reinforcement learning (RL) method without updating the base model. Ma et al. (2018), on the other hand, propose a much simpler architecture, which only need one model and can be trained with end-to-end local training method. However, their model follows a fixed-latency policy, which inevitably needs to guess future content during translation. Table 1 gives an example which is difficult for the fixedlatency (wait-k) policy but easy for adaptive policy. We aim to combine the merits of both efforts, that is, we design a single model end-toend trained from scratch to perform simultaneous translation, as with Ma et al. 
(2018), which can decide on the fly whether to wait or translate as in Gu et al. (2017). There are two key ideas to achieve this: the first is to add a "delay" token (similar to the READ action in Gu et al. (2017), the empty token in Press and Smith (2018), and the 'blank' unit in Connectionist Temporal Classification (CTC) (Graves et al., 2006)) to the target-side vocabulary, so that whenever the model emits this delay token, it reads one more source word; the second is to train the model using (restricted) imitation learning by designing a (restricted) dynamic oracle as the expert policy. Table 2 summarizes different approaches for simultaneous translation using neural machine translation (NMT) models.

Table 2: Different approaches for simultaneous translation.

                  seq-to-seq                               prefix-to-prefix
fixed policy      static Read-Write (Dalvi et al., 2018);  wait-k (Ma et al., 2018)
                  test-time wait-k (Ma et al., 2018)
adaptive policy   RL (Gu et al., 2017)                     imitation learning (this work)

2 Preliminaries

Let x = (x_1, ..., x_n) be a sequence of words. For an integer 0 ≤ i ≤ n, we denote the sequence consisting of the first i − 1 words of x by x_{<i} = (x_1, ..., x_{i−1}). We say such a sequence x_{<i} is a prefix of x, and write s ⪯ x if the sequence s is a prefix of x.

Conventional Machine Translation. Given a sequence x from the source language, a conventional machine translation model predicts the probability distribution of the next target word y_j at the j-th step, conditioned on the full source sequence x and the previously generated target words y_{<j}, that is, p(y_j | x, y_{<j}). The probability of the whole sequence y generated by the model is

    p(y | x) = \prod_{j=1}^{|y|} p(y_j | x, y_{<j}).

To train such a model, we maximize the probability of the ground-truth target sequence conditioned on the corresponding source sequence in a parallel dataset D, which is equivalent to minimizing the following loss:

    \ell(D) = - \sum_{(x,y) \in D} \log p(y | x).    (1)

In this work, we use the Transformer (Vaswani et al., 2017) as our NMT model, which consists of an encoder and a decoder. The encoder works in a self-attention fashion and maps a sequence of words to a sequence of continuous representations. The decoder performs attention over the previously predicted words and the output of the encoder to generate the next prediction. Both encoder and decoder take as input the sum of a word embedding and its corresponding positional embedding.

Prefix-to-Prefix Framework. Previous work (Gu et al., 2017; Dalvi et al., 2018) uses seq2seq models for simultaneous translation; these models are trained with full sentence pairs but need to predict target words based on partial source sentences. Ma et al. (2018) proposed a prefix-to-prefix training framework to resolve this mismatch. The key idea of this framework is to train the model to predict the next target word conditioned on the partial source sequence the model has seen, instead of the full source sequence. As a simple example in this framework, Ma et al. (2018) presented a class of policies, called wait-k policies, that can be applied with local training in the prefix-to-prefix framework. For a positive integer k, the wait-k policy waits for the first k source words and then alternates between generating a target word and receiving a new source word; once there are no more source words, the problem becomes the same as full-sentence translation. The probability of the j-th word is p_k(y_j | x_{<j+k}, y_{<j}), and the probability of the whole predicted sequence is

    p_k(y | x) = \prod_{j=1}^{|y|} p_k(y_j | x_{<j+k}, y_{<j}).
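As a concrete illustration, the read/write schedule induced by a wait-k policy can be generated as below. This is a sketch: in practice the target length is not known in advance, and decoding stops when the model emits the end-of-sentence token.

def wait_k_schedule(k, src_len, tgt_len):
    # Before emitting the j-th target word the policy must have read
    # min(src_len, j - 1 + k) source words; otherwise it reads another word.
    actions, read, written = [], 0, 0
    while written < tgt_len:
        if read < min(src_len, written + k):
            actions.append("READ")
            read += 1
        else:
            actions.append("WRITE")
            written += 1
    return actions

# wait_k_schedule(3, 5, 5) ->
# ['READ', 'READ', 'READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE', 'WRITE', 'WRITE']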
3 Model To obtain a flexible and adaptive policy, we need our model to be able to take both READ and WRITE actions. Conventional translation model already has the ability to write target words, so we introduce a “delay” token ⟨ε⟩in target vocabulary to enable our model to apply the READ action. Formally, for the target vocabulary V , we define an extended vocabulary V+ = V ∪{⟨ε⟩}. (2) Each word in this set can be an action, which is applied with a transition function δ on a sequence pair (s, t) for a given source sequence x where s ⪯x. We assume ⟨ε⟩cannot be applied with the sequence pair (s, t) if s = x, then we have the transition function δ as follows, δ((s, t), a) = (s ◦x|s|+1, t) if a = ⟨ε⟩ (s, t ◦a) otherwise where s ◦x represents concatenating a sequence s and a word x. Based on this transition function, our model can do simultaneous translation as follows. Given the currently available source sequence, our model continues predicting next target word until it predicts a delay token. Then it will read a new source word, and continue prediction. Since we use Transformer model, the whole available source sequence needs to be encoded again when reading in a new source word, but the predicted target sequence will not be changed. Note that the predicted delay tokens do not provide any semantic information, but may introduce 5818 Target Source Target Source aggressive bound 𝛼 conserv. bound 𝛽 Figure 1: Illustration of our proposed dynamic oracle on a prefix grid. The blue right arrow represents choosing next ground-truth target word, and the red downward arrow represents choosing the delay token. The left figure shows a simple dynamic oracle without delay constraint. The right figure shows the dynamic oracle with delay constraints. some noise in attention layer during the translation process. So we propose to remove those delay token in the attention layers except for the current input one. However, this removal may reduce the explicit latency information which will affect the predictions of the model since the model cannot observe previous output delay tokens. Therefore, to provide this information explicitly, we embed the number of previous delay tokens to a vector and add this to the sum of the word embedding and position embedding as the input of the decoder. 4 Methods 4.1 Training via Restricted Imitation Learning We first introduce a restricted dynamic oracle (Cross and Huang, 2016) based on our extended vocabulary. Then we show how to use this dynamic oracle to train a simultaneous translation model via imitation learning. Note that we do not need to train this oracle. Restricted Dynamic Oracle Given a pair of full sequences (x, y) in data, the input state of our restricted dynamic oracle will be a pair of prefixes (s, t) where s ⪯x, t ⪯y and (s, t) ̸= (x, y). The whole action set is V+ defined in the last section. The objective of our dynamic oracle is to obtain the full sequence pair (x, y) and maintain a reasonably low latency. For a prefix pair (s, t), the difference of the lengths of the two prefixes can be used to measure the latency of translation. So we would like to bound this difference as a latency constraint. This idea can be illustrated in the prefix grid (see Figure 1), where we can define a band region and always keep the translation process in this band. For simplicity, we first assume the two full sequences have the same lengths, i.e. |x| = |y|. Then we can bound the difference d = |s| −|t| by two constants: α < d < β. 
The conservative bound (β) guarantees relatively small difference and low latency; while the aggressive bound (α) guarantees there are not too many target words predicted before seeing enough source words. Formally, this dynamic oracle is defined as follows. π⋆ x,y,α,β(s, t) =    {⟨ε⟩} if s ̸= x and |s| −|t| ≤α {y|t|+1} if t ̸= y and |s| −|t| ≥β {⟨ε⟩, y|t|+1} otherwise By this definition, we know that this oracle can always find an action sequence to obtain (x, y). When the input state does not satisfy any latency constraint, then this dynamic oracle will provide only one action, applying which will improve the length difference. Note that this dynamic oracle is restricted in the sense that it is only defined on the prefix pair instead of any sequence pair. And since we only want to obtain the exact sequence from data, this oracle can only choose the next groundtruth target word other than ⟨ε⟩. In many cases, the assumption |x| = |y| does not hold. To overcome this limitation, we can utilize the length ratio γ = |x|/|y| to modify the length difference: d′ = |s| −γ|t|, and use this new difference d′ in our dynamic oracle. Although we cannot obtain this ratio during testing time, we may use the averaged length ratio obtained from training data (Huang et al., 2017). Training with Restricted Dynamic Oracle We apply imitation learning to train our translation model, using the proposed dynamic oracle as the expert policy. Recall that the prediction of our model depends on the whole generated prefix including ⟨ε⟩(as the input contains the embedding of the number of ⟨ε⟩), which is also an action sequence. If an action sequence a is obtained from our oracle, then applying this sequence will result in a prefix pair, say sa and ta, of x and y. Let p(a | sa, ta) be the probability of choosing action a given the prefix pair obtained by applying action sequence a. Then the averaged probability of choosing the oracle actions conditioned on the action sequence a will be 5819 f(a, π⋆ x,y,α,β) = P a∈π⋆ x,y,α,β(sa,ta) p(a | sa, ta) |π⋆ x,y,α,β(sa, ta)| . To train a model to learn from the dynamic oracle, we can sample from our oracle to obtain a set, say S(x, y), of action sequences for a sentence pair (x, y). The loss function for each sampled sequence a ∈S(x, y) will be ℓ(a|x, y) = − |a| P i=1 log f(a<i, π⋆ x,y,α,β). For a parallel text D, the training loss is ℓ(D) = P (x,y)∈D P a∈S(x,y) 1 |S(x,y)|ℓ(a|x, y). Directly optimizing the above loss may require too much computation resource since for each pair of (x, y), the size of S(x, y) (i.e. the number of different action sequences) can be exponentially large. To reduce the computation cost, we propose to use two special action sequences as our sample set so that our model can learn to do translation within the two latency constraints. Recall that the latency constraints of our dynamic oracle π⋆ x,y,α,β are defined by two bounds: α and β. For each bound, there is a unique action sequence, which corresponds to a path in the prefix grid, such that following it can generate the most number of prefix pairs that make this bound tight. Let aα (x,y) (aβ (x,y)) be such an action sequence for (x, y) and α (β). We replace S(x, y) with {aα (x,y), aβ (x,y)}, then the above loss for dataset D becomes ℓα,β(D) = P (x,y)∈D ℓ(aα (x,y)|x,y)+ℓ(aβ (x,y)|x,y) 2 . This is the loss we use in our training process. 
Note that there are some steps where our oracle will return two actions, so for such steps we will have a multi-label classification problem where labels are the actions from our oracle. In such cases, Sigmoid function for each action is more appropriate than the Softmax function for the actions will not compete each other (Ma et al., 2017; Zheng et al., 2018; Ma et al., 2019). Therefore, we apply Sigmoid for each action instead of using Softmax function to generate a distribution for all actions. 4.2 Decoding We observed that the model trained on the two special action sequences occasionally violates the latency constraints and visits states outside of the designated band in prefix grid. To avoid such case, we force the model to choose actions such that it will always satisfy the latency constraints. That is, if the model reaches the aggressive bound, it must choose a target word other than ⟨ε⟩with highest score, even if ⟨ε⟩has higher score; if the model reaches the conservative bound, it can only choose ⟨ε⟩at that step. We also apply a temperature constant et to the score of ⟨ε⟩, which can implicitly control the latency of our model without retraining it. This improves the flexibility of our trained model so that it can be used in different scenarios with different latency requirements. 5 Experiments To investigate the empirical performance of our proposed method, we conduct experiments on NIST corpus for Chinese-English. We use NIST 06 (616 sentence pairs) as our development set and NIST 08 (691 sentence pairs) as our testing set. We apply tokenization and byte-pair encoding (BPE) (Sennrich et al., 2015) on both source and target languages to reduce their vocabularies. For training data, we only include 1 million sentence pairs with length larger than 50. We use Transformer (Vaswani et al., 2017) as our NMT model, and our implementation is adapted from PyTorchbased OpenNMT (Klein et al., 2017). The architecture of our Transformer model is the same as the base model in the original paper. We use BLEU (Papineni et al., 2002) as the translation quality metric and Average Lagging (AL) introduced by Ma et al. (2018) as our latency metrics, which measures the average delayed words. AL avoids some limitations of other existing metrics, such as insensitivity to actual lagging like Consecutive Wait (CW) (Gu et al., 2017), and sensitivity to input length like Average Proportion (AP) (Cho and Esipova, 2016) . Results We tried three different pairs for α and β: (1, 5), (3, 5) and (3, 7), and summarize the results on testing sets in Figure 2. Figure 2 (a) shows the results on Chinese-to-English translation. In this direction, our model can always achieve higher BLEU scores with the same latency, compared with the wait-k models and RL models. We notice the model prefers conservative policy during decoding time when t = 0. So we apply negative values of t to encourage the model to choose actions other than ⟨ε⟩. This can effectively reduce latency without sacrificing much 5820 4 6 8 AL 22 24 26 28 4-ref BLEU 33 (a) Chinese-to-English 2 4 6 8 AL 10 12 14 1-ref BLEU 44 (b) English-to-Chinese Figure 2: Translation quality (BLEU) against latency (AL) on testing sets. Markers : wait-k models for k ∈ {1, 3, 5, 7}, +: RL with CW = 5, ×: RL with CW = 8, ⋆: full-sentence translation. Markers for our models are given in the right table. 
Training Decoding Policy α β wait-α wait-β t = −2 t = −0.5 t = 0 t = 4.5 t = 9 1 5 3 5 3 7 translation quality, implying that our model can implicitly control latency during testing time. Figure 2 (b) shows our results on English-toChinese translation. Since the English source sentences are always longer than the Chinese sentences, we utilize the length ratio γ = 1.25 (derived from the dev set) during training, which is the same as using “catchup” with frequency c = 0.25 introduced by Ma et al. (2018). Different from the other direction, models for this direction works better if the difference of α and β is bigger. Another difference is that our model prefers aggressive policy instead of conservative policy when t = 0. Thus, we apply positive values of t to encourage it to choose ⟨ε⟩, obtaining more conservative policies to improve translation quality. Example We provide an example from the development set of Chinese-to-English translation in Table 3 to compare the behaviours of different models. Our model is trained with α = 3, β = 7 and tested with t = 0. It shows that our model can wait for information “ ¯Oum´eng” to translates “eu”, while the wait-3 model is forced to guess this information and made a mistake on the wrong guess “us” before seeing “ ¯Oum´eng”. Ablation Study To analyze the effects of proposed techniques on the performance, we also provide an ablation study on those techniques for our model trained with α = 3 and β = 5 in Chineseto-English translation. The results are given in Table 4, and show that all the techniques are important to the final performance and using Sigmoid function is critical to learn adaptive policy. Model Decoding Policy Wait-3 Wait-5 t=0 BLEU AL BLEU AL BLEU AL Wait-3 29.32 4.60 Wait-5 30.97 6.30 keep ⟨ε⟩ in attention 29.55 4.50 30.68 6.49 30.74 6.53 no ⟨ε⟩number embedding 30.20 4.76 30.98 6.36 30.65 6.29 use Softmax instead of Sigmoid 29.23 5.11 31.46 6.79 29.99 4.79 Full 29.45 4.71 31.72 6.35 31.59 6.28 Table 4: Ablation study on Chinese-to-English development set with α = 3 and β = 5. 6 Conclusions We have presented a simple model that includes a delay token in the target vocabulary such that the model can apply both READ and WRITE actions during translation process without a explicit policy model. We also designed a restricted dynamic oracle for the simultaneous translation problem and provided a local training method utilizing this dynamic oracle. The model trained with this method can learn a flexible policy for simultaneous translation and achieve better translation quality and lower latency compared to previous methods. Chinese 一名不 愿 具名 的 欧盟 官员 指出 ... pinyin y`ım´ıngb´u y`uan j`um´ıng de ¯Oum´eng g¯uany´uan zhˇıch¯u gloss a not willing named ’s EU official point out ... wait-3 a us official who declined to be named said that ... our work a eu official , who declined to be named , pointed out ... Table 3: A Chinese-to-English development set example. Our model is trained with α = 3 and β = 7. 5821 References Srinivas Bangalore, Vivek Kumar Rangarajan Sridhar, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-tospeech translation of dialogs. In Proc. of NAACLHLT. Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? volume abs/1606.02012. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of EMNLP. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, and Stephan Vogel. 2018. 
Incremental decoding and training methods for simultaneous translation in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 493–499, New Orleans, Louisiana. Association for Computational Linguistics. Christian F¨ugen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine translation, 21(4):209–252. Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369–376. ACM. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE. Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum´e III. 2014. Don’t until the final verb wait: Reinforcement learning for simultaneous machine translation. In Proceedings of the 2014 Conference on empirical methods in natural language processing (EMNLP), pages 1342–1352. Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O. K. Li. 2017. Learning to translate in realtime with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 1053–1062. Liang Huang, Kai Zhao, and Mingbo Ma. 2017. When to finish? optimal beam search for neural text generation (modulo beam size). In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2134–2139. Navdeep Jaitly, David Sussillo, Quoc V Le, Oriol Vinyals, Ilya Sutskever, and Samy Bengio. 2016. An online sequence-to-sequence model using partial conditioning. In Advances in Neural Information Processing Systems, pages 5067–5075. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810. Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. 2017. Group sparse CNNs for question classification with answer sets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 335–340, Vancouver, Canada. Association for Computational Linguistics. Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2018. STACL: Simultaneous translation with integrated anticipation and controllable latency. arXiv preprint arXiv:1810.08398, to appear ACL 2019. Mingbo Ma, Renjie Zheng, and Liang Huang. 2019. Learning to stop in structured prediction for neural machine translation. arXiv preprint arXiv:1904.01032, to appear NAACL 2019. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318, Philadephia, USA. Ofir Press and Noah A. Smith. 2018. You may not need attention. arXiv preprint arXiv:1810.13409. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. 
Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengalvarayan. 2013. Segmentation strategies for streaming speech translation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 230–238. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30. Mahsa Yarmohammadi, Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. 5822 Renjie Zheng, Mingbo Ma, and Liang Huang. 2018. Multi-reference training with pseudo-references for neural translation and text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3188–3197.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5823–5828 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5823 Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation Xinyi Wang Language Technologies Institute Carnegie Mellon University [email protected] Graham Neubig Language Technologies Institute Carnegie Mellon University [email protected] Abstract To improve low-resource Neural Machine Translation (NMT) with multilingual corpora, training on the most related high-resource language only is often more effective than using all data available (Neubig and Hu, 2018). However, it is possible that an intelligent data selection strategy can further improve lowresource NMT with data from other auxiliary languages. In this paper, we seek to construct a sampling distribution over all multilingual data, so that it minimizes the training loss of the low-resource language. Based on this formulation, we propose an efficient algorithm, Target Conditioned Sampling (TCS), which first samples a target sentence, and then conditionally samples its source sentence. Experiments show that TCS brings significant gains of up to 2 BLEU on three of four languages we test, with minimal training overhead1. 1 Introduction Multilingual NMT has led to impressive gains in translation accuracy of low-resource languages (LRL) (Zoph et al., 2016; Firat et al., 2016; Gu et al., 2018; Neubig and Hu, 2018; Nguyen and Chiang, 2018). Many real world datasets provide sentences that are multi-parallel, with the same content in a variety of languages. Examples include TED (Qi et al., 2018), Europarl (Koehn, 2005), and many others (Tiedemann, 2012). These datasets open up the tantalizing prospect of training a system on many different languages to improve accuracy, but previous work has found methods that use only a single related (HRL) often out-perform systems trained on all available data (Neubig and Hu, 2018). In addition, because the resulting training corpus is smaller, using a single language is also substantially faster to 1The code can be found at https://github.com/ cindyxinyiwang/TCS. train, speeding experimental cycles (Neubig and Hu, 2018). In this paper, we go a step further and ask the question: can we design an intelligent data selection strategy that allows us to choose the most relevant multilingual data to further boost NMT performance and training speed for LRLs? Prior work has examined data selection from the view of domain adaptation, selecting good training data from out-of-domain text to improve indomain performance. In general, these methods select data that score above a preset threshold according to some metric, such as the difference between in-domain and out-of-domain language models (Axelrod et al., 2011; Moore and Lewis, 2010) or sentence embedding similarity (Wang et al., 2017). Other works use all the data but weight training instances by domain similarity (Chen et al., 2017), or sample subsets of training data at each epoch (van der Wees et al., 2017). However, none of these methods are trivially applicable to multilingual parallel datasets, which usually contain many different languages from the same domain. Moreover, most of these methods need to pretrain language models or NMT models with a reasonable amount of data, and accuracy can suffer in low-resource settings like those encountered for LRLs (Duh et al., 2013). 
In this paper, we create a mathematical framework for data selection in multilingual MT that selects data from all languages, such that minimizing the training objective over the sampled data approximately minimizes the loss of the LRL MT model. The formulation leads to an simple, efficient, and effective algorithm that first samples a target sentence and then conditionally samples which of several source sentences to use for training. We name the method Target Conditioned Sampling (TCS). We also propose and experiment with several design choices for TCS, which are especially effective for LRLs. On the TED multilin5824 gual corpus (Qi et al., 2018), TCS leads to large improvements of up to 2 BLEU on three of the four languages we test, and no degradation on the fourth, with only slightly increased training time. To our knowledge, this is the first successful application of data selection to multilingual NMT. 2 Method 2.1 Multilingual Training Objective First, in this section we introduce our problem formally, where we use the upper case letters X, Y to denote the random variables, and the corresponding lower case letters x, y to denote their actual values. Suppose our objective is to learn parameters θ of a translation model from a source language s into target language t. Let x be a source sentence from s, and y be the equivalent target sentence from t, given loss function L(x, y; θ) our objective is to find optimal parameters θ∗that minimize: Ex,y∼PS(X,Y )[L(x, y; θ)] (1) where Ps(X, Y ) is the data distribution of s-t parallel sentences. Unfortunately, we do not have enough data to accurately estimate θ∗, but instead we have a multilingual corpus of parallel data from languages {s1, S2, ..., Sn} all into t. Therefore, we resort to multilingual training to facilitate the learning of θ. Formally, we want to construct a distribution Q(X, Y ) with support over s1, s2, ..., sn-T to augment the s-t data with samples from Q during training. Intuitively, a good Q(X, Y ) will have an expected loss Ex,y∼Q(X,Y )[L(x, y; θ)] (2) that is correlated with Eqn 1 over the space of all θ, so that training over data sampled from Q(X, Y ) can facilitate the learning of θ. Next, we explain a version of Q(X, Y ) designed to promote efficient multilingual training. 2.2 Target Conditioned Sampling We argue that the optimal Q(X, Y ) should satisfy the following two properties. First, Q(X, Y ) and Ps(X, Y ) should be target invariant; the marginalized distributions Q(Y ) and Ps(Y ) should match as closely as possible: Q(Y ) ≈Ps(Y ) (3) This property ensures that Eqn 1 and Eqn 2 are optimizing towards the same target Y distribution. Second, to have Eqn 2 correlated with Eqn 1 over the space of all θ, we need Q(X, Y ) to be correlated with Ps(X, Y ), which can be loosely written as Q(X, Y ) ≈Ps(X, Y ). (4) Because we also make the target invariance assumption in Eqn 3, Q(X, Y ) Q(Y ) ≈Ps(X, Y ) Ps(Y ) (5) Q(X|Y ) ≈Ps(X|Y ). (6) We call this approximation of Ps(X|Y ) by Q(X|Y ) conditional source invariance. Based on these two assumptions, we define Target Conditioned Sampling (TCS), a training framework that first samples y ∼Q(Y ), and then conditionally samples x ∼Q(X|y) during training. Note Ps(X|Y = y) is the optimal back-translation distribution, which implies that back-translation (?) is a particular instance of TCS. Of course, we do not have enough s-t parallel data to obtain a good estimate of the true backtranslation distribution Ps(X|y) (otherwise, we can simply use that data to learn θ). 
However, we posit that even a small amount of data is sufficient to construct an adequate data selection policy Q(X|y) to sample the sentences x from multilingual data for training. Thus, the training objective that we optimize is Ey∼Q(Y )Ex∼Q(X|y) [L(x, y; θ)] (7) Next, in Section 2.3, we discuss the choices of Q(Y ) and Q(X|y). 2.3 Choosing the Sampling Distributions Choosing Q(Y ). Target invariance requires that we need Q(Y ) to match Ps(Y ), which is the distribution over the target of s-t. We have parallel data from multiple languages s1, s2, ..., sn, all into t. Assuming no systematic inter-language distribution differences, a uniform sample of a target sentence y from the multilingual data can approximate Ps(Y ). We thus only need to sample y uniformly from the union of all extra data. Choosing Q(X|y). Choosing Q(X|y) to approximate Ps(X|y) is more difficult, and there are a number of methods could be used to do so. To do so, we note that conditioning on the same target y and restricting the support of Ps(X|y) to the 5825 sentences that translate into y in at least one of sit, Ps(X = x|y) simply measures how likely x is in s. We thus define a heuristic function sim(x, s) that approximates the probability that x is a sentence in s, and follow the data augmentation objective in Wang et al. (2018) in defining this probability according to Q∗(x|y) = exp (sim(x, s)/τ) P x′ exp (sim(x′, s)/τ) (8) where is a temperature parameter that adjusts the peakiness of the distribution. 2.4 Algorithms The formulation of Q(X, Y ) allows one to sample multilingual data with the following algorithm: 1. Select the target y based on Q(y). In our case we can simply use the uniform distribution. 2. Given the target y, gather all data (xi, y) ∈ s1, s2, ...sn-t and calculate sim(xi, s) 3. Sample (xi, y) based on Q(X|y) The algorithm requires calculating Q(X|y) repeatedly during training. To reduce this overhead, we propose two strategies for implementation: 1) Stochastic: compute Q(X|y) before training starts, and dynamically sample each minibatch using the precomputed Q(X|y); 2) Deterministic: compute Q(X|y) before training starts and select x′ = argmaxx Q(x|y) for training. The deterministic method is equivalent to setting τ, the degree of diversity in Q(X|y), to be 0. 2.5 Similarity Measure In this section, we define two formulations of the similarity measure sim(s, x), which is essential for constructing Q(X|y). Each of the similarity measures can be calculated at two granularities: 1) language level, which means we calculate one similarity score for each language based on all of its training data; 2) sentence level, which means we calculate a similarity score for each sentence in the training data. Vocab Overlap provides a crude measure of surface form similarity between two languages. It is efficient to calculate, and is often quite effective, especially for low-resource languages. Here we use the number of character n-grams that two languages share to measure the similarity between the two languages. LRL Train Dev Test HRL Train aze 5.94k 671 903 tur 182k bel 4.51k 248 664 rus 208k glg 10.0k 682 1007 por 185k slk 61.5k 2271 2445 ces 103k Table 1: Statistics of our datasets. We can calculate the language-level similarity between Si and S simvocab-lang(si, s) = |vocabk(s) ∩vocabk(si)| k vocabk(·) represents the top k most frequent character n-grams in the training data of a language. Then we can assign the same language-level similarity to all the sentences in si. 
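A minimal Python sketch of this language-level overlap score and of the sampling step from Section 2.4 follows; the n-gram order, the counting details, and all function names are our own assumptions, not the released implementation.

```python
import math
import random
from collections import Counter

def top_k_char_ngrams(corpus, n=4, k=10000):
    """Top-k most frequent character n-grams in a language's training corpus."""
    counts = Counter()
    for sentence in corpus:
        for word in sentence.split():
            counts.update(word[i:i + n] for i in range(len(word) - n + 1))
    return {gram for gram, _ in counts.most_common(k)}

def sim_vocab_lang(corpus_si, corpus_s, n=4, k=10000):
    """Language-level similarity: |vocab_k(s) ∩ vocab_k(s_i)| / k."""
    overlap = top_k_char_ngrams(corpus_s, n, k) & top_k_char_ngrams(corpus_si, n, k)
    return len(overlap) / k

def sample_source(candidates, sims, tau=0.01, deterministic=False):
    """TCS choice of a source sentence for a sampled target y.

    candidates: the source sentences x_i aligned to y across s_1..s_n;
    sims: their sim(x_i, s) scores. The deterministic variant takes the
    argmax (tau -> 0); the stochastic variant samples from softmax(sim / tau)."""
    if deterministic:
        return max(zip(sims, candidates))[1]
    weights = [math.exp(score / tau) for score in sims]
    return random.choices(candidates, weights=weights, k=1)[0]
```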
This can be easily extended to the sentence level by replacing vocabk(si) to the set of character ngrams of all the words in the sentence x. Language Model trained on s can be used to calculate the probability that a data sequence belongs to s. Although it might not perform well if s does not have enough training data, it may still be sufficient for use in the TCS algorithm. The language-level metric is defined as simLM-lang(si, s) = exp P ci∈si NLLs(ci) |ci ∈si|  where NLLs(·) is negative log likelihood of a character-level LM trained on data from s. Similarly, the corresponding sentence level metric is the LM probability over each sentence x. 3 Experiment 3.1 Dataset and Baselines We use the 58-language-to-English TED dataset (Qi et al., 2018). Following the setup in prior work (Qi et al., 2018; Neubig and Hu, 2018), we use three low-resource languages Azerbaijani (aze), Belarusian (bel), Galician (glg) to English, and a slightly higher-resource dataset, Slovak (slk) to English. We use multiple settings for baselines: 1) Bi: each LRL is paired with its related HRL, following Neubig and Hu (2018). The statistics of the LRL and their corresponding HRL are listed in Table 1; 2) All: we train a model on all 58 languages; 3) Copied: following Currey et al. (2017), we use the union of all English sentences as monolingual data by copying them to the source side. 5826 Sim Method aze bel glg slk Bi 10.35 15.82 27.63 26.38 All 10.21 17.46 26.01 26.64 copied 9.54 13.88 26.24 26.77 Back-Translate TCS 7.79 11.50 27.45 28.44 LM-sent TCS-D 10.34 14.68 27.90 27.29 LM-sent TCS-S 10.95† 17.15 27.91 27.24 LM-lang TCS-D 10.76 14.97 27.92 28.40 LM-lang TCS-S 11.47∗ 17.61 28.53† 28.56∗ Vocab-sent TCS-D 10.68 16.13 27.29 27.03 Vocab-sent TCS-S 11.09† 16.30 28.36† 27.01 Vocab-lang TCS-D 10.58 16.32 28.17 28.27∗ Vocab-lang TCS-S 11.46∗ 17.79 29.57∗ 28.45∗ Table 2: BLEU scores on four languages. Statistical significance (Clark et al., 2011) is indicated with ∗(p < 0.001) and † (p < 0.05), compared with the best baseline. 3.2 Experiment Settings A standard sequence-to-sequence (Sutskever et al., 2014) NMT model with attention is used for all experiments. Byte Pair Encoding (BPE) (Sennrich et al., 2016; Kudo and Richardson, 2018) with vocabulary size of 8000 is applied for each language individually. Details of other hyperparameters can be found in Appendix A.1. 3.3 Results We test both the Deterministic (TCS-D) and Stochastic (TCS-S) algorithms described in Section 2.4. For each algorithm, we experiment with the similarity measures introduced in Section 2.5. The results are listed in Table 2. Of all the baselines, Bi in general has the best performance, while All, which uses all the data and takes much longer to train, generally hurts the performance. This is consistent with findings in prior work (Neubig and Hu, 2018). Copied is only competitive for slk, which indicates the gain of TCS is not simply due to extra English data. TCS-S combined with the language-level similarity achieves the best performance for all four languages, improving around 1 BLEU over the best baseline for aze, and around 2 BLEU for glg and slk. For bel, TCS leads to no degradation while taking much less training time than the best baseline All. TCS-D vs. TCS-S. Both algorithms, when using document-level similarity, improve over the baseline for all languages. TCS-D is quite effective without any extra sampling overhead. 
TCS-S outperforms TCS-D for all experiments, indicatSim Model aze bel glg slk Bi 11.87 18.03 28.70 26.77 All 10.87 17.77 25.49 26.28 copied 10.74 17.19 29.75 27.81 LM-lang TCS-D 11.97 17.17 30.10 28.78∗ LM-lang TCS-S 12.55† 17.23 30.69† 28.95∗ Vocab-lang TCS-D 12.30 18.96† 31.10∗ 29.35∗ Vocab-lang TCS-S 12.37 19.83† 30.94† 29.00∗ Table 3: BLEU scores using SDE as word encoding. Statistical significance is indicated with ∗(p < 0.001) and † (p < 0.05), compared with the best baseline. ing the importance of diversity in the training data. Sent. vs. Lang. For all experiments, languagelevel outperforms the sentence-level similarity. This is probably because language-level metric provides a less noisy estimation, making Q(x|y) closer to Ps(x|y). LM vs. Vocab. In general, the best performing methods using LM and Vocab are comparable, except for glg, where Vocab-lang outperforms LMlang by 1 BLEU. Slk is the only language where LM outperformed Vocab in all settings, probably because it has the largest amount of data to obtain a good language model. These results show that easy-to-compute language similarity features are quite effective for data selection in low-resource languages. Back-Translation TCS constructs Q(X|y) to sample augmented multilingual data, when the LRL data cannot estimate a good back-translation model. Here we confirm this intuition by replacing the Q(X|y) in TCS with the back-translations generated by the model trained on the LRLs. To make it comparable to Bi, we use the sentence from the LRL and its most related HRL if there is one for the sampled y, but use the backtranslated sentence otherwise. Table 2 shows that for slk, back-translate achieves comparable results with the best similarity measure, mainly because slk has enough data to get a reasonable backtranslation model. However, it performs much worse for aze and bel, which have the smallest amount of data. 3.4 Effect on SDE To ensure that our results also generalize to other models, specifically ones that are tailored for better sharing of information across languages, we also test TCS on a slightly different multilingual NMT model using soft decoupled encoding (SDE; 5827 20000 40000 60000 80000 12 14 16 18 20 Dev ppl 20000 40000 60000 8 10 12 14 16 Bi Det Sam 20000 40000 60000 80000 Step 6 8 10 Dev ppl 25000 50000 75000 100000 Step 7 8 9 10 11 Figure 1: Development set perplexity vs. training steps. Top left: aze. Top right: bel. Bottom left: glg. Bottom right: slk. Wang et al. (2019)), a word encoding method that assists lexical transfer for multilingual training. The results are shown in Table 3. Overall the results are stronger, but the best TCS model outperforms the baseline by 0.5 BLEU for aze, and around 2 BLEU for the rest of the three languages, suggesting the orthogonality of data selection and better multilingual training methods. 3.5 Effect on Training Curves In Figure 1, we plot the development perplexity of all four languages during training. Compared to Bi, TCS always achieves lower development perplexity, with only slightly more training steps. Although using all languages, TCS is able to decrease the development perplexity at similar rate as Bi. This indicates that TCS is effective at sampling helpful multilingual data for training NMT models for LRLs. 4 Conclusion We propose Target Conditioned Sampling (TCS), an efficient data selection framework for multilingual data by constructing a data sampling distribution that facilitates the NMT training of LRLs. 
TCS brings up to 2 BLEU improvements over strong baselines with only slight increase in training time. Acknowledgements The authors thank Hieu Pham and Zihang Dai for helpful discussions and comments on the paper. We also thank Paul Michel, Zi-Yi Dou, and Calvin McCarter for proofreading the paper. This material is based upon work supported in part by the Defense Advanced Research Projects Agency Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In EMNLP. Boxing Chen, Colin Cherry, George Foster, and Samuel Larkin. 2017. Cost weighting for neural machine translation domain adaptation. In WMT. Jonathan Clark, Chris Dyer, Alon Lavie, and Noah Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In ACL. Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied monolingual data improves low-resource neural machine translation. In WMT. Kevin Duh, Graham Neubig, Katsuhito Sudoh, and Hajime Tsukada. 2013. Adaptation data selection using neural language models: Experiments in machine translation. In ACL. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. NAACL. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O. K. Li. 2018. Universal neural machine translation for extremely low resource languages. NAACL. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In EMNLP. Robert C. Moore and William D. Lewis. 2010. Intelligent selection of language model training data. In ACL. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. EMNLP. 5828 Toan Q. Nguyen and David Chiang. 2018. Transfer learning across low-resource, related languages for neural machine translation. In NAACL. Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? NAACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in opus. In LREC. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017. Sentence embedding for neural machine translation domain adaptation. In ACL. Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019. Multilingual neural machine translation with soft decoupled encoding. In ICLR. Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. Switchout: an efficient data augmentation algorithm for neural machine translation. In EMNLP. Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. 
Dynamic data selection for neural machine translation. In EMNLP. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low resource neural machine translation. EMNLP. A Appendix A.1 Model Details and Hyperparameters • The LM similarity is calculated using a character-level LM2 • We use character n-grams with n = {1, 2, 3, 4} for Vocab similarity and SDE. • During training, we fix the language order of multilingual parallel data for each LRL, and only randomly shuffle the parallel sentences for each language. Therefore, we control the effect of the order of training data for all experiments. • For TCS-S, we search over τ = {0.01, 0.02, 0.1} and pick the best model based on its performance on the development set. 2We sligtly modify the LM code from https:// github.com/zihangdai/mos for our experiments.
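For completeness, a sketch of how the language-level LM similarity from Section 2.5 could be computed with such a character-level LM. The `char_lm.nll` interface is hypothetical, and we assume the average negative log likelihood enters the exponent with a negative sign, so that text the LM finds likely receives a higher score.

```python
import math

def sim_lm_lang(corpus_si, char_lm_s):
    """Language-level LM similarity of auxiliary language s_i to the LRL s.

    char_lm_s.nll(sentence) is assumed to return the negative log likelihood
    of a sentence under a character-level LM trained on data from s."""
    avg_nll = sum(char_lm_s.nll(c) for c in corpus_si) / len(corpus_si)
    return math.exp(-avg_nll)

def sim_lm_sent(sentence, char_lm_s):
    """Sentence-level variant: score a single candidate sentence x."""
    return math.exp(-char_lm_s.nll(sentence))
```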
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5829–5839 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5829 Adversarial Learning of Privacy-Preserving Text Representations for De-Identification of Medical Records Max Friedrich1, Arne K¨ohn2, Gregor Wiedemann1, Chris Biemann1 1Language Technology Group, Universit¨at Hamburg {2mfriedr,gwiedemann,biemann}@informatik.uni-hamburg.de 2Department of Language Science and Technology, Saarland University [email protected] Abstract De-identification is the task of detecting protected health information (PHI) in medical text. It is a critical step in sanitizing electronic health records (EHRs) to be shared for research. Automatic de-identification classifiers can significantly speed up the sanitization process. However, obtaining a large and diverse dataset to train such a classifier that works well across many types of medical text poses a challenge as privacy laws prohibit the sharing of raw medical records. We introduce a method to create privacy-preserving shareable representations of medical text (i.e. they contain no PHI) that does not require expensive manual pseudonymization. These representations can be shared between organizations to create unified datasets for training de-identification models. Our representation allows training a simple LSTM-CRF de-identification model to an F1 score of 97.4%, which is comparable to a strong baseline that exposes private information in its representation. A robust, widely available de-identification classifier based on our representation could potentially enable studies for which de-identification would otherwise be too costly. 1 Introduction Electronic health records (EHRs) are are valuable resource that could potentially be used in largescale medical research (Botsis et al., 2010; Birkhead et al., 2015; Cowie et al., 2017). In addition to structured medical data, EHRs contain free-text patient notes that are a rich source of information (Jensen et al., 2012). However, due to privacy and data protection laws, medical records can only be shared and used for research if they are sanitized to not include information potentially identifying patients. The PHI that may not be shared includes potentially identifying information such as names, geographic identifiers, dates, and account numbers; the American Health Insurance Portability Accountability Act1 (HIPAA, 1996) defines 18 categories of PHI. De-identification is the task of finding and labeling PHI in medical text as a step toward sanitization. As the information to be removed is very sensitive, sanitization always requires final human verification. Automatic deidentification labeling can however significantly speed up the process, as shown for other annotation tasks in e.g. Yimam (2015). Trying to create an automatic classifier for deidentification leads to a “chicken and egg problem” (Uzuner et al., 2007): without a comprehensive training set, an automatic de-identification classifier cannot be developed, but without access to automatic de-identification, it is difficult to share large corpora of medical text in a privacypreserving way for research (including for training the classifier itself). The standard method of data protection compliant sharing of training data for a de-identification classifier requires humans to pseudonymize protected information with substitutes in a document-coherent way. This includes replacing e.g. 
every person or place name with a different name, offsetting dates by a random amount while retaining date intervals, and replacing misspellings with similar misspellings of the pseudonym (Uzuner et al., 2007). In 2019, a pseudonymized dataset for deidentification from a single source, the i2b2 2014 dataset, is publicly available (Stubbs and Uzuner, 2015). However, de-identification classifiers trained on this dataset do not generalize well to data from other sources (Stubbs et al., 2017). To obtain a universal de-identification classifier, many medical institutions would have to pool their data. But, preparing this data for sharing using the document-coherent pseudonymization ap1https://legislink.org/us/pl-104-191 5830 James was admitted to St. Thomas. .. Raw patient notes [James]Patient was admitted to [St. Thomas]Hosp. . . PHI-labeled patient notes [Henry]Patient was admitted to [River Clinic]Hosp. . . Pseudonymized patient notes µ [□□□]Patient □□□□□□ □□□[□□□□□□]Hosp. . . Private vector representation of patient notes µ PHI labeling / de-identification Pseudonymization Non-reversible transformation   f 3 Figure 1: Sharing training data for de-identification. PHI annotations are marked with [brackets]. Upper alternative: traditional process using manual pseudonymization. Lower alternative: our approach of sharing private vector representations. The people icon represents tasks done by humans; the gears icon represents tasks done by machines; the lock icon represents privacy-preserving artifacts. Manual pseudonymization is marked with a dollar icon to emphasize its high costs. proach requires large human effort (Dernoncourt et al., 2017). To address this problem, we introduce an adversarially learned representation of medical text that allows privacy-preserving sharing of training data for a de-identification classifier by transforming text non-reversibly into a vector space and only sharing this representation. Our approach still requires humans to annotate PHI (as this is the training data for the actual de-identification task) but the pseudonymization step (replacing PHI with coherent substitutes) is replaced by the automatic transformation to the vector representation instead. A classifier then trained on our representation cannot contain any protected data, as it is never trained on raw text (as long as the representation does not allow for the reconstruction of sensitive information). The traditional approach to sharing training data is conceptually compared to our approach in Fig. 1. 2 Related Work Our work builds upon two lines of research: firstly de-identification, as the system has to provide good de-identification performance, and secondly adversarial representation learning, to remove all identifying information from the representations to be distributed. 2.1 Automatic De-Identification Analogously to many natural language processing tasks, the state of the art in de-identification changed in recent years from rule-based systems and shallow machine learning approaches like conditional random fields (CRFs) (Uzuner et al., 2007; Meystre et al., 2010) to deep learning methods (Stubbs et al., 2017; Dernoncourt et al., 2017; Liu et al., 2017). Three i2b2 shared tasks on de-identification were run in 2006 (Uzuner et al., 2007), 2014 (Stubbs et al., 2015), and 2016 (Stubbs et al., 2017). The organizers performed manual pseudonymization on clinical records from a single source to create the datasets for each of the tasks. 
An F1 score of 95% has been suggested as a target for reasonable de-identification systems (Stubbs et al., 2015). Dernoncourt et al. (2017) first applied a long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) model with a CRF output component to de-identification. Transfer learning from a larger dataset slightly improves performance on the i2b2 2014 dataset (Lee et al., 2018). Liu et al. (2017) achieve state-of-the-art performance in de-identification by combining a deep learning ensemble with a rule component. Up to and including the 2014 shared task, the organizers emphasized that it is unclear if a system trained on the provided datasets will generalize to medical records from other sources (Uzuner et al., 2007; Stubbs et al., 2015). The 2016 shared task featured a sight-unseen track in which deidentification systems were evaluated on records from a new data source. The best system achieved an F1 score of 79%, suggesting that systems at the time were not able to deliver sufficient performance on completely new data (Stubbs et al., 2017). 5831 2.2 Adversarial Representation Learning Fair representations (Zemel et al., 2013; Hamm, 2015) aim to encode features of raw data that allows it to be used in e.g. machine learning algorithms while obfuscating membership in a protected group or other sensitive attributes. The domain-adversarial neural network (DANN) architecture (Ganin et al., 2016) is a deep learning implementation of a three-party game between a representer, a classifier, and an adversary component. The classifier and the adversary are deep learning models with shared initial layers. A gradient reversal layer is used to worsen the representation for the adversary during back-propagation: when training the adversary, the adversary-specific part of the network is optimized for the adversarial task but the shared part is updated against the gradient to make the shared representation less suitable for the adversary. Although initially conceived for use in domain adaptation, DANNs and similar adversarial deep learning models have recently been used to obfuscate demographic attributes from text (Elazar and Goldberg, 2018; Li et al., 2018) and subject identity (Feutry et al., 2018) from images. Elazar and Goldberg (2018) warn that when a representation is learned using gradient reversal methods, continued adversary training on the frozen representation may allow adversaries to break representation privacy. To test whether the unwanted information is not extractable from the generated information anymore, adversary training needs to continue on the frozen representation after finishing training the system. Only if after continued adversary training the information cannot be recovered, we have evidence that it really is not contained in the representation anymore. 3 Dataset and De-Identification Model We evaluate our approaches using the i2b2 2014 dataset (Stubbs and Uzuner, 2015), which was released as part of the 2014 i2b2/UTHealth shared task track 1 and is the largest publicly available dataset for de-identification today. It contains 1304 free-text documents with PHI annotations. The i2b2 dataset uses the 18 categories of PHI defined by HIPAA as a starting point for its own set of PHI categories. In addition to the HIPAA set of categories, it includes (sub-)categories such as doctor names, professions, states, countries, and ages under 90. 
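As background for the adversarial training used later in this paper, the gradient reversal layer from Section 2.2 can be sketched in a few lines of TensorFlow; this is a generic DANN-style implementation, not the authors' code.

```python
import tensorflow as tf
from tensorflow import keras

def reverse_gradient(x, weight=1.0):
    """Identity in the forward pass; multiplies the gradient by -weight in the backward pass."""
    @tf.custom_gradient
    def _reverse(inputs):
        def grad(dy):
            return -weight * dy
        return tf.identity(inputs), grad
    return _reverse(x)

class GradientReversal(keras.layers.Layer):
    """Placed between the shared representation and the adversary branch of a DANN,
    so that optimizing the adversary pushes the shared layers in the opposite direction."""
    def __init__(self, weight=1.0, **kwargs):
        super().__init__(**kwargs)
        self.weight = weight

    def call(self, inputs):
        return reverse_gradient(inputs, self.weight)
```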
Hyperparameter Value Pre-trained embeddings FastText, GloVe Casing feature Yes Batch size 32 Number of LSTM layers 2 LSTM units per layer/dir. 128 Input embedding dropout 0.1 Variational dropout 0.25 Dropout after LSTM 0.5 Optimizer Nadam Gradient norm clipping 1.0 Table 1: Hyperparameter configuration of our deidentification model. We compare three different approaches: a nonprivate de-identification classifier and two privacyenabled extensions, automatic pseudonymization (Section 4) and adversarially learned representations (Section 5). Our non-private system as well as the privacyenabled extensions are based on a bidirectional LSTM-CRF architecture that has been proven to work well in sequence tagging (Huang et al., 2015; Lample et al., 2016) and de-identification (Dernoncourt et al., 2017; Liu et al., 2017). We only use pre-trained FastText (Bojanowski et al., 2017) or GloVe (Pennington et al., 2014) word embeddings, not explicit character embeddings, as we suspect that these may allow easy re-identification of private information if used in shared representations. In place of learned character features, we provide the casing feature from Reimers and Gurevych (2017) as an additional input. The feature maps words to a one-hot representation of their casing (numeric, mainly numeric, all lower, all upper, initial upper, contains digit, or other). Table 1 shows our raw de-identification model’s hyperparameter configuration that was determined through a random hyperparameter search. 4 Automatic Pseudonymization To provide a baseline to compare our primary approach against, we introduce a na¨ıve word-level automatic pseudonymization approach that exploits the fact that state-of-the-art de-identification models (Liu et al., 2017; Dernoncourt et al., 2017) as well as our non-private de-identification model work on the sentence level and do not rely on document coherency. Before training, we shuffle the 5832 James was admitted · · · · · · Representation Model · · · De-Identification Model · · · Adversary Model Tokens Emb. Represent. De-identification output Adversary output Figure 2: Simplified visualization of the adversarial model architecture. Sequences of squares denote realvalued vectors, dotted arrows represent possible additional real or fake inputs to the adversary. The casing feature that is provided as a second input to the deidentification model is omitted for legibility. training sentences and replace all PHI tokens with a random choice of a fixed number N of their closest neighbors in an embedding space (including the token itself), as determined by cosine distance in a pre-computed embedding matrix. Using this approach, the sentence [James] was admitted to [St. Thomas] may be replaced by [Henry] was admitted to [Croix Scott]. While the resulting sentences do not necessarily make sense to a reader (e.g. “Croix Scott” is not a realistic hospital name), its embedding representation is similar to the original. We train our deidentification model on the transformed data and test it on the raw data. The number of neighbors N controls the privacy properties of the approach: N = 1 means no pseudonymization; setting N to the number of rows in a precomputed embedding matrix delivers perfect anonymization but the resulting data may be worthless for training a deidentification model. 5 Adversarial Representation We introduce a new data sharing approach that is based on an adversarially learned private representation and improves on the pseudonymization from Section 4. 
After training the representation on an initial publicly available dataset, e.g. the i2b2 2014 data, a central model provider shares the frozen representation model with participating medical institutions. They transform their PHIlabeled raw data into the pseudonymized representation, which is then pooled into a new public dataset for de-identification. Periodically, the pipeline consisting of the representation model and a trained de-identification model can be published to be used by medical institutions on their unlabeled data. Since both the representation model and the resulting representations are shared in this scenario, our representation procedure is required to prevent two attacks: A1. Learning an inverse representation model that transforms representations back to original sentences containing PHI. A2. Building a lookup table of inputs and their exact representations that can be used in known plaintext attacks. 5.1 Architecture Our approach uses a model that is composed of three components: a representation model, the deidentification model from Section 3, and an adversary. An overview of the architecture is shown in Fig. 2. The representation model maps a sequence of word embeddings to an intermediate vector representation sequence. The de-identification model receives this representation sequence as an input instead of the original embedding sequence. It retains the casing feature as an auxiliary input. The adversary has two inputs, the representation sequence and an additional embedding or representation sequence, and a single output unit. 5.2 Representation To protect against A1, our representation must be invariant to small input changes, like a single PHI token being replaced with a neighbor in the embedding space. Again, the number of neighbors N controls the privacy level of the representation. To protect against A2, we add a random element to the representation that makes repeated transformations of one sentence indistinguishable from representations of similar input sentences. 5833 We use a bidirectional LSTM model to implement the representation. It applies Gaussian noise N with zero mean and trainable standard deviations to the input embeddings E and the output sequence. The model learns a standard deviation for each of the input and output dimensions. R = Nout + LSTM(E + Nin) (1) In a preliminary experiment, we confirmed that adding noise with a single, fixed standard deviation is not a viable approach for privacypreserving representations. To change the cosine similarity neighborhoods of embeddings at all, we need to add high amounts of noise (more than double of the respective embedding matrix’s standard deviation), which in turn results in unrealistic embeddings that do not allow training a deidentification model of sufficient quality. In contrast to the automatic pseudonymization approach from Section 4 that only perturbs PHI tokens, the representation models in this approach processes all tokens to represent them in a new embedding space. We evaluate the representation sizes d ∈{50, 100, 300}. 5.3 Adversaries We use two adversaries that are trained on tasks that directly follow from A1 and A2: T1. Given a representation sequence and an embedding sequence, decide if they were obtained from the same sentence. T2. Given two representation sequences (and their cosine similarities), decide if they were obtained from the same sentence. We generate the representation sequences for the second adversary from a copy of the representation model with shared weights. 
We generate real and fake pairs for adversarial training using the automatic pseudonymization approach presented in Section 4, limiting the number of replaced PHI tokens to one per sentence. The adversaries are implemented as bidirectional LSTM models with single output units. We confirmed that this type of model is able to learn the adversarial tasks on random data and raw word embeddings in preliminary experiments. To use the two adversaries in our architecture, we average their outputs. R A D 1. R A D 2. R A D 3. a) R A D 3. b) Figure 3: Visualization of Feutry et al.’s three-part training procedure. The adversarial model layout follows Fig. 2: the representation model is at the bottom, the left branch is the de-identification model and the right branch is the adversary. In each step, the thick components are trained while the thin components are frozen. 5.4 Training We evaluate two training procedures: DANN training (Ganin et al., 2016) and the three-part procedure from Feutry et al. (2018). In DANN training, the three components are trained conjointly, optimizing the sum of losses. Training the de-identification model modifies the representation model weights to generate a more meaningful representation for de-identification. The adversary gradient is reversed with a gradient reversal layer between the adversary and the representation model in the backward pass, causing the representation to become less meaningful for the adversary. The training procedure by Feutry et al. (2018) is shown in Fig. 3. It is composed of three phases: P1. The de-identification and representation models are pre-trained together, optimizing the de-identification loss ldeid. P2. The representation model is frozen and the adversary is pre-trained, optimizing the adversarial loss ladv. P3. In alternation, for one epoch each: (a) The representation is frozen and both de-identification model and adversary are trained, optimizing their respective losses ldeid and ladv. (b) The de-identification model and adversary are frozen and the representation is trained, optimizing the combined loss lrepr = ldeid + λ|ladv −lrandom| (2) In each of the first two phases, the respective validation loss is monitored to decide at which point 5834 the training should move on to the next phase. The alternating steps in the third phase each last one training epoch; the early stopping time for the third phase is determined using only the combined validation loss from Phase P3b. Gradient reversal is achieved by optimizing the combined representation loss while the adversary weights are frozen. The combined loss is motivated by the fact that the adversary performance should be the same as a random guessing model, which is a lower bound for anonymization (Feutry et al., 2018). The term |ladv−lrandom| approaches 0 when the adversary performance approaches random guessing2. λ is a weighting factor for the two losses; we select λ = 1. 6 Experiments To evaluate our approaches, we perform experiments using the i2b2 2014 dataset. Preprocessing: We apply aggressive tokenization similarly to Liu et al. (2017), including splitting at all punctuation marks and mid-word e.g. if a number is followed by a word (“25yo” is split into “25”, “yo”) in order to minimize the amount of GloVe out-of-vocabulary tokens. We extend spaCy’s3 sentence splitting heuristics with additional rules for splitting at multiple blank lines as well as bulleted and numbered list items. 
Deep Learning Models: We use the Keras framework4 (Chollet et al., 2015) with the TensorFlow backend (Abadi et al., 2015) to implement our deep learning models. Evaluation: In order to compare our results to the state of the art, we use the token-based binary HIPAA F1 score as our main metric for deidentification performance. Dernoncourt et al. (2017) deem it the most important metric: deciding if an entity is PHI or not is generally more important than assigning the correct category of PHI, and only HIPAA categories of PHI are required to be removed by American law. Non-PHI tokens are not incorporated in the F1 score. We perform the evaluation with the official i2b2 evaluation script5. 2In the case of binary classification: Lrandom = −log 1 2. 3https://spacy.io 4https://keras.io 5https://github.com/kotfic/i2b2_ evaluation_scripts Model F1 (%) Our non-private FastText 97.67 Our non-private GloVe 97.24 Our non-private GloVe + casing 97.62 Dernoncourt et al. (LSTM-CRF) 97.85 Liu et al. (ensemble + rules) 98.27 Our autom. pseudon. FastText 96.75 Our autom. pseudon. GloVe 96.42 Our adv. repr. FastText 97.40 Our adv. repr. GloVe 96.89 Table 2: Binary HIPAA F1 scores of our non-private (top) and private (bottom) de-identification approaches on the i2b2 2014 test set in comparison to non-private the state of the art. Our private approaches use N = 100 neighbors as a privacy criterion. 7 Results Table 2 shows de-identification performance results for the non-private de-identification classifier (upper part, in comparison to the state of the art) as well as the two privacy-enabled extensions (lower part). The results are average values out of five experiment runs. 7.1 Non-private De-Identification Model When trained on the raw i2b2 2014 data, our models achieve F1 scores that are comparable to Dernoncourt et al.’s results. The casing feature improves GloVe by 0.4 percentage points. 7.2 Automatic Pseudonymization For both FastText and GloVe, moving training PHI tokens to random tokens from up to their N = 200 closest neighbors does not significantly reduce deidentification performance (see Fig. 4). F1 scores for both models drop to around 95% when selecting from N = 500 neighbors and to around 90% when using N = 1 000 neighbors. With N = 100, the FastText model achieves an F1 score of 96.75% and the GloVe model achieves an F1 score of 96.42%. 7.3 Adversarial Representation We do not achieve satisfactory results with the conjoint DANN training procedure: in all cases, our models learn representations that are not sufficiently resistant to the adversary. When training the adversary on the frozen representation for an 5835 100 101 102 103 Number of neighbors N 0.900 0.925 0.950 0.975 1.000 Binary HIPAA F1 score De-identification performance FastText GloVe Figure 4: F1 scores of our models when trained on automatically pseudonymized data where PHI tokens are moved to one of different numbers of neighbors N. The gray dashed line marks the 95% target F1 score. additional 20 epochs, it is able to distinguish real from fake input pairs on a test set with accuracies above 80%. This confirms the difficulties of DANN training as described by Elazar and Goldberg (2018) (see Section 2.2). In contrast, with the three-part training procedure, we are able to learn a representation that allows training a de-identification model while preventing an adversary from learning the adversarial tasks, even with continued training on a frozen representation. 
Figure 5 (left) shows our de-identification results when using adversarially learned representations. A higher number of neighbors N means a stronger invariance requirement for the representation. For values of N up to 1 000, our FastText and GloVe models are able to learn representations that allow training de-identification models that reach or exceed the target F1 score of 95%. However, training becomes unstable for N > 500: at this point, the adversary is able to break the representation privacy when trained for an additional 50 epochs (Fig. 5 right). Our choice of representation size d ∈ {50, 100, 300} does not influence de-identification or adversary performance, so we select d = 50 for further evaluation. For d = 50 and N = 100, the FastText model reaches an F1 score of 97.4% and the GloVe model reaches an F1 score of 96.89%. 8 De-Identification Performance In the following, we discuss the results of our models with regard to our goal of sharing sensitive training data for automatic de-identification. Overall, privacy-preserving representations come at a cost, as our best privacy-preserving model scores 0.27 points F1 score lower than our best non-private model; we consider this relative increase of errors of less than 10% as tolerable. Raw Text De-Identification: We find that the choice of GloVe or FastText embeddings does not meaningfully influence de-identification performance. FastText’s approach to embedding unknown words (word embeddings are the sum of their subword embeddings) should intuitively prove useful on datasets with misspellings and ungrammatical text. However, when using the additional casing feature, FastText beats GloVe only by 0.05 percentage points on the i2b2 test set. In this task, the casing feature makes up for GloVe’s inability to embed unknown words. Liu et al. (2017) use a deep learning ensemble in combination with hand-crafted rules to achieve state-of-the-art results for de-identification. Our model’s scores are similar to the previous state of the art, a bidirectional LSTM-CRF model with character features (Dernoncourt et al., 2017). Automatically Pseudonymized Data: Our na¨ıve automatic word-level pseudonymization approach allows training reasonable de-identification models when selecting from up to N = 500 neighbors. There is almost no decrease in F1 score for up to N = 20 neighbors for both the FastText and GloVe model. Adversarially Learned Representation: Our adversarially trained vector representation allows training reasonable de-identification models (F1 scores above 95%) when using up to N = 1 000 neighbors as an invariance requirement. The adversarial representation results beat the automatic pseudonymization results because the representation model can act as a task-specific feature extractor. Additionally, the representations are more general as they are invariant to word changes. 9 Privacy Properties In this section, we discuss our models with respect to their privacy-preserving properties. Embeddings: When looking up embedding space neighbors for words, it is notable that many FastText neighbors include the original word or parts of it as a subword. 
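The neighbour lookups discussed here can be sketched as below; embeddings is a (V, d) matrix, vocab and inv_vocab are hypothetical word/index mappings, and substring matching is used as a rough proxy for containing the original token as a subword.

import numpy as np

def nearest_neighbors(embeddings, vocab, inv_vocab, word, n=100):
    # cosine-similarity neighbours of `word` in a (V, d) embedding matrix
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit[vocab[word]]
    order = np.argsort(-sims)
    return [inv_vocab[i] for i in order if i != vocab[word]][:n]

def neighbor_overlap(phi_tokens, embeddings, vocab, inv_vocab, n=100):
    # average number of the n nearest neighbours containing the original PHI token
    # as a substring (the statistic compared for FastText vs. GloVe below)
    counts = []
    for token in phi_tokens:
        neighbors = nearest_neighbors(embeddings, vocab, inv_vocab, token, n)
        counts.append(sum(token.lower() in nb.lower() for nb in neighbors))
    return sum(counts) / len(counts) if counts else 0.0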
For tokens that occur as PHI in the i2b2 training set, on average 7.37 of their N = 100 closest neighbors in the FastText embedding matrix contain the original token 5836 101 102 103 Number of neighbors N 0.96 0.98 1.00 Binary HIPAA F1 score De-identification performance 101 102 103 Number of neighbors N 0.50 0.75 1.00 Accuracy Adversary accuracy FastText GloVe Figure 5: Left: de-identification F1 scores of our models using an adversarially trained representation with different numbers of neighbors N for the representation invariance requirement. Right: mean adversary accuracy when trained on the frozen representation for an additional 50 epochs. The figure shows average results out of five experiment runs. as a subword. When looking up neighbors using GloVe embeddings, the value is 0.44. This may indicate that FastText requires stronger perturbation (i.e. higher N) than GloVe to sufficiently obfuscate protected information. Automatically Pseudonymized Data: The word-level pseudonymization does not guarantee a minimum perturbation for every word, e.g. in a set of pseudonymized sentences using N = 100 FastText neighbors, we found the phrase [Florida Hospital], which was replaced with [Miami-Florida Hosp]. Additionally, the approach may allow an adversary to piece together documents from the shuffled sentences. If multiple sentences contain similar pseudonymized identifiers, they will likely come from the same original document, undoing the privacy gain from shuffling training sentences across documents. It may be possible to infer the original information using the overlapping neighbor spaces. To counter this, we can re-introduce document-level pseudonymization, i.e. moving all occurrences of a PHI token to the same neighbor. However, we would then also need to detect misspelled names as well as other hints to the actual tokens and transform them similarly to the original, which would add back much of the complexity of manual pseudonymization that we try to avoid. Adversarially Learned Representation: Our adversarial representation empirically satisfies a strong privacy criterion: representations are invariant to any protected information token being replaced with any of its N neighbors in an embedding space. When freezing the representation model from an experiment run using up to N = 500 neighbors and training the adversary for an additional 50 epochs, it still does not achieve higherthan-chance accuracies on the training data. Due to the additive noise, the adversary does not overfit on its training set but rather fails to identify any structure in the data. In the case of N = 1 000 neighbors, the representation never becomes stable in the alternating training phase. The adversary is always able to break the representation privacy. 10 Conclusions & Future Work We introduced a new approach to sharing training data for de-identification that requires lower human effort than the existing approach of document-coherent pseudonymization. Our approach is based on adversarial learning, which yields representations that can be distributed since they do not contain private health information. The setup is motivated by the need of deidentification of medical text before sharing; our approach provides a lower-cost alternative than manual pseudonymization and gives rise to the pooling of de-identification datasets from heterogeneous sources in order to train more robust classifiers. Our implementation and experimental data are publicly available6. 
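The "no better than chance" criterion used above can be made operational with a post-hoc audit of the following form: freeze the trained representation model, train a fresh adversary on top of it for a fixed number of additional epochs, and compare its accuracy to the 50% random-guessing baseline. This is an illustrative sketch, not the exact experimental code; the model constructor, the data loaders and the tolerance margin are assumptions.

import torch

def privacy_audit(repr_model, make_adversary, loaders, extra_epochs=50, chance=0.5, margin=0.02):
    # freeze the representation and train a fresh adversary on top of it
    repr_model.eval()
    for p in repr_model.parameters():
        p.requires_grad = False
    adversary = make_adversary()
    optimizer = torch.optim.Adam(adversary.parameters())
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(extra_epochs):
        for batch in loaders["train"]:
            z = repr_model(batch["tokens"]).detach()
            loss = loss_fn(adversary(z, batch["pair"]), batch["is_real"])
            optimizer.zero_grad(); loss.backward(); optimizer.step()
    # measure real-vs-fake accuracy; staying close to 50% means the representation held up
    correct = total = 0
    with torch.no_grad():
        for batch in loaders["test"]:
            z = repr_model(batch["tokens"])
            pred = (torch.sigmoid(adversary(z, batch["pair"])) > 0.5).float()
            correct += (pred == batch["is_real"]).sum().item()
            total += batch["is_real"].numel()
    accuracy = correct / total
    return accuracy, accuracy <= chance + margin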
As precursors to our adversarial representation approach, we developed a deep learning model for de-identification that does not rely on explicit character features as well as an automatic 6https://github.com/maxfriedrich/ deid-training-data 5837 word-level pseudonymization approach. A model trained on our automatically pseudonymized data with N = 100 neighbors loses around one percentage point in F1 score when compared to the non-private system, scoring 96.75% on the i2b2 2014 test set. Further, we presented an adversarial learning based private representation of medical text that is invariant to any PHI word being replaced with any of its embedding space neighbors and contains a random element. The representation allows training a de-identification model while being robust to adversaries trying to re-identify protected information or building a lookup table of representations. We extended existing adversarial representation learning approaches by using two adversaries that discriminate real from fake sequence pairs with an additional sequence input. The representation acts as a task-specific feature extractor. For an invariance criterion of up to N = 500 neighbors, training is stable and adversaries cannot beat the random guessing accuracy of 50%. Using the adversarially learned representation, de-identification models reach an F1 score of 97.4%, which is close to the non-private system (97.67%). In contrast, the automatic pseudonymization approach only reaches an F1 score of 95.0% at N = 500. Our adversarial representation approach enables cost-effective private sharing of training data for sequence labeling. Pooling of training data for de-identification from multiple institutions would lead to much more robust classifiers. Eventually, improved de-identification classifiers could help enable large-scale medical studies that eventually improve public health. Future Work: The automatic pseudonymization approach could serve as a data augmentation scheme to be used as a regularizer for deidentification models. Training a model on a combination of raw and pseudonymized data may result in better test scores on the i2b2 test set, possibly improving the state of the art. Private character embeddings that are learned from a perturbed source could be an interesting extension to our models. In adversarial learning with the three-part training procedure, it might be possible to tune the λ parameter and define a better stopping condition that avoids the unstable characteristics with high values for N in the invariance criterion. A further possible extension is a dynamic noise level in the representation model that depends on the LSTM output instead of being a trained weight. This might allow using lower amounts of noise for certain inputs while still being robust to the adversary. When more training data from multiple sources become available in the future, it will be possible to evaluate our adversarially learned representation against unseen data. Acknowledgments This work was partially supported by BWFG Hamburg within the “Forum 4.0” project as part of the ahoi.digital funding line. De-identified clinical records used in this research were provided by the i2b2 National Center for Biomedical Computing funded by U54LM008748 and were originally prepared for the Shared Tasks for Challenges in NLP for Clinical Data organized by ¨Ozlem Uzuner, i2b2 and SUNY. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Accessed May 31, 2019. Guthrie S. Birkhead, Michael Klompas, and Nirav R. Shah. 2015. Uses of electronic health records for public health surveillance to advance public health. Annual Review of Public Health, 36(1):345–359. PMID: 25581157. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Taxiarchis Botsis, Gunnar Hartvigsen, Fei Chen, and Chunhua Weng. 2010. Secondary use of EHR: Data quality issues and informatics opportunities. AMIA Summits on Translational Science Proceedings, 2010:1–5. 5838 Franc¸ois Chollet et al. 2015. Keras. Accessed May 31, 2019. Martin R. Cowie, Juuso I. Blomster, Lesley H. Curtis, Sylvie Duclaux, Ian Ford, Fleur Fritz, Samantha Goldman, Salim Janmohamed, J¨org Kreuzer, Mark Leenay, Alexander Michel, Seleen Ong, Jill P. Pell, Mary Ross Southworth, Wendy Gattis Stough, Martin Thoenes, Faiez Zannad, and Andrew Zalewski. 2017. Electronic health records to facilitate clinical research. Clinical Research in Cardiology, 106(1):1–9. Franck Dernoncourt, Ji Young Lee, ¨Ozlem Uzuner, and Peter Szolovits. 2017. De-identification of patient notes with recurrent neural networks. Journal of the American Medical Informatics Association, 24(3):596–606. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11–21, Brussels, Belgium. Association for Computational Linguistics. Cl´ement Feutry, Pablo Piantanida, Yoshua Bengio, and Pierre Duhamel. 2018. Learning anonymized representations with adversarial neural networks. arXiv preprint arXiv:1802.09386. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096–2030. Jihun Hamm. 2015. Preserving privacy of continuous high-dimensional data with minimax filters. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pages 324–332, San Diego, CA, USA. PMLR. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991. Peter B. Jensen, Lars J. Jensen, and Søren Brunak. 2012. Mining electronic health records: towards better research applications and clinical care. Nature Reviews Genetics, 13:395–405. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, CA, USA. Association for Computational Linguistics. Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2018. Transfer Learning for NamedEntity Recognition with Neural Networks. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 4470–4473, Miyazaki, Japan. European Language Resources Association. Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25–30, Melbourne, Australia. Association for Computational Linguistics. Zengjian Liu, Buzhou Tang, Xiaolong Wang, and Qingcai Chen. 2017. De-identification of clinical notes via recurrent neural network and conditional random field. Journal of Biomedical Informatics, 75(S):S34–S42. Stephane M. Meystre, F. Jeffrey Friedlin, Brett R. South, Shuying Shen, and Matthew H. Samore. 2010. Automatic de-identification of textual documents in the electronic health record: a review of recent research. BMC Medical Research Methodology, 10(1):70. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2017. Optimal hyperparameters for deep LSTM-networks for sequence labeling tasks. arXiv preprint arXiv:1707.06799. Amber Stubbs, Michele Filannino, and ¨Ozlem Uzuner. 2017. De-identification of psychiatric intake records: Overview of 2016 CEGS N-GRID shared tasks track 1. Journal of Biomedical Informatics, 75(S):S4–S18. Amber Stubbs, Christopher Kotfila, and ¨Ozlem Uzuner. 2015. Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task track 1. Journal of Biomedical Informatics, 58:11–19. Amber Stubbs and ¨Ozlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus. Journal of Biomedical Informatics, 58:20–29. The United States Congress. 1996. Health insurance portability and accountability act of 1996. Accessed May 31, 2019. ¨Ozlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic deidentification. Journal of the American Medical Informatics Association, 14(5):550–563. 5839 Seid Muhie Yimam. 2015. Narrowing the loop: Integration of resources and linguistic dataset development with interactive machine learning. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 88–95, Denver, CO, USA. Association for Computational Linguistics. Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 325– 333, Atlanta, GA, USA. PMLR.
2019
584
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5840–5850 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5840 Merge and Label: A novel neural network architecture for nested NER Joseph Fisher Department of Economics University of Cambridge [email protected] Andreas Vlachos Dept. of Computer Science and Technology University of Cambridge [email protected] Abstract Named entity recognition (NER) is one of the best studied tasks in natural language processing. However, most approaches are not capable of handling nested structures which are common in many applications. In this paper we introduce a novel neural network architecture that first merges tokens and/or entities into entities forming nested structures, and then labels each of them independently. Unlike previous work, our merge and label approach predicts real-valued instead of discrete segmentation structures, which allow it to combine word and nested entity embeddings while maintaining differentiability. We evaluate our approach using the ACE 2005 Corpus, where it achieves state-of-the-art F1 of 74.6, further improved with contextual embeddings (BERT) to 82.4, an overall improvement of close to 8 F1 points over previous approaches trained on the same data. Additionally we compare it against BiLSTM-CRFs, the dominant approach for flat NER structures, demonstrating that its ability to predict nested structures does not impact performance in simpler cases.1 1 Introduction The task of nested named entity recognition (NER) focuses on recognizing and classifying entities that can be nested within each other, such as “United Kingdom” and “The Prime Minister of the United Kingdom” in Figure 1. Such entity structures, while very commonly occurring, cannot be handled by the predominant variant of NER models (McCallum and Li, 2003; Lample et al., 2016), which can only tag non-overlapping entities. A number of approaches have been proposed for nested NER. Lu and Roth (2015) introduced a hypergraph representation which can represent 1Code available at https://github.com/ fishjh2/merge_label overlapping mentions, which was further improved by Muis and Lu (2017), by assigning tags between each pair of consecutive words, preventing the model from learning spurious structures (overlapping entity structures which are gramatically impossible). More recently, Katiyar and Cardie (2018) built on this approach, adapting an LSTM (Hochreiter and Schmidhuber, 1997) to learn the hypergraph directly, and Wang and Lu (2018) introduced a segmental hypergraph approach, which is able to incorporate a larger number of span based features, by encoding each span with an LSTM. Our approach decomposes nested NER into two stages. First tokens are merged into entities (Level 1 in Figure 1), which are merged with other tokens or entities in higher levels. These merges are encoded as real-valued decisions, which enables a parameterized combination of word embeddings into entity embeddings at different levels. These entity embeddings are used to label the entities identified. The model itself consists of feedforward neural network layers and is fully differentiable, thus it is straightforward to train with backpropagation. Unlike methods such as Katiyar and Cardie (2018), it does not predict entity segmentation at each layer as discrete 0-1 labels, thus allowing the model to flexibly aggregate information across layers. 
Furthermore inference is greedy, without attempting to score all possible entity spans as in Wang and Lu (2018), which results in faster decoding (decoding requires simply a single forward pass of the network). To test our approach on nested NER, we evaluate it on the ACE 2005 corpus (LDC2006T06) where it achieves a state-of-the-art F1 score of 74.6. This is further improved with contextual embeddings (Devlin et al., 2018) to 82.4, an overall improvement of close to 8 F1 points against the 5841 Figure 1: Trained model’s representation of nested entities, after thresholding the merge values, M (see section 2.1). Note that the merging of “, to” is a mistake by the model. previous best approach trained on the same data, (Wang and Lu, 2018). Our approach is also 60 times faster than its closest competitor. Additionally, we compare it against BiLSTM-CRFs(Huang et al., 2015), the dominant flat NER paradigm, on Ontonotes (LDC2013T19) and demonstrate that its ability to predict nested structures does not impact performance in flat NER tasks as it achieves comparable results to the state of the art on this dataset. 2 Network Architecture 2.1 Overview The model decomposes nested NER into two stages. Firstly, it identifies the boundaries of the named entities at all levels of nesting; the tensor M in Figure 2, which is composed of real values between 0 and 1 (these real values are used to infer discrete split/merge decisions at test time, giving the nested structure of entities shown in Figure 1). We refer to this as predicting the “structure” of the NER output for the sentence. Secondly, given this structure, it produces embeddings for each entity, by combining the embeddings of smaller entities/tokens from previous levels (i.e. there will be an embedding for each rectangle in Figure 1). These entity embeddings are used to label the entities identified. An overview of the architecture used to predict the structure and labels is shown in Figure 2. The dimensions of each tensor are shown in square brackets in the figure. The input tensor, X, holds the word embeddings of dimension e, for every word in the input of sequence length, s. The first dimension, b, is the batch size. The Static Layer updates the token embeddings using contextual information, giving tensor Xs of the same dimension, [b, s, e]. Next, for u repetitions, we go through a series of building the structure using the Structure Layer, and then use this structure to continue updating the individual token embeddings using the Update Figure 2: Model architecture overview Layer, giving an output Xu. The updated token embeddings Xu are passed through the Structure Layer one last time, to give the final entity embeddings, T and structure, M. A feedforward Output Layer then gives the predictions of the label of each entity. The structure is represented by the tensor M, of dimensions [b, s −1, L]. M holds, for every pair of adjacent words (s −1 given input length s) and every output level (L levels), a value between 0 and 1. A value close to 0 denotes that the two (adjacent) tokens/entities from the previous level are likely to be merged on this level to form an entity; nested entities emerge when entities from lower levels are used. Note that for each individual application of the Structure Layer, we are building multiple levels (L) of nested entities. That is, within each Structure Layer there is a loop of length L. 
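Purely as an illustration of how these pieces fit together, the forward pass can be written at the level of tensor shapes as below; the individual layers are described in the following subsections and are treated here as black boxes (the article-theme input to the Update Layer is omitted for brevity).

import torch.nn as nn

class MergeAndLabelSkeleton(nn.Module):
    # shape-level sketch of Fig. 2: static layer, u rounds of structure + update,
    # a final structure layer, then per-entity labelling
    def __init__(self, static, structure, update, output, u=3):
        super().__init__()
        self.static, self.structure = static, structure
        self.update, self.output = update, output
        self.u = u

    def forward(self, X):                      # X:  [b, s, e] word embeddings
        Xs = self.static(X)                    # Xs: [b, s, e] contextualised tokens
        Xu = Xs
        for _ in range(self.u):
            T, R, D, M = self.structure(Xu)    # entities, neighbour entities, directions, merges
            Xu = self.update(Xu, R, D)         # [b, s, e] updated token embeddings
        T, R, D, M = self.structure(Xu)        # T: [b, s, e, L], M: [b, s-1, L]
        labels = self.output(T)                # label logits for each entity slot and level
        return labels, M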
By building the structure before the Update Layer, the updates to the token embeddings can utilize information about which entities each token is in, as well as neighbouring entities, as opposed to just using information about neighbouring tokens. 5842 2.2 Preliminaries Before analysing each of the main layers of the network, we introduce two building blocks, which are used multiple times throughout the architecture. The first one is the Unfold operators. Given that we process whole news articles in one batch (often giving a sequence length (s) of 500 or greater) we do not allow each token in the sequence to consider every other token. Instead, we define a kernel of size k around each token, similar to convolutional neural networks (Kim, 2014), allowing it to consider the k/2 prior tokens and the k/2 following tokens. Figure 3: Unfold Operators for the passage “... yesterday. The President of France met with ...”. Each row in the matrices corresponds to the words “The”, “President”, ”of” and “France” (top to bottom). The unfold operators create kernels transforming tensors holding the word embeddings of shape [b, s, e] to shape [b, s, k, e]. unfold[from] simply tiles the embedding x of each token k times, and unfold[to] generates the k/2 token embeddings either side, as shown in Figure 3, for a kernel size k of 4. The first row of the unfold[to] tensor holds the two tokens before and the two tokens after the word “The”, the second row the two before and after “President” etc. As we process whole articles, the unfold operators allow tokens to consider tokens from previous/following sentences. The second building block is the Embed Update layer, shown in Figure 4. This layer is used to update embeddings within the model, and as such, can be thought of as equivalent in function to the residual update mechanism in Transformer (Vaswani et al., 2017). It is used in each of the Static Layer, Update Layer and Structure Layer from the main network architecture in Figure 2. It takes an input I′ of size [b, s, k, in], formed using the unfold ops described above, where the last dimension in varies depending on the point in the architecture at which the layer is used. It passes this input through the feedforward NN Figure 4: Embed Update layer FFEU, giving an output of dimension [b, s, k, e + 1] (the network broadcasts over the last three dimensions of the input tensor). The output is split into two. Firstly, a tensor E′ of shape [b, s, k, e], which holds, for each word in the sequence, k predictions of an updated word vector based on the k/2 words either side. Secondly, a weighting tensor C′ of shape [b, s, k, 1], which is scaled between 0 and 1 using the sigmoid function, and denotes how “confident” each of the k predictions is about its update to the word embedding. This works similar to an attention mechanism, allowing each token to focus on updates from the most relevant neighbouring tokens.2 The output, U is then a weighted average of E′: U = sum2(sigmoid(C′) ∗E′) where sum2 denotes summing across the second dimension of size k. U therefore has dimensions [b, s, e] and contains the updated embedding for each word. During training we initialize the weights of the network using the identity function. As a result, the default behaviour of FFEU prior to training is to pass on the word embedding unchanged, which is then updated during via backpropagation. An example of the effect of the identity initialization is provided in the supplementary materials. 
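A possible PyTorch realisation of these two building blocks is sketched below; the single linear layer is a stand-in for the deeper FF_EU network, and boundary positions are zero-padded rather than handled specially.

import torch
import torch.nn as nn
import torch.nn.functional as F

def unfold_from(x, k):
    # [b, s, e] -> [b, s, k, e]: tile each token embedding k times
    return x.unsqueeze(2).expand(-1, -1, k, -1)

def unfold_to(x, k):
    # [b, s, e] -> [b, s, k, e]: for each position, gather the k/2 tokens before it
    # and the k/2 tokens after it (zero-padded at the sequence boundaries)
    half = k // 2
    padded = F.pad(x, (0, 0, half, half))              # pad the sequence dimension
    windows = padded.unfold(1, k + 1, 1)               # [b, s, e, k+1]
    windows = windows.permute(0, 1, 3, 2)              # [b, s, k+1, e]
    keep = [i for i in range(k + 1) if i != half]      # drop the centre position itself
    return windows[:, :, keep, :]

class EmbedUpdate(nn.Module):
    def __init__(self, in_dim, e):
        super().__init__()
        self.ff = nn.Linear(in_dim, e + 1)             # E' (e dims) plus confidence C' (1 dim)
        self.e = e

    def forward(self, inp):                            # inp: [b, s, k, in_dim]
        out = self.ff(inp)
        E, C = out[..., :self.e], out[..., self.e:]
        return (torch.sigmoid(C) * E).sum(dim=2)       # U = sum_k sigmoid(C') * E', [b, s, e]

In the model, the input to Embed Update is a concatenation of unfold[from] and unfold[to] tensors (and, in later layers, directions and the article theme), so in_dim varies with the point in the architecture at which the layer is used.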
2.3 Static Layer The static layer is a simple preliminary layer to update the embeddings for each word based on contextual information, and as such, is very similar to a Transformer (Vaswani et al., 2017) layer. Following the unfold ops, a positional encoding P of 2The difference being that the weightings are generated using a sigmoid rather than a softmax layer, allowing the attention values to be close to one for multiple tokens. 5843 dimension e (we use a learned encoding) is added, giving tensor Is: Is = concat(Unfold[from](X), Unfold[to](X)+P) Is is then passed through the Embed Update layer. In our experiments, we use a single static layer. There is no merging of embeddings into entities in the static layer. Figure 5: Static Layer 2.4 Structure Layer The Structure Layer is responsible for three tasks. Firstly, deciding which token embeddings should be merged at each level, expressed as real values between 0 and 1, and denoted M. Secondly, given these merge values M, deciding how the separate token embeddings should be combined in order to give the embeddings for each entity, T. Finally, for each token and entity, providing directional vectors D to the k/2 tokens either side, which are used to update each token embedding in the Update Layer based on its context. Intuitively, the directional vectors D can be thought of as encoding relations between entities - such as the relation between an organization and its leader, or that between a country and its capital city (see Section 6.2 for an analysis of these relation embeddings). Figure 6 shows a minimal example of the calculation of D, M and T, with word embedding and directional vector dimensions e = d = 2, and kernel size, k = 4. We pass the embeddings (X) of each pair of adjacent words through a feedforward NN FFS to give directions D [b, s-1, d] and merge values M [b, s-1, 1] between each pair. If FFS predicts M(1,2) to be close to 0, this indicates that tokens 1 and 2 are part of the same entity on this level. The unfold[to] op gives, for each word (we show only the unfolded tensors for the word “Kingdom” in Figure 6 for simplicity), D and M for pairs of words up to k/2 either side. Figure 6: Calculation of merging weight, directions and entities in Structure Layer By taking both the left and right cumulative sum (cumsum) of the resulting two tensors from the center out (see grey dashed arrows in Figure 6 for direction of the two cumsum ops), we get directional vectors and merge values from the word “Kingdom” to the words before and after it in the phrase, D′ 3,i and M′ 3,i for i = (1, 2, 4, 5). Note that we take the inverse of vectors D(1,2) and D(2,3) prior to the cumsum, as we are interested in the directions from the token “Kingdom” backwards to the tokens “United” and “The”. The values M′ 3,i are converted to weights W ′ of dimension [b, s, k, 1] using the formula W ′ = max(0, 1 −M′)3, with the max operation ensuring the model puts a weight of zero on tokens in separate entities (see the reduction of the value of 1.7 in M′ in Figure 6 to a weighting of 0.0). The weights are normalized to sum to 1, and multiplied with the unfolded token embeddings X′ to give the entity embeddings T, of dimension [b, s, e] T = W ′ sum2(W ′) ∗X′ Consequently, the embeddings at the end of level 1 for the words “The”, “United” and “Kingdom” 3We use the notation D′ to denote the unfolded version of tensor D, i.e. 
D′ = Unfold[to](D) 5844 (T 1 1 , T 1 2 and T 1 3 respectively) are all now close to equal, and all have been formed from a weighted average of the three separate token embeddings. If M(1,2) and M(2,3) were precisely zero, and M(3,4) was precisely 1.0, then all three would be identical. In addition, on higher levels, the directions from other words to each of these three tokens will also be identical. In other words, the use of “directions”4 allows the network to represent entities as a single embedding in a fully differentiable fashion, whilst keeping the sequence length constant. Figure 6 shows just a single level from within the Structure Layer. The embeddings T are then passed onto the next level, allowing progressively larger entities to be formed by combining smaller entities from the previous levels. Figure 7: Structure Layer The full architecture of the Structure Layer is shown in Figure 7. The main difference to Figure 6 is the additional use of Embed Update Layer, to decide how individual token/entity embeddings are combined together into a single entity. The reason for this is that if we are joining the words “The”, “United” and “Kingdom” into a single entity, it makes sense that the joint vector should be based largely on the embeddings of “United” and “Kingdom”, as “The” should add little information. The embeddings are unfolded (using the unfold[from] op) to shape [b, s, k, e] and concatenated with the directions between words, D′, to give the tensor of shape [b, s, k, e + d]. This is passed through the Embed Update layer, giving, 4We use the term “directions” as we inverse the vectors to get the reverse direction, and cumsum them to get directions between tokens multiple steps away. for each word, a weighted and updated embedding, ready to be combined into a single entity (for unimportant words like “The”, this embedding will have been reduced to close to zero). We use this tensor in place of tensor X in Figure 6, and multiply with the weights W ′ to give the new entity embeddings, T. There are four separate outputs from the Structure Layer. The first, denoted by T , is the entity embeddings from each of the levels concatenated together, giving a tensor of size [b, s, e, L]. The second output, R , is a weighted average of the embeddings from different layers, of shape [b, s, k, e]. This will be used in the place of the unfold[to] tensor described above as an input the the Update Layer. It holds, for each token in the sequence, embeddings of entities up to k/2 tokens either side. The third output, D , will also be used by the Update Layer. It holds the directions of each token/entity to the k/2 tokens/entities either side. It is formed using the cumsum op, as shown in Figure 6. Finally, the fourth output, M , stores the merge values for every level. It is used in the loss function, to directly incentivize the correct merge decisions at the correct levels. 2.5 Update Layer The Update Layer is responsible for updating the individual word vectors, using the contextual information derived from outputs R and D of the Structure Layer. It concatenates the two outputs together, along with the output of the unfold[from] op, X′ s, and with an article theme embedding A tensor, giving tensor Z of dimension [b, s, k, (e*2 + d + a)]. The article theme embedding is formed by passing every word in the article through a feedforward NN, and taking a weighted average of the outputs, giving a tensor of dimension [b, a]. This is then tiled5 to dimension [b, s, k, a], giving tensor A. 
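Returning to the per-level computation of Figs. 6 and 7, a simplified sketch of one level is given below. It squashes the merge values with a sigmoid (one way of keeping them in [0, 1]), skips the Embed Update re-weighting before combination, treats entities as plain weighted averages of token embeddings, handles sequence boundaries by clamping, and reduces FF_S to a single linear layer.

import torch
import torch.nn as nn

class StructureLevel(nn.Module):
    def __init__(self, e, d):
        super().__init__()
        self.ff_s = nn.Linear(2 * e, d + 1)       # stand-in for FF_S: directions + merge value

    def forward(self, X, k=4):                    # X: [b, s, e]
        b, s, e = X.shape
        pair = torch.cat([X[:, :-1], X[:, 1:]], dim=-1)            # adjacent token pairs
        out = self.ff_s(pair)                                      # [b, s-1, d+1]
        D = out[..., :-1]                                          # directions between neighbours
        M = torch.sigmoid(out[..., -1])                            # merge values in [0, 1], [b, s-1]

        # the centre-out cumsum of Fig. 6 reduces to differences of a running sum, so the
        # cumulative merge value between positions i and j is |cum[j] - cum[i]|
        cum = torch.cat([torch.zeros(b, 1, device=X.device), M.cumsum(dim=1)], dim=1)   # [b, s]
        idx = torch.arange(s, device=X.device)
        offsets = torch.arange(-(k // 2), k // 2 + 1, device=X.device)   # neighbours + self
        neigh = (idx[None, :] + offsets[:, None]).clamp(0, s - 1)        # [k+1, s]
        cost = (cum[:, neigh] - cum[:, idx][:, None, :]).abs()           # [b, k+1, s]

        W = (1.0 - cost).clamp(min=0.0)            # W' = max(0, 1 - M'): zero across boundaries
        W = W / W.sum(dim=1, keepdim=True)         # normalise the weights to sum to 1
        T = torch.einsum('bks,bkse->bse', W, X[:, neigh])                # entity embeddings
        return T, D, M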
A allows the network to adjust its contextual understanding of each token based on whether the article is on finance, sports, etc. Z is then passed through an Embed Update layer, giving an output Xu of shape [b, s, e]. Xu = Embed Update(concat(X′ s, R, D, A)) We therefore update each word vector using four pieces of information. The original word embedding, a direction to a different token/entity, the 5Tiling refers to simply repeating the tensor across both the sequence length s and kernel size k dimensions 5845 embedding of that different token/entity, and the article theme. Figure 8: Update Layer The use of directional vectors D in the Update Layer can be thought of as an alternative to the positional encodings in Transformer (Vaswani et al., 2017). That is, instead of updating each token embedding using neighbouring tokens embeddings with a positional encoding, we update using neighbouring token embeddings, and the directions to those tokens. 3 Implementation Details 3.1 Data Preprocessing 3.1.1 ACE 2005 ACE 2005 is a corpus of around 180K tokens, with 7 distinct entity labels. The corpus labels include nested entities, allowing us to compare our model to the nested NER literature. The dataset is not pre-tokenized, so we carry out sentence and word tokenization using NLTK. 3.1.2 OntoNotes OntoNotes v5.0 is the largest corpus available for NER, comprised of around 1.3M tokens, and 19 different entity labels. Although the labelling of the entities is not nested in OntoNotes, the corpus also includes labels for all noun phrases, which we train the network to identify concurrently. For training, we copy entities which are not contained within a larger nested entity onto higher levels, as shown in Figure 9. 3.1.3 Labelling For both datasets, during training, we replace all “B-” labels with their corresponding “I-” label. At evaluation, all predictions which are the first word in a merged entity have the “B-” added back on. As the trained model’s merging weights, M, can take any value between 0 and 1, we have to set a Figure 9: OntoNotes Labelling cutoff at eval time when deciding which words are in the same entity. We perform a grid search over cutoff values using the dev set, with a value of 0.75 proving optimal. 3.2 Loss function The model is trained to predict the correct merge decisions, held in the tensor M of dimension [b, s1, L] and the correct class labels given these decisions, C. The merge decisions are trained directly using the mean absolute error (MAE): MAEM = sum(|M −ˆ M|) (b ∗s ∗L) This is then weighted by a scalar wM, and added to the usual Cross Entropy (CE) loss from the predictions of the classes, CEC, giving a final loss function of the form: Loss = (wM ∗MAEM) + CEC In experiments we set the weight on the merge loss, wM to 0.5. 3.3 Evaluation Following previous literature, for both the ACE and OntoNotes datasets, we use a strict F1 measure, where an entity is only considered correct if both the label and the span are correct. 3.3.1 ACE 2005 For the ACE corpus, the default metric in the literature (Wang et al., 2018; Ju et al., 2018; Wang and Lu, 2018) does not include sequential ordering of nested entities (as many architectures do not have a concept of ordered nested outputs). As a result, an entity is considered correct if it is present in the target labels, regardless of which layer the model predicts it on. 3.3.2 OntoNotes NER models evaluated on OntoNotes are trained to label the 19 entities, and not noun phrases (NP). 
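The objective and the evaluation-time discretisation can be written compactly as below; the cross-entropy term assumes one gold label index per token and level, which simplifies the actual output handling.

import torch.nn.functional as F

def merge_and_label_loss(M_pred, M_gold, label_logits, label_gold, w_m=0.5):
    # M_pred, M_gold: [b, s-1, L] real-valued / 0-1 merge decisions
    # label_logits:   [b, s, L, n_classes]; label_gold: [b, s, L] gold label indices
    mae_m = (M_pred - M_gold).abs().mean()                      # MAE over all merge slots
    ce_c = F.cross_entropy(label_logits.flatten(0, 2), label_gold.flatten())
    return w_m * mae_m + ce_c

def entity_spans(merge_row, cutoff=0.75):
    # merge_row: the s-1 merge values of one level of one sequence; a value at or above
    # the cutoff is read as an entity boundary between tokens i and i+1
    spans, start = [], 0
    for i, m in enumerate(merge_row):
        if m >= cutoff:
            spans.append((start, i))
            start = i + 1
    spans.append((start, len(merge_row)))
    return spans                                                # inclusive token spans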
5846 To provide as fair as possible a comparison, we consequently flatten all labelled entities into a single column. As 96.5% of labelled entities in OntoNotes do not contain a NP nested inside, this applies to only 3.5% of the dataset. Figure 10: OntoNotes Targets The method used to flatten the targets is shown in Figure 10. The OntoNotes labels include a named entity (TIME), in the second column, with the NP “twenty-four” minutes nested inside. Consequently, we take the model’s prediction from the second column as our prediction for this entity. This provides a fair comparison to existing NER models, as all entities are included, and if anything, disadvantages our model, as it not only has to predict the correct entity, but do so on the correct level. That said, the NP labels provide additional information during training, which may give our model an advantage over flat NER models, which do not have access to these labels. 3.4 Training and HyperParameters We performed a small amount of hyperparameter tuning across dropout, learning rate, distance embedding size d, and number of update layers u. We set dropout at 0.1, the learning rate to 0.0005, d to 200, and u to 3. For full hyperparameter details see the supplementary materials. The number of levels, L, is set to 3, with a kernel size k of 10 on the first level, 20 on the second, and 30 on the third (we increase the kernel size gradually for computational efficiency as first level entities are extremely unlikely to be composed of more than 10 tokens, whereas higher level nested entities may be larger). Training took around 10 hours for OntoNotes, and around 6 hours for ACE 2005, on an Nvidia 1080 Ti. For experiments without language model (LM) embeddings, we used pretrained Glove embeddings (Pennington et al., 2014) of dimension 300. Following (Strubell et al., 2017), we added a “CAP features” embedding of dimension 20, denoting if each word started with a capital letter, was all capital letters, or had no capital letters. For the experiments with LM embeddings, we used the implementations of the BERT (Devlin et al., 2018) and ELMO (Peters et al., 2018) models from the Flair (Akbik et al., 2018) project6. We do not finetune the BERT and ELMO models, but take their embeddings as given. 4 Results 4.1 ACE 2005 On the ACE 2005 corpus, we begin our analysis of our model’s performance by comparing to models which do not use the POS tags as additional features, and which use non-contextual word embeddings. These are shown in the top section of Table 1. The previous state-of-the-art F1 of 72.2 was set by Ju et al. (2018), using a series of stacked BiLSTM layers, with CRF decoders on top of each of them. Our model improves this result with an F1 of 74.6 (avg. over 5 runs with std. dev. of 0.4). This also brings the performance into line with Wang et al. (2018) and Wang and Lu (2018), which concatenate embeddings of POS tags with word embeddings as an additional input feature. Model Pr. Rec. F1 Multigraph + MS (Muis and Lu, 2017) 69.1 58.1 63.1 RNN + hyp (Katiyar and Cardie, 2018) 70.6 70.4 70.5 BiLSTM-CRF stacked (Ju et al., 2018) 74.2 70.3 72.2 LSTM + forest [POS] (Wang et al., 2018) 74.5 71.5 73.0 Segm. 
hyp [POS] (Wang and Lu, 2018) 76.8 72.3 74.5 Merge and Label 75.1 74.1 74.6 LM embeddings Merge and Label [ELMO] 79.7 78.0 78.9 Merge and Label [BERT] 82.7 82.1 82.4 LM + OntoNotes DyGIE (Luan et al., 2019) 82.9 Table 1: ACE 2005 Given the recent success on many tasks using contextual word embeddings, we also evaluate performance using the output of pre-trained BERT (Devlin et al., 2018) and ELMO (Peters et al., 2018) models as input embeddings. This leads to a significant jump in performance to 78.9 with ELMO, and 82.4 with BERT (both avg. over 6https://github.com/zalandoresearch/flair/ 5847 5 runs with 0.4 and 0.3 std. dev. respectively), an overall increase of 8 F1 points from the previous state-of-the-art. Finally, we report the concurrently published result of Luan et al. (2019), in which they use ELMO embeddings, and additional labelled data (used to train the coreference part of their model and the entity boundaries) from the larger OntoNotes dataset. A secondary advantage of our architecture relative to those models which require construction of a hypergraph or CRF layer is its decoding speed, as decoding requires only a single forward pass of the network. As such it achieves a speed of 9468 words per second (w/s) on an Nvidia 1080 Ti GPU, relative to a reported speed of 157 w/s for the closest competitor model of Wang and Lu (2018), a sixty fold advantage. 4.2 OntoNotes As mentioned previously, given the caveats that our model is trained to label all NPs as well as entities, and must also predict the correct layer of an entity, the results in Table 2 should be seen as indicative comparisons only. Using non-contextual embeddings, our model achieves a test F1 of 87.59. To our knowledge, this is the first time that a nested NER architecture has performed comparably to BiLSTM-CRFs (Huang et al., 2015) (which have dominated the named entity literature for the last few years) on a flat NER task. Given the larger size of the OntoNotes dataset, we report results from a single iteration, as opposed to the average of 5 runs as in the case of ACE05. Model F1 BiLSTM-CRF (Chiu and Nichols, 2016) 86.28 ID-CNN (Strubell et al., 2017) 86.84 BiLSTM-CRF (Strubell et al., 2017) 86.99 Merge and Label 87.59 LM embeddings or extra data BiLSTM-CRF lex (Ghaddar and Langlais, 2018) 87.95 BiLSTM-CRF with CVT (Clark et al., 2018) 88.81 Merge and Label [BERT] 89.20 BiLSTM-CRF Flair (Akbik et al., 2018) 89.71 Table 2: OntoNotes NER We also see a performance boost from using BERT embeddings, pushing the F1 up to 89.20. This falls slightly short of the state-of-the-art on this dataset, achieved using character-based Flair (Akbik et al., 2018) contextual embeddings. 5 Ablations To better understand the results, we conducted a small ablation study. The affect of including the Static Layer in the architecture is consistent across both datasets, yielding an improvement of around 2 F1 points; the updating of the token embeddings based on context seems to allow better merge decisions for each pair of tokens. Next, we look at the method used to update entity embeddings prior to combination into larger entities in the Structure Layer. In the described architecture, we use the Embed Update mechanism (see Figure 7), allowing embeddings to be changed dependent on which other embeddings they are about to be combined with. We see that this yields a significant improvement on both tasks of around 4 F1 points, relative to passing each embedding through a linear layer. 
The inclusion of an “article theme” embedding, used in the Update Layer, has little effect on the ACE05 data. but gives a notable improvement for OntoNotes. Given that the distribution of types of articles is similar for both datasets, we suggest this is due to the larger size of the OntoNotes set allowing the model to learn an informative article theme embedding without overfitting. Next, we investigate the impact of allowing the model to attend to tokens in neighbouring sentences (we use a set kernel size of 30, allowing each token to consider up to 15 tokens prior and 15 after, regardless of sentence boundaries). Ignoring sentence boundaries boosts the results on ACE05 by around 4 F1 points, whilst having a smaller affect on OntoNotes. We hypothesize that this is due to the ACE05 task requiring the labelling of pronominal entities, such as “he” and “it”, which is not required for OntoNotes. The coreference needed to correctly label their type is likely to require context beyond the sentence. 6 Discussion 6.1 Entity Embeddings As our architecture merges multi-word entities, it not only outputs vectors of each word, but also for all entities - the tensor T. To demonstrate this, Table 3 shows the ten closest entity vectors in the OntoNotes test data to the phrases “the United Kingdom”, “Arab Foreign Ministers” and “Israeli 5848 the United Kingdom Arab Foreign Ministers Israeli Prime Minister Ehud Barak the United States Palestinian leaders Italian President Francesco Cossiga the Tanzania United Republic Yemeni authorities French Foreign Minister Hubert Vedrine the Soviet Union Palestinian security officials Palestinian leader Yasser Arafat the United Arab Emirates Israeli officials Iraqi leader Saddam Hussein the Hungary Republic Canadian auto workers Likud opposition leader Ariel Sharon Myanmar Palestinian sources UN Secretary General KofiAnnan Shanghai many Jewish voters Russian President Vladimir Putin China Lebanese Christian lawmakers Syrian Foreign Minister Faruq al - Shara Syria Israeli and Palestinian negotiators PLO leader Arafat the Kyrgystan Republic A Canadian bank Libyan leader Muammar Gaddafi Table 3: Entity Embeddings Nearest Neighbours ACE05 OntoNotes Static Layer with 74.6 87.59 without 73.1 85.22 Embed Combination Linear 70.2 83.96 Embed Update 74.6 87.59 Article Embedding with 74.5 87.59 without 74.6 85.60 Sentence boundaries with 70.8 86.30 without 74.6 87.59 Table 4: Architecture Ablations Prime Minister Ehud Barak”.7 Given that the OntoNotes NER task considers countries and cities as GPE (Geo-Political Entities), the nearest neighbours in the left hand column are expected. The nearest neighbours of “Arab Foreign Ministers” and “Israeli Prime Minister Ehud Barak” are more interesting, as there is no label for groups of people or jobs for the task.8 Despite this, the model produces good embedding-based representations of these complex higher level entities. 6.2 Directional Embeddings The representation of the relationship between each pair of words/entities as a vector is primarily a mechanism used by the model to update the word/entity vectors. However, the resulting vectors, corresponding to output D of the Structure Layer, may also provide useful information for 7Note that we exclude from the 10 nearest neighbours identical entities from higher levels. I.e. if “the United Kingdom” is kept as a three token entity, and not merged into a larger entity on higher levels, we do not report the same phrase from all levels in the nearest neighbours. 
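The nearest-neighbour lists in Table 3 (and the directional-embedding lookups discussed next) amount to a cosine-similarity search over the collected vectors; a minimal version, assuming the vectors have been stacked into a matrix alongside their surface strings, is:

import torch

def nearest(vectors, names, query, top=10):
    # vectors: [n, dim] stacked entity or directional embeddings; names: surface strings
    unit = vectors / vectors.norm(dim=1, keepdim=True)
    sims = unit @ unit[names.index(query)]
    order = sims.argsort(descending=True).tolist()
    # identical surface forms from higher levels are filtered out, as in footnote 7
    return [(names[i], round(sims[i].item(), 3)) for i in order if names[i] != query][:top]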
8The phrase “Israeli Prime Minister Ehud Barak” would have “Israeli” labelled as NORP, and “Ehud Barak” labelled as PERSON in the OntoNotes corpus. downstream tasks such as knowledge base population. To demonstrate the directional embeddings, Table 5 shows the ten closest matches for the direction between “the president” and “the People’s Bank of China”. The network has clearly picked up on the relationship of an employee to an organisation. the president →the People’s Bank of China the chairman →the SEC Vice Minister →the Ministry of Foreign Affairs Chairman →the People’s Association of Taiwan Deputy Chairman →the TBAD Women’s Division Chairman →the KMT Vice President →the Military Commission of the CCP vice-chairman →the CCP Associate Justices→the Supreme Court of the United States Chief Editor →Taiwan’s contemporary monthly General Secretary →the Communist Party of China Table 5: Directional Embeddings Nearest Neighbours Table 5 also provides further examples of the network merging and providing intuitive embeddings for multi-word entities. 7 Conclusion We have presented a novel neural network architecture for smoothly merging token embeddings in a sentence into entity embeddings, across multiple levels. The architecture performs strongly on the task of nested NER, setting a new state-of-the-art F1 score by close to 8 F1 points, and is also competitive at flat NER. Despite being trained only for NER, the architecture provides intuitive embeddings for a variety of multi-word entities, a step which we suggest could prove useful for a variety of downstream tasks, including entity linking and coreference resolution. Acknowledgments Andreas Vlachos is supported by the EPSRC grant eNeMILP (EP/R021643/1). 5849 References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Association for Computational Linguistics. Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training. EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Abbas Ghaddar and Philippe Langlais. 2018. Robust lexical features for improved neural network namedentity recognition. CoRR, abs/1806.03489. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459. Association for Computational Linguistics. Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871. Association for Computational Linguistics. Yoon Kim. 2014. 
Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). G¨unter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-normalizing neural networks. CoRR, abs/1706.02515. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867. Association for Computational Linguistics. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. CoRR, abs/1904.03296. Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 188–191. Association for Computational Linguistics. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608–2618. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In In EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate sequence labeling with iterated dilated convolutions. EMNLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204–214. Association for Computational Linguistics. Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1011–1017. Association for Computational Linguistics. 5850 A Supplemental Material A.1 HyperParameters In addition to the hyperparameters recorded in the main paper, there are a large number of additional hyperparameters which we kept constant throughout experiments. The feedforward NN in the Static Layer, FFs, has two hidden layers each of dimension 200. The NN in the Embed Update layer, FFEU has two hidden layers, each of dimension 320. The output NN has one hidden layer of dimension 200. Aside from FFEU, which is initialized using the identity function as described in Supplementary section A.2, all parameters of networks are initialized from the uniform distribution between -0.1 and 0.1. The article theme size, a, is set to 50. All network layers use the SELU activation function of (Klambauer et al., 2017). The kernel size k for the Static Layer is set to 6, allowing each token to attend the 3 tokens either side. 
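For convenience, the fixed hyperparameter values stated in Section 3.4 and in this appendix are collected below; the dictionary is only a summary for readability, not part of the released code.

HPARAMS = {
    "dropout": 0.1,
    "learning_rate": 5e-4,
    "direction_dim_d": 200,
    "update_layers_u": 3,
    "levels_L": 3,
    "kernel_size_k_per_level": [10, 20, 30],
    "static_layer_kernel_k": 6,
    "article_theme_dim_a": 50,
    "ff_s_hidden": [200, 200],
    "ff_eu_hidden": [320, 320],
    "output_hidden": [200],
    "activation": "SELU",
    "init_uniform_range": (-0.1, 0.1),        # all layers except FF_EU (identity init, A.2)
    "merge_loss_weight_w_m": 0.5,
    "merge_cutoff": 0.75,
    "word_embeddings": "GloVe-300 + CAP feature (dim 20)",
}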
On the OntoNotes Corpus, we train for 60 epochs, and half the learning rate every 12 epochs. On ACE 2005, we train for 150 epochs, and half the learning rate every 30 epochs. We train with a maximum batch dimension of 900 tokens. Articles longer than length 900 are split and processed in separate batches. We train using the Adam Optimizer, and, in addition to the dropout of 0.1, we apply a dropout to the Glove/LM embeddings of 0.2. A.2 Identity initialization Figure 11 gives a minimum working example of identity initialization of FFEU. The embedding for “The” is [1.1, 0.5], and that for “President” is [1.1, -0.3]. Through the unfold ops, we’ll end up with the two embeddings concatenated together. Figure 11 shows FFEU as having just one layer with no activation function to demonstrate the effect of the identity initialization. The first two dimensions of the output are the embedding for “The” with no changes. The final output (in light green) is the weighting. Figure 11: Update mechanism In reality, the zeros in the weights tensor are initialized to very small random numbers (we use a uniform initialization between -0.01 and 0.01), so that during training FFEU learns to update the embedding for “The” using the information that it is one step before the word “President”. A.3 Formation of outputs R and D in Structure Layer Outputs R and D of the Structure Layer have dimensions [b,s, k, e] and [b, s, k, d] respectively. These outputs are a weighted average of the directional and embedding outputs from the L levels of the structure layer. We use the weights, W ′, (see Figure 6) to form the weighted average: D = L X l=1 W ′ l Dl In the case of the weighted average for the embedding tensor, R, we use the weights from the next level. R = L X l=1 W ′ l+1Rl As a result, when updating, each token “sees” information from tokens/entities on other levels dependent on whether or not they are in the same entity. For the intuition behind this, we use the example phrase “The United Kingdom government” from Figure 6. The model should output merge values M which group the tokens “The United Kingdom” on the first level, and then group all the tokens on the second level. If this is the case, then for the token “United”, R and D will hold the embedding of/directions to the tokens “The” and “Kingdom” in their disaggregated (unmerged) form. However, for the token “government”, R and D will hold embeddings of/ directions to the combined entity “the United Kingdom” in each of the three slots for “The”, “United” and “Kingdom”. Because “government” is not in the same entity as “The United Kingdom” on the first level, it “sees” the aggregated embedding of this entity. Intuitively, this allows the token “government” to update in the model based on the information that it has a country one step to the left of it, as opposed to having three separate tokens, one, two and three steps to the left respectively. Note that as with the entity merging, there are no hard decisions during training, with this effect based on the real valued merge tensor M, to allow differentiability.
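Returning to the identity initialisation of A.2, a single-layer version (the actual FF_EU has two hidden layers, so the same idea is applied to a deeper network) can be written as:

import torch
import torch.nn as nn

def identity_init(linear, e, low=-0.01, high=0.01):
    # initialise a Linear(in_dim, e + 1) so that, before training, its first e output
    # dimensions (up to the small random initial weights) copy the first e input
    # dimensions, i.e. the token's own embedding, as in Fig. 11
    with torch.no_grad():
        linear.weight.uniform_(low, high)
        linear.bias.uniform_(low, high)
        linear.weight[:e, :e] += torch.eye(e)
    return linear

# single-layer stand-in for FF_EU over a concat(unfold_from, unfold_to) input of width 2*e
e = 300
ff_eu = identity_init(nn.Linear(2 * e, e + 1), e)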
2019
585
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5851–5861 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5851 Low-resource Deep Entity Resolution with Transfer and Active Learning Jungo Kasai♥∗ Kun Qian♣ Sairam Gurajada♣ Yunyao Li♣ Lucian Popa♣ ♥Paul G. Allen School of Computer Science & Engineering, University of Washington ♣IBM Research – Almaden [email protected] {qian.kun,Sairam.Gurajada}@ibm.com {yunyaoli,lpopa}@us.ibm.com Abstract Entity resolution (ER) is the task of identifying different representations of the same real-world entities across databases. It is a key step for knowledge base creation and text mining. Recent adaptation of deep learning methods for ER mitigates the need for dataset-specific feature engineering by constructing distributed representations of entity records. While these methods achieve stateof-the-art performance over benchmark data, they require large amounts of labeled data, which are typically unavailable in realistic ER applications. In this paper, we develop a deep learning-based method that targets lowresource settings for ER through a novel combination of transfer learning and active learning. We design an architecture that allows us to learn a transferable model from a highresource setting to a low-resource one. To further adapt to the target dataset, we incorporate active learning that carefully selects a few informative examples to fine-tune the transferred model. Empirical evaluation demonstrates that our method achieves comparable, if not better, performance compared to state-of-the-art learning-based methods while using an order of magnitude fewer labels. 1 Introduction Entity Resolution (ER), also known as entity matching, record linkage (Fellegi and Sunter, 1969), reference reconciliation (Dong et al., 2005), and merge-purge (Hern´andez and Stolfo, 1995), identifies and links different representations of the same real-world entities. ER yields a unified and consistent view of data and serves as a crucial step in downstream applications, including knowledge base creation, text mining (Zhao et al., 2014), and social media analysis (Campbell ∗Work done during summer internship at IBM Research – Almaden. et al., 2016). For instance, seen in Table 1 are citation data records from two databases, DBLP and Google Scholar. If one intends to build a system that analyzes citation networks of publications, it is essential to recognize publication overlaps across the databases and to integrate the data records (Pasula et al., 2002). Recent work demonstrated that deep learning (DL) models with distributed representations of words are viable alternatives to other machine learning algorithms, including support vector machines and decision trees, for performing ER (Ebraheem et al., 2018; Mudgal et al., 2018). The DL models provide a universal solution to ER across all kinds of datasets that alleviates the necessity of expensive feature engineering, in which a human designer explicitly defines matching functions for every single ER scenario. However, DL is well known to be data hungry; in fact, the DL models proposed in Ebraheem et al. (2018); Mudgal et al. (2018) achieve state-of-the-art performance by learning from thousands of labels.1 Unfortunately, realistic ER tasks have limited access to labeled data and would require substantial labeling effort upfront, before the actual learning of the ER models. 
Creating a representative training set is especially challenging in ER problems due to the data distribution, which is heavily skewed towards negative pairs (i.e. non-matches) as opposed to positive pairs (i.e. matches). This problem limits the applicability of DL methods in low-resource ER scenarios. Indeed, we will show in a later section that the performance of DL models degrades significantly as compared to other machine learning algorithms when only a limited amount of labeled data is available. To address this issue, we propose a DLbased method that combines transfer learning and 117k labels were used for the DBLP-Scholar scenario. 5852 DBLP Authors Title Venue Year M Carey, D Dewitt, J Naughton, M Asgarian, P Brown, J Gehrke, D Shah The Bucky Object-relational Benchmark (Experience Paper) SIGMOD Conference 1997 A Netz, S Chaudhuri, J Bernhardt, U Fayyad Integration of Data Mining with Database Technology VLDB 2000 Google Scholar Authors Title Venue Year MJ Carey, DJ Dewitt, JF Naughton, M Asgarian, P The Bucky Object Relational Benchmark Proceedings of the SIGMOD Conference on Management of Data NULL A Netz, S Chaudhuri, J Bernhardt, U Fayyad Integration of Data Mining and Relational Databases Proc. 2000 Table 1: Data record examples from DBLP-Scholar (citation genre). The first records from DBLP and Google Scholar (red) refer to the same publication even though the information is not identical. The second ones (blue and brown) record different papers with the same authors and year. active learning. We first develop a transfer learning methodology to leverage a few pre-existing scenarios with abundant labeled data, in order to use them in other settings of similar nature but with limited or no labeled data. More concretely, through a carefully crafted neural network architecture, we learn a transferable model from multiple source datasets with cumulatively abundant labeled data. Then we use active learning to identify informative examples from the target dataset to further adapt the transferred model to the target setting. This novel combination of transfer and active learning in ER settings enables us to learn a comparable or better performing DL model while using significantly fewer target dataset labels in comparison to state-of-the-art DL and even non-DL models. We also note that the two techniques are not dependent on each other. For example, one could skip transfer learning if no high-resource dataset is available and directly use active learning. Conversely, one could use transfer learning directly without active learning. We evaluate these cases in the experiments. Specifically, we make the following contributions: • We propose a DL architecture for ER that learns attribute agnostic and transferable representations from multiple source datasets using dataset (domain) adaptation. • To the best of our knowledge, we are the first to design an active learning algorithm for deep ER models. Our active learning algorithm searches for high-confidence examples and uncertain examples, which provide a guided way to improve the precision and recall of the transferred model to the target dataset. • We perform extensive empirical evaluations over multiple benchmark datasets and demonstrate that our method outperforms state-ofthe-art learning-based models while using an order of magnitude fewer labels. 2 Background and Related Work 2.1 Entity Resolution Let D1 and D2 be two collections of entity records. 
The task of ER is to classify the entity record pair ⟨e1, e2⟩, ∀e1 ∈D1, e2 ∈D2, into a match or a non-match. This is accomplished by comparing entity record e1 to e2 on their corresponding attributes. In this paper, we assume records in D1 and D2 share the same schema (set of attributes). In cases where they have different attributes, one can use schema matching techniques (Rahm and Bernstein, 2001) to first align the schemas, followed by data exchange techniques (Fagin et al., 2009). Each attribute value is a sequence of words. Table 1 shows examples of data records from an ER scenario, DBLP-Scholar (K¨opcke et al., 2010) from the citation genre and clearly depicts our assumption of datasets handled in this paper. Since the entire Cartesian product D1 × D2 often becomes large and it is infeasible to run a high-recall classifier directly, we typically decompose the problem into two steps: blocking and matching. Blocking filters out obvious nonmatches from the Cartesian product to obtain a candidate set. Attribute-level or record-level tf-idf and jaccard similarity can be used for blocking criteria. For example, in the DBLP-Scholar scenario, one blocking condition could be based on applying equality on “Year”. Hence, two publications in different years will be considered as obvious nonmatches and filtered out from the candidate set. Then, the subsequent matching phase classifies the candidate set into matches and non-matches. 5853 Figure 1: Deep ER model architecture with dataset adaptation via gradient reversal. Only two attributes are shown. Ws indicate word vectors. 2.2 Learning-based Entity Resolution As described above, after the blocking step, ER reduces to a binary classification task on candidate pairs of data records. Prior work has proposed learning-based methods that train classifiers on training data, such as support vector machines, naive bayes, and decision trees (Christen, 2008; Bilenko and Mooney, 2003). These learningbased methods first extract features for each record pair from the candidate set across attributes in the schema, and use them to train a binary classifier. The process of selecting appropriate classification features is often called feature engineering and it involves substantial human effort in each ER scenario. Recently, Ebraheem et al. (2018) and Mudgal et al. (2018) have proposed deep learning models that use distributed representations of entity record pairs for classification. These models benefit from distributed representations of words and learn complex features automatically without the need for dataset-specific feature engineering. 3 Deep ER Model Architecture We describe the architecture of our DL model that classifies each record pair in the candidate set into a match or a non-match. As shown in Fig. 1, our model encompasses a sequence of steps that computes attribute representations, attribute similarity and finally the record similarity for each input pair ⟨e1, e2⟩. A matching classifier uses the record similarity representation to classify the pair. For an extensive list of hyperparameters and training details we chose, see the appendix. Input Representations. For each entity record pair ⟨e1, e2⟩, we tokenize the attribute values and vectorize the words by external word embeddings to obtain input representations (Ws in Fig. 1). We use the 300 dimensional fastText embeddings (Bojanowski et al., 2017), which capture subword information by producing word vectors via character n-grams. 
This vectorization has the benefit of well representing out-of-vocabulary words (Bojanowski et al., 2017) that frequently appear in ER attributes. For instance, venue names SIGMOD and ACL are out of vocabulary in the publicly available GloVe vectors (Pennington et al., 2014), but we clearly need to distinguish them. Attribute Representations. We build a universal bidirectional RNN on the word input representations of each attribute value and obtain attribute vectors (attr1 and attr2 in Fig. 1) by concatenating the last hidden units from both directions. Crucially, the universal RNN allows for transfer learning between datasets of different schemas without error-prone schema mapping. We found that gated recurrent units (GRUs, Cho et al. (2014)) yielded the best performance on the dev set as compared to simple recurrent neural networks (SRNNs, Elman (1990)) and Long Short-Term Memory networks (LSTMs, Hochreiter and Schmidhuber (1997)). We also found that using BiGRU with multiple layers did not help, and we will use one-layer BiGRUs with 150 hidden units throughout the experiments below. Attribute Similarity. The resultant attribute representations are then used to compare attributes of each entity record pair. In particular, we compute the element-wise absolute difference between the two attribute vectors for each attribute and construct attribute similarity vectors (sim1 and sim2 in Fig. 1). We also considered other comparison mechanisms such as concatenation and elementwise multiplication, but we found that absolute difference performs the best in development, and we will report results from absolute difference. Record Similarity. Given the attribute similarity vectors, we now combine those vectors to represent the similarity between the input entity record pair. Here, we take a simple but effective approach of adding all attribute similarity vectors (sim in Fig. 1). This way of combining vectors ensures that the final similarity vector is of the same dimensionality regardless of the number of attributes and facilitates transfer of all the subsequent parameters. For instance, the DBLP-Scholar and 5854 Cora2 datasets have four and eight attributes respectively, but the networks can share all weights and biases between the two. We also tried methods such as max pooling and average pooling, but none of them outperformed the simple addition method. Matching Classification. We finally feed the similarity vector for the two records to a two-layer multilayer perceptron (MLP) with highway connections (Srivastava et al., 2015) and classify the pair into a match or a non-match (“Matching Classifier” in Fig. 1). The output from the final layer of the MLP is a two dimensional vector and we normalize it by the softmax function to obtain a probability distribution. We will discuss dataset adaptation for transfer learning in the next section. Training Objectives. We train the networks to minimize the negative log-likelihood loss. We use the Adam optimization algorithm (Kingma and Ba, 2015) with batch size 16 and an initial learning rate of 0.001, and after each epoch we evaluate our model on the dev set. Training terminates after 20 epochs, and we choose the model that yields the best F1 score on the dev set and evaluate the model on the test data. 4 Deep Transfer Active Learning for ER We introduce two orthogonal frameworks for our deep ER models in low resource settings: transfer and active learning. 
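Before detailing these two frameworks, a minimal sketch of the Section 3 encoder that both of them build on may be helpful. It assumes PyTorch, stands in random tensors for the fastText word vectors, folds the softmax into the cross-entropy loss, and omits the highway connections and the adversarial dataset classifier; it is a sketch of the design, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DeepERSketch(nn.Module):
    """Minimal sketch of the Section 3 architecture: a universal per-attribute
    BiGRU, absolute-difference attribute similarity, summed record similarity,
    and an MLP matching classifier (highway connections omitted)."""

    def __init__(self, emb_dim=300, hidden=150, mlp_dim=300):
        super().__init__()
        # One BiGRU shared across all attributes, which is what enables
        # transfer between datasets with different schemas.
        self.gru = nn.GRU(emb_dim, hidden, num_layers=1,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, mlp_dim), nn.ReLU(),
            nn.Linear(mlp_dim, 2),
        )

    def encode_attr(self, word_vecs):
        # word_vecs: [batch, seq_len, emb_dim] (e.g. fastText vectors)
        _, h = self.gru(word_vecs)               # h: [2, batch, hidden]
        return torch.cat([h[0], h[1]], dim=-1)   # last states, both directions

    def forward(self, attrs1, attrs2):
        # attrs1 / attrs2: lists (one entry per attribute) of [batch, seq, emb_dim]
        sims = [torch.abs(self.encode_attr(a1) - self.encode_attr(a2))
                for a1, a2 in zip(attrs1, attrs2)]
        record_sim = torch.stack(sims, dim=0).sum(dim=0)  # sum over attributes
        return self.classifier(record_sim)                # logits: match / non-match

# Toy usage: a batch of 2 record pairs with 4 attributes of 7 tokens each
model = DeepERSketch()
left  = [torch.randn(2, 7, 300) for _ in range(4)]
right = [torch.randn(2, 7, 300) for _ in range(4)]
logits = model(left, right)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 0]))
```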
We also introduce the notion of likely false positives and likely false negatives, and provide a principled active labeling method in the context of deep ER models, which contributes to stable and high performance. 4.1 Adversarial Transfer Learning The architecture described above allows for simple transfer learning: we can train all parameters in the network on source data and use them to classify a target dataset. However, this method of transfer learning can suffer from dataset-specific properties. For example, the author attribute in the DBLP-ACM dataset contains first names while that in the DBLP-Scholar dataset only has first initials. In such situations, it becomes crucial to construct network representations that are invariant with respect to idiosyncratic properties of datasets. To this end, we apply the technique of dataset (domain) adaptation developed in image recognition 2http://www.cs.umass.edu/mccallum/ data/cora-refs.tar (Ganin and Lempitsky, 2015). In particular, we build a dataset classifier with the same architecture as the matching classifier (“Dataset Classifier” in Fig. 1) that predicts which dataset the input pair comes from. We replace the training objective by the sum of the negative log-likelihood losses from the two classifiers. We add a gradient reversal layer between the similarity vector and the dataset classifier so that the parameters in the dataset classifier are trained to predict the dataset while the rest of the network is trained to mislead the dataset classifier, thereby developing dataset-independent internal representations. Crucially, with dataset adaptation, we feed pairs from the target dataset as well as the source to the network. For the pairs from the target, we disregard the loss from the matching classifier. 4.2 Active Learning Since labeling a large number of pairs for each ER scenario clearly does not scale, prior work in ER has adopted active learning as a more guided approach to select examples to label (Tejada et al., 2001; Sarawagi and Bhamidipaty, 2002; Arasu et al., 2010; de Freitas et al., 2010; Isele and Bizer, 2013; Qian et al., 2017). Designing an effective active learning algorithm for deep ER models is particularly challenging because finding informative examples is very difficult (especially for positive examples due to the extremely low matching ratio in realistic ER tasks), and we need more than a handful of both negative and positive examples in order to tune a deep ER model with many parameters. To address this issue, we design an iterative active learning algorithm (Algorithm 1) that searches for two different types of examples from unlabeled data in each iteration: (1) uncertain examples including likely false positives and likely false negatives, which will be labeled by human annotators; (2) high-confidence examples including high-confidence positives and high-confidence negatives. We will not label high-confidence examples and use predicted labels as a proxy. We will show below that those carefully selected examples serve different purposes. Uncertain examples and high-confidence examples are characterized by the entropy of the conditional probability distribution given by the current model. Let K be the sampling size and the unlabeled dataset consisting of candidate record 5855 pairs be DU = {xi}N i=1. Denote the probability that record pair xi is a match according to the current model by p(xi). 
Then, the conditional entropy of the pair H (xi) is computed by: −p(xi) log p(xi) −(1 −p(xi)) log(1 −p(xi)) Uncertain examples and high-confidence examples are associated with high and low entropy. Given this notion of uncertainty and high confidence, one can simply select record pairs with top K entropy as uncertain examples and those with bottom K entropy as high-confidence examples. Namely, take argmax D⊆DU|D|=K X x∈D H(x), argmin D⊆DU|D|=K X x∈D H(x) as sets of uncertain and high-confidence examples respectively. However, these simple criteria can introduce an unintended bias toward a certain direction, resulting in unstable performance. For example, uncertain examples selected solely on the basis of entropy can sometimes contain substantially more negative examples than positive ones, leading the network to a solution with low recall. To address this instability problem, we propose a partition sampling mechanism. We first partition the unlabeled data DU into two subsets: D U and DU, consisting of pairs that the model predicts as matches and non-matches respectively. Namely, D U = {x ∈DU|p(x) ≥0.5}, DU = {x ∈ DU|p(x) < 0.5}. Then, we pick top/bottom k = K/2 examples from each subset with respect to entropy. Uncertain examples are now: argmax D⊆DU|D|=k X x∈D H(x), argmax D⊆DU|D|=k X x∈D H(x) where the two criteria select likely false positives and likely false negatives respectively. Likely false positives and likely false negatives are useful for improving the precision and recall of ER models (Qian et al., 2017). However, the deep ER models do not have explicit features, and thus we use entropy to identify the two types of examples in contrast to the feature-based method used in Qian et al. (2017). High-confidence examples are identified by: argmin D⊆DU|D|=k X x∈D H(x), argmin D⊆DU|D|=k X x∈D H(x) where the two criteria correspond to highconfidence positives and high-confidence negatives respectively. These sampling criteria equally partition uncertain examples and high-confidence examples into different categories. We will show that the partition mechanism contributes to stable and better performance in a later section. Algorithm 1 Deep Transfer Active Learning Require: Unlabeled data DU, sampling size K, batch size B, max. iteration number T, max. number of epochs I. Ensure: Denote the deep ER parameters and the set of labeled examples by W and DL respectively. Update(W, DL, B) denotes a parameter update function that optimizes the negative log-likelihood of the labeled data DL with batch size B. Set k = K/2. 1: Initialize W via transfer learning. Initialize also DL = ∅ 2: for t ∈{1, 2, ..., T} do 3: Select k likely false positives and k likely false negatives from DU and remove them from DU. Label those examples and add them to DL. 4: Select k high-confidence positives and k highconfidence negatives from DU and add them with positive and negative labels to DL. 5: for t ∈{1, 2, ..., I} do 6: W ←Update(W, DL, B) 7: Run deep ER model on DL with W and get the F1 score. 8: if the F1 score improves then 9: Wbest ←W 10: end if 11: end for 12: W ←Wbest 13: end for 14: return W High-confidence examples prevent the network from overfitting to selected uncertain examples (Wang et al., 2017). Moreover, they can give the DL model more labeled data without actual manual effort. Note that we avoid using any entropy level thresholds to select examples, and instead fix the number of examples. In contrast, the active learning framework for neural network image recognition in Wang et al. 
(2017) uses entropy thresholds. Such thresholds necessitate fine-tuning for each target dataset: Wang et al. (2017) use different thresholds for different image recognition datasets. However, since we do not have sufficient labeled data for the target in low-resource ER problems, the necessity of finetuning thresholds would undermine the applicability of the active learning framework. 5856 dataset genre size matches attr DBLP-ACM citation 12,363 2,220 4 DBLP-Scholar citation 28,707 5,347 4 Cora citation 50,000 3,969 8 Fodors-Zagats restaurant 946 110 6 Zomato-Yelp restaurant 894 214 4 Amazon-Google software 11,460 1,167 3 Table 2: Post-blocking statistics of the ER datasets we used. (attr denotes the number of attributes.) 5 Experiments 5.1 Experimental Setup For all datasets, we first conduct blocking to reduce the Cartesian product to a candidate set. Then, we randomly split the candidate set into training, development, and test data with a ratio of 3:1:1. For the datasets used in Mudgal et al. (2018) (DBLP-ACM, DBLP-Scholar, Fodors-Zagats, and Amazon-Google), we adopted the same feature-based blocking strategies and random splits to ensure comparability with the state-of-the-art method. The candidate set of Cora was obtained by randomly sampling 50,000 pairs from the result of the jaccard similarity-based blocking strategy described in Wang et al. (2011). The candidate set of Zomato-Yelp was taken from Das et al. (2016).3 All dataset statistics are given in Table 2. For evaluation, we compute precision, recall, and F1 score on the test sets. In the active learning experiments, we hold out the test sets a priori and sample solely from the training data to ensure fair comparison with non-active learning methods. The sampling size K for active learning is 20. As preprocessing, we tokenize with NLTK (Bird et al., 2009) and lowercase all attribute values. For every configuration, we run experiments with 5 random initializations and report the average. Our DL models are all implemented using the publicly available deepmatcher library.4 5.2 Baselines We establish baselines using a state-of-the-art learning-based ER package, Magellan (Konda et al., 2016). We experimented with the following 6 learning algorithms: Decision Tree, SVM, Ran3We constructed Zomato-Yelp by merging Restaurants 1 and 2, which are available in Das et al. (2016). Though the two datasets share the same source, their schemas slightly differ: Restaurants 1 has an address attribute that contains zip code, while Restaurants 2 has a zip code attribute and an address attribute. We put a null value for the zip code attribute in Restaurants 1 and avoid merging errors. 4https://github.com/anhaidgroup/ deepmatcher 0 1000 2000 3000 4000 5000 6000 7000 92 93 94 95 96 97 98 # Labeled Training examples F1 Deep Learning Decision Tree SVM Random Forest Naive Bayes Logistic Regression Linear Regression Figure 2: Performance vs. data size (DBLP-ACM). dom Forest, Naive Bayes, Logistic Regression, and Linear Regression. We use the same feature set as in Mudgal et al. (2018). See the appendix for extensive lists of features chosen. 5.3 Results and Discussions Model Performance and Data Size. Seen in Fig. 2 is F1 performance of different models with varying data size on DBLP-ACM. The DL model improves dramatically as the data size increases and achieves the best performance among the 7 models when 7000 training examples are available. In contrast, the other models suffer much less from data scarcity with an exception of Random Forest. 
We observed similar patterns in DBLP-Scholar and Cora. These results confirm our hypothesis that deep ER models are data-hungry and require a lot of labeled data to perform well. Transfer Learning. Table 3 shows results from our transfer learning framework when used in isolation (i.e., without active learning, which we will discuss shortly). Our dataset adaptation method substantially ameliorates performance when the target is DBLP-Scholar (from 41.03 to 53.84 F1 points) or Cora (from 38.3 to 43.13 F1 points) and achieves the same level of performance on DBLP-ACM. Transfer learning with our dataset adaptation technique achieves a certain level of performance without any target labels, but we still observe high variance in performance (e.g. 6.21 standard deviation in DBLP-Scholar) and a huge discrepancy between transfer learning and training directly on the target dataset. To build a reliable and stable ER model, a certain amount of target labels may be necessary, which leads us to apply our active learning framework. Active Learning. Fig. 3 shows results from our active learning as well as the 7 algorithms trained on labeled examples of corresponding size that are 5857 Target DBLP-ACM DBLP-Scholar Cora Method Prec Recall F1 Prec Recall F1 Prec Recall F1 Train on Source 86.98 98.38 92.32±1.15 73.41 43.20 41.03±6.33 92.54 24.22 38.30±3.77 +Adaptation 88.71 96.21 92.31±1.36 88.06 39.03 53.84±6.21 40.64 52.16 43.13±3.62 Train on Target 98.30 98.60 98.45±0.22 92.72 93.08 92.94±0.47 98.01 99.37 98.68±0.26 Mudgal et al. (2018) – – 98.4 – – 93.3 – – – Table 3: Transfer learning results (citation genre). We report standard deviations of the F1 scores. For each target dataset, the source is given by the other two datasets (e.g., the source for DBLP-ACM is DBLP-Scholar and Cora.) 0 100 200 300 400 82 84 86 88 90 92 94 96 98 # Labeled Training examples F1 scores (a) DBLP-ACM 0 200 400 600 800 1000 60 70 80 90 # Labeled Training examples (b) DBLP-Scholar 0 200 400 600 800 1000 20 30 40 50 60 70 80 90 100 # Labeled Training examples (c) Cora Deep Transfer Active Deep Active Deep Learning Decision Tree SVM Random Forest Naive Bayes Logistic Regression Linear Regression Figure 3: Low-resource performances on different datasets. randomly sampled.5 Deep transfer active learning (DTAL) initializes the network parameters by transfer learning whereas deep active learning (DAL) starts with a random initialization. We can observe that DTAL models remedy the data scarcity problem as compared to DL models with random sampling in all three datasets. DAL can achieve competitive performance to DTAL at the expense of faster convergence. Seen in Table 4 is performance comparison of different algorithms in low-resource and highresource settings. (We only show the SVM results since SVM performed best in each configuration among the 6 non-DL algorithms.) First, deep transfer active learning (DTAL) achieves the best performance in the low-resource setting of each dataset. In particular, DTAL outperforms the others to the greatest degree in Cora (97.68 F1 points) probably because Cora is the most complex dataset with 8 attributes in the schema. NonDL algorithms require many interaction features, which lead to data sparsity. Deep active learning (DAL) also outperforms SVM and yields comparable performance to DTAL. However, the standard deviations in performance of DAL are substantially higher than those of DTAL (e.g. 4.15 5We average the results over 5 random samplings. vs. 
0.33 in DBLP-ACM), suggesting that transfer learning provides useful initializations for active learning to achieve stable performance. One can argue that DTAL performs best in the low-resource scenario, but the other algorithms can also boost their low-resource performance by active learning. While there are many approaches to active learning on feature-based (non-DL) ER (e.g. Bellare et al. (2012); Qian et al. (2017)) that yield strong performance under certain condition, it requires further research to quantify how these methods perform with varying datasets, genres, and blocking functions. It should be noted, however, that in DBLP-Scholar and Cora, DTAL in the low-resource setting even significantly outperforms SVM (and the other 5 algorithms) in the high-resource scenario. These results imply that DTAL would significantly outperform SVM with active learning in the low-resource setting since the performance with the full training data with labels serves as an upper bound. Moreover, we can observe that DTAL with a limited amount of data (less than 6% of training data in all datasets), performs comparably to DL models with full training data. Therefore, we have demonstrated that a deep ER system with our transfer and active learning frameworks can provide a stable and reliable solu5858 Dataset Method Train Size F1 DTAL 400 97.89±0.33 DAL 400 95.35±4.15 DL 400 93.40±2.61 SVM 400 96.97±0.69 DL 7,417 98.45±0.22 DBLP-ACM SVM 7,417 98.35±0.14 DTAL 1000 89.54±0.39 DAL 1000 88.76±0.76 DL 1000 83.33±1.26 SVM 1000 85.36±0.32 DL 17,223 92.94±0.47 DBLP-Scholar SVM 17,223 88.56±0.46 DTAL 1000 97.68±0.39 DAL 1000 97.05±0.64 DL 1000 84.35±4.25 SVM 1000 87.66±3.15 DL 30,000 98.68±0.26 Cora SVM 30,000 95.39±0.31 Table 4: Low-resource (shaded) and high-resource (full training data) performance comparison. DTAL, DAL, and DL denote deep transfer active learning, deep active learning, and deep learning (random sampling). tion to entity resolution with low annotation effort. Other Genre Results. We present results from the restaurant and software genres.6 Shown in Table 5 are results of transfer and active learning from Zomato-Yelp to Fodors-Zagats. Similarly to our extensive experiments in the citation genre, the dataset adaptation technique facilitates transfer learning significantly, and only 100 active learning labels are needed to achieve the same performance as the model trained with all target labels (894 labels). Fig. 4 shows low-resource performance in the software genre. The relative performance among the 6 non-DL approaches differs to a great degree as the best non-DL model is now logistic regression, but deep active learning outperforms the rest with 1200 labeled examples (10.4% of training data). These results illustrate that our low-resource frameworks are effective in other genres as well. Active Learning Sampling Strategies. As discussed in a previous section, we adopted highconfidence sampling and a partition mechanism for our active learning. Here we analyze the effect of the two methods. Table 6 shows deep transfer active learning performance in DBLP-ACM with varying sampling strategies. We can observe that high-confidence sampling and the partition mech6We intend to apply our approaches to more genres, but unfortunately we lack large publicly available ER datasets in other genres than citation. Applications to non-English languages are also of interest. We leave this for future. 
Method Prec Recall F1 Train on Src 100.00 6.37 11.76±6.84 +Adaptation 95.33 57.27 70.13±19.89 +100 active labels 100.00 100.00 100.00±0.00 Train on Tgt 100.00 100.00 100.00±0.00 Mudgal et al. (2018) – – 100 Table 5: Transfer and active learning results in the restaurant genre. The target and source datasets are Fodors-Zagats and Zomato-Yelp respectively. 0 200 400 600 800 1000 1200 1400 0 10 20 30 40 50 # Labeled Training examples F1 Deep Active Deep Learning Decision Tree SVM Random Forest Naive Bayes Logistic Regression Linear Regression Figure 4: Low-resource performance (software genre). anism contribute to high and stable performance as well as good precision-recall balance. Notice that there is a huge jump in recall by adding partition while precision stays the same (row 4 to row 3). This is due to the fact that the partition mechanism succeeds in finding more false negatives. The breakdown of labeled examples (Table 7) shows that is indeed the case. It is noteworthy that the partition mechanism lowers the ratio of misclassified examples (FP+FN) in the labeled sample set because partitioning encourages us to choose likely false negatives more aggressively, yet false negatives tend to be more challenging to find in entity resolution due to the skewness toward the negative (Qian et al., 2017). We observed similar patterns in DBLP-Scholar and Cora. 6 Further Related Work Transfer learning has proven successful in fields such as computer vision and natural language processing, where networks for a target task is pretrained on a source task with plenty of training data (e.g. image classification (Donahue et al., 2014) and language modeling (Peters et al., 2018)). In this work, we developed a transfer learning framework for a deep ER model. Concurrent work (Thirumuruganathan et al., 2018) to ours has also proposed transfer learning on top of the features from distributed representations, but they focused on classical machine learning classifiers (e.g., logistic regression, SVMs, decision trees, random forests) and they did not con5859 Sampling Method Prec Recall F1 High-Confidence 93.32 97.21 95.19±2.21 Partition 96.14 97.12 96.61±0.57 High-Conf.+Part. 97.63 97.84 97.73±0.43 Top K Entropy 96.16 89.64 92.07±9.73 Table 6: Low-resource performance (300 labeled examples) of different sampling strategies (DBLP-ACM). Method FP TP FN TN Part 79.65.9 70.45.9 59.25.6 90.85.6 W/o Part 101.67.7 57.415.9 41.64.4 99.422.5 Table 7: Breakdown of 300 labeled samples (uncertain samples) from deep transfer active learning in DBLPACM. Part, FP, TP, FN, and TN denote the partition mechanism, false positives, true positives, false negatives, and true negatives respectively. sider active learning. Their distributed representations are computed in a “bag-of-words” fashion, which can make applications to textual attributes more challenging (Mudgal et al., 2018). Moreover, their method breaks attribute boundaries for tuple representations in contrast to our approach that computes a similarity vector for each attribute in an attribute-agnostic manner. In a complex ER scenario, each entity record is represented by a large number of attributes, and comparing tuples as a single string can be infeasible. Other prior work also proposed a transfer learning framework for linear model-based learners in ER (Negahban et al., 2012). 
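Before moving to the conclusion, the entropy-based partition sampling analyzed in Tables 6 and 7 (Section 4.2) can be summarized in a short sketch. It assumes NumPy and that the model's match probabilities for the unlabeled candidate pairs are already available; ties and partitions smaller than k are handled loosely here.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Binary entropy H(x) = -p log p - (1 - p) log(1 - p)."""
    p = np.clip(p, eps, 1 - eps)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def partition_sample(probs, K):
    """Sketch of the partition sampling of Section 4.2.

    probs: model match probabilities p(x) for the unlabeled candidate pairs.
    Returns indices of uncertain examples (likely false positives + likely
    false negatives, to be labeled by annotators) and high-confidence examples
    (labeled with the model's own predictions), with K/2 drawn from the
    predicted-match and predicted-non-match partitions respectively.
    """
    k = K // 2
    H = entropy(probs)
    idx = np.arange(len(probs))
    pred_pos, pred_neg = idx[probs >= 0.5], idx[probs < 0.5]

    def pick(part, largest):
        order = part[np.argsort(H[part])]        # ascending entropy
        return order[-k:] if largest else order[:k]

    likely_fp, likely_fn = pick(pred_pos, True), pick(pred_neg, True)
    high_conf_pos, high_conf_neg = pick(pred_pos, False), pick(pred_neg, False)
    uncertain = np.concatenate([likely_fp, likely_fn])
    high_conf = np.concatenate([high_conf_pos, high_conf_neg])
    return uncertain, high_conf

# Toy usage with K = 20, as in the experiments
probs = np.random.rand(1000)
uncertain, high_conf = partition_sample(probs, K=20)
```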
7 Conclusion We presented transfer learning and active learning frameworks for entity resolution with deep learning and demonstrated that our models can achieve competitive, if not better, performance as compared to state-of-the-art learning-based methods while only using an order of magnitude less labeled data. Although our transfer learning alone did not suffice to construct a reliable and stable entity resolution system, it contributed to faster convergence and stable performance when used together with active learning. These results serve as further support for the claim that deep learning can provide a unified data integration method for downstream NLP tasks. Our frameworks of transfer and active learning for deep learning models are potentially applicable to low-resource settings beyond entity resolution. Acknowledgments We thank Sidharth Mudgal for assistance with the DeepMatcher/Magellan libraries and replicating experiments. We also thank Vamsi Meduri, Phoebe Mulcaire, and the anonymous reviewers for their helpful feedback. JK was supported by travel grants from the Masason Foundation fellowship. References Arvind Arasu, Michaela G¨otz, and Raghav Kaushik. 2010. On active learning of record matching packages. In Proc. of SIGMOD. Kedar Bellare, Suresh Iyengar, Aditya G. Parameswaran, and Vibhor Rastogi. 2012. Active sampling for entity matching. In Proc. of KDD. Mikhail Bilenko and Raymond J. Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. In Proc. of KDD, pages 39–48, New York, NY, USA. ACM. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. OReilly Media. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. William M. Campbell, Lin Li, Charlie K. Dagli, Joel Acevedo-Aviles, K. Geyer, Joseph P. Campbell, and C. Priebe. 2016. Cross-domain entity resolution in social media. In Proc. of SocialNLP. Kyunghyun Cho, Bart van Merrienboer, aglar G¨ulehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. of EMNLP. Peter Christen. 2008. Febrl: A freely available record linkage system with a graphical user interface. In Proc. of HDKM, pages 17–25, Darlinghurst, Australia, Australia. Australian Computer Society, Inc. Sanjib Das, AnHai Doan, Paul Suganthan G. C., Chaitanya Gokhale, and Pradap Konda. 2016. The Magellan data repository. https://sites.google.com/site/ anhaidgroup/projects/data. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, volume 32 of Proceedings of Machine Learning Research, pages 647–655, Bejing, China. 5860 Xin Dong, Alon Halevy, and Jayant Madhavan. 2005. Reference reconciliation in complex information spaces. In Proc. of SIGMOD, pages 85–96, New York, NY, USA. ACM. Muhammad Ebraheem, Saravanan Thirumuruganathan, Shafiq Joty, Mourad Ouzzani, and Nan Tang. 2018. Distributed representations of tuples for entity resolution. VLDB. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14:179–211. Ronald Fagin, Laura M. Haas, Mauricio A. Hern´andez, Ren´ee J. Miller, Lucian Popa, and Yannis Velegrakis. 2009. Clio: Schema mapping creation and data exchange. In Conceptual Modeling: Foundations and Applications. Ivan P. 
Fellegi and Alan B. Sunter. 1969. A theory for record linkage. JASA. Junio de Freitas, Gisele Lobo Pappa, Altigran Soares da Silva, Marcos Andr´e Gonalves, Edleno Silva de Moura, Adriano Veloso, Alberto H. F. Laender, and Mois´es G. de Carvalho. 2010. Active learning genetic programming for record deduplication. IEEE Congress on Evolutionary Computation, pages 1–8. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proc. of ICML, volume 37 of Proceedings of Machine Learning Research, pages 1180–1189, Lille, France. PMLR. Mauricio A. Hern´andez and Salvatore J. Stolfo. 1995. The merge/purge problem for large databases. In Proc. of SIGMOD, pages 127–138, New York, NY, USA. ACM. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735– 1780. Robert Isele and Christian Bizer. 2013. Active learning of expressive linkage rules using genetic programming. J. Web Sem., 23:2–15. Diederik P. Kingma and Jimmy Lei Ba. 2015. ADAM: A Method for Stochastic Optimization. In ICLR. Pradap Konda, Sanjib Das, C. PaulSuganthanG., AnHai Doan, Adel Ardalan, Jeffrey R. Ballard, Han Li, Fatemah Panahi, Haojun Zhang, Jeffrey F. Naughton, Shishir Prasad, Ganesh Krishnan, Rohit Deep, and Vijay Raghavendra. 2016. Magellan: Toward building entity matching management systems. VLDB, 9:1197–1208. Hanna K¨opcke, Andreas Thor, and Erhard Rahm. 2010. Evaluation of entity resolution approaches on realworld match problems. VLDB, pages 484–493. Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proc. of SIGMOD, pages 19– 34, New York, NY, USA. ACM. Sahand N. Negahban, Benjamin I. P. Rubinstein, and Jim Gemmell. 2012. Scaling multiple-source entity resolution using statistically efficient transfer learning. In Proc. of CIKM. Hanna M. Pasula, Bhaskara Marthi, Brian Milch, Stuart J. Russell, and Ilya Shpitser. 2002. Identity uncertainty and citation matching. In Proc. of NeurIS. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. of EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Kun Qian, Lucian Popa, and Prithviraj Sen. 2017. Active learning for large-scale entity resolution. In CIKM, pages 1379–1388, New York, NY, USA. ACM. Erhard Rahm and Philip A. Bernstein. 2001. A survey of approaches to automatic schema matching. VLDB, 10:334–350. Sunita Sarawagi and Anuradha Bhamidipaty. 2002. Interactive deduplication using active learning. In Proc. of KDD. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In Proc. of NeurIS. Sheila Tejada, Craig A. Knoblock, and Steven Minton. 2001. Learning object identification rules for information integration. Inf. Syst., 26:607–633. Saravanan Thirumuruganathan, Shameem Puthiya Parambath, Mourad Ouzzani, Nan Tang, and Shafiq R. Joty. 2018. Reuse and adaptation for entity resolution through transfer learning. arXiv:1809.11084. Jiannan Wang, Guoliang Li, Jeffrey Xu Yu, and Jianhua Feng. 2011. Entity matching: How similar is similar. VLDB, 4(10):622–633. Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. 2017. Cost-effective active learning for deep image classification. IEEE Trans. 
Cir. and Sys. for Video Technol., 27(12):2591–2600. Xin Zhao, Yuexin Wu, Hongfei Yan, and Xiaoming Li. 2014. Group based self training for e-commerce product record linkage. In Proc. of COLING, pages 1311–1321, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. 5861 A Appendices A.1 Deep ER Hyperparameters Seen in Table 8 is a list of hyperparameters for our deep entity resolution models. We use the same hyperparameters regardless of scenario and dataset. We initialize the 300 dimensional word embeddings by the character-based pretrained fastText vectors publicly available.7 Input Representations Word embedding size 300 Input dropout rate 0.0 Word-level BiGRU GRU size 150 # GRU layers 1 Final ouput concat Similarity Representations Attr. sim. absolute diff. Record sim. sum Matching Classification # MLP layers 2 # MLP size 300 # MLP activation relu Highway Connection Yes Domain Classification (Adversarial) # MLP layers 2 # MLP size 300 # MLP activation relu Highway Connection Yes Training Objective cross-entropy Batch size 16 # Epochs 20 Adam (Kingma and Ba, 2015) lrate 0.001 Adam β1 0.9 Adam β2 0.999 Table 8: Deep ER hyperparameters. A.2 Non-DL Learning Algorithms Magellan (Konda et al., 2016) is an open-source package that provides state-of-the-art learningbased algorithms for ER.8 We use the package to run the following 6 learning algorithms for baselines: Decision Tree, SVM, Random Forest, Naive Bayes, Logistic Regression, and Linear Regression. For each attribute in the schema, we apply the following similarity functions: q-gram jaccard, cosine distance, Levenshtein disntance, Levenshtein similairty, Monge-Elkan measure, and exact matching. 7https://github.com/facebookresearch/ fastText 8https://sites.google.com/site/ anhaidgroup/projects/magellan
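The adversarial domain (dataset) classifier listed in Table 8 is trained through the gradient reversal layer of Section 4.1. The sketch below shows one common way to implement such a layer, assuming PyTorch autograd; the lambda coefficient and the toy tensors are illustrative rather than taken from the paper.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer for the adversarial dataset classifier
    (Section 4.1): identity on the forward pass, gradient multiplied by
    -lambda on the backward pass (following Ganin & Lempitsky, 2015)."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: the dataset classifier sees the record similarity vector
# through the reversal layer, so its loss trains the classifier normally
# while pushing the encoder toward dataset-invariant representations.
sim = torch.randn(16, 300, requires_grad=True)        # record similarity vectors
dataset_logits = torch.nn.Linear(300, 2)(grad_reverse(sim))
loss = torch.nn.CrossEntropyLoss()(dataset_logits, torch.randint(0, 2, (16,)))
loss.backward()    # the gradient reaching `sim` has been negated by the layer
```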
2019
586
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5862–5866 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5862 A Semi-Markov Structured Support Vector Machine Model for High-Precision Named Entity Recognition Ravneet Arora, Chen-Tse Tsai, Ketevan Tsereteli, Prabhanjan Kambadur, Yi Yang∗ Bloomberg L.P., *ASAPP Inc. {rarora62,ctsai54,ktsereteli1,pkambadur}@bloomberg.net, *[email protected] Abstract Named entity recognition (NER) is the backbone of many NLP solutions. F1 score, the harmonic mean of precision and recall, is often used to select/evaluate the best models. However, when precision needs to be prioritized over recall, a state-of-the-art model might not be the best choice. There is little in the literature that directly addresses training-time modifications to achieve higher precision information extraction. In this paper, we propose a neural semi-Markov structured support vector machine model that controls the precisionrecall trade-off by assigning weights to different types of errors in the loss-augmented inference during training. The semi-Markov property provides more accurate phrase-level predictions, thereby improving performance. We empirically demonstrate the advantage of our model when high precision is required by comparing against strong baselines based on CRF. In our experiments with the CoNLL 2003 dataset, our model achieves a better precisionrecall trade-off at various precision levels. 1 Introduction Named Entity Recognition (NER) is the task of locating and categorizing phrases into a closed set of classes, such as organizations, people, and locations. NER is an information extraction task that is important for understanding large bodies of text and is an essential component for many natural language processing (NLP) pipelines. The most common evaluation metric for information extraction tasks is F1, which is the harmonic mean between precision and recall: that is, false positives and false negatives are weighted equally. In certain real-world applications (e.g., medicine and finance), extracting wrong information is much worse than extracting nothing: hence, ∗Work conducted while working at Bloomberg L.P. in such domains, high precision is emphasized. Trade-offs between precision and recall have been well researched for classification (Joachims, 2005; Jansche, 2005; Cortes and Mohri, 2004). However, barring studies on inference-time heuristics, there is limited work on training precision-oriented sequence tagging models. In this paper, we present a method for training precision-driven NER models. By defining custom loss objectives for the structured SVM (SSVM) model, we extend costsensitive learning (Domingos, 1999; Margineantu, 2001) to sequence tagging problems. A difficulty in applying cost-sensitive learning to NER is that the model needs to operate on segmentations of the input sentence and the labels of the segments. Inspired by semi-Markov CRF (Sarawagi and Cohen, 2005), we propose a semi-Markov SSVM model that scores and labels consecutive tokens together, which allows us to directly interact with the segment-level errors in the precision-beneficial loss of the SSVM model. We compare our semi-Markov SSVM model with several competitive inference-time baselines that have been proposed for high-precision NER. 
Our results show that our model outperforms competitive baselines on organization names, and is at least as good as the best inference-time approaches at some precision levels for other NER classes. 2 Related Work For classification, several papers try to optimize different evaluation metrics directly. Joachims (2005) proposes an SSVM model for optimizing multivariate performance measures of binary classification tasks. Fβ is one of the metrics in their example. Similarly, Jansche (2005) maximizes expected F-measure, Cortes and Mohri (2004) and Narasimhan and Agarwal (2013) optimize AUC 5863 Figure 1: Semi-Markov SSVM model architecture. and partial AUC, respectively. However, these cannot be directly applied to sequence tagging as labels are assigned at the token or segment level. Cost-sensitive classification (Domingos, 1999; Margineantu, 2001; Elkan, 2001; Zadrozny et al., 2003) is another body of work where different mis-classification errors have different costs and one attempts to minimize the total cost that a model incurs on the test data. Our approach uses similar ideas – we make the costs of false positive prediction higher than the false-negative costs – and therefore can be viewed as a cost-sensitive model for sequence tagging problems. For sequence tagging problems, inference-time heuristics for tuning the precision-recall trade-off for information extraction models have been proposed. Culotta and McCallum (2004) calculate confidence scores of the extracted phrases from a CRF model: these scores are used for sorting and filtering extractions. Similarly, Carpenter (2007) computes phrase-level conditional probabilities from an HMM model, and try to increase the recall of gene name extraction by lowering the threshold on these probabilities. Given a trained CRF model, Minkov et al. (2006) hyper-tune the weight for the feature which indicates the token is not a named entity. Changing this weight could encourage or discourage the CRF decoding process to extract entities. We compare our model with these inference-time approaches. 3 Models We adopt the BiLSTM-CNNs architecture (Ma and Hovy, 2016) to extract features from a sequence of words for all models in this paper. 1 Each word is passed through character-level CNN, and the result is concatenated with Glove word 1Our implementation is based on NCRF++ (Yang and Zhang, 2018). embedding (Pennington et al., 2014) to form the input of Bi-directional LSTM. To map the word representation obtained from BiLSTM into k (label) dimensions, one layer of feed-forward neural network is applied. At the output layer, instead of using a CRF (Lafferty et al., 2001) to capture the output label dependencies, we use the SSVM objective (Tsochantaridis et al., 2004). While CRFs have consistently given state-of-the-art NER results, their objective function is difficult to directly modify for highprecision extraction. Hence, we select the SSVM formulation as it allows us to directly modify the loss function for high precision. Given training sequences (xi, yi), i = 1 . . . m, the loss function for SSVM is: m X i=1 argmaxy∈Yxi(∆(yi, y)+s(y, xi)−s(yi, xi)), where ∆is the Hamming loss between two sequences, Yxi contains all possible label assignments for the sentence xi, and s is the decoding score between input sentence x and label sequence y. 3.1 High-Precision SSVM Without modifications, the SSVM performs similar to the CRF. However, the presence of ∆(yi, y) in the SSVM loss allows us to design custom loss functions for high precision NER. 
No inferencetime changes are introduced. Class-specific Token-level Loss The first modification we make is to pick a target entity class and modify ∆(yi, y) to have word-wise loss of ℓtgt for false positives on the target class and loss of ℓ˜ tgt for false positives on other classes. That is, let yj i be j-th element of sequence yi, we define ∆(yi, y) = P j wj, where wj =      0, if yj i = yj ℓtgt, if yj i ̸= yj and yj = target class ℓ˜ tgt, if yj i ̸= yj and yj ̸= target class Note that the target class in the above equation contains all the labels related to the target entity type; that is, if the target class is ORG, we consider B-ORG and I-ORG to be the related labels. Typically ℓtgt ≫ℓ˜ tgt so that the false positives on the target class will generate more loss, thereby discouraging the model from making such decisions. Both ℓtgt and ℓ˜ tgt are determined through 5864 hyper-parameter tuning. Setting ℓtgt = ℓ˜ tgt = 1 falls back to the standard Hamming loss. Semi-Markov SSVM A problem with tokenlevel loss is that it does not always reflect phraselevel errors accurately; it may over generate loss since a phrase could consist of multiple tokens. It is unclear how individual token false positives contribute to phrase-level false positives. Therefore, we try a semi-Markov variation of the SSVM following (Sarawagi and Cohen, 2005). The semi-Markov formulation groups consecutive tokens into segments. Whole segments are considered as a single unit and only transitions between segments are modeled. We ignore all intrasegment transition probabilities, effectively collapsing the number of labels to 5 (ORG, PER, LOC, MISC, O instead of the BIO labelling scheme for CoNLL data). The scores of each segment are obtained by summing up the word-level class scores of words present in the segment (Ye and Ling, 2018). We restrict segments to be ≤7 tokens long, and we do not use any additional segment level features. During decoding, all possible segmentations of a sentence (≤7) will be considered. The architecture of our BiLSTM semiMarkov SSVM model is shown in Figure 1. To tune the semi-Markov SSVM model to high precision for a specific class, a segment will contribute ℓtgt to the loss if it is predicted as the target class and this segment does not exist in the gold segmentation. Other types of errors in the prediction have a loss of ℓ˜ tgt. This is similar to the class-specific loss used on the token-level in the SSVM formulation. In our experiments, we refer to the token-level model simply as SSVM, and the segment-level model as semi-Markov SSVM. 4 Results All experiments were conducted on the CoNLL 2003 English dataset. We first show the performance of CRF, SSVM, and semi-Markov SSVM models without tuning for high precision in Table 1. We see that all three models perform similarly, with CRF being slightly better. These numbers are the starting points for the rest of the experiments. We compare the proposed models with the following inference-time baselines:2 2Results of Minkov et al. (2006) are given in the Appendix as the performance is worse than the other methods. ORG PER LOC MISC ALL CRF P. 89.5 96.3 91.8 81.1 91.06 R. 87.7 95.4 93.8 81.3 90.88 F1 88.6 95.8 92.8 81.2 90.97 SSVM P. 90.0 95.7 91.0 80.4 90.75 R. 87.7 95.5 93.7 80.5 90.79 F1 88.8 95.6 92.4 80.4 90.77 Semi. SSVM P. 89.3 96.0 92.3 80.1 90.92 R. 87.2 95.2 93.2 81.9 90.60 F1 88.2 95.6 92.8 81.0 90.76 Table 1: Performance of the baseline and proposed models without tuning for high precision. 
These numbers are on the CoNLL 2003 English test set. The development set is not included in training. ORG (Precision: 94.5) Ment. Length 1(65.1%) 2(24.3%) ≥3(10.6%) Thres. CRF 84.94 78.16 75.57 Semi. SSVM 84.57 80.40 83.52 LOC (Precision: 95.5) Ment. Length 1(86.1%) 2(12.4%) ≥3(1.5%) Thres. CRF 92.90 90.82 60.00 Semi. SSVM 92.06 91.79 64.00 PER (Precision: 97.9) Ment. Length 1(32.8%) 2(63.0%) ≥3(4.2%) Thres. CRF 81.73 97.74 91.18 Semi. SSVM 81.54 99.02 95.59 Table 2: Recall of the thresholded CRF and semiMarkov SSVM for different mention lengths at the same precision level. The chosen precision levels are listed right next to the entity types. The percentages in parenthesis are of the gold mentions. Thresholded CRF We compute the probability of each extracted phrase by Constrained Forward-Backward algorithm (Culotta and McCallum, 2004). An extraction is dropped if its phrase probability is lower than a given threshold, a tunable hyper-parameter. Bootstrap CRF By generating bootstrap samples of the CoNLL training set, we generate 100 BiLSTM CRF models. To increase precision over a single CRF, we decode each sentence with each of the 100 models and compute the votes for each proposed named entity. The threshold (percent of votes) for a candidate entity is hyper-tuned. Using the dev set, we tune the hyper-parameters of each model at which the desired precision is achieved. For our proposed SSVM-based mod5865 Figure 2: Precision-recall trade-off of the proposed SSVM model versus baselines: semi-Markov SSVM outperforms all models for ORG, is on par with Thresholded CRF for LOC, and is competitive for the PER class. The detailed numbers are listed in the Appendix. els, the hyper-parameters are ℓtgt and ℓ˜ tgt.3 To speed up training, we initialize the parameters of the entire model (neural network and SSVM) using a pre-trained model with ℓtgt = 1, ℓ˜ tgt = 1, and train further for 20 epochs. We set several precision levels from 90 to 100. For each precision level, we choose the hyperparameters which have precision higher than the target precision level and obtain the maximum F1 score on the dev set, and report the corresponding test performance. The results are shown in Figure 2. Threshold CRF can achieve a wider range of precision than SSVM-based models. In this figure, we only focus on the range which SSVM-based models can achieve. We can see that semi-Markov SSVM clearly outperforms all the other models for ORG, is on par with Thresholded CRF for LOC, and has some strong points in the high precision region for PER. The good performance on ORG is consistent with the observation in Ye and Ling (2018) that semi-Markov models have advantages in longer phrases because labels are assigned at the segment level directly. Since longer mentions tend to have a smaller phrase probability and the length of ORG mentions varies more than the length of the other two types, Thresholded CRF is less robust for ORG. The token-based SSVM is consistently worse than semi-Markov SSVM and fails to achieve higher precision, especially for PER. This shows that the semi-Markov property penalizes false positives at the phrase-level more accurately. Bootstrap CRF does not perform well for ORG and LOC, but is pretty strong for PER at some precision levels. We believe higher performance of bootstrap CRF on PER class comes from the fact 3ℓtgt is searched in the range between 1 and 5, and ℓ˜ tgt is between 0.0001 and 0.1. 
that the baseline CRF model itself achieves very high precision for this class, which allows bootstrapping technique reduce the variance on predictions accurately. This makes bootstrapping approach more promising to situations where models have already achieved very high precision. 4.1 Error Analysis We perform error analysis for the two main methods: Thresholded CRF and semi-Markov SSVM. We pick model settings such that both models achieve the same precision level (ORG:94.5 PER:97.9 LOC:95.5) for a given class. Table 2 illustrates the recall values achieved by these models for different entity mention lengths. We can see that semi-Markov SSVM clearly outperforms Thresholded CRF on multi-token mentions, especially for long organization names. The high percentage of long mentions in ORG explains semiMarkov SSVM’s superior performance in Figure 2. However, we also see that semi-Markov SSVM produces more “larger predicted span” errors. Therefore the recall of unit-length mentions is lower than Thresholded CRF. This we believe is a side effect of semi-Markov models being more willing to predict longer length segments. These two methods can be applied together to achieve even better results. For example, thresholding and bootstrap techniques can be applied to semi-Markov SSVM models as well. In this work, we focus on showing the performance of individual approaches. Another question is what types of errors are reduced when tuning towards precision? We find that precision tuning reduces all error types, but especially the MISC type errors for all 3 classes (i.e., MISC being classified as one of the other 3 classes). 5866 5 Conclusion We proposed a semi-Markov SSVM model for high-precision NER. To our best knowledge, it is the first training-time model for high precision structured prediction. Experiment results show that our model performs better than inference-time approaches at several precision levels, especially for longer mentions. The proposed model offers promising future extensions in terms of directly optimizing other metrics such as Recall and Fβ. This work also opens up a range of questions from modeling to evaluation methodology. References Bob Carpenter. 2007. Lingpipe for 99.99% recall of gene mentions. In Proceedings of the Second BioCreative Challenge Evaluation Workshop, volume 23, pages 307–309. Corinna Cortes and Mehryar Mohri. 2004. AUC optimization vs. error rate minimization. In Advances in neural information processing systems, pages 313– 320. Aron Culotta and Andrew McCallum. 2004. Confidence estimation for information extraction. In Proceedings of the Human Language Technology Conference of the NAACL, pages 109–112. Pedro Domingos. 1999. Metacost: A general method for making classifiers cost-sensitive. In Proceedings of the 5th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 155–164. Charles Elkan. 2001. The foundations of cost-sensitive learning. In Proceedings of the 17th international joint conference on Artificial intelligence, pages 973–978. Martin Jansche. 2005. Maximum expected F-measure training of logistic regression models. In Proceedings of the conference on Empirical Methods in Natural Language Processing, pages 692–699. Thorsten Joachims. 2005. A support vector method for multivariate performance measures. In Proceedings of the 22nd international conference on Machine learning, pages 377–384. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. 
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1064–1074. Dragos Dorin Margineantu. 2001. Methods for costsensitive learning. PhD Thesis, Oregon State University. Einat Minkov, Richard C Wang, Anthony Tomasic, and William W Cohen. 2006. NER systems that suit user’s preferences: adjusting the recall-precision trade-off for entity extraction. In Proceedings of the Human Language Technology Conference of the NAACL, pages 93–96. Harikrishna Narasimhan and Shivani Agarwal. 2013. SVM pAUC tight: a new support vector method for optimizing partial auc based on a tight convex upper bound. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 167–175. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the conference on Empirical Methods in Natural Language Processing. Sunita Sarawagi and William W Cohen. 2005. Semimarkov conditional random fields for information extraction. In Advances in neural information processing systems, pages 1185–1192. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the international conference on Machine learning, page 104. Jie Yang and Yue Zhang. 2018. NCRF++: An opensource neural sequence labeling toolkit. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Zhi-Xiu Ye and Zhen-Hua Ling. 2018. Hybrid semimarkov CRF for neural sequence labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Bianca Zadrozny, John Langford, and Naoki Abe. 2003. Cost-sensitive learning by cost-proportionate example weighting. In Proceedings of the 3rd IEEE International Conference on Data Mining, pages 435–442.
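Before moving on, the class-specific token-level cost Δ(yi, y) defined above lends itself to a short illustration. The sketch below is not the authors' code: it assumes BIO tags given as strings, and the particular values of ℓtgt and ℓ̃tgt are illustrative picks from within the tuning ranges reported in the footnote.

```python
def class_specific_cost(candidate_tags, gold_tags, target_type="ORG",
                        l_tgt=5.0, l_tgt_other=0.1):
    """Token-level cost Delta used inside loss-augmented SSVM inference.

    A candidate tag that disagrees with the gold tag costs l_tgt when it
    predicts the target entity type (B-ORG or I-ORG here) -- a false
    positive on the target class -- and l_tgt_other otherwise.  Setting
    l_tgt == l_tgt_other == 1 recovers the standard Hamming loss.
    """
    def is_target(tag):
        return tag in ("B-" + target_type, "I-" + target_type)

    cost = 0.0
    for cand, gold in zip(candidate_tags, gold_tags):
        if cand == gold:
            continue
        cost += l_tgt if is_target(cand) else l_tgt_other
    return cost


# Example: one false positive on ORG and one ordinary mistake.
gold = ["O", "B-PER", "I-PER", "O", "O"]
cand = ["O", "B-PER", "I-PER", "B-ORG", "B-LOC"]
print(class_specific_cost(cand, gold))  # 5.0 + 0.1 = 5.1
```

During training this cost is added to each candidate sequence's score inside loss-augmented decoding, so candidates containing spurious target-class tags are pushed further away from the gold sequence.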
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5867–5872 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5867 Using Human Attention to Extract Keyphrase from Microblog Post Yingyi Zhang Chengzhi Zhang∗ Nanjing University of Science and Technology {yingyizhang, zhangcz}@njust.edu.cn Abstract This paper studies automatic keyphrase extraction on social media. Previous works have achieved promising results on it, but they neglect human reading behavior during keyphrase annotating. The human attention is a crucial element of human reading behavior. It reveals the relevance of words to the main topics of the target text. Thus, this paper aims to integrate human attention into keyphrase extraction models. First, human attention is represented by the reading duration estimated from eye-tracking corpus. Then, we merge human attention with neural network models by an attention mechanism. In addition, we also integrate human attention into unsupervised models. To the best of our knowledge, we are the first to utilize human attention on keyphrase extraction tasks. The experimental results show that our models have significant improvements on two Twitter datasets. 1 Introduction Rapidly growth of user-generated content on social media has far outpaced human beings’ reading and understanding capacity. Keyphrase extraction is one of the technologies that can organize this massive content. A keyphrase consists of one or more salient words, which represents the main topics of a document. It has a series of downstream applications, e.g., text summarization (Zhao et al., 2011a) and information retrieval (Choi et al., 2012). Generally, corpus with human annotated keyphrases are needed to train models in supervised keyphrase extraction frameworks. The premise for annotators to annotate keyphrases is to read the corresponding content. Intuitively, features estimated from human reading behavior can be leveraged to assist keyphrase extraction. *Corresponding Author. Previous studies on keyphrase extraction have ignored these features (Zhang et al., 2016, 2018). Thus, this paper aims to integrate the reading behavior into keyphrase extraction frameworks. When human reading, they do not pay the same attention to all words (Carpenter and Just, 1983). The reading time of per-word is the indicative of textual (as well as lexical, syntactic and semantic) processing (Demberg and Keller, 2008), which reflects human attention on various content. To obtain human attention during reading, this paper estimates eye fixation duration from eye-tracking corpus inspired by Carpenter and Just (1983) and Barrett et al. (2018). The modern-day eye tracking equipment resulting in a very rich and detailed dataset (Cop et al., 2017). Thus, we utilize opensource eye-tracking corpora and do not require eye-tracking information of the target datasets. To integrate human attention into keyphrase extraction models, this paper constructs a neural network model with attention mechanism. Attention mechanism is a neural module designed to imitate human visual attention when they reading and looking (Bahdanau et al., 2014). To regularize the predicted value of attention mechanism, human attention estimated from eye-tracking corpus is leveraged as the ground truth of it. Quantitative and qualitative analyses demonstrate that our models yield a better performance than state-of-the-art models. 
In addition, we prove that human attention is also effective on unsupervised keyphrase extraction models. We are, to the best of our knowledge, the first to integrate human attention into keyphrase extraction tasks. 2 Related Work Recently, keyphrase extraction technologies have been extended to social media (Zhao et al., 2011b; Bellaachia and Al-Dhelaan, 2012), e.g., 5868 Twitter and Sina Weibo. Previous studies extract keyphrases using traditional supervised algorithms (Marujo et al., 2015), which depending on a large set of manually selected features. To overcome this drawback, neural network models, which can learn features from training corpus automatically, are proposed and are proven effective in keyphrase extraction. For instance, Zhang et al. (2016) propose a neural network model to extract keyphrases from Tweets. This model extracts keyphrases from Tweets directly, which suffers from the severe data sparsity problem. External knowledge is utilized to alleviate this problem. Zhang et al. (2018) encode conversation context consisting of Tweet reply in neural models. This model yields a better performance than Zhang et al. (2016) , which prove the effectiveness of external knowledge. Thus, this paper is in the line of integrating external knowledge into neural network models. In this paper, we explore the idea of using human attention estimated from available eye-tracking corpus to assist keyphrase extraction. The open source eye-tracking corpus of natural reading include the Dundee corpus (Ekbal et al., 2007) and GECO (Cop et al., 2017). The features of eye tracking corpus include first fixation duration (FFD), total reading time (TRT), go-past time (GPT) , et al. TRT is a feature that has been applied to various natural language processing tasks, such as multi word expressions prediction (Rohanian et al., 2017) and sentiment analysis (Barrett et al., 2018). Thus, we select the TRT feature to represent the human attention. Since the GECO corpus is open sourced and is in English, we estimate the TRT feature from it. 3 Keyphrase Extraction Framework Formally, given a target microblog post xi formulated as word sequence < xi,1, xi,2, · · · , xi,|xi| >, where |xi| denotes the length of xi, we aim to produce a tag sequence < yi,1, yi,2, · · · , yi,|xi| >, where yi,w indicates whether xi,w is part of a keyphrase. As shown in Figure 1, our models use the character-level word embedding proposed by Jebbara and Cimiano (2017), but we ignore this part of our architecture in the equations below: yi,w = σ(Wytanh(Weyhi,w + bey) + by) (1) where hi,w is the representation of xi,w after passing through the Bi-directional LSTM (BiLSTM) layer, Wy and by are parameters of the function 𝑥𝑖,𝑤,𝑐−1 𝑥𝑖,𝑤,𝑐 𝑥𝑖,𝑤,𝑐+1 𝑥𝑖,𝑤 BiLSTM 𝒗𝑖,𝑤,𝑐−1 𝒗𝑖,𝑤,𝑐 𝒗𝑖,𝑤,𝑐+1 𝒗𝑖,𝑤 + ⋯ ⋯ Attention Mechanism 𝒗𝑖,𝑤𝑐 BiLSTM ⋯ ⋯ 𝒉𝑖,𝑤−1 𝒉𝑖,𝑤 𝒉𝑖,𝑤+1 𝛼(𝒉𝑖,∗) ⋯ 𝑠𝑜𝑓𝑡𝑚𝑎𝑥 𝒉𝑖,𝑤+1 𝑎𝑖,𝑤−1 𝑎𝑖,𝑤 𝑎𝑖,𝑤+1 ⋯ 𝒉𝑖,𝑤 𝒉𝑖,𝑤−1 𝑦𝑖,𝑤+1 𝑦𝑖,𝑤 𝑦𝑖,𝑤−1 𝒉𝑖,∗ 𝒗𝑖,𝑤𝑐+1 𝒗𝑖,𝑤𝑐−1 Figure 1: The framework of neural network keyphrase extraction with human attention. σ(·) to be learned. Wey and bey are parameters of the function tanh(·) to be learned, σ(·) is a nonliner function. In detail, yi,w has five possible values following Zhang et al. (2016): yϵ {Single, Begin, Middle, End, Not} (2) where Single represents that xi,w is a one-word keyword. Begin, Middle and End represent that xi,w is the first word, the middle word and the last word of a keyphrase, respectively. Not represents that xi,w is not a keyword or part of a keyphrase. 
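To make the five-value tagset of Equation 2 concrete, the following sketch (not from the paper; the function name and whitespace tokenization are assumptions) converts a post and its gold keyphrase into Single/Begin/Middle/End/Not labels.

```python
def tag_post(tokens, keyphrase_tokens):
    """Label each token with one of the five values used in Equation 2.

    Tokens outside the keyphrase get 'Not'; a one-word keyphrase gets
    'Single'; longer keyphrases get 'Begin' ... 'Middle' ... 'End'.
    Only the first occurrence of the keyphrase is labelled here.
    """
    labels = ["Not"] * len(tokens)
    n = len(keyphrase_tokens)
    for start in range(len(tokens) - n + 1):
        if tokens[start:start + n] == keyphrase_tokens:
            if n == 1:
                labels[start] = "Single"
            else:
                labels[start] = "Begin"
                for k in range(start + 1, start + n - 1):
                    labels[k] = "Middle"
                labels[start + n - 1] = "End"
            break
    return labels


tokens = "what would a hillary clinton supreme court look like ?".split()
print(tag_post(tokens, ["hillary", "clinton"]))
# ['Not', 'Not', 'Not', 'Begin', 'End', 'Not', 'Not', 'Not', 'Not', 'Not']
```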
From the hidden states, we directly predict word level raw attention scores ai,w: ai,w = Waei,w + ba (3) ei,w = tanh(Wehi,w + be) (4) where We and be are parameters of function tanh(·). Then, we normalize these predictions to attention weights g ai,w: g ai,w = ai,w P k ai,k (5) where k is the length of xi. Inspired by Barrett et al. (2018), we combine above mentioned two objections: word-level and attention-level. The word-level is to minimize the squared error between outputs yi,w and true word labels ˆyi,w. Lword = X i X w (yi,w −ˆyi,w)2 (6) The attention-level objective, similarly, is to minimize the squared error between the attention 5869 weights ai,w and real human attention ˆai,w estimated from eye-tracking corpus. Latt = X i X w (ai,w −ˆai,w)2 (7) When combined, λword and λatt (between 0 and 1) are utilized to trade off loss functions at the wordlevel and attention-level, respectively. L = λwordLword + λattLatt (8) In addition to above mentioned single layer models, we also use joint-layer BiLSTM proposed by Zhang et al. (2016). As a multi-task learner, jointlayer BiLSTM tackles two tasks with two types of outputs, y1 i,w and y2 i,w. y1 i,w has a binary tagset, which indicates whether the word xi,w is part of a keyphrase or not. y2 i,w employs the 5-value tagset defined in Equation 2. There is an attention module upon each BiLSTM layer with a corresponding prediction. The loss changes with the number of layers in models. The out represents the number of layers in the model. L = out X i=1 λi wordLi word + out X i=1 λi attLi att (9) 4 Experiment Settings 4.1 Twitter Dataset Our experiments are conducted on two datasets, i.e., Daily-Life dataset and Election-Trec dataset. Daily-Life This is collected from January of 2018 to April of 2018 using Twitter’s steaming API with a set of daily life keywords. Election-Trec This is constructed based on opensource dataset TREC2011 track1 and Election corpus (Zeng et al., 2018)2. For keyphrase annotation, we follow Zhang et al. (2016) to use microblog hashtags as goldstandard keyphrases and filtered all microblog posts by two rules: first, there is only one hash tag per post; second, the hashtag is inside a post. Then, we removed all the ‘#’ before keyphrase extraction. For both Twitter datasets, we randomly sample 0.8, 0.1 and 0.1 for training, development and testing. We preprocessed both Twitter datasets 1https://trec.nist.gov/data/tweets/ 2http://www.ccs.neu.edu/home/luwang/datasets/micro blog conversation.zip Dataset # of annot. msgs mesgs length Vocab Cover Election-Trec Train 24,210 19.94 36,018 7.7 Vali 3,027 20.00 9,909 17.8 Test 3,027 19.71 9,973 17.9 Daily-Life Train 12,827 28.92 40,628 7.0 Vali 1,610 28.77 9,964 17.4 Test 1,610 29.75 10,355 17.5 Table 1: Statistics of two datasets. Train, Dev, and Test denotes training, development, and test set, respectively. # of annot. Msgs: number of target post with keyphrase annotation. mesgs length: average count of words in the target post. Vocab: vocabulary size. Cover: The percent (%) of words existing in GECO. with Twitter NLP tool3 for tokenization. After filtering and preprocessing, Daily-Life dataset and Election-Trec dataset contains 16,047 Tweets and 30,264 Tweets, respectively. Table 1 shows the statistic information of two Twitter datasets Since there are no spaces between words in hashtags, we use some strategies to segment hashtags. There are two kinds of hashtags in the datasets. 
One is the ‘multi-word’ that contains both capitals and lowercases, the other are the ‘single-word’ in all lowercases or capitals. If a hashtag is a ‘multi-word’, we segment hashtags with two patterns, first is (capital) ∗ (lowercase)+, which represents one capital followed by one or more lowercases, second is (capital)+, which represents one or more capitals. When doing hashtag segmentation, the first pattern is utilized firstly and then the second pattern is applied. Meanwhile, we do not do any preprocessing if a hashtag is a ‘single-word’. 4.2 Eye-tracking Corpus This paper estimates human attention from GECO corpus (Cop et al., 2017), which is based on normal reading. In GECO, participants read a part of the novel ‘The Mysterious Affair at Styles’ by Agatha Christie. Six males and seven females whose native language is English participated in and read a total of 5,031 sentences. There are various features in GECO, including First Fixation Duration (FFD) and Total Reading Time (TRT). In this paper, we merely use the TRT feature, which represents total human attention on words during reading. This feature is also used by Carpenter and Just (1983) and Barrett et al. (2018). We then di3http://www.cs.cmu.edu/ ark/TweetNLP/ 5870 vide TRT values by the number of participants to get an average TRT (ATRT). Human attention correlates with word frequency (Rayner and Duffy, 1988). Thus, ATRT is normalized by the word frequency of the British National Corpus (BNC)4. Before normalizing, BNC is log-transformed per million and inversed (INV-BNC), such that rare words get a high value. ATRT and INV-BNC are min-max-normalized to a value in the range 0-1. ATRT is multiplied with INV-BNC to get normalized ATRT (N-ATRT). After preprocessing, there are 5,012 unique words in the dataset. In addition, words that are not included in the GECO corpus, which do not have a corresponding N-ATRT value, are given the mean value of N-ATRT. Table 1 shows the percentage of words that can be found in GECO corpus. 4.3 Implementation Details In the training phrase, we choose BiLSTM (Graves and Schmidhuber, 2005) with 300 dimensions. For single layer models, λword and λatt are set to 0.7 and 0.3, respectively. For joint layer models, λ1 word, λ1 att, λ2 word and λ2 att are set to 0.4, 0.2, 0.2 and 0.2, respectively. Parameters are set under the best performance. The epoch is set to 5. We initialize target post by embeddings pre-trained on 99M tweets with 27B tokens and 4.6M words in the vocabulary. 4.4 Baseline Models We compare our models with CRF (Zhang et al., 2008) and two kinds of neural network models: one kind is the neural network model without attention mechanism (BiLSTM model), the other is the neural network model with attention mechanism but is not modified by human attention (ABiLSTM model). Similar as HA-BiLSTM proposed by this paper, BiLSTM models and ABiLSTM models employ the single layer pattern and the joint layer pattern. The parameter setting of the joint layer pattern is same with Zhang et al. (2016). We compare the performance of models with the P, R and F1 evaluation metrics. BiLSTM model This model is merely constructed by the character-level word embedding and the BiLSTM layer. 
A-BiLSTM model This model is constructed by the character-level word embedding, BiL4http://www.natcorp.ox.ac.uk/ Daily-Life Election-Trec Baseline CRF 64.07 58.34 BiLSTM(Single) 70.37±1.30 66.42±0.97 A-BiLSTM(Single) 70.49±0.50 66.70±0.81 BiLSTM(Joint) 72.48±0.47 67.74±0.47 A-BiLSTM(Joint) 73.23±1.06 69.69±0.37 Our model HA-BiLSTM(Single) 71.28±0.33 67.57±0.28 HA-BiLSTM(Joint) 74.35±0.17 70.74±0.38 Table 2: Comparisons of the average F1 scores (%) and their standard deviations (%) over the results of models on two datasets with 5 sets of parameters for random initialization. BiLSTM (Single) is the BiLSTM model with a single layer pattern. BiLSTM (Joint) is the BiLSTM model with a joint layer model. ABiLSTM (Single) is the A-BiLSTM model with a single layer pattern. A-BiLSTM (Joint) is the A-BiLSTM model with a joint layer pattern. HA-BiLSTM (Single) is the HA-BiLSTM model with a single layer pattern. HA-BiLSTM (Joint) is the HA-BiLSTM model with a joint layer pattern. STM layer and attention mechanism. Different with HA-BiLSTM, the attention mechanism in ABiLSTM is not modified by human attention. 5 Result 5.1 Overall Comparisons Human attention estimated from eye-tracking corpus is helpful in improving the performance of neural network keyphrase extraction. As shown in Table 2, all the F1 values of models with human attention are higher than those of baseline models. In this paper, human attention is represented by the total reading time of per-word estimated from eye-tracking corpus. Thus, it indicates that the attempt of integrating human reading behavior information into neural network is feasible. The open-source eye-tracking corpus can improve the performance of models on datasets in different genres. Although the genre of the GECO eye-tracking corpus is fiction, which is different with the genre of the target dataset (Microblog), it has the ability to improve the performance of keyphrase extraction on target datasets. 5.2 Qualitative Analysis To qualitatively analyze why models with human attention generally perform better in comparison, we conduct a case study on two simple instances in Table 3 and Table 4. In Table 3, the keyphrase of the target post should be ‘hillary clinton’. We compare the keyphrase produced by A-BiLSTM 5871 Target Post what would a hillary clinton supreme court look like? Gold-standard hillary clinton Models A-BiLSTM (Single) hillary clinton; court HA-BiLSTM (Single) hillary clinton Table 3: The example that the hashtag in the target post is ‘hillary clinton’. Target Post I nominate MEN for a shorty award in entertainment because she never fails to write awesome smileys! xd URL Gold-standard entertainment Models A-BiLSTM (Single) NULL HA-BiLSTM (Single) entertainment Table 4: The example that the hashtag in the target post is ‘entertainment’. (Single) and HA-BiLSTM (Single). Interestingly, the A-BiLSTM extracts two phrases ‘hillary clinton’ and ‘court’. It may due to that the attention weight of ‘court’ is the biggest among all words in the target post in A-BiLSTM. The HA-BiLSTM identifies the correct keyphrase. In this model, the attention weight of ‘court’ is the 6th biggest among all words in the target post. The reason of this phenomenon is that the ‘court’ has a low NATRT value (0.024). Using the N-ATRT value of ‘court’ can modify the attention weight of ‘court’. In Table 4, the keyphrase of the target post should be ‘entertainment’. 
As shown in Table 4, the A-BiLSTM model do not extract any phrase, while the HA-BiLSTM model extract the correct keyphrase. It may due to that the attention weight of ‘entertainment’ in A-BiLSTM is the 13th biggest among all the words in the target post, while it is the third biggest in HA-BiLSTM, which is due to the high N-ATRT value (0.147) of ‘entertainment’ in GECO eye-tracking dataset modifying the corresponding attention weight. 5.3 Analysis on Unsupervised Models In this section, we explore the idea of using human attention on TextRank (Mihalcea and Tarau, 2004), which is an unsupervised keyphrase extraction algorithm. As defined in Section 3, a Tweet xi consist of words xi,1, xi,2, · · · , xi,n. If xi,m is appeared within the window of xi,j, there is an edge e(xi,m, xi,j) between these two words. Based on the graph composited by word vertices and edges, the importance of each word vertices can be calculated. In TextRank, the value of xi,j Num Daily-Life Election-Trec P R F1 P R F1 TextRank 2 1.7 3.5 2.3 4.0 8.0 5.4 5 2.8 8.6 4.3 4.6 15.3 7.1 10 2.9 8.6 4.3 4.7 15.8 7.2 HATR 2 2.7 5.5 3.6 6.4 12.9 8.6 5 4.0 12.1 6.0 7.3 24.4 11.3 10 4.0 12.1 6.0 7.4 24.9 11.4 Table 5: The P, R, F1 scores (%) of TextRank and TextRank with human attention (HATR) models on two datasets. Num represents the number of top-Num phrases that are chose to be candidate words. and e(xi,m, xi,j) are initialized unprivileged. In our models, we utilize human attention to normalize the initialized value of xi,j and e(xi,m, xi,j). The initialized value of xi,j depends on the N-ATRT value of itself. The initialized value of e(xi,m, xi,j) depends on the N-ATRT value of xi,m and xi,j. After extracting candidate words by HATR, we generate keyphrases by combining candidate words if words are connected together in target posts. As shown in Table 5, all the P, R and F1 values of HATR are higher than those of TextRank. These observations indicate that integrating human attention during reading into TextRank is feasible. Moreover, more candidate keyphrases yield better keyphrase extraction performance. 6 Conclusion In this paper, we consolidate the neural network keyphrase extraction algorithm with human attention represented by total reading time (TRT) estimated from GECO eye-tracking corpus. The proposed models yield a better performance on two Twitter datasets. Moreover, human attention is also effective on unsupervised models. In the future, first, we try to utilize more eyetracking corpus and estimate more features of reading behavior. Then, we will attempt to analyze real human reading behavior on social media and thereby explore more specific human attention features on social media. Acknowledgments This work is supported by Major Projects of National Social Science Fund (No. 17ZDA291). 5872 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv pre-print, arXiv/1409.0473. Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders Søgaard. 2018. Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL, pages 302–312. Abdelghani Bellaachia and Mohammed Al-Dhelaan. 2012. NE-Rank: A Novel Graph-Based Keyphrase Extraction in Twitter. In Proceedings of the IEEE/WIC/ACM International Conferences on Web Intelligence, pages 372–379. Patricia A Carpenter and Marcel Adam Just. 1983. What your eyes do while your mind is reading. 
Eye movements in reading: Perceptual and language processes, pages 275–307. Jaeho Choi, W. Bruce Croft, and Jinyoung Kim. 2012. Quality Models For Microblog Retrieval. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM, pages 1834–1838. Uschi Cop, Nicolas Dirix, Denis Drieghe, and Wouter Duyck. 2017. Presenting geco: An eyetracking corpus of monolingual and bilingual sentence reading. Behavior research methods, 49(2):602–615. Vera Demberg and Frank Keller. 2008. Data from eyetracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193–210. Asif Ekbal, S Mondal, and Sivaji Bandyopadhyay. 2007. Pos tagging using hmm and rule-based chunking. In Proceedings of workshop on shallow parsing in South Asian languages, SPSAL, pages 25–28. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures. Neural Networks, 18(5-6):602–610. Soufian Jebbara and Philipp Cimiano. 2017. Improving opinion-target extraction with character-level word embeddings. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, SCLeM, pages 159–167. Lu´ıs Marujo, Wang Ling, Isabel Trancoso, Chris Dyer, Alan W. Black, Anatole Gershman, David Martins de Matos, Jo˜ao Paulo da Silva Neto, and Jaime G. Carbonell. 2015. Automatic Keyword Extraction on Twitter. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL, pages 637–643. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing Order into Text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing , EMNLP, A meeting of SIGDAT, a Special Interest Group of the ACL, held in conjunction with ACL, pages 404–411. Keith Rayner and Susan A Duffy. 1988. On-line comprehension processes and eye movements in reading. Reading research: Advances in theory and practice, 6:13–66. Omid Rohanian, Shiva Taslimipoor, Victoria Yaneva, and Le An Ha. 2017. Using gaze data to predict multiword expressions. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP, pages 601–609. Xingshan Zeng, Jing Li, Lu Wang, Nicholas Beauchamp, Sarah Shugars, and Kam-Fai Wong. 2018. Microblog conversation recommendation via joint modeling of topics and discourse. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 375–385. Chengzhi Zhang, Huilin Wang, Yao Liu, Dan Wu, Yi Liao, and Bo Wang. 2008. Automatic keyword extraction from documents using conditional random fields. Journal of Computational Information Systems, 4(3):1169–1180. Qi Zhang, Yang Wang, Yeyun Gong, and Xuanjing Huang. 2016. Keyphrase Extraction Using Deep Recurrent Neural Networks on Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 836–845. Yingyi Zhang, Jing Li, Yan Song, and Chengzhi Zhang. 2018. Encoding conversation context for neural keyphrase extraction from microblog posts. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT, pages 1676–1686. Hong Zhao, Chen Sheng Bai, and Song Zhu. 2011a. 
Automatic keyword extraction algorithm and implementation. Applied Mechanics and Materials, 44:4041–4049. Wayne Xin Zhao, Jing Jiang, Jing He, Yang Song, Palakorn Achananuparp, Ee-Peng Lim, and Xiaoming Li. 2011b. Topical Keyphrase Extraction from Twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL, pages 379–388.
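Returning briefly to the objective above, the combined word-level and attention-level loss of Equations 6–8 can be sketched in a few lines. The PyTorch snippet below is an illustration rather than the authors' implementation: the tensor shapes, toy inputs, and one-hot label targets are assumptions, while the λ values follow the single-layer setting reported in Section 4.3.

```python
import torch

def combined_loss(pred_labels, gold_labels, pred_attention, human_attention,
                  lambda_word=0.7, lambda_att=0.3):
    """Squared-error word loss plus squared-error attention loss (Eq. 6-8).

    pred_labels / gold_labels:        (batch, seq_len, n_tags) model scores
                                      and one-hot targets for the 5-value tagset.
    pred_attention / human_attention: (batch, seq_len) predicted attention
                                      weights and N-ATRT values used as their
                                      ground truth.
    """
    word_loss = ((pred_labels - gold_labels) ** 2).sum()
    att_loss = ((pred_attention - human_attention) ** 2).sum()
    return lambda_word * word_loss + lambda_att * att_loss


# Toy example with random tensors, just to show the shapes involved.
pred_y = torch.rand(2, 7, 5)
gold_y = torch.zeros(2, 7, 5)
gold_y[:, :, 4] = 1.0            # every token tagged 'Not' in this toy batch
pred_a = torch.softmax(torch.rand(2, 7), dim=-1)
gold_a = torch.full((2, 7), 1.0 / 7)
print(combined_loss(pred_y, gold_y, pred_a, gold_a))
```

The attention term acts as a regularizer: the model is free to deviate from the eye-tracking signal when the word-level loss demands it, but is otherwise pulled toward human reading times.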
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5873–5879 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5873 Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision Abiola Obamuyide Department of Computer Science University of Sheffield [email protected] Andreas Vlachos Dept. of Computer Science and Technology University of Cambridge [email protected] Abstract In this paper we frame the task of supervised relation classification as an instance of metalearning. We propose a model-agnostic metalearning protocol for training relation classifiers to achieve enhanced predictive performance in limited supervision settings. During training, we aim to not only learn good parameters for classifying relations with sufficient supervision, but also learn model parameters that can be fine-tuned to enhance predictive performance for relations with limited supervision. In experiments conducted on two relation classification datasets, we demonstrate that the proposed meta-learning approach improves the predictive performance of two state-of-the-art supervised relation classification models. 1 Introduction Relation classification, the task of determining the relationship that exists between two entities, is a long-standing challenge in artificial intelligence with many downstream applications, including question answering, knowledge base population and web search. A variety of supervised methods have been proposed in the literature for this task (Zelenko et al., 2003; Bunescu and Mooney, 2005; Mintz et al., 2009; Surdeanu et al., 2012; Riedel et al., 2013). Current approaches are predominantly supervised models based on neural networks, for instance recursive neural networks (Socher et al., 2012; Hashimoto et al., 2013), convolutional neural networks (Zeng et al., 2014; Nguyen and Grishman, 2015), recurrent neural networks (Zhang and Wang, 2015; Xu et al., 2015; Zhang et al., 2017) or a combination of recurrent and convolutional neural networks (Vu et al., 2016). The performance of these approaches relies mostly on the quantity of their training data. However, labelled training data can be expensive to obtain and available only in limited quantities. It is therefore pertinent to develop methods that reduce their reliance on large quantities of labelled training data. In this work we propose a model-agnostic protocol for training supervised relation classification systems to achieve higher predictive performance in limited supervision settings, motivated by the observation that meta-learning leads to learning a better parameter initialization for new tasks than ad hoc multi-task learning across all tasks (Finn et al., 2017). We show that relation classification can be approached from a meta-learning perspective, and propose a model-agnostic meta-learning protocol for training relation classification models that explicitly learns a model parameter initialization for enhanced predictive performance across all relations with limited supervision. During training, our algorithm considers all relations and their instances as coming from a joint distribution, and seeks to learn model parameters that can be quickly adapted using each relation’s training instances to enhance predictive performance on its test set. 
In experiments on two relation classification datasets, we apply the proposed approach to two relation classification models, the position-aware relation classification model proposed in Zhang et al. (2017) (TACRED-PA) and the contextual graph convolution networks proposed in Zhang et al. (2018) (C-GCN), with varying amounts of supervision available at training time. We find that our approach improves the accuracy of both relation classification models on the two datasets. For instance our approach improves the F1 performance of TACRED-PA from 3.13% to 21.05% with just 1% of the training data on the SemEval dataset, and from 2.98% to 34.59% with just 0.5% of the training data on the TACRED dataset. 5874 2 Background Meta-learning, sometimes referred to as learning to learn (Thrun and Pratt, 1998), aims to develop models and algorithms which are able to exploit background knowledge to adaptively improve their learning process with experience. A number of meta-learning approaches have been proposed, and broadly fall into the following lines of work: learning how to update model parameters from background knowledge (for instance, Andrychowicz et al. 2016; Ravi and Larochelle 2017), specific model architectures for learning with limited supervision (for instance, Vinyals et al. 2016; Snell et al. 2017), and model-agnostic methods for learning a good parameter initialization for learning with limited supervision (for instance, Finn et al. 2017; Nichol et al. 2018). We next give a brief overview of the modelagnostic methods for meta-learning, which learn a good parameter initialization for target tasks from a set of source tasks, as proposed in Finn et al. (2017) and Nichol et al. (2018). These algorithms work by training a meta-model on the set of source tasks, such that the meta-model provides a good parameter initialization for target tasks which are taken from the same distribution as the source tasks. At test time, such an initialization can be fine-tuned with a limited number of gradient steps using a limited amount of training examples from the target tasks, in order to achieve good performance on the target tasks. In formal terms, let p(T ) be the distribution over tasks and fθ be the function learned by a neural model parametrized by θ. During adaptation to each task Ti sampled from p(T ), the model parameters θ are updated to task-specific parameters θ′ i. For a single gradient step, for instance, this update can be carried out as: θ′ i = θ −α∇θLTi(fθ) (1) where LTi is the loss on task Ti and α is the step size hyperparameter. The model parameters θ are trained to optimize the performance of fθ′ i, after taking a number of gradient steps with limited example instances from tasks sampled from p(T ). This is can be achieved by utilizing the meta-objective: min θ X Ti∼p(T ) LTi(fθ′ i) = X Ti∼p(T ) LTi(fθ−α∇θLTi(fθ)) (2) The optimization of the meta-objective is performed across tasks using SGD, by making updates to θ: θ ←θ −ϵ∇θ X Ti∼p(T ) LTi(fθ′ i) (3) where ϵ is the meta step size parameter. Intuitively, the meta-objective explicitly encourages the model to learn model parameters that can be quickly adapted to achieve optimum predictive performance across all tasks with as few gradient descent steps as possible. A number of approaches have been proposed for extracting relations with zero or few supervision instances. For the problem of zero-shot extraction of relations, Rockt¨aschel et al. (2015); Demeester et al. (2016) proposed the use of logic rules, Levy et al. 
(2017) proposed to address the problem by formulating it as a reading comprehension challenge, while Obamuyide and Vlachos (2018) proposed to address it as a textual entailment challenge. In this work we address the case where a limited number of supervision instances is available for all relations. In previous work, Obamuyide and Vlachos (2017) explored the use of a Factorization Machine (Rendle, 2010) framework for extracting relations with limited supervision instances. Here we instead propose an approach which is generally applicable to gradient-optimized relation extraction models. Han et al. (2018) proposed a dataset and evaluation setup for few-shot relation classification which assumes access to full supervision for training relations (specifically 700 instances per relation). In contrast, we address a different setting in which only limited supervision is available for all relations. In addition, the setup in Han et al. (2018) requires a model architecture specific to few-shot learning based on distance metric learning. On the other hand, our approach has the advantage that it applies to any gradient-optimized relation classification model. 3 Model-Agnostic Meta-Learning for Relation Classification If we consider each relation Ri as a task, then one approach to supervised relation classification with limited supervision is to train a multi-class classifier for all relations in a multi-task fashion. For all relations Ri from a distribution p(R), this approach directly optimizes for the following objec5875 tive: θ∗= min θ X Ri∼p(R) LRi(fθ) (4) where LRi is the loss on relation Ri. This assumes that joint training on all relations would naturally result in the optimal model parameters θ∗ with good predictive performance for all relations. This is however not necessarily the case, especially for relations with limited training instances from which the model can learn to generalize. We propose to instead utilize meta-learning to explicitly encourage the model to learn a good joint parameter initialization for all relations, which can then be fine-tuned with limited supervision from each relation’s training instances to achieve good performance on its test set. Such parameters would be especially beneficial for enhancing performance on relations with limited training instances. Observe though that directly optimizing Equation 2 requires computing second order derivatives over the parameters, which can be computationally expensive. Thus, we follow Nichol et al. (2018) by approximating the meta-objective in Equation 2 with the training Algorithm in 1. Algorithm 1 Meta-Learning Relation Classification (MLRC) Require: distribution over relations p(R) Require: relation classification function fθ Require: gradient-based optimization algorithm (e.g. SGD) Require: step size ϵ, learning rate α 1: randomly initialize θ 2: while not done do 3: Sample batch of B relations Ri ∼p(R) 4: for all Ri do 5: Sample train instances D = {x(j), y(j)} from Ri 6: Evaluate ∇θLRi(fθ) using D 7: Compute adapted parameters: θ′ i = SGD(θi, ∇θLRi(fθ), α) 8: end for 9: Compute update of meta-parameters: θ = θ −ϵ 1 B i=B X i=1 (θ′ i −θ) 10: end while 11: Fine-tune fθ with standard supervised learning. Subsequently we refer to our overall training procedure as summarized in Algorithm 1 as Metalearning Relation Classification (MLRC). We assume access to fθ (learner model), which is a relation classification model parameterized by θ and a distribution over relations p(R). 
The algorithm consists of the meta-learning phase (lines 1-10), followed by the supervised learning phase (line 11) which fine-tunes the meta-learned parameters, both carried out on a relation classification model using the same data for both stages. In the first phase of learning, each iteration in our approach starts by sampling a batch of relations from p(R) (line 3). Then for each relation we sample a batch of supervision instances D from its training set (line 5). We then obtain the adapted model parameters θ′ i on this relation by first computing the gradient of the training loss on the sampled relation instances (line 6) and backpropagating the gradients with a gradient-based optimization algorithm such as SGD or Adagrad (Duchi et al., 2011) (line 7). At the end of the learning iteration, the adapted parameters on each sampled relation in the batch are averaged, and an update is made on the model parameters θ (line 9). In the second phase of learning, we first initialize the model parameters with that learned during meta-training. We then proceed to fine-tune the model parameters with standard supervised learning by taking a number of gradient descent steps using the same randomly sampled batches of supervision instances from the relations’ training set as was used during meta-learning (line 11). 4 Experiments 4.1 Relation Classification Models We adopt as the learner model (fθ) two recent supervised relation classification models, the position-aware model of Zhang et al. (2017) (TACRED-PA) and the contextual graph convolution networks proposed in Zhang et al. (2018) (CGCN), both of which are multi-class models with parameters optimized via stochastic gradient descent. 4.2 Setup We conduct experiments in a limited supervision setting, where we provide all models with the same fraction of randomly sampled supervision instances during training. Further, for each experiment the supervision instances within each fraction is exactly the same across all models. We report results for each experiment by taking the average over ten (10) different runs. 4.3 Datasets We evaluate our approach on the SemEval-2010 Task 8 relation classification dataset (Hendrickx et al., 2009) (SemEval), and on the recent, more 5876 (a) (b) Figure 1: Results obtained using TACRED-PA as the learner model on (a) SemEval, and (b) TACRED datasets challenging TACRED dataset (Zhang et al., 2017) (TACRED). The SemEval dataset has a total of 8000 training and 2717 testing instances respectively. For experiments the training set is split into two, and we use 7500 instances for training and 500 instances for development. For TACRED, we use the standard training, development and testing splits as provided by Zhang et al. (2017). 4.4 Experimental Details and Hyperparameters We initialize word embeddings with Glove vectors (Pennington et al., 2014) and did not fine-tune them during training. Model training and parameter tuning are carried out on the training and development splits of each dataset, and final results reported on the test set. We ensure all models have access to the same data. For model MLRC, for each fraction, we train for 150 meta-learning iterations on TACRED dataset and 1000 meta-iterations on the SemEval dataset using that fraction of data. We then finetune with standard supervised learning using exactly the same data as was used during metalearning. 
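For readers who prefer code to pseudocode, below is a minimal PyTorch sketch of the meta-learning phase (lines 1–10 of Algorithm 1), read in the Reptile style of Nichol et al. (2018): the shared initialization is moved toward the averaged adapted parameters. The data-access interface, step sizes, inner-step counts, and loss function are placeholders rather than the settings used in the experiments.

```python
import random
import torch

def mlrc_meta_train(model, relations, meta_iters=100, inner_steps=5,
                    inner_lr=0.1, meta_step=0.5, batch_relations=4):
    """First-order sketch of the meta-learning phase of Algorithm 1.

    `relations` maps each relation name to a callable returning a batch
    (inputs, labels) of that relation's training instances.  All state
    entries are assumed to be float tensors (no integer buffers).
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(meta_iters):
        start = {k: v.detach().clone() for k, v in model.state_dict().items()}
        deltas = []
        for rel in random.sample(list(relations), k=batch_relations):
            model.load_state_dict(start)                     # reset to theta
            opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
            for _ in range(inner_steps):                     # adapt on D ~ R_i
                x, y = relations[rel]()
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
            adapted = model.state_dict()                     # theta'_i
            deltas.append({k: (adapted[k] - start[k]).detach() for k in start})
        # Meta-update: move theta toward the average adapted parameters.
        model.load_state_dict({
            k: start[k] + meta_step * torch.stack([d[k] for d in deltas]).mean(0)
            for k in start})
    return model
```

The supervised phase (line 11) then continues ordinary mini-batch training from the returned parameters on the same sampled batches.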
For both relation classification models, that is TACRED-PA and C-GCN, we use the same hyper(a) (b) Figure 2: Results obtained using C-CGN as the learner model on (a) SemEval, and (b) TACRED datasets parameters as in Zhang et al. (2017) and Zhang et al. (2018) respectively. Relation # F1(%) TC-PA MLRC Instrument-Agency 3 0 8.44 Content-Container 4 0.93 30.9 Member-Collection 5 3.04 24.19 Entity-Destination 7 14.33 35.36 Entity-Origin 7 2.85 24.62 Message-Topic 7 0.8 12.32 Component-Whole 8 2.68 14.87 Product-Producer 9 0.68 10.29 Cause-Effect 11 2.93 28.52 Average 3.13 21.05 Table 1: Results with 1% training data on SemEval. The # column is the number of instances of each relation during training, and TC-PA denotes the TACRED-PA model (trained without meta-learning), while MLRC denotes the same model trained with our approach. 5877 4.5 Evaluation Metrics For the TACRED dataset, we follow Zhang et al. (2017) and report micro-averaged F1 scores1. For the SemEval dataset, we report the official measure, which is the F1 score macro-averaged across relations.2 4.6 Results and Discussion The results obtained on the SemEval and TACRED datasets using TACRED-PA as the learner model (fθ) are shown in Figures 1(a) and 1(b) respectively. We find that on both datasets, our approach improves performance as more supervision becomes available, with the largest gains obtained at the early stage when very limited supervision is available. For instance on SemEval, given just 1% of the training set (first datapoint in Figure 1(a)), our approach improves the F1 performance of TACRED-PA from 3.13% to 21.05%, representing an absolute increase of 17.92%. Table 1 gives a further breakdown of the F1 scores of individual relations when both approaches are given access to 1% of the training set. We observe that MLRC considerably improves the performance of TACRED-PA on relations with the least number of training instances, likely by leveraging background knowledge from relations with more training instances. On the TACRED dataset, MLRC improves the performance of TACRED-PA from 2.98% to 34.59% with just 0.5% of the training data (fifth datapoint in Figure 1(b)), which is an absolute increase of 31.61%. A similar trend is observed using C-GCN as the learner model on both datasets, as presented in Figures 2(a) and 2(b). For instance on SemEval, we improve the F1 performance of C-GCN from 3.38% to 17.14% using just 1% of the training data (first datapoint in Figure 2(a)). Similarly on TACRED, the performance of C-GCN is improved from 7.59% to 23.18% (first datapoint in Figure 2(b)) by using 0.1% of its training set. Further, we find that the proposed approach does not adversely affect performance when full supervision is available during training. For instance, when given full supervision on the TACRED dataset, while TACRED-PA obtains an F1 score of 65.1%, its performance is improved to 65.2% by using our approach, demonstrating that 1We use the same evaluation script as Zhang et al. (2017). 2We compute these measures using the official evaluation script that comes with the dataset. the proposed approach does not adversely affect performance when provided full supervision during training. 
5 Conclusion and Future Work We show that the performance of supervised relation classification models can be improved, even with limited supervision at training time, by framing relation classification as an instance of metalearning, and proposed a model-agnostic learning protocol for training relation classifiers with enhanced predictive performance in limited supervision settings. In future work, we want to extend this approach to other natural language processing tasks. Acknowledgements The authors acknowledge support from the EU H2020 SUMMA project (grant agreement number 688139). We are grateful to Yuhao Zhang for sharing his data with us. References Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. 2016. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989. Razvan Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Vancouver, B.C. Thomas Demeester, Tim Rockt¨aschel, and Sebastian Riedel. 2016. Lifted rule injection for relation embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1389–1399, Austin, Texas. Association for Computational Linguistics. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135, International Convention Centre, Sydney, Australia. 5878 Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803– 4809, Brussels, Belgium. Association for Computational Linguistics. Kazuma Hashimoto, Makoto Miwa, Yoshimasa Tsuruoka, and Takashi Chikayama. 2013. Simple Customization of Recursive Neural Networks for Semantic Relation Classification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1372–1376. Association for Computational Linguistics. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94–99. Association for Computational Linguistics. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333–342, Vancouver, Canada. Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. 
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1003–1011. Thien Huu Nguyen and Ralph Grishman. 2015. Relation Extraction: Perspective from Convolutional Neural Networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 39–48, Denver, Colorado. Association for Computational Linguistics. Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. CoRR, abs/1803.02999. Abiola Obamuyide and Andreas Vlachos. 2017. Contextual pattern embeddings for one-shot relation extraction. In Proceedings of the NeurIPS 2017 Workshop on Automated Knowledge Base Construction (AKBC). Abiola Obamuyide and Andreas Vlachos. 2018. Zeroshot relation classification as textual entailment. In Proceedings of the EMNLP 2018 Workshop on Fact Extraction and VERification (FEVER), pages 72–78, Brussels, Belgium. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Sachin Ravi and Hugo Larochelle. 2017. Optimization As a Model for Few-Shot Learning. In International Conference on Learning Representations 2017. Steffen Rendle. 2010. Factorization machines. Proceedings - IEEE International Conference on Data Mining, ICDM, pages 995–1000. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation Extraction with Matrix Factorization and Universal Schemas. Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84. Tim Rockt¨aschel, Sameer Singh, and Sebastian Riedel. 2015. Injecting Logical Background Knowledge into Embeddings for Relation Extraction. North American Association for Computational Linguistics, pages 1119–1129. Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic Compositionality through Recursive Matrix-Vector Spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Association for Computational Linguistics. Sebastian Thrun and Lorien Pratt. 1998. Learning to Learn: Introduction and Overview. In Learning to Learn, pages 3–17. Springer US, Boston, MA. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638. Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hinrich Sch¨utze. 2016. Combining Recurrent and Convolutional Neural Networks for Relation Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 534–539. Association for Computational Linguistics. 
5879 Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1785–1794, Lisbon, Portugal. Association for Computational Linguistics. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. The Journal of Machine Learning Research, 3:1083–1106. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation Classification via Convolutional Deep Neural Network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Dongxu Zhang and Dong Wang. 2015. Relation classification via recurrent neural network. arXiv preprint arXiv:1508.01006. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215. Association for Computational Linguistics. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35–45.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 619–628 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 619 Learning from omission Bill McDowell Duolingo Pittsburgh, PA 15206 [email protected] Noah D. Goodman Stanford University Stanford, CA 94305 [email protected] Abstract Pragmatic reasoning allows humans to go beyond the literal meaning when interpreting language in context. Previous work has shown that such reasoning can improve the performance of already-trained language understanding systems. Here, we explore whether pragmatic reasoning during training can improve the quality of learned meanings. Our experiments on reference game data show that end-to-end pragmatic training produces more accurate utterance interpretation models, especially when data is sparse and language is complex. 1 Introduction We often draw pragmatic inferences about a speaker’s intentions from what they choose to say, but also from what they choose not to say in context. This pragmatic reasoning arises from listeners’ inferences based on speakers’ cooperativity (Grice, 1975), and prior work has observed that such reasoning enables human children to more quickly learn word meanings (Frank and Goodman, 2014). This suggests that pragmatic reasoning might allow modern neural network models to more efficiently learn on grounded language data from cooperative reference games. As a motivating case, consider an instance of the color reference task from Monroe et al. (2017)— shown in the first row of Table 1. In this task, a speaker communicates a target color to a listener in a context containing two distractor colors; the listener picks out the target based on what the speaker says. In the first instance from Table 1, the speaker utters “dark blue” to describe the target. Whereas “dark” and “blue” also apply to the target, they lose their informativity in the presence of the distractors, and so the speaker pragmatically opts for “dark blue”. A listener who is learning the language from such examples might draw several inferences from the speaker’s utterance. First, under the assumption that the speaker is informative, a “literal” learner might infer that “dark blue” applies to the target shade more than the distractors. Second, a “pragmatic” learner might consider the cheaper alternatives–“dark” and “blue”–that have occurred in the presence of the same target in prior contexts, and infer that these alternative utterances must also apply to the distractors given the speaker’s failure to use them. The pragmatic learner might thus gain more semantic knowledge from the same training instances than the literal learner: pragmatic reasoning can reduce the data complexity of learning. The pragmatic learning effects just described depend on the existence of low cost alternative utterances that the learner already knows can apply to the target object. The existence of short alternatives will be more likely when the target objects are more complex (as in row 2 of Table 1), because these objects require longer utterances (with therefore more short alternatives) to individuate. Thus, we further hypothesize that pragmatic inference will reduce data complexity especially in contexts that elicit more complex language. In light of these arguments, we leverage the pragmatic inference described here in training neural network models to play reference games. 
For formal, probabilistic representations of contextual reasoning in our training objectives, we embed neural language models within pragmatic listener and speaker distributions, as specified by the Rational Speech Acts (RSA) framework (Goodman and Frank, 2016; Frank and Goodman, 2012). Pragmatic inference allows our models to learn from indirect pragmatic evidence of the sort described above, yielding better calibrated, context-sensitive models and more efficient use 620 Target Distractors Utterance Cheaper Alternative Utterances 1. x x x “dark blue” “blue”, “dark”. . . 2. x x x x x x x x x “left dark blue” “dark blue”, “left dark”, “right black”. . . Table 1: Speaker utterances describing (1) colors and (2) color grids to differentiate them from distractors. A learner might draw inferences about fine-grained linguistic distinctions by explaining the speaker’s failure to use cheaper alternatives in context (e.g. they might infer that “blue” and “dark” apply to some distractors in 1). These inferences have the potential to increase in number and in strength as dimensionality of the referents and utterance complexity increase (as in 2). of the training data. We compare pragmatic and non-pragmatic models at training and at test, while varying conditions on the training data to test hypotheses regarding the utility of pragmatic inference for learning. In particular, we show that incorporating pragmatic reasoning at training time yields improved, state-of-the-art accuracy for listener models on the color reference task from Monroe et al. (2017), and the effect demonstrated by this improvement is especially large under small training data sizes. We further introduce a new color-grid reference task and data set consisting of higher dimensional objects and more complex speaker language; we find that the effect of pragmatic listener training is even larger in this setting. 2 Related Work Prior work has shown that neural network models trained to capture the meanings of utterances can be improved using pragmatic reasoning at test time via the RSA framework (Andreas and Klein, 2016; Monroe et al., 2017; Goodman and Frank, 2016; Frank and Goodman, 2012). For instance, Monroe et al. (2017) train context-agnostic (i.e. non-pragmatic) neural network models to learn the meanings of color utterances using a corpus of examples of the form shown in the first line of Table 1. At evaluation, they add an RSA layer on top of the trained model to draw pragmatic, context-sensitive inferences about intended color referents. Other related work explores additional approaches to create context-aware models that generate color descriptions (Meo et al., 2014), image captions (Vedantam et al., 2017), spatial references (Golland et al., 2010), and utterances in simple reference games (Andreas and Klein, 2016). Each of these shows that adding pragmatics at test time improves performance on tasks where context is relevant. Whereas this prior work showed the effectiveness of pragmatic inferences for models trained non-pragmatically, our current work shows that these pragmatic inferences can also inform the training procedure, providing additional gains in performance. More similar to our work, Monroe and Potts (2015) improve model performance by incorporating pragmatic reasoning into the learning procedure for an RSA pragmatic speaker model. However, in contrast to our work, they consider a much simpler corpus, and a simple non-neural semantics. 
We consider richer corpora with sequential utterances and continuous referent objects that pose several algorithmic challenges which we solve using neural networks and Monte Carlo methods. 3 Approach We compare neural nets trained pragmatically and non-pragmatically on a new color-grid reference game corpus as well as the color reference corpus from Monroe et al. (2017). In this section, we describe our tasks and models. 3.1 Reference Game Listener Tasks The color reference game from Monroe et al. (2017) consists of rounds played between a speaker and a listener. Each round has a context of two distractors and a target color (Figure 1a). Only the speaker knows the target, and must communicate it to the listener—who must pick out the target based on the speaker’s English utterance. Similarly, each round of our new color-grid reference game contains target and distractor color-grid objects, and the speaker must communicate the target grid to the listener (Figure 1b). We train neural network models to play the listener role in these games. 3.2 Models In both reference games, our listener models reason about a round r represented by a single train621 (a) Round of color reference (each object is a color) (b) Round of grid reference (each object is a grid) Figure 1: Rounds from the reference game tasks. These rounds consist of messages sent between a speaker and listener. The speaker communicates the target referent object (with a green border) to the listener. ing/testing example of the form (O(r), U(r), t(r)) where O(r) is the set of objects observed in the round (colors or color-grids), U (r) is a sequence of utterances produced by the speaker about the target (represented as a token sequence), and t(r) is the target index in O(r). The models predict the most likely referent O(r) t of an utterance within a context O(r) according to an RSA listener distribution l(t(r) | U (r), O(r)) over targets given the utterances and a context. In pragmatic models, a nested structure allows the listener to form its beliefs about the intended referent by reasoning recursively about speaker intentions with respect to a hypothetical “literal” (non-pragmatic) listener’s interpretations of utterances. This recursive reasoning allows listener models to account for the speaker’s context-sensitive, pragmatic adjustments to the semantic content of utterances. Formally, our pragmatic RSA model l1, with learnable semantic parameters θ, for target referent t, given an observed context O and speaker utterances U, is computed as: l1(t | U, O; θ) = s1(U | t, O; θ)p(t) P t′ s1(U | t′, O; θ)p(t′) s1(U | t, O; θ) = l0(t | U, O; θ)αp(U | O) P U′ l0(t | U ′, O; θ)αp(U ′ | O) l0(t | U, O; θ) = Lθ U,Otp(t) P t′ Lθ U,Ot′ p(t′) In these equations, the top-level l1 listener model estimates the target referent by computing a pragmatic speaker s1 and a target prior p(t). Similarly, the pragmatic speaker s1 computes an utterance distribution with respect to a literal listener l0, an utterance prior p(U | O), and a rationality parameter α. Finally, the “literal” listener computes its expectation about the target referent from the target prior p(t) and the literal meaning, Lθ U,Ot, which captures the extent to which utterance U applies to Ot. In both the l0 and l1 distributions, we take p(t) to be a uniform distribution over target indices. 
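To make the nested RSA computation above concrete, the following is a minimal single-context sketch in PyTorch. It assumes a [num_utterances × num_objects] matrix of log meaning values produced by the LSTM meaning function and a log-probability vector over the sampled utterance support; the function and variable names are ours, and the paper's Algorithm 1 performs the same computation in batched form.

```python
import torch

def rsa_listener_log_probs(log_meanings, log_utt_prior, alpha=8.0):
    """Nested RSA distributions for a single context, computed in log space.

    log_meanings:  [num_utts, num_objs] tensor of log L_theta[u, t], i.e. how
                   well each candidate utterance applies to each object.
    log_utt_prior: [num_utts] tensor of log p(U | O) over the sampled support.
    alpha:         speaker rationality (the paper reports alpha = 8.0).

    Returns log l1(t | U, O) with shape [num_utts, num_objs].
    """
    # l0(t | U) is proportional to L[U, t] * p(t); with a uniform target prior
    # this is a row-wise normalization over objects.
    log_l0 = torch.log_softmax(log_meanings, dim=1)                 # [U, T]

    # s1(U | t) is proportional to l0(t | U)^alpha * p(U | O); normalize over
    # the utterance support (rows) for each candidate target (column).
    log_s1 = alpha * log_l0 + log_utt_prior.unsqueeze(1)            # [U, T]
    log_s1 = log_s1 - torch.logsumexp(log_s1, dim=0, keepdim=True)

    # l1(t | U) is proportional to s1(U | t) * p(t); uniform p(t) again, so
    # normalize over objects for each utterance.
    log_l1 = log_s1 - torch.logsumexp(log_s1, dim=1, keepdim=True)
    return log_l1
```

Pragmatic training then maximizes log_l1 at the row of the observed utterance (which, as in Algorithm 1, is appended to the sampled support) and the column of the true target; replacing log_l1 with log_l0 in the loss recovers the non-pragmatic training objective.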
Literal meanings The literal meanings Lθ U,Ot in l0 are computed by an LSTM (Hochreiter and Schmidhuber, 1997) that takes an input utterance and an object (color or color-grid), and produces output in the interval (0, 1) representing the degree to which the utterance applies to the object (see Figure 2b). The object is represented as a single continuous vector, and is mapped into the initial hidden state of the LSTM by a dense linear layer in the case of colors, and an averagepooled, convolutional layer in the case of grids (with weights shared across the grid-cell representations described in Section 4.1.2). Given the initialized hidden state, the LSTM runs over embeddings of the tokens of an utterance. The final hidden state is passed through an affine layer, and squished by a sigmoid to produce output in (0, 1). This neural net contains all learnable parameters θ of our listeners. Utterance prior The utterance prior p(U | O) in s1 is a non-uniform distribution over sequences of English tokens—represented by a pre-trained LSTM language model conditioned on an input color or grid (see Figure 2a). Similar to the literal meaning LSTM, we apply a linear transformation to the input object to initialize the LSTM hidden state. Then, each step of the LSTM applies to and outputs successive tokens of an utterance. In addition, when operating over grid inputs, we apply a layer of multiplicative attention given by the “general” scoring function in (Luong et al., 2015) between the LSTM output and the convolutional 622 grid output before the final Softmax. This allows the language model to “attend” to individual grid cells when producing output tokens, yielding an improvement in utterance prior sample quality. The language model is pre-trained over speaker utterances paired with targets, but the support of the distribution encoded by this LSTM is too large for the s1 normalization term within the RSA listener to be computed efficiently. Similar to Monroe et al. (2017), we resolve this issue by taking a small set of samples from the pre-trained LSTM applied to each object in a context, to approximate p(U | O), each time l1 is computed during training and evaluation. 3.3 Learning The full l1 neural RSA architecture for computing pragmatic predictions over batches of input utterances and contexts is given by Algorithm 1.1 During training, we backpropagate gradients through the full architecture, including the RSA layers, and optimize the pragmatic likelihood maxθ log l1(t | U, O; θ). For clarity, we can rewrite this optimization problem for a single (O, U, t) training example in the following simplified form by manipulating the RSA distributional equations from the previous section: max θ  log Lθ U,Ot −log Zl0(U | O; θ) −log Zs1(t | O; θ) −log Zl1(U | O; θ)  Here, Zl1, Zs0, and Zl0 are the normalization terms in the denominators of the nested RSA distributions, which we can rewrite using the log-sum-exp function (LSE) as: log Zl0(U | O; θ) = LSE t′  log Lθ U,Ot′ + log p(t′)  log Zs1(t | O; θ) = LSE U′  log l0(t | U ′, O; θ)α + log p(U ′ | O)  log Zl1(U | O; θ) = LSE t′  log s1(U | t′, O; θ) + log p(t′)  1Note that this algorithm listing provides careful annotation of the dimensionality of various distributional tensors, which we hope might aid future research in reproducing our model implementations. Given this representation of the optimization problem, we can see its relationship to the intuitive characterization of pragmatic learning that we gave in the introduction. 
First, the two terms log Lθ U,Ot −log Zl0(U | O; θ) can be seen as finding the optimal non-pragmatic parameters; the first log Lθ U,Ot term upweights the model’s estimate of the literal applicability of the observed U to its intended target referent, and the −log Zl0(U | O; θ) term maximizes the margin between this estimate and the applicability of U to the contextual distractors.2 Next, the −log Zs1(t | O; θ) term makes pragmatic adjustments to the parameter estimates by enforcing a margin between the l0 predictions given by low cost alternatives U ′ and the observed utterance U on a referent object t. The enforcement of this margin pushes Lθ U′,Ot′ upward for distractors t′, simulating the pragmatic reasoning described in the introduction, and drawing additional information about the low cost alternative utterances from their omission in context. Finally, the −log Zl1(U | O; θ) term enforces a margin between the speaker prediction s1(U | t, O; θ) and predictions on the true utterance U given distractors Ot′. This ensures that the true utterance is down-weighted on distractor objects following the speaker’s pragmatic adjustments, such that our l1 listener predictions are well-calibrated with respect to the s1 distribution’s cost-sensitive adjustments learned through −log Zs1(t | O; θ). 4 Experiments We investigate the value of pragmatic training by estimating the parameters θ in the RSA “literal meaning” function Lθ for l1 (pragmatic) and l0 (non-pragmatic) distributions according to the maximal likelihood of the training data for the color and grid reference tasks. We then evaluate meanings Lθ from each training procedure using pragmatic l1 inference (and non-pragmatic l0 inference, for completeness). We perform this comparison repeatedly to evaluate the value of pragmatics at training and test under various data conditions. In particular, we evaluate the hypotheses that (1) the pragmatic inferences enabled by the l1 training will reduce sample complexity, leading to more accurate meaning functions especially under small data sizes, and (2) the effectiveness 2Here, we think informally of a margin by considering the LSE as an approximation of max. 623 Algorithm 1 RSA pragmatic listener (l1) neural network forward computation. The l1 function is applied to batches of input utterances and observed contexts, and produces batches of distributions over objects in the contexts, representing the listener’s beliefs about intended utterance referents. 1: b ←data batch size 2: l ←maximum utterance length 3: k ←number of objects per context (i.e. colors or color-grids) 4: d ←dimension of each object 5: u ←number utterances to sample per object in context to make speaker distribution supports 6: z ←ku + 1 number of utterances in each support including input utterance 7: s0 ←pre-trained LSTM language model (Figure 2a) 8: L ←LSTM meaning function architecture (Figure 2b) 9: function l1(utterances U ∈Rb×l, observations O ∈Rb×k×d) 10: Pt ←(S = (0, . . . 
, k −1)b, P = 1b×k/k) ▷batch of uniform target priors of size b × k 11: S1 ←s1(S[Pt], O, U)T ▷speaker utterance distributions of size b × z × k 12: return Normalize-Rows(S1 · Repeat(P[Pt], z))[U] ▷target distributions conditioned on utterances in U 13: 14: function s1(possible targets T ∈Rb×k, observations O ∈Rb×k×d, fixed input utterances U ∈Rb×l) 15: Putt ←SAMPLE-UTTERANCE-PRIORS(U, O) ▷sample batch of utterance priors of size b × z 16: L0 ←l0(S[Pu], O)T ▷batch of distributions over targets of size b × k × z 17: return Normalize-Rows(Lα 0 · Repeat(P[Pu], k)) ▷speaker utterance distributions of size b × k × z 18: 19: function l0(possible utterances U ∈Rb×z×l, observations O ∈Rb×k×d) 20: Pt ←(S = (0, . . . , k −1)b, P = 1b×k/k) ▷batch of uniform target priors of size b × k 21: L ←COMPUTE-MEANINGS(U, S[Pt], O) ▷batch of meaning matrices of size b × z × k 22: return Normalize-Rows(L · Repeat(P[Pt], z)) ▷batch of distributions over targets of size b × z × k 23: 24: function SAMPLE-UTTERANCE-PRIORS(fixed input utterances U ∈Rb×l, O ∈Rb×k×d) 25: Putt ←(S = 0b×z×l, P = 1b×z z ) ▷initialize supports and probabilities in utterance prior tensor 26: for i = 1 to b do ▷for each round in batch 27: for j = 1 to k do ▷for each object in a round 28: Sample u(v) from s0(⟨s⟩, O[i, j]) for v = 1, . . . , u ▷sample utterances for object O[i, j] 29: S[Putt][i, (j −1)u : ju] ←u ▷add sampled utterance to supports 30: S[Putt][:, ku, :] ←U ▷add input utterances to supports 31: return Putt 32: 33: function COMPUTE-MEANINGS(U ∈Rb×z×l,T ∈Rb×k, O ∈Rb×k×d) 34: L ←0b×z×k ▷initialize meaning tensor to be filled 35: for i = 1 to b do ▷for each round in batch 36: (Ui, Ti) ←cartesian product of utterances in U[i] and targets in T[i] 37: L[i] ←Reshape(L(Ui, O[i, Ti]), z, k) ▷degrees to which each utterance applies to each object 38: return L ▷batch of meaning matrices (one per example context) o ⟨s⟩ u1 u2 u1 u2 ⟨/s⟩ Tanh Embedding LSTM Softmax (a) Architecture for the language model from which we sample for utterance priors. We recursively sample utterance tokens from a dense Softmax layer applied to the LSTM output. o u1 u2 u3 Lθ U,o Tanh Embedding LSTM Sigmoid (b) Architecture for computing the literal meaning Lθ U,o within RSA. A dense sigmoid layer computes the output meaning Lθ U,o ∈(0, 1) based on the final state of the LSTM that was applied to u and o. Figure 2: Neural networks for (a) the speaker language model used to construct utterance priors, and (b) the meaning function Lθ U,o within the RSA listener distributions (diagram style inspired by Monroe et al. (2017)). Both architectures apply a tanh layer to an input object o (a grid or color), and use the result as the initial hidden state of an LSTM layer. In each case, the LSTM operates over embeddings of tokens u1, u2, . . . from utterance U. 624 Model Color Dev Color Test Grid Dev Grid Test l0 training, l0 test 0.8455 ± 0.0011 0.8656 ± 0.0012 0.5714 ± 0.0068 0.5443 ± 0.0122 l0 training, l1 test 0.8472 ± 0.0013 0.8671 ± 0.0017 0.5694 ± 0.0075 0.5455 ± 0.0123 l1 training, l1 test 0.8587 ± 0.0008 0.8771 ± 0.0008 0.6329 ± 0.0045 0.6200 ± 0.0063 Monroe et al. (2017) 0.8484 0.8698 Table 2: Listener accuracies on the color and grid data. All accuracies are reported with ±SE. 0 10 20 30 0 0.2 0.4 Utterance Length in Tokens Data Proportion Color Grid (a) Length distributions of speaker utterances for colors and grids. Grids usually have longer descriptions. 
Color Utterances Grid Utterances blue top left blue purple purple top left green purple top right (b) Top three most frequent speaker utterances in the color and grid data. Top color descriptions are single words. Multiword utterances like “dark green” are less frequent. Top grid utterances tend to specify colors and locations. Data Color Accuracy Grid Accuracy Full 0.9003 0.9318 Close 0.8333 0.9024 Split 0.8970 0.9291 Far 0.9696 0.9642 (c) Human accuracies on full color and grid data, and in close, split, and far conditions. Grid accuracy is higher than color accuracy, possibly because there are more properties for speakers to describe when referring to grids. Figure 3: Comparison of the color and grid data sets of the l1 training over l0 training will increase on a more difficult reference game task containing higher-dimensional objects and utterances— i.e. pragmatic training will help more in the grids task than in the colors task. 4.1 Data 4.1.1 Color Reference For the color reference task, we use the data collected by Monroe et al. (2017) from human play on the color reference task through Amazon Mechanical Turk using the framework of Hawkins (2015). Each game consists of 50 rounds played by a human speaker and listener. In each round, the speaker describes a target color surrounded by a context of two other distractor colors, and a listener clicks on the targets based on the speaker’s description (see Figure 1a). The resulting data consists of 46, 994 rounds across 948 games, where the colors of some rounds are sampled to be more likely to require pragmatic reasoning than others. In particular, 15, 516 trials are close with both distractors within a small distance to the target color in RGB space, 15, 782 are far with both distractors far from the target, and 15, 693 are split with one distractor near the target and one far from the target. For model development, we use the train/dev/test split from Monroe et al. (2017) with 15, 665 training, 15, 670 dev, and 15, 659 test rounds. Within our models, we represent color objects using a 3-dimensional CIELAB color space— normalized so that the values of each dimension are in [−1, 1]. Our use of the CIELAB color space departs from prior work on the color data which used a 54-dimensional Fourier space (Monroe et al., 2017, 2016; Zhang and Lu, 2002). We found that both the CIELAB and Fourier spaces gave similar model performance, so we chose the CIELAB space due to its smaller dimensionality. Our speaker utterances are represented as sequences of cleaned English token strings. Following Monroe et al. (2017), we preprocess the tokens by lowercasing, splitting off punctuation, and replacing tokens that appear only once with [unk]. In the color data, we also follow the prior work and split off -er, -est, and -ish, suffixes. Whereas the prior work concatenated speaker messages into a single utterance without limit, we limit the full sequence length to 40 tokens for efficiency. 4.1.2 Grid Reference Because initial simulations suggested that pragmatic training would be more valuable in more complex domains (where data sparsity is a greater issue), we collected a new data set from human 625 play on the color-grid reference task described in Section 3.1. Data was collected on Amazon Mechanical Turk using an open source framework for collaborative games (Hawkins, 2015). 
Each game consists of 60 rounds played between a human speaker and listener, where the speaker describes a target grid in the presence of two distractor grids (see Figure 1b), resulting in a data set of 10,666 rounds spread across 197 games.3 Each round consists of three 3 × 3 grid objects, with the grid colors at each cell location sampled according to the same close, split, and far conditions as the in color reference data—yielding 3,575 close, 3,549 far, and 3,542 split rounds. We also varied the number of cells that differ between objects in a round from 1 to 9. As shown in Figure 3, these grid trials result in more complex speaker utterances than the color data. We partitioned this data into 158 train, 21 dev, 18 test games containing 8,453 training, 1,236 dev, and 977 test rounds. In our models, we represent a single color-grid object from the data as a concatenation of 9 vectors representing the 9 grid cells. Each of the 9 cell vectors consists of the normalized CIELAB representation used in the color data appended to a one-hot vector representing the position of the cell within the grid. For speaker utterances, we use the same representation as in the color data, except that we do not split off the -er, -est, and -ish endings. 4.2 Model Training Details We implement our models in PyTorch (Paszke et al., 2017), and train them using the Adam variant of stochastic gradient descent (Kingma and Ba, 2015) with default parameters (β1, β2) = (0.9, 0.999) and ϵ = 10−8. We train with early-stopping based on dev set log-likelihood (for speaker) or accuracy (for listener) model evaluations. Before training our listeners, we pre-train an LSTM language model to provide samples for the utterance priors on target colors paired with speaker utterances of length at most 12 on examples where human listeners picked the correct color. We follow Monroe et al. (2017) for language model hyper-parameters, with embedding and LSTM layers of size 100. Also following this prior work, we use a learning rate of 0.004, batch size 128, and apply 0.5 dropout to each layer. We train for 7, 000 iterations, evaluating the model’s 3The grid data and our model implementations are available at https://github.com/forkunited/ltprg. accuracy on the dev set every 100 iterations. We pick the model with the best dev set log-likelihood from evaluations at 100 iteration intervals. To train and compare various listeners, we optimize likelihoods under non-pragmatic l0 and pragmatic l1 with a literal meaning function computed by the LSTM architecture described in Section 3.2, sampling new utterance priors for each mini-batch from our pre-trained language model applied to the round’s three colors for use within the s1 module of RSA (see Algorithm 1). We draw 30 samples per round (10 per color or grid) at a maximum length of 12. We generally use speaker rationality α = 8.0 based on dev set tuning, and we follow Monroe et al. (2017) for other hyper-parameters—with embedding size of 100 and LSTM size of 100 in our meaning functions. Also following this prior work, we allow the LSTM to be bidirectional with learning rate of 0.005, batch size 128, and gradient clipping at 5.0. We train listeners for 10, 000 iterations on the color data and 15, 000 iterations on grid data, evaluating dev set accuracy every 500 iterations. We pick the model with the best accuracy from those evaluated at 500 iteration intervals. 
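As a concrete illustration of the grid-object representation described in Section 4.1.2 above, the sketch below assembles the input vector for one 3 × 3 grid: each cell's normalized CIELAB triple is appended to a one-hot encoding of its position, and the nine cell vectors are concatenated. The row-major cell ordering and the function name are illustrative assumptions about details the paper leaves implicit.

```python
import numpy as np

def grid_features(cielab_cells):
    """Featurize one 3x3 color grid (a sketch of the Section 4.1.2 encoding).

    cielab_cells: array of shape (9, 3), CIELAB values per cell, already
                  normalized to [-1, 1] as for the color data.
    Returns a flat vector of length 9 * (3 + 9) = 108: each cell's color
    appended to a one-hot position indicator, concatenated over cells.
    """
    cells = []
    for pos, lab in enumerate(cielab_cells):
        one_hot = np.zeros(9)
        one_hot[pos] = 1.0
        cells.append(np.concatenate([lab, one_hot]))
    return np.concatenate(cells)  # shape (108,)
```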
4.3 Results 4.3.1 Color Reference The accuracies of color target predictions by l0 and l1 models under both l0 and l1 training are shown in the left columns of Table 2. For robustness, average accuracies and standard errors were computed by repeatedly retraining and evaluating with different weight initializations and training data orderings using 4 different random seeds. The results in the top left panel of Table 2 show that l1 pragmatic training coupled with l1 pragmatic evaluation gives the best average accuracies. The previously studied l1 pragmatic usage with l0 non-pragmatic training is next best. These results provide evidence that literal meanings estimated through pragmatic training are better calibrated for pragmatic usage than meanings estimated through non-pragmatic l0 training. Furthermore, relative to state-of-the-art in Monroe et al. (2017), Table 2 shows that our pragmatically trained model yields improved accuracy over their best “blended” pragmatic Le model which computed predictions based on the product of two separate non-pragmatically trained models. The effect sizes are small for the pragmatic to 626 103 104 0.4 0.6 0.8 1 Color l1 Accuracy Full 103 104 0.4 0.6 0.8 1 Close 103 104 0.4 0.6 0.8 1 Split 103 104 0.4 0.6 0.8 1 Far 103 104 0.4 0.6 0.8 1 Grid l1 Accuracy 103 104 0.4 0.6 0.8 1 103 104 0.4 0.6 0.8 1 103 104 0.4 0.6 0.8 1 l0 training l1 training Figure 4: Listener accuracies on the color and grid dev data for models learned under training data subsets of various sizes (given by the horizontal axes). The separate plots give accuracies over the full data and the close, split, and far conditions. non-pragmatic comparisons when training on the full color data (though approaching the ceiling 0.9108 human accuracy), but we hypothesized that the effect of pragmatic training would increase for training with smaller data sizes (as motivated by arguments in the introduction). To test this, we trained the listener models on smaller subsets of the training data, and evaluated accuracy. As shown by the top left plot of Figure 4, pragmatic training results in a larger gain in accuracy when less data is available for training. Lastly, we also considered the effect of pragmatic training under the varying close, split, and far data conditions. As shown in the three plots at the right of the top row of Figure 4, the effect of l1 training over l0 is especially pronounced for inferences on close and split data conditions where the target is more similar to the distractors, and the language is more context-dependent. This makes sense, as these conditions contain examples where the pragmatic, cost-sensitive adjustments to the learned meanings would be the most necessary. 4.3.2 Grid Reference For the more complex grid reference task, the listener accuracies in the right columns of Table 2 show an even larger gain from pragmatic l1 training, and no gain is seen for pragmatic evaluation with non-pragmatic training. This result is consistent with the hypothesis motivated by arguments in the introduction that pragmatic training should be more effective in contexts containing targets and distractors for which many low-cost alternative utterance are applicable. 
Furthermore, the grid-reference data-complexity exploration in the bottom row of Figure 4 shows that this improvement given by pragmatic training remains large across data sizes; the exception is the smallest amount of training data under the most difficult close condition, where the language is so sparse that meanings may be difficult to estimate, even with pragmatic adjustments. Altogether, these results suggest that pragmatic training helps with an intermediate amount of data relative to the domain complexity—with too little data, pragmatics has no signal to work with, but with too much data, the indirect evidence provided by pragmatics is less necessary. Since real-world linguistic contexts are more complex than either of our experimental domains, we hypothesize that they often fit into this intermediate data regime. 4.3.3 Literal Meaning Comparisons To improve our understanding of the quantitative results, we also investigate qualitative differences between meaning functions Lθ estimated under 627 Full Data Small Data l0 l1 l0 l1 ‘green’ ‘greenish’ ‘dark green’ ‘neon green’ ‘red’ ‘redish’ ‘yellow’ ‘yellowish’ Table 3: Extensions of various utterances according to the literal meaning functions (L) from listener models l0 and l1 learned on full (left) and smaller (250 example) subsets (right) of the color data. Each cell shows a learned color utterance extension in the Hue × Saturation color space depicted at the top left, with the degree of darkness corresponding to the learned values of Lθ U,O (given by the architecture in Figure 2b) for an utterance U on colors O for regions of color space. l0 and l1 on the color reference task. Table 3 shows representations of these meaning functions for several utterances. For each utterance U, we plot the extension Lθ U estimated under l0 and l1, with the darkness of a pixel at c representing Lθ U,c—the degree to which an utterance U applies to a color c within a Hue×Saturation color space. In these plots, the larger areas of medium gray shades for l1 extensions suggest that the pragmatic training yields more permissive interpretations for a given utterance. This makes sense, as pragmatics allows looser meanings to be effectively tightened at interpretation time. Furthermore, the meanings learned by the l1 also have lower curvature across the color space, consistent with a view of pragmatics as providing a regularizer (Section 3.3)— preventing overfitting. This view is further supported by the plots on the right-hand side of Table 3, which show that the meanings learned by l0 from smaller amounts of training data tend to overfit to idiosyncratic regions of the color space, whereas the pragmatic l1 training tends to smooth out these irregularities. These qualitative observations are also consistent with the data complexity results shown in Figure 4, where the l1 training gives an especially large improvement over l0 for small data sizes. 5 Conclusion Our experiments provide evidence that using pragmatic reasoning during training can yield improved neural semantics models. This was true in the existing color reference corpus, where we achieved state-of-the art results, and even more so in the new color-grid corpus. We thus found that pragmatic training is more effective when data is relatively sparse and the domain yields complex, high-cost utterances and low-cost omissions over which pragmatic inferences might proceed. 
Future work should provide further exploration of the data regime in which pragmatic learning is most beneficial and its correspondence to realworld language use. This might include scaling with linguistic complexity and properties of referents. In particular, the argument in our introduction suggests that especially frequent objects and low-cost utterances are the seed from which pragmatic inference can proceed over more complex language and infrequent objects. This asymmetry in object reference rates is expected for longtail, real-world regimes consistent with Zipf’s law (Zipf, 1949). Overall, we have shown that pragmatic reasoning regarding alternative utterances provides a useful inductive bias for learning in grounded language understanding systems—leveraging inferences over what speakers choose not to say to reduce the data complexity of learning. Acknowledgments We thank Leon Bergen for guidance in setting the up the initial versions of the pragmatic learning models, Katherine Hermann for help with some initial experiments on the color reference task, and Sahil Chopra for collecting a small batch of pilot data for the color grid task. 628 References Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods on Natural Language Processing (EMNLP), pages 1173–1182. Michael C. Frank and Noah D. Goodman. 2012. Predicting pragmatic reasoning in language games. Science, 336(6084):998. Michael C. Frank and Noah D. Goodman. 2014. Inferring word meanings by assuming that speakers are informative. Cognitive Psychology, 75:80–96. Dave Golland, Percy Liang, and Dan Klein. 2010. A game-theoretic approach to generating spatial descriptions. In Proceedings of the 2010 Conference on Empirical Methods on Natural Language Processing (EMNLP), pages 410–419. Noah D. Goodman and Michael C. Frank. 2016. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11):818– 829. H Paul Grice. 1975. Logic and conversation. 1975, pages 41–58. Robert X. D. Hawkins. 2015. Conducting real-time multiplayer experiments on the web. Behavior Research Methods, 47(4):966–976. Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. ICLR. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Timothy Meo, Brian McMahan, and Matthew Stone. 2014. Generating and resolving vague color references. In Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue (SemDial), pages 107–115. Will Monroe, Noah D Goodman, and Christopher Potts. 2016. Learning to generate compositional color descriptions. In Proc. EMNLP. Will Monroe, Robert XD Hawkins, Noah D Goodman, and Christopher Potts. 2017. Colors in context: A pragmatic neural model for grounded language understanding. Transactions of the Association for Computational Linguistics. Will Monroe and Christopher Potts. 2015. Learning in the Rational Speech Acts model. In Proceedings of the 20th Amsterdam Colloquium, pages 1–12. Adam Paszke, Sam Gross, and Soumith Chintala. 2017. Pytorch. Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. 
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1070–1079. Dengsheng Zhang and Guojun Lu. 2002. Shapebased image retrieval using generic fourier descriptor. Signal Processing: Image Communication, 17(10):825–848. George Kingsley Zipf. 1949. Human behavior and the principle of least effort. Oxford, England: AddisonWesley Press.
2019
59
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5880–5894 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5880 Variational Pretraining for Semi-supervised Text Classification Suchin Gururangan1 Tam Dang2 Dallas Card3 Noah A. Smith1,2 1Allen Institute for Artificial Intelligence, Seattle, WA, USA 2Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA 3Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA [email protected] {dangt7,nasmith}@cs.washington.edu [email protected] Abstract We introduce VAMPIRE,1 a lightweight pretraining framework for effective text classification when data and computing resources are limited. We pretrain a unigram document model as a variational autoencoder on in-domain, unlabeled data and use its internal states as features in a downstream classifier. Empirically, we show the relative strength of VAMPIRE against computationally expensive contextual embeddings and other popular semi-supervised baselines under low resource settings. We also find that fine-tuning to indomain data is crucial to achieving decent performance from contextual embeddings when working with limited supervision. We accompany this paper with code to pretrain and use VAMPIRE embeddings in downstream tasks. 1 Introduction An effective approach to semi-supervised learning has long been a goal for the NLP community, as unlabeled data tends to be plentiful compared to labeled data. Early work emphasized using unlabeled data drawn from the same distribution as the labeled data (Nigam et al., 2000), but larger and more reliable gains have been obtained by using contextual embeddings trained with a language modeling (LM) objective on massive amounts of text from domains such as Wikipedia or news (Peters et al., 2018a; Devlin et al., 2019; Radford et al., 2018; Howard and Ruder, 2018). The latter approaches play to the strengths of high-resource settings (e.g., access to web-scale corpora and powerful machines), but their computational and data requirements can make them less useful in resource-limited environments. In this paper, we instead focus on the low-resource setting (§2.1), 1VAriational Methods for Pretraining In Resource-limited Environments and develop a lightweight approach to pretraining for semi-supervised text classification. Our model, which we call VAMPIRE, combines a variational autoencoder (VAE) approach to document modeling (Kingma and Welling, 2013; Miao et al., 2016; Srivastava and Sutton, 2017) with insights from LM pretraining (Peters et al., 2018a). By operating on a bag-of-words representation, we avoid the time complexity and difficulty of training a sequence-to-sequence VAE (Bowman et al., 2016; Xu et al., 2017; Yang et al., 2017) while retaining the freedom to use a multi-layer encoder that can learn useful representations for downstream tasks. Because VAMPIRE ignores sequential information, it leads to models that are much cheaper to train, and offers strong performance when the amount of labeled data is small. Finally, because VAMPIRE is a descendant of topic models, we are able to explore model selection by topic coherence, rather than validation-set perplexity, which results in better downstream classification performance (§6.1). In order to evaluate the effectiveness of our method, we experiment with four text classification datasets. 
We compare our approach to a traditional semi-supervised baseline (self-training), alternative representation learning techniques that have access to the in-domain data, and the fullscale alternative of using large language models trained on out-of-domain data, optionally finetuned to the task domain. Our results demonstrate that effective semisupervised learning is achievable for limitedresource settings, without the need for computationally demanding sequence-based models. While we observe that fine-tuning a pretrained BERT model to the domain provides the best results, this depends on the existence of such a model in the relevant language, as well as GPUs to fine-tune it. When this is not an option, our 5881 model offers equivalent or superior performance to the alternatives with minimal computational requirements, especially when working with limited amounts of labeled data. The major contributions of this paper are: • We adapt variational document models to modern pretraining methods for semisupervised text classification (§3), and highlight the importance of appropriate criteria for model selection (§3.2). • We demonstrate experimentally that our method is an efficient and effective approach to semi-supervised text classification when data and computation are limited (§5). • We confirm that fine-tuning is essential when using contextual embeddings for document classification, and provide a summary of practical advice for researchers wishing to use unlabeled data in semi-supervised text classification (§8). • We release code to pretrain variational models on unlabeled data and use learned representations in downstream tasks.2 2 Background 2.1 Resource-limited Environments In this paper, we are interested in the low-resource setting, which entails limited access to computation, labels, and out-of-domain data. Labeled data can be obtained cheaply for some tasks, but for others, labels may require expensive and timeconsuming human annotations, possibly from domain experts, which will limit their availability. While there is a huge amount of unlabeled text available for some languages, such as English, this scale of data is not available for all languages. Indomain data availability, of course, varies by domain. For many researchers, especially outside of STEM fields, computation may also be a scarce resource, such that training contextual embeddings from scratch, or even incorporating them into a model could be prohibitively expensive. Moreover, even when such pretrained models are available, they inevitably come with potentially undesirable biases baked in, based on the data on which they were trained (Recasens et al., 2013; Bolukbasi et al., 2016; Zhao et al., 2019). 2http://github.com/allenai/vampire Particularly for social science applications, it may be preferable to exclude such confounders by only working with in-domain or curated data. Given these constraints and limitations, we seek an approach to semi-supervised learning that can leverage in-domain unlabeled data, achieve high accuracy with only a handful of labeled instances, and can run efficiently on a CPU. 2.2 Semi-supervised Learning Many approaches to semi-supervised learning have been developed for NLP, including variants of bootstrapping (Charniak, 1997; Blum and Mitchell, 1998; Zhou and Li, 2005; McClosky et al., 2006), and representation learning using generative models or word vectors (Mikolov et al., 2013; Pennington et al., 2014). 
Contextualized embeddings have recently emerged as a powerful way to use out-of-domain data (Peters et al., 2018a; Radford, 2018), but training these large models requires a massive amount of appropriate data (typically on the order of hundreds of millions of words), and industry-scale computational resources (hundreds of hours on multiple GPUs).3 There have also been attempts to leverage VAEs for semi-supervised learning in NLP, mostly in the form of sequence-to-sequence models (Xu et al., 2017; Yang et al., 2017), which use sequencebased encoders and decoders (see §3). These papers report strong performance, but there are many open questions which necessitate further investigation. First, given the reported difficulty of training sequence-to-sequence VAEs (Bowman et al., 2016), it is questionable whether such an approach is useful in practice. Moreover, it is unclear if such complex models (which are expensive to train) are actually required for good performance on tasks such as text classification. Here, we instead base our framework on neural document models (Miao et al., 2016; Srivastava and Sutton, 2017; Card et al., 2018), which offer both faster training and an explicit interpretation in the form of topics, and explore their utility in the semi-supervised setting. 3 Model In this work, we assume that we have L documents, DL = {(xi, yi)}L i=1, with observed cat3For example, ULMFIT was trained on 100 million words, and BERT used 3.3 billion. While many pretrained models have been made available, they are unlikely to cover every application, especially for rare languages. 5882 egorical labels y ∈Y. We also assume access to a larger set of U documents drawn from the same distribution, but for which the labels are unobserved, i.e, DU = {xi}U+L i=L+1. Our primary goal is to learn a probabilistic classifier, p(y | x). Our approach heavily borrows from past work on VAEs (Kingma and Welling, 2013; Miao et al., 2016; Srivastava and Sutton, 2017), which we adapt to semi-supervised text classification (see Figure 1). We do so by pretraining the document model on unlabeled data (§3.1), and then using learned representations in a downstream classifier (§3.3). The downstream classifier makes use of multiple internal states of the pretrained document model, as in Peters et al. (2018b). We also explore how to best do model selection in a way that benefits the downstream task (§3.2). 3.1 Unsupervised Pretraining In order to learn useful representations, we initially ignore labels, and assume each document is generated from a latent variable, z. The functions learned in estimating this model then provide representations which are used as features in supervised learning. Using a variational autoencoder for approximate Bayesian inference, we simultaneously learn an encoder, which maps from the observed text to an approximate posterior q(z | x), and a decoder, which reconstructs the text from the latent representation. In practice, we instantiate both the encoder and decoder as neural networks and assume that the encoder maps to a normally distributed posterior, i.e., for document i, q(zi | xi) = N (zi | fµ(xi), diag(fσ(xi))) (1) xi ∼p(xi | fd(zi)). (2) Using standard principles of variational inference, we derive a variational bound on the marginal log-likelihood of the observed data, log p(xi) ≥B(xi) = Eq(zi|xi)[log p(xi | zi)] −KL[q(zi | xi) ∥p(z)]. 
(3) Intuitively, the first term in the bound can be thought of as a reconstruction loss, ensuring that generated words are similar to the original document. The second term, the KL divergence, encourages the variational approximation to be close to the assumed prior, p(z), which we take to be a spherical normal distribution. Word Vectors Encoder MLP labeled text Pretrained VAE VAMPIRE embedding unlabeled text Word Frequencies σ μ VAE Word Frequencies MLP MLP Label Figure 1: VAMPIRE involves pretraining a deep variational autoencoder (VAE; displayed on left) on unlabeled text. The VAE, which consists entirely of feedforward networks, learns to reconstruct a word frequency representation of the unlabeled text with a logistic normal prior, parameterized by µ and σ. Downstream, the pretrained VAE’s internal states are frozen and concatenated to task-specific word vectors to improve classification in the low-resource setting. Using the reparameterization trick (Kingma and Welling, 2013; Rezende et al., 2014), we replace the expectation with a single-sample approximation,4 i.e., B(xi) ≈log p(xi | z(s) i ) −KL[q(zi | xi) ∥p(z)] (4) z(s) i = fµ(xi) + fσ(xi) · ε(s), (5) where ε(s) ∼N(0, I) is sampled from an independent normal. All parameters can then be optimized simultaneously by performing stochastic gradient ascent on the variational bound. A powerful way of encoding and decoding text is to use sequence models. That is, fµ(x) and fσ(x) would map from a sequence of tokens to a pair of vectors, µ and σ, and fd(z) would similarly decode from z to a sequence of tokens, using recurrent, convolutional, or attention-based networks. Some authors have adopted this approach (Bowman et al., 2016; Xu et al., 2017; Yang et al., 2017), but as discussed above (§2.2), it has a number of disadvantages. In this paper, we adopt a more lightweight and directly interpretable approach, and work with word frequencies instead of word sequences. Using the same basic structure as Miao et al. (2016) 4We leave experimentation with multi-sample approximation (e.g., importance sampling) to future work. 5883 but employing a softmax in the decoder, we encode fµ(x) and fσ(x) with multi-layer feed forward neural networks operating on an input vector of word counts, ci: ci = counts(xi) (6) hi = MLP(ci) (7) µi = fµ(xi) = Wµhi + bµ (8) σi = fσ(xi) = exp(Wσhi + bσ) (9) z(s) i = µi + σi · ε(s). (10) For a decoder, we use the following form, which reconstructs the input in terms of topics (coherent distributions over the vocabulary): θi = softmax(z(s) i ) (11) ηi = softmax(b + Bθi) (12) log p(xi | z(s) i ) = V X j=1 cij · log ηij, (13) where j ranges over the vocabulary. By placing a softmax on z, we can interpret θ as a distribution over latent topics, as in a topic model (Blei et al., 2003), and B as representing positive and negative topical deviations from a background b. This form (essentially a unigram LM) allows for much more efficient inference on z, compared to sequence-based encoders and decoders. 3.2 Model Selection via Topic Coherence Because our pretraining ignores document labels, it is not obvious that optimizing it to convergence will produce the best representations for downstream classification. When pretraining using a LM objective, models are typically trained until model fit stops improving (i.e., perplexity on validation data). In our case, however, θi has a natural interpretation as the distribution (for document i) over the latent “topics” learned by the model (B). 
As such, an alternative is to use the quality of the topics as a criterion for early stopping. It has repeatedly been observed that different types of topic models offer a trade-off between perplexity and topic quality (Chang et al., 2009; Srivastava and Sutton, 2017). Several methods for automatically evaluating topic coherence have been proposed (Newman et al., 2010; Mimno et al., 2011), such as normalized pointwise mutual information (NPMI), which Lau et al. (2014) found to be among the most strongly correlated with human judgement. As such, we consider using either log likelihood or NPMI as a stopping criteria for VAMPIRE pretraining (§6.1), and evaluate them in terms of which leads to the better downstream classifier. NPMI measures the probability that two words collocate in an external corpus (in our case, the validation data). For each topic t in B, we collect the top ten most probable words and compute NPMI between all pairs: NPMI(t) = X i,j≤10; j̸=i log P(ti,tj) P(ti)P(tj) −log P(ti, tj) (14) We then arrive at a global NPMI for B by averaging the NPMIs across all topics. We evaluate NPMI at the end of each epoch during pretraining, and stop training when NPMI has stopped increasing for a pre-defined number of epochs. 3.3 Using a Pretrained VAE for Text Classification Kingma et al. (2014) proposed using the latent variable of an unsupervised VAE as features in a downstream model for classifying images. However, work on pretraining for NLP, such as Peters et al. (2018a), found that LMs encode different information in different layers, each of which may be more or less useful for certain tasks. Here, for an n-layer MLP encoder on word counts ci, we build on that idea, and use as representations a weighted sum over θi and the internal states of the MLP, h(k) i , with weights to be learned by the downstream classifier.5 That is, for any sequence-to-vector encoder, fs2v(x), we propose to augment the vector representations for each document by concatenating them with a weighted combination of the internal states of our variational encoder (Peters et al., 2018a). We can then train a supervised classifier on the weighted combination, ri = λ0θi + n X k=1 λkh(k) i (15) p(yi | xi) = fc([ri; fs2v(xi)]), (16) where fc is a neural classifier and {λ0, . . . , λn} are softmax-normalized trainable parameters. 5We also experimented with the joint training and combined approaches discussed in Kingma et al. (2014), but found that neither of these reliably improved performance over our pretraining approach. 5884 3.4 Optimization In all cases, we optimize models using Adam (Kingma and Ba, 2014). In order to prevent divergence during pretraining, we make use of a batchnorm layer on the reconstruction of x (Ioffe and Szegedy, 2015). We also use KL-annealing (Bowman et al., 2016), placing a scalar weight on the KL divergence term in Eq. (3), which we gradually increase from zero to one. Because our model consists entirely of feedforward neural networks, it is easily parallelized, and can run efficiently on either CPUs or GPUs. 4 Experimental Setup We evaluate the performance of our approach on four text classification tasks, as we vary the amount of labeled data, from 200 to 10,000 instances. In all cases, we assume the existence of about 75,000 to 125,000 unlabeled in-domain examples, which come from the union of the unused training data and any additional unlabeled data provided by the corpus. 
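The document model of Equations 6–13 and its variational bound can be rendered compactly. The snippet below is a minimal PyTorch sketch with illustrative layer sizes (the paper selects these by random search), a single-hidden-layer encoder, and a kl_weight argument anticipating the KL-annealing described in Section 3.4; placing the batchnorm on the reconstruction logits is one common implementation choice, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfWordsVAE(nn.Module):
    """Minimal sketch of the VAMPIRE pretraining model (Eqs. 6-13)."""

    def __init__(self, vocab_size=30000, hidden_dim=512, num_topics=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, num_topics)
        self.log_sigma = nn.Linear(hidden_dim, num_topics)
        self.beta = nn.Linear(num_topics, vocab_size)     # b + B * theta (Eq. 12)
        self.bn = nn.BatchNorm1d(vocab_size)              # batchnorm on the reconstruction

    def forward(self, counts, kl_weight=1.0):
        h = self.encoder(counts)                                  # Eq. 7
        mu, log_sigma = self.mu(h), self.log_sigma(h)             # Eqs. 8-9
        sigma = log_sigma.exp()
        z = mu + sigma * torch.randn_like(sigma)                  # Eq. 10, reparameterization
        theta = F.softmax(z, dim=-1)                              # Eq. 11, topic proportions
        log_eta = F.log_softmax(self.bn(self.beta(theta)), dim=-1)  # Eq. 12
        recon = (counts * log_eta).sum(-1)                        # Eq. 13
        # KL divergence between N(mu, sigma^2) and the standard normal prior.
        kl = 0.5 * (sigma ** 2 + mu ** 2 - 1.0 - 2.0 * log_sigma).sum(-1)
        loss = -(recon - kl_weight * kl).mean()                   # negative ELBO (Eq. 3)
        return loss, theta, h
```

Downstream, the returned theta and the encoder's hidden states are frozen and combined through the softmax-normalized scalars of Eq. 15 before being concatenated to the classifier's own document vector.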
Because we are working with a small amount of labeled data, we run each experiment with five random seeds, each with a different sample of labeled training instances, and report the mean performance on test data. 4.1 Datasets and Preprocessing We experiment with text classification datasets that span a variety of label types. The datasets we use are the familiar AG News (Zhang et al., 2015), IMDB (Maas et al., 2011), and YAHOO! Answers datasets (Chang et al., 2008), as well as a dataset of tweets labeled in terms of four HATESPEECH categories (Founta et al., 2018). Summary statistics are presented in Table 1. In all cases, we either use the official test set, or take a random stratified sample of 25,000 documents as a test set. We also sample 5,000 instances as a validation set. We tokenize documents with spaCy, and use up to 400 tokens for sequence encoding (fs2v(x)). For VAMPIRE pretraining, we restrict the vocabulary to the 30,000 most common words in the dataset, after excluding tokens shorter than three characters, those with digits or punctuation, and stopwords.6 We leave the vocabulary for downstream classification unrestricted. 6http://snowball.tartarus.org/ algorithms/english/stop.txt Dataset Label Type Classes Documents AG topic 4 127600 HATESPEECH hatespeech 4 99996 IMDB sentiment 2 100000 YAHOO! topic 15 150015 Table 1: Datasets used in our experiments. 4.2 VAMPIRE Architecture In order to find reasonable hyperparameters for VAMPIRE, we utilize a random search strategy for pretraining. For each dataset, we take the model with the best NPMI for use in the downstream classifiers. We detail sampling bounds and final assignments for each hyperparameter in Table 5 in Appendix A.1. 4.3 Downstream Classifiers For all experiments we make use of the Deep Averaging Network (DAN) architecture (Iyyer et al., 2015) as our baseline sequence-to-vector encoder, fs2v(x). That is, embeddings corresponding to each token are summed and passed through a multi-layer perceptron. p(yi | xi) = MLP  1 |xi| P|xi| j=1 E(xi)j  , (17) where E(x) converts a sequence of tokens to a sequence of vectors, using randomly initialized vectors, off-the-shelf GLOVE embeddings (Pennington et al., 2014), or contextual embeddings. To incorporate the document representations learned by VAMPIRE in a downstream classifier, we concatenate them with the average of randomly initialized trainable embeddings, i.e., p(yi | xi) = MLP  ri; 1 |xi| P|xi| j=1 E(xi)j  . (18) Preliminary experiments found that DANs with one-layer MLPs and moderate dropout provide more reliable performance on validation data than more expressive models, such as CNNs or LSTMs, with less hyperparameter tuning, especially when working with few labeled instances (details in Appendix A.2). 4.4 Resources and Baselines In these experiments, we consider baselines for both low-resource and high-resource settings, where the high-resource baselines have access to 5885 70 80 AG News VAMPIRE Self-training GloVe (OD) 70 75 Hatespeech 200 500 2500 10000 70 75 80 85 IMDB 200 500 2500 10000 55 60 65 70 75 Yahoo! Number of labeled instances Accuracy Figure 2: Learning curves for all datasets in the lowresource setting, showing the mean (line) and one standard deviation (bands) over five runs for VAMPIRE, self-training, and 840B-token GLOVE embeddings. Full results are in Table 2. 
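Before turning to the datasets, note that the KL-annealing weight from Section 3.4 can be supplied as a simple schedule (the kl_weight argument in the earlier sketch); the linear form and warmup length below are illustrative assumptions rather than the paper's exact schedule.

```python
def kl_weight(step, warmup_steps=10000):
    """Linearly anneal the KL weight from 0 to 1 over warmup_steps updates
    (a sketch; the warmup length is an assumed value)."""
    return min(1.0, step / warmup_steps)
```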
greater computational resources and a either massive amount of unlabeled data or a pretrained model, such as ELMO or BERT.7 Low resource In the low-resource setting we assume that computational resources are at a premium, so we are limited to lightweight approaches such as VAMPIRE, which can run efficiently on a CPU. As baselines, we consider a) a purely supervised model, with randomly initialized 50dimensional embeddings and no access to unlabeled data; b) the same model initialized with 300dimensional GLOVE vectors, pretrained on 840 billion words;8 c) 300-dimensional GLOVE vectors trained on only in-domain data; and d) selftraining, which has access to the in-domain unlabeled data. For self-training, we iterate over training a model, predicting labels on all unlabeled instances, and adding to the training set all unlabeled instances whose label is predicted with high confidence, repeating this up to five times and using the model with highest validation accuracy. On each iteration, the threshold for a given label is equal to the 90th percentile of predicted probabilities for validation instances with the corresponding label. 7As discussed above, we consider these models to be representative of the high-resource setting, both because they were computationally intensive to train, and because they were made possible by the huge amount of English text that is available online. 8http://nlp.stanford.edu/projects/ glove/ High resource In the high-resource setting, we assume access to plentiful computational resources and massive amounts of out-of-domain data, which may be indirectly accessed through pretrained models. Specifically, we evaluate the performance of a Transformer-based ELMO (Peters et al., 2018b) and BERT, both (a) off-theshelf with frozen embeddings and (b) after semisupervised fine-tuning to both unlabeled and labeled in-domain data. To perform semi-supervised fine-tuning, we first use ELMO and BERT’s original objectives to fine-tune to the unlabeled data. To fine-tune ELMO to the labeled data, we average over the LM states and add a softmax classification layer. We obtain the best results applying slanted triangular learning rates and gradual unfreezing (Howard and Ruder, 2018) to this fine-tuning step. To fine-tune BERT to labeled data, we feed the hidden state corresponding to the [CLS] token of each instance to a softmax classification layer. We use AllenNLP9 to fine-tune ELMO, and Pytorch-pretrained-BERT10 to finetune BERT. We also experiment with ELMO trained only on in-domain data as an example of high-resource LM pretraining methods, such as Dai and Le (2015), when there is no out-of-domain data available. Specifically, we generate contextual word representations with a Transformer-based ELMO. During downstream classification, the resulting vectors are frozen and concatenated to randomly initialized word vectors prior to the summation in Eq. (17). 5 Results In the low-resource setting, we find that VAMPIRE achieves the highest accuracy of all lowresource methods we consider, especially when the amount of labeled data is small. Table 2 shows the performance of all low-resource models on all datasets as we vary the amount of labeled data, and a subset of these are also shown in Figure 2 for easy comparison. In the high-resource setting, we find, not surprisingly, that fine-tuning the pretrained BERT model to in-domain data provides the best performance. 
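The self-training baseline just described can be sketched as follows. Here train_model stands in for any classifier-fitting routine exposing predict_proba, the feature-matrix interface is assumed for illustration, and whether pseudo-labeled instances accumulate across iterations is left unspecified in the paper, so this sketch re-selects them from the original unlabeled pool each round.

```python
import numpy as np

def self_train(train_model, X_lab, y_lab, X_unlab, X_val, y_val, n_iters=5):
    """Sketch of the self-training baseline with per-label 90th-percentile
    confidence thresholds, keeping the model with best validation accuracy."""
    best_model, best_acc = None, -1.0
    X, y = X_lab, y_lab
    for _ in range(n_iters):
        model = train_model(X, y)
        val_probs = model.predict_proba(X_val)
        acc = (val_probs.argmax(1) == y_val).mean()
        if acc > best_acc:
            best_model, best_acc = model, acc
        # Threshold for label c: 90th percentile of predicted probability
        # among validation instances whose gold label is c.
        thresholds = np.array([
            np.percentile(val_probs[y_val == c, c], 90)
            for c in range(val_probs.shape[1])
        ])
        unlab_probs = model.predict_proba(X_unlab)
        preds = unlab_probs.argmax(1)
        confident = unlab_probs.max(axis=1) >= thresholds[preds]
        # Add confidently pseudo-labeled instances to the labeled set.
        X = np.concatenate([X_lab, X_unlab[confident]])
        y = np.concatenate([y_lab, preds[confident]])
    return best_model
```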
For both BERT and ELMO, we find that using frozen off-the-shelf vectors results 9https://allennlp.org/elmo 10https://github.com/huggingface/ pytorch-pretrained-BERT 5886 Dataset Model 200 500 2500 10000 IMDB Baseline 68.5 (7.8) 79.0 (0.4) 84.4 (0.1) 87.1 (0.3) Self-training 73.8 (3.3) 80.0 (0.7) 84.6 (0.2) 87.0 (0.4) GLOVE (ID) 74.5 (0.8) 79.5 (0.4) 84.7 (0.2) 87.1 (0.4) GLOVE (OD) 74.1 (1.2) 80.0 (0.2) 84.6 (0.3) 87.0 (0.6) VAMPIRE 82.2 (2.0) 84.5 (0.4) 85.4 (0.4) 87.1 (0.4) AG Baseline 68.8 (2.0) 77.3 (1.0) 84.4 (0.1) 87.5 (0.2) Self-training 77.3 (1.7) 81.3 (0.8) 84.8 (0.2) 87.7 (0.1) GLOVE (ID) 70.4 (1.2) 78.0 (1.0) 84.1 (0.3) 87.1 (0.2) GLOVE (OD) 68.8 (5.7) 78.8 (1.1) 85.3 (0.3) 88.0 (0.3) VAMPIRE 83.9 (0.6) 84.5 (0.4) 85.8 (0.2) 87.7 (0.1) YAHOO! Baseline 54.5 (2.8) 63.0 (0.5) 69.5 (0.3) 73.6 (0.2) Self-training 57.5 (2.0) 63.2 (0.6) 69.8 (0.3) 73.6 (0.2) GLOVE (ID) 55.2 (2.3) 63.5 (0.3) 69.7 (0.3) 73.5 (0.3) GLOVE (OD) 55.4 (2.4) 63.9 (0.3) 70.1 (0.5) 73.8 (0.4) VAMPIRE 59.9 (0.9) 65.1 (0.3) 69.8 (0.3) 73.6 (0.2) HATESPEECH Baseline 67.7 (1.8) 71.3 (0.2) 75.6 (0.4) 77.8 (0.2) Self-training 68.5 (0.6) 71.3 (0.2) 75.5 (0.3) 78.1 (0.2) GLOVE (ID) 69.7 (1.2) 71.9 (0.5) 76.0 (0.3) 78.3 (0.2) GLOVE (OD) 69.7 (0.7) 72.2 (0.8) 76.1 (0.8) 77.6 (0.5) VAMPIRE 74.1 (0.8) 74.4 (0.5) 76.2 (0.6) 78.0 (0.3) Table 2: Test accuracies in the low-resource setting on four text classification datasets under varying levels of labeled training data (200, 500, 2500, and 10000 documents). Each score is reported as an average over five seeds, with standard deviation in parentheses, and the highest mean result in each setting shown in bold. 85 90 AG News 70 75 80 Hatespeech 200 500 2500 10000 80 90 IMDB BERT (FT) ELMO (FT) VAMPIRE ELMO (ID) ELMO (FR) 200 500 2500 10000 40 60 80 Yahoo! Number of labeled instances Accuracy Figure 3: High-resource methods (plus VAMPIRE) on four datasets; ELMO performance benefits greatly from training on (ID), or fine-tuning (FT) to, the indomain data (as does BERT; full results in Appendix B). Key: FT (fine-tuned), FR (frozen), ID (in-domain). in surprisingly poor performance, compared to fine-tuning to the task domain, especially for HATESPEECH and IMDB.11 For these two datasets, an ELMO model trained only on indomain data offers far superior performance to frozen off-the-shelf ELMO (see Figure 3). This difference is smaller, however, for YAHOO! and 11See also Howard and Ruder (2018). AG. (Please see Appendix B for full results). These results taken together demonstrate that although pretraining on massive amounts of web text offers large improvements over purely supervised models, access to unlabeled in-domain data is critical, either for fine-tuning a pretrained language model in the high-resource setting, or for training VAMPIRE in the low-resource setting. Similar findings have been reported by Yogatama et al. (2019) for tasks such as natural language inference and question answering. 6 Analysis 6.1 NPMI versus NLL as Stopping Criteria To analyze the effectiveness of different stopping criterion in VAMPIRE, we pretrain 200 VAMPIRE models on IMDB: 100 selected via NPMI, and 100 selected via negative log likelihood (NLL) on validation data. Interestingly, we observe that VAMPIRE NPMI and NLL values are negatively correlated (ρ = –0.72; Figure 4A), suggesting that upon convergence, trained models that better fit the data also tend to have more coherent topics. 
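For reference, the NPMI topic coherence used here for model selection can be computed from co-occurrence statistics in a reference corpus. The sketch below uses document-level co-occurrence and a fixed floor for unseen word pairs, which are simplifying assumptions on our part; the paper follows Lau et al. (2014), whose windowing and reference-corpus choices may differ.

```python
import math
from itertools import combinations

def npmi_coherence(topics, documents, eps=1e-12):
    """Average NPMI over the top-word pairs of each topic.

    topics: list of lists of top words (e.g., 10 words per topic).
    documents: list of tokenized reference documents.
    Co-occurrence is counted at the document level (a simplification)."""
    n_docs = len(documents)
    doc_sets = [set(doc) for doc in documents]
    vocab = {w for topic in topics for w in topic}
    df = {w: sum(w in d for d in doc_sets) for w in vocab}  # document frequencies
    scores = []
    for topic in topics:
        pair_scores = []
        for w1, w2 in combinations(topic, 2):
            df12 = sum((w1 in d) and (w2 in d) for d in doc_sets)
            p1, p2, p12 = df[w1] / n_docs, df[w2] / n_docs, df12 / n_docs
            if p12 == 0 or p1 == 0 or p2 == 0:
                pair_scores.append(-1.0)  # no co-occurrence: minimum NPMI
                continue
            pmi = math.log(p12 / (p1 * p2) + eps)
            pair_scores.append(pmi / (-math.log(p12 + eps)))  # normalize PMI
        scores.append(sum(pair_scores) / len(pair_scores))
    return sum(scores) / len(scores)

# Toy usage with two tiny "topics" and a handful of documents.
docs = [["film", "great", "plot"], ["film", "plot", "boring"],
        ["pasta", "recipe", "sauce"], ["recipe", "sauce", "delicious"]]
topics = [["film", "plot", "great"], ["recipe", "sauce", "pasta"]]
print(round(npmi_coherence(topics, docs), 3))
```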
We then train 200 downstream classifiers with the same hyperparameters, on a fixed 200 document random subset of the IMDB dataset, uniformly sampling over the NPMI- and NLL-selected VAMPIRE models as additional features. In Figure 4B and Fig5887 0.06 0.14 NPMI 840 860 880 NLL A 0.06 0.14 NPMI 0.7 0.8 Accuracy B 840 860 880 NLL 0.6 0.8 Accuracy C NPMI NLL Criterion 0.6 0.8 Accuracy D Figure 4: Comparing NPMI and NLL as early stopping criteria for VAMPIRE model selection. NPMI and NLL are correlated measures of model fit, but NPMI-selected VAMPIRE models have lower variance on downtream classification performance with 200 labeled documents of IMDB. Accuracy is reported on the validation data. See §6.1 for more details. ure 4C, we observe that better pretrained VAMPIRE models (according to either criterion) tend to produce better downstream performance. (ρ = 0.55 and ρ = –0.53, for NPMI and NLL respectively). However, we also observe higher variance in accuracy among the VAMPIRE models obtained using NLL as a stopping criterion (Figure 4D). Such models selected via NLL have poor topic coherence and downstream performance. As such, doing model selection using NPMI is the preferred alternative, and all VAMPIRE results in Table 2 are based on pretrained models selected using this criterion. The experiments in Ding et al. (2018) provide some insight into this behaviour. They find that when training neural topic models, model fit and NPMI initially tend to improve on each epoch. At some point, however, perplexity continues to improve, while NPMI starts to drop, sometimes dramatically. We also observe this phenomenon when training VAMPIRE (see Appendix C). Using NPMI as a stopping criterion, as we propose to do, helps to avoid degenerate models that result from training too long. In some preliminary experiments, we also observe cases where NPMI is artificially high because of redundancy in topics. Applying batchnorm to the reconstruction markedly improves diversity of collocating words across topics, which has also been noted by Srivastava and Sutton IMDB YAHOO! Horror Classics Food Obstetrics giallo dunne cuisine obstetrics horror cary peruvian vitro gore abbott bengali endometriosis lugosi musicals cajun fertility zombie astaire potato contraceptive dracula broadway carne pregnancy bela irene idli birth cannibal costello pancake ovarian vampire sinatra tofu menstrual lucio stooges gumbo prenatal Table 3: Example topics learned by VAMPIRE in IMDB and YAHOO! datasets. See Appendix D for more examples. (2017). Future work may explore assigning a word diversity regularizer to the NPMI metric, so as to encourage models that have both stronger coherence and word diversity across topics. 6.2 Learned Latent Topics In addition to being lightweight, one advantage of VAMPIRE is that it produces document representations that can be explicitly interpreted in terms of topics. Although the input we feed into the downstream classifier combines this representation with internal states of the encoder, the topical interpretation helps to summarize what the pretraining has learned. Examples of topics learned by VAMPIRE are provided in Table 3 and Appendix D. 6.3 Learned Scalar Layer Weights Since the scalar weight parameters in ri are trainable, we are able to investigate which layers of the pretrained VAE the classifier tends to prefer. We consistently find that the model tends to upweight the first layer of the VAE encoder, h(1), and θ, and downweight the other layers of the encoder. 
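The layer weighting discussed here can be implemented as an ELMO-style scalar mix over the frozen states of the pretrained VAE. The module below is an illustrative PyTorch sketch under that assumption; the exact parameterization in VAMPIRE (e.g., whether the weights are softmax-normalized or a global scale is used) may differ.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Combine a list of frozen layer outputs (e.g., VAE encoder layers
    h(1), h(2), ... and the latent document vector theta) with trainable,
    softmax-normalized scalar weights and a global scale."""
    def __init__(self, num_layers, init_weights=None):
        super().__init__()
        if init_weights is None:
            init_weights = [0.0] * num_layers
        self.scalars = nn.Parameter(torch.tensor(init_weights, dtype=torch.float))
        self.gamma = nn.Parameter(torch.tensor(1.0))

    def forward(self, layers):
        # layers: list of tensors, each of shape (batch, dim)
        weights = torch.softmax(self.scalars, dim=0)
        mixed = sum(w * h for w, h in zip(weights, layers))
        return self.gamma * mixed

# Toy usage: three frozen "layers" of a pretrained model for a batch of 2 docs.
h1, h2, theta = (torch.randn(2, 64) for _ in range(3))
# Up-weight the first encoder layer and theta at initialization (cf. Sec. 6.3).
mix = ScalarMix(num_layers=3, init_weights=[2.0, -2.0, 2.0])
r = mix([h1, h2, theta])   # document representation, shape: (2, 64)
```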
To improve learning, especially under low resource settings, we initialize the scalar weights applied to the first encoder layer and θ with high values and downweighted the intermediate layers, which increases validation performance. However, we also have observed that using a multi-layer encoder in VAMPIRE leads to larger gains downstream. 6.4 Computational Requirements An appealing aspect of VAMPIRE is its compactness. Table 4 shows the computational requirements involved in training VAMPIRE on a single GPU or CPU, compared to training an ELMO model from scratch on the same data on 5888 Model Parameters Time VAMPIRE (GPU) 3.8M 7 min VAMPIRE (CPU) 3.8M 22 min ELMO (GPU) 159.2M 12 hr 35 min Table 4: VAMPIRE is substantially more compact than Transformer-based ELMO but is still competitive under low-resource settings. Here, we display the computational requirements for pretraining VAMPIRE and ELMO on in-domain unlabeled text from the IMDB dataset. We report results on training VAMPIRE (with hyperparameters listed in Appendix A.1) and ELMO (with its default configuration) on a GeForce GTX 1080 Ti GPU, and VAMPIRE on a 2.60GHz Intel Xeon CPU. VAMPIRE uses about 750MB of memory on a GPU, while ELMO requires about 8.5GB. a GPU. It is possible to train VAMPIRE orders of magnitude faster than ELMO, even without expensive hardware, making it especially suitable for obtaining fast results when resources are limited. 7 Related Work In addition to references given throughout, many others have explored ways of enhancing performance when working with limited amounts of labeled data. Early work on speech recognition demonstrated the importance of pretraining and fine-tuning deep models in the semi-supervised setting (Yu et al., 2010). Chang et al. (2008) considered “dataless” classification, where the names of the categories provide the only supervision. Miyato et al. (2016) showed that adversarial pretraining can offer large gains, effectively augmenting the amount of data available. A long line of work in active learning similarly tries to maximize performance when obtaining labels is costly (Settles, 2012). Xie et al. (2019) describe novel data augmentation techniques leveraging back translation and tf-idf word replacement. All of these approaches could be productively combined with the methods proposed in this paper. 8 Recommendations Based on our findings in this paper, we offer the following practical advice to those who wish to do effective semi-supervised text classification. • When resources are unlimited, the best results can currently be obtained by using a pretrained model such as BERT, but fine-tuning to in-domain data is critically important (see also Howard and Ruder, 2018). • When computational resources and annotations are limited, but there is plentiful unlabeled data, VAMPIRE offers large gains over other low-resource approaches. • Training a language model such as ELMO on only in-domain data offers comparable or somewhat better performance to VAMPIRE, but may be prohibitively expensive, unless working with GPUs. • Alternatively, resources can be invested in getting more annotations; with sufficient labeled data (tens of thousands of instances), the advantages offered by additional unlabeled data become negligible. Of course, other NLP tasks may involve different tradeoffs between data, speed, and accuracy. 
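As a small aid for reproducing the kind of comparison in Table 4 (Sec. 6.4), the sketch below counts trainable parameters and times a single training pass for an arbitrary PyTorch model. The stand-in model and batch sizes are our own toy assumptions, not the actual VAMPIRE or ELMO configurations.

```python
import time
import torch
import torch.nn as nn

def count_parameters(model):
    """Number of trainable parameters, as reported in Table 4."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def time_training_pass(model, batches, lr=1e-3):
    """Rough wall-clock time for one pass over `batches` of (x, y) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    start = time.time()
    for x, y in batches:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return time.time() - start

# Toy usage with a stand-in bag-of-words model (not the real VAMPIRE).
model = nn.Sequential(nn.Linear(30000, 81), nn.ReLU(), nn.Linear(81, 4))
batches = [(torch.randn(64, 30000), torch.randint(0, 4, (64,))) for _ in range(10)]
print(f"{count_parameters(model):,} parameters, "
      f"{time_training_pass(model, batches):.2f}s for 10 batches")
```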
9 Conclusions The emergence of models like ELMO and BERT has revived semi-supervised NLP, demonstrating that pretraining large models on massive amounts of data can provide representations that are beneficial for a wide range of NLP tasks. In this paper, we confirm that these models are useful for text classification when the number of labeled instances is small, but demonstrate that fine-tuning to in-domain data is also of critical importance. In settings where BERT cannot easily be used, either due to computational limitations, or because an appropriate pretrained model in the relevant language does not exist, VAMPIRE offers a competitive lightweight alternative for pretraining from unlabeled data in the low-resource setting. When working with limited amounts of labeled data, we achieve superior performance to baselines such as self-training, or using word vectors pretrained on out-of-domain data, and approach the performance of ELMO trained only on in-domain data at a fraction of the computational cost. Acknowledgments We thank the members of the AllenNLP and ARK teams for useful comments and discussions. We also thank the anonymous reviewers for their insightful feedback. Computations on beaker.org were supported in part by credits from Google Cloud. 5889 References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. JMLR, 3:993– 1022. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of COLT. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL. Dallas Card, Chenhao Tan, and Noah A. Smith. 2018. Neural models for documents with metadata. In Proceedings of ACL. Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems. Ming-Wei Chang, Lev-Arie Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: Dataless classification. In Proceedings of AAAI. Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings AAAI. Andrew M. Dai and Quoc V. Le. 2015. Semisupervised sequence learning. In Advances in Neural Information Processing Systems. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL. Ran Ding, Ramesh Nallapati, and Bing Xiang. 2018. Coherence-aware neural topic modeling. In Proceedings of EMNLP. Antigoni Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of Twitter abusive behavior. In Proceedings of AAAI. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of ACL. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings ICML. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. 
Deep unordered composition rivals syntactic methods for text classification. In Proceedings of ACL. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semisupervised learning with deep generative models. In Advances in Neural Information Processing Systems. Diederik P. Kingma and Max Welling. 2013. Autoencoding variational Bayes. CoRR, abs/1312.6114. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of EACL. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings NAACL. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of ICML. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of EMNLP. Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2016. Virtual adversarial training for semi-supervised text classification. CoRR, abs/1605.07725. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Proceedings of NAACL. Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using em. Machine Learning, 39(2-3). Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP. 5890 Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018a. Deep contextualized word representations. In Proceedings of NAACL. Matthew E. Peters, Mark Neumann, Luke S. Zettlemoyer, and Wen tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of EMNLP. Jason Phang, Thibault F´evry, and Samuel R. Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. CoRR, abs/1811.01088. Alec Radford. 2018. Improving language understanding by generative pre-training. Alec Radford, Rafal J´ozefowicz, and Ilya Sutskever. 2018. Learning to generate reviews and discovering sentiment. CoRR, abs/1704.01444. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. In Proceedings of ACL. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of ICML. Burr Settles. 2012. Active Learning. Morgan & Claypool. Akash Srivastava and Charles A. Sutton. 2017. Autoencoding variational inference for topic models. In Proceedings of ICLR. Qizhe Xie, Zihang Dai, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised data augmentation. CoRR, abs/1904.12848. Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. 
In AAAI. Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In Proceedings of ICML. Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tom´as Kocisk´y, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. CoRR, abs/1901.11373. Dong Yu, Li Deng, and George E. Dahl. 2010. Roles of pre-training and fine-tuning in context-dependent DBN-HMMs for real-world speech recognition. In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of NAACL. Zhi-Hua Zhou and Ming Li. 2005. Tri-training: exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering, 17:1529–1541. 5891 A Hyperparameter Search In this section, we describe the hyperparameter search we used to choose model configurations, and include plots illustrating the range of validation performance observed in each setting. A.1 VAMPIRE Search For the results presented in the paper, we varied the hyperparameters of VAMPIRE across a number of different dimensions, outlined in Table 5. A.2 Classifier Search To choose a baseline classifier for which we experiment with all pretrained models, we performed a mix of manual tuning and random search over four basic classifiers: CNN, LSTM, Bag-ofEmbeddings (i.e., Deep Averaging Networks), and Logistic Regression. Figure 6 shows the distribution of validation accuracies using 200 and 10,000 labeled instances, respectively, for different classifiers on the IMDB and AG datasets. Under the lowresource setting, we observe that logistic regression and DAN based classifiers tend to lead to more reliable validation accuracies. With enough compute, CNN-based classifiers tend to produce marginally higher validation accuracies, but the probability is mostly centered below those of the logistic regression and DAN classifiers. LSTMbased classifiers tend to have extremely high variance under the low-resource setting. For this work, we choose to experiment with the DAN classifier, which comes with the richness of vectorbased representations, along with the reliability that comes with having very few hyperparameters to tune. B Results in the High Resource Setting Table 6 shows the results of all high-resource methods (along with VAMPIRE) on all datasets, as we vary the amount of labeled data. As can be seen, training ELMO only on in-domain data results in similar or better performance to using an off-the-shelf ELMO or BERT model, without fine-tuning it to in-domain data. Except for one case in which it fails badly (YAHOO! with 200 labeled instances), fine-tuning BERT to the target domain achieves the best performance in every setting. Though we performed a substantial hyperparameter search under this regime, we attribute the failure of fine-tuning 0 50 Step 850 900 950 NLL 0 50 Epoch 0.00 0.05 0.10 NPMI Figure 5: An example learning curve when training VAMPIRE on the IMDB dataset. If trained for too long, we observe many cases in which NPMI (higher is better) degrades while NLL (lower is better) continues to decrease. 
To avoid selecting a model that has poor topic coherence, we recommend performing model selection with NPMI rather than NLL. BERT under this setting to potential hyperparameter decisions which could be improved with further tuning. Other work has suggest that random initializations have a significant effect on the failure cases of BERT, pointing to the brittleness of fine-tuning (Phang et al., 2018). The performance gap between fine-tuned ELMO and frozen ELMO in AG News corpus is much smaller than that of the other datasets, perhaps because the ELMO model we used was pretrained on the Billion Words Corpus, which is a news crawl. This dataset is also an example where frozen ELMO tends to out-perform using VAMPIRE. We attribute the strength of frozen, pretrained ELMO under this setting as further evidence of the importance of in-domain data for effective semi-supervised text classification. C Further Details on NPMI vs. NLL as Stopping Criteria In the main paper, we note that we have observed cases in which training VAMPIRE for too long results in NPMI degradation, while NLL continues to improve. In Figure 5, we display example learning curves that point to this phenomenon. D Additional Learned Topics In Table 7 we display some additional topics learned by VAMPIRE on the YAHOO! dataset. 5892 Computing Infrastructure GeForce GTX 1080 GPU Number of search trials 60 trials per dataset Search strategy uniform sampling Model implementation http://github.com/allenai/vampire Hyperparameter Search space IMDB AG YAHOO! HATESPEECH number of epochs 50 50 50 50 50 patience 5 5 5 5 5 batch size 64 64 64 64 64 KL divergence annealing choice[sigmoid, linear, constant] linear linear linear constant KL annealing sigmoid weight 1 0.25 N/A N/A N/A N/A KL annealing sigmoid weight 2 15 N/A N/A N/A N/A KL annealing linear scaling 1000 1000 1000 1000 N/A VAMPIRE hidden dimension uniform-integer[32, 128] 80 81 118 125 Number of encoder layers choice[1, 2, 3] 2 2 3 3 Encoder activation choice[relu, tanh, softplus] tanh relu tanh softplus Mean projection layers 1 1 1 1 1 Mean projection activation linear linear linear linear linear Log variance projection layers 1 1 1 1 1 Log variance projection activation linear linear linear linear linear Number of decoder layers 1 1 1 1 1 Decoder activation linear linear linear linear linear z-dropout random-uniform[0, 0.5] 0.47 0.49 0.41 0.45 learning rate optimizer Adam Adam Adam Adam Adam learning rate loguniform-float[1e-4, 1e-2] 0.00081 0.00021 0.00024 0.0040 update background frequency choice[True, False] False False False False vocabulary size 30000 30000 30000 30000 30000 Dataset VAMPIRE NPMI IMDB 0.131 AG 0.224 YAHOO! 0.475 HATESPEECH 0.139 Table 5: VAMPIRE search space, best assignments, and associated performance on the four datasets we consider in this work. 5893 Dataset Model 200 500 2500 10000 IMDB ELMO (FR) 75.1 (1.4) 80.3 (1.1) 85.3 (0.1) 87.3 (0.3) BERT (FR) 81.5 (1.0) 83.9 (0.4) 86.8 (0.3) 88.2 (0.3) ELMO (ID) 81.7 (1.3) 84.5 (0.2) 86.3 (0.4) 88.0 (0.4) VAMPIRE 82.2 (2.0) 84.5 (0.4) 85.4 (0.4) 87.1 (0.4) ELMO (FT) 86.4 (0.6) 87.9 (0.4) 90.0 (0.4) 91.6 (0.2) BERT (FT) 88.1 (0.7) 89.4 (0.7) 91.4 (0.1) 93.1 (0.1) AG ELMO (FR) 84.5 (0.5) 85.7 (0.5) 88.3 (0.2) 89.4 (0.3) BERT (FR) 84.6 (1.1) 85.7 (0.7) 88.0 (0.4) 89.0 (0.3) ELMO (ID) 84.5 (0.6) 85.8 (0.8) 87.9 (0.2) 89.2 (0.2) VAMPIRE 83.9 (0.6) 84.5 (0.4) 85.8 (0.2) 87.7 (0.1) ELMO (FT) 85.2 (0.5) 86.6 (0.4) 88.6 (0.2) 89.5 (0.1) BERT (FT) 87.1 (0.6) 88.0 (0.4) 90.1 (0.5) 91.9 (0.1) YAHOO! 
ELMO (FR) 54.3 (1.6) 64.2 (0.6) 71.2 (1.3) 74.1 (0.3) BERT (FR) 57.0 (1.3) 64.2 (0.5) 70.0 (0.3) 73.8 (0.2) ELMO (ID) 60.9 (1.7) 66.9 (0.9) 72.8 (0.5) 75.6 (0.1) VAMPIRE 59.9 (0.9) 65.1 (0.3) 69.8 (0.3) 73.6 (0.2) ELMO (FT) 60.5 (1.9) 66.1 (0.7) 71.7 (0.7) 75.8 (0.3) BERT (FT) 45.3 (7.5) 69.2 (1.6) 76.9 (0.6) 81.0 (0.1) HATESPEECH ELMO (FR) 70.5 (1.7) 72.4 (0.9) 76.0 (0.5) 78.3 (0.2) BERT (FR) 75.1 (0.6) 76.3 (0.3) 77.8 (0.4) 79.0 (0.2) ELMO (ID) 73.3 (0.8) 74.1 (0.8) 77.2 (0.3) 78.9 (0.2) VAMPIRE 74.1 (0.8) 74.4 (0.5) 76.2 (0.6) 78.0 (0.3) ELMO (FT) 73.9 (0.6) 75.4 (0.4) 78.1 (0.3) 78.7 (0.1) BERT (FT) 76.2 (1.8) 78.3 (1.0) 79.8 (0.4) 80.2 (0.3) Table 6: Results in the high-resources setting. YAHOO! Canine Care Networking Multiplayer Gaming Harry Potter training wireless multiplayer dumbledore obedience homepna massively longbottom schutzhund network rifle hogwarts housebreaking verizon cheating malfoy iliotibial phone quake weasley crate blackberry warcraft rubeus ligament lan runescape philosopher orthopedic telephone socom albus fracture bluetooth fortress hufflepuffs gait broadband duel trelawney Nutrition Baseball Sexuality Religion nutritional baseball homophobia islam obesity sox heterosexuality jesus weight yankees orientation isaiah bodybuilding rodriguez transsexuality semitism anorexia gehrig cultures christian diet cardinals transgender baptist malnutrition astros polyamory jewish nervosa babe gay prophet gastric hitter feminism commandments watchers sosa societal god Table 7: Example topics learned by VAMPIRE in the YAHOO! dataset. 5894 0 20 40 60 80 100 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 0.200 Probability Density IMDB: 200 labeled instances Logistic Regression LSTM-based Classifier Deep Averaging Network CNN-based Classifier 75.0 77.5 80.0 82.5 85.0 87.5 90.0 92.5 95.0 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 IMDB: 10K labeled instances 0 20 40 60 80 100 Validation accuracy (%) 0.00 0.05 0.10 0.15 0.20 0.25 Probability Density AG News: 200 labeled instances 75.0 77.5 80.0 82.5 85.0 87.5 90.0 92.5 95.0 Validation accuracy (%) 0.0 0.2 0.4 0.6 0.8 1.0 1.2 AG News: 10K labeled instances Figure 6: Probability densities of supervised classification accuracy in low-resource (200 labeled instances; left) and high-resource (10K labeled instances; right) settings for IMDB and AG datasets using randomly initialized trainable embeddings. Each search consists of 300 trials over 5 seeds and varying hyperparameters. We experiment with four different classifiers: Logistic Regression, LSTM-based classifier, Deep Averaging Network, and a CNNbased Classifier. We choose to use the Deep Averaging Network for all classifier baselines, due to its reliability, expressiveness, and low-maintenance.
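The classifier search summarized in Figure 6 amounts to collecting validation accuracies over many random hyperparameter draws and inspecting their distribution. The sketch below illustrates the recipe on synthetic data with a logistic-regression stand-in; the search space, trial count, and data are illustrative assumptions rather than the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data; in the paper each trial trains a text classifier
# (logistic regression, DAN, CNN, or LSTM) on 200 or 10K labeled documents.
X = rng.normal(size=(1200, 50))
y = (X[:, 0] + 0.5 * rng.normal(size=1200) > 0).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

accuracies = []
for _ in range(50):  # the paper uses 300 trials over 5 seeds
    C = 10 ** rng.uniform(-3, 2)  # randomly sampled hyperparameter
    acc = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr).score(X_val, y_val)
    accuracies.append(acc)

# Summarize the spread of validation accuracies across trials; plotting a
# kernel density estimate of these values yields Figure 6-style curves.
print(f"mean={np.mean(accuracies):.3f}  std={np.std(accuracies):.3f}  "
      f"min={np.min(accuracies):.3f}  max={np.max(accuracies):.3f}")
```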
2019
590
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5895–5906 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5895 Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation Yftah Ziser and Roi Reichart Faculty of Industrial Engineering and Management, Technion, IIT [email protected], [email protected] Abstract Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), combining LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation. However, this approach is still challenged by the large pivot detection problem that should be solved, and by the inherent instability of LSTMs. In this paper we propose a Task Refinement Learning (TRL) approach, in order to solve these problems. Our algorithms iteratively train the PBLM model, gradually increasing the information exposed about each pivot. TRL-PBLM achieves stateof-the-art accuracy in six domain adaptation setups for sentiment classification. Moreover, it is much more stable than plain PBLM across model configurations, making the model much better fitted for practical use.1 1 Introduction Domain adaptation (DA, (Daum´e III, 2007; BenDavid et al., 2010)) is a fundamental challenge in NLP, as many language processing algorithms require costly labeled data that can be found in only a handful of domains. To solve this annotation bottleneck, DA aims to train algorithms with labeled data from one or more source domains so that they can be effectively applied in a variety of target domains. Indeed, DA algorithms have been developed for many NLP tasks and domains (e.g. (Jiang and Zhai, 2007; McClosky et al., 2010; Titov, 2011; Bollegala et al., 2011; Rush et al., 2012; Schnabel and Sch¨utze, 2014)). A number of approaches for DA have been proposed (§ 2). With the raise of Neural Networks (NNs), DA through Representation Learning (DReL) where a shared feature space for the source and the target domains is learned, has 1Our code is publicly available at: https://github. com/yftah89/TRL-PBLM. become prominent. Earlier DReL approaches (Blitzer et al., 2006, 2007) were based on a linear mapping of the original feature space to a new one, modeling the connections between pivot features – features that are frequent in the source and the target domains and are highly correlated with the task label in the source domain – and the complementary set of non-pivot features. This approach was later outperformed by autoencoder (AE) based methods (Glorot et al., 2011; Chen et al., 2012), which employ compress-based noise reduction to extract the shared feature space, but do not explicitly model the correspondence between the source and the target domains. Recently, methods that marry the complementary strengths of NNs and pivot-based ideas (Ziser and Reichart (2017, 2018a), denoted here with ZR17 and ZR18, respectively) established a new state-of-the-art. Despite their strong empirical results, relying on NNs and on the distinction between pivot and non-pivot features, the models in ZR17 and ZR18 suffer from two limitations. These limitations stem from the fact that in order to create the shared feature space these models train NNs to predict the existence of pivot features in unlabeled data from the source and target domains (AEs in ZR17, LSTMs (Hochreiter and Schmidhuber, 1997) in ZR18). 
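Pivot selection as defined here (frequent in the unlabeled data of both domains and highly correlated with the task label in the source domain) can be sketched with standard tooling. The thresholds below follow the setup reported later in Section 4 (at least 10 occurrences in each domain, mutual-information ranking); the function name, binarized features, and scikit-learn estimator are our own illustrative choices.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def select_pivots(src_unlabeled, tgt_unlabeled, src_labeled, src_labels,
                  num_pivots=500, min_count=10):
    """Pivots: unigrams/bigrams occurring at least `min_count` times in the
    unlabeled data of BOTH domains, ranked by mutual information (MI) with
    the task label on the source-domain labeled data."""
    vec = CountVectorizer(ngram_range=(1, 2))
    vec.fit(src_unlabeled + tgt_unlabeled)
    src_counts = np.asarray(vec.transform(src_unlabeled).sum(axis=0)).ravel()
    tgt_counts = np.asarray(vec.transform(tgt_unlabeled).sum(axis=0)).ravel()
    frequent = (src_counts >= min_count) & (tgt_counts >= min_count)

    X_lab = (vec.transform(src_labeled) > 0).astype(int)   # binary presence
    mi = mutual_info_classif(X_lab, src_labels,
                             discrete_features=True, random_state=0)
    mi[~frequent] = -np.inf        # only features frequent in both domains
    order = np.argsort(-mi)
    features = vec.get_feature_names_out()
    return [features[i] for i in order[:num_pivots] if np.isfinite(mi[i])]
```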
The first limitation is due to the large number of pivot features (several hundreds in each source/target domain pair in their experiments), which makes the classification task challenging and may harm the quality of the resulting crossdomain representations. As another limitation, NNs, and especially those that perform sequence tagging like PBLM (Pivot Based Language Modeling, ZR18), are highly sensitive to model design and hyper-parameter selection decisions (Hutter et al., 2014; Reimers and Gurevych, 2017). Intuitively, if a DA approach is not robust across hyper-parameter configurations, it is more chal5896 lenging to apply this approach to a variety of domain pairs. This is particularly worrisome in unsupervised domain adaptation (our focus setup, § 2), where no target domain labeled data is available, and hyper-parameter and configuration tuning is performed on source domain labeled data only. In this paper we propose to solve both problems by applying a novel Task Refinement Learning (TRL) approach to the state-of-the-art PBLM representation learning model (§ 3). In our TRLPBLM model the PBLM is trained in multiple stages. At the first stage the model should predict only the core relevant information each pivot holds with respect to the domain adaptation task. We do this by clustering the pivots with respect to the information they convey about the domain adaptation task and asking the model to predict the clusters rather than the pivots themselves. Then, at subsequent stages, the model should predict an increasingly larger subset of the pivots, while for those pivots that have not yet been exposed it is only their cluster that should be predicted. The pivots exposed in each iteration are defined based on measures of the complexity of the prediction task associated with each pivot and the importance of the pivot for the domain adaptation task. At each stage the PBLM is trained till convergence and its learned parameters then initialize the PBLM that is trained at the next stage. This transfer of information between stages is possible because the complexity of the prediction task with respect to each pivot (predicting the cluster or the pivot itself) can only increase between subsequent stages. Since PBLM is non-convex and hence sensitive to its initialization, each training stage of PBLM exploits the outcome of the learning task of its predecessor. Only at the last stage PBLM should predict the full set of pivot features, as in the standard PBLM training of ZR18. We hypothesize that TRL is a suitable solution for both aforementioned problems. For the large number of classes, TRL-PBLM starts from a small classification problem at the first stage and the number of classes gradually increases in subsequent stages, reaching the maximum only at the last stage. Moreover, the model should gradually predict increasingly more complex pivots that provide more fine grained information about the task. This way it should predict the existence of complex pivots only after it has learned about simpler ones. For configuration instability, we hypothesize that the gradual training of the model should result in a smoother convergence and a smaller impact of arbitrary design choices. Our approach is inspired by curriculum learning (CL (Elman, 1993; Bengio et al., 2009)), a learning paradigm that advocates the presentation of training examples to a learning algorithm in an organized manner, so that more complex concepts are learned after simpler ones. 
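The staged label spaces underlying this idea can be made concrete with a small helper that maps each pivot either to its coarse cluster (PosPiv/NegPiv) or, once exposed, to its own label. This is a sketch of the bookkeeping only; the full method (Section 3) additionally transfers the learned PBLM parameters between stages, which is not shown here, and the names and toy pivots are illustrative.

```python
def labels_for_iteration(sorted_pivots, pos_pivots, neg_pivots, i, K):
    """Return a dict mapping each pivot to the PBLM target label used at
    TRL iteration i (i = 0 is the initial PosPiv/NegPiv-only stage)."""
    n_exposed = (len(sorted_pivots) * i) // K   # first i * #P/K pivots exposed
    exposed = set(sorted_pivots[:n_exposed])
    mapping = {}
    for p in sorted_pivots:
        if p in exposed:
            mapping[p] = p          # predict the pivot itself
        elif p in pos_pivots:
            mapping[p] = "PosPiv"   # unexposed positive pivot
        elif p in neg_pivots:
            mapping[p] = "NegPiv"   # unexposed negative pivot
    return mapping                  # non-pivots map to "NONE" elsewhere

# Toy run of the example from Section 3.1: four pivots, K = 2 iterations.
pivots = ["good", "bad", "great", "worst"]
pos, neg = {"good", "great"}, {"bad", "worst"}
for i in range(3):  # stage 0, then iterations 1..K
    labels = labels_for_iteration(pivots, pos, neg, i, K=2)
    print(i, labels, "->", len(set(labels.values())) + 1, "classes incl. NONE")
```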
Indeed, CL methods have been designed for many NLP tasks (e.g. (Turian et al., 2010; Spitkovsky et al., 2010; Zou et al., 2013; Shi et al., 2015; Sachan and Xing, 2016; Wieting et al., 2016)) and for other machine learning application areas such as computer vision (e.g. (Pentina et al., 2015; Oh et al., 2015; Gong et al., 2016; Zhang et al., 2017)). However, while in CL the prediction task is fixed but the trained algorithm is exposed to increasingly more complex training examples in subsequent stages, in TRL the algorithm is trained to solve increasingly more complex tasks in subsequent stages, but the training data is kept fixed across the stages. We implemented the experimental setup of ZR18 for sentiment classification, considering all their 5 domains for a total 6 domain pairs (§ 4).2 Our TRL-PBLM-CNN model is identical to the state-of-the-art PBLM-CNN of ZR18, except that PBLM is trained with one of our TRL methods. Our best performing model outperforms the original PBLM-CNN by 2.1% on average across the six setups (80.9% vs. 78.8%). For two domain pairs, the improvement is as high as 5.2% (80.2% vs. 75%) and 3.6% (86.1% vs. 82.5%). Moreover, TRL-PBLM-CNN is more robust than plain PBLM-CNN, consistently achieving a higher maximum, minimum and average results as well as a lower standard deviation across the 30 configurations we considered for each model. We consider this a major result since, as noted above, stability is crucial for the real-world applicability of an unsupervised domain adaptation algorithm, since the selection of model configuration in this setup does not involve target domain labeled data and is hence inherently noisy and risky. 2 Background and Previous Work Domain adaptation is a long standing NLP challenge (Roark and Bacchiani, 2003; Chelba and 2Since TRL-PBLM requires multiple PBLM training stages, it was computationally demanding to experiment with all the 20 domain pairs of ZR18. See § 4 for more details. 5897 Acero, 2004; Daum´e III and Marcu, 2006). Major approaches to DA include: instance re-weighting (Huang et al., 2007; Mansour et al., 2009), subsampling from both domains (Chen et al., 2011) and DA through Representation Learning (DReL) where a joint source and target feature representation is learned. DReL has shown to be the state-ofthe-art for unsupervised DA (Ziser and Reichart, 2017, 2018a,b), and is the approach we pursue. Unsupervised Domain Adaptation In this work we focus on unsupervised DA. In this setup we have access to unlabeled data from the source and the target domains, but labeled data is available in the source domain only. We believe this is the most realistic setup if one likes to extend the reach of NLP to a large number of domains. The pipeline of unsupervised DA with representation learning typically consists of two steps: representation learning and classification. In the first step, a representation model is trained on the unlabeled data from the source and target domains. In the second step, a classifier for the supervised task is trained on the source domain labeled data and is then applied to the target domain. Every example that is fed to the task classifier is first represented by the representation model of the first step. This is the pipeline we follow in our models. In unsupervised DA the representation model and the task classifier can also be trained jointly. In § 4 we compare our models to such an end-toend model (MSDA-DAN (Ganin et al., 2016)). 
Domain Adaptation with Representation Learning (DReL) A seminal DReL model, from which we start our survey, is Structural Correspondence Learning (SCL) (Blitzer et al., 2006, 2007) that introduced the idea of pivotbased DReL. The main idea is to identify in the shared feature space of the source and the target domains the set of pivot features that can serve as a bridge between the domains. Formally these pivot features are defined to be: (a) frequent in the unlabeled data from both domains; and (b) highly correlated with the task label in the source domain labeled data. The remaining features are referred to as non-pivot features. In SCL, the division of the original feature set into the pivot and non-pivot subsets is utilized in order to learn a linear mapping from the original feature space of both domains into a shared, low-dimensional, real-valued feature space. Since SCL was presented, pivot-based DReL has been researched extensively (e.g. (Pan et al., 2010; Gouws et al., 2012; Bollegala et al., 2015; Yu and Jiang, 2016; Ziser and Reichart, 2017, 2018a)). In contrast to SCL that learns a linear transforamtion between pivot and non-pivot features, the next line of work aimed to learn representations with non-linear models, without making the distinction between pivot and non-pivot features. The basic idea of these models is training an autoencoder (AE) on the unlabeled data from both the source and the target domains, reasoning that the hidden representation of such a model should be less noisy and hence robust to domain changes. Examples of AE variants in recent DReL literature include Stacked Denoising Autoencoders (SDA, (Vincent et al., 2008; Glorot et al., 2011), the more efficient and salable marginalized SDA (MSDA, (Chen et al., 2012)), and MSDA variants (e.g. (Yang and Eisenstein, 2014; Clinchant et al., 2016)). Models based on variational AEs (Kingma and Welling, 2014; Rezende et al., 2014) have also been applied in DA (e.g. variational fair autoencoder (Louizos et al., 2016)), but they were outperformed by MSDA in Ziser and Reichart (2018a). Ziser and Reichart (2017) combined AEs with pivot-based DA. Their models (AE-SCL and AESCL-SR) are based on a three layer feed-forward network where the non-pivot features are fed to the input layer, encoded into a hidden representation and this hidden representation is then decoded into the pivot features of the input example. AESCL-SR utilizes word embeddings to exploit the similarities between pivot-based features, outperforming AE-SCL, and many other DReL models. A major limitation of the ZR17 models is that they do not exploit the structure of their input examples, which can harm document level tasks. We next describe an alternative approach. Pivot Based Language Modeling (PBLM) PBLM is a variant of an LSTM-based language model (LSTM-LM). However, while an LSTMLM predicts at each point the most likely next input word, PBLM predicts the next input unigram or bigram if one of these is a pivot (if both are, it predicts the bigram) and NONE otherwise.3 In the unsupervised DA pipeline PBLM is trained with the source and target domain unlabeled data. Consider the example in Figure 1a (imported 3In § 4 we describe the automatic pivot selection method which is solely based on the labeled and unlabeled data. 
5898 very witty great story not bad overall NONE not bad NONE NONE NONE great NONE (a) very witty great story not bad overall Text matrix Filters Max-Pooling Sentiment class FC Classification (b) Figure 1: The PBLM model (figures imported from ZR18). (a) The PBLM representation learning model. (b) PBLM-CNN where PBLM representations feed a CNN task classifier. from ZR18) for adaptation of a sentiment classifier between book reviews and reviews of kitchen appliances. In this example PBLM learns the connection between the book related (and hence non-pivot) adjective witty, and great - a common positive adjective in both domains, and hence a pivot. PBLM is designed to feed structure-aware task classifiers. Particularly, in the PBLM-CNN architecture that we consider here (Figure 1b),4 the PBLM’s softmax layer (that computes the probabilities of each pivot to be the next unigram/bigram) is cut and a matrix whose columns are the PBLM’s ht vectors is fed to the CNN. ZR18 demonstrated the superiority of PBLMCNN over previous approaches to DReL, establishing the importance of structure-aware representation learning for review document modeling. We hence develop our TRL methods for PBLM. 3 Task Refinement Learning for PBLM We apply TRL only to the representation learning stage of the unsupervised domain adaptation pipeline. We first describe the general TRL 4ZR18 also considered a PBLM-LSTM architecture where the PBLM representations feed an LSTM classifier. We focus on PBLM-CNN which demonstrated superior performance in 13 of 20 of their experimental setups. scheme, and then list specific implementations. 3.1 A General TRL Scheme As noted in § 2, PBLM is similar to an LSTM language model, but instead of predicting the next word at each position, it predicts the next unigram or bigram if these are pivots and a special NONE symbol otherwise. Our TRL scheme gradually exposes pivots to PBLM (Algorithm 1). We start by dividing the pivot features into two subsets: PosPiv is the set of pivot features that are more frequent in source domain training documents with positive labels than in source domain documents with negative labels; NegPiv is similarly defined, but these pivots are more frequent in source domain training documents with a negative label. In the first stage, PBLM is trained on the unlabeled data from the source and the target domains till convergence, just as in ZR18. The only difference is that in cases where the next unigram or bigram is a pivot, instead of predicting the actual pivot identity, PBLM should predict PosPiv or NegPiv according to the pivot’s class. That is, the representation learned by the first PBLM model is only sensitive to whether a pivot is positive or negative and not to the actual pivot identity. Following the definition of pivot features (§ 2), the positive/negative distinction is fundamental, and is hence considered at the first TRL stage. Data: Us: unlabeled source domain data; Ut: unlabeled target domain data. Input: K: number of TRL iterations; SortPivots: a sorted array of pivots; NegPiv: the list of negative pivots; PosPiv: the list of positive pivots. θ0 = rand(); θ1 = PBLMTrain (θ0, NegPiv, PosPiv, Us, Ut); i = 1; while i ≤K do θ = update-PBLM-params (θi, NegPiv, PosPiv, SortPivots, i); θi+1 = PBLMTrain (θ, NegPiv, PosPiv, SortPivots, i, Us, Ut); i = i + 1; end return θi; Algorithm 1: TRL for PBLM. After this initial step is completed our TRL algorithm continues for a predefined number of iter5899 ations (denoted with K in Algorithm 1). 
The algorithm receives as input a sorted array of pivot features such that pivots at the beginning of the array (lower indices) should be exposed first. At each iteration the PBLM is exposed to additional #P /K pivots, where #P is the total number of pivot features. That is, at the first iteration the first #P /K pivots are exposed, at the second iteration the next #P /K are also exposed and so on till the last (Kth) iteration in which all pivots are exposed. Since new features are exposed in each iteration, the label space of PBLM changes. For example, before the first iteration the label space consists of three labels: NONE, PosPiv and NegPiv, while in the first iteration the label space consists of NONE, PosPiv (for all positive pivots that are not exposed in this iteration), NegPiv (for all negative pivots that are not exposed in this iteration) and the first (top ranked) #P /K pivots in the sorted pivot array, for a total of #P /K + 3 labels. At each iteration the algorithm first updates the PBLM parameters (up-PBLM-params method of Algorithm 1). In this step a new PBLM model is initialized such that all its parameters except for those of the softmax prediction matrix are initialized to the parameters to which PBLM converged in the last time it was trained. The softmax matrix grows so that it can predict i · #P /K + 3 labels, instead of (i −1) · #P /K + 3 labels as in the previous PBLM training (i is the iteration number). To do that, the weights for the NONE, PosPiv and NegPiv classes as well as for the pivots that were exposed before the current iteration are initialized to the output of the previous PBLM training, while the weights of the newly exposed pivots are initialized to the weights learned for PosPiv (for those newly exposed pivots that were assigned the PosPiv label in the previous run) or for NegPiv (for those newly exposed pivots that were assigned the NegPiv label in the previous run). After the parameters are initialized, PBLM is trained again and the process proceeds iteratively till the last iteration where all the pivots are exposed. The weights of the last iteration will be used when PBLM is employed at the classification stage of the unsupervised DA pipeline (§ 2). Example To make the above explanation more concrete, we consider an example in which we have four pivots: good, bad, great and worst, so that good and great belong to PosPiv while bad and worst belong to NegPiv. We set K, the number of iterations, to 2, which means that the number of features exposed in each iteration is #P /K = 4/2 = 2. Finally, we assume that our pivot ranking method ranks the pivots in the order in which they were presented above. PBLM is first trained so that at each position if the next word is good or great it should predict PosPiv, if it is bad or worst it should predict NegPiv and otherwise it should predict NONE. Then the pivot exposure iterations begin. At the first iteration the pivots good and bad are exposed. The parameters learned in the previous run of PBLM (with the PosPiv, NegPiv and NONE predictions) are used as an initialization of the PBLM parameters, except that the softmax matrix should now allow five classes: PosPiv (for occurrences of great, that has not been exposed yet), NegPiv (for occurrences of worst), good, bad and NONE. Hence, in the softmax matrix of the new PBLM the parameters for PosPiv, and also for good, will be the parameters learned in the previous iteration for PosPiv. 
Likewise, the parameters for NegPiv, and also for bad, will be the parameters learned in the previous iteration for NegPiv, and the parameters for NONE are those previously learned for NONE. At the second iteration, the last two pivots, great and worst, are also exposed, and PBLM now has the following 5 classes: good, bad, great, worst and NONE. Parameter initialization is done in a similar manner to the first iteration, where the softmax parameters for great and worst are initialized to the parameters of PosPiv and NegPiv of the previous PBLM, respectively. Finally, this last PBLM is trained to yield the model that will be used in the unsupervised DA setup. We next describe our three methods for the order in which pivots are exposed in TRL training. 3.2 Pivot Exposure in TRL Our goal is to order the pivots so that highly ranked pivots convey more information about the domain adaptation task and are easier to predict by PBLM. We consider three pivot ranking methods. The Ranking by MI (RMI) method ranks the pivots according to their mutual information (MI) with the task label in the source domain training data. The reasoning is that pivots that are more strongly associated with the task label provide a stronger task signal to the representation learning model and should hence be learned earlier in the process. A downside of this method is that it does 5900 not consider any target domain information. Another alternative is the Ranking by Frequency (RF) method that ranks pivots according to the number of times they appear in the unlabeled data of both the source and target domains (combined). The reasoning here is that the representation learning model should have more statistics about the frequent pivots, which makes their prediction easier. Moreover, the frequent pivots presumably provide a more prominent signal about the desired representation and should hence be learned prior to less frequent pivots, whose signal is more nuanced. One obvious advantage of this method is that it considers both the source and the target domain. However, in cases where a pivot is very frequent in one domain and substantially less frequent in the other, RF would consider this pivot frequent, even though it does not provide too much information about one of the domains. To overcome this limitation of RF, we also consider a third pivot ranking method: Ranking by Similar Frequencies (RSF). In this method we compute two quantities for each pivot: fp−source = #ps #sd and fp−target = #pt #td , where #ps is the number of times the pivot p appears in the source domain unlabeled data, #sd is the number of documents in the source domain labeled data, and #pt and #td are defined similarly for the target domain unlabeled data. We then compute the similar frequency score of each pivot p to be: freqScore(p) = min(fp−source,fp−target) max(fp−source,fp−target), and rank the pivots in a descending order of freqScore scores. This way, pivots with more similar frequencies in the unlabeled data of both domains are ranked higher and will be exposed earlier to the PBLM algorithm. 4 Experiments We implemented the setup of ZR18, including datasets, baselines, and hyperparameter details. Task and Domains Following ZR18, and a large body of DA work, we experiment with the task of binary cross-domain sentiment classification with the product review domains of Blitzer et al. (2007) – Books (B), DVDs (D), Electronic items (E) and Kitchen appliances (K). 
We also consider the airline review domain that was presented by ZR18, who demonstrated that adaptation from the Blitzer product domains to this domain, and vice versa, is more challenging than adaptation between the Blitzer product domains. For each of the domains we consider 2000 labeled reviews, 1000 positive and 1000 negative, and unlabeled reviews: 6000 (B), 34741 (D), 13153 (E), 16785 (K) and 39396 (A). Since PBLM is computationally demanding, and employing TRL to PBLM requires multiple PBLM training processes, we pick 6 setups from the 20 of ZR18. We include each of the domains considered in ZR18 at least once. Our setups are: B-D, B-K, E-D, K-B, A-B and K-A. Models and Baselines Our main baseline is the PBLM-CNN sentiment classifier – the superior model of ZR18 (§ 2) – to which we refer as NoTRL. Our TRL algorithm aims to improve the PBLM (representation learning) step of the PBLM-CNN model. We consider the three TRL methods of § 3.2: Ranking by MI (RMI), Ranking by Frequency (RF), and Ranking by Similar Frequencies (RSF), each protocol is implemented with either K = 4 or K = 2 iterations, in addition to the initial step where the pivots are split into the positive and negative classes. The model names are hence: RMI2, RMI4, RF2, RF4, RSF2 and RSF4. To evaluate the relative importance of the initial pivot split to positive, negative and nonpivot classes compared to the pivot exposure methods, we also add the BasicTRL model in which the basic three class PBLM training is followed by a single iteration where all the pivots are exposed. To put our results in the context of previous leading models we further compare to the prominent baselines of ZR18: AE-SCL-SR; SCL with pivot features selected using the mutual information criterion (SCL-MI, (Blitzer et al., 2007)); MSDA and MSDA-DAN (Ganin et al., 2016) which employs a domain adversarial network (DAN) with MSDA vectors as input. Finally, we compare to a NoDA setup where the sentiment classifier is trained in the source domain and applied to the target domain without adaptation. For this case we consider a logistic regression classifier that was demonstrated in ZR18 to outperform LSTM and CNN classifiers. This is also the classifier employed with AE-SCL-SR and SCL-MI. 5 Features and Pivots The input features of all models are word unigrams and bigrams. The division of the feature set into pivots and non-pivots is based on Blitzer et al. (2007) and (Ziser and Re5The URLs of the datasets and the code we used, are provided in the appendix. 5901 ichart, 2017, 2018a): Pivot features appear at least 10 times in the unlabeled data of both the source and the target domains, and among those features are the ones with the highest mutual information with the task (sentiment) label in the source domain labeled data. For non-pivot features we consider unigrams and bigrams that appear at least 10 times in the unlabeled data of at least one domain. Cross-Validation and Hyperparameter Tuning We employ a 5-fold cross-validation protocol as in ZR18. In all five folds 1600 source domain examples are randomly selected for training data and 400 for development, such that both the training and the development sets have the same number of positive and negative reviews. For each model we report the averaged performance across these 5 folds. For previous models, we follow the tuning process of ZR18. The tuning of PBLM and of our TRL methods is described in the Appendix. 5 Results Overall Performance Our first result is presented in Table 1. 
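For concreteness, the three pivot-ordering heuristics described above (RMI, RF, and RSF) can be sketched as follows. The vectorizer settings and the normalization of per-domain frequencies by unlabeled document counts are our own simplifying assumptions; only the ranking criteria themselves follow Section 3.2.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def rank_pivots(pivots, method, src_labeled, src_labels,
                src_unlabeled, tgt_unlabeled):
    """Order pivots for gradual exposure (Sec. 3.2).
    RMI: mutual information with the source task label.
    RF : total count in the combined source + target unlabeled data.
    RSF: freqScore(p) = min(f_src, f_tgt) / max(f_src, f_tgt)."""
    vec = CountVectorizer(ngram_range=(1, 2), vocabulary=pivots)
    if method == "RMI":
        X = (vec.transform(src_labeled) > 0).astype(int)
        scores = mutual_info_classif(X, src_labels,
                                     discrete_features=True, random_state=0)
    else:
        src_counts = np.asarray(vec.transform(src_unlabeled).sum(axis=0)).ravel()
        tgt_counts = np.asarray(vec.transform(tgt_unlabeled).sum(axis=0)).ravel()
        if method == "RF":
            scores = (src_counts + tgt_counts).astype(float)
        else:  # "RSF"; frequencies normalized by unlabeled document counts
            f_src = src_counts / max(len(src_unlabeled), 1)
            f_tgt = tgt_counts / max(len(tgt_unlabeled), 1)
            scores = np.minimum(f_src, f_tgt) / np.clip(
                np.maximum(f_src, f_tgt), 1e-12, None)
    order = np.argsort(-scores)          # descending: exposed first
    return [pivots[i] for i in order]
```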
On average across the test sets, all TRL-PBLM methods improve over the original PBLM (NoTRL) with the best performing method, RF2, improving by as much as 2.1% on average (80.9 vs. 78.8). In all 6 setups one of the TRLPBLM methods performs best. In two setups RF2 improves over NoTRL by more than 3.5%: 80.2 vs 75 (E-D) and 86.1 vs 82.5 (B-K) (error reduction of 20.8% and 20.6%, respectively). In two other setups RF2 improves by 1.7-2%: K-B (76.2 vs. 74.2), and A-B (72.3 vs. 70.6). In the remaining two setups a TRL method improves, although by less than 0.5%. The 80.9% averaged accuracy of RF2 compares favorably also with the 74.4% of AE-SCL-SR, the strongest baseline from ZR18. Test Set Stability Our second result is presented in Table 2. The table presents the minimum (min), maximum (max), average (avg) and standard deviation (std) of the test set scores of the 30 hyper-parameter configurations we consider for each model. The table compares these numbers for RF2, our best performing TRL-PBLM method, BasicTRL, that exposes all the pivots in the first iteration after PBLM is trained with the positive, negative and non-pivot classes, and for NoTRL. The table clearly demonstrates that RF2 and BasicTRL consistently achieve higher avg, max and min results, as well as a lower std, compared to adaptation with NoTRL. This means that models learned by TRL based methods are much more robust to the selection of the hyper-parameter configuration. Moreover, even the min values of RF2 consistently outperform the NoDA model (where a classifier is trained on the source domain and applied to the target domain without domain adaptation; bottom line of Table 1) and the min values of BasicTRL outperform NoDA in 5 of 6 setups (average difference of 3.9% for RF2 and for 3.5% for BasicTRL). In contrast, the min value of NoTRL is outperformed by NoDA in 5 of 6 cases (with an averaged gap of 2.8%). Model Selection Stability Additional comparison between Table 2 and Table 1 further reveals that model selection by development data has a more negative impact on NoTRL, compared to RF2 and BasicTRL. Particularly, for NoTRL there are only two cases where the model that performs best on the test set (max column of Table 2) was selected by the development data (the numbers reported in Table 1): B-D (84.2%) and K-A (86.1%). Moreover, the averaged difference between the best test set model and the one selected by the development data for NoTRL is 1.3%, and in one setup (E-D) the difference is as high as 4.3%. For RF2, in contrast, there are four cases where the best performing test set model is selected by the development data (E-D, K-B, A-B and K-A), and the averaged gap between the selected model and the best test set model is only 0.1%. For BasicTRL the corresponding numbers are two setups and an averaged difference of 0.6%. These improved stability patterns are observed also with the other TRL methods we experiment with. We do not provide additional numbers in order to keep our presentation concise. Finally, we note that BasicTRL preforms well, despite being simpler than the other TRL models. For example, in three of the six Table 1 setups BasicTRL is the second best model and in one setup it is the best model. Table 2 also reflects similar performance for RF2 and BasicTRL. Likewise, for all pivot exposure methods 2 iterations are somewhat better than 4. In future work we intend to explore additional pivot exposure strategies. Ablation Analysis We finally consider a possible explanation to the success of TRL. 
Recall that the goal of PBLM is to encode the input text in a way that preserves the information in the pivots. This encoding (the hidden vectors of the LSTM) is then fed to the task classifier. We can hence expect that in a high quality PBLM model the representations of pivots (their vectors in the softmax output matrix of the model) from the PosPiv class (§3.1) will be similar to each other, and the representations of pivots from the NegPiv class will be similar to each other, but that members of the two classes will have distinct representations. This way we are assured that the input text encoding preserves an important bit of the pivots' semantics: their correspondence to one of the sentiment labels.

For RF2, BasicTRL and NoTRL we hence perform the following analysis, focusing on the models with 500 pivots. After the model converges we compute for each of the 500 pivots its 10 nearest neighbors and compute the percentage of these neighbors that belong to the same class, PosPiv or NegPiv, as the pivot. In Table 3 we report for each model the average over the 3000 scores we get from the six model configurations we trained with 500 pivots (see the appendix for the details of the configurations).

Table 1: Sentiment accuracy when hyper-parameters are tuned with development data.

                     B-D    B-K    E-D    K-B    A-B    K-A    Average
PBLM+TRL Methods
  RF2                84.1   86.1   80.2   76.2   72.3   86.1   80.9
  RF4                83.4   85     79.2   73.7   71     86.5   79.8
  RSF2               84     85.1   79.1   74     71.3   85.9   79.9
  RSF4               83.4   85.3   78     74.1   69.7   86     79.4
  RMI2               83.5   85.4   79.2   74.1   69.6   86.2   79.7
  RMI4               83.5   84.9   78.1   72.8   69.4   86.1   79.1
  BasicTRL           84.4   85.9   78.2   74.6   70.8   86.4   80.1
Plain PBLM (ZR18)
  NoTRL              84.2   82.5   75     74.2   70.6   86.1   78.8
Other Baselines
  AE-SCL-SR          81.1   80.1   74.5   73     60.5   76.9   74.4
  MSDA               78.3   78.8   71     70     58.5   76.8   72.2
  MSDA-DAN           79.7   75.4   73.1   71.2   59.5   76.6   72.6
  SCL                78.8   77.2   70.4   69.3   61.7   72.3   71.6
  NoDA               76     74     69.1   67.6   57.5   69.6   67

Table 2: Statistics of the test set accuracy distribution achieved by the PBLM-CNN sentiment classifier, when adapted between domains with RF2, BasicTRL, and NoTRL (the first two are TRL-based methods). The statistics are computed across 30 model configurations.

  B-D          avg    max    min    std
    RF2        82.2   84.5   79     1.20
    BasicTRL   82.6   84.6   80.5   0.94
    NoTRL      78.3   84.2   70.2   3.70
  B-K          avg    max    min    std
    RF2        82.7   86.3   78.9   1.96
    BasicTRL   83.3   85.9   80.5   1.46
    NoTRL      78.6   84.1   71.3   3.30
  E-D          avg    max    min    std
    RF2        75.8   80.2   70     2.40
    BasicTRL   75.4   79.8   69.6   2.50
    NoTRL      71.7   79.3   65.9   3.40
  K-B          avg    max    min    std
    RF2        72.1   76.2   68.6   1.70
    BasicTRL   72     74.9   66.1   2.24
    NoTRL      68.8   74.4   62.8   3.78
  A-B          avg    max    min    std
    RF2        65.6   72.3   61.6   2.20
    BasicTRL   65.7   72.3   61.3   2.10
    NoTRL      64.8   71.6   60.9   2.70
  K-A          avg    max    min    std
    RF2        83.6   86.1   78     2
    BasicTRL   84.3   86.4   76.9   1.90
    NoTRL      76.1   86.1   66.2   6.80

Table 3: Ablation analysis. B-TRL is BasicTRL.

           B-D    B-K    E-D    K-B    A-B    K-A
  RF2      98.4   98.9   99.3   98.6   99.5   99.2
  B-TRL    97.9   99.0   99.2   95.5   99.0   98.4
  NoTRL    78.2   81.7   81.2   78.3   72.5   76.1
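The nearest-neighbor purity score reported in Table 3 can be sketched as follows (a minimal sketch with hypothetical inputs: one vector per pivot taken from the model's softmax output matrix, plus its PosPiv/NegPiv label; the paper does not state the neighbor metric, Euclidean distance is assumed here):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbor_purity(pivot_vectors, pivot_classes, k=10):
    """pivot_vectors: (n_pivots, dim) array; pivot_classes: length-n array of
    'PosPiv'/'NegPiv' labels. Returns the average fraction of each pivot's k
    nearest neighbors that share its class."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(pivot_vectors)
    _, idx = nn.kneighbors(pivot_vectors)          # idx[:, 0] is the pivot itself
    classes = np.asarray(pivot_classes)
    same_class = classes[idx[:, 1:]] == classes[:, None]
    return float(same_class.mean())
```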
The table clearly demonstrates that the pivot representations learned by RF2 and BasicTRL cluster much better into the PosPiv and NegPiv clusters compared to the pivot representations in NoTRL. This means that the encoding of the input with respect to the pivots preserves the sentiment class information much better in these TRL models than in the NoTRL model. To illustrate this effect, we present here a qualitative example of the nearest neighbor list of a pivot according to three models (Table 4). The domain adaptation setup of the example is K-A and the pivot we selected for this example is highly recommended, which falls into the PosPiv class (i.e. it appears many more times in positive source domain reviews than in negative ones). The table demonstrates that for the NoTRL model there are several NegPiv pivots in the nearest neighbor list of highly recommended – e.g. not recommend and not buy. In contrast, the nearest neighbor lists of highly recommended according to BasicTRL and RF2 contain only pivots from the PosPiv class.

Table 4: Top 10 nearest neighbors (ranked from the closest neighbor downward) of the pivot "highly recommended" according to three models: NoTRL (plain PBLM), BasicTRL and RF2. TRL training results in all members of the neighbor list of a pivot being of the same sentiment class as the pivot itself.

  NoTRL                        BasicTRL                     RF2
  pivot            sentiment   pivot            sentiment   pivot            sentiment
  would recommend  positive    would highly     positive    would recommend  positive
  love             positive    would recommend  positive    would highly     positive
  recommend them   positive    happy            positive    recommend them   positive
  remember         positive    recommend        positive    happy            positive
  not recommend    negative    recommend them   positive    love             positive
  happy            positive    enjoyed          positive    I highly         positive
  thought          negative    only complaint   positive    remember         positive
  would not        negative    appreciate       positive    recommend        positive
  not buy          negative    I highly         positive    never have       positive
  I highly         positive    saves            positive    appreciate       positive

6 Conclusions

We proposed Task Refinement Learning algorithms for domain adaptation with representation learning. Our TRL algorithms are tailored to the PBLM representation learning model of ZR18 and aim to provide more effective training for this model. The resulting PBLM-CNN model improves both the accuracy and the stability of the original PBLM-CNN model where PBLM is trained without TRL. In future work we would like to develop more sophisticated TRL algorithms, for both in-domain and domain adaptation NLP setups. Moreover, we would like to establish the theoretical grounding for the improved stability achieved by TRL, and to explore this effect beyond domain adaptation.

Acknowledgements

We would like to thank the members of the IE@Technion NLP group for their valuable feedback and advice. This research has been funded by an ISF personal grant on "Domain Adaptation in NLP: Combining Deep Learning with Domain and Task Knowledge".

References

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine learning, 79(1-2):151–175.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM.

John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proc. of ACL.
John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP.

Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2015. Unsupervised cross-domain word representation learning. In Proc. of ACL.

Danushka Bollegala, Yutaka Matsuo, and Mitsuru Ishizuka. 2011. Relation adaptation: learning to extract novel relations with minimum supervision. In Proc. of IJCAI.

Ciprian Chelba and Alex Acero. 2004. Adaptation of maximum entropy capitalizer: Little data can help a lot. In Proc. of EMNLP.

Minmin Chen, Yixin Chen, and Kilian Q Weinberger. 2011. Automatic feature decomposition for single view co-training. In Proc. of ICML.

Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proc. of ICML.

Stéphane Clinchant, Gabriela Csurka, and Boris Chidlovskii. 2016. A domain adaptation regularization for denoising autoencoders. In Proc. of ACL (short papers).

Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proc. of ACL.

Hal Daumé III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126.

Jeffrey L Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proc. of ICML, pages 513–520.

Chen Gong, Dacheng Tao, Stephen J Maybank, Wei Liu, Guoliang Kang, and Jie Yang. 2016. Multimodal curriculum learning for semi-supervised image classification. IEEE Transactions on Image Processing, 25(7):3249–3260.

Stephan Gouws, GJ Van Rooyen, MIH Medialab, and Yoshua Bengio. 2012. Learning structural correspondences across different linguistic domains with synchronous neural language models. In Proc. of the xLite Workshop on Cross-Lingual Technologies, NIPS.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.

Jiayuan Huang, Arthur Gretton, Karsten M Borgwardt, Bernhard Schölkopf, and Alex J Smola. 2007. Correcting sample selection bias by unlabeled data. In Proc. of NIPS.

Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. 2014. An efficient approach for assessing hyperparameter importance. In Proc. of ICML.

Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proc. of ACL.

Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR.

Diederik P Kingma and Max Welling. 2014. Auto-encoding variational bayes. In Proc. of ICLR.

Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2016. The variational fair autoencoder.

Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation with multiple sources. In Proc. of NIPS.

David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Proc. of NAACL.

Quang Nguyen. 2015. The airline review dataset. https://github.com/quankiquanki/skytrax-reviews-dataset. Scraped from www.airlinequality.com.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. 2015. Action-conditional video prediction using deep networks in atari games. In Proc. of NIPS.
Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th international conference on World wide web, pages 751–760. ACM.

Anastasia Pentina, Viktoriia Sharmanska, and Christoph H Lampert. 2015. Curriculum learning of multiple tasks. In Proc. of CVPR, pages 5492–5500.

Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In Proc. of EMNLP.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proc. of ICML.

Brian Roark and Michiel Bacchiani. 2003. Supervised and unsupervised pcfg adaptation to novel domains. In Proc. of HLT-NAACL.

Alexander M Rush, Roi Reichart, Michael Collins, and Amir Globerson. 2012. Improved parsing and pos tagging using inter-sentence consistency constraints. In Proc. of EMNLP-CoNLL.

Mrinmaya Sachan and Eric Xing. 2016. Easy questions first? a case study on curriculum learning for question answering. In Proc. of ACL.

Tobias Schnabel and Hinrich Schütze. 2014. Flors: Fast and simple domain adaptation for part-of-speech tagging. Transactions of the Association for Computational Linguistics, 2:15–26.

Yangyang Shi, Martha Larson, and Catholijn M Jonker. 2015. Recurrent neural network language model adaptation with curriculum learning. Computer Speech & Language, 33(1):136–154.

Valentin I Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010. From baby steps to leapfrog: How less is more in unsupervised dependency parsing. In Proc. of NAACL-HLT.

Ivan Titov. 2011. Domain adaptation by constraining inter-domain variability of latent feature representation. In Proc. of ACL.

Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proc. of ACL.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proc. of ICML.

John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proc. of EMNLP.

Yi Yang and Jacob Eisenstein. 2014. Fast easy unsupervised domain adaptation with marginalized structured dropout. In Proc. of ACL (short papers).

Jianfei Yu and Jing Jiang. 2016. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In Proc. of EMNLP.

Yang Zhang, Philip David, and Boqing Gong. 2017. Curriculum domain adaptation for semantic segmentation of urban scenes. In The IEEE International Conference on Computer Vision (ICCV), volume 2, page 6.

Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Proc. of CoNLL.

Yftah Ziser and Roi Reichart. 2018a. Pivot based language modeling for improved neural domain adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1241–1251.

Yftah Ziser and Roi Reichart. 2018b. Deep pivot-based modeling for cross-language cross-domain transfer with minimal guidance. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 238–249.
Will Y Zou, Richard Socher, Daniel Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proc. of EMNLP.

A URLs of Code and Data

As noted in the experiments section, we provide here the URLs for the code and data we use in the paper.

• Blitzer et al. (2007) product review data: http://www.cs.jhu.edu/~mdredze/datasets/sentiment/index2.html.
• The airline review data is (Nguyen, 2015).
• Code for the PBLM and PBLM-CNN models (Ziser and Reichart, 2018a): https://github.com/yftah89/PBLM-Domain-Adaptation.
• Code for the AE-SCL and AE-SCL-SR models of ZR17 (Ziser and Reichart, 2017): https://github.com/yftah89/Neural-SCLDomain-Adaptation.
• Code for the SCL-MI method of Blitzer et al. (2007): https://github.com/yftah89/structural-correspondence-learning-SCL.
• Code for MSDA (Chen et al., 2012): http://www.cse.wustl.edu/~mchen.
• Code for the domain adversarial network used as part of the MSDA-DAN baseline (Ganin et al., 2016): https://github.com/GRAAL-Research/domain_adversarial_neural_network.
• Logistic regression code: http://scikit-learn.org/stable/.

B Hyperparameter Tuning

As noted in the experimental setup, for all previous work models (except for the PBLM models of (Ziser and Reichart, 2018a)), we follow the experimental setup of (Ziser and Reichart, 2017), including their hyperparameter estimation protocol. The hyperparameters of the PBLM models are provided here (they are identical to those of (Ziser and Reichart, 2018a)):

• Input word embedding size: (128, 256).
• Number of pivot features: (100, 200, 300, 400, 500).
• |h_t|: (128, 256, 512).
• PBLM model order: second order.

Note that Ziser and Reichart (2018a) also considered the word embedding sizes of 32 and 64. In our preliminary experiments these hyperparameters provided very poor performance for the plain PBLM model, so we excluded them from our full set of experiments. For the CNN in PBLM-CNN we only experimented with K = 250 filters and with a kernel of size d = 3.

All the algorithms in the paper that involve an LSTM or a CNN are trained with the ADAM algorithm (Kingma and Ba, 2015). For this algorithm we used the parameters described in the original ADAM article (these parameters were also used by ZR18):

• Learning rate: lr = 0.001.
• Exponential decay rate for the 1st moment estimates: β1 = 0.9.
• Exponential decay rate for the 2nd moment estimates: β2 = 0.999.
• Fuzz factor: ε = 1e-08.
• Learning rate decay over each update: decay = 0.0.

For all the experiments in the paper we use the same random seed for parameter initialization.

C Experimental Details

Pre-processing. All sequential models considered in our experiments are fed with one review example at a time. For all models in the paper, punctuation is first removed from the text before it is processed by the model (sentence boundaries are still encoded). This is the only pre-processing step we employ in the paper. This decision is in line with Ziser and Reichart (2018a).

Features. For AE-SCL-SR, SCL-MI and MSDA we concatenate the representation learned by the model with the original representation, and this representation is fed to the logistic regression classifier. MSDA-DAN jointly learns the feature representation and performs the sentiment classification task. It is hence fed by a concatenation of the original and the MSDA-induced representations.
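As a concrete illustration of the feature handling just described for AE-SCL-SR, SCL-MI and MSDA, a minimal sketch (hypothetical variable names, not the code released with the paper) concatenates the learned representation with the original sparse features and feeds the result to a logistic regression classifier:

```python
from scipy.sparse import csr_matrix, hstack
from sklearn.linear_model import LogisticRegression

def train_sentiment_classifier(X_orig, X_learned, y):
    """X_orig: sparse (n, d1) unigram/bigram features; X_learned: (n, d2)
    representation induced by the adaptation method; y: sentiment labels."""
    X = hstack([csr_matrix(X_orig), csr_matrix(X_learned)]).tocsr()
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y)
    return clf
```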
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5907–5917, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Optimal Transport-based Alignment of Learned Character Representations for String Similarity

Derek Tam1, Nicholas Monath1, Ari Kobren1, Aaron Traylor2, Rajarshi Das1, Andrew McCallum1
1College of Information and Computer Sciences, University of Massachusetts Amherst
2Department of Computer Science, Brown University
{dptam,nmonath,akobren,rajarshi,mccallum}@cs.umass.edu
[email protected]

Abstract

String similarity models are vital for record linkage, entity resolution, and search. In this work, we present STANCE–a learned model for computing the similarity of two strings. Our approach encodes the characters of each string, aligns the encodings using Sinkhorn Iteration (alignment is posed as an instance of optimal transport) and scores the alignment with a convolutional neural network. We evaluate STANCE's ability to detect whether two strings can refer to the same entity–a task we term alias detection. We construct five new alias detection datasets (and make them publicly available). We show that STANCE (or one of its variants) outperforms both state-of-the-art and classic, parameter-free similarity models on four of the five datasets. We also demonstrate STANCE's ability to improve downstream tasks by applying it to an instance of cross-document coreference and show that it leads to a 2.8 point improvement in B3 F1 over the previous state-of-the-art approach.

1 Introduction

String similarity models are crucial in record linkage, data integration, search and entity resolution systems, in which they are used to determine whether two strings refer to the same entity (Bilenko and Mooney, 2003; McCallum et al., 2005; Li et al., 2015). In the context of these systems, measuring string similarity is complicated by a variety of factors including: the use of nicknames (e.g., Bill Clinton instead of William Clinton), token permutations (e.g., US Navy and Naval Forces of the US) and noise, among others. Many state-of-the-art systems employ either classic similarity models, such as Levenshtein, longest common subsequence, and Jaro-Winkler, or learned models for string similarity (Levin et al., 2012; Li et al., 2015; Ventura et al., 2015; Kim et al., 2016a; Gan et al., 2017). While classic and learned approaches can be effective, they both have a number of shortcomings. First, the classic approaches have few parameters, making them inflexible and unlikely to succeed across languages or across domains with unique characteristics (e.g. company names, music album titles, etc.) (Needleman and Wunsch, 1970; Smith and Waterman, 1981; Winkler, 1999; Gionis et al., 1999; Bergroth et al., 2000; Cohen et al., 2003). Classic models also assume that each edit has equal cost, which is unrealistic. For example, consider the names Chun How and Chun Hao–which can refer to the same entity–and the names John A. Smith and John B. Smith, which cannot. Even though the first pair differ by 2 edits and the second pair by 1, transforming ow to ao in the first pair should cost less than transforming A to B in the second. Learned string similarity models address these problems by learning distinct costs for various edits and have thus proven successful in a number of domains (Bilenko and Mooney, 2003; McCallum et al., 2005; Gan et al., 2017).
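To make the equal-cost limitation discussed above concrete, a plain Levenshtein implementation (a standard dynamic program, shown here only for illustration) assigns the two name pairs exactly the distances quoted in the text:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic edit distance: every insertion, deletion and substitution costs 1.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("Chun How", "Chun Hao"))            # 2 (same entity)
print(levenshtein("John A. Smith", "John B. Smith"))  # 1 (different entities)
```

Under equal edit costs the matching pair is scored as farther apart than the non-matching pair, which is exactly the failure mode a learned model can avoid.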
Some learned string similarity models, such as the SVM (Bilenko and Mooney, 2003) and CRF-based (McCallum et al., 2005) approaches, use edit patterns akin to insertions/swaps/deletions, which may lead to strong inductive biases. For example, even when costs are learned, two strings related by a token permutation–e.g., Grace Hopper and Hopper, Grace–are likely to have high cost even though they clearly refer to the same entity. Gan et al. (2017), on the other hand, provide less structure, encoding each string with a single vector embedding and measuring similarity between the embedded representations. In this paper, we present a learned string similarity model that is flexible, captures sequential dependencies of characters, and is readily able to learn a wide range of edit patterns–such as token permutations. Our approach is comprised of three components: the first encodes each character in both strings using a recurrent neural network; the second softly aligns the two encoded sequences by solving an instance of optimal transport; the third scores the alignment with a convolutional neural network. Each component is differentiable, allowing for end-to-end training.

Our model is called STANCE–an acronym that stands for: Similarity of Transport-Aligned Neural Character Encodings. We evaluate STANCE's ability to capture string similarity in a task we term alias detection. The input to alias detection is a query mention (i.e., a string) and a set of candidate mentions, and the goal is to score query-candidate pairs that can refer to the same entity higher than pairs that cannot. For example, an accurate model scores the query Philips with candidates Philips Corporation and Katherine Philips higher than with M. Phelps. Alias detection differs from both coreference and entity linking in that neither surrounding natural language context of the mention nor external knowledge are available. A similar task is studied in recent work (Gan et al., 2017).

In experiments, we compare STANCE to state-of-the-art and classic models of string similarity in alias detection on 5 newly constructed datasets–which we make publicly available. Our results demonstrate that STANCE outperforms all other approaches on 4 out of 5 datasets in terms of Hits@1 and 3 out of 5 datasets in terms of mean average precision. Of the two cases in which STANCE is outperformed by other methods in terms of mean average precision, one is by a variant of STANCE in an ablation study. We also demonstrate STANCE's capacity for supporting downstream tasks by using it in cross-document coreference for the Twitter at the Grammy's dataset (Dredze et al., 2016). Using STANCE improves upon the state-of-the-art by 2.8 points of B3 F1. Analyzing our trained model reveals that STANCE effectively learns sequence-aware character similarities, filters noise with optimal transport, and uses the CNN scoring component to detect unconventional similarity-preserving edit patterns.

2 STANCE

Our goal is to learn a model, f(·, ·), that measures the similarity between two strings–called mentions. The model should produce a high score when its inputs are aliases of the same entity, where a mention is an alias of an entity if it can be used to refer to that entity. For example, the mentions Barack H. Obama and Barry Obama are both aliases of the entity wiki/Barack_Obama. Note that the alias relationship is not transitive: both of the pairs Obama-Barack Obama and Obama-Michelle Obama are aliases of the same entity, but the pair Barack Obama-Michelle Obama are not.
In this section we describe our proposed model, STANCE, which is comprised of three stages: encoding both mentions and constructing a corresponding similarity matrix, softly aligning the encoded mentions, and scoring the alignment.

2.1 Mention Encoding Similarity Matrix

A flexible string similarity model is sequence-aware, i.e., the cost of each character transformation should depend on the surrounding characters (e.g., transforming Chun How to Chun Hao should have low cost). To capture these sequential dependencies, STANCE encodes each mention using a bidirectional long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005). In particular, each character ci in a mention m is represented by a d-dimensional vector, hi, where hi is the concatenation of the hidden states corresponding to ci produced by running the LSTM in both directions. The encoded representations of the characters are stacked to form a matrix H(m) ∈ R^{L×d}, where L (a hyperparameter) is the maximum string length considered by STANCE. Given a query m and candidate m′, STANCE computes a similarity matrix of their encodings via an inner product: S = H(m)H(m′)^T. Each cell in the resultant matrix represents a measure of the similarity between each pair of character encodings from m and m′. Note that for a mention q only the first |q| (i.e., length of the string q) rows of H(q) contain non-zero values.

2.2 Soft Alignment via Optimal Transport

The next component of our model computes a soft alignment between the characters of m and m′. Aligning the mentions is posed as a transport problem, where the goal is to convert one mention into another while minimizing cost. In particular, we solve the Kantorovich formulation of optimal transport (OT). In this formulation, two probability measures, p1 and p2, are given in addition to a cost matrix, C.
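Before describing the transport step, the encoding and similarity-matrix computation of §2.1 can be sketched in PyTorch as follows (illustrative hyper-parameters and variable names; a sketch, not the released implementation):

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, char_ids, lengths):
        # char_ids: (batch, L) padded character ids; lengths: true mention lengths
        h, _ = self.lstm(self.emb(char_ids))      # (batch, L, 2*hidden)
        mask = (torch.arange(char_ids.size(1))[None, :] < lengths[:, None]).float()
        return h * mask.unsqueeze(-1)             # zero rows past the mention length

def similarity_matrix(H_m, H_m2):
    # S = H(m) H(m')^T: (batch, L, d) x (batch, d, L) -> (batch, L, L)
    return H_m @ H_m2.transpose(-1, -2)

# Usage for a single mention pair (batch of 1, L = 32):
enc = CharEncoder(vocab_size=128)
ids1 = torch.randint(1, 128, (1, 32))
ids2 = torch.randint(1, 128, (1, 32))
S = similarity_matrix(enc(ids1, torch.tensor([10])), enc(ids2, torch.tensor([12])))
```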
The cost matrix C defines the cost of moving
(or converting) each element in the support of p1 to each element in the support of p2. The solution to OT is a matrix, P̂, called the transport plan, which defines how to completely convert p1 into p2. A viable transport plan is required to be non-negative and is also required to have marginals of p1 and p2 (i.e., if P̂ is summed along the rows then p1 is recovered and if it is summed along the columns p2 is recovered). The goal is to find the plan with minimal cost,

P^\star = \operatorname*{argmin}_{P \in \mathcal{P}} \sum_{i=0}^{|p_1|} \sum_{j=0}^{|p_2|} C_{ij} P_{ij}, \qquad \mathcal{P} = \{ P \in \mathbb{R}_+^{L \times L} \mid P \mathbf{1}_L = p_1, \; P^{T} \mathbf{1}_L = p_2 \},

where |·| is the number of elements in the support of the corresponding distribution and \mathcal{P} is the set of valid transportation plans. In this sense, a transportation plan can be thought of as a soft alignment of the supports of p1 and p2 (i.e., an element in p1 can be aligned fractionally to multiple elements in p2).

[Figure 1: STANCE model architecture: character similarities (§2.1), soft alignment (§2.2), and scoring (§2.3); the inputs are the mention encodings H(m) and H(m')^T.]

[Figure 2: Three heatmaps, (a) Similarity Matrix, (b) Transport Matrix, (c) Similarity × Transport; in all three, brighter cells correspond to higher similarity. Figure 2a visualizes the character similarity matrix for two mentions: Three Doors Down and 3 Doors Down. Figure 2b visualizes the transport matrix and Figure 2c visualizes the element-wise product of the similarity and transport matrices. Many of the characters are highly similar. Multiplying by the transport matrix amplifies the alignment of the mentions while reducing noise, resulting in a clean alignment for the CNN scoring component.]
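As the next paragraph describes, the entropically regularized version of this problem can be solved with Sinkhorn iteration, yielding a plan of the form diag(u) K diag(v) with K = exp(-λC); a minimal NumPy sketch under those definitions (uniform marginals and a fixed iteration count are assumptions made here) is:

```python
import numpy as np

def sinkhorn_plan(C, lam=10.0, n_iters=50):
    """Approximate transport plan for cost matrix C (n x m) with uniform
    marginals, via Sinkhorn iteration on K = exp(-lam * C)."""
    n, m = C.shape
    p1, p2 = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-lam * C)
    u = np.ones(n)
    for _ in range(n_iters):
        v = p2 / (K.T @ u)
        u = p1 / (K @ v)
    return np.diag(u) @ K @ np.diag(v)   # P = diag(u) K diag(v)

def aligned_similarity(S, lam=10.0):
    # Cost from similarities and re-weighting, as defined in the text:
    C = S.max() - S                      # C_ij = S_max - S_ij
    P = sinkhorn_plan(C, lam=lam)
    return S * P                         # S' = S o P-hat, fed to the CNN scorer
```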
A transportation plan can be computed efficiently via Sinkhorn Iteration, exploiting parallelism using GPUs (empirically it has been shown to be quadratic in L) (Cuturi, 2013). The transport plan is defined as P = diag(u) K diag(v), where K := e^{-λC}, u and v are found using the iterative algorithm, λ is the entropic regularizer, and diag(·) gives a matrix with its input argument as the diagonal (Cuturi, 2013). We specifically use the regularized objective that has been shown to be effective for training (Cuturi, 2013; Genevay et al., 2018).

Optimal transport has been effectively used in several natural language-based applications, such as computing the similarity between two documents as the transport cost (Kusner et al., 2015; Huang et al., 2016), in measuring distances between point cloud-based representations of words (Frogner et al., 2019), and in learning correspondences between word embedding spaces across domains/languages (Alvarez-Melis and Jaakkola, 2018; Alvarez-Melis et al., 2019).

In our case, p1 represents the mention m and p2 represents m′. The distribution p1 is defined as a point cloud consisting of the character embeddings computed by the LSTM applied to m, i.e., H(m). Formally, it is a set of evenly weighted Dirac Delta functions in R^d, where d is the embedding dimensionality of the character representations. The distribution p2 is defined similarly for m′. Transporting a character ci of m to a character cj of m′ has cost C_{i,j} = S_max - S_{i,j}, where S_max = max_{i′,j′} S_{i′,j′} and S_{i,j} is the inner product of hi and hj. The resulting transport plan is multiplied by the similarity matrix (Section 2.1) and subsequently fed as input to the next component of our model (Section 2.3). Despite being a soft alignment, this step helps mitigate spurious errors by reducing the similarity of character pairs that are not aligned.

2.3 Alignment Score

The transport plan, P̂ ∈ R_+^{L×L}, describes how the characters in m are softly aligned to the characters in m′. We compute the element-wise product of the similarity matrix, S, and the transport plan: S′ = S ∘ P̂. Cells containing high values in S′ correspond to similar character pairs from m and m′ that are also well-aligned. Note the distinction between this alignment and the way in which the transport cost can be used as a distance measure. The alignment is used as a reweighting of the similarity matrix. In this way, the transport plan is closely related to attention-based models (Bahdanau et al., 2015; Parikh et al., 2016; Vaswani et al., 2017; Kim et al., 2017).

Finally, we employ a two-dimensional convolutional neural network (CNN) to score S′ (LeCun et al., 1998). With access to the full matrix S′, the CNN is able to detect multiple, aligned, character subsequences from m and m′ that are highly similar. By combining evidence from multiple–potentially non-contiguous–aligned character subsequences, the CNN detects long-range similarity-preserving edit patterns. This is crucial, for example, in computing a high score for the pair Obama, Barack and Barack Obama. The architecture of the alignment-scoring CNN is a three-layer network with filters of fixed size. A linear model is used to score the final output of the CNN. See Figure 1 for a visual representation of the STANCE architecture.

Training. We train on mention triples, (q, p, n), where there exists an entity for which q and p are both aliases (i.e., (q, p) is a positive example), and there does not exist an entity for which both q and n are aliases (i.e., a negative example).
We use the Bayesian Personalized Ranking objective (Rendle et al., 2009): σ(f(q, p) − f(q, n)).

3 Alias Detection

String similarity is a crucial piece of data integration, search and entity resolution systems, yet there are few large-scale datasets for training and evaluating domain-specific string similarity models. Unlike in coreference resolution, a high quality model should return high scores for mention pairs in which both strings are aliases of (i.e., can refer to) the same entity. For example, the mention Clinton should exhibit high score with both B. Clinton and H. Clinton.

[Figure 3: True positive and negative aliases. A depiction of the source KB with mentions as ovals, entities as squares, and the query in a red oval. Links indicate that an entity is referred to by that mention.]

We construct five datasets for training and evaluating string similarity models derived from four large-scale public knowledge bases, which encompass a diverse range of entity types. The five datasets are summarized below:

1. Wikipedia (W) – We consider pages in Wikipedia to be entities. For each entity, we extract spans of text hyperlinked to that entity's page and use these as aliases. (We used an XML dump of Wikipedia from 2016-03-05 and restrict the entities and hyperlinked spans to come from non-talk, non-list Wikipedia pages.)
2. Wikipedia-People (WP) – The Wikipedia dataset restricted to entities with type person in Freebase (Bollacker et al., 2008).
3. Patent Assignee (A) – Aliases of assignees (mostly organizations, some persons) found by combining entity information (sites.google.com/site/patentdataproject/Home/downloads) with non-disambiguated assignees in patents (www.patentsview.org/).
4. Music Artist (M) – MusicBrainz (Swartz, 2002) contains alternative names for music artists.
5. Diseases (D) – The Comparative Toxicogenomics Database (Davis et al., 2014) stores alternative names for disease entities.

For each dataset, entities are divided into training, development, and testing sets, such that each entity appears in only one set. This partitioning scheme is meant to ensure that performant models capture a general notion of similarity, rather than learning to recognize the aliases of particular entities. Dataset statistics can be found in Table 1.

Most mention-pairs selected uniformly at random are not aliases of the same entity. A model trained on such pairs may learn to always predict "Non-alias." To avoid learning such degenerate models and to avoid test sets for which degenerate models are performant, we carefully construct the training, development and test sets by including a mix of positive and negative examples and by generating negative examples designed to be difficult and practical. We use a mixture of the following five heuristics to generate negative examples:

1. Small Edit Distance – mentions with Levenshtein distance of 1 or 2 from the query;
2. Character Overlap – mentions that share a 4-gram word prefix or suffix with the query;
3. 4-Hop Aliases – first, construct a bipartite graph of mentions and entities where an edge between a mention and an entity denotes that the mention is an alias of the entity. Then, sample a mention that is not an alias of an entity for which the query is also an alias, and whose shortest path to the query requires 4 hops in the graph. Note that all mentions 2 hops from the query are aliases of an entity for which the query is also an alias.
4. 6-Hop Aliases – sample a mention whose shortest path to the query in the bipartite mention-entity graph is 6 hops.
5. Random – randomly sample mentions that are not aliases of the entity for which the query is also an alias. We do this by first sampling an entity and then sampling an alias of that entity uniformly at random.

In all cases, we sample such that entities that appear more frequently in the corpus and entities that have a larger number of aliases are more likely to be sampled (intuitively, these entities are more relevant and more challenging). For the Wikipedia-based datasets, we sample entities proportionally to the number of hyperlink spans linking to the entity. For the Assignee dataset, we estimate entity frequency by the number of patents held by the entity. For the Music Artist dataset, entity frequency is estimated by the number of entity occurrences in the Last-FM-1k dataset (Last.fm; Celma, 2010). For the disease dataset, we do not have frequency information and so sampling is performed uniformly at random.

For each dataset, 300 queries are selected for use in the development set and 4000 queries for use in the test set. Each query is paired with up to 1000 negative examples of each type mentioned above. For training, we also construct datasets using the approaches above for creating negative examples. Figure 3 illustrates how negative (and positive) examples are generated for the query peace agreement (which is used to refer to the entities wiki/Peace_Treaty and wiki/Lancaster_House_Agreement). 4-Hop (negative) aliases include Peace Support Operations and peacekeeping troops, and 6-Hop (negative) examples include UN Peacekeeping and Blue beret. Note that for each type of negative example, any mention that is a true positive alias of the query is excluded from being a negative example, even if it satisfies one of the above heuristics.

4 Experiments

We evaluate STANCE directly via alias detection and also indirectly via cross-document coreference. We also conduct an ablation study in order to understand the contribution of each of STANCE's three components to its overall performance.

4.1 Alias Detection

In the first experiment, we compare STANCE with both classic and learned similarity models in alias detection. Specifically, we compare STANCE to the following approaches:

• Deep Conflation Model (DCM) – state-of-the-art model that encodes each string using a 1-dimensional CNN applied to character n-grams and computes cosine similarity (Gan et al., 2017). We use the available code (github.com/zhegan27/Deep_Conflation_Model).
• Learned Dynamic Time Warping (LDTW) – encode mentions using a bidirectional LSTM and compute similarity via dynamic time warping (DTW). We note equivalence between LDTW and weighted finite state
transducers where the transducer topology is the edit distance (insert, delete, swap) program. Parameters are learned such that DTW distance is meaningful (Cuturi and Blondel, 2017).
• LSTM – represent each mention using the final hidden state of a bidirectional LSTM. Similarity is the dot product of mention representations (i.e., S_{|m|,|m′|}).
• Classic Approaches – Levenshtein Distance (Lev), Jaro-Winkler distance (JW), Longest Common Subsequence (LCS).
• Phonetic Relaxation (Sdx) – transform mentions using the Soundex phonetic mapping and then compute Levenshtein.
• CRF – implementation of the model defined in McCallum et al. (2005) (github.com/dirko/pyhacrf).

Given a query mention, q, and a set of candidate mentions, we use each model to rank candidates by similarity to q. We compute the mean average precision (MAP) and hits at k = {1, 10, 50} of the ranking with respect to a set of ground truth labeled aliases. We report MAP and hits at k averaged over all test queries. The set of candidates for query q includes all corresponding positive and negative examples from the test set (Section 3). For models with hyperparameters, we tune the hyperparameters on the dev set using a grid search over: embedding dimension, learning rate, hidden state dimension, and number of filters (for the CNN). All models were implemented in PyTorch, utilizing SinkhornAutoDiff (github.com/gpeyre/SinkhornAutoDiff), and optimized with Adam (Kingma and Lei Ba, 2015). Our implementation is publicly available at github.com/iesl/stance.

Table 1: Qualities of the 5 created datasets. True positives are correct entity aliases included in the dev or test set.

  Data   Unique Strings   Entity Count   Avg. Num. of Mentions/Ent   Avg. TP/Ent (Dev)   Avg. TP/Ent (Test)
  W      9.32 × 10^6      4.64 × 10^6    2.54 ± 4.65                 125.01 ± 356.45     80.31 ± 317.42
  WP     1.88 × 10^6      1.16 × 10^6    1.83 ± 2.06                 9.82 ± 23.71        10.53 ± 43.35
  A      3.30 × 10^5      2.27 × 10^5    1.501 ± 2.64                30.76 ± 63.46       11.42 ± 25.02
  M      1.83 × 10^6      1.16 × 10^6    1.694 ± 3.23                5.08 ± 13.63        9.20 ± 136.28
  D      7.69 × 10^4      1.19 × 10^4    6.67 ± 9.10                 7.21 ± 10.60        7.46 ± 10.72

Table 2: Mean Average Precision (MAP). STANCE is ours; Lev through LDTW are alias detection baselines; -CNN, -LSTM and -OT are ablations.

  Data   STANCE   Lev    JW     LCS    Sdx    CRF    LSTM   DCM    LDTW   -CNN   -LSTM   -OT
  W      .416     .238   .297   .332   .294   .299   .230   .288   .362   .208   .287    .340
  WP     .594     .246   .283   .397   .308   .515   .328   .352   .413   .234   .411    .538
  A      .906     .720   .850   .622   .733   .780   .790   .782   .903   .797   .838    .910
  M      .597     .296   .328   .293   .354   .319   .399   .509   .396   .250   .403    .475
  D      .417     .206   .244   .191   .259   .162   .247   .437   .347   .230   .252    .360

4.2 Ablation Study

Our second experiment is designed to reveal the purpose of each of STANCE's components. To do so, we compare variants of STANCE with components removed and/or modified. Specifically, we compare the following variants:

• WITHOUT-OT (-OT) – STANCE with LSTM encodings and CNN scoring but without optimal transport-based alignment.
• CNN-TO-LINEAR (-CNN) – STANCE with the CNN scoring model replaced by a linear scoring model. Again, the optimal transport-based alignment is removed.
• LSTM-TO-BINARY (-LSTM) – A binary similarity matrix (S_{ij} = I[m_i = m′_j]) and CNN scoring model, designed to assess the importance of the initial mention encodings. Once more, the optimal transport-based alignment is removed.

We evaluate each model variant using MAP and hits at k on the 5 datasets as in the first experiment. Results can be found in Table 2 and Table 3, respectively. We note that these ablations are equivalent to the models proposed by Traylor et al. (2017).

4.3 Results and Analysis

Table 2 and Table 3 contain the MAP and hits at k (respectively) for each method and dataset (for alias detection and ablation experiments).
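For reference, the ranking metrics reported in the tables can be computed per query as in the following sketch (hypothetical inputs: candidate scores and binary alias labels; MAP and Hits@k are then averaged over all test queries):

```python
import numpy as np

def average_precision(scores, labels):
    order = np.argsort(-np.asarray(scores))
    rel = np.asarray(labels)[order]
    if rel.sum() == 0:
        return 0.0
    precision = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision * rel).sum() / rel.sum())

def hits_at_k(scores, labels, k):
    order = np.argsort(-np.asarray(scores))
    return float(np.asarray(labels)[order][:k].max())   # 1.0 if any true alias in top k

# map_score = np.mean([average_precision(s, l) for s, l in per_query_results])
```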
The results reveal that with the exception of the disease dataset, STANCE (or one of its variants) performs best in terms of both metrics. The results suggest that the optimal transport and CNN-based alignment scoring components of STANCE lead to a more robust model of similarity than inner-product based models, like LSTM and DCM. We hypothesize that using n-grams as opposed to individual character embeddings is advantageous on the disease dataset, leading to DCM's top performance. Surprisingly, -OT is best on the assignee dataset. We hypothesize that this is due to many corporate acronyms.

Table 3: Hits at K. STANCE is ours; Lev through LDTW are alias detection baselines; -CNN, -LSTM and -OT are ablations.

  Data   K    STANCE   Lev    JW     LCS    Sdx    CRF    LSTM   DCM    LDTW   -CNN   -LSTM   -OT
  W      1    .698     .553   .630   .569   .545   .599   .436   .610   .570   .358   .509    .586
  W      10   .599     .380   .471   .450   .381   .464   .383   .440   .525   .355   .444    .515
  W      50   .604     .373   .488   .441   .366   .474   .448   .431   .556   .446   .507    .556
  WP     1    .744     .434   .506   .570   .422   .648   .421   .528   .456   .300   .550    .680
  WP     10   .708     .397   .397   .475   .323   .646   .469   .459   .573   .357   .544    .665
  WP     50   .766     .417   .488   .517   .370   .716   .745   .546   .729   .547   .672    .745
  A      1    .942     .850   .920   .726   .808   .867   .863   .881   .926   .821   .870    .932
  A      10   .932     .805   .896   .738   .746   .840   .870   .841   .947   .879   .904    .950
  A      50   .966     .847   .930   .817   .789   .896   .927   .883   .970   .940   .946    .970
  M      1    .698     .442   .475   .417   .382   .465   .460   .614   .406   .251   .483    .562
  M      10   .690     .369   .386   .398   .328   .371   .538   .623   .532   .388   .525    .581
  M      50   .806     .448   .506   .502   .430   .452   .707   .746   .716   .595   .682    .743
  D      1    .589     .514   .517   .458   .451   .410   .449   .630   .508   .314   .381    .505
  D      10   .521     .266   .300   .285   .260   .232   .329   .499   .455   .334   .349    .475
  D      50   .638     .305   .395   .371   .324   .316   .470   .571   .600   .497   .511    .604

To better understand STANCE's performance and improvement over the baseline methods we provide analysis of particular examples highlighting two advantages of the model: it leverages optimal transport for noise reduction, and it uses its CNN-based scoring function to learn non-standard similarity-preserving string edit patterns that would be difficult to learn with classic edit operations (i.e., insert, delete and substitute).

Noise Reduction. Since the model leverages distributed representations for characters, it often discovers many similarities between the characters in two mentions. For example, Figure 4a shows two strings that are not aliases of the same entity. Despite this, there are many regions of high similarity due to multiple instances of the character bigrams aa, an and en in both mentions. In experiments, we find that this leads the -OT model astray. However, STANCE's optimal transport component constructs a transport plan that contains little alignment between the characters in the mentions, as seen in Figure 4b, which displays the product of the similarity matrix and the transportation plan. Ultimately, this leads STANCE to correctly predict that the two strings are not similar.

[Figure 4: Noise Filtering: OT effectively reduces noise in the similarity matrix even when many character n-grams are common to both mentions (Teen Bahuraaniyaan / Saath Saath Banayenge Ek Aashi). Panels: (a) Similarity Matrix, (b) Noise Filtered.]

Token Permutation. A natural and frequently occurring similarity-preserving edit pattern that occurs in our datasets is token permutation, i.e., the tokens of two aliases of the same entity are ordered differently in each mention. For example, consider the similarity matrix in Figure 5b. The CNN easily learns that two strings may be aliases of the same entity even if one is a token permutation of the other.
This is because it identifies multiple contiguous "diagonal lines" in the similarity matrix. Classic and learned string similarity measures do not learn this relationship easily.

[Figure 5: Token Permutation: STANCE learns that token permutations preserve string similarity (Paul Lieberstein / Lieberstein, Paul). Panels: (a) Similarity, (b) Similarity x Transport.]

4.4 Cross Document Coreference

We evaluate the impact of using STANCE in cross-document coreference on the Twitter at the Grammy's dataset (Dredze et al., 2016). This dataset consists of 4577 mentions of 273 entities in tweets published close in time to the 2013 Grammy awards. We use the same train/dev/test partition, with data provided by the authors (bitbucket.org/mdredze/tgx). The dataset is notable for having significant variation in the spellings of mentions that refer to the same entity. We design a simple cross-document coreference model that ignores the mention context and simply uses STANCE trained on the WikiPPL model. We perform average linkage hierarchical agglomerative clustering using STANCE scores as the linkage function and halt agglomerations according to a threshold (i.e., no agglomerations with linkage below the threshold are performed). We tune the threshold on the development set by finding the value which gives the highest evaluation score (B3 F1). We compare our method to the previously published state-of-the-art methods (Green (Green et al., 2012) and Phylo (Andrews et al., 2014)). Both of these methods report numbers using their name spelling features alone as well as with context features. We find that our approach outperforms both methods (including those using context features) on the test dataset in terms of B3 F1 (Table 4).

Table 4: Cross Document Coreference Results on Twitter at the Grammy's Dataset. Baseline results from (Dredze et al., 2016).

  Method                        Dev B3 F1   Test B3 F1
  Ours (HAC + STANCE)           93.5        82.5
  Green (Spelling Only)         78.0        77.2
  Green (with Context)          88.5        79.7
  Phylo (Spelling Only)         96.9        72.3
  Phylo (with Context)          97.4        72.1
  Phylo (with Context & Time)   97.7        72.3

5 Related Work

Classic string similarity methods based on string alignment include Levenshtein distance, Longest Common Subsequence, Needleman and Wunsch (1970), and Smith and Waterman (1981). Sequence modeling and alignment is a widely studied problem in both theoretical and applied computer science and is too vast to be properly covered entirely. We note that the most relevant prior work focuses on learned string edit models and includes the work of McCallum et al. (2005), which uses a model based on CRFs, and Bilenko and Mooney (2003), which uses an SVM-based model. Andrews et al. (2012, 2014) developed a generative model, which is used for joint cross-document coreference and string edit modeling tasks. Closely related work also appears in the field of computational morphology (Dreyer et al., 2008; Faruqui et al., 2016; Rastogi et al., 2016). Much of this work uses WFSTs with learned parameters. JRC-Names (Steinberger et al., 2011; Ehrmann et al., 2017) is a dataset that stores multilingual aliases of person and organization entities. Similar neural network architectures to our approach have been used for related sequence alignment problems. Santos et al. (2017) uses an RNN to encode toponyms before using a multi-layer perceptron to determine if a pair of toponyms are matching.
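Returning to the coreference experiment of §4.4, the clustering step described there can be sketched with SciPy's average-linkage agglomerative clustering (a minimal sketch with hypothetical inputs; the halting threshold would be whichever value maximizes development B3 F1):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_mentions(sim, threshold):
    """sim: (n, n) symmetric pairwise similarity matrix for the mentions;
    threshold: minimum linkage similarity at which agglomerations still occur."""
    dist = sim.max() - sim                 # turn similarities into distances
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    # stop agglomerating once average-linkage similarity drops below threshold
    return fcluster(Z, t=sim.max() - threshold, criterion="distance")
```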
The Match-SRNN computes a similarity matrix over two sentence representations and uses an RNN applied to the matrix in a manner akin to the classic dynamic program for question answering and IR tasks (Wan et al., 2016). A similar RNN-based alignment approach was also used for phoneme recognition (Graves, 2012). Many previous works have studied character-level models (Kim et al., 2016b; Sutskever et al., 2011). Alias detection also bears similarity to natural language inference tasks, where instead of aligning characters to determine if two mentions refer to the same entity, the task is to align words to determine if two sentences are semantically equivalent (Bowman et al., 2015; Williams et al., 2018). Optimal transport and the related Wasserstein distance are studied in mathematics, optimization, and machine learning (Peyré et al., 2017; Villani, 2008). It has notably been used in the NLP community for modeling the distances between documents (Kusner et al., 2015; Huang et al., 2016) as the cost of transporting embedded representations of the words in one document to the words of the other, in point cloud-based embeddings (Frogner et al., 2019), and in learning word correspondences across languages and domains (Alvarez-Melis and Jaakkola, 2018; Alvarez-Melis et al., 2019). String similarity models are crucial to record linkage, deduplication, and entity linking tasks. These include author coreference (Levin et al., 2012), record linkage in databases (Li et al., 2015), and record linkage systems with impactful downstream applications (Sadosky et al., 2015).

6 Conclusion

In this work, we present STANCE, a neural model of string similarity that is trained end-to-end. The main components of our model are: a character-level bidirectional LSTM for character encoding, a soft alignment mechanism via optimal transport, and a powerful CNN for scoring alignments. We evaluate our model on 5 datasets created from publicly available knowledge bases and demonstrate that it outperforms the baselines in almost all cases. We also show that using STANCE improves upon state-of-the-art performance in cross-document coreference on the Twitter at the Grammy's dataset. We analyze our trained model and show that its optimal transport component helps to filter noise and that it has the capacity to learn non-standard similarity-preserving string edit patterns. In future work, we hope to further study the connections between our optimal transport-based alignment method and methods based on attention. We also hope to consider connections to work on probabilistic latent representations of permutations and matchings (Mena et al., 2018; Linderman et al., 2018). Additionally, we hope to apply STANCE to a wider range of entity resolution tasks, for which string similarity is a component of a model that considers additional features such as the natural language context of the entity mention.

Acknowledgments

We thank Haw-Shiuan Chang and Luke Vilnis for their helpful discussions. We also thank the anonymous reviewers for their constructive feedback.
This work was supported in part by the UMass Amherst Center for Data Science and the Center for Intelligent Information Retrieval, in part by DARPA under agreement number FA8750-13-20020, in part by Amazon Alexa Science, in part by Defense Advanced Research Agency (DARPA) contract number HR0011-15-2-0036, in part by the National Science Foundation (NSF) grant numbers DMR-1534431 and IIS-1514053 and in part by the Chan Zuckerberg Initiative under the project "Scientific Knowledge Base Construction". The work reported here was performed in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.

References

David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. Empirical Methods in Natural Language Processing (EMNLP).

David Alvarez-Melis, Stefanie Jegelka, and Tommi S. Jaakkola. 2019. Towards optimal transport with global invariances. Artificial Intelligence and Statistics (AISTATS).

Nicholas Andrews, Jason Eisner, and Mark Dredze. 2012. Name phylogeny: A generative model of string variation. Empirical Methods in Natural Language Processing (EMNLP).

Nicholas Andrews, Jason Eisner, and Mark Dredze. 2014. Robust entity clustering via phylogenetic inference. Association for Computational Linguistics (ACL).

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR).

Lasse Bergroth, Harri Hakonen, and Timo Raita. 2000. A survey of longest common subsequence algorithms. String Processing and Information Retrieval.

Mikhail Bilenko and Raymond J. Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. Knowledge Discovery and Data Mining (KDD).

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. International Conference on Data Mining (ICDM).

Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. Empirical Methods in Natural Language Processing (EMNLP).

O. Celma. 2010. Music Recommendation and Discovery in the Long Tail. Springer.

William Cohen, Pradeep Ravikumar, and Stephen Fienberg. 2003. A comparison of string metrics for matching names and records. KDD workshop on data cleaning and object consolidation.

Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems (NeurIPS).

Marco Cuturi and Mathieu Blondel. 2017. Soft-dtw: a differentiable loss function for time-series. International Conference on Machine Learning (ICML).

Allan Peter Davis, Cynthia J Grondin, Kelley Lennon-Hopkins, Cynthia Saraceni-Richards, Daniela Sciaky, Benjamin L King, Thomas C Wiegers, and Carolyn J Mattingly. 2014. The comparative toxicogenomics database's 10th year anniversary: update 2015. Nucleic acids research, 43(D1):D914–D920.

Mark Dredze, Nicholas Andrews, and Jay DeYoung. 2016. Twitter at the grammys: A social media corpus for entity linking and disambiguation. International Workshop on Natural Language Processing for Social Media.
Markus Dreyer, Jason R Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. Empirical Methods in Natural Language Processing (EMNLP). Maud Ehrmann, Guillaume Jacquet, and Ralf Steinberger. 2017. Jrc-names: Multilingual entity name variants and titles as linked data. Semantic Web. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Charlie Frogner, Farzaneh Mirzazadeh, and Justin Solomon. 2019. Learning entropic wasserstein embeddings. International Conference on Learning Representations (ICLR). Zhe Gan, P. D. Singh, Ameet Joshi, Xiaodong He, Jianshu Chen, Jianfeng Gao, and Li Deng. 2017. Character-level deep conflation for business data analytics. International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Aude Genevay, Gabriel Peyré, and Marco Cuturi. 2018. Learning generative models with sinkhorn divergences. AISTATS. Aristides Gionis, Piotr Indyk, and Rajeev Motwani. 1999. Similarity search in high dimensions via hashing. Very Large Data Bases (VLDB). Alex Graves. 2012. Sequence transduction with recurrent neural networks. Representation Learning Worksop, ICML. Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks. Spence Green, Nicholas Andrews, Matthew R Gormley, Mark Dredze, and Christopher D Manning. 2012. Entity clustering across languages. North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation. Gao Huang, Chuan Guo, Matt J Kusner, Yu Sun, Fei Sha, and Kilian Q Weinberger. 2016. Supervised word mover’s distance. NeurIPS. Kunho Kim, Madian Khabsa, and C Lee Giles. 2016a. Random forest dbscan for uspto inventor name disambiguation. Joint Conference on Digital Library (JCDL). Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. International Conference on Learning Representations (ICLR). Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016b. Character-aware neural language models. Association for the Advancement of Artificial Intelligence (AAAI). Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: a method for stochastic optimization. International Conference on Learning Representations (ICLR). Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. International Conference on Machine Learning (ICML). Last.fm. https://www.last.fm/. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE. Michael Levin, Stefan Krawczyk, Steven Bethard, and Dan Jurafsky. 2012. Citation-based bootstrapping for large-scale author disambiguation. Journal of the American Society for Information Science and Technology (JASIST). Pei Li, Xin Luna Dong, Songtao Guo, Andrea Maurino, and Divesh Srivastava. 2015. Robust group linkage. The Web Conference (WWW). Scott Linderman, Gonzalo Mena, Hal Cooper, Liam Paninski, and John Cunningham. 2018. Reparameterizing the birkhoff polytope for variational permutation inference. Artificial Intelligence and Statistics (AISTATS). 
5917 Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2005. A conditional random field for discriminatively-trained finite-state string edit distance. Uncertainty in Artificial Intelligence (UAI). Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. 2018. Learning latent permutations with gumbel-sinkhorn networks. International Conference on Learning Representations (ICLR). Saul B Needleman and Christian D Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology. Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. Empirical Methods in Natural Language Processing (EMNLP). Gabriel Peyré, Marco Cuturi, et al. 2017. Computational optimal transport. Technical report. Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. Bpr: Bayesian personalized ranking from implicit feedback. Uncertainty in Artificial Intelligence (UAI). Peter Sadosky, Anshumali Shrivastava, Megan Price, and Rebecca C Steorts. 2015. Blocking methods applied to casualty records from the syrian conflict. arXiv preprint arXiv:1510.07714. Rui Santos, Patricia Murrieta-Flores, Pável Calado, and Bruno Martins. 2017. Toponym matching through deep neural networks. International Journal of Geographical Information Science. Temple F Smith and Michael S Waterman. 1981. Identification of common molecular subsequences. Journal of molecular biology. Ralf Steinberger, Bruno Pouliquen, Mijail Kabadjov, Jenya Belyaeva, and Erik van der Goot. 2011. Jrc-names: A freely available, highly multilingual named entity resource. In International Conference Recent Advances in Natural Language Processing. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. International Conference on Machine Learning (ICML). Aaron Swartz. 2002. Musicbrainz: A semantic web service. IEEE Intelligent Systems. Aaron Traylor, Nicholas Monath, Rajarshi Das, and Andrew McCallum. 2017. Learning string alignments for entity aliases. Workshop on Automated Knowledge Base Construction (AKBC). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems (NeurIPS). Samuel L. Ventura, Rebecca Nugent, and Erica R.H. Fuchs. 2015. Seeing the non-stars: (some) sources of bias in past disambiguation approaches and a new public tool leveraging labeled records. Research Policy. Cédric Villani. 2008. Optimal transport: old and new. Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. 2016. Match-srnn: Modeling the recursive matching structure with spatial rnn. International Joint Conference on Artificial Intelligence (IJCAI). Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). William E Winkler. 1999. The state of record linkage and current research problems. Statistical Research Division, US Census Bureau.
2019
592
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5918–5925 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5918 The Referential Reader: A Recurrent Entity Network for Anaphora Resolution Fei Liu ∗ The University of Melbourne Victoria, Australia Luke Zettlemoyer Facebook AI Research University of Washington Seattle, USA Jacob Eisenstein Facebook AI Research Seattle, USA Abstract We present a new architecture for storing and accessing entity mentions during online text processing. While reading the text, entity references are identified, and may be stored by either updating or overwriting a cell in a fixedlength memory. The update operation implies coreference with the other mentions that are stored in the same cell; the overwrite operation causes these mentions to be forgotten. By encoding the memory operations as differentiable gates, it is possible to train the model end-to-end, using both a supervised anaphora resolution objective as well as a supplementary language modeling objective. Evaluation on a dataset of pronoun-name anaphora demonstrates strong performance with purely incremental text processing. 1 Introduction Reference resolution is fundamental to language understanding. Current state-of-the-art systems employ the mention-pair model, in which a classifier is applied to all pairs of spans (e.g., Lee et al., 2017). This approach is expensive in both computation and labeled data, and it is also cognitively implausible: human readers interpret text in a nearly online fashion (Tanenhaus et al., 1995). We present a new method for reference resolution, which reads the text left-to-right while storing entities in a fixed-size working memory (Figure 1). As each token is encountered, the reader must decide whether to: (a) link the token to an existing memory, thereby creating a coreference link, (b) overwrite an existing memory and store a new entity, or (c) disregard the token and move ahead. As memories are reused, their salience increases, making them less likely to be overwritten. This online model for coreference resolution is based on the memory network architecture (We∗Work carried out as an intern at Facebook AI Research M(1) M(2) Ismael told Captain Ahab he saw Moby-Dick o(1) 1 u(1) 5 o(2) 3 u(2) 4 o(2) 7 self link coreferential not coreferential  Figure 1: A referential reader with two memory cells. Overwrite and update are indicated by o(i) t and u(i) t ; in practice, these operations are continuous gates. Thickness and color intensity of edges between memory cells at neighboring steps indicate memory salience;  indicates an overwrite. ston et al., 2015), in which memory operations are differentiable, enabling end-to-end training from gold anaphora resolution data. Furthermore, the memory can be combined with a recurrent hidden state, enabling prediction of the next word. This makes it possible to train the model from unlabeled data using a language modeling objective. To summarize, we present a model that processes the text incrementally, resolving references on the fly (Schlangen et al., 2009). The model yields promising results on the GAP dataset of pronoun-name references.1 2 Model For a given document consisting of a sequence of tokens {wt}T t=1, we represent text at two levels: • Tokens: represented as {xt}T t=1, where the vector xt ∈RDx is computed from any token-level encoder. 
• Entities: represented by a fixed-length memory Mt = {(k(i) t , v(i) t , s(i) t )}N i=1, where each memory is a tuple of a key k(i) t ∈RDk, a 1Code available at: https://github.com/ liufly/refreader 5919 hidden state ht−1 ht ht+1 memory unit Mt−1 Mt Mt+1 pre-recurrent ˜ht−1 ˜ht ˜ht+1 input embeddings xt−1 xt xt+1 Figure 2: Overview of the model architecture. value v(i) t ∈RDv, and a salience s(i) t ∈[0, 1]. There are two components to the model: the memory unit, which stores and tracks the states of the entities in the text; and the recurrent unit, which controls the memory via a set of gates. An overview is presented in Figure 2. 2.1 Recurrent Unit The recurrent unit is inspired by the CoreferentialGRU, in which the current hidden state of a gated recurrent unit (GRU; Chung et al., 2014) is combined with the state at the time of the most recent mention of the current entity (Dhingra et al., 2018). However, instead of relying on the coreferential structure to construct a dynamic computational graph, we use an external memory unit to keep track of previously mentioned entities and let the model learn to decide what to store in each cell. The memory state is summarized by the weighted sum over values: mt = PN i=1 s(i)v(i) t . The current hidden state and the input are combined into a pre-recurrent state ˜ht = tanh(W ht−1 + Uxt), which is used to control the memory operations; the matrices W and U are trained parameters. To compute the next hidden state ht, we perform a recurrent update: ht = GRU(xt, (1 −ct) × ht−1 + ct × mt) (1) where ct = min(σ(Wc˜ht + bc), P i s(i) t ) is a gate that measures the importance of the memory network to the current token. This gate is a sigmoid function of the pre-recurrent state, clipped by the sum of memory saliences. This ensures that the memory network is used only when at least some memories are salient. 2.2 Memory Unit The memory gates are a collection of scalars {(u(i) t , o(i) t )}N i=1, indicating updates and overwrites to cell i at token wt. To compute these gates, we first determine whether wt is an entity mention, using a sigmoid-activated gate et = σ(φe · ˜ht), where φe ∈RDh is a learnable vector. We next decide whether wt refers to a previously mentioned entity: rt = σ(φr · ˜ht) × et, where φr ∈RDh is a learnable vector. Updating existing entities. If wt is a referential entity mention (rt ≈1), it may refer to an entity in the memory. To compute the compatibility between wt and each memory, we first summarize the current state as a query vector, qt = fq(˜ht), where fq is a two-layer feed-forward network. The query vector is then combined with the memory keys and the reference gate to obtain attention scores, α(i) t = rt × SoftMax(k(i) t−1 · qt + b), where the softmax is computed over all cells i, and b is a learnable bias term, inversely proportional to the likelihood of introducing a new entity. The update gate is then set equal to the query match α(i) t , clipped by the salience, u(i) t = min(α(i) t , 2s(i) t−1). The upper bound of 2s(i) t−1 ensures that an update can at most triple the salience of a memory. Storing new entities. Overwrite operations are used to store new entities. The total amount to overwrite is ˜ot = et −PN i=1 u(i) t , which is the difference between the entity gate and the sum of the update gates. We prefer to overwrite the memory with the lowest salience. 
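Before turning to the overwrite mechanism, the gate computations introduced so far can be collected into a short sketch. The PyTorch code below is an illustrative rendering of the pre-recurrent state, the memory-aware GRU update, and the entity/reference/update gates; the class name, layer sizes, and the assumption that the value size equals the hidden size are ours, not the released implementation.

```python
# Illustrative sketch (not the authors' code) of the gate computations in
# Sections 2.1-2.2, processing one token of one document at a time.
import torch
import torch.nn as nn

class RefReaderGates(nn.Module):
    def __init__(self, d_x=300, d_h=300, d_k=16, n_cells=2):
        super().__init__()
        self.W = nn.Linear(d_h, d_h, bias=False)     # pre-recurrent: W h_{t-1}
        self.U = nn.Linear(d_x, d_h, bias=False)     # pre-recurrent: U x_t
        self.gru = nn.GRUCell(d_x, d_h)              # recurrent unit
        self.c_gate = nn.Linear(d_h, 1)              # memory-importance gate c_t
        self.phi_e = nn.Parameter(torch.randn(d_h))  # entity gate parameters
        self.phi_r = nn.Parameter(torch.randn(d_h))  # reference gate parameters
        self.f_q = nn.Sequential(nn.Linear(d_h, d_k), nn.Tanh(),
                                 nn.Linear(d_k, d_k))  # two-layer query network
        self.b = nn.Parameter(torch.zeros(1))        # new-entity bias

    def forward(self, x_t, h_prev, keys, values, salience):
        # keys: (N, d_k), values: (N, d_v) with d_v == d_h assumed, salience: (N,)
        h_tilde = torch.tanh(self.W(h_prev) + self.U(x_t))        # pre-recurrent state
        m_t = (salience.unsqueeze(1) * values).sum(0)             # memory summary
        c_t = torch.minimum(torch.sigmoid(self.c_gate(h_tilde)), salience.sum())
        h_t = self.gru(x_t.unsqueeze(0),
                       ((1 - c_t) * h_prev + c_t * m_t).unsqueeze(0)).squeeze(0)

        e_t = torch.sigmoid(self.phi_e @ h_tilde)                 # is w_t a mention?
        r_t = torch.sigmoid(self.phi_r @ h_tilde) * e_t           # ...a referential one?
        q_t = self.f_q(h_tilde)                                   # query vector
        alpha = r_t * torch.softmax(keys @ q_t + self.b, dim=0)   # attention over cells
        u_t = torch.minimum(alpha, 2 * salience)                  # update gates
        o_total = e_t - u_t.sum()                                 # mass left for overwrites
        return h_t, h_tilde, e_t, u_t, o_total
```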
This decision is made differentiable using the Gumbel-softmax distribution (GSM; Jang et al., 2017), o(i) t = ˜ot × GSM(i)(−st−1, τ) and st = {s(i) t }N i=1.2 Memory salience. To the extent that each memory is not updated or overwritten, it is copied along to the next timestep. The weight of this copy operation is: r(i) t = 1 −u(i) t −o(i) t . The salience decays exponentially, λt =(et × γe + (1 −et) × γn) (2) s(i) t =λt × r(i) t × s(i) t−1 + u(i) t + o(i) t , (3) where γe and γn represent the salience decay rate upon seeing an entity or non-entity.3 2Here τ is the “temperature” of the distribution, which is gradually decreased over the training period, until the distribution approaches a one-hot vector indicating the argmax. 3We set γe = exp(log(0.5)/ℓe) with ℓe = 4 denoting the entity half-life, which is the number of entity mentions before the salience decreases by half. The non-entity halflife γn is computed analogously, with ℓn = 30. 5920 Memory state. To update the memory states, we first transform the pre-recurrent state ˜ht into the memory domain, obtaining overwrite candidates for the keys and values, ˜kt = fk(˜ht) and ˜vt = fv(˜ht), where fk is a two-layer residual network with tanh nonlinearities, and fv is a linear projection with a tanh non-linearity. Update candidates are computed by GRU recurrence with the overwrite candidate as the input. This yields the state update, k(i) t = u(i) t GRUk(k(i) t−1, ˜kt) + o(i) t ˜kt + r(i) t k(i) t−1 v(i) t = u(i) t GRUv(v(i) t−1, ˜vt) + o(i) t ˜vt + r(i) t v(i) t−1. 2.3 Coreference Chains To compute the probability of coreference between the mentions wt1 and wt2, we first compute the probability that each cell i refers to the same entity at both of those times, ω(i) t1,t2 = t2 Y t=t1+1 (1 −o(i) t ) (4) Furthermore, the probability that mention t1 is stored in memory i is u(i) t1 + o(i) t1 . The probability that two mentions corefer is then the sum over memory cells, ˆψt1,t2 = N X i=1 (u(i) t1 + o(i) t1 ) × u(i) t2 × ω(i) t1,t2. (5) 2.4 Training The coreference probability defined in Equation 5 is a differentiable function of the gates, which in turn are computed from the inputs w1, w2, . . . wT . We can therefore train the entire network end-toend from a cross-entropy objective, where a loss is incurred for incorrect decisions on the level of token pairs. Specifically, we set yi,j = 1 when wi and wj corefer (coreferential links), and also when both wi and wj are part of the same mention span (self links). The coreference loss is then the cross-entropy PT i=1 PT j=i+1 H( ˆψi,j, yi,j). Because the hidden state ht is computed recurrently from w1:t, the reader can also be trained from a language modeling objective, even when coreference annotations are unavailable. Word probabilities P(wt+1 | ht) are computed by projecting the hidden state ht by a matrix of output embeddings, and applying the softmax operation. 3 Experiments As an evaluation of the ability of the referential reader to correctly track entity references in text, we evaluate against the GAP dataset, recently introduced by Webster et al. (2018). Each instance consists of: (1) a sequence of tokens w1, . . . , wT extracted from Wikipedia biographical pages; (2) two person names (A and B, whose token index spans are denoted sA and sB); (3) a single-token pronoun (P with the token index sP ); and (4) two binary labels (yA and yB) indicating whether P is referring to A or B. Language modeling. Given the limited size of GAP, it is difficult to learn a strong recurrent model. 
We therefore consider the task of language modeling as a pre-training step. We make use of the page text of the original Wikipedia articles from GAP, the URLs to which are included as part of the data release. This results in a corpus of 3.8 million tokens, which is used for pre-training. The reader is free to use the memory to improve its language modeling performance, but it receives no supervision on the coreference links that might be imputed on this unlabeled data. Prediction. At test time, we make coreference predictions using the procedure defined in § 2.3. Following Webster et al. (2018), we do not require exact string match for mention detection: if the selected candidate is a substring of the gold span, we consider it as a predicted coreferential link between the pronoun and person name. Concretely, we focus on the token index sP of the pronoun and predict the positive coreferential relation of the pronoun P and person name A if any (in the span of sA) of ˆψsA,sP (if sA < sP ) or ˆψsP ,sA (otherwise) is greater than a threshold value (selected on the validation set).4 Evaluation. Performance is measured on the GAP test set, using the official evaluation script. We report the overall F1, as well as the scores by gender (Masculine: F M 1 and Feminine: F F 1 ), and the bias (the ratio of F F 1 to F M 1 : F F 1 F M 1 ). Systems. We benchmark our model (RefReader) against a collection of strong baselines presented in the work of Webster et al. (2018): (1) a state-ofthe-art mention-pair coreference resolution (Lee 4As required by Webster et al. (2018), the model is responsible for detecting mentions; only the scoring function accesses labeled spans. 5921 F M 1 F F 1 F F 1 F M 1 F1 Clark and Manning (2015)† 53.9 52.8 0.98 53.3 Lee et al. (2017)† 67.7 60.0 0.89 64.0 Lee et al. (2017), re-trained 67.8 66.3 0.98 67.0 Parallelism† 69.4 64.4 0.93 66.9 Parallelism+URL† 72.3 68.8 0.95 70.6 RefReader, LM objective‡ 61.6 60.5 0.98 61.1 RefReader, coref objective‡ 69.6 68.1 0.98 68.9 RefReader, LM + coref‡ 72.8 71.4 0.98 72.1 RefReader, coref + BERT⋆ 80.3 77.4 0.96 78.8 Table 1: GAP test set performance. †: reported in Webster et al. (2018); ‡: strictly incremental processing; ⋆: average over 5 runs with different random seeds. et al., 2017); (2) a version of (1) that is retrained on GAP; (3) a rule-based system based on syntactic parallelism (Webster et al., 2018); (4) a domainspecific variant of (3) that incorporates the lexical overlap between each candidate and the title of the original Wikipedia page (Webster et al., 2018). We evaluate a configuration of RefReader that uses two memory cells; other details are in the supplement (Appendix A). Results. As shown in Table 1, RefReader achieves state-of-the-art performance, outperforming strong pretrained and retrained systems (e.g., Lee et al., 2017), as well as domainspecific heuristics (Parellelism+URL). Language model pretraining yields an absolute gain of 3.2 in F1. This demonstrates the ability of RefReader to leverage unlabeled text, which is a distinctive feature in comparison with prior work. When training is carried out in the unsupervised setting (with the language modeling objective only), the model is still capable of learning the latent coreferential structure between pronouns and names to some extent, outperforming a supervised coreference system that gives competitive results on OntoNotes (Clark and Manning, 2015). 
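The scoring rule behind these predictions (Equations 4–5 together with the thresholding described under Prediction) can be written down compactly. The NumPy sketch below is illustrative only: the gate arrays are assumed to come from a trained reader, and the function names and the inclusive span convention are our own.

```python
# Sketch of the coreference score of Eqs. 4-5 and the thresholded GAP
# prediction rule; u and o are (T, N) arrays of update/overwrite gates.
import numpy as np

def coref_score(u, o, t1, t2):
    """psi_hat(t1, t2) for mention positions t1 < t2."""
    # Eq. 4: probability that cell i is not overwritten between t1+1 and t2.
    omega = np.prod(1.0 - o[t1 + 1:t2 + 1], axis=0)          # (N,)
    # Eq. 5: mention t1 is stored in cell i, mention t2 updates the same cell,
    # and the cell is not overwritten in between; sum over cells.
    return float(np.sum((u[t1] + o[t1]) * u[t2] * omega))

def predict_gap(u, o, pronoun_idx, name_span, threshold):
    """Positive link if any token of the gold name span scores above threshold."""
    scores = []
    for s in range(name_span[0], name_span[1] + 1):          # inclusive span
        t1, t2 = (s, pronoun_idx) if s < pronoun_idx else (pronoun_idx, s)
        scores.append(coref_score(u, o, t1, t2))
    return max(scores) > threshold
```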
We also test a combination of RefReader and BERT (Devlin et al., 2019), using BERT’s contextualized word embeddings as base features xt (concatenation of the top 4 layers), which yields substantial improvements in accuracy. While this model still resolves references incrementally, it cannot be said to be purely incremental, because BERT uses “future” information to build its contextualized embeddings.5 Note that the gender 5Future work may explore the combination of RefReader bias increases slightly, possibly due to bias in the data used to train BERT. GAP examples are short, containing just a few entity mentions. To test the applicability of our method to longer instances, we produce an alternative test set in which pairs of GAP instances are concatenated together, doubling the average number of tokens and entity mentions. Even with a memory size of two, performance drops to F1 = 70.2 (from 72.1 on the original test set). This demonstrates that the model is capable of reusing memory cells when the number of entities is larger than the size of the memory. We also test a configuration of RefReader with four memory cells, and observe that performance on the original test set decreases only slightly, to F1 = 71.4 (against RefReader LM + coref). Case study and visualization. Figure 3 gives an example of the behavior of the referential reader, as applied to a concatenation of two instances from GAP.6 The top panel shows the salience of each entity as each token is consumed, with the two memory cells distinguished by color and marker. The figure elides long spans of tokens whose gate activations are nearly zero. These tokens are indicated in the x-axis by ellipsis; the corresponding decrease in salience is larger, because it represents a longer span of text. The bottom panel shows the gate activations for each token, with memory cells again distinguished by color and marker, and operations distinguished by line style. The gold tokenentity assignments are indicated with color and superscript. The reader essentially ignores the first name, Braylon Edwards, making a very weak overwrite to memory 0 (m0). It then makes a large overwrite to m0 on the pronoun his. When encountering the token Avant, the reader makes an update to the same memory cell, creating a cataphoric link between Avant and his. The name Padbury appears much later (as indicated by the ellipsis), and at this point, m0 has lower salience than m1. For this reason, the reader chooses to overwrite m0 with this name. The reader ignores the name Cathy Vespers and overwrites m1 with the adverb coincidentally. On encountering the final pronoun she, the reader is conflicted, and makes a partial and large-scale pretrained incremental language models (e.g., Radford et al., 2019). 6For an example involving multi-token spans, see Appendix B. 5922 0.4 0.6 0.8 1.0 salience 0 1 ... behind Braylon2 Edwards2 . During his1 sophomore season in 2003 , Avant1 ... Padbury3 appeared in Piers Haggard 's cult British horror film ... as the unfortunate Cathy4 Vespers4 ... Coincidentally , she3 appeared ... 0.0 0.2 0.4 0.6 0.8 1.0 gate update overwrite Figure 3: An example of the application the referential reader to a concatenation of two instances from GAP. The ground truth is indicated by the color of each token on the x-axis as well as the superscript. overwrite to m0, a partial update (indicating coreference with Padbury), and a weaker update to m1. 
If the update to m0 is above the threshold, then the reader may receive credit for this coreference edge, which would otherwise be scored as a false negative. The reader ignores the names Braylon Edwards, Piers Haggard, and Cathy Vespers, leaving them out of the memory. Edwards and Vespers appear in prepositional phrases, while Haggard is a possessive determiner of the object of a prepositional phrase. Centering theory argues that these syntactic positions have low salience in comparison with subject and object position (Grosz et al., 1995). It is possible that the reader has learned this principle, and that this is why it chooses not to store these names in memory. However, the reader also learns from the GAP supervision that pronouns are important, and therefore stores the pronoun his even though it is also a possessive determiner. 4 Related Work Memory networks provide a general architecture for online updates to a set of distinct memories (Weston et al., 2015; Sukhbaatar et al., 2015). The link between memory networks and incremental text processing was emphasized by Cheng et al. (2016). Henaff et al. (2017) used memories to track the states of multiple entities in a text, but they predefined the alignment of entities to memories, rather than learning to align entities with memories using gates. The incorporation of entities into language models has also been explored in prior work (Yang et al., 2017; Kobayashi et al., 2017); similarly, Dhingra et al. (2018) augment the gated recurrent unit (GRU) architecture with additional edges between coreferent mentions. In general, this line of prior work assumes that coreference information is available at test time (e.g., from a coreference resolution system), rather than determining coreference in an online fashion. Ji et al. (2017) propose a generative entity-aware language model that incorporates coreference as a discrete latent variable. For this reason, importance sampling is required for inference, and the model cannot be trained on unlabeled data. 5 Conclusion This paper demonstrates the viability of incremental reference resolution, using an end-to-end differentiable memory network. This enables semisupervised learning from a language modeling objective, which substantially improves performance. A key question for future work is the performance on longer texts, such as the full-length news articles encountered in OntoNotes. Another direction is to further explore semi-supervised learning, by reducing the amount of training data and incorporating linguistically-motivated constraints based on morphosyntactic features. Acknowledgments We would like to thank the anonymous reviewers for their valuable feedback, Yinhan Liu, Abdelrahman Mohamed, Omer Levy, Kellie Webster, Vera Axelrod, Mandar Joshi, Trevor Cohn and Timothy Baldwin for their help and comments. 5923 References Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 551–561. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proceedings of the NIPS 2014 Deep Learning and Representation Learning Workshop. Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1405–1415. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 42–48. Association for Computational Linguistics. Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational linguistics, 21(2):203–225. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2017. Tracking the world state with recurrent entity networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In Proceedings of the 5th International Conference on Learning Representations. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1830– 1839. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Sosuke Kobayashi, Naoaki Okazaki, and Kentaro Inui. 2017. A neural language model for dynamically representing the meanings of unknown words and entities in a discourse. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 473– 483, Taipei, Taiwan. Asian Federation of Natural Language Processing. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. https://d4mucfpksywv.cloudfront. net/better-language-models/ language-models.pdf. David Schlangen, Timo Baumann, and Michaela Atterer. 2009. Incremental reference resolution: The task, metrics for evaluation, and a Bayesian filtering model that is sensitive to disfluencies. In Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 30–37. 
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Proceedings of Advances in Neural Information Processing Systems, pages 2440–2448, Montr´eal, Canada. Michael K Tanenhaus, Michael J Spivey-Knowlton, Kathleen M Eberhard, and Julie C Sedivy. 1995. Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217):1632–1634. Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. Transactions of the Association for Computational Linguistics, 6:605–617. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, USA. Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1850–1859. 5924 A Supplemental information Model configuration. Training is carried out on the development set of GAP with the Adam optimizer (Kingma and Ba, 2014) and a learning rate of 0.001. Early stopping is applied based on the performance on the validation set. We use the following hyperparameters: • embedding size Dx = 300; • memory key size Dk = 16 (32 with BERT) and value size Dv = 300; the hidden layers in the memory key/value updates fk and fv are also set to 16 (32 with BERT) and 300 respectively; • number of memory cells N = 2; • pre-recurrent and hidden state sizes Dh = 300; • salience half-life for words and entity mentions are 30 and 4 respectively; • Gumbel softmax starts at temperature τ = 1.0 with an exponential decay rate of 0.5 applied every 10 epochs; • dropout is applied to the embedding layer, the pre-recurrent state ˜ht, and the GRU hidden state ht, with a rate of 0.5; • self and coreferential links are weighted differently in the coreference loss cross-entropy in § 2.4 with 0.1 and 5.0 and negative coreferential links weighted higher than positive ones with a ratio of 10:1 to penalize false positive predictions. For the RefReader model trained only on coreference annotations, the base word embeddings (xt) are fixed to the pretrained GloVe embeddings (Pennington et al., 2014). In the RefReader models that include language model pretraining, embeddings are learned on the language modeling task. Language modeling pre-training is carried out using the same configuration as above; the embedding update and early stopping are based on perplexity on a validation set. B Multi-token Span Example In the example shown in Figure 4, the system must handle multi-token spans Paul Sabatier and Wilhelm Normann. It does this by overwriting on the first token, and updating on the second token, indicating that both tokens are part of the name of a single entity. The reader also correctly handles an example of cataphora (During his tenure, Smith voted . . .). It stores Paul Sabatier in the same memory as Smith, but overwrites that memory so as not to create a coreference link. The reader reuses memory one for both entities because in the intervening text, memory zero acquired more salience. Finally, the model perceives some ambiguity on the pronoun he at the end: it narrowly favors coreference with Normann, but assigns some probability to the creation of a new entity. 5925 0.00 0.25 0.50 0.75 1.00 salience 0 1 ... following Richard2 Carroll2 , who was elected ... During his1 tenure , Smith1 voted with ... 
the French chemist Paul4 Sabatier4 in 1897, and in 1901 the German chemist Wilhelm3 Normann3 developed the hydrogenation of fats, which he3 patented ... Figure 4: Another example of the referential reader, as applied to a concatenation of two instances from GAP. Again, the ground truth is indicated by the color of each token on the x-axis as well as the superscript.
2019
593
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5926–5930 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5926 Interpolated Spectral NGram Language Models Ariadna Quattoni and Xavier Carreras dMetrics Brooklyn, NY 11211 {ariadna.quattoni,xavier.carreras}@dmetrics.com Abstract Spectral models for learning weighted nondeterministic automata have nice theoretical and algorithmic properties. Despite this, it has been challenging to obtain competitive results in language modeling tasks, for two main reasons. First, in order to capture long-range dependencies of the data, the method must use statistics from long substrings, which results in very large matrices that are difficult to decompose. The second is that the loss function behind spectral learning, based on moment matching, differs from the probabilistic metrics used to evaluate language models. In this work we employ a technique for scaling up spectral learning, and use interpolated predictions that are optimized to maximize perplexity. Our experiments in character-based language modeling show that our method matches the performance of stateof-the-art ngram models, while being very fast to train. 1 Introduction In the recent years we have witnessed the development of spectral methods based on matrix decompositions to learn Probabilistic Non-deterministic Finite Automata (PNFA) and related models (Hsu et al., 2009, 2012; Bailly et al., 2009; Balle et al., 2011; Cohen et al., 2012; Balle et al., 2014). Essentially, PNFA can be regarded as recurrent neural networks where the function that predicts the dynamic state representation from previous states is linear. Despite the expressiveness of PNFA and the strong theoretical properties of spectral learning algorithms, it has been challenging to get competitive results on language modeling tasks. We argue and confirm with our experiments that there are two main reasons why using spectral methods for language modeling is challenging. The first reason is a scalability problem to handle long range dependencies. The spectral method is based on computing a Hankel matrix that contains statistics of expectations over substrings generated by the target language. If we want to incorporate long-range dependencies we need to consider long substrings. A consequence of this is that the Hankel matrix can become too large to make it practical to perform algebraic decompositions. To address this problem we use the basis selection technique by Quattoni et al. (2017) to scale spectral learning and model long range dependencies. Our experiments confirm that modeling long range dependencies is essential to obtain competitive language models. The second limitation of classical spectral methods when applied to language modeling is that the loss function that the learning algorithm attempts to minimize is not aligned with the loss function that is used to evaluate model performance. Spectral methods minimize the ℓ2 distance on the prediction of expectations of substrings up to a certain length (see Balle et al. (2012) for a formulation of spectral learning in terms of loss minimization), while language models are usually evaluated using conditional perplexity. There have been some proposals on generalizing the fundamental ideas of spectral learning to other loss functions (Parikh et al., 2014; Quattoni et al., 2014). 
However, while these approaches are promising they have the downside that they lead to relatively expensive iterative convex optimizations and it is still a challenge to scale them to model long-range dependencies. In this paper we propose a simpler yet effective alternative to the iterative optimization. We use the classical spectral method based on low-rank matrix decomposition to learn a PNFA that computes substring expectations. Then we use these expectations as features in an interpolated ngram model and we learn the weights of the interpolation so as to maximize perplexity. This interpo5927 lation step is iterative, but it is a simple and very efficient convex optimization: the weights of the interpolation can be trained in a few seconds or minutes at most. The refinement step allows us to leverage all the moments computed by the learned PNFA and to align the spectral method with the perplexity evaluation metric. Our experiments on character-level language model show that: (1) modeling long range dependencies is important; and (2) with the simple interpolation step we can obtain competitive results. Our perplexity results are significantly better than feed-forward NNs, as good or better than sophisticated interpolation techniques such as Kneser-Ney estimation, and close to the performance of RNNs on two datasets. The main contribution of our work consists on combining two simple ideas, i.e. incorporating long-range dependencies via basis selection of long substring moments (Section 2), and refining the predictions of the PNFA with an iterative interpolation step (Section 3). Our experiments show that these two simple ideas bring us one step closer to making spectral methods for PNFA reach state-of-the-art performance on language modeling tasks (Section 4). The advantage of these methods over other popular approaches to language modeling is their simplicity and the fact that they rely on efficient convex optimizations for training the model parameters. Furthermore, PNFA are probabilistic models for which efficient inference methods can be easily derived for computing all sorts of expectations. These expectations could then be used as features to learn predictive interpolation models. In this paper we present experiments with one type of expectation and interpolation model that illustrates the potential of this approach. 2 Spectral Language Models 2.1 Probabilistic Non-Deterministic Finite Automata We start describing the general class of Weighted Automata over strings. Let x = x1 · · · xn be a sequence of length n over some finite alphabet Σ. We denote as Σ⋆the set of all finite sequences, and we use it as a domain of our functions. We use x · x′ to denote the concatenation of two strings x and x′. A Non-Deterministic Weighted Automaton (WA) with k states is defined as a tuple: A = ⟨α0, α∞, {Aσ}σ∈Σ⟩with: α0, α∞∈Rk are the initial and final weight vectors; and Aσ ∈Rk×k are the transition matrices associated to each symbol σ ∈Σ. The function fA : Σ⋆→R realized by an WA A is defined as: fA(x) = α⊤ 0 Ax1 · · · Axnα∞ . (1) Probabilistic Non-Deterministic Finite Automata (PNFA) are WA that compute a probabilistic distribution over strings. One can easily transform a PNFA into another automata that computes substring expectations via simple transformations of the model parameters, and the reverse is also true, see Balle et al. (2014) for details. In this paper we will directly learn and use automata that compute expectations. 
With these expectations we will calculate the conditional probabilities of a language model1: Pr[σ | x1:n] = fA(x1:n · σ) P σ′∈Σ fA(x1:n · σ′) (2) Here, n is the length of the left context, analogous to the order of an NGram model, but we compute the expectations not from counts but from a PNFA. 2.2 The Spectral Method We now give a brief description of the spectral method for estimating a PNFA that computes expectations over substrings. We only provide a higher-level description of the method; for a complete derivation and the theory justifying the algorithm we refer the reader to the works by Hsu et al. (2009) and Balle et al. (2014). Assume a distribution of strings over some discrete alphabet, our target function f(x) is the expected number of times that x appears as a substring of a string sampled from the distribution. At training, we are given strings T from the distribution and we want to estimate f. We denote as fT(x) the empirical substring expectation of x in T.2 Using fT, the spectral method estimates a WA A with k states, where k is a parameter of the algorithm, such that fA is a good approximation of f. The method reduces the learning problem to computing an SVD decomposition of a special type of matrix called the Hankel matrix, that collects the observed expectations fT. The method is described by the following steps: 1For language models, we assume that Σ includes a special symbol for end of sentence. 2This corresponds to the number of times that x is observed as substring of any string in T, normalized by the number of strings in T. 5928 (1) Select a set of prefixes P and suffixes S, that will serve as indices of the Hankel matrix for rows and columns respectively. A typical choice is to select all substrings up to a certain size n, but this quickly grows, and in practice prior work uses a small n. Instead we use the basis selection technique presented by Quattoni et al. (2017), which allows to capture long-range dependencies (analogous to having a large n) but keeping the number of prefixes and suffixes manageable. (2) Compute Hankel matrices for (P, S). (a) Compute H ∈ RP×S, with entries H(p, s) = fT(p · s). (b) Compute hP ∈RP with hP(p) = fT(p) and hS ∈RS with hS(s) = fT(s). (c) For each σ ∈Σ, compute Hσ ∈RP×S with entries Hσ(p, s) = fT(p · σ · s). (3) Compute a k-rank factorization of H. Compute the truncated SVD of H, i.e. H ≈ UΣV⊤resulting in a matrix F = UΣ ∈ RP×k and a matrix B = V ∈RS×k. Thus H ≈FB⊤is an k-rank factorization of H. (4) Recover the WA A of k states. Let M+ denote the Moore-Penrose pseudo-inverse of a matrix M. The elements of A are recovered as follows. Initial vector: α⊤ 0 = h⊤ S B. Final vector: α∞= F+hP. Transition Matrices: Aσ = F+HσB, for σ ∈Σ. The computation is dominated by step (3), the SVD of the Hankel matrix, which is at most cubic in the size of the matrix. In practice, this method is scalable and fast to train. 3 Interpolated Predictions One limitation of the spectral method is that the loss that it minimizes is not aligned with the probabilistic metrics used in language modeling, such as perplexity. Instead the spectral method minimized the ℓ2 loss over the observed empirical moments, i.e. those substrings collected in the Hankel matrix. To align the loss function with a perplexity measure we propose a simple refinement step, where we use the expected counts computed by the learned PNFA as features of a log-linear model, and learn interpolation weights. 
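As a compact reference for the estimation procedure of Section 2.2 and the scoring rules of Equations 1–2, the following NumPy sketch spells out steps (2)–(4), assuming the basis of prefixes and suffixes has already been selected (step (1)) and that f_T returns empirical substring expectations. The function names and the naive normalization over the alphabet in Equation 2 are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the spectral method (steps (2)-(4)) and of Eqs. (1)-(2).
import numpy as np

def learn_wa(f_T, prefixes, suffixes, alphabet, k):
    # Step (2): Hankel matrix and vectors of empirical substring expectations.
    H = np.array([[f_T(p + s) for s in suffixes] for p in prefixes])
    h_P = np.array([f_T(p) for p in prefixes])
    h_S = np.array([f_T(s) for s in suffixes])
    H_sigma = {a: np.array([[f_T(p + a + s) for s in suffixes] for p in prefixes])
               for a in alphabet}
    # Step (3): rank-k factorization H ~= F B^T via truncated SVD.
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    F = U[:, :k] * S[:k]            # F = U Sigma
    B = Vt[:k].T                    # B = V
    # Step (4): recover the weighted automaton.
    F_pinv = np.linalg.pinv(F)
    alpha0 = B.T @ h_S              # initial weight vector
    alpha_inf = F_pinv @ h_P        # final weight vector
    A = {a: F_pinv @ H_sigma[a] @ B for a in alphabet}
    return alpha0, alpha_inf, A

def f_A(alpha0, alpha_inf, A, x):
    """Eq. (1): substring expectation computed by the automaton."""
    state = alpha0
    for symbol in x:
        state = state @ A[symbol]
    return float(state @ alpha_inf)

def next_char_probs(alpha0, alpha_inf, A, context, alphabet):
    """Eq. (2): conditional distribution over the next symbol given the context."""
    scores = np.array([f_A(alpha0, alpha_inf, A, context + a) for a in alphabet])
    return scores / scores.sum()
```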
In contrast to Equation 2, which uses the longest context x of length n to compute the conditional probability, the interpolated model leverages the ability of the PNFA to model substring expectations of all lengths up to n. This is similar to classic interpolation of language models (Rosenfeld, 1994; Chen, 2009). Given a function f computing substring expectations, the interpolation is: g(x1:n, σ) = exp    n−1 X j=0 wσ,j log f(xn−j:n · σ)    (3) where x1:n is a context of size n, σ is the output symbol, and wσ,j are the interpolation weights, with one parameter per output symbol σ and context length j, with 0 ≤j < n. As it is standard with interpolation models, we train the weights by maximizing the conditional log-likelihood of the development set. We assume that f is fixed, which results in a convex optimization, and we solve with L-BFGS. 4 Experiments We present experiments in character-based language modeling. Our spectral ngram models work with a fixed context length, and we show results varying this length up to relatively large values. Following the standard, the goal is to learn a language model that predicts the next symbol given a sentence prefix, including the prediction of sentence ends. As datasets we use the Penn Treebank (PTB) prepared by Mikolov et al. (2012)3, and “War and Peace” (WP) dataset prepared by Karpathy et al. (2016)4. We use two probabilistic evaluation metrics that are standard in language modeling tasks: Cross Entropy and Bits per Character (BpC). Depending on the dataset, we use one or the other such that we can directly compare to published results. Tables 1 and 2 present results in terms of the context size (n) for the PTB and WP tests respectively. The column “UB” shows an upperbound on the performance metric using a context of size n. This is computed directly using the expected counts on the test set to compute the conditional distribution. If we were able to estimate these expectations perfectly, we would achieve the 349 characters; 5017k / 393k / 442k characters in the train / dev / test portions. 484 symbols; 2658k / 300k / 300k characters in the train / dev / test portions. 5929 Spectral n UB KN longest interp. size H 3 2.60 2.63 2.63 2.63 102 4 1.94 2.01 2.02 2.03 750 5 1.51 1.67 1.70 1.68 1,661 6 1.23 1.54 1.62 1.55 6,360 7 0.98 1.49 1.65 1.49 13,992 8 0.78 1.47 1.67 1.47 35,263 9 0.59 1.47 1.68 1.45 69,292 10 0.46 1.47 1.67 1.45 137,370 Table 1: Bits-per-character on the PTB test set. Spectral n UB KN FNN ME long. int. size H 3 1.86 1.93 1.93 1.95 1.95 1.95 174 4 1.38 1.52 1.55 1.59 1.57 1.55 1,258 5 1.06 1.31 1.45 1.43 1.41 1.36 3,278 6 0.82 1.23 1.34 1.36 1.39 1.29 11,859 7 0.62 1.20 1.32 1.33 1.42 1.25 26,848 8 0.46 1.19 1.30 1.46 1.24 62,628 9 0.32 1.19 1.30 1.47 1.24 121,534 10 0.22 1.19 1.30 1.47 1.24 224,159 Table 2: Cross-entropy on the WP test set. reported performance. As the two tables show, a context of size 10 already gives a high upperbound, suggesting that we can achieve good performance using a fixed but large horizon. The tables show results of the spectral language model for different context sizes, using expectations from the “longest” context or “interpolated” expectations. A clear trend is that the results improve with the context length, achieving a stable performance for n = 10. It is also clear that the interpolated predictions work much better than simply using the longest context. 
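The interpolation of Equation 3 amounts to a small log-linear classifier over the log-expectations produced by the PNFA, with one weight per output symbol and context length, trained by maximizing conditional log-likelihood. The sketch below is one possible rendering of that step; the precomputed feature tensor, the use of scipy's L-BFGS-B, and all names are our own assumptions rather than the authors' code.

```python
# Sketch of fitting the interpolation weights of Eq. (3) by maximizing
# conditional log-likelihood on held-out data; the PNFA expectations are
# assumed to be precomputed as log-features.
import numpy as np
from scipy.optimize import minimize

def fit_interpolation(log_feats, targets, V, n):
    """log_feats: (M, V, n) array with log f(x_{n-j:n} . sigma) for each
    position, candidate symbol sigma and context length j; targets: (M,)
    gold symbol ids. Returns weights w of shape (V, n)."""
    M = log_feats.shape[0]

    def objective(w_flat):
        w = w_flat.reshape(V, n)
        scores = np.einsum('mvj,vj->mv', log_feats, w)        # log g(x, sigma)
        scores -= scores.max(axis=1, keepdims=True)            # stabilize softmax
        log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
        nll = -log_probs[np.arange(M), targets].mean()
        # Analytic gradient of the mean negative log-likelihood.
        probs = np.exp(log_probs)
        probs[np.arange(M), targets] -= 1.0
        grad = np.einsum('mv,mvj->vj', probs, log_feats) / M
        return nll, grad.ravel()

    result = minimize(objective, np.zeros(V * n), jac=True, method='L-BFGS-B')
    return result.x.reshape(V, n)
```

Because the features are fixed, this objective is convex in w, which is why the fit takes only seconds to minutes as noted above.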
Table 2 also compares to a MaxEnt model (labeled “ME”), which is an interpolation model of Eq.3 but uses empirical expectations fT(x) computed from training counts instead of those given by the spectral PNFA. Clearly, the expectations given by the PNFA generalize better and lead to improvements. The last column of the two tables shows the number of rows (and columns) of the (square) Hankel matrix we factorize for each context size. This gives an idea of the cost of the estimation algorithm, which goes from a few seconds to a few hours, depending on the matrix size.5 Following 5Note that without the scalability trick, the Hankel matrices would be simply too big (in the order of millions of rows and columns) to practically run any experiment. It should be clear, though, that this is the contribution in Quattoni et al. (2017), not of this paper. the theory behind Quattoni et al. (2017), this number is an upper bound on the size of the minimal PNFA that reproduces exactly the expected counts of training substrings. The tables include a column “KN” with the results of an ngram language model estimated with Kneser-Ney interpolation (Kneser and Ney, 1995; Chen and Goodman, 1999). Looking at the results on the PTB data in Table 1, our interpolated model performs equally well, and sometimes better, than the KN models using the same context length. Mikolov et al. (2012) reports the performance of other models: a feed-forward neural network6 obtains 1.57, which our model improves with contexts of n = 6 or larger; an RNN works at 1.41, slightly better than our best result of 1.45. Their best result is of 1.37 for a MaxEnt model with context length of n = 14 engineered for scalability. For the WP test in Table 2, our model and the KN model perform similarly, with some slight improvements by the KN model. The table also includes the results of a feed-forward neural network (FNN) for increasing orders, by Karpathy et al. (2016). We observe that our interpolated model works better, with our best result at 1.24. They also report the results of an RNN obtaining 1.24, and of LSTM and GRU which both obtain 1.08. 5 Conclusions In this paper we presented experiments using character-based spectral ngram language models. We combine two key ideas: a) modeling of longrange dependencies via the basis selection of long substring moments by Quattoni et al. (2017); and b) efficient optimization of arbitrary prediction losses (e.g. cross-entropy) via a loss refinement step. With these two ideas, we can improve the performance of spectral learning for PNFA, and bring the results of spectral models closer to the state-of-the-art. The ability of the spectral method for PNFA to estimate substring expectations can be exploited in other contexts. For example, we are interested in word-level language models that make use of character-level PNFA to compute expectations, which is useful to make predictions on words and substrings which do not appear in training. It is also interesting to consider a PNFA as a special case of an RNN which uses linear transi6However, they do not report the order of that model. 5930 tions. Given that we obtain similar results than feed-forward NN and some RNN, this suggests that some forms of non-linearities can be approximated by linear models, with the advantage that some computations (mainly, expectations) can be done exactly. Acknowledgments We are grateful to Matthias Gall´e for the discussions around this work, as well as to the anonymous reviewers for their useful feedback. 
References Rapha¨el Bailly, Franc¸ois Denis, and Liva Ralaivola. 2009. Grammatical inference as a principal component analysis problem. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, pages 33–40, New York, NY, USA. ACM. Borja Balle, Xavier Carreras, Franco M. Luque, and Ariadna Quattoni. 2014. Spectral Learning of Weighted Automata: A Forward-Backward Perspective. Machine Learning, 96(1):33–63. Borja Balle, Ariadna Quattoni, and Xavier Carreras. 2011. A spectral learning algorithm for finite state transducers. In Proceedings of the 2011th European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I, ECMLPKDD’11, pages 156–171, Berlin, Heidelberg. Springer-Verlag. Borja Balle, Ariadna Quattoni, and Xavier Carreras. 2012. Local loss optimization in operator models: A new insight into spectral learning. In Proceedings of the 29th International Coference on International Conference on Machine Learning, ICML’12, pages 1819–1826, USA. Omnipress. Stanley Chen. 2009. Performance prediction for exponential language models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 450–458. Association for Computational Linguistics. Stanley F. Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359 – 394. Shay B. Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. 2012. Spectral learning of latent-variable pcfgs. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 223– 231, Jeju Island, Korea. Association for Computational Linguistics. Daniel Hsu, Sham M Kakade, and Tong Zhang. 2012. A spectral algorithm for learning hidden markov models. Journal of Computer and System Sciences, 78(5):1460–1480. Daniel J. Hsu, Sham M. Kakade, and Tong Zhang. 2009. A spectral algorithm for learning hidden markov models. In COLT 2009 - The 22nd Conference on Learning Theory, Montreal, Quebec, Canada, June 18-21, 2009. Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2016. Visualizing and understanding recurrent networks. In ICLR Workshop Track. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume I, pages 181–184. Tom´aˇs Mikolov, Ilya Sutskever, Anoop Deoras, HaiSon Le, Stefan Kombrink, and Jan Cernocky. 2012. Subword language modeling with neural networks. preprint (http://www. fit. vutbr. cz/imikolov/rnnlm/char. pdf). Ankur P. Parikh, Avneesh Saluja, Chris Dyer, and Eric Xing. 2014. Language modeling with power low rank ensembles. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1487–1498, Doha, Qatar. Association for Computational Linguistics. Ariadna Quattoni, Borja Balle, Xavier Carreras, and Amir Globerson. 2014. Spectral regularization for max-margin sequence tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1710–1718. JMLR Workshop and Conference Proceedings. Ariadna Quattoni, Xavier Carreras, and Matthias Gall´e. 2017. A Maximum Matching Algorithm for Basis Selection in Spectral Learning. 
In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 1477–1485, Fort Lauderdale, FL, USA. PMLR. Roni Rosenfeld. 1994. Adaptive statistical language modeling: A maximum entropy approach. Ph.D. thesis, Carnegie Mellon University.
2019
594
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5931–5937 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5931 BAM! Born-Again Multi-Task Networks for Natural Language Understanding Kevin Clark† Minh-Thang Luong‡ Urvashi Khandelwal† Christopher D. Manning† Quoc V. Le‡ †Computer Science Department, Stanford University ‡ Google Brain {kevclark,urvashik,manning}@cs.stanford.edu {thangluong,qvl}@google.com Abstract It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts. To help address this, we propose using knowledge distillation where single-task models teach a multi-task model. We enhance this training with teacher annealing, a novel method that gradually transitions the model from distillation to supervised learning, helping the multi-task model surpass its single-task teachers. We evaluate our approach by multi-task fine-tuning BERT on the GLUE benchmark. Our method consistently improves over standard single-task and multi-task training. 1 Introduction Building a single model that jointly learns to perform many tasks effectively has been a longstanding challenge in Natural Language Processing (NLP). However, multi-task NLP remains difficult for many applications, with multi-task models often performing worse than their single-task counterparts (Plank and Alonso, 2017; Bingel and Søgaard, 2017; McCann et al., 2018). Motivated by these results, we propose a way of applying knowledge distillation (Bucilu et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015) so that singletask models effectively teach a multi-task model. Knowledge distillation transfers knowledge from a “teacher” model to a “student” model by training the student to imitate the teacher’s outputs. In “born-again networks” (Furlanello et al., 2018), the teacher and student have the same neural architecture and model size, but surprisingly the student is able to surpass the teacher’s accuracy. Intuitively, distillation is effective because the teacher’s output distribution over classes provides more training signal than a one-hot label; Hinton et al. (2015) suggest that teacher outputs contain “dark knowledge” capturing additional information about training examples. 0 time train distill train Task 1 Model Task 2 Model Task k Model Multi-Task Model Task 1 Labels Task 2 Labels Task k Labels 1 Figure 1: Overview of our method. λ is increased linearly from 0 to 1 over the course of training. Our work extends born-again networks to the multi-task setting. We compare Single→Multi1 born-again distillation with several other variants (Single→Single and Multi→Multi), and also explore performing multiple rounds of distillation (Single→Multi→Single→Multi). Furthermore, we propose a simple teacher annealing method that helps the student model outperform its teachers. Teacher annealing gradually transitions the student from learning from the teacher to learning from the gold labels. This method ensures the student gets a rich training signal early in training, but is not limited to only imitating the teacher. Our experiments build upon recent success in self-supervised pre-training (Dai and Le, 2015; Peters et al., 2018) and multi-task fine-tune BERT (Devlin et al., 2019) to perform the tasks from the GLUE natural language understanding benchmark (Wang et al., 2019). 
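The procedure sketched in Figure 1 can be made concrete with a short training-loop sketch. The PyTorch code below covers only classification tasks and assumes a student model that selects a task-specific head via a task name; that interface, the linear annealing schedule, and the soft cross-entropy loss are our illustrative choices, with the exact objective and teacher annealing defined in Section 3.

```python
# Illustrative sketch (ours, not the released code) of the Figure 1 loop:
# frozen single-task teachers provide soft targets for a multi-task student,
# and teacher annealing mixes in the gold labels as lambda goes from 0 to 1.
import torch
import torch.nn.functional as F

def bam_step(student, teachers, batch, step, total_steps, optimizer):
    """batch: list of (task_name, inputs, one_hot_labels); tasks are shuffled
    together, so a single minibatch may mix examples from several tasks."""
    lam = step / total_steps                       # teacher-annealing weight
    optimizer.zero_grad()
    loss = 0.0
    for task, inputs, gold in batch:
        with torch.no_grad():                      # teachers are fixed
            teacher_probs = F.softmax(teachers[task](inputs), dim=-1)
        target = lam * gold + (1.0 - lam) * teacher_probs
        # Assumed interface: the student routes to a task-specific classifier.
        student_log_probs = F.log_softmax(student(task, inputs), dim=-1)
        loss = loss + -(target * student_log_probs).sum(dim=-1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```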
Our training method, which we call Born-Again Multi-tasking (BAM)2, consistently outperforms standard single-task and multi-task training. Further analysis shows the multi-task models benefit from both better regularization and transfer between related tasks. 1We use Single→Multi to indicate distilling single-task “teacher” models into a multi-task “student” model. 2Code will be released at https://github.com/ google-research/google-research/tree/ master/bam 5932 2 Related Work Multi-task learning for neural networks in general (Caruana, 1997) and within NLP specifically (Collobert and Weston, 2008; Luong et al., 2016) has been widely studied. Much of the recent work for NLP has centered on neural architecture design: e.g., ensuring only beneficial information is shared across tasks (Liu et al., 2017; Ruder et al., 2019) or arranging tasks in linguistically-motivated hierarchies (Søgaard and Goldberg, 2016; Hashimoto et al., 2017; Sanh et al., 2019). These contributions are orthogonal to ours because we instead focus on the multi-task training algorithm. Distilling large models into small models (Kim and Rush, 2016; Mou et al., 2016) or ensembles of models into single models (Kuncoro et al., 2016; Liu et al., 2019a) has been shown to improve results for many NLP tasks. There has also been some work on using knowledge distillation to aide in multi-task learning. In reinforcement learning, knowledge distillation has been used to regularize multi-task agents (Parisotto et al., 2016; Teh et al., 2017). In NLP, Tan et al. (2019) distill singlelanguage-pair machine translation systems into a many-language system. However, they focus on multilingual rather than multi-task learning, use a more complex training procedure, and only experiment with Single→Multi distillation. Concurrently with our work, several other recent works also explore fine-tuning BERT using multiple tasks (Phang et al., 2018; Liu et al., 2019b; Keskar et al., 2019; Liu et al., 2019a). However, they use only standard transfer or multitask learning, instead focusing on finding beneficial task pairs or designing improved task-specific components on top of BERT. 3 Methods 3.1 Multi-Task Setup Model. All of our models are built on top of BERT (Devlin et al., 2019). This model passes byte-pairtokenized (Sennrich et al., 2016) input sentences through a Transformer network (Vaswani et al., 2017), producing a contextualized representation for each token. The vector corresponding to the first input token3 c is passed into a task-specific classifier. For classification tasks, we use a standard softmax layer: softmax(Wc). For regression 3For BERT this is a special token [CLS] that is prepended to each input sequence. tasks, we normalize the labels so they are between 0 and 1 and then use a size-1 NN layer with a sigmoid activation: sigmoid(wT c). In our multi-task models, all of the model parameters are shared across tasks except for these classifiers on top of BERT, which means less than 0.01% of the parameters are task-specific. Following BERT, the token embeddings and Transformer are initialized with weights from a self-supervised pre-training phase.4 Training. Single-task training is performed as in Devlin et al. (2019). For multi-task training, examples of different tasks are shuffled together, even within minibatches. The summed loss across all tasks is minimized. 
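As a rough illustration of this setup, here is a PyTorch-flavored sketch that assumes an abstract `encoder` module standing in for BERT; class and variable names are hypothetical, not the authors' code.

```python
# Sketch (PyTorch, illustrative only) of the multi-task setup described above:
# a shared encoder with one small task-specific head per task. Only these heads
# are task-specific; the per-task losses are summed during training.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, encoder, hidden_size, task_specs):
        # task_specs: dict mapping task name -> ("classification", n_classes) or ("regression", 1)
        super().__init__()
        self.encoder = encoder  # any module returning a pooled [CLS]-style vector
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_size, out_dim)
            for name, (_, out_dim) in task_specs.items()
        })
        self.kinds = {name: kind for name, (kind, _) in task_specs.items()}

    def forward(self, task, inputs):
        c = self.encoder(inputs)               # representation of the first input token
        logits = self.heads[task](c)
        if self.kinds[task] == "regression":   # labels normalized to [0, 1]
            return torch.sigmoid(logits)
        return logits                          # softmax/cross-entropy applied in the loss
```

During training, minibatches drawn from the different tasks would be interleaved and their losses summed before each optimizer step, mirroring the description above.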
3.2 Knowledge Distillation

We use $\mathcal{D}_\tau = \{(x_\tau^1, y_\tau^1), \dots, (x_\tau^N, y_\tau^N)\}$ to denote the training set for a task $\tau$ and $f_\tau(x, \theta)$ to denote the output for task $\tau$ produced by a neural network with parameters $\theta$ on the input $x$. Standard supervised learning trains $\theta$ to minimize the loss on the training set:

$$\mathcal{L}(\theta) = \sum_{(x_\tau^i,\, y_\tau^i) \in \mathcal{D}_\tau} \ell(y_\tau^i,\, f_\tau(x_\tau^i, \theta))$$

where for classification tasks $\ell$ is usually cross-entropy. Knowledge distillation trains the model to instead match the predictions of a teacher model with parameters $\theta'$:

$$\mathcal{L}(\theta) = \sum_{(x_\tau^i,\, y_\tau^i) \in \mathcal{D}_\tau} \ell(f_\tau(x_\tau^i, \theta'),\, f_\tau(x_\tau^i, \theta))$$

Note that our distilled networks are "born-again" in that the student has the same model architecture as the teacher, i.e., all of our models have the same prediction function $f_\tau$ for each task. For regression tasks, we train the student to minimize the L2 distance between its prediction and the teacher's instead of using cross-entropy loss. Intuitively, knowledge distillation improves training because the teacher's full distribution over labels provides a richer training signal than a one-hot label. See Furlanello et al. (2018) for a more thorough discussion.

Multi-Task Distillation. Given a set of tasks $\mathcal{T}$, we train a single-task model with parameters $\theta_\tau$ on each task $\tau$. For most experiments, we use the single-task models to teach a multi-task model with parameters $\theta$:

$$\mathcal{L}(\theta) = \sum_{\tau \in \mathcal{T}} \sum_{(x_\tau^i,\, y_\tau^i) \in \mathcal{D}_\tau} \ell(f_\tau(x_\tau^i, \theta_\tau),\, f_\tau(x_\tau^i, \theta))$$

However, we experiment with other distillation strategies as well.

[Footnote 4: For BERT code and weights, see https://github.com/google-research/bert.]

Teacher Annealing. In knowledge distillation, the student is trained to imitate the teacher. This raises the concern that the student may be limited by the teacher's performance and not be able to substantially outperform the teacher. To address this, we propose teacher annealing, which mixes the teacher prediction with the gold label during training. Specifically, the term in the summation becomes

$$\ell(\lambda y_\tau^i + (1 - \lambda) f_\tau(x_\tau^i, \theta_\tau),\, f_\tau(x_\tau^i, \theta))$$

where $\lambda$ is linearly increased from 0 to 1 throughout training. Early in training, the model is mostly distilling to get as useful a training signal as possible. Towards the end of training, the model is mostly relying on the gold-standard labels so it can learn to surpass its teachers.

4 Experiments

Data. We use the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019), which consists of 9 natural language understanding tasks on English data. Tasks cover textual entailment (RTE and MNLI), question-answer entailment (QNLI), paraphrase (MRPC), question paraphrase (QQP), textual similarity (STS), sentiment (SST-2), linguistic acceptability (CoLA), and Winograd Schema (WNLI).

Training Details. Rather than simply shuffling the datasets for our multi-task models, we follow the task sampling procedure from Bowman et al. (2018), where the probability of training on an example for a particular task $\tau$ is proportional to $|\mathcal{D}_\tau|^{0.75}$. This ensures that tasks with very large datasets don't overly dominate the training. We also use the layerwise-learning-rate trick from Howard and Ruder (2018). If layer 0 is the NN layer closest to the output, the learning rate for a particular layer $d$ is set to $\text{BASE\_LR} \cdot \alpha^d$ (i.e., layers closest to the input get lower learning rates). The intuition is that pre-trained layers closer to the input learn more general features, so they shouldn't be altered much during training.

Hyperparameters.
For single-task models, we use the same hyperparameters as in the original BERT experiments except we pick a layerwiselearning-rate decay α of 1.0 or 0.9 on the dev set for each task. For multi-task models, we train the model for longer (6 epochs instead of 3) and with a larger batch size (128 instead of 32), using α = 0.9 and a learning rate of 1e-4. All models use the BERT-Large pre-trained weights. Reporting Results. Dev set results report the average score (Spearman correlation for STS, Matthews correlation for CoLA, and accuracy for the other tasks) on all GLUE tasks except WNLI, for which methods can’t outperform a majority baseline. Results show the median score of at least 20 trials with different randoms seeds. We find using a large number of trials is essential because results can vary significantly for different runs. For example, standard deviations in score are over ±1 for CoLA, RTE, and MRPC for multi-task models. Single-task standard deviations are even larger. 5 Results Main Results. We compare models trained with single-task learning, multi-task learning, and several varieties of distillation in Table 1. While standard multi-task training improves over single-task training for RTE (likely because it is closely related to MNLI), there is no improvement on the other tasks. In contrast, Single→Multi knowledge distillation improves or matches the performance of the other methods on all tasks except STS, the only regression task in GLUE. We believe distillation does not work well for regression tasks because there is no distribution over classes passed on by the teacher to aid learning. The gain for Single→Multi over Multi is larger than the gain for Single→Single over Single, suggesting that distillation works particularly well in combination with multi-task learning. Interestingly, Single→Multi works substantially better than Multi→Multi distillation. We speculate it may help that the student is exposed to a diverse set of teachers in the same way ensembles benefit from a diverse set of models, but future work is required to fully understand this phenomenon. In addition to the models reported in the table, we also trained Single→Multi→Single→Multi models. However, the difference with Single→Multi was not statistically significant, suggesting there is little value in multiple rounds of distillation. 5934 Model Avg. CoLAa SST-2b MRPCc STS-Bd QQPe MNLIf QNLIg RTEh |D| = 8.5k 67k 3.7k 5.8k 364k 393k 108k 2.5k Single 84.0 60.6 93.2 88.0 90.0 91.3 86.6 92.3 70.4 Multi 85.5 60.3 93.3 88.0 89.8 91.4 86.5 92.2 82.1 Single→Single 84.3 61.7∗∗ 93.2 88.7∗ 90.0 91.4 86.8∗∗ 92.5∗∗∗70.0 Multi→Multi 85.6 60.9 93.5 88.1 89.8 91.5∗ 86.7 92.3 82.0 Single→Multi 86.0∗∗∗ 61.8∗∗ 93.6∗ 89.3∗∗ 89.7 91.6∗ 87.0∗∗∗ 92.5∗∗∗82.8∗ Dataset references: aWarstadt et al. (2018) bSocher et al. (2013) cDolan and Brockett (2005) dCer et al. (2017) eIyer et al. (2017) fWilliams et al. (2018) gconstructed from SQuAD (Rajpurkar et al., 2016) hGiampiccolo et al. (2007) Table 1: Comparison of methods on the GLUE dev set. 
∗, ∗∗, and ∗∗∗indicate statistically significant (p < .05, p < .01, and p < .001) improvements over both Single and Multi according to bootstrap hypothesis tests.4 Model GLUE score BERT-Base (Devlin et al., 2019) 78.5 BERT-Large (Devlin et al., 2019) 80.5 BERT on STILTs (Phang et al., 2018) 82.0 MT-DNN (Liu et al., 2019b) 82.2 Span-Extractive BERT on STILTs 82.3 (Keskar et al., 2019) Snorkel MeTaL ensemble 83.2 (Hancock et al., 2019) MT-DNNKD* (Liu et al., 2019a) 83.7 BERT-Large + BAM (ours) 82.3 Table 2: Comparison of test set results. *MT-DNNKD is distilled from a diverse ensemble of models. Overall, a key benefit of our method is robustness: while standard multi-task learning produces mixed results, Single→Multi distillation consistently outperforms standard single-task and multitask training, resulting in performance competitive with the current state-of-the-art. We also note that in some trials single-task training resulted in models that score quite poorly (e.g., less than 91 for QQP or less than 70 for MRPC), while the multitask models have more dependable performance. Test Set Results. We compare against recent work by submitting to the GLUE leaderboard. We use Single→Multi distillation. Following the procedure used by BERT, we train multiple models and submit the one with the highest average dev set score to the test set. BERT trained 10 models for each task (80 total); we trained 20 multi-task models. Results are shown in Table 2. Our work outperforms or matches existing pub4For all statistical tests we use the Holm-Bonferroni method (Holm, 1979) to correct for multiple comparisons. lished results that do not rely on ensembling. However, due to the variance between trials discussed under “Reporting Results,” we think these test set numbers should be taken with a grain of salt, as they only show the performance of individual training runs. We believe significance testing over multiple trials would be needed to have a definitive comparison. Single-Task Fine-Tuning. A crucial difference distinguishing our work from the STILTs, Snorkel MeTaL, and MT-DNNKD methods in Table 2 is that we do not single-task fine-tune our model. That is, we do not continue training the model on individual tasks after multi-task training. While single-task fine-tuning improves results, we think to some extent it defeats the purpose of multi-task learning: the result of training is one model for each task instead of a model that can perform all of the tasks. Compared to having many single-task models, a multi-task model is simpler to deploy, faster to run, and arguably more scientifically interesting from the perspective of building general language-processing systems. We evaluate the benefits of single-task finetuning and report results in Table 3. Singletask fine-tuning initializes models with multi-tasklearned weights and then performs single-task training. Hyperparameters are the same as for our single-task models except we use a smaller learning rate of 1e-5. While single-task fine-tuning unsurprisingly improves results, the gain on top of Single→Multi distillation is small, reinforcing the claim that distillation obviates many of the benefits of single-task training. Ablation Study. We show the importance of teacher annealing and the other training tricks in Table 4. We found them all to significantly im5935 Model Avg. Score Multi 85.5 +Single-Task Fine-Tuning +0.3 Single→Multi 86.0 +Single-Task Fine-Tuning +0.1 Table 3: Combining multi-task training with singletask fine-tuning. 
Improvements are statistically significant (p < .01) according to Mann-Whitney U tests.4 Model Avg. Score Single→Multi 86.0 No layer-wise LRs −0.3 No task sampling −0.4 No teacher annealing: λ = 0 −0.5 No teacher annealing: λ = 0.5 −0.3 Table 4: Ablation Study. Differences from Single→Multi are statistically significant (p < .001) according to Mann-Whitney U tests.4 prove scores. Interestingly, using pure distillation without teacher annealing (i.e., fixing λ = 0) performs no better than standard multi-task learning. Comparing combinations of tasks. Training on a large number of tasks is known to help regularize multi-task models (Ruder, 2017). A related benefit of multi-task learning is the transfer of learned “knowledge” between closely related tasks. We investigate these by comparing several models on the RTE task, including one trained with a very closely related task (MNLI) and one trained with fairly unrelated tasks (QQP, CoLA, and SST). We use Single→Multi distillation (Single→Single in the case of the RTE-only model). Both sets of auxilliary tasks improve RTE performance, suggesting that both benefits are playing a role in improving multi-task models. Interestingly, RTE + MNLI alone slightly outperforms the model performing all tasks, perhaps because training on MNLI, which has a very large dataset, is already enough to sufficiently regularize the model. 6 Discussion and Conclusion We have shown that Single→Multi distillation combined with teacher annealing produces results consistently better than standard single-task or multi-task training. Achieving robust multi-task gains across many tasks has remained elusive in previous research, so we hope our work will make Trained Tasks RTE score RTE 70.0 RTE + MNLI 83.4 RTE + QQP + CoLA + SST 75.1 All GLUE 82.8 Table 5: Which tasks help RTE? Pairwise differences are statistically significant (p < .01) according to Mann-Whitney U tests.4 multi-task learning more broadly useful within NLP. However, with the exception of closely related tasks with small datasets (e.g., MNLI helping RTE), the overall size of the gains from our multi-task method are small compared to the gains provided by transfer learning from self-supervised tasks (i.e., BERT). It remains to be fully understood to what extent “self-supervised pre-training is all you need” and where transfer/multi-task learning from supervised tasks can provide the most value. Acknowledgements We thank Robin Jia, John Hewitt, and the anonymous reviewers for their thoughtful comments and suggestions. Kevin is supported by a Google PhD Fellowship. References Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In NIPS. Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In EACL. Samuel R Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R Thomas McCoy, Roma Patel, et al. 2018. Looking for ELMo’s friends: Sentence-level pretraining beyond language modeling. arXiv preprint arXiv:1812.10860. Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In SIGKDD. Rich Caruana. 1997. Multitask learning. Machine Learning. Daniel M. Cer, Mona T. Diab, Eneko Agirre, I˜nigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In SemEval@ACL. 5936 Ronan Collobert and Jason Weston. 2008. 
A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In NIPS. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In IWP@IJCNLP. Tommaso Furlanello, Zachary C Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. In ICML. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B. Dolan. 2007. The third pascal recognizing textual entailment challenge. In ACLPASCAL@ACL. Braden Hancock, Clara McCreery, Ines Chami, Vincent Chen, Sen Wu, Jared Dunnmon, Paroma Varma, Max Lam, and Chris R. 2019. Massive multi-task learning with snorkel metal: Bringing more supervision to bear. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In EMNLP. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian journal of statistics. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL. Shankar Iyer, Nikhil Dandekar, and Kornl Csernai. 2017. First quora dataset release: Question pairs. Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question answering and text classification via span extraction. arXiv preprint arXiv:1904.09286. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In EMNLP. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one mst parser. In EMNLP. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In ACL. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In ICLR. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, and Zhi Jin. 2016. Distilling word embeddings: An encoding approach. In CIKM. Emilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov. 2016. Actor-mimic: Deep multitask and transfer reinforcement learning. In ICLR. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. Jason Phang, Thibault F´evry, and Samuel R Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088. Barbara Plank and H´ector Mart´ınez Alonso. 2017. When is multitask learning effective? 
Semantic sequence prediction under varying data conditions. In EACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy S. Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In EMNLP. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2019. Latent multi-task architecture learning. In AAAI. Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning embeddings from semantic tasks. In AAAI. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. 5937 Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL. Xu Tan, Yi Ren, Di He, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. In ICLR. Yee Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. 2017. Distral: Robust multitask reinforcement learning. In NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5938–5951 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5938 Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG Shereen Oraby, Vrindavan Harrison, Abteen Ebrahimi, and Marilyn Walker Natural Language and Dialog Systems Lab University of California, Santa Cruz {soraby,vharriso,aaebrahi,mawalker}@ucsc.edu Abstract Neural natural language generation (NNLG) from structured meaning representations has become increasingly popular in recent years. While we have seen progress with generating syntactically correct utterances that preserve semantics, various shortcomings of NNLG systems are clear: new tasks require new training data which is not available or straightforward to acquire, and model outputs are simple and may be dull and repetitive. This paper addresses these two critical challenges in NNLG by: (1) scalably (and at no cost) creating training datasets of parallel meaning representations and reference texts with rich style markup by using data from freely available and naturally descriptive user reviews, and (2) systematically exploring how the style markup enables joint control of semantic and stylistic aspects of neural model output. We present YELPNLG, a corpus of 300,000 rich, parallel meaning representations and highly stylistically varied reference texts spanning different restaurant attributes, and describe a novel methodology that can be scalably reused to generate NLG datasets for other domains. The experiments show that the models control important aspects, including lexical choice of adjectives, output length, and sentiment, allowing the models to successfully hit multiple style targets without sacrificing semantics. 1 Introduction The increasing popularity of personal assistant dialog systems and the success of end-to-end neural models on problems such as machine translation has lead to a surge of interest around data-totext neural natural language generation (NNLG). State-of-the-art NNLG models commonly use a sequence-to-sequence framework for end-to-end neural language generation, taking a meaning representation (MR) as input, and generating a natural language (NL) realization as output (Dusek and Jurc´ıcek, 2016; Lampouras and Vlachos, 2016; Mei et al., 2015; Wen et al., 2015b). Table 1 shows some examples of MR to human and system NL realizations from recently popular NNLG datasets. The real power of NNLG models over traditional statistical generators is their ability to produce natural language output from structured input in a completely data-driven way, without needing hand-crafted rules or templates. However, these models suffer from two critical bottlenecks: (1) a data bottleneck, i.e. the lack of large parallel training data of MR to NL, and (2) a control bottleneck, i.e. the inability to systematically control important aspects of the generated output to allow for more stylistic variation. Recent efforts to address the data bottleneck with large corpora for training neural generators have relied almost entirely on high-effort, costly crowdsourcing, asking humans to write references given an input MR. Table 1 shows two recent efforts: the E2E NLG challenge (Novikova et al., 2017a) and the WEBNLG challenge (Gardent et al., 2017), both with an example of an MR, human reference, and system realization. The largest dataset, E2E, consists of 50k instances. 
Other datasets, such as the Laptop (13k) and TV (7k) product review datasets, are similar but smaller (Wen et al., 2015a,b). These datasets were created primarily to focus on the task of semantic fidelity, and thus it is very evident from comparing the human and system outputs from each system that the model realizations are less fluent, descriptive, and natural than the human reference. Also, the nature of the domains (restaurant description, Wikipedia infoboxes, and technical product reviews) are not particularly descriptive, exhibiting little variation. Other work has also focused on the control bottleneck in NNLG, but has zoned in on one particular dimension of style, such as sentiment, length, 5939 1 - E2E (Novikova et al., 2017a) 50k - Crowdsourcing (Domain: Restaurant Description) MR: name[Blue Spice], eatType[restaurant], food[English], area[riverside], familyFriendly[yes], near[Rainbow Vegetarian Cafe] Human: Situated near the Rainbow Vegetarian Cafe in the riverside area of the city, The Blue Spice restaurant is ideal if you fancy traditional English food whilst out with the kids. System: Blue Spice is a family friendly English restaurant in the riverside area near Rainbow Vegetarian Cafe. 2 - WebNLG (Gardent et al., 2017) 21k - DBPedia and Crowdsourcing (Domain: Wikipedia) MR: (Buzz-Aldrin, mission, Apollo-11), (Buzz-Aldrin, birthname, “Edwin Eugene Aldrin Jr.”), (Buzz-Aldrin, awards, 20), (Apollo-11, operator, NASA) Human: Buzz Aldrin (born as Edwin Eugene Aldrin Jr) was a crew member for NASA’s Apollo 11 and had 20 awards. System: Buzz aldrin, who was born in edwin eugene aldrin jr., was a crew member of the nasa operated apollo 11. he was awarded 20 by nasa. 3 - YelpNLG (this work) 300k - Auto. Extraction (Domain: Restaurant Review) MR: (attr=food, val=taco, adj=no-adj, mention=1), (attr=food, val=flour-tortilla, adj=small, mention=1), (attr=food, val=beef, adj=marinated, mention=1), (attr=food, val=sauce, adj=spicy, mention=1) +[sentiment=positive, len=long, first-person=false, exclamation=false] Human: The taco was a small flour tortilla topped with marinated grilled beef, asian slaw and a spicy delicious sauce. System: The taco was a small flour tortilla with marinated beef and a spicy sauce that was a nice touch. Table 1: A comparison of popular NNLG datasets. (1/5 star) I want to curse everyone I know who recommended this craptacular buffet. [...] It’s absurdly overpriced at more than $50 a person for dinner. What do you get for that princely sum? Some cold crab legs (it’s NOT King Crab, either, despite what others are saying) Shrimp cocktail (several of which weren’t even deveined. GROSS. [...]) (5/5 star) One of my new fave buffets in Vegas! Very cute interior, and lots of yummy foods! [...] The delicious Fresh, delicious king grab legs!! [...]REALLY yummy desserts! [...] All were grrreat, but that tres leches was ridiculously delicious. Table 2: Yelp restaurant reviews for the same business. or formality (Fan et al., 2017; Hu et al., 2017; Ficler and Goldberg, 2017; Shen et al., 2017; Herzig et al., 2017; Fu et al., 2018; Rao and Tetreault, 2018). However, human language actually involves a constellation of interacting aspects of style, and NNLG models should be able to jointly control these multiple interacting aspects. In this work, we tackle both bottlenecks simultaneously by leveraging masses of freely available, highly descriptive user review data, such as that shown in Table 2. 
These naturally-occurring examples show a highly positive and highly negative review for the same restaurant, with many examples of rich language and detailed descriptions, such as “absurdly overpriced”, and “ridiculously delicious”. Given the richness of this type of free, abundant data, we ask: (1) can this freely available data be used for training NNLG models?, and (2) is it possible to exploit the variation in the data to develop models that jointly control multiple interacting aspects of semantics and style? We address these questions by creating the YELPNLG corpus, consisting of 300k MR to reference pairs for training NNLGs, collected completely automatically using freely available data (such as that in Table 2), and off-the-shelf tools.1 Rather than starting with a meaning representation and collecting human references, we begin with the references (in the form of review sentences), and work backwards – systematically constructing meaning representations for the sentences using dependency parses and rich sets of lexical, syntactic, and sentiment information, including ontological knowledge from DBPedia. This method uniquely exploits existing data which is naturally rich in semantic content, emotion, and varied language. Row 3 of Table 1 shows an example MR from YELPNLG, consisting of relational tuples of attributes, values, adjectives, and order information, as well as sentence-level information including sentiment, length, and pronouns. Once we have created the YELPNLG corpus, we are in the unique position of being able to explore, for the first time, how varying levels of supervision in the encoding of content, lexical choice, and sentiment can be exploited to control style in NNLG. Our contributions include: • A new corpus, YELPNLG, larger and more lexically and stylistically varied than existing NLG datasets; • A method for creating corpora such as YELPNLG, which should be applicable to other domains; • Experiments on controlling multiple interacting aspects of style with an NNLG while maintaining semantic fidelity, and results using a broad range of evaluation methods; • The first experiments, to our knowledge, showing that an NNLG can be trained to control lexical choice of adjectives. We leave a detailed review of prior work to Section 5 where we can compare it with our own. 1https://nlds.soe.ucsc.edu/yelpnlg 5940 Figure 1: Extracting information from a review sentence parse to create an MR. 2 Creating the YelpNLG Corpus We begin with reviews from the Yelp challenge dataset,2 which is publicly available and includes structured information for attributes such as location, ambience, and parking availability for over 150k businesses, with around 4 million reviews in total. We note that this domain and dataset are particularly unique in how naturally descriptive the language used is, as exemplified in Table 2, especially compared to other datasets previously used for NLG in domains such as Wikipedia. For corpus creation, we must first sample sentences from reviews in such a way as to allow the automatic and reliable construction of MRs using fully automatic tools. To identify restaurant attributes, we use restaurant lexicons from our previous work on template-based NLG (Oraby et al., 2017). The lexicons include five attribute types prevalent in restaurant reviews: restaurant-type, cuisine, food, service, and staff collected from Wikipedia and DBpedia, including, for example, around 4k for foods (e.g. “sushi”), and around 40 for cuisines (e.g. “Italian”). 
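As a rough illustration, the base lexicons can be viewed as a mapping from attribute types to sets of value strings (toy entries only; the real lexicons are far larger and are drawn from Wikipedia, DBpedia, and prior work):

```python
# Toy illustration of the five base attribute lexicons (values invented for the example;
# the real lexicons contain ~4k foods, ~40 cuisines, etc.).
RESTAURANT_LEXICONS = {
    "restaurant-type": {"buffet", "diner", "steakhouse"},
    "cuisine": {"italian", "mexican", "thai"},
    "food": {"sushi", "steak", "chicken", "crab", "taco"},
    "service": {"service", "wait"},
    "staff": {"waiter", "waitress", "server"},
}
```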
We then expand these basic lexicons by adding in attributes for ambiance (e.g. “decoration”) and price (e.g. “cost”) using vocabulary items from the E2E generation challenge (Novikova et al., 2017b). To enforce some semantic constraints and “truth grounding” when selecting sentences without severely limiting variability, we only select sentences that mention particular food values. A pilot analysis of random reviews show that some of the most commonly mentioned foods are meat items, i.e. “meat”, “beef”, “chicken”, “crab”, and “steak”. Beginning with the original set of over 4 million business reviews, we sentence-tokenize them and randomly sample a set of 500,000 sentences from restaurant reviews that mention of at least one of the meat items (spanning around 3k 2https://www.yelp.com/dataset/ challenge unique restaurants, 170k users, and 340k reviews). We filter to select sentences that are between 4 and 30 words in length: restricting the length increases the likelihood of a successful parse and reduces noise in the process of automatic MR construction. We parse the sentences using Stanford dependency parser (Chen and Manning, 2014), removing any sentence that is tagged as a fragment. We show a sample sentence parse in Figure 1. We identify all nouns and search for them in the attribute lexicons, constructing (attribute, value) tuples if a noun is found in a lexicon, including the full noun compound if applicable, e.g. (food, chicken-chimichanga) in Figure 1.3 Next, for each (attribute, value) tuple, we extract all amod, nsubj, or compound relations between a noun value in the lexicons and an adjective using the dependency parse, resulting in (attribute, value, adjective) tuples. We add in “mention order” into the tuple distinguish values mentioned multiple times in the same reference. We also collect sentence-level information to encode additional style variables. For sentiment, we tag each sentence with the sentiment inherited from the “star rating” of the original review it appears in, binned into one of three values for lower granularity: 1 for low review scores (1-2 stars), 2 for neutral scores (3 star), and 3 for high scores (45 stars).4 To experiment with control of length, we assign a length bin of short (≤10 words), medium (10-20 words), and long (≥20 words). We also include whether the sentence is in first person. For each sentence, we create 4 MR variations. The simplest variation, BASE, contains only attributes and their values. The +ADJ version adds adjectives, +SENT adds sentiment, and finally the richest MR, +STYLE, adds style information on 3Including noun compounds allows us to identify new values that did not exist in our lexicons, thus automatically expanding them. 4A pilot experiment comparing this method with Stanford sentiment (Socher et al., 2013) showed that copying down the original review ratings gives more reliable sentiment scores. 5941 1 The chicken chimichanga was tasty but the beef was even better! (attr=food, val=chicken chimichanga, adj=tasty, mention=1), (attr=food, val=beef, adj=no adj, mention=1) +[sentiment=positive, len=medium, first person=false, exclamation=true] 2 Food was pretty good ( i had a chicken wrap ) but service was crazy slow. (attr=food, val=chicken wrap, adj=no adj, mention=1), (attr=service, val=service, adj=slow, mention=1) +[sentiment=neutral, len=medium, first person=true, exclamation=false] 3 The chicken was a bit bland ; i prefer spicy chicken or well seasoned chicken. 
(attr=food, val=chicken, adj=bland, mention=1), (attr=food, val=chicken, adj=spicy, mention=2), (attr=food, val=chicken, adj=seasoned, mention=3) +[sentiment=neutral, len=medium, first person=true, exclamation=false] 4 The beef and chicken kebabs were succulent and worked well with buttered rice, broiled tomatoes and raw onions. (attr=food, val=beef chicken kebabs, adj=succulent, mention=1), (attr=food, val=rice, adj=buttered, mention=1), ( attr=food, val=tomatoes, adj=broiled, mention=1), (attr=food, val=onions, adj=raw, mention=1) +[sentiment=positive, len=long, first person=false, exclamation=false] Table 3: Sample sentences and automatically generated MRs from YELPNLG. Note the stylistic variation that is marked up in the +STYLE MRs, especially compared to those in other corpora such as E2E or WEBNLG. mention order, whether the sentence is first person, and whether it contains an exclamation. Half of the sentences are in first person and around 10% contain an exclamation, and both of these can contribute to controllable generation: previous work has explored the effect of first person sentences on user perceptions of dialog systems (Boyce and Gorin, 1996), and exclamations may be correlated with aspects of a hyperbolic style. Table 3 shows sample sentences for the richest version of the MR (+STYLE) that we create. In Row 1, we see the MR from the example in Figure 1, showing an example of a NN compound, “chicken chimichanga”, with adjective “tasty”, and the other food item, “beef”, with no retrieved adjective. Row 2 shows an example of a “service” attribute with adjective “slow”, in the first person, and neutral sentiment. Note that in this example, the method does not retrieve that the “chicken wrap” is actually described as “good”, based on the information available in the parse, but that much of the other information in the sentence is accurately captured. We expect the language model to successfully smooth noise in the training data caused by parser or extraction errors.5 Row 3 shows an example of the value “chicken” mentioned 3 times, each with different adjectives (“bland”, “spicy”, and “seasoned”). Row 4 shows an example of 4 foods and very positive sentiment. 2.1 Comparison to Previous Datasets Table 4 compares YELPNLG to previous work in terms of data size, unique vocab and adjec5We note that the Stanford dependency parser (Chen and Manning, 2014) has a token-wise labeled attachment score (LAS) of 90.7, but point out that for our MRs we are primarily concerned with capturing NN compounds and adjective-noun relations, which we evaluate in Section 2.2. tives, entropy,6 average reference length (RefLen), and examples of stylistic and structural variation in terms of contrast (markers such as “but” and “although”), and aggregation (e.g. “both” and “also”) (Juraska and Walker, 2018), showing how our dataset is much larger and more varied than previous work. We note that the Laptop and E2E datasets (which allow multiple sentences per references) have longer references on average than YelpNLG (where references are always single sentences and have a maximum of 30 words). We are interested in experimenting with longer references, possibly with multiple sentences, in future work. Figure 2 shows the distribution of MR length, in terms of the number of attribute-value tuples. There is naturally a higher density of shorter MRs, with around 13k instances from the dataset containing around 2.5 attribute-value tuples, but that the MRs go up to 11 tuples in length. 
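A hedged sketch of this extraction recipe is shown below; it is not the released YelpNLG pipeline. spaCy is used as a convenient stand-in for the Stanford dependency parser, the lexicons and thresholds are toy placeholders, and only the amod relation is handled (the paper also uses nsubj and compound relations).

```python
# Illustrative sketch of MR construction: parse a review sentence, look nouns up in the
# attribute lexicons, attach adjectives via the amod relation, track mention order, and
# add sentence-level style features. Requires spaCy and the en_core_web_sm model.
import spacy
from collections import defaultdict

nlp = spacy.load("en_core_web_sm")

def build_mr(sentence, review_stars, lexicons):
    doc = nlp(sentence)
    tuples, mentions = [], defaultdict(int)
    for tok in doc:
        for attr, values in lexicons.items():
            if tok.lemma_.lower() in values:
                adjs = [c.lower_ for c in tok.children if c.dep_ == "amod"]
                mentions[tok.lemma_.lower()] += 1
                tuples.append((attr, tok.lemma_.lower(),
                               adjs[0] if adjs else "no_adj",
                               mentions[tok.lemma_.lower()]))
    n_words = sum(1 for t in doc if not t.is_punct)
    style = {
        "sentiment": 1 if review_stars <= 2 else 2 if review_stars == 3 else 3,
        "len": "short" if n_words <= 10 else "medium" if n_words <= 20 else "long",
        "first_person": any(t.lower_ in {"i", "we", "my", "our"} for t in doc),
        "exclamation": "!" in sentence,
    }
    return tuples, style

toy_lexicons = {"food": {"taco", "tortilla", "beef", "sauce"}}
print(build_mr("The taco was a small flour tortilla topped with marinated beef "
               "and a spicy delicious sauce.", 5, toy_lexicons))
```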
E2E LAPTOP YELPNLG Train Size 42k 8k 235k Train Vocab 2,786 1,744 41,337 Train # Adjs 944 381 13,097 Train Entropy 11.59 11.57 15.25 Train RefLen 22.4 26.4 17.32 % Refs w/ Contrast 5.78% 3.61% 9.11% % Refs w/ Aggreg. 1.64% 2.54% 6.39% Table 4: NLG corpus statistics from E2E (Novikova et al., 2017a), LAPTOP (Wen et al., 2016), and YELPNLG (this work). 2.2 Quality Evaluation We examine the quality of the MR extraction with a qualitative study evaluating YELPNLG MR to NL 6We show the formula for entropy in Sec 4 on evaluation. 5942 1 2 3 4 5 6 7 8 9 10 11 2000 4000 6000 8000 10000 12000 14000 Number of Attributes per MR Number of MRs Figure 2: MR distribution in YELPNLG train. pairs on various dimensions. Specifically, we evaluate content preservation (how much of the MR content appears in the NL, specifically, nouns and their corresponding adjectives from our parses), fluency (how “natural sounding” the NL is, aiming for both grammatical errors and general fluency), and sentiment (what the perceived sentiment of the NL is). We note that we conduct the same study over our NNLG test outputs when we generate data using YELPNLG in Section 4.3. We randomly sample 200 MRs from the YELPNLG dataset, along with their corresponding NL references, and ask 5 annotators on Mechanical Turk to rate each output on a 5 point Likert scale (where 1 is low and 5 is high for content and fluency, and where 1 is negative and 5 is positive for sentiment). For content and fluency, we compute the average score across all 5 raters for each item, and average those scores to get a final rating for each model, such that higher content and fluency scores are better. We compute sentiment error by converting the judgments into 3 bins to match the Yelp review scores (as we did during MR creation), finding the average rating for all 5 annotators per item, then computing the difference between their average score and the true sentiment rating in the reference text (from the original review), such that lower sentiment error is better. The average ratings for content and fluency are high, at 4.63 and 4.44 out of 5, respectively, meaning that there are few mistakes in marking attribute and value pairs in the NL references, and that the references are also fluent. This is an important check because correct grammar/spelling/punctuation is not a restriction in Yelp reviews. For sentiment, the largest error is 0.58 (out of 3), meaning that the perceived sentiment by raters does not diverge greatly, on average, from the Yelp review sentiment assigned in the MR, and indicates that inheriting sentence sentiment from the review is a reasonable heuristic. 3 Model Design In the standard RNN encoder-decoder architecture commonly used for machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), the probability of a target sentence w1:T given a source sentence x1:S is modeled as p(w1:T |x) = QT 1 p(wt|w1:t−1, x) (Klein et al., 2018). In our case, the input is not a natural language source sentence as in traditional machine translation; instead, the input x1:S is a meaning representation, where each token xn is itself a tuple of attribute and value features, (fattr, fval). Thus, we represent a given input x1:S as a sequence of attribute-value pairs from an input MR. For example, in the case of BASE MR [(attr=food, val=steak), (attr=food, val=chicken)], we would have x = x1, x2, where x1=(fattr=food,fval=steak), and x2=(fattr=food,fval=chicken). 
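For concreteness, here is a small PyTorch-flavored sketch (hypothetical vocabularies and dimensions, not the released model) of how such an (attribute, value) sequence can be turned into input vectors; the encoding and decoding themselves are described next.

```python
# Illustrative sketch: each BASE MR token is an (attribute, value) pair, and its input
# vector is the concatenation of an attribute embedding and a value embedding. A
# bidirectional LSTM encoder then runs over this sequence (see Base encoding below).
import torch
import torch.nn as nn

ATTR_VOCAB = {"food": 0, "service": 1}
VALUE_VOCAB = {"steak": 0, "chicken": 1, "service": 2}

attr_emb = nn.Embedding(len(ATTR_VOCAB), 8)
value_emb = nn.Embedding(len(VALUE_VOCAB), 32)

def embed_mr(mr):
    """mr: list of (attribute, value) pairs, e.g. [("food", "steak"), ("food", "chicken")]."""
    attrs = torch.tensor([ATTR_VOCAB[a] for a, _ in mr])
    vals = torch.tensor([VALUE_VOCAB[v] for _, v in mr])
    return torch.cat([attr_emb(attrs), value_emb(vals)], dim=-1)  # (len(mr), 40)

x = embed_mr([("food", "steak"), ("food", "chicken")])
print(x.shape)  # torch.Size([2, 40])
```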
The target sequence is a natural language sentence, which in this example might be, “The steak was extra juicy and the chicken was delicious!” Base encoding. During the encoding phase for BASE MRs, the model takes as input the MR as a sequence of attribute-value pairs. We precompute separate vocabularies for attributes and values. MR attributes are represented as vectors and MR values are represented with reduced dimensional embeddings that get updated during training. The attributes and values of the input MR are concatenated to produce a sequence of attribute-value pairs that then is encoded using a multi-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997). Additional feature encoding. For the +ADJ, +SENT, and +STYLE MRs, each MR is a longer relational tuple, with additional style feature information to encode, such that an input sequence x1:S = (fattr, fval, f1:N), and where each fn is an additional feature, such as adjective or mention order. Specifically in the case of +STYLE MRs, the additional features may be sentence-level features, such as sentiment, length, or exclamation. In this case, we enforce additional constraints 5943 on the models for +ADJ, +SENT, and +STYLE, changing the conditional probability computation for w1:T given a source sentence x1:S to p(w1:T |x) = QT 1 p(wt|w1:t−1, x, f), where f is the set of new feature constraints to the model. We represent these additional features as a vector of additional supervision tokens or side constraints (Sennrich et al., 2016). Thus, we construct a vector for each set of features, and concatenate them to the end of each attributevalue pair, encoding the full sequence as for BASE above. Target decoding. At each time step of the decoding phase the decoder computes a new decoder hidden state based on the previously predicted word and an attentionally-weighted average of the encoder hidden states. The conditional nextword distribution p(wt|w1:t−1, x, f) depends on f, the stylistic feature constraints added as supervision. This is produced using the decoder hidden state to compute a distribution over the vocabulary of target side words. The decoder is a unidirectional multi-layer LSTM and attention is calculated as in Luong et al. (2015) using the general method of computing attention scores. We present model configurations in Appendix A. 4 Evaluation To evaluate whether the models effectively hit semantic and stylistic targets, we randomly split the YELPNLG corpus into 80% train (∼235k instances), 10% dev and test (∼30k instances each), and create 4 versions of the corpus: BASE, +ADJ, +SENT, and +STYLE, each with the same split.7 Table 5 shows examples of output generated by the models for a given test MR, showing the effects of training models with increasing information. Note that we present the longest version of the MR (that used for the +STYLE model), so the BASE, +ADJ, and +SENT models use the same MR minus the additional information. Row 1 shows an example of partially correct sentiment for BASE, and fully correct sentiment for the rest; +ADJ gets the adjectives right, +SENT is more descriptive, and +STYLE hits all targets. Row 2 gives an example of extra length in +STYLE, “the meat was so ten7Since we randomly split the data, we compute the overlap between train and test for each corpus version, noting that around 14% of test MRs exist in training for the most specific +STYLE version (around 4.3k of the 30k), but that less than 0.5% of the 30k full MR-ref pairs from test exist in train. 
der and juicy that it melted in your mouth”. Row 3 shows an example of a negative sentiment target, which is achieved by both the +SENT and +STYLE models, with interesting descriptions such as “the breakfast pizza was a joke”, and “the pizza crust was a little on the bland side”. We show more +STYLE model outputs in Appendix C. 4.1 Automatic Semantic Evaluation Machine Translation Metrics. We begin with an automatic evaluation using standard metrics frequently used for machine translation. We use the script provided by the E2E Generation Challenge8 to compute scores for each of the 4 model test outputs compared to the original Yelp review sentences in the corresponding test set. Rows 1-4 of Table 6 summarize the results for BLEU (n-gram precision), METEOR (n-grams with synonym recall), CIDEr (weighted n-gram cosine similarity), and NIST (weighted n-gram precision), where higher numbers indicate better overlap (shown with the ↑). We note that while these measures are common for machine translation, they are not well-suited to this task, since they are based on ngram overlap which is not a constraint within the model; we include them for comparative purposes. From the table, we observe that across all metrics, we see a steady increase as more information is added. Overall, the +STYLE model has the highest scores for all metrics, i.e. +STYLE model outputs are most lexically similar to the references. Semantic Error Rate. The types of semantic errors the models make are more relevant than how well they conform to test references. We calculate average Semantic Error Rate (SER), which is a function of the number of semantic mistakes the model makes (Wen et al., 2015a; Reed et al., 2018). We find counts of two types of common mistakes: deletions, where the model fails to realize a value from the input MR, and repetitions, where the model repeats the same value more than once.9 Thus, we compute SER per MR as SER = D+R N , where D and R are the number of deletions and repetitions, and the N is the number of tuples in the MR, and average across the test outputs. 8https://github.com/tuetschek/ e2e-metrics 9We note that other types of errors include insertions and substitutions, but we evaluate these through our human evaluation in Sec 4.3 since our large vocabulary size makes identifying them non-trivial. 5944 1 MR (food, porridge, no adj, mention=1), (food, meat, no adj, mention=1), (food, flavor, rich, mention=1), (food, soup, no adj, mention=1) +[sentiment=positive, len=long, first person=false, exclamation=false] BASE The porridge was good, but the meat lacked flavor and the soup was bland. +ADJ The porridge had a lot of meat in it and the flavor of the soup was rich. +SENT The porridge had a lot of meat in it and the flavor of the soup was rich and delicious. +STYLE The porridge had a good amount of meat and rich flavor, and the soup was cooked perfectly. 2 MR (food, gyro salad, no adj, mention=1), (food, meat, no adj, mention=1) +[sentiment=positive, len=long, first person=true, exclamation=false] BASE I had the gyro salad and the meat was very good. +ADJ I had the gyro salad and the meat was tender and juicy. +SENT I had the gyro salad and the meat was tender. +STYLE I had the gyro salad and the meat was so tender and juicy that it melted in your mouth. 
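A minimal sketch of this computation is given below, assuming slots can be matched by simple lower-cased substring lookup; this is a simplification of the actual slot matching, and the example values are illustrative.

```python
# Illustrative slot error rate: SER = (deletions + repetitions) / number of MR slots,
# counted over value and adjective slots that can be matched lexically.
def semantic_error_rate(mr_slots, output_text):
    """mr_slots: list of slot strings from the MR; output_text: the generated sentence."""
    text = output_text.lower()
    deletions = sum(1 for slot in mr_slots if slot.lower() not in text)
    # Naive repetition check: a slot string occurring more than once counts as a repetition.
    repetitions = sum(1 for slot in set(mr_slots) if text.count(slot.lower()) > 1)
    return (deletions + repetitions) / max(len(mr_slots), 1)

print(semantic_error_rate(["porridge", "meat", "rich", "soup"],
                          "The porridge had a good amount of meat and rich flavor."))
# -> 0.25 (the "soup" slot was deleted)
```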
3 MR (food, eggs, no adj, mention=1), (food, ham steak, small, mention=1), (food, bacon, chewy, mention=1), (food, breakfast pizza, no adj, mention=1) +[sentiment=negative, len=long, first person=true, exclamation=false] BASE I had the eggs, ham steak, bacon, and buffalo pizza. +ADJ Eggs, ham steak, chewy bacon, and breakfast pizza. +SENT The eggs were over cooked, the ham steak was small, the bacon was chewy, and the breakfast pizza was a joke. +STYLE I ordered the eggs benedict and the ham steak was small, the bacon was chewy and the pizza crust was a little on the bland side. Table 5: Sample test MR and corresponding outputs for each model. Note that the MR presented is for +STYLE: the other models all provide less information as described in Section 2. BASE +ADJ +SENT +STYLE 1 BLEU ↑ 0.126 0.164 0.166 0.173 2 METEOR ↑ 0.206 0.233 0.234 0.235 3 CIDEr ↑ 1.300 1.686 1.692 1.838 4 NIST ↑ 3.840 4.547 4.477 5.537 5 Avg SER ↓ 0.053 0.063 0.064 0.090 Table 6: Automatic semantic evaluation (higher is better for all but SER). Table 6 presents the average SER rates for each model, where lower rates mean fewer mistakes (indicated by ↓). It is important to note here that we compute errors over value and adjective slots only, since these are the ones that we are able to identify lexically (we cannot identify whether an output makes an error on sentiment in this way, so we measure that with a human evaluation in Section 4.3). This means that the BASE outputs errors are computed over only value slots (since they don’t contain adjectives), and the rest of the errors are computed over both value and adjective slots. Amazingly, overall, Table 6 results show the SER is extremely low, even while achieving a large amount of stylistic variation. Naturally, BASE, with no access to style information, has the best (lowest) SER. But we note that there is not a large increase in SER as more information is added – even for the most difficult setting, +STYLE, the models make an error on less than 10% of the slots in a given MR, on average. 4.2 Automatic Stylistic Evaluation We compute stylistic metrics to compare the model outputs, with results shown in Table 7.10 For vocab, we find the number of unique words in all outputs for each model. We find the average sentence length (SentLen) by counting the number of words, and find the total number of times an adjective is used (Row 3) and average number of adjectives per reference for each model (Row 4). We compute Shannon text entropy (E) as: E = −P x∈V f t ∗log2( f t ), where V is the vocab size in all outputs generated by the model, f is the frequency of a term (in this case, a trigram), and t counts the number of terms in all outputs. Finally, we count the instances of contrast (e.g. “but” and “although”), and aggregation (e.g. “both” and “also”). For all metrics, higher scores indicate more variability (indicated by ↑). From the table, we see that overall the vocabulary is large, even when compared to the training data for E2E and Laptop, as shown in Table 4. First, we see that the simplest, least constrained BASE model has the largest vocabulary, since it has the most freedom in terms of word choice, while the model with the largest amount of supervision, +STYLE, has the smallest vocab, since we provide it with the most constraints on word choice. For all other metrics, we see that the +STYLE 10These measures can be compared to Table 4, which includes similar statistics for the YelpNLG training data. 
5945 BASE +ADJ +SENT +STYLE 1 Vocab ↑ 8,627 8,283 8,303 7,878 2 SentLen ↑ 11.27 11.45 11.30 13.91 3 # Adjs ↑ 24k 26k 26k 37k 4 Adj/Ref ↑ 0.82 0.90 0.89 1.26 5 Entropy ↑ 11.18 11.87 11.93 11.94 6 Contrast ↑ 1,586 1,000 890 2,769 7 Aggreg. ↑ 116 103 106 1,178 Table 7: Automatic stylistic evaluation metrics (higher is better). Paired t-test BASE vs. +STYLE all p < 0.05. model scores highest: these results are especially interesting when considering that +STYLE has the smallest vocab; even though word choice is constrained with richer style markup, +STYLE is more descriptive on average (more adjectives used), and has the highest entropy (more diverse word collocations). This is also very clear from the significantly higher number of contrast and aggregation operations in the +STYLE outputs. Language Template Variations. Since our test set consists of 30k MRs, we are able to broadly characterize and quantify the kinds of sentence constructions we get for each set of model outputs. To make generalized sentence templates, we delexicalize each reference in the model outputs, i.e. we replace any food item with a token [FOOD], any service item with [SERVICE], etc. Then, we find the total number of unique templates each model produces, finding that each “more informed” model produces more unique templates: BASE produces 18k, +ADJ produces 22k, +SENT produces 23k, and +STYLE produces 26k unique templates. In other words, given the test set of 30k, +STYLE produces a novel templated output for over 86% of the input MRs. While it is interesting to note that each “more informed” model produces more unique templates, we also want to characterize how frequently templates are reused. Figure 3 shows the number of times each model repeats its top 20 most frequently used templates. For example, the Rank 1 most frequently used template for the BASE model is “I had the [FOOD] [FOOD].”, and it is used 550 times (out of the 30k outputs). For +STYLE, the Rank 1 most frequently used template is “I had the [FOOD] [FOOD] and it was delicious.”, and it is only used 130 times. The number of repetitions decreases as the template rank moves from 1 to 20, and repetition count is always significantly lower for +STYLE, indicating more variation. Examples of frequent templates from the BASE and +STYLE models are are shown in Appendix B. 1 5 10 15 20 100 200 300 400 500 Template Rank Number of Repetitions base +adj +sent +style Figure 3: Number of output template repetitions for the 20 most frequent templates (+STYLE has the fewest repetitions, i.e. it is the most varied). Achieving Other Style Goals. The +STYLE model is the only one with access to first-person, length, and exclamation markup, so we also measure its ability to hit these stylistic goals. The average sentence length for the +STYLE model for LEN=SHORT is 7.06 words, LEN=MED is 13.08, and LEN=LONG is 22.74, closely matching the average lengths of the test references in those cases, i.e. 6.33, 11.05, and 19.03, respectively. The model correctly hits the target 99% of the time for first person (it is asked to produce this for 15k of the 30k test instances), and 100% of the time for exclamation (2k instances require exclamation). 4.3 Human Quality Evaluation We evaluate output quality using human annotators on Mechanical Turk. 
As in our corpus quality evaluation from Section 2.2, we randomly sample 200 MRs from the test set, along with the corresponding outputs for each of the 4 models, and ask 5 annotators to rate each output on a 1-5 Likert scale for content, fluency, and sentiment (1 for very negative, 5 for very positive11). Table 8 shows the average scores by criteria and model.12 For content and fluency, all average ratings are very high, above 4.3 (out of 5). The differences between models are small, but it is interesting 11As in Sec 2.2, we scale the sentiment scores into 3 bins to match our Yelp review sentiment. 12The average correlation between each annotator’s ratings and the average rating for each item is 0.73. 5946 to note that the BASE and +STYLE models are almost tied on fluency (although BASE outputs may appear more fluent due to their comparably shorter length). In the case of sentiment error, the largest error is 0.75 (out of 3), with the smallest sentiment error (0.56) achieved by the +STYLE model. Examination of the outputs reveals that the most common sentiment error is producing a neutral sentence when negative sentiment is specified. This may be due to the lower frequency of negative sentiment in the corpus as well as noise in automatic sentiment annotation. BASE +ADJ +SENT +STYLE Content ↑ 4.35* 4.53 4.51 4.49 Fluency ↑ 4.43 4.36 4.37 4.41 Sentiment Err ↓ 0.75* 0.71* 0.67* 0.56 Table 8: Human quality evaluation (higher is better for content and fluency, lower is better for sentiment error). Paired t-test for each model vs.+STYLE, * is p < 0.05. 5 Related Work Recent efforts on data acquisition for NNLG has relied almost exclusively on crowdsourcing. Novikova et al. (2017a) used pictorial representations of restaurant MRs to elicit 50k varied restaurant descriptions through crowdsourcing. Wen et al. (2015a; 2015b) also create datasets for the restaurant (5k), hotel (5k), laptop (13k), and TV (7k) domains by asking Turkers to write NL realizations for different combinations of input dialog acts in the MR. Work on the WEBNLG challenge has also focused on using existing structured data, such as DBPedia, as input into an NLG (Gardent et al., 2017), where matching NL utterances are also crowdsourced. Other recent work on collecting datasets for dialog modeling also use largescale crowdsourcing (Budzianowski et al., 2018). Here, we completely avoid having to crowdsource any data by working in reverse: we begin with naturally occurring user reviews, and automatically construct MRs from them. This allows us to create a novel dataset YELPNLG, the largest existing NLG dataset, with 300k parallel MR to sentence pairs with rich information on attribute, value, description, and mention order, in addition to a set of sentence-level style information, including sentiment, length, and pronouns. In terms of control mechanisms, very recent work in NNLG has begun to explore using an explicit sentence planning stage and hierarchical structures (Moryossef et al., 2019; Balakrishnan et al., 2019). In our own work, we show how we are able to control various aspects of style with simple supervision within the input MR, without requiring a dedicated sentence planner, and in line with the end-to-end neural generation paradigm. 
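As a brief aside on the sentiment-error figures reported in Section 4.3 (Table 8), the metric can be reproduced with a sketch like the one below. The 3-bin mapping of Likert scores (1-2 negative, 3 neutral, 4-5 positive) and the use of mean absolute error on the binned scale are assumptions; the paper states only that annotator scores are scaled into three bins to match the Yelp review sentiment.

```python
def to_bin(likert_score):
    """Map a 1-5 sentiment rating onto a 3-point scale (assumed binning)."""
    if likert_score <= 2:
        return 1   # negative
    if likert_score == 3:
        return 2   # neutral
    return 3       # positive

def sentiment_error(annotator_scores, target_sentiments):
    """Mean absolute difference between the binned annotator rating and the
    sentiment requested in the MR (negative=1, neutral=2, positive=3)."""
    target_map = {"negative": 1, "neutral": 2, "positive": 3}
    errors = [abs(to_bin(score) - target_map[target])
              for score, target in zip(annotator_scores, target_sentiments)]
    return sum(errors) / len(errors)

print(sentiment_error([5, 3, 2, 5], ["positive", "neutral", "neutral", "negative"]))
# -> 0.75 on this toy sample, i.e. on the same 0-3 scale used in Table 8
```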
Previous work has primarily attempted to individually control aspects of content preservation and style attributes such as formality and verb tense, sentiment (2017), and personality in different domains such as news and product reviews (Fu et al., 2018), movie reviews (Ficler and Goldberg, 2017; Hu et al., 2017), restaurant descriptions (Oraby et al., 2018), and customer care dialogs (Herzig et al., 2017). To our knowledge, our work is the very first to generate realizations that both express particular semantics and exhibit a particular descriptive or lexical style and sentiment. It is also the first work to our knowledge that controls lexical choice in neural generation, a long standing interest of the NLG community (Barzilay and Lee, 2002; Elhadad, 1992; Radev, 1998; Moser and Moore, 1995; Hirschberg, 2008). 6 Conclusions This paper presents the YelpNLG corpus, a set of 300,000 parallel sentences and MR pairs generated by sampling freely available review sentences that contain attributes of interest, and automatically constructing MRs for them. The dataset is unique in its huge range of stylistic variation and language richness, particularly compared to existing parallel corpora for NLG. We train different models with varying levels of information related to attributes, adjective dependencies, sentiment, and style information, and present a rigorous set of evaluations to quantify the effect of the style markup on the ability of the models to achieve multiple style goals. For future work, we plan on exploring other models for NLG, and on providing models with a more detailed input representation in order to help preserve more dependency information, as well as to encode more information on syntactic structures we want to realize in the output. We are also interested in including richer, more semanticallygrounded information in our MRs, for example using Abstract Meaning Representations (AMRs) (Dorr et al., 1998; Banarescu et al., 2013; Flanigan et al., 2014). Finally, we are interested in reproducing our corpus generation method on various other domains to allow for the creation of numerous useful datasets for the NLG community. 5947 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural nlg from compositional representations in task-oriented dialogue. To appear in Proceedings of ACL 19. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. Regina Barzilay and Lillian Lee. 2002. Bootstrapping lexical choice via multiple-sequence alignment. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 164–171. Association for Computational Linguistics. S. Boyce and A. L. Gorin. 1996. User interface issues for natural spoken dialogue systems. In Proceedings of International Symposium on Spoken Dialogue, pages 65–68. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. 
MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740–750. ACL. Bonnie J. Dorr, Nizar Habash, and David R. Traum. 1998. A thematic hierarchy for efficient generation from lexical-conceptual structure. In Proceedings of the Third Conference of the Association for Machine Translation in the Americas on Machine Translation and the Information Soup, AMTA ’98, pages 333– 343, London, UK, UK. Springer-Verlag. Ondrej Dusek and Filip Jurc´ıcek. 2016. A contextaware natural language generator for dialogue systems. CoRR, abs/1608.07076. Michael Elhadad. 1992. Using Argumentation to Control Lexical Choice: a Functional Unification Implementation. Ph.D. thesis, Columbia University. Angela Fan, David Grangier, and Michael Auli. 2017. Controllable abstractive summarization. CoRR, abs/1711.05217. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. CoRR, abs/1707.02633. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426– 1436, Baltimore, Maryland. Association for Computational Linguistics. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), pages 663–670. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating Training Corpora for NLG Micro-Planning. In 55th annual meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, page 249256. Jonathan Herzig, Michal Shmueli-Scheuer, Tommy Sandbank, and David Konopnicki. 2017. Neural response generation for customer service based on personality traits. In Proceedings of the 10th International Conference on Natural Language Generation, pages 252–256. Julia Hirschberg. 2008. Speaking more like you: Lexical, acoustic/prosodic, and discourse entrainment in spoken dialogue systems. In Proc. of the 8th SIGdial Workshop on Discourse and Dialogue, page 128. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In ICML, volume 70 of Proceedings of Machine Learning Research, pages 1587–1596. PMLR. Juraj Juraska and Marilyn Walker. 2018. Characterizing variation in crowd-sourced data for training neural language generators to produce stylistically varied outputs. In Proceedings of the 11th International Conference on Natural Language Generation, pages 441–450. Association for Computational Linguistics. Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean Senellart, and Alexander Rush. 2018. Opennmt: Neural machine translation toolkit. 
In 5948 Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 177–184. Association for Machine Translation in the Americas. Gerasimos Lampouras and Andreas Vlachos. 2016. Imitation learning for language generation from unaligned data. In COLING, pages 1101–1112. ACL. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2015. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. CoRR, abs/1509.00838. Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. CoRR, abs/1904.03396. Margaret G. Moser and Johanna Moore. 1995. Investigating cue selection and placement in tutorial discourse. In ACL 95, pages 130–137. Jekaterina Novikova, Ondrej Duˇsek, and Verena Rieser. 2017a. The E2E dataset: New challenges for end-to-end generation. In Proceedings of the 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Saarbrucken, Germany. ArXiv:1706.09254. Jekaterina Novikova, Ondrej Duˇsek, and Verena Rieser. 2017b. The E2E NLG shared task. Shereen Oraby, Sheideh Homayon, and Marilyn Walker. 2017. Harvesting creative templates for generating stylistically varied restaurant reviews. In Proceedings of the Workshop on Stylistic Variation, pages 28–36, Copenhagen, Denmark. Association for Computational Linguistics. Shereen Oraby, Lena Reed, Shubhangi Tandon, Sharath TS, Stephanie Lukin, and Marilyn Walker. 2018. Controlling personality-based stylistic variation with neural natural language generators. In Proceedings of the 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), page 15321543. Dragomir R. Radev. 1998. Learning correlations between linguistic indicators and semantic constraints: Reuse of context-dependent descriptions of entities. In COLING-ACL, pages 1072–1078. Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may i introduce the gyafc dataset: Corpus, benchmarks and metrics for formality style transfer. Lena Reed, Shereen Oraby, and Marilyn Walker. 2018. Can neural generators for dialogue learn sentence planning and discourse structuring? In Proceedings of the 11th International Conference on Natural Language Generation, pages 284–295. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35–40. Association for Computational Linguistics. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS, pages 6833–6844. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Stroudsburg, PA. 
Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):19291958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei hao Su, David Vandyke, and Steve J. Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In HLT-NAACL. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015a. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721. Association for Computational Linguistics. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Peihao Su, David Vandyke, and Steve J. Young. 2015b. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. CoRR, abs/1508.01745. 5949 Appendix A Model Configurations Here we describe final model configurations for the most complex model, +STYLE, after experimenting with different parameter settings. The encoder and decoder are each three layer LSTMs with 600 units. We use Dropout (Srivastava et al., 2014) of 0.3 between RNN layers. Model parameters are initialized using Glorot initialization (Glorot and Bengio, 2010) and are optimized using stochastic gradient descent with mini-batches of size 64. We use a learning rate of 1.0 with a decay rate of 0.5 that gets applied after each training epoch starting with the fifth epoch. Gradients are clipped when the absolute value is greater than 5. We tune model hyper-parameters on a development dataset and select the model of lowest perplexity to evaluate on the test dataset. Beam search with three beams is used during inference. MRs are represented using 300 dimensional embeddings. The target side word embeddings are initialized using pre-trained Glove word vectors (Pennington et al., 2014) which get updated during training. Models are trained using lowercased reference texts. B Repeated Templates from BASE and +STYLE Table 9 shows the top 10 most repeated templates for the BASE and +STYLE models. Note that “# Reps” indicates the number of times the template is repeated in the test set of 30k instances; the largest number of reps is only 550 for the most frequent BASE model template, only 129 for +STYLE, meaning that the models mostly generate novel outputs for each test instance. # Reps BASE Templates 550 i had the [FOOD] [FOOD]. 477 i had the [FOOD] and [FOOD]. 174 i had the [FOOD] [FOOD] [FOOD]. 173 the [FOOD] [FOOD] was good. 171 the [FOOD] and [FOOD] were good. 166 the [FOOD] was tender and the [FOOD] was delicious. 161 i had the [FOOD] fried [FOOD]. 120 the [FOOD] [FOOD] was very good. 117 the [FOOD] was good but the [FOOD] was a little dry. +STYLE Templates 129 i had the [FOOD] [FOOD] and it was delicious. 94 had the [FOOD] and [FOOD] [FOOD] plate. 87 the [FOOD] and [FOOD] were cooked to perfection. 62 i had the [FOOD] [FOOD] and it was good. 60 i had the [FOOD] [FOOD]. 53 i had the [FOOD] and my husband had the [FOOD]. 50 i had the [FOOD] and [FOOD] and it was delicious. 34 the [FOOD] and [FOOD] skewers were the only things that were good. 
31 i had the [FOOD] [FOOD] [FOOD] and it was delicious. Table 9: Sample of 10 “most repeated” templates from BASE and +STYLE. C Sample Model Outputs for +STYLE Table 10 shows examples outputs from the +STYLE model, with specific examples of style through different forms of personal pronoun use, contrast, aggregation, and hyperbole in Tables 1114. 5950 1 (attr=food, val=meat, adj=chewy, mention=1), (attr=food, val=sauce, adj=no-adj, mention=1), +[sentiment=negative, len=medium, first-person=false, exclamation=false] The meat was chewy and the sauce had no taste. 2 (attr=food, val=artichokes, adj=no-adj, mention=1), (attr=food, val=beef-carpaccio, adj=no-adj, mention=1), +[sentiment=positive, len=long, first-person=true, exclamation=false] We started with the artichokes and beef carpaccio , which were the highlights of the meal . 3 (attr=staff, val=waitress, adj=no-adj, mention=1), (attr=food, val=meat-tips, adj=no-adj, mention=1), (attr=food, val=ribs, adj=no-adj, mention=1), +[sentiment=neutral, len=long, first-person=true, exclamation=false] The waitress came back and told us that they were out of the chicken meat tips and ribs . 4 (attr=food, val=chicken-lollipops, adj=good, mention=1), (attr=food, val=ambiance, adj=nice, mention=1), +[sentiment=positive, len=medium, first-person=false, exclamation=false] The chicken lollipops were really good , nice ambience . 5 (attr=food, val=meat, adj=no-adj, mention=1), (attr=food, val=sausage, adj=no-adj, mention=1), (attr=food, val=delimeats, adj=no-adj, mention=1), (attr=food, val=cheeses, adj=no-adj, mention=1), (attr=price, val=prices, adj=good, mention=1), +[sentiment=positive, len=medium, first-person=false, exclamation=false] Geat selection of meat , sausage , deli meats , cheeses , and good prices . 6 (attr=food, val=beef-chili, adj=amazing, mention=1), (attr=food, val=onion, adj=carmalized, mention=1), +[sentiment=positive, len=long, first-person=true, exclamation=false] The beef chili was amazing , and i loved the caramelized onions that came with it . 7 (attr=food, val=eggs, adj=runny, mention=1), (attr=food, val=crab-legs, adj=open, mention=1), +[sentiment=neutral, len=long, first-person=true, exclamation=false] The eggs were runny , and the open faced crab legs were a little too much for my taste . 8 (attr=food, val=chicken-salad, adj=grilled, mention=1), (attr=food, val=chicken, adj=no-adj, mention=1), (attr=food, val=spices, adj=right, mention=1), (attr=food, val=salad, adj=fresh, mention=1), +[sentiment=positive, len=long, firstperson=true, exclamation=false] I had the grilled chicken salad , the chicken was tender and the spices and fresh salad were just right . Table 10: Sample test outputs from Model +STYLE. 1 (attr=food, val=fish-meat, adj=no-adj, mention=1), (attr=food, val=horse-radish-sauce, adj=no-adj, mention=1), +[sentiment=positive, len=long, first-person=true, exclamation=false] I had the fish meat and it was very good, and my husband had the chicken horse-radish-sauce which he loved. 2 (attr=food, val=beef, adj=no-adj, mention=1), (attr=restaurant, val=restaurant, adj=nice, mention=1), (attr=staff, val=waiter, adj=friendly, mention=1), +[sentiment=positive, len=long, first-person=true, exclamation=false] The beef was tender, the restaurant was nice, and the waiter was friendly and helpful to us. 
3 (attr=food, val=lobster, adj=no-adj, mention=1), (attr=food, val=crab-legs, adj=no-adj, mention=1), (attr=food, val=mussels, adj=no-adj, mention=1), (attr=food, val=clams, adj=no-adj, mention=1), +[sentiment=positive, len=medium, first-person=true, exclamation=false] We had lobster, crab legs, mussels and clams. 4 (attr=food, val=crab-soup, adj=no-adj, mention=1), +[sentiment=negative, len=short, first-person=false, exclamation=false] She had the crab soup. 5 (attr=staff, val=host, adj=no-adj, mention=1), (attr=food, val=steak, adj=no-adj, mention=1), (attr=food, val=lobster, adj=no-adj, mention=1), +[sentiment=positive, len=long, first-person=false, exclamation=false] The host came out with the steak and lobster, and he said it was very good . Table 11: Examples of different pronouns from Model +STYLE. 5951 1 (attr=food, val=kids-chicken-fingers, adj=no-adj, mention=1), (attr=food, val=chicken, adj=actual, mention=1), (attr=food, val=chicken, adj=little, mention=2), +[sentiment=positive, len=long, first-person=false, exclamation=false] The kids chicken fingers are made with actual chicken, but the chicken is a little on the dry side. 2 (attr=food, val=nachos, adj=no-adj, mention=1), (attr=food, val=chicken, adj=no-adj, mention=1), +[sentiment=negative, len=long, first-person=true, exclamation=false] I ordered the nachos with chicken, and they were pretty good, but nothing to write home about. 3 (attr=food, val=chicken-tenders, adj=no-adj, mention=1), (attr=food, val=chicken-nuggets, adj=no-adj, mention=1), +[sentiment=neutral, len=long, first-person=true, exclamation=false] The chicken tenders and chicken nuggets were the only things that were good, but nothing special. 4 (attr=food, val=rice, adj=good, mention=1), (attr=food, val=meat, adj=no-adj, mention=1), +[sentiment=neutral, len=long, first-person=true, exclamation=false] The rice was good, but i wish there was more meat in the dish. Table 12: Examples of contrast from Model +STYLE. 1 (attr=food, val=meat, adj=no-adj, mention=1), (attr=food, val=sausage, adj=no-adj, mention=1), (attr=food, val=deli-meats, adj=no-adj, mention=1), (attr=food, val=cheeses, adj=no-adj, mention=1), (attr=price, val=prices, adj=good, mention=1), +[sentiment=positive, len=medium, first-person=false, exclamation=false] Great selection of meat, sausage, deli meats, cheeses, and good prices. 2 (attr=food, val=tofu, adj=fried, mention=1), (attr=food, val=lemongrass-chicken, adj=aforementioned, mention=1), +[sentiment=neutral, len=long, first-person=true, exclamation=false] I had the fried tofu and my husband had the lemongrass chicken, both of which were very good. 3 (attr=food, val=burgers, adj=different, mention=1), (attr=food, val=chicken-club, adj=grilled, mention=1), +[sentiment=positive, len=long, first-person=true, exclamation=false] We ordered two different burgers and a grilled chicken club, both of which were delicious. 4 (attr=food, val=octopus, adj=no-adj, mention=1), (attr=food, val=salmon, adj=no-adj, mention=1), (attr=food, val=tuna, adj=no-adj, mention=1), (attr=food, val=crab, adj=no-adj, mention=1), (attr=food, val=squid, adj=no-adj, mention=1), (attr=food, val=shrimp, adj=no-adj, mention=1), +[sentiment=positive, len=long, firstperson=false, exclamation=true] Octopus, salmon, tuna, crab, squid, shrimp, etc... all of it was delicious ! Table 13: Examples of aggregation from Model +STYLE. 
1 (attr=food, val=meat, adj=spectacular, mention=1), (attr=food, val=sauces, adj=no-adj, mention=1), +[sentiment=positive, len=medium, first-person=false, exclamation=false]
The meat was spectacular and the sauces were to die for.
2 (attr=food, val=maine-lobster, adj=heavenly, mention=1), (attr=food, val=crab-bisque, adj=no-adj, mention=1), +[sentiment=positive, len=long, first-person=false, exclamation=false]
The lobster claw was heavenly, and the crab bisque was a nice touch, but not overpowering.
3 (attr=food, val=meat-sauce-spaghetti, adj=no-adj, mention=1), (attr=food, val=milk-tea, adj=cold, mention=1), +[sentiment=positive, len=long, first-person=true, exclamation=false]
I had the chicken meat sauce spaghetti and it was very good and the cold milk tea was the best i have ever had.
4 (attr=food, val=seafood, adj=fresh, mention=1), (attr=food, val=chicken, adj=fried, mention=1), (attr=food, val=bread-pudding, adj=phenomenal, mention=1), +[sentiment=positive, len=long, first-person=false, exclamation=false]
The seafood was fresh, the fried chicken was great, and the bread pudding was phenomenal.
Table 14: Examples of hyperbole from Model +STYLE.
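The appendix examples above all follow the same input format: a list of (attribute, value, adjective, mention) tuples plus sentence-level style markup. As a rough illustration of how such an MR might be flattened into the token sequence a sequence-to-sequence encoder consumes, consider the sketch below; the delimiter tokens are invented for the example, since the exact serialization used for training is not spelled out here.

```python
def linearize_mr(tuples, style):
    """Flatten an MR plus style markup into one token string for the encoder.

    tuples: list of dicts with keys attr, val, adj, mention
    style:  dict with keys sentiment, len, first_person, exclamation
    The _ATTR/_VAL/_ADJ/_MENTION/_STYLE delimiters are illustrative only.
    """
    tokens = []
    for t in tuples:
        tokens += ["_ATTR", t["attr"], "_VAL", t["val"],
                   "_ADJ", t["adj"], "_MENTION", str(t["mention"])]
    for key in ("sentiment", "len", "first_person", "exclamation"):
        tokens += ["_STYLE", "%s=%s" % (key, str(style[key]).lower())]
    return " ".join(tokens)

mr = [{"attr": "food", "val": "meat", "adj": "spectacular", "mention": 1},
      {"attr": "food", "val": "sauces", "adj": "no-adj", "mention": 1}]
style = {"sentiment": "positive", "len": "medium",
         "first_person": False, "exclamation": False}
print(linearize_mr(mr, style))
```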
2019
596
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5952–5961 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5952 Automated Chess Commentator Powered by Neural Chess Engine Hongyu Zang∗and Zhiwei Yu∗and Xiaojun Wan Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {zanghy, yuzw, wanxiaojun}@pku.edu.cn Abstract In this paper, we explore a new approach for automated chess commentary generation, which aims to generate chess commentary texts in different categories (e.g., description, comparison, planning, etc.). We introduce a neural chess engine into text generation models to help with encoding boards, predicting moves, and analyzing situations. By jointly training the neural chess engine and the generation models for different categories, the models become more effective. We conduct experiments on 5 categories in a benchmark Chess Commentary dataset and achieve inspiring results in both automatic and human evaluations. 1 Introduction With games exploding in popularity, the demand for Natural Language Generation (NLG) applications for games is growing rapidly. Related researches about generating real-time game reports (Yao et al., 2017), comments (Jhamtani et al., 2018; Kameko et al., 2015), and tutorials (Green et al., 2018a,b) benefit people with entertainments and learning materials. Among these, chess commentary is a typical task. As illustrated in Figure 1, the commentators need to understand the current board and move. And then they comment about the current move (Description), their judgment about the move (Quality), the game situation for both sides (Contexts), their analysis (Comparison) and guesses about player’s strategy (Planning). The comments provide valuable information about what is going on and what will happen. Such information not only make the game more enjoyable for the viewers, but also help them learn to think and play. Our task is to design automated generation model to address all the 5 sub-tasks (Description, Quality, Comparison, Planning, and Contexts) of single-move chess commentary. ∗The two authors contributed equally to this paper. Figure 1: Chess Commentary Examples. Automatically generating chess comments draws attention from researchers for a long time. Traditional template-based methods (Sadikov et al., 2007) are precise but limited in template variety. With the development of deep learning, data-driven methods using neural networks are proposed to produce comments with high quality and flexibility. However, generating insightful comments (e.g., to explain why a move is better than the others) is still very challenging. Current neural approaches (Kameko et al., 2015; Jhamtani et al., 2018) get semantic representations from raw boards, moves, and evaluation information (threats and scores) from external chess engines. Such methods can easily ground comments to current boards and moves. But they cannot provide sufficient analysis on what will happen next in the game. Although external features are provided by powerful chess engines, the features are not in a continuous space, which may be not very suitable for context modeling and commentary generation. It is common knowledge that professional game commentators are usually game players. And expert players can usually provide more thorough analysis than amateurs. 
Inspired by this, we argue that for chess commentary generation, the generation model needs to know how to think and play in order to provide better outputs. In this paper, we introduce a neural chess engine into our generation models. The chess engine is 5953 pre-trained by supervised expert games collected from FICS Database1 and unsupervised self-play (Silver et al., 2017a,b) games, and then jointly trained with the generation models. It is able to get board representations, predict reasonable move distributions, and give continuous predictions by self-play. Our generation models are designed to imitate commentators’ thinking process by using the representations and predictions from the internal chess engine. And then the models ground commentary texts to the thinking results (semantics). We perform our experiments on 5 categories (Description, Quality, Contexts, Comparison, Planning) in the benchmark Chess Commentary dataset provided by Harsh (2018). We tried models with different chess engines having different playing strength. Both automatic and human evaluation results show the efficacy and superiority of our proposed models. The contributions are summarized as follows: • To the best of our knowledge, we are the first to introduce a compatible neural chess engine to the chess comment generation models and jointly train them, which enables the generation models benefit a lot from internal representations of game playing and analysis. • On all the 5 categories in the Chess Commentary dataset, our proposed model performs significantly better than previous stateof-the-art models. • Our codes for models and data processing will be released on GitHub2. Experiments can be easily reproduced and extended. 2 Related Works The most relevant work is (Jhamtani et al., 2018). The authors released the Chess Commentary dataset with the state-of-the-art Game Aware Commentary (GAC) generation models. Their models generate comments with extracted features from powerful search-based chess engines. We follow their work to further explore better solutions on different sub-tasks (categories) in their dataset. Another relevant research about Shogi (a similar board game to chess) commentary generation is from Kameko et al. (2015). They rely on external tools to extract key words first, and 1https://www.ficsgames.org/ 2https://github.com/zhyack/SCC then generate comments with respect to the key words. Different from their works, in this paper, we argue that an internal neural chess engine can provide better information about the game states, options and developments. And we design reasonable models and sufficient experiments to support our proposal. Chess engine has been researched for decades (Levy and Newborn, 1982; Baxter et al., 2000; David et al., 2017; Silver et al., 2017a). Powerful chess engines have already achieved much better game strength than human-beings (Campbell et al., 2002; Silver et al., 2017a). Traditional chess engines are based on rules and heuristic searches (Marsland, 1987; Campbell et al., 2002). They are powerful, but limited to the human-designed value functions. In recent years, neural models (Silver et al., 2016, 2017b; David et al., 2017) show their unlimited potential in board games. Several models are proposed and can easily beat the best human players in Go, Chess, Shogi, etc. (Silver et al., 2017a). Compared to the traditional engines, the hidden states of neural engines can provide vast information about the game and have the potential to be compatible in NLG models. 
We follow the advanced techniques and design our neural chess engine. Apart from learning to play the game, our engine is designed to make game states compatible with semantic representations, which bridges the game state space and human language space. And to realize this, we deploy multi-task learning (Collobert and Weston, 2008; Sanh et al., 2018) in our proposed models. Data-to-text generation is a popular track in NLG researches. Recent researches are mainly about generating from structured data to biography (Sha et al., 2018), market comments (Murakami et al., 2017), and game reports (Li and Wan, 2018). Here we manage to ground the commentary to the game data (boards and moves). Addressing content selection (Wiseman et al., 2017) is one of the top considerations in our designs. 3 Our Approach The overview of our approach is shown in Figure 2. Apart from the text generation models, there are three crucial modules in our approach: the internal chess engine, the move encoder, and the multichoices encoder. We will first introduce our solution to all the sub-tasks of chess commentary generation with the modules as black boxes. And then 5954 Figure 2: Overview of our chess commentary model. we describe them in details. 3.1 Our Solutions In Figure 2, an example is presented with model structures to demonstrate the way our models solving all the sub-tasks. The process is impelled by the internal chess engine. Given the current board b(0) and move m(0), the engine emulates the game and provides the current and next board states together with wining rates of the players. Besides, the engine also predicts for another optional move ˆm(0) from b(0) to make comparisons to m(0). And then a series of long-term moves (m(1), m(2), ...) and boards (b(2), b(3), ...) are further predicted by the engine in a self-play manner (Silver et al., 2017a,b) for deep analysis. With the semantics provided by the engine, generation models are able to predict with abundant and informative contexts. We will first detail the different semantic contexts with respect to models for 5 different subtasks. And then we summarize the common decoding process for all the models. Description Model: Descriptions about the current move intuitively depend on the move itself. However, playing the same move could have different motivations under different contexts. For example, e2e4 is the classic Queen Pawn Opening in a fresh start. But it can be forming a pawn defense structure in the middle of the game. Different from previous works for chess commentary generation (Jhamtani et al., 2018; Kameko et al., 2015), we find all kinds of latent relationships in the current board vital for current move analysis. Therefore, our description model takes the representation of both b(0) and m(0) from the move encoder fME as semantic contexts to produce description comment YDesc. The description model is formulated as Eq.1. fDescription(fME(b(0), m(0))) →YDesc (1) Quality Model: Harsh et al. (2018) find the wining rate features benefit the generation models on Quality category. Inspired by this, we concatenate the current board state E(0) S , the next board state E(1) S , and the wining rate difference v(1) −v(0) as semantic contexts for the decoder. And to model the value of wining rate difference, we introduce a weight matrix Wdiff to map the board state-value pair [E(0) S ; E(1) S ; v(1) −v(0)] to the same semantic space of the other contexts by Eq.2. Our quality model is formulated as Eq.3, where YQual is the target comment about quality. 
ED = Wdiff[E(0) S ; E(1) S ; v(1) −v(0)] (2) 5955 fQuality(E(0) S , E(1) S , ED) →YQual (3) Comparison Model: Usually, there are more than 10 possible moves in a given board. But not all of them are worth considering. Kameko et al. (2015) propose an interesting phenomenon in chess commentary: when the expert commentators comment about a bad move, they usually explain why the move is bad by showing the right move, but not another bad move. Inspired by this, we only consider the true move m(0) and the potential best move ˆm(0) (decided by the internal chess engine) as options for the comparison model. And the semantic contexts for the options are encoded by the multi-choices encoder. We define the comparison model as Eq.4 , where fMCE is the multi-choices encoder, b(1) is the board after executing m(0) on b(0), ˆb(1) is the board after executing ˆm(0) on b(0), and YComp is the target comment about comparison. fComparison(fMCE((b(1), m(0)), (ˆb(1), ˆm(0)))) →YComp (4) Planning Model: We can always find such scenes where commentators try to predict what will happen assuming they are playing the game. And then they give analysis according to their simulations. Our internal chess engine is able to simulate and predict the game in a similar way (selfplay). We realize our model for planning by imitating the human commentators’ behavior. Predicted moves and boards are processed by our multi-choices encoder to tell the potential big moments in the future. And we use the multi-choices encoder fMCE to produce the semantic contexts for the decoder. The process to generate planning comment YPlan is described in Eq.5. fPlanning(fMCE((b(2), m(1)), (b(3), m(2)), (b(4), m(3)), ...)) →YPlan (5) Contexts Model: To analyze the situation of the whole game, the model should know about not only the current, but also the future. And similar to the planning model, contexts model takes a series of long-term moves and boards produced by self-play predictions as inputs. In this way, the model comments the game in a god-like perspective. And the semantic contexts is also processed by the multi-choices encoder for generating contexts comment YCont as Eq.6. fContexts(fMCE((b(1), m(0)), (b(2), m(1)), (b(3), m(2)), (b(4), m(3)), ...)) →YCont (6) Each of the above models has a decoder (the hexagon blocks in Figure 2) for text generation and we use LSTM decoders (Sundermeyer et al., 2012). And we use cross entropy loss function for training. The function is formalized as Eq.7, where Y is the gold standard outputs. LossGen = −logp(Y |b(0); m(0)) (7) We denote E ∈IRn×d as a bunch of raw context vectors, where n is the number of such context vectors and d is the dimension of the vectors. Although the semantic contexts E for different generation models are different as described before, we regard all of the board states, wining rates, and move representations as general semantic contexts. And we use attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) to gather information from the contexts. For example, assuming that we have a hidden vector h drawing from LSTM units, to decode with the semantic contexts, we use the score function f of Luong attention (Luong et al., 2015) as f(X, y) = XWy, (8) to calculate the attention weights a for vectors in E, where W is a transformation function for the attentional context vectors. The scores are further normalized by a softmax function to a by a = softmax(f(E, h)). (9) We compute weighted sum of E with a to produce the attentional context vector z for word decoding z = E⊤a. 
(10) 3.2 The Internal Chess Engine The internal chess engine is in charge of the mapping from board B to semantic representation ES, predicting possibility distribution D on valid moves, and evaluating the wining rate v for the players. In previous works (Jhamtani et al., 2018; Kameko et al., 2015), researchers use discrete information (threats, game evaluation scores, etc.) analyzed by external chess engine to build semantic representations. It limits the capability of the 5956 representations by simply mapping the independent features. Our internal chess engine is able to mine deeper relations and semantics with the raw board as input. And it can also make predictions in a continuous semantic space, increasing the capability and robustness for generation. Following advanced researches in neural chess engines (David et al., 2017; Silver et al., 2017a), we split the input raw board into 20 feature planes F for the sake of machine understanding. There are 12 planes for pieces’ (pawn, rook, knight, bishop, queen, king) positions of each player, 4 planes for white’s repetitions, black’s repetitions, total moves, and moves with no progress, and 4 planes for 2 castling choices of each player. The feature planes F are encoded by several CNN layers to produce sufficient information for semantic representation ES. Like previous researches on chess engines, ES is used to predict the move possibility distribution D and the wining rate v by fully connected layers. But different from those pure engines, we share the board state ES with generation models in a multi-task manner (Collobert and Weston, 2008). The engine is designed not only for playing, but also for expressing. Our generation models use ES as part of the inputs to get better understanding of the game states. Given the tuple of game replays (B, M, v′) where M is the corresponding move and v′ is the ground truth wining rate, we optimize the engine’s policy, value function at the same time as Eq.11 shows. When the engine grows stronger, we let the engine produce data by itself in a self-play manner (Silver et al., 2017a). Besides, the engine jointly optimizes LossGen when training generative models. LossEng = −logp(M|B) + (v −v′)2 (11) 3.3 The Move Encoder Apart from understanding the board B, commentators also need to know the semantics of the move M. Besides using the chess engine to produce board representations ES, the move encoders also prepare for move embeddings EM as attention contexts for the text decoders. We set the features of the move (starting cell, the move ending cell, the piece at the starting cell, the piece at the ending cell, the promotion state, and the checking state) as a sequential input to a bi-directional RNN (Schuster and Paliwal, 1997). When a decoder requests attention contexts for hidden state h, the encoder offers E = [EM; ES] to build attentional context vector following Eq.9 and Eq.10. 3.4 The Multi-Choices Encoder For Comparison, Planning, and Contexts, there are multiple moves derived from variations and predictions. The model needs to find the bright spots to describe. To encode these moves and offer precise information for the generation models, we propose a multi-choices encoder. Human commentators usually choose different aspects to comment according to their experiences. We use a global vector g to store our models’ experiences and choose important moves to comment. Note that g is to be learned. 
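For concreteness, the 20-plane board encoding described in Section 3.2 can be sketched as follows with the python-chess library. This is an illustration under stated assumptions, not the authors' implementation: the plane ordering and the scaling of the scalar planes are guesses, and the repetition planes are left as placeholders because they require the full game history.

```python
import numpy as np
import chess  # pip install python-chess

def board_planes(board: chess.Board) -> np.ndarray:
    """20 x 8 x 8 input tensor: 12 piece-position planes (6 piece types x 2
    colours), 4 scalar planes (repetitions for each side, total moves, moves
    with no progress), and 4 castling-rights planes."""
    planes = np.zeros((20, 8, 8), dtype=np.float32)
    for sq in chess.SQUARES:
        piece = board.piece_at(sq)
        if piece is None:
            continue
        offset = 0 if piece.color == chess.WHITE else 6
        planes[offset + piece.piece_type - 1,      # PAWN=1 ... KING=6
               chess.square_rank(sq), chess.square_file(sq)] = 1.0
    # Planes 12-13: repetition counts (placeholders; need the game history).
    planes[14].fill(board.fullmove_number / 100.0)  # total moves (assumed scaling)
    planes[15].fill(board.halfmove_clock / 50.0)    # moves with no progress
    planes[16].fill(float(board.has_kingside_castling_rights(chess.WHITE)))
    planes[17].fill(float(board.has_queenside_castling_rights(chess.WHITE)))
    planes[18].fill(float(board.has_kingside_castling_rights(chess.BLACK)))
    planes[19].fill(float(board.has_queenside_castling_rights(chess.BLACK)))
    return planes

print(board_planes(chess.Board()).shape)  # -> (20, 8, 8)
```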
In module (c) of Figure 2, we denote Ei M as the output vectors of the ith move encoder, Ei S as the board state of the i-th board, and Ei V as the embedding of wining rate vi of the i-th board. To model the wining rate value, we introduce a mapping matrix Mval and process the state-value pair to the value embedding as Ei V = Wval[Ei S, vi]. (12) Then we calculate the soft weights of choices c = {c1, c2, ...} with respect to the board states S = {E1 S, E2 S, ...} by Eq.13. For hidden state vector h from decoder, attention weight matrix A = {A1, A2, ...} are scaled by c via Eq.14. And we finally get attentional context vector z according to A by Eq.15. This approach enables generation models to generate comments with attention to intriguing board states. And the attention weights can be more accurate when g accumulates abundant experiences in training. c = softmax(gS) (13) Ai = ci ∗softmax(f([Ei M; Ei S; Ei V ], h)) (14) z = X i ([Ei M; Ei S; Ei V ])⊤Ai (15) 4 Experiments 4.1 Dataset We conduct our experiments on recently proposed Chess Commentary dataset3 (Jhamtani et al., 2018). In this dataset, Harsh et al. (2018) collect and process 11,578 annotated chess games from a large social forum GAMEKNOT4. There are 298K aligned data pairs of game moves and 3https://github.com/harsh19/ChessCommentaryGeneration/ 4https://gameknot.com 5957 commentaries. The dataset is split into training set, validation set and test set as a 7:1:2 ratio with respect to the games. As the GAMEKNOT is a free-speech forum, the comments can be very freewheeling in grammar and morphology. The informal language style and unpredictable expression tendency make a big challenge for data-driven neural generation models. To narrow down the expression tendency, Harsh et al. (2018) classify the dataset into 6 categories: Description, Quality, Comparison, Planning, Contexts, and General. The General category is usually about the player and tournament information, which needs external knowledge irrelevant to game analysis. We do not conduct experiments on the last category. And for the training of chess engine, we collect all of the standard chess game records in the past 10 years from FICS Games Database. And we remove the games where any player’s rating below 2,000. There are 36M training data (for single move step) after cleaning. 4.2 Experiment Settings and Baselines We train our neural chess engine using mixed data consisting of supervised FICS data and unsupervised self-play data. The number of self-play games are set to 0 initially. And it will be increased by 1 when the trained model beats the previous best version (with a wining rate larger than 0.55 in 20 games). During 400 iterations of training, we pick one strong engine and one weak engine for further experiments. The stronger engine loses 1 game and draws 55 games to the weak engine in 100 games. As mentioned in Section 3.2, when training generation models, we use the pretrained chess engine and fine-tune it with the generation models. Here we introduce our models and baselines in the experiments. We call our models the Skilled Chess Commentator (SCC) as they have the skills of playing chess. • SCC-weak: The generation models are integrated with the weak engine mentioned above, and they are trained independently with respect to the 5 categories in Chess Commentary dataset. • SCC-strong: The model is similar to SCCweak, but integrated with the strong engine. 
• SCC-mult: This is a multi-task learning model where generation models for different categories share the strong chess engine, move encoder, the multi-choices encoder and the value mapping matrix Wval. • GAC: The state-of-the-art method proposed by Harsh et al. (2018). Their models incorporate the domain knowledge provided by external chess engines. Their models only work for first 3 categories: Description, Quality, and Comparison. We will compare our results with GAC on these categories. • KWG: Another state-of-the-art method for game commentary generation (Kameko et al., 2015). It is a pipeline method based on keyword generation. We compare the results on all data categories. • Temp: This is a template-based baseline methods. Together with the dataset, Harsh et al. (2018) provide templates for the first two categories. Inspired by (Sadikov et al., 2006), we extend the templates to fit for all the 5 categories. • Re: This is a retrieval-based baseline method. For each input in the test set, we find the most matched datum in the training set by numbers of matched input board and move features. 4.3 Evaluation Metrics We develop both automatic evaluations and human evaluations to compare the models. For automatic evaluations, we use BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) to evaluate the generated comments with ground-truth outputs. BLEU evaluates the modified precision between the predicted texts and gold-standard references on corpus level. Evaluating with 4-grams (BLEU-4 5) is the most popular way in NLG researches. However, for tasks like dialogue system (Li et al., 2016), story telling generation (Jain et al., 2017), and chess commentary (Jhamtani et al., 2018), the outputs can be rather short and free expressions. Under such circumstances, brevity penalty for 4-grams can be too strict and makes the results unbalanced. We use BLEU-2 6 to show more steady results with BLEU 5https://github.com/moses-smt/mosesdecoder/blob/ master/scripts/generic/multi-bleu.perl 6https://github.com/harsh19/ChessCommentaryGeneration/ blob/master/Code/methods/category aware/BLEU2.perl 5958 BLEU-4 (%) Temp Re KWG GAC SCC-weak SCC-strong SCC-mult Description 0.82 1.24 1.22 1.42 1.23 1.31 1.34 Quality 13.71 4.91 13.62 16.90 16.83 18.87 20.06 Comparison 0.11 1.03 1.07 1.37 2.33 3.05 2.53 Planning 0.05 0.57 0.84 N/A 1.07 0.99 0.90 Contexts 1.94 2.70 4.39 N/A 4.04 6.21 4.09 BLEU-2 (%) Temp Re KWG GAC SCC-weak SCC-strong SCC-mult Description 24.42 22.11 18.69 19.46 23.29 25.98 25.87 Quality 46.29 39.14 55.13 47.80 58.53 61.13 61.62 Comparison 7.33 22.58 20.06 24.89 24.85 27.48 23.47 Planning 3.38 20.34 22.02 N/A 22.28 25.82 24.32 Contexts 26.03 30.12 31.58 N/A 37.32 41.59 38.59 METEOR (%) Temp Re KWG GAC SCC-weak SCC-strong SCC-mult Description 6.26 5.27 6.07 6.19 6.03 6.83 7.10 Quality 22.95 17.01 22.86 24.20 24.89 25.57 25.37 Comparison 4.27 8.00 7.70 8.54 8.25 9.44 9.13 Planning 3.05 6.00 6.76 N/A 6.18 7.14 7.30 Contexts 9.46 8.90 10.31 N/A 11.07 11.76 11.09 Table 1: Automatic evaluation results. evaluation algorithm. We also use METEOR as a metric, whose results are more closed to a normal distribution (Dobre, 2015). We also conduct human evaluation to make more convincing comparisons. We recruit 10 workers on Amazon Mechanical Turk7 to evaluate 150 groups of samples (30 from each category). Each sample is assigned to exactly 2 workers. The workers rate 8 shuffled texts (for Ground Truth, Temp, Re, GAC, KWG, and SCC models) for the following 4 aspect in a 5-pt Likert scale8. 
• Fluency: Whether the comment is fluent and grammatical. • Accuracy: Whether the comment correctly describes current board and move. • Insights: Whether the comment makes appropriate predictions and thorough analysis. • Overall: The annotators’ overall impression about comments. 4.4 Results and Analysis We present the automatic evaluation results in Table 1. Our SCC models outperform all of the baselines and previous state-of-the-art models. Temp 7https://www.mturk.com 8https://en.wikipedia.org/wiki/Likert scale is limited by the variety of templates. It is competitive with the neural models on Description and Quality due to limited expressions in these tasks. But when coming to Comparison, Planning and Contexts, Temp shows really bad performances. Re keeps flexibility by copying the sentences from training set. But it does not perform well, either. The ability of Re is limited by the sparse searching space, where there are 90,743 data in the training set, but 1043 possible boards9 for chess game. KWG and GAC provide competitive results. With the help of external information from powerful chess engines, GAC shows good performances on Quality and Comparison. Although our internal chess engine is no match for the external engines that GAC uses at playing chess, it turns out that our models with directly internal information can better bridge the semantic spaces of chess game and comment language. As for the comparisons within our models, SCC-strong turns to be better than SCC-weak, which supports our assumption that better skills enable more precise predictions, resulting in better comments. Training with multi-task learning seems to hurt the overall performances a little. But SCC-mult still has the state-of-the-art performances. And more important, it can react to all sub-tasks as a whole. The human annotators are required to be good 9https://en.wikipedia.org/wiki/Shannon number 5959 Figure 3: Samples for case study. Models Fluency Accuracy Insights Overall Ground Truth 4.02 3.88 3.58 3.84 Temp 4.05 4.03 3.02 3.56 Re 3.71 3.00 2.80 2.85 KWG 3.51 3.24 2.93 3.00 SCC-weak 3.63 3.62 3.32 3.30 SCC-strong 3.81 3.74 3.49 3.49 SCC-mult 3.82 3.91 3.51 3.61 GAC* 3.68 3.32 2.99 3.14 SCC-mult* 3.83 3.99 3.46 3.52 Table 2: Human evaluation results. Models marked with * are evaluated only for the Description, Quality, and Comparison categories. The underlined results are significantly worse than those of SCC-mult(*) in a two-tail T-test (p<0.01). at playing chess. That is to say, they are the true audiences of the commentator researches and applications. By introducing human evaluations, we further reveal the performances in the perspective of the audiences. We show the average scores and significance test results in Table 2. We further demonstrate the efficacy of our models with significantly better overall performances than the retrieval-based model and previous state-of-the-art ones. It is worth noting that the evaluations about Accuracy and Insights show that our models can produce more precise and thorough analysis owing to the internal chess engine. SCC-mult and SCCstrong perform better than SCC-weak in Accuracy and Overall scores. It also supports the points that the our commentary model can be improved with better internal engine. 4.5 Case Study To have a better view of comparisons among model outputs, we present and analyze some samples in Figure 3. In these samples, our model refers to SCC-mult. 
For the first example, black can exchange white’s e3 knight and e4 pawn with the b4 bishop if white takes no action. But white chooses to protect the e3 knight with the g1 knight. All the models generate comments about Description. Temp directly describes the move without explanation. Re finds similar situation in the training set and explains the move as defense and developing. KWG is right about developing, but wrong about the position of the knight and the threats. GAC produces safe comment about the developing. And our model has a better understanding about the boards. It annotates the move correctly and even gives the reason why white plays this move. For the second example, the game is at the 3rd turn. White gives up the pawn on d5 and chooses to push the queen’s pawn. Re and KWG both make a mistake and recognize the move d2d4 as Queen Pawn Opening. Temp thinks white is going to win because white have the advantage of one more pawn. However, Temp cannot predict that white will lose the advantage in the next move. Our model is able to predict the future moves via self-play. And it draws the conclusion that pushing the queen’s pawn can open up the ways for the queen and bishop for future planning. 5 Conclusion and Future Work In this work we propose a new approach for automated chess commentary generation. We come up with the idea that models capable of playing chess will generate good comments, and models with better playing strength will perform better in generation. By introducing a compatible chess engine to comment generation models, we get models that can mine deeper information and ground 5960 more insightful comments to the input boards and moves. Comprehensive experiments demonstrate the effectiveness of our models. Our experiment results show the direction to further developing the state-of-the-art chess engine to improve generation models. Another interesting direction is to extend our models to multimove commentary generation tasks. And unsupervised approaches to leverage massive chess comments in social media is also worth exploring. Acknowledgments This work was supported by National Natural Science Foundation of China (61772036) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Jonathan Baxter, Andrew Tridgell, and Lex Weaver. 2000. Learning to play chess using temporal differences. Machine Learning, 40(3):243–263. Murray Campbell, A Joseph Hoane Jr, and Fenghsiung Hsu. 2002. Deep blue. Artificial intelligence, 134(1-2):57–83. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Machine Learning, Proceedings of (ICML 2008), pages 160–167. Eli David, Nathan S. Netanyahu, and Lior Wolf. 2017. Deepchess: End-to-end deep neural network for automatic learning in chess. CoRR, abs/1711.09667. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Iuliana Dobre. 2015. 
2019
597
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5962–5971 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5962 Barack’s Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling Robert L. Logan IV∗ Nelson F. Liu†§ Matthew E. Peters§ Matt Gardner§ Sameer Singh∗ ∗University of California, Irvine, CA, USA † University of Washington, Seattle, WA, USA § Allen Institute for Artificial Intelligence, Seattle, WA, USA {rlogan, sameer}@uci.edu, {mattg, matthewp}@allenai.org, nfl[email protected] Abstract Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge. However, traditional language models are only capable of remembering facts seen at training time, and often have difficulty recalling them. To address this, we introduce the knowledge graph language model (KGLM), a neural language model with mechanisms for selecting and copying facts from a knowledge graph that are relevant to the context. These mechanisms enable the model to render information it has never seen before, as well as generate out-of-vocabulary tokens. We also introduce the Linked WikiText2 dataset,1 a corpus of annotated text aligned to the Wikidata knowledge graph whose contents (roughly) match the popular WikiText-2 benchmark (Merity et al., 2017). In experiments, we demonstrate that the KGLM achieves significantly better performance than a strong baseline language model. We additionally compare different language models’ ability to complete sentences requiring factual knowledge, and show that the KGLM outperforms even very large language models in generating facts. 1 Introduction For language models to generate plausible sentences, they must be both syntactically coherent as well as consistent with the world they describe. Although language models are quite skilled at generating grammatical sentences, and previous work has shown that language models also possess some degree of common-sense reasoning and basic knowledge (Vinyals and Le, 2015; Serban et al., 2016; Trinh and Le, 2019), their ability to generate factually correct text is quite limited. The clearest limitation of existing language models is that they, at best, can only memorize facts observed during 1https://rloganiv.github.io/linked-wikitext-2 [Super Mario Land] is a [1989] [side-scrolling] [platform video game] developed and published by [Nintendo] as a [launch title] for their [Game Boy] [handheld game console]. Date 21 April 1989 Q828322 platform game Q8093 Nintendo Q647249 Super Mario Land Q186437 Game Boy Q941818 handheld game console Q2281714 side-scrolling video game Q1425505 launch game Publication Date genre publisher platform manufacturer instance of Figure 1: Linked WikiText-2 Example. A localized knowledge graph containing facts that are (possibly) conveyed in the sentence above. The graph is built by iteratively linking each detected entity to Wikidata, then adding any relations to previously mentioned entities. Note that not all entities are connected, potentially due to missing relations in Wikidata. training. For instance, when conditioned on the text at the top of Figure 1, an AWD-LSTM language model (Merity et al., 2018) trained on Wikitext-2 assigns higher probability to the word “PlayStation” than “Game Boy”, even though this sentence appears verbatim in the training data. 
This is not surprising—existing models represent the distribution over the entire vocabulary directly, whether they are common words, references to real world entities, or factual information like dates and numbers. As a result, language models are unable to generate factually correct sentences, do not generalize to rare/unseen entities, and often omit rare tokens from the vocabulary (instead generating UNKNOWN tokens). We introduce the knowledge graph language model (KGLM), a neural language model with mechanisms for selecting and copying information from an external knowledge graph. The KGLM maintains a dynamically growing local knowledge 5963 graph, a subset of the knowledge graph that contains entities that have already been mentioned in the text, and their related entities. When generating entity tokens, the model either decides to render a new entity that is absent from the local graph, thereby growing the local knowledge graph, or to render a fact from the local graph. When rendering, the model combines the standard vocabulary with tokens available in the knowledge graph, thus supporting numbers, dates, and other rare tokens. Figure 1 illustrates how the KGLM works. Initially, the graph is empty and the model uses the entity Super Mario Land to render the first three tokens, thus adding it and its relations to the local knowledge graph. After generating the next two tokens (“is”, “a”) using the standard language model, the model selects Super Mario Land as the parent entity, Publication Date as the relation to render, and copies one of the tokens of the date entity as the token (“1989” in this case). To facilitate research on knowledge graph-based language modeling, we collect the distantly supervised Linked WikiText-2 dataset. The underlying text closely matches WikiText-2 (Merity et al., 2017), a popular benchmark for language modeling, allowing comparisons against existing models. The tokens in the text are linked to entities in Wikidata (Vrandeˇci´c and Krötzsch, 2014) using a combination of human-provided links and off-theshelf linking and coreference models. We also use relations between these entities in Wikidata to construct plausible reasons for why an entity may have been mentioned: it could either be related to an entity that is already mentioned (including itself) or a brand new, unrelated entity for the document. We train and evaluate the KGLM on Linked WikiText-2. When compared against AWD-LSTM, a recent and performant language model, KGLM obtains not only a lower overall perplexity, but also a substantially lower unknown-penalized perplexity (Ueberla, 1994; Ahn et al., 2016), a metric that allows fair comparisons between models that accurately model rare tokens and ones that predict them to be unknown. We also compare factual completion capabilities of these models, where they predict the next word after a factual sentence (e.g., “Barack is married to ”) and show that KGLM is significantly more accurate. Lastly, we show that the model is able to generate accurate facts for rare entities, and can be controlled via modifications the knowledge graph. 2 Knowledge Graph Language Model In this section we introduce a language model that is conditioned on an external, structured knowledge source, which it uses to generate factual text. 2.1 Problem Setup and Notation A language model defines a probability distribution over each token within a sequence, conditioned on the sequence of tokens observed so far. 
We denote the random variable representing the next token as xt and the sequence of the tokens before t as x<t, i.e. language models compute p(xt|x<t). RNN language models (Mikolov et al., 2010) parameterize this distribution using a recurrent structure: p(xt|x<t) = softmax(Whht + b), ht = RNN(ht−1, xt−1). (1) We use LSTMs (Hochreiter and Schmidhuber, 1997) as the recurrent module in this paper. A knowledge graph (KG) is a directed, labeled graph consisting of entities E as nodes, with edges defined over a set of relations R, i.e. KG = {(p, r, e) | p ∈E, r ∈R, e ∈E}, where p is a parent entity with relation r to another entity e. Practical KGs have other aspects that make this formulation somewhat inexact: some relations are to literal values, such as numbers and dates, facts may be expressed as properties on relations, and entities have aliases as the set of strings that can refer to the entity. We also define a local knowledge graph for a subset of entities E<t as KG<t = {(p, r, e) | p ∈E<t, r ∈R, e ∈E}, i.e. contains entities E<t and all facts they participate in. 2.2 Generative KG Language Model The primary goal of the knowledge graph language model (KGLM) is to enable a neural language model to generate entities and facts from a knowledge graph. To encourage the model to generate facts that have appeared in the context already, KGLM will maintain a local knowledge graph containing all facts involving entities that have appeared in the context. As the model decides to refer to entities that have not been referred to yet, it will grow the local knowledge graph with additional entities and facts to reflect the new entity. Formally, we will compute p(xt, Et|x<t, E<t) where x<t is the sequence of observed tokens, E<t is the set of entities mentioned in x<t, and KG<t is the local knowledge graph determined by E<t, as described above. The generative process is: 5964 Super Mario Land is a 1989 side-scrolling platform video game developed and published by AAA I nc. Sony I nc. . . . . . . Zzyzx, CA pl at f or m game Super Mar i o Land . . . si de- scr ol l i ng game Super Mar i o Land Ni nt endo Game Boy pl at f or m game 1989 PUBLI SHER GENRE PLATFORM PUB. DATE a the dog ... company Kabushiki Koppai Nintendo ... Relation to Existing Entity Mention of a New Entity Not an Entity Mention Distribution over standard vocabulary and aliases of et Distribution over standard vocabulary standard vocabulary aliases of et SELF Nintendo pick from all entities parent from local entities Figure 2: KGLM Illustration. When trying to generate the token following “published by”, the model first decides the type of the mention (tt) to be a related entity (darker indicates higher probability), followed by identifying the parent (pt), relation (rt), and entity to render (et) from the local knowledge graph as (Super Mario Land, Publisher, Nintendo). The final distribution over the words includes the standard vocabulary along with aliases of Nintendo, and the model selects “Nintendo” as the token xt. Facts related to Nintendo will be added to the local graph. • Decide the type of xt, which we denote by tt: whether it is a reference to an entity in KG<t (related), a reference to an entity not in KG<t (new), or not an entity mention (∅). • If tt = new then choose the upcoming entity et from the set of all entities E. • If tt = related then: – Choose a parent entity pt from E<t. – Choose a factual relation rt to render, rt ∈{(p, r, e) ∈KG<t|p = pt}. – Choose et as one of the tail entities, et ∈{e|(pt, rt, e) ∈KG<t}. 
• If tt = ∅then et = ∅. • Generate xt conditioned on et, potentially copying one of et’s aliases. • If et /∈E<t, then E<(t+1) ←E<t ∪{et}, else E<(t+1) ←E<t. For the model to refer to an entity it has already mentioned, we introduce a Reflexive relation that self-relates, i.e. p = e for (p, Reflexive, e). An illustration of this process and the variables is provided in Figure 2, for generating a token in the middle of the same sentence as in Figure 1. Amongst the three mention types (tt), the model chooses a reference to existing entity, which requires picking a fact to render. As the parent entity of this fact (pt), the model picks Super Mario Land, and then follows the Publisher relation (rt) to select Nintendo as the entity to render (et). When rendering Nintendo as a token xt, the model has an expanded vocabulary available to it, containing the standard vocabulary along with all word types in any of the aliases of et. Marginalizing out the KG There is a mismatch between our initial task requirement, p(xt|x<t), and the model we describe so far, which computes p(xt, Et|x<t, E<t). We will essentially marginalize out the local knowledge graph to compute the probability of the tokens, i.e. p(x) = P E p(x, E). We will clarify this, along with describing the training and the inference/decoding algorithms for this model and other details of the setup, in Section 4. 2.3 Parameterizing the Distributions The parametric distributions used in the generative process above are defined as follows. We begin by computing the hidden state ht using the formula in Eqn (1). We then split the vector into three components: ht = [ht,x; ht,p; ht,r], which are respectively used to predict words, parents, and relations. The type of the token, tt, is computed using a single-layer softmax over ht,x to predict one of {new, related, ∅}. Picking an Entity We also introduce pretrained embeddings for all entities and relations in the 5965 knowledge graph, denoted by ve for entity e and vr for relation r. To select et from all entities in case tt = new, we use: p(et) = softmax(ve · (ht,p + ht,r)) over all e ∈E. The reason we add ht,p and ht,r is to mimic the structure of TransE, which we use to obtain entity and relation embeddings. Details on TransE will be provided in Section 4. For mention of a related entity, tt = related, we pick a parent entity pt using p(pt) = softmax(vp · ht,p) over all p ∈Et, then pick the relation rt using p(rt) = softmax(vr · ht,r) over all r ∈{r|(pt, r, e) ∈KGt}. The combination of pt and rt determine the entity et (which must satisfy (pt, rt, et) ∈KGt; if there are multiple options one is chosen at random). Rendering the Entity If et = ∅, i.e. there is no entity to render, we use the same distribution over the vocabulary as in Eqn (1) - a softmax using ht,x. If there is an entity to render, we construct the distribution over the original vocabulary and a vocabulary containing all the tokens that appear in aliases of et. This distribution is conditioned on et in addition to xt. To compute the scores over the original vocabulary, ht,x is replaced by h′ t,x = Wproj[ht,x; vet] where Wproj is a learned weight matrix that projects the concatenated vector into the same vector space as ht,x. To obtain probabilities for words in the alias vocabulary, we use a copy mechanism Gu et al. (2016). The token sequences comprising each alias {aj} are embedded then encoded using an LSTM to form vectors aj. 
Copy scores are computed as: p(xt = aj) ∝exp h σ h′ t,x T Wcopy  aj i 3 Linked WikiText-2 Modeling aside, one of the primary barriers to incorporating factual knowledge into language models is that training data is hard to obtain. Standard language modeling corpora consist only of text, and thus are unable to describe which entities or facts each token is referring to. In contrast, while relation extraction datasets link text to a knowledge graph, the text is made up of disjoint sentences that do not provide sufficient context to train a powerful language model. Our goals are much more aligned to the data-to-text task (Ahn et al., 2016; Lebret et al., 2016; Wiseman et al., 2017; Yang et al., 2017; Gardent et al., 2017; Ferreira et al., 2018), where a small table-sized KB is provided to generate a short piece of text; we are interested in language models that dynamically decide the facts to incorporate from the knowledge graph, guided by the discourse. For these reasons we introduce the Linked WikiText-2 dataset, consisting of (approximately) the same articles appearing in the WikiText-2 language modeling corpus, but linked to the Wikidata (Vrandeˇci´c and Krötzsch, 2014) knowledge graph. Because the text closely matches, models trained on Linked WikiText-2 can be compared to models trained on WikiText-2. Furthermore, because many of the facts in Wikidata are derived from Wikipedia articles, the knowledge graph has a good coverage of facts expressed in the text. The dataset is available for download at: https://rloganiv.github.io/linked-wikitext-2. Our system annotates one document at a time, and consists of entity linking, relation annotations, and post-processing. The following paragraphs describe each step in detail. Initial entity annotations We begin by identifying an initial set of entity mentions within the text. The primary source of these mentions is the humanprovided links between Wikipedia articles. Whenever a span of text is linked to another Wikipedia article, we associate its corresponding Wikidata entity with the span. While article links provide a large number of gold entity annotations, they are insufficient for capturing all of the mentions in the article since entities are only linked the first time they occur. Accordingly, we use the neural-el (Gupta et al., 2017) entity linker to identify additional links to Wikidata, and identify coreferences using Stanford CoreNLP2 to cover pronouns, nominals, and other tokens missed by the linker. Local knowledge graph The next step iteratively creates a generative story for the entities using relations in the knowledge graph as well as identifies new entities. To do this, we process the text token by token. Each time an entity is encountered, we add all of the related entities in Wikidata as candi2https://stanfordnlp.github.io/CoreNLP/ 5966 Tokens xt Super Mario Land is a 1989 side - scrolling platform video game developed Mention type tt new ∅∅ related new related ∅ Entity Mentioned et SML ∅∅04-21-1989 SIDE_SCROLL PVG ∅ Relation rt ∅ ∅∅ pub date ∅ genre ∅ Parent Entity pt ∅ ∅∅ SML ∅ SML ∅ xt and published by Nintendo as a launch title for their Game Boy handheld game console . tt ∅ ∅ ∅ related ∅ ∅ new ∅ ∅ related related ∅ et ∅ ∅ ∅ NIN ∅ ∅ LT ∅ ∅ GAME_BOY HGC ∅ rt ∅ ∅ ∅ pub ∅ ∅ ∅ ∅ ∅ R:manu / platform instance of ∅ pt ∅ ∅ ∅ SML ∅ ∅ ∅ ∅ ∅ NIN / SML GAME_BOY ∅ Table 1: Example Annotation of the sentence from Figure 1, including corresponding variables from Figure 2. 
Note that Game Boy has multiple parent and relation annotations, as the platform for Super Mario Land and as manufactured by Nintendo. Wikidata identifiers are made human-readable (e.g., SML is Q647249) for clarity. dates for matching. If one of these related entities is seen later in the document, we identify the entity as a parent for the later entity. Since multiple relations may appear as explanations for each token, we allow a token to have multiple facts. Expanding the annotations Since there may be entities that were missed in the initial set, as well as non-entity tokens of interest such as dates and quantities we further expand the entity annotations using string matching. For entities, we match the set of aliases provided in Wikidata. For dates, we create an exhaustive list of all of the possible ways of expressing the date (e.g. "December 7, 1941", "7-12-1941", "1941", ...). We perform a similar approach for quantities, using the pint library in Python to handle the different ways of expressing units (e.g. "g", "gram", ...). Since there are many ways to express a numerical quantity, we only render the quantity at the level of precision supplied by Wikidata, and do not perform unit conversions. Example Annotation An example annotation is provided in Table 1 corresponding to the instance in Figure 1, along with the variables that correspond to the generative process of the knowledge graph language model (KGLM). The entity mentioned for most tokens here are human-provided links, apart from “1989” that is linked to 04-21-1989 by the string matching process. The annotations indicate which of the entities are new and related based on whether they are reachable by entities linked so far, clearly making a mistake for side-scrolling game and platform video game due to missing links in Wikidata. Finally, multiple plausible reasons for Game Boy are included: it’s the platform for Super Mario Land and it is manufactured by Nintendo, even though only the former is more relevant here. Train Dev Test Documents 600 60 60 Tokens 2,019,195 207,982 236,062 Vocab. Size 33,558 Mention Tokens 207,803 21,226 24,441 Mention Spans 122,983 12,214 15,007 Unique Entities 41,058 5,415 5,625 Unique Relations 1,291 484 504 Table 2: Linked WikiText-2 Corpus Statistics. Even with these omissions and mistakes, it is clear that the annotations are rich and detailed, with a high coverage, and thus should prove beneficial for training knowledge graph language models. Dataset Statistics Statistics for Linked WikiText-2 are provided in Table 2. In this corpus, more than 10% of the tokens are considered entity tokens, i.e. they are generated as factual references to information in the knowledge graph. Each entity is only mentioned a few times (less than 5 on average, with a long tail), and with more than thousand different relations. Thus it is clear that regular language models would not be able to generate factual text, and there is a need for language models to be able to refer to external sources of information. Differences from WikiText-2 Although our dataset is designed to closely replicate WikiText-2, there are some differences that prevent direct comparison. Firstly, there are minor variations in text across articles due to edits between download dates. Secondly, according to correspondence with Merity et al. (2017), WikiText-2 was collected by querying the Wikipedia Text API. Because this API discards useful annotation information (e.g. 
article links), Linked WikiText-2 instead was created by directly from the article HTML. 5967 4 Training and Inference for KGLM In this section, we describe the training and inference algorithm for KGLM. Pretrained KG Embeddings During evaluation, we may need to make predictions on entities and relations that have not been seen during training. Accordingly, we use fixed entity and relations embeddings pre-trained using TransE (Bordes et al., 2013) on Wikidata. Given (p, r, e), we learn embeddings vp, vr and ve to minimize the distance: δ(vp, vr, ve) = ∥vp + vr −ve∥2 . We use a max-margin loss to learn the embeddings: L = max 0, γ + δ (vp, vr, ve) −δ v′ p, vr, v′ e  where γ is the margin, and either p′ or e′ is a randomly chosen entity embedding. Training with Linked WikiText-2 Although the generative process in KGLM involves many steps, training the model on Linked WikiText-2 is straightforward. Our loss objective is the negative loglikelihood of the training data: ℓ(Θ) = X t log p(xt, Et|x<t, E<t; Θ), where Θ is the set of model parameters. Note that if an annotation has multiple viable parents such as Game Boy in 1, then we marginalize over all of the parents. Since all random variables are observed, training can performed using off-the-shelf gradientbased optimizers. Inference While observing annotations makes the model easy to train, we do not assume that the model has access to annotations during evaluation. Furthermore, as discussed in Section 2.2, the goal in language modelling is to measure the marginal probability p(x) = P E p(x, E) not the joint probability. However, this sum is intractable to compute due to the large combinatorial space of possible annotations. We address this problem by approximating the marginal distribution using importance sampling. Given samples from a proposal distribution q(E|x) the marginal distribution is: p(x) = X E p (x, E) = X E p (x, E) q (E|x) q (E|x) ≈1 N X E∼q p (x, E) q (E|x) This approach is used to evaluate models in Ji et al. (2017) and Dyer et al. (2016). Following Ji et al. (2017), we compute q (E|x) using a discriminative version of our model that predicts annotations for the current token instead of for the next token. 5 Experiments To evaluate the proposed language model, we first introduce the baselines, followed by an evaluation using perplexity of held-out corpus, accuracy on fact completion, and an illustration of how the model uses the knowledge graph. 5.1 Evaluation Setup Baseline Models We compare KGLM to the following baseline models: • AWD-LSTM (Merity et al., 2018): strong LSTM-based model used as the foundation of most state-of-the-art models on WikiText-2. • ENTITYNLM (Ji et al., 2017): an LSTM-based language model with the ability to track entity mentions. Embeddings for entities are created dynamically, and are not informed by any external sources of information. • EntityCopyNet: a variant of the KGLM where tt = new for all mentions, i.e. entities are selected from E and entity aliases are copied, but relations in the knowledge graph are unused. Hyperparameters We pre-train 256 dimensional entity and relation embeddings for all entities within two hops of the set of entities that occur in Linked WikiText-2 using TransE with margin γ = 1. Weights are tied between all date embeddings and between all quantity embeddings to save memory. Following Merity et al. (2018) we use 400 dimensional word embeddings and a 3 layer LSTM with hidden dimension 1150 to encode tokens. 
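As a rough, non-authoritative sketch of the TransE pre-training objective described above (256-dimensional embeddings, margin γ = 1), the snippet below computes the max-margin loss on a toy batch in Python. The batching, the corruption scheme (only the tail entity is corrupted here), and the use of NumPy arrays of pre-looked-up embeddings are assumptions made for illustration only, not the authors' implementation.

import numpy as np

def transe_distance(v_p, v_r, v_e):
    # delta(v_p, v_r, v_e) = || v_p + v_r - v_e ||_2
    return np.linalg.norm(v_p + v_r - v_e, axis=-1)

def transe_margin_loss(v_p, v_r, v_e, v_p_neg, v_e_neg, gamma=1.0):
    # Max-margin loss against a corrupted triple; in the paper either the
    # parent or the tail entity is replaced by a random entity embedding.
    pos = transe_distance(v_p, v_r, v_e)
    neg = transe_distance(v_p_neg, v_r, v_e_neg)
    return np.maximum(0.0, gamma + pos - neg).mean()

# Toy usage with random 256-dimensional embeddings (the dimensionality used
# in the paper); real training would backpropagate through this loss to
# update the embeddings, which this sketch does not do.
rng = np.random.default_rng(0)
v_p, v_r, v_e, v_e_neg = (rng.normal(size=(8, 256)) for _ in range(4))
print(transe_margin_loss(v_p, v_r, v_e, v_p, v_e_neg))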
We also employ the same regularization strategy (DropConnect (Wan et al., 2013) + Dropout(Srivastava et al., 2014)) and weight tying approach. However, we perform optimization using Adam (Kingma and Ba, 2015) with learning rate 1e-3 instead of NT-ASGD, having found that it is more stable. 5.2 Results Perplexity We evaluate our model using the standard perplexity metric: exp  1 T PT t=1 log p(xt)  . However, perplexity suffers from the issue that it 5968 PPL UPP ENTITYNLM* (Ji et al., 2017) 85.4 189.2 EntityCopyNet* 76.1 144.0 AWD-LSTM (Merity et al., 2018) 74.8 165.8 KGLM* 44.1 88.5 Table 3: Perplexity Results on Linked WikiText-2. Results for models marked with * are obtained using importance sampling. overestimates the probability of out-of-vocabulary tokens when they are mapped to a single UNK token. This is problematic for comparing the performance of the KGLM to traditional language models on Linked WikiText-2 since there are a large number of rare entities whose alias tokens are outof-vocabulary. That is, even if the KGLM identifies the correct entity and copies the correct alias token with high probability, other models can attain better perplexity by assigning a higher probability to UNK. Accordingly, we also measure unknown penalized perplexity (UPP) (a.k.a adjusted perplexity) introduced by Ueberla (1994), and used recently by Ahn et al. (2016) and Spithourakis and Riedel (2018). This metric penalizes the probability of UNK tokens by evenly dividing their probability mass over U, the set of tokens that get mapped to UNK . We can be compute UPP by replacing p(UNK) in the perplexity above by 1 |U|p(UNK), where |U| is estimated from the data. We present the model perplexities in Table 3. To marginalize over annotations, perplexities for the ENTITYNLM, EntityCopyNet, and KGLM are estimated using the importance sampling approach described in Section 4. We observe that the KGLM attains substantially lower perplexity than the other entity-based language models (44.1 vs. 76.1/85.4), providing strong evidence that leveraging knowledge graphs is crucial for accurate language modeling. Furthermore, KGLM significantly outperforms all models in unknown penalized perplexity, demonstrating its ability to generate rare tokens. Fact Completion Since factual text generation is our primary objective, we evaluate the ability of language models to complete sentences with factual information. We additionally compare with the small GPT-2 (Radford et al., 2019), a language model trained on a much larger corpus of text. We select 6 popular relations from Freebase, and write a simple completion template for each, such as “X was born in ” for the birthplace relation. We AWDLSTM GPT-2 KGLM Oracle NEL nation-capital 0 / 0 6 / 7 0 / 0 0 / 4 birthloc 0 / 9 14 / 14 94 / 95 85 / 92 birthdate 0 / 25 8 / 9 65 / 68 61 / 67 spouse 0 / 0 2 / 3 2 / 2 1 / 19 city-state 0 / 13 62 / 62 9 / 59 4 / 59 book-author 0 / 2 0 / 0 61 / 62 25 / 28 Average 0.0/8.2 15.3/15.8 38.5/47.7 29.3/44.8 Table 4: Fact Completion. Top-k accuracy (@1/@5,%) for predicting the next token for an incomplete factual sentence. See examples in Table 5. generate sentences for these templates for a number of (X, Y ) pairs for which the relation holds, and manually examine the first token generated by each language model to determine whether it is correct. Table 4 presents performance of each language model on the relations. 
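For concreteness, the @1/@5 numbers in Table 4 could be computed along the lines of the sketch below. Here predict_topk is a hypothetical stand-in for a model's decoding routine (returning a list of the k most probable next tokens) and is not a function from the released code.

def topk_accuracy(examples, predict_topk, k=5):
    # examples: iterable of (prompt, gold_token) pairs built from the
    # relation templates, e.g. ("Barack Obama was born in", "Honolulu").
    hits_at_1 = hits_at_k = 0
    for prompt, gold in examples:
        preds = predict_topk(prompt, k)      # hypothetical decoding helper
        hits_at_1 += int(bool(preds) and preds[0] == gold)
        hits_at_k += int(gold in preds)
    n = len(examples)
    return hits_at_1 / n, hits_at_k / n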
The oracle KGLM is given the correct entity annotation for X, while the NEL KGLM uses the discriminative model used for importance sampling combined with the NEL entity linker to produce an entity annotation for X. Amongst models trained on the same data, both KGLM variants significantly outperform AWDLSTM; they produce accurate facts, while AWDLSTM produced generic, common words. KGLMs are also competitive with models trained on orders of magnitude more data, producing factual completions that require specific knowledge, such as birthplaces, dates, and authors. However, they do not capture facts or relations that frequently appear in large corpora, like the cities within states.3 It is encouraging to see that the KGLM with automatic linking performs comparably to oracle linking. We provide examples in Table 5 to highlight qualitative differences between KGLM, trained on 600 documents, and the recent state-of-the-art language model, GPT-2, trained on the WebText corpus with over 8 million documents (Radford et al., 2019). For examples that both models get factually correct or incorrect, the generated tokens by KGLM are often much more specific, as opposed to selection of more popular/generic tokens (GPT-2 often predicts “New York” as the birthplace, even for popular entities). KGLM, in particular, gets factual statements correct when the head or tail entities are rare, while GPT-2 can only complete facts for more-popular entities while using more-generic tokens (such as “January” instead of “20”). 3This is not a failure of the KG, but of the model’s ability to pick the correct relation from the KG given the prompt. 5969 Input Sentence Gold GPT-2 KGLM Both correct Paris Hilton was born in New York City New 1981 Arnold Schwarzenegger was born on 1947-07-30 July 30 KGLM correct Bob Dylan was born in Duluth New Duluth Barack Obama was born on 1961-08-04 January August Ulysses is a book that was written by James Joyce a James GPTv2 correct St. Louis is a city in the state of Missouri Missouri Oldham Richard Nixon was born on 1913-01-09 January 20 Kanye West is married to Kim Kardashian Kim the Both incorrect The capital of India is New Delhi the a Madonna is married to Carlos Leon a Alex Table 5: Completion Examples. Examples of fact completion by KGLM and GPT-2, which has been trained on a much larger corpus. GPT-2 tends to produce very common and general tokens, such as one of a few popular cities to follow “born in”. KGLM sometimes makes mistakes in linking to the appropriate fact in the KG, however, the generated facts are more specific and contain rare tokens. We omit AWD-LSTM from this figure as it rarely produced tokens apart from the generic “the” or “a”, or “⟨UNK⟩”. Effect of changing the KG For most language models, it is difficult to control their generation since factual knowledge is entangled with generation capabilities of the model. For KGLM, an additional benefit of its use of an external source of knowledge is that KGLM is directly controllable via modifications to the KG. To illustrate this capability with a simple example, we create completion of “Barack Obama was born on ” with the original fact (Barack Obama, birthDate, 196108-04), resulting in the top three decoded tokens as “August”, “4”, “1961”. After changing the birth date to 2013-03-21, the top three decoded tokens become “March”, “21”, “2013”. Thus, changing the fact in the knowledge graph directly leads to a corresponding change in the model’s prediction. 
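To make the unknown-penalized perplexity used in the evaluation above concrete, here is a small illustrative sketch. It assumes per-token log-probabilities are already available, that |U| has been estimated from the data, and that out-of-vocabulary tokens are marked with an assumed "<unk>" string.

import math

def perplexity(token_log_probs):
    # Standard perplexity: exp(-(1/T) * sum_t log p(x_t)).
    T = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / T)

def unknown_penalized_perplexity(tokens, token_log_probs, num_unk_types):
    # Spread the probability mass assigned to UNK evenly over the |U| word
    # types mapped to UNK, i.e. replace p(UNK) with p(UNK) / |U|.
    adjusted = [
        lp - math.log(num_unk_types) if tok == "<unk>" else lp
        for tok, lp in zip(tokens, token_log_probs)
    ]
    return perplexity(adjusted)

Dividing p(UNK) by |U| prevents a model from looking better simply by funneling many rare word types into a single high-probability UNK token.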
6 Related Work Knowledge-based language models Our work draws inspiration from two existing knowledgebased language models: (i) ENTITYNLM (Ji et al., 2017) which improves a language model’s ability to track entities by jointly modeling named entity recognition and coreference. Our model similarly tracks entities through a document, improving its ability to generate factual information by modeling entity linking and relation extraction. (ii) The neural knowledge language model (NKLM) (Ahn et al., 2016) which established the idea of leveraging knowledge graphs in neural language models. The main differentiating factor between the KGLM and NKLM is that the KGLM operates on an entire knowledge graph and can be evaluated on text without additional conditioning information, whereas the NKLM operates on a relatively smaller set of predefined edges emanating from a single entity, and requires that entity be provided as conditioning information ahead of time. This requirement precludes direct comparison between NKLM and the baselines in Section 5. Data-to-text generation Our work is also related to the task of neural data-to-text generation. For a survey of early non-neural text generation methods we refer the reader to Reiter and Dale (1997). Recent neural methods have been applied to generating text from tables of sports statistics (Wiseman et al., 2017), lists and tables (Yang et al., 2017), and Wikipedia info-boxes (Lebret et al., 2016). The primary difference between these works and ours is our motivation. These works focus on generating coherent text within a narrow domain (e.g. sports, recipes, introductory sentences), and optimize metrics such as BLEU and METEOR score. Our focus instead is to use a large source of structured knowledge to improve language model’s ability to handle rare tokens and facts on a broad domain of topics, and our emphasis is on improving perplexity. General language modeling Also related are the recent papers proposing modifications to the AWDLSTM that improve performance on Wikitext2 (Gong et al., 2018; Yang et al., 2018; Krause et al., 2018). We chose to benchmark against AWDLSTM since these contributions are orthogonal, and many of the techniques are compatible with the KGLM. KGLM improves upon AWD-LSTM, and we expect using KGLM in conjunction with these methods will yield further improvement. 5970 7 Conclusions and Future Work By relying on memorization, existing language models are unable to generate factually correct text about real-world entities. In particular, they are unable to capture the long tail of rare entities and word types like numbers and dates. In this work, we proposed the knowledge graph language model (KGLM), a neural language model that can access an external source of facts, encoded as a knowledge graph, in order to generate text. Our implementation is available at: https://github.com/rloganiv/ kglm-model. We also introduced Linked WikiText2 containing text that has been aligned to facts in the knowledge graph, allowing efficient training of the model. Linked WikiText-2 is freely available for download at: https://rloganiv.github.io/ linked-wikitext-2. In our evaluation, we showed that by utilizing this graph, the proposed KGLM is able to generate higher-quality, factually correct text that includes mentions of rare entities and specific tokens like numbers and dates. This work lays the groundwork for future research into knowledge-aware language modeling. 
The limitations of the KGLM model, such as the need for marginalization during inference and reliance on annotated tokens, raise new research problems for advancing neural NLP models. Our distantly supervised approach to dataset creation can be used with other knowledge graphs and other kinds of text as well, providing opportunities for accurate language modeling in new domains. Acknowledgements First and foremost, we would like to thank Stephen Merity for sharing the materials used to collect the WikiText-2 dataset, and Nitish Gupta for modifying his entity linker to assist our work. We would also like to thank Dheeru Dua and Anthony Chen for their thoughtful feedback. This work was supported in part by Allen Institute of Artificial Intelligence (AI2), and in part by NSF award #IIS1817183. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. References Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. ArXiv:1608.00318. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proc. of NeurIPS. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proc. of NAACL. Thiago Castro Ferreira, Diego Moussallem, Emiel Krahmer, and Sander Wubben. 2018. Enriching the WebNLG corpus. In Proc. of INLG. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Proc. of INLG. Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Frage: frequency-agnostic word representation. In Proc. of NeurIPS. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proc. of ACL. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proc. of EMNLP. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Proc. of EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR. Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. 2018. Dynamic evaluation of neural sequence models. In Proc. of ICML. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proc. of EMNLP. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In Proc. of ICLR. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In Proc. of ICLR. 5971 Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. of INTERSPEECH. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. 
Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of AAAI. Georgios P. Spithourakis and Sebastian Riedel. 2018. Numeracy for language models: Evaluating and improving their ability to predict numbers. In Proc. of ACL. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Trieu H. Trinh and Quoc V. Le. 2019. Do language models have common sense? In Proc. of ICLR. Joerg Ueberla. 1994. Analysing a simple language model – some general conclusions for language models for speech recognition. Computer Speech & Language, 8(2):153–176. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. In Proc. of ICML Deep Learning Workshop. Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: A free collaborative knowledgebase. Communications of the ACM, 57(10):78–85. Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. 2013. Regularization of neural networks using dropconnect. In Proc. of ICML. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-to-document generation. In Proc. of EMNLP. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2018. Breaking the softmax bottleneck: A high-rank RNN language model. In Proc. of ICLR. Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proc. of EMNLP.
2019
598
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5972–5984 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5972 Controllable Paraphrase Generation with a Syntactic Exemplar Mingda Chen Qingming Tang Sam Wiseman Kevin Gimpel Toyota Technological Institute at Chicago, Chicago, IL, 60637, USA {mchen,qmtang,swiseman,kgimpel}@ttic.edu Abstract Prior work on controllable text generation usually assumes that the controlled attribute can take on one of a small set of values known a priori. In this work, we propose a novel task, where the syntax of a generated sentence is controlled rather by a sentential exemplar. To evaluate quantitatively with standard metrics, we create a novel dataset with human annotations. We also develop a variational model with a neural module specifically designed for capturing syntactic knowledge and several multitask training objectives to promote disentangled representation learning. Empirically, the proposed model is observed to achieve improvements over baselines and learn to capture desirable characteristics.1 1 Introduction Controllable text generation has recently become an area of intense focus in the natural language processing (NLP) community. Recent work has focused both on generating text satisfying certain stylistic requirements such as being formal or exhibiting a particular sentiment (Hu et al., 2017; Shen et al., 2017; Ficler and Goldberg, 2017), as well as on generating text meeting structural requirements, such as conforming to a particular template (Iyyer et al., 2018; Wiseman et al., 2018). These systems can be used in various application areas, such as text summarization (Fan et al., 2018), adversarial example generation (Iyyer et al., 2018), dialogue (Niu and Bansal, 2018), and data-to-document generation (Wiseman et al., 2018). However, prior work on controlled generation has typically assumed a known, finite set of values that the controlled attribute can take on. In this work, we are interested instead in the novel setting where the generation is controlled 1Code and data are available at github.com/ mingdachen/syntactic-template-generation through an exemplar sentence (where any syntactically valid sentence is a valid exemplar). We will focus in particular on using a sentential exemplar to control the syntactic realization of a generated sentence. This task can benefit natural language interfaces to information systems by suggesting alternative invocation phrases for particular types of queries (Kumar et al., 2017). It can also bear on dialogue systems that seek to generate utterances that fit particular functional categories (Ke et al., 2018; Li et al., 2019). To address this task, we propose a deep generative model with two latent variables, which are designed to capture semantics and syntax. To achieve better disentanglement between these two variables, we design multi-task learning objectives that make use of paraphrases and word order information. To further facilitate the learning of syntax, we additionally propose to train the syntactic component of our model with word noising and latent word-cluster codes. Word noising randomly replaces word tokens in the syntactic inputs based on a part-of-speech tagger used only at training time. Latent codes create a bottleneck layer in the syntactic encoder, forcing it to learn a more compact notion of syntax. The latter approach also learns interpretable word clusters. 
Empirically, these learning criteria and neural architectures lead to better generation quality and generally better disentangled representations. To evaluate this task quantitatively, we manually create an evaluation dataset containing triples of a semantic exemplar sentence, a syntactic exemplar sentence, and a reference sentence incorporating the semantics of the semantic exemplar and the syntax of the syntactic exemplar. This dataset is created by first automatically finding syntactic exemplars and then heavily editing them by ensuring (1) semantic variation between the syntactic inputs and the references, (2) syntactic 5973 X: his teammates’ eyes got an ugly, hostile expression. Y : the smell of flowers was thick and sweet. Z: the eyes of his teammates had turned ugly and hostile. X: we need to further strengthen the agency’s capacities. Y : the damage in this area seems to be quite minimal. Z: the capacity of this office needs to be reinforced even further. Figure 1: Examples from our annotated evaluation dataset of paraphrase generation using semantic input X (red), syntactic exemplar Y (blue), and the reference output Z (black). similarity between the syntactic inputs and the references, and (3) syntactic variation between the semantic input and references. Examples are shown in Figure 1. This dataset allows us to evaluate different approaches quantitatively using standard metrics, including BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). As the success of controllability of generated sentences also largely depends on the syntactic similarity between the syntactic exemplar and the reference, we propose a “syntactic similarity” metric based on evaluating tree edit distance between constituency parse trees of these two sentences after removing word tokens. Empirically, we benchmark the syntacticallycontrolled paraphrase network (SCPN) of Iyyer et al. (2018) on this novel dataset, which shows strong performance with the help of a supervised parser at test-time but also can be sensitive to the quality of the parse predictor. We show that using our word position loss effectively characterizes syntactic knowledge, bringing consistent and sizeable improvements over syntactic-related evaluation. The latent code module learns interpretable latent representations. Additionally, all of our models can achieve improvements over baselines. Qualitatively, we show that our models do suffer from the lack of an abstract syntactic representation, though we also show that SCPN and our models exhibit similar artifacts. 2 Related Work We focus primarily on the task of paraphrase generation, which has received significant recent attention (Quirk et al., 2004; Prakash et al., 2016; Mallinson et al., 2017; Dong et al., 2017; Ma et al., 2018; Li et al., 2018). In order to disentangle the syntactic and semantic aspects of paraphrase generation we learn an explicit latent variable model using a variational autoencoder (VAE) (Kingma and Welling, 2014), which is now commonly applied to text generation (Bowman et al., 2016; Miao et al., 2016; Semeniuta et al., 2017; Serban et al., 2017; Xu and Durrett, 2018; Shen et al., 2019). In seeking to control generation with exemplars, our approach relates to recent work in controllable text generation. 
Whereas much work on controllable text generation seeks to control distinct attributes of generated text (e.g., its sentiment or formality) (Hu et al., 2017; Shen et al., 2017; Ficler and Goldberg, 2017; Fu et al., 2018; Zhao et al., 2018; Fan et al., 2018, inter alia), there is also recent work which attempts to control structural aspects of the generation, such as its latent (Wiseman et al., 2018) or syntactic (Iyyer et al., 2018) template. Our work is closely related to this latter category, and to the syntactically-controlled paraphrase generation of Iyyer et al. (2018) in particular, but our proposed model is different in that it simply uses a single sentence as a syntactic exemplar rather than requiring a supervised parser. This makes our setting closer to style transfer in computer vision, in which an image is generated that combines the content from one image and the style from another (Gatys et al., 2016). In particular, in our setting, we seek to generate a sentence that combines the semantics from one sentence with the syntax from another, and so we only require a pair of (unparsed) sentences. We also note recent, concurrent work that attempts to use sentences as exemplars in controlling generation (Wang et al., 2019) in the context of data-to-document generation (Wiseman et al., 2017). Another related line of work builds generation upon sentential exemplars (Guu et al., 2018; Weston et al., 2018; Pandey et al., 2018; Cao et al., 2018; Peng et al., 2019) in order to improve the quality of the generation itself, rather than to allow for control over syntactic structures. There has been a great deal of work in applying multi-task learning to improve performance on NLP tasks (Plank et al., 2016; Rei, 2017; Augenstein and Søgaard, 2017; Bollmann et al., 2018, inter alia). Some recent work used multi-task learning as a way of improving the quality or disentanglement of learned representations (Zhao et al., 2017; Goyal et al., 2017; Du et al., 2018; John et al., 2018). Part of our evaluation involves assessing the dif5974 x z y x Figure 2: Graphical model. Dashed lines indicate the inference model. Solid lines indicate the generative model. ferent characteristics captured in the semantic and syntactic encoders, relating them to work on learning disentangled representations in NLP, including morphological reinflection (Zhou and Neubig, 2017), sequence labeling (Chen et al., 2018), and sentence representations (Chen et al., 2019). 3 Methods Given two sentences X and Y , our goal is to generate a sentence Z that follows the syntax of Y and the semantics of X. We refer to X and Y as the semantic template and syntactic template, respectively. To solve this problem, we follow Chen et al. (2019) and take an approach based on latentvariable probabilistic modeling, neural variational inference, and multi-task learning. In particular, we assume a generative model that has two latent variables: y for semantics and z for syntax (as depicted in Figure 2). We refer to our model as a vMF-Gaussian Variational Autoencoder (VGVAE). Formally, following the conditional independence assumptions in the graphical model, the joint probability pθ(x, y, z) can be factorized as: pθ(x, y, z) = pθ(y)pθ(z)pθ(x | y, z) = pθ(y)pθ(z) T Y t=1 pθ(xt | x1:t−1, y, z), where xt is the tth word of x and pθ(xt | x1:t−1, y, z) is given by a softmax over a vocabulary of size V . Further details on the parameterization are given below. 
When applying neural variational inference, we assume a factorized approximated posterior qφ(y|x)qφ(z|x) = qφ(y, z|x), which has also been used in some prior work (Zhou and Neubig, 2017; Chen et al., 2018). Learning in VGVAE maximizes a lower bound of marginal log-likelihood: log pθ(x) ≥ E y∼qφ(y|x) z∼qφ(z|x) [log pθ(x| z, y) −log qφ(z|x) pθ(z) −log qφ(y|x) pθ(y) ] = E y∼qφ(y|x) z∼qφ(z|x) [log pθ(x|z, y)] −KL(qφ(z|x)∥pθ(z)) −KL(qφ(y|x)∥pθ(y)) (1) 3.1 Parameterization vMF Distribution. We choose a von MisesFisher (vMF) distribution for the y (semantic) latent variable. vMF can be regarded as a Gaussian distribution on a hypersphere with two parameters: µ and κ. µ ∈Rm is a normalized vector (i.e., ∥µ∥2 = 1) defining the mean direction. κ ∈R≥0 is often referred to as a concentration parameter analogous to the variance in a Gaussian distribution. We will assume qφ(y|x) follows a vMF distribution and pθ(y) follows the uniform distribution vMF(·, 0). We follow Davidson et al. (2018) and use an acceptance-rejection scheme to sample from the vMF distribution. Gaussian Distribution. We assume qφ(z|x) follows a Gaussian distribution N(µβ(x), diag(σβ(x))) and that the prior pθ(z) is N(0, Id), where Id is a d × d identity matrix. Encoders. At test time, we want to have different combinations of semantic and syntactic inputs, which naturally suggests separate parameterizations for qφ(y|x) and qφ(z|x). Specifically, qφ(y|x) is parameterized by a word averaging encoder followed by a three-layer feedforward neural network since it has been observed that word averaging encoders perform surprisingly well for semantic tasks (Wieting et al., 2016). qφ(z|x) is parameterized by a bidirectional long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) also followed by a three-layer feedforward neural network, where we concatenate the forward and backward vectors produced by the biLSTM and then take the average of these vectors. Decoders. 
Encoders. At test time, we want to have different combinations of semantic and syntactic inputs, which naturally suggests separate parameterizations for qφ(y|x) and qφ(z|x). Specifically, qφ(y|x) is parameterized by a word averaging encoder followed by a three-layer feedforward neural network, since it has been observed that word averaging encoders perform surprisingly well for semantic tasks (Wieting et al., 2016). qφ(z|x) is parameterized by a bidirectional long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) also followed by a three-layer feedforward neural network, where we concatenate the forward and backward vectors produced by the biLSTM and then take the average of these vectors.

Decoders. As shown in Figure 3, at each time step, we concatenate the syntactic variable z with the previous word's embedding as the input to the decoder and concatenate the semantic variable y with the hidden vector output by the decoder for predicting the word at the next time step. Note that the initial hidden state of the decoder is always set to zero.

[Figure 3: Diagram showing training of the decoder. Blue lines indicate the word position loss (WPL).]

3.2 Latent Codes for Syntactic Encoder

Since what we want from the syntactic encoder is only the syntactic structure of a sentence, using standard word embeddings tends to mislead the syntactic encoder to believe the syntax is manifested by the exact word tokens. An example is that the generated sentence often preserves the exact pronouns or function words in the syntactic input instead of making necessary changes based on the semantics. To alleviate this, we follow Chen and Gimpel (2018) to represent each word with a latent code (LC) for word clusters within the word embedding layer. Our goal is for this to create a bottleneck layer in the word embeddings, thereby forcing the syntactic encoder to learn a more abstract representation of the syntax. However, since our purpose is not to reduce model size (unlike Chen and Gimpel, 2018), we marginalize out the latent code to get the embeddings during both training and testing. That is,

ew = Σ_{cw} p(cw) v_{cw}

where cw is the latent code for word w, v_{cw} is the vector for latent code cw, and ew is the resulting word embedding for word w. In our models, we use 10 binary codes produced by 10 feedforward neural networks based on a shared word embedding, and then we concatenate these 10 individual cluster vectors to get the final word embeddings.

[Figure 4: Diagram showing the training process when using the paraphrase reconstruction loss (dash-dotted lines). The pair (x1, x2) is a sentential paraphrase pair, the y's are the semantic variables corresponding to each x, and the z's are syntactic variables.]

4 Multi-Task Learning

We now describe several additional training losses designed to encourage a clearer separation of information in the semantic and syntactic variables. These losses were also considered in Chen et al. (2019), but in the context of learning sentence representations.

4.1 Paraphrase Reconstruction Loss

Our first loss, the paraphrase reconstruction loss (PRL), requires a dataset of sentence paraphrase pairs. The key assumption is that for a pair of paraphrastic sentences x1, x2, the semantics is shared but the syntax may differ. As shown in Figure 4, we swap the paraphrases to the semantic encoder during training but keep the input to the syntactic encoder the same. It is defined as

E_{y2∼qφ(y|x2), z1∼qφ(z|x1)} [ log pθ(x1 | y2, z1) ] + E_{y1∼qφ(y|x1), z2∼qφ(z|x2)} [ log pθ(x2 | y1, z2) ]    (2)

In the following experiments, unless explicitly noted, we will always include PRL as part of the model training and will discuss its effect in Section 7.1.
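To illustrate how the two expectations in Equation 2 swap the semantic variables within a paraphrase pair, here is a minimal sketch in which sem_enc, syn_enc, and recon_nll are hypothetical callables standing in for the encoders and decoder of Section 3.1; this is our illustration, not the authors' released code.

def paraphrase_reconstruction_loss(x1, x2, sem_enc, syn_enc, recon_nll):
    """Sketch of the PRL: swap semantic variables across a paraphrase pair.

    sem_enc(x) / syn_enc(x) are assumed to return sampled y / z for input x;
    recon_nll(x, y, z) is assumed to return -log p(x | y, z). Minimizing the
    sum of the two terms corresponds to maximizing Equation 2.
    """
    y1, y2 = sem_enc(x1), sem_enc(x2)
    z1, z2 = syn_enc(x1), syn_enc(x2)
    # x1 is reconstructed from its paraphrase's semantics (y2) and its own syntax (z1),
    # and symmetrically for x2.
    return recon_nll(x1, y2, z1) + recon_nll(x2, y1, z2)

# toy check with stand-in callables
print(paraphrase_reconstruction_loss(
    "x1", "x2",
    sem_enc=lambda x: ("y", x), syn_enc=lambda x: ("z", x),
    recon_nll=lambda x, y, z: 1.0))   # 2.0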
4.2 Word Position Loss

Since word ordering is relatively unimportant for semantic similarity (Wieting et al., 2016), we assume it is more relevant to the syntax of a sentence than to its semantics. Based on this, we introduce a word position loss (WPL). As shown in Figure 3, WPL is computed by predicting the position at each time step based on the concatenation of word embeddings with the syntactic variable z. That is,

WPL ≝ E_{z∼qφ(z|x)} [ Σ_t log softmax(f([et; z]))_t ]

where softmax(·)_t indicates the probability at position t. Empirically, we observe that adding WPL to both the syntactic encoder and decoder improves performance, so we always use it in our experiments unless otherwise indicated.

5 Training

5.1 KL Weight

As observed in previous work (Alemi et al., 2017; Bowman et al., 2016; Higgins et al., 2016), the weight of the KL divergence in Equation 1 can be important when learning with latent variables. We attach weights to the KL divergence in Equation 1 and tune them based on development set performance.

5.2 Word Noising via Part-of-Speech Tags

In practice, we often observe that the syntactic encoder tends to remember word types instead of learning syntactic structures. To provide a more flexible notion of syntax, we add word noising (WN) based on part-of-speech (POS) tags. More specifically, we tag the training set using the Stanford POS tagger (Toutanova et al., 2003). Then we group the word types based on the top two most frequent tags for each word type. During training, as shown in Figure 5, we noise the syntactic inputs by randomly replacing word tokens based on the groups and tags we obtained. This provides our framework many examples of word interchangeability based on POS tags, and discourages the syntactic encoder from memorizing the word types in the syntactic input. When using WN, the probability of noising a word is tuned based on development set performance.

[Figure 5: An example of word noising. For each word token in the training sentences, we randomly replace it with other words that share the same POS tags.]
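To make the noising procedure concrete, the following is a small sketch we add for illustration; the tag groups and the replacement probability are toy values, whereas the paper derives groups from the Stanford POS tagger's top two tags per word type.

import random

# hypothetical POS groups (a few example words drawn from Figure 5)
POS_GROUPS = {
    "DT": ["the", "a", "this", "those", "another"],
    "VBD": ["locked", "nodded", "rebuked", "cackled"],
    "JJ": ["muddy", "snotty", "green", "spiteful"],
}
WORD2TAG = {w: tag for tag, words in POS_GROUPS.items() for w in words}

def noise_syntactic_input(tokens, p=0.3):
    """Randomly replace tokens with other words that share the same POS group."""
    noised = []
    for tok in tokens:
        tag = WORD2TAG.get(tok)
        if tag is not None and random.random() < p:
            noised.append(random.choice(POS_GROUPS[tag]))
        else:
            noised.append(tok)   # out-of-group words are left unchanged
    return noised

print(noise_syntactic_input("the muddy dog nodded".split()))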
6 Experiments

6.1 Training Setup

For training with the PRL, we require a training set of sentential paraphrase pairs. We use ParaNMT (Wieting and Gimpel, 2018), a dataset of approximately 50 million paraphrase pairs. To ensure there is enough variation between paraphrases, we filter out paraphrases with high BLEU score (Papineni et al., 2002) between the two sentences in each pair, which leaves us with around half a million paraphrases as our training set. All hyperparameter tuning is based on the BLEU score on the development set (see appendix for more details).

6.2 Evaluation Dataset and Metrics

To evaluate models quantitatively, we manually annotate 1300 instances based on paraphrase pairs from ParaNMT independent from our training set. Each instance in the annotated data has three sentences: semantic input, syntactic input, and reference, where the semantic input and the reference can be seen as human-generated paraphrases and the syntactic input shares its syntax with the reference but is very different from the semantic input in terms of semantics. The differences among these three sentences ensure the difficulty of this task. Figure 1 shows examples.

The annotation process involves two steps. We begin with a paraphrase pair ⟨u, v⟩. First, we use an automatic procedure to find, for each sentence u, a syntactically-similar but semantically-different other sentence t. We do this by seeking sentences t with high edit distance of predicted POS tag sequences and low BLEU score with u. Then we manually edit all three sentences to ensure (1) strong semantic match and large syntactic variation between the semantic input u and reference v, (2) strong semantic match between the syntactic input t and its post-edited version, and (3) strong syntactic match between the syntactic input t and the reference v. We randomly pick 500 instances as our development set and use the remaining 800 instances as our test set. We perform additional manual filtering and editing of the test set to ensure quality.

For evaluation, we consider two categories of automatic evaluation metrics, designed to capture different components of the task. To measure roughly the amount of semantic content that matches between the predicted output and the reference, we report BLEU score (BL), METEOR score (MET; Banerjee and Lavie, 2005) and three ROUGE scores, including ROUGE-1 (R-1), ROUGE-2 (R-2) and ROUGE-L (R-L). Even though these metrics are not purely based on semantic matching, we refer to them in this paper as "semantic metrics" to differentiate them from our second metric category, which we refer to as a "syntactic metric". For the latter, to measure the syntactic similarity between generated sentences and the reference, we report the syntactic tree edit distance (ST). To compute ST, we first parse the sentences using Stanford CoreNLP (Manning et al., 2014), and then compute the tree edit distance (Zhang and Shasha, 1989) between constituency parse trees after removing word tokens.
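For readers who want a concrete picture of the ST metric, the sketch below computes a tree edit distance under the assumption that the third-party zss package (an implementation of the Zhang–Shasha algorithm) is available; the hand-built trees stand in for CoreNLP constituency parses with word tokens removed, and none of this is the authors' released evaluation code.

from zss import Node, simple_distance   # assumed third-party Zhang–Shasha implementation

# toy constituency skeletons (word tokens removed), built by hand for illustration
ref = (Node("S")
       .addkid(Node("NP").addkid(Node("PRP")))
       .addkid(Node("VP").addkid(Node("VBZ")).addkid(Node("ADJP").addkid(Node("JJ")))))
out = (Node("S")
       .addkid(Node("NP").addkid(Node("PRP")))
       .addkid(Node("VP").addkid(Node("VBZ")).addkid(Node("NP").addkid(Node("NN")))))

# unit-cost tree edit distance between the two skeletons; lower means closer syntax
print(simple_distance(ref, out))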
                         BLEU (↑)  ROUGE-1 (↑)  ROUGE-2 (↑)  ROUGE-L (↑)  METEOR (↑)  ST (↓)
Return-input baselines
  Semantic input           18.5       50.6         23.2         47.7         28.8      12.0
  Syntactic input           3.3       24.4          7.5         29.1         12.1       5.9
Our work
  VGVAE                     3.5       24.8          7.3         29.7         12.6      10.6
  VGVAE + WPL               4.5       26.5          8.2         31.5         13.3      10.0
  VGVAE + LC                3.3       24.0          7.2         29.4         12.5       9.1
  VGVAE + LC + WPL          5.9       29.1         10.2         33.0         14.5       9.0
  VGVAE + WN               13.0       43.2         20.2         47.0         23.8       6.8
  VGVAE + WN + WPL         13.2       43.4         20.3         47.0         23.9       6.7
  VGVAE + LC + WN + WPL    13.6       44.7         21.0         48.3         24.8       6.7
Prior work using supervised parsers
  SCPN + template          17.8       47.9         22.8         48.5         27.3       9.9
  SCPN + full parse        19.2       50.4         26.1         53.5         28.4       5.9
Table 1: Test results. The final metric (ST) measures the syntactic match between the output and the reference.

6.3 Baselines

We report results for three baselines. The first two baselines directly output the corresponding syntactic or semantic input for each instance. For the last baseline, we consider SCPN (Iyyer et al., 2018). As SCPN requires parse trees for both the syntactic and semantic inputs, we follow the process in their paper and use the Stanford shift-reduce constituency parser (Manning et al., 2014) to parse both, then use the parsed sentences as inputs to SCPN. We report results for SCPN when using only the top two levels of the parse as input (template) and using the full parse as input (full parse).

6.4 Results

As shown in Table 1, simply outputting the semantic input shows strong performance across the BLEU, ROUGE, and METEOR scores, which are more relevant to semantic similarity, but shows much worse performance in terms of ST. On the other hand, simply returning the syntactic input leads to lower BLEU, ROUGE, and METEOR scores but also a very strong ST score. These trends provide validation of the evaluation dataset, as they show that the reference and the semantic input match more strongly in terms of their semantics than in terms of their syntax, and also that the reference and the syntactic input match more strongly in terms of their syntax than in terms of their semantics.

The goal in developing systems for this task is then to produce outputs with higher semantic metric scores than the syntactic input baseline and simultaneously higher syntactic scores than the semantic input baseline.

Among our models, adding WPL leads to gains across both the semantic and syntactic metric scores. The gains are much larger without WN, but even with WN, adding WPL improves nearly all scores. Adding LC typically helps the semantic metrics (at least when combined with WPL) without harming the syntactic metric (ST). We see the largest improvements, however, by adding WN, which uses an automatic part-of-speech tagger at training time only. Both the semantic and syntactic metrics increase consistently with WN, as the syntactic variable is shown many examples of word interchangeability based on POS tags.

While the SCPN yields very strong metric scores, there are several differences that make the SCPN results difficult to compare to those of our models. In particular, the SCPN uses a supervised parser both during training and at test time, while our strongest results merely require a POS tagger and only use it at training time. Furthermore, since ST is computed based on parse trees from a parser, systems that explicitly use constituency parsers at test time, such as SCPN, are likely to be favored by such a metric. This is likely the reason why SCPN can match the syntactic input baseline in ST. Also, SCPN trains on a much larger portion of ParaNMT. We find large differences in metric scores when SCPN only uses a parse template (i.e., the top two levels of the parse tree of the syntactic input). In this case, the results degrade, especially in ST, showing that the performance of SCPN depends on the quality of the input parses. Nonetheless, the SCPN results show the potential benefit of explicitly using a supervised constituency parser at both training and test time. Future work can explore ways to combine syntactic parsers with our models for more informative training and more robust performance.

                 BL    R-1   R-2   R-L   MET   ST
VGVAE w/o PRL    2.0   23.4  4.3   26.4  11.3  11.8
VGVAE w/ PRL     3.5   24.8  7.3   29.7  12.6  10.6
Table 2: Test results when including PRL.

                   BL    R-1   R-2   R-L   MET   ST
VGVAE w/o WPL      3.5   24.8  7.3   29.7  12.6  10.6
Dec. hidden state  3.6   24.9  7.3   29.7  12.6  10.5
Enc. emb.          3.9   26.1  7.8   31.0  12.9  10.2
Dec. emb.          4.1   26.3  8.1   31.3  13.1  10.1
Enc. & Dec. emb.   4.5   26.5  8.2   31.5  13.3  10.0
Table 3: Test results with WPL at different positions.

7 Analysis

7.1 Effect of Multi-Task Training

Effect of Paraphrase Reconstruction Loss. We investigate the effect of PRL by removing PRL from training, which effectively makes VGVAE a variational autoencoder. As shown in Table 2, making use of pairing information can improve performance both in the semantic-related metrics and syntactic tree edit distance.

Effect of Position of Word Position Loss. We also study the effect of the position of WPL by (1) using the decoder hidden state, (2) using the concatenation of word embeddings in the syntactic encoder and the syntactic variable, (3) using the concatenation of word embeddings in the decoder and the syntactic variable, or (4) adding it on both the encoder embeddings and decoder word embeddings; a small sketch of these alternatives follows below.
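The sketch contrasts, with hypothetical dimensions and a simple linear position classifier, attaching WPL to hidden states versus attaching it to word embeddings; it is an illustration we add here rather than the authors' implementation, and whether the embeddings come from the syntactic encoder or the decoder only changes which vectors are fed in.

import torch
import torch.nn as nn
import torch.nn.functional as F

T, d_emb, d_hid, d_z, max_len = 12, 300, 512, 50, 40   # hypothetical sizes
emb = torch.randn(T, d_emb)   # word embeddings e_t (encoder or decoder side)
hid = torch.randn(T, d_hid)   # LSTM hidden states h_t
z = torch.randn(d_z)          # syntactic variable, shared across time steps

f_emb = nn.Linear(d_emb + d_z, max_len)   # position classifier over [e_t ; z]
f_hid = nn.Linear(d_hid + d_z, max_len)   # position classifier over [h_t ; z]

def wpl(features, clf):
    logits = clf(torch.cat([features, z.expand(features.size(0), -1)], dim=-1))
    positions = torch.arange(features.size(0))        # gold position index at each step
    return F.cross_entropy(logits, positions)          # mean negative log softmax(.)_t

print(wpl(emb, f_emb), wpl(hid, f_hid))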
Table 3 shows that adding WPL on hidden states can help improve performance slightly, but not as much as adding it on word embeddings. In practice, we also observe that the value of WPL tends to vanish when using WPL on hidden states, which is presumably caused by the fact that LSTMs have sequence information, making the optimization of WPL trivial. We also observe that adding WPL to both the encoder and decoder brings the largest improvement.

7.2 Encoder Analysis

To investigate what has been learned in the encoder, we evaluate qφ(y|x) and qφ(z|x) on both semantic similarity tasks and syntactic similarity tasks and also inspect the latent codes.

Semantic Similarity. We use cosine similarity between two variables encoded by the inference networks as the predictions and then compute Pearson correlations on the STS Benchmark test set (Cer et al., 2017). As shown in Table 4, the semantic variable y always outperforms the syntactic variable z by a large margin, suggesting that the two variables have captured different information. Every time we add WPL, the difference in performance between the two variables increases. Moreover, the differences between these two variables are correlated with the performance of models in Table 1, showing that a better generation system has a more disentangled latent representation.

                        Semantic var.  Syntactic var.
VGVAE                        64.8           14.5
VGVAE + WPL                  65.2           10.5
VGVAE + LC                   67.2           29.0
VGVAE + LC + WPL             67.9            8.5
VGVAE + WN                   71.1           10.2
VGVAE + WN + WPL             72.9            9.8
VGVAE + LC + WN + WPL        74.3            7.4
Table 4: Pearson correlation (%) for STS Benchmark test set.

Syntactic Similarity. We use the syntactic evaluation tasks from Chen et al. (2019) to evaluate the syntactic knowledge encoded in the encoder. The tasks are based on a 1-nearest-neighbor constituency parser or POS tagger. To understand the difficulty of these two tasks, Table 5 shows results for two baselines. "Random" means randomly picking candidates as predictions. The second baseline ("Best") is to compute the pairwise scores between the test instances and the sentences in the candidate pool and then take the maximum values; it can be seen as the upper bound performance for these tasks. As shown in Table 5, similar trends are observed as in Tables 1 and 4. When adding WPL or WN, there is a boost in the syntactic similarity for the syntactic variable. Adding LC also helps the performance of the syntactic variable slightly.

                        Semantic var.     Syntactic var.
                        F1      Acc.      F1      Acc.
Random                  19.2    12.9
Best                    71.1    62.3
VGVAE                   20.7    24.9      25.9    28.8
VGVAE + WPL             21.2    25.3      31.1    33.3
VGVAE + LC              21.6    25.5      29.0    32.4
VGVAE + LC + WPL        18.9    23.5      31.2    33.5
VGVAE + WN              20.6    18.1      28.4    30.4
VGVAE + WN + WPL        20.0    24.6      43.7    40.8
VGVAE + LC + WN + WPL   20.3    24.8      43.7    40.9
Table 5: Labeled F1 score (%) and accuracy (%) on syntactic similarity tasks from Chen et al. (2019).

Latent Code Analysis. We look into the learned word clusters by taking the argmax of latent codes and treating it as the cluster membership of each word. Although these are not the exact word clusters we would use during test time (because we marginalize over the latent codes), this provides intuition about what individual cluster vectors have contributed to the final word embeddings.
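As an illustration of how such cluster memberships and the marginalized embeddings of Section 3.2 can be computed, here is a small NumPy sketch with toy dimensions; the logits and cluster vectors are random placeholders rather than learned parameters.

import numpy as np

rng = np.random.default_rng(0)
V, n_codes, n_classes, d = 5, 10, 2, 8          # toy sizes; the paper uses 10 binary codes
logits = rng.normal(size=(V, n_codes, n_classes))    # per-word latent-code logits
code_vecs = rng.normal(size=(n_codes, n_classes, d)) # cluster vectors v_{c_w}

probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)   # p(c_w)

# embedding used during training/testing: marginalize each code, then concatenate
e = np.concatenate([probs[:, k] @ code_vecs[k] for k in range(n_codes)], axis=-1)

# analysis-time cluster membership: argmax of each latent code
membership = probs.argmax(-1)    # shape (V, n_codes)
print(e.shape, membership[:3])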
   12   does must could shall do wo 's did ai 'd 'll should
  451   watching wearing carrying thrown refuse drew
   11   ? : * ≫ ! ; ) . '' , '
   18   maybe they because if where but we when how
41279   elvish festive freeway anteroom jennifer terrors
   10   well ⟨unk⟩ anyone okay now everybody someone
  165   supposedly basically essentially rarely officially
   59   using on by into as the with within under quite
Table 6: Examples of learned word clusters. Each row is a different cluster. Numbers in the first column indicate the number of words in that cluster.

As shown in Table 6, the words in the first and last rows are mostly function words. The second row has verbs. The third row has special symbols. The fourth row also has function words, but somewhat different from the first row. The fifth row is a large cluster populated by content words, mostly nouns and adjectives. The sixth row has words that are not very important semantically, and the seventh row has mostly adverbs. We also observe that the size of clusters often correlates with how strongly a cluster relates to topics. In Table 6, clusters that have size under 20 are often function words, while the largest cluster (5th row) has words with the most concrete meanings.

We also compare the performance of LC by using a single latent code that has 50 classes. The results in Table 7 show that it is better to use a smaller number of classes for each cluster instead of using a single cluster with a large number of classes.

            BL    R-1   R-2   R-L   MET   ST
LC          13.6  44.7  21.0  48.3  24.8  6.7
Single LC   12.9  44.2  20.3  47.4  24.1  6.9
Table 7: Test results when using a single code.

7.3 Effect of Decoder Structure

As shown in Figure 6, we evaluate three variants of the decoder, namely INIT, CONCAT, and SWAP. For INIT, we use the concatenation of the semantic variable y and the syntactic variable z for computing the initial hidden state of the decoder, and then use the word embedding as input and the hidden state to predict the next word. For CONCAT, we move both y and z to the input of the decoder and use the concatenation of these two variables as input to the decoder, using the hidden state for predicting the next word. For SWAP, we swap the positions of y and z, using the concatenation of y and word embeddings as input to the decoder and the concatenation of z and hidden states as output for predicting the next word.

[Figure 6: Variants of decoder. Left (SWAP): we swap the position of variable y and z. Middle (CONCAT): we concatenate word embedding with y and z as input to decoder. Right (INIT): we use word embeddings as input to the decoder and use the concatenation of y and z to compute the initial hidden state of the decoder.]
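To make these arrangements concrete, the following single-step sketch implements the default decoder of Figure 3 ([et; z] as input, [ht; y] at the output) and notes in comments how SWAP, CONCAT, and INIT differ; dimensions and module names are hypothetical and this is not the authors' code.

import torch
import torch.nn as nn

d_emb, d_y, d_z, d_hid, vocab = 300, 50, 50, 512, 10000   # hypothetical sizes

# Default VGVAE decoder (Figure 3): [e_t ; z] as input, [h_t ; y] at the output.
# SWAP exchanges y and z in these two places; CONCAT uses [e_t ; y ; z] as input and
# h_t alone at the output; INIT uses e_t alone as input and [y ; z] only for the
# initial hidden state.
cell = nn.LSTMCell(d_emb + d_z, d_hid)
out_proj = nn.Linear(d_hid + d_y, vocab)

def decoder_step(e_t, y, z, state):
    h, c = cell(torch.cat([e_t, z], dim=-1), state)
    logits = out_proj(torch.cat([h, y], dim=-1))   # scores for the next word
    return logits, (h, c)

e_t, y, z = torch.randn(1, d_emb), torch.randn(1, d_y), torch.randn(1, d_z)
state = (torch.zeros(1, d_hid), torch.zeros(1, d_hid))   # zero-initialized, as in the paper
logits, state = decoder_step(e_t, y, z, state)
print(logits.shape)   # torch.Size([1, 10000])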
Results for these three settings are shown in Table 9.

         BL    R-1   R-2   R-L   MET   ST
VGVAE    4.5   26.5  8.2   31.5  13.3  10.0
INIT     3.5   22.7  6.0   24.9   9.8  11.5
CONCAT   4.0   23.9  6.6   27.9  11.2  10.9
SWAP     4.3   25.6  7.5   30.4  12.5  10.5
Table 9: Test results with decoder variants.

INIT performs the worst across the three settings. Both CONCAT and SWAP have variables at each time step in the decoder, which improves performance. SWAP arranges the variables in different positions in the decoder and further improves over CONCAT in all metrics.

(1) Semantic input: don't you think that's a quite aggressive message?
    Syntactic input: that's worth something, ain't it?
    Reference: that's a pretty aggressive message, don't you think?
    SCPN + full parse: that's such news, don't you?
    Our best model: that's impossible message, aren't you?
(2) Semantic input: if i was there, i would kick that bastard in the ass.
    Syntactic input: they would've delivered a verdict in your favor.
    Reference: i would've kicked that bastard out on his ass.
    SCPN + full parse: you'd have kicked the bastard in my ass.
    Our best model: she would've kicked the bastard on my ass.
(3) Semantic input: with luck, it may turn out you're right.
    Syntactic input: of course, i could've done better.
    Reference: if lucky, you will be proved correct.
    SCPN + full parse: with luck, i might have gotten better.
    Our best model: of course, i'll be getting lucky.
(4) Semantic input: they can't help, compassion is unbearable.
    Syntactic input: love is straightforward and it is lasting.
    Reference: their help is impossible and compassion is insufferable.
    SCPN + full parse: compassion is unbearable but it is excruciating.
    Our best model: compassion is unacceptable and it is intolerable.
(5) Semantic input: her yelling sounds sad.
    Syntactic input: she looks beautiful. shining like a star.
    Reference: she sounds sad. yelling like that.
    SCPN + full parse: she's sad. screaming in the air.
    Our best model: she sounds sad. screaming like a scream.
(6) Semantic input: me, scare him?
    Syntactic input: how dare you do such thing?
    Reference: how can i scare him?
    SCPN + full parse: why do you have such fear?
    Our best model: why do you scare that scare?
Table 8: Examples of generated sentences.

7.4 Generated Sentences

We show several generated sentences in Table 8. We observe that both SCPN and our model suffer from the same problems. When comparing the syntactic input and the results from both our models and SCPN, we find that they are always the same length. This can often lead to problems like the first example in Table 8: the length of the syntactic input is not sufficient for expressing the semantics in the semantic input, which causes the generated sentences from both models to end at "you?" and omit the verb "think". Another problem is the consistency of pronouns between the generated sentences and the semantic inputs. An example is the second row in Table 8: both models alter "i" to be either "you" or "she", while "kick that bastard in the ass" becomes "kicked the bastard in my ass". We also find that our models sometimes generate nonsensical sentences, for example the last row in Table 8, while SCPN, which is trained on a much larger corpus, does not have this problem. Also, our models can sometimes be distracted by the word tokens in the syntactic input, as shown in the third row in Table 8, where our model directly copies "of course" from the syntactic input, while SCPN, since it uses a parse tree, outputs "with luck". In some rare cases where the function words in both the syntactic inputs and the references are exactly the same, our models can perform better than SCPN, e.g., the last two rows in Table 8. Generated sentences from our model make use of the word tokens "and" and "like", while SCPN does not have access to this information and generates inferior sentences.

8 Conclusion

We proposed a novel setting for controlled text generation, which does not require prior knowledge of all the values the control variable might take on. We also proposed a variational model accompanied with a neural component and multiple multi-task training objectives for addressing this task. The proposed approaches do not rely on a test-time parser or tagger and outperform our baselines. Further analysis shows the model has learned both interpretable and disentangled representations.

Acknowledgments

We would like to thank the anonymous reviewers, NVIDIA for donating GPUs used in this research, and Google for a faculty research award to K. Gimpel that partially supported this research.

References

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. 2017. Deep variational information bottleneck. In Proceedings of ICLR.

Isabelle Augenstein and Anders Søgaard. 2017. Multitask learning of keyphrase boundary classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 341–346. Association for Computational Linguistics.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72. Association for Computational Linguistics.
Marcel Bollmann, Anders Søgaard, and Joachim Bingel. 2018. Multi-task learning for historical text normalization: Size matters. In Proceedings of the Workshop on Deep Learning Approaches for LowResource NLP, pages 19–24. Association for Computational Linguistics. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21. Association for Computational Linguistics. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 152–161, Melbourne, Australia. Association for Computational Linguistics. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14. Association for Computational Linguistics. Mingda Chen and Kevin Gimpel. 2018. Smaller text classifiers with discriminative cluster embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 739–745. Association for Computational Linguistics. Mingda Chen, Qingming Tang, Karen Livescu, and Kevin Gimpel. 2018. Variational sequential labelers for semi-supervised learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 215–226. Association for Computational Linguistics. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. A multi-task approach for disentangling syntax and semantics in sentence representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2453–2464, Minneapolis, Minnesota. Association for Computational Linguistics. Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. 2018. Hyperspherical variational auto-encoders. 34th Conference on Uncertainty in Artificial Intelligence (UAI18). Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875–886. Association for Computational Linguistics. Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, and Xuan Wang. 2018. Variational autoregressive decoder for neural response generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3154–3163. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45–54. Association for Computational Linguistics. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94–104. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In AAAI. Leon A Gatys, Alexander S Ecker, and Matthias Bethge. 2016. 
Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423. Anirudh Goyal Alias Parth Goyal, Alessandro Sordoni, Marc-Alexandre Cˆot´e, Nan Rosemary Ke, and Yoshua Bengio. 2017. Z-forcing: Training stochastic recurrent networks. In Advances in Neural Information Processing Systems, pages 6713–6723. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir 5982 Mohamed, and Alexander Lerchner. 2016. betaVAE: Learning basic visual concepts with a constrained variational framework. In Proceedings of ICLR. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1587–1596, International Convention Centre, Sydney, Australia. PMLR. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885. Association for Computational Linguistics. Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2018. Disentangled representation learning for non-parallel text style transfer. arXiv preprint arXiv:1808.04339. Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1499–1508, Melbourne, Australia. Association for Computational Linguistics. Diederik P. Kingma and Max Welling. 2014. AutoEncoding Variational Bayes. In Proceedings of ICLR. Anjishnu Kumar, Arpit Gupta, Julian Chan, Sam Tucker, Bjorn Hoffmeister, Markus Dreyer, Stanislav Peshterliev, Ankur Gandhe, Denis Filiminov, Ariya Rastrow, et al. 2017. Just ASK: building an architecture for extensible self-service spoken language understanding. In 1st Workshop on Conversational AI at NIPS 2017 (NIPS-WCAI). Juntao Li, Lisong Qiu, Bo Tang, Dongmin Chen, Dongyan Zhao, and Rui Yan. 2019. Insufficient data can also rock! learning to converse using smaller data with augmentation. In Thirty-Third AAAI Conference on Artificial Intelligence. Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3865–3878. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Shuming Ma, Xu Sun, Wei Li, Sujian Li, Wenjie Li, and Xuancheng Ren. 2018. Query and output: Generating words by querying distributed word representations for paraphrase generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 196–206. 
Association for Computational Linguistics. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881–893. Association for Computational Linguistics. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1727–1736, New York, New York, USA. PMLR. Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Transactions of the Association for Computational Linguistics, 6:373–389. Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1329–1338, Melbourne, Australia. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Hao Peng, Ankur Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text generation with exemplar-based adaptive decoding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2555–2565, Minneapolis, Minnesota. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. 5983 Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412–418. Association for Computational Linguistics. Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923–2934. The COLING 2016 Organizing Committee. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2121–2130. Association for Computational Linguistics. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 627–637. Association for Computational Linguistics. Iulian Vlad Serban, Alexander G. Ororbia, Joelle Pineau, and Aaron Courville. 2017. Piecewise latent variables for neural variational text processing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 422–432. Association for Computational Linguistics. Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, and Lawrence Carin. 2019. Towards generating long and coherent text with multi-level latent variable models. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6830–6841. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Wentao Wang, Zhiting Hu, Zichao Yang, Haoran Shi, Frank Xu, and Eric Xing. 2019. Toward unsupervised text content manipulation. arXiv preprint arXiv:1901.09501. Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87–92, Brussels, Belgium. Association for Computational Linguistics. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In Proceedings of ICLR. John Wieting and Kevin Gimpel. 2018. ParaNMT50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462. Association for Computational Linguistics. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174–3187. Association for Computational Linguistics. Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4503– 4513. Association for Computational Linguistics. Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM journal on computing, 18(6):1245–1262. Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander M Rush, and Yann LeCun. 2018. Adversarially Regularized Autoencoders. In Proceedings of ICML. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664. Chunting Zhou and Graham Neubig. 2017. Multispace variational encoder-decoders for semisupervised labeled sequence transduction. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 310–320. Association for Computational Linguistics. A Appendices A.1 Hyperparameters We use 100-dimensional word embeddings in both encoders and 100-dimensional word embeddings for the decoder. All word embeddings are initialized with GloVe vectors (Pennington et al., 2014). The syntactic encoder uses 100 dimensions per direction, and the decoder is a 100-dimensional unidirectional LSTM. When monitoring performance for early stopping, we use greedy decoding. At test time, we use beam search with a beam size of 10.
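For concreteness, below is a small, framework-agnostic sketch of the two decoding strategies mentioned above (greedy decoding for early-stopping checks, beam search of width 10 at test time). The callable `step_fn`, which returns next-token log-probabilities for a given prefix, is a hypothetical stand-in for the trained decoder, so this illustrates the procedure rather than the implementation used here.

```python
# Hedged sketch: greedy vs. beam-search decoding over a hypothetical step_fn.
import numpy as np

def greedy_decode(step_fn, bos_id, eos_id, max_len=50):
    seq = [bos_id]
    for _ in range(max_len):
        log_probs = step_fn(seq)              # shape: [vocab_size]
        next_id = int(np.argmax(log_probs))
        seq.append(next_id)
        if next_id == eos_id:
            break
    return seq

def beam_search_decode(step_fn, bos_id, eos_id, beam_size=10, max_len=50):
    beams = [([bos_id], 0.0)]                 # (token sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            log_probs = step_fn(seq)
            for tok in np.argsort(log_probs)[-beam_size:]:
                candidates.append((seq + [int(tok)], score + float(log_probs[tok])))
        candidates.sort(key=lambda x: x[1], reverse=True)
        beams = []
        for seq, score in candidates:         # keep top hypotheses, set finished ones aside
            (finished if seq[-1] == eos_id else beams).append((seq, score))
            if len(beams) == beam_size:
                break
        if not beams:
            break
    finished.extend(beams)
    return max(finished, key=lambda x: x[1])[0]
```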
2019
599
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 44–50 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 44 Constructing Interpretive Spatio-Temporal Features for Multi-Turn Responses Selection Junyu Lu†, Chenbin Zhang†, Zeying Xie, Guang Ling, Chao Zhou, Zenglin Xu† † SMILE Lab, University of Electronic Science and Technology of China, Sichuan, China {cs.junyu, aleczhang13, swpdtz, zacharyling}@gmail.com, [email protected], [email protected] Abstract Response selection plays an important role in fully automated dialogue systems. Given the dialogue context, the goal of response selection is to identify the best-matched nextutterance (i.e., response) from multiple candidates. Despite the efforts of many previous useful models, this task remains challenging due to the huge semantic gap and also the large size of candidate set. To address these issues, we propose a Spatio-Temporal Matching network (STM) for response selection. In detail, soft alignment is first used to obtain the local relevance between the context and the response. And then, we construct spatio-temporal features by aggregating attention images in time dimension and make use of 3D convolution and pooling operations to extract matching information. Evaluation on two large-scale multi-turn response selection tasks has demonstrated that our proposed model significantly outperforms the state-ofthe-art model. Particularly, visualization analysis shows that the spatio-temporal features enables matching information in segment pairs and time sequences, and have good interpretability for multi-turn text matching. 1 Introduction Fully automated dialogue systems (Litman and Silliman, 2004; Banchs and Li, 2012; Lowe et al., 2017; Zhou et al., 2018) are becoming increasingly important area in natural language processing. An important research topic in dialogue systems is response selection, as illustrated in Figure 1, which aims to select an optimal response from a pre-defined pool of potential responses (Kummerfeld et al., 2018). Practical methods to response selection are usually retrieval-based, that focus on matching the semantic similarity between the response and utterances in the dialogue history (Shang et al., 2015; Zhang et al., 2018). Recently, convolutional operation, as a useful attempt to explore local correlation, has been inFigure 1: Examples of the Ubuntu dataset provided by NOESIS 1. Text segments with the same color symbols across context and response can be seen as matched pairs. vestigated to extract the matching features from the attention grid (Wu et al., 2017; Zhou et al., 2018). Unfortunately, these methods usually do not perform well when there are many candidate responses. In fact, in multi-turn dialogues, the next sentence is generally based on what was presented before and tends to match a recent local context. This is because the topic in a conversation may change over time, and the effective matching between the dialogue may only appear in a local time period. This phenomena generally appear in video processing (Hara et al., 2018; Tran et al., 2014), image caption (Chen et al., 2017) and action recognition (Girdhar and Ramanan, 2017). Therefore, it is natural to adopt convolutional structure or attention mechanism to extract local matching information from the sentence sequences. Analogously, each turn of dialogue can be regarded as a frame of a video. 
This motivates us to propose the Spatio-Temporal Matching block (STM) to construct spatio-temporal features of the local semantic relations between each turn of dialogue and the candidates via a soft-attention mechanism. (Footnote 1: The Noetic End-to-End Response Selection Challenge is described in detail at http://workshop.colips.org/dstc7.) Figure 2: The proposed spatio-temporal matching framework for response selection. In detail, we model the response selection problem as a multi-class classification problem with sequences as input, where the label of the true response is set to one and the labels of the other candidates are set to zero. As illustrated in Figure 2, the proposed STM framework includes two parts: (i) a representation module and (ii) a matching block. Specifically, representations of the dialogue context and candidate answers are first learned from dual encoders, and deep 3D ConvNets (Ji et al., 2013) are then used to extract matching information from the attention between the dialogue contexts and the candidate answers. Evaluation on the NOESIS datasets has demonstrated the outstanding performance of our proposed model against other well-known frameworks. Furthermore, our model offers good interpretability through the visualization of the attention weights as heatmaps. Our code is released under https://github.com/CSLujunyu/Spatio-Temporal-Matching-Network. 2 Our model Before presenting the model, we first provide the problem formulation. Suppose that we have a dialogue dataset $\{(D, C, R)_i\}_{i=1}^{N}$, where we denote $D = \{d_0, d_1, ..., d_m\}$ as a conversation context with utterances $d_i$ and $C = \{c_0, c_1, ..., c_n\}$ as the next-utterance candidate set. $R$ represents the ID of the correct response in the corresponding candidate set. Our goal is to learn a matching model between the dialogue context $D$ and the candidates $c_i$ that can measure the matching degree and predict the best-matched response. 2.1 Representation Module Given a dialogue context $D = \{d_0, d_1, ..., d_m\}$ and candidates $C = \{c_0, c_1, ..., c_n\}$, we employ $L$ layers of bidirectional GRUs (Bi-GRU) (Cho et al., 2014) to extract sequential information in a sentence. The representations we use are deep, in the sense that they are a function of all of the internal layers of the Bi-GRU (Devlin et al., 2018; Peters et al., 2018a). We denote the $l$th GRU-layer dialogue and candidate representations as $H^l_\mu = \{\mu^l_0, \mu^l_1, ..., \mu^l_m\}$ and $H^l_\gamma = \{\gamma^l_0, \gamma^l_1, ..., \gamma^l_n\}$ respectively. 2.2 Spatio-Temporal Matching block An illustration of the matching block is shown in Figure 3. We use an attention mechanism to construct locally related features for every candidate. To avoid the influence of gradient explosion caused by large dot products, matching matrices are constructed at each layer using scaled attention (Vaswani et al., 2017), which is defined as: $M^l_{\mu_m,\gamma_n} = \frac{(\mu^l_m)^\top \gamma^l_n}{\sqrt{d}}$ (1), where $l \in [1, L]$, $\mu^l_m \in \mathbb{R}^{d \times n_\mu}$ denotes the $m$th turn of the dialogue representation at the $l$th GRU layer, $\gamma^l_n \in \mathbb{R}^{d \times n_\gamma}$ denotes the $n$th candidate representation at the $l$th GRU layer, $M^l_{\mu_m,\gamma_n} \in \mathbb{R}^{n_\mu \times n_\gamma}$ is constructed as attention images, $d$ is the dimension of the word embedding, and $n_\mu$ and $n_\gamma$ denote the number of words in the dialogue utterances and candidates respectively. Figure 3: A close-up of the matching block. Moreover, in order to retain the natural temporal relationship of the matching matrices, we aggregate them all into a 4D cube by expanding in the time dimension.
We call 4D-matching as spatiotemporal features and define images of nth candidate as Q(n): Q(n) = {Q(n) i,j,k}m×nµ×nγ, (2) Q(n) i,j,k = {Ml µi,γn[j, k]}L l=0, (3) where Q(n) ∈Rm×nµ×nγ×L, Ml µi,γn[j, k] ∈R and Q(n) i,j,k ∈RL is a pixel in Q(n). Motivated by C3D network (Tran et al., 2014), it is natural to apply a 3D ConvNet to extract local matching information from Q(n). The operation of 3D convolution with max-pooling is the extension of typical 2D convolution, whose filters and strides are 3D cubes. Our matching block has four convolution layers and three pooling layers (First two convolution layers are both immediately followed by pooling layer, yet the last pooling layer follows two continuous convolution layers). All of 3D convolution filters are 3 × 3 × 3 with stride 1 × 1 × 1. With the intention of preserving the temporal information in the early phase, 3D pooling layers are set as 3 × 3 × 3 with stride 3 × 3 × 3 except for the first pooling layer which has kernel size of 1 × 3 × 3 and stride 1 × 3 × 3. One fully-connected layer is used to predict the matching score between dialog context and potential responses. Finally, we compute softmax cross entropy loss, sn = Wfconv(Q(n)) + b, (4) where fconv is the 3D ConvNet we used, W and b are learned parameters. 3 Experiments 3.1 Dataset The ongoing DSTC series starts as an initiative to provide a common testbed for the task of Dialog State Tracking, and the most recent event, DSTC7 in 2018, mainly focused on end-to-end systems (Williams et al., 2013; Yoshino et al., 2019). We evaluate our model on two new datasets that released by the NOESIS (DSTC7 Track1): (1) the Ubuntu Corpus: Ubuntu IRC (Lowe et al., 2015a) consists of almost one million two-person conversations extracted from the Ubuntu chat logs , used to receive technical support for various Ubuntu-related problems. The newest version lies in manually annotations with a large set of candidates (Kummerfeld et al., 2018). The training data includes over 100,000 complete conversations, and the test data contains 1,000 partial conversations. (2) the Advising Dataset: It collects advisor dialogues for the purpose of guiding the student to pick courses that fit not only their curriculum, but also personal preferences about time, difficulty, career path, etc. It provides 100,000 partial conversations for training, obtained by cutting 500 conversations off randomly at different time points. Each conversation has a minimum of 3 turns and up to 100 candidates. 3.2 Metrics We use the same evaluation metrics as in previous works and the recommendation of the NOESIS (Wu et al., 2017; Zhou et al., 2018; Yoshino et al., 2019). Each comparison model is asked to select k best-matched utterances from n available candidates. We calculate the recall of the true positive responses among the k selected ones and denote it as Rn@k = Pk i=0 yi Pn i=0 yi , where yi is the binary label for each candidate. In addition, we use MRR (Mean reciprocal rank) (Voorhees et al., 1999; Radev et al., 2002) to evaluate the confident ranking of the candidates returned by our model. 3.3 Experimental Setting We consider at most 9 turns and 50 words for each utterance and responses in our experiments. 
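To make the tensor shapes above concrete, the following is a rough PyTorch-style sketch of the matching cube (Equations 1-3) and the 3D ConvNet scorer (Equation 4). The channel widths, the final global pooling step, and all class and function names are illustrative assumptions, not the released implementation.

```python
# Hedged sketch of the spatio-temporal matching block (Section 2.2).
import torch
import torch.nn as nn

def matching_cube(dialog_states, cand_states):
    """dialog_states: [L, m, n_mu, d]; cand_states: [L, n_gamma, d].
    Returns Q: [L, m, n_mu, n_gamma] for one candidate (Eqs. 1-3)."""
    d = dialog_states.size(-1)
    # scaled dot product between every dialogue word and every candidate word
    return torch.einsum('lmxd,lyd->lmxy', dialog_states, cand_states) / d ** 0.5

class STMBlock(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        conv = lambda c_in, c_out: nn.Conv3d(c_in, c_out, kernel_size=3, padding=1)
        self.net = nn.Sequential(
            conv(in_channels, 32), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 3, 3)),  # keep turns early on
            conv(32, 64), nn.ReLU(),
            nn.MaxPool3d(kernel_size=3, stride=3),
            conv(64, 64), nn.ReLU(),
            conv(64, 64), nn.ReLU(),
            nn.AdaptiveMaxPool3d(1),      # assumption: global pooling before the linear layer
        )
        self.score = nn.Linear(64, 1)

    def forward(self, q):                 # q: [num_candidates, L, m, n_mu, n_gamma]
        feats = self.net(q).flatten(1)    # [num_candidates, 64]
        return self.score(feats).squeeze(-1)   # one matching score per candidate (Eq. 4)
```

The per-candidate scores can then be normalized with a softmax over the candidate set and trained with cross-entropy against the index of the true response, as described for Equation 4.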
Word embeddings are initialized by GloVe1(Pennington 1http://nlp.stanford.edu/data/glove.840B.300d.zip 47 Model R100@1 R100@10 MRR Baseline 0.083 0.359 DAM 0.347 0.663 0.356 DAM+Fine-tune 0.364 0.664 0.443 DME 0.383 0.725 0.498 DME-SMN 0.455 0.761 0.558 STM(Transform) 0.490 0.764 0.588 STM(GRU) 0.503 0.783 0.597 STM(Ensemble) 0.521 0.797 0.616∗ STM(BERT) 0.548∗ 0.827∗ 0.614 Table 1: Experiment Result on the Ubuntu Corpus. Model Advising 1 Advising 2 R100@10 MRR R100@10 MRR Baseline 0.296 DAM 0.603 0.312 0.374 0.174 DAM+Fine-tune 0.622 0.333 0.416 0.192 DME 0.420 0.215 0.304 0.142 DME-SMN 0.570 0.335 0.388 0.183 STM(Transform) 0.590 0.320 0.404 0.182 STM(GRU) 0.654 0.380 0.466 0.220 STM(Ensemble) 0.662∗ 0.385∗ 0.502∗ 0.232∗ Table 2: Experiment Results on the Advising Dataset. et al., 2014) and updated during training. We use Adam (Kingma and Ba, 2014) as the optimizer, set the initial learning rate is 0.001, and we employ early-stopping(Caruana et al., 2001) as a regularization strategy. 3.4 Comparison Methods In this paper, we investigate the current state-ofthe-art model in response selection task. In order to make it compatible to the task of NOESIS, we have made some changes as following: (1) Baseline The benchmark released by DSTC7 is an extension of the Dual LSTM Encoder model 2 (Lowe et al., 2015b). (2) Dual Multi-turn Encoder Different from Baseline, we use a multi-turn encoder to embed each utterance respectively and calculate utterance-candidate matching scores using dot product at the last hidden state of LSTM. (3) Sequential Matching Network We employ Sequential Matching Network (Wu et al., 2017) to measure the matching score of each candidate, and then calculate categorical cross entropy loss across all of them. We name it as DME-SMN in Table 1, 2. (4) Deep Attention Matching Network The DAM (Zhou et al., 2018) trained on undersampling data (Chawla, 2009), which use a 2https://github.com/IBM/dstc7-noesis/tree/master/noesistf 1:1 ratio between true responses and negative responses for training, is represented as DAM in Table 1, 2. Furthermore, we also construct contextrelated negative responses to train the model. We observe that using only this context-related negative responses to train the model will result in divergence. So this data is only used for finetuning. In this way, DAM is firstly trained on undersampling data then get fine-tuned with contextrelated negative responses. We name this model as DAM+Fine-tune in Table 1, 2. 3.5 Ablation Study As it is shown in Table 1, we conduct an ablation study on the testset of the Ubuntu Corpus, where we aim to examine the effect of each part in our proposed model. Firstly, we verify the effectiveness of dual multi-turn encoder by comparing Baseline and DME in Table 1. Thanks to dual multi-turn encoder, DME achieves 0.725 at R100@10 which is 0.366 better than the Baseline (Lowe et al., 2015b). Secondly, we study the ability of representation module by testing LSTM, GRU and Transformer with the default hyperparameter in Tensorflow. We note that GRU is better for this task. After removing spatio-temporal matching block, the performance degrades significantly. In order to verify the effectiveness of STM block further, we design a DME-SMN which uses 2D convolution for extracting spatial attention information and employ GRU for modeling temporal information. The STM block makes a 10.54% improvement at R100@1. Next, we replace GRU with Transformer in STM. 
Supposed the data has maximal m turns and n candidates, the time complexity of crossattention (Zhou et al., 2018), O(mn), is much higher than that of the Dual-Encoder based model, O(m + n). Thus, cross-attention is an impractical operation when the candidate set is large. So we remove cross-attention operations in DAM and extend it with Dual-Encoder architecture. The result in Table 1 shows that using self-attention only may not be enough for representation. As BERT (Devlin et al., 2018) has been shown to be a powerful feature extractor for various tasks, we employ BERT as a feature-based approach to generate ELMo-like pre-trained contextual representations (Peters et al., 2018b).It succeed the 48 Figure 4: Attention feature across positive and negative matching in the first layer. highest results and outperforms other methods by a significant margin. 3.6 Visualization In order to demonstrate the effectiveness of spatiotemporal information matching mechanism, we visualize attention features across positive and negative examples. To clarify how our model identifies important matching information between context and candidates, we visualize the attention matching matrices in Figure 4. The first row is positive matching matrices and the sencond is negative matching example. We denote the y-axis of Figure 4 as response sentence and the x-axis as utterances in context. Each colored grid represents the matching degree or attention score between two words. Deeper color represents better matching. Attention images in the first row are related to positive matching while those of the second row are related to negative matching. Intuitively, We can see that important words such as “vlc”, “wma” are recognized and carried to match “drm” in correct response. In contrast, the incorrect response has no correlation and thus little matching spaces. Note that our model can not only match wordlevel information, but also can match segmentFigure 5: Attention feature in different granularities. Left picture represents the second layer matching matrix for segment granularities, while right picture match at the third layer. level or sentence level information using 3D convolution. As it shows in Figure 5, the second layer tends to concentrate on segment-level information for which “wma patch” in utterance highly match “the home page drm” and “nasty nasty standard drm” in response. Furthermore, we find in our experiment that third layer tends to focus on sentence topic and more abstract meaning of the segments, which achieve better performance. However, more than three layers will destroy model ability in our experiments. 4 Conclusion and Future Work In this paper, we proposed an End-to-End spatiotemporal matching model for response selection. The model uses a dual stacked GRU or pre-trained BERT to embed utterances and candidates respectively and apply spatio-temporal matching block to measure the matching degree of a pair of context and candidate. Visualization of attention layers illustrates that our model has the good interpretative ability, and has the ability to pick out important words and sentences. In the future, we would like to explore the effectiveness of various attention methods to solve indefinite choices task with interpretive features. 
5 Acknowledgement Junyu Lu, Chenbin Zhang and Zenglin Xu was partially supported by a grant from National Natural Science Foudation of China (No.61572111), Startup fundings of UESTC (Nos.A1098531023601041 and G05QNQR004), and a Research Fund for the Central Universities of China (No.ZYGX2016Z003). 49 References Rafael E Banchs and Haizhou Li. 2012. Iris: a chatoriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demonstrations, pages 37–42. Association for Computational Linguistics. Rich Caruana, Steve Lawrence, and C Lee Giles. 2001. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. pages 402–408. Nitesh V Chawla. 2009. Data mining for imbalanced datasets: An overview. In Data mining and knowledge discovery handbook, pages 875–886. Springer. Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. 2017. Scacnn: Spatial and channel-wise attention in convolutional networks for image captioning. pages 5659– 5667. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Rohit Girdhar and Deva Ramanan. 2017. Attentional pooling for action recognition. Neural Information Processing Systems (NIPS), pages 34–45. Kensho Hara, Hirokatsu Kataoka, and Yutaka Satoh. 2018. Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? pages 6546–6555. Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 2013. 3d convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1):221–231. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jonathan K Kummerfeld, Sai R Gouravajhala, Joseph Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros Polymenakos, and Walter S Lasecki. 2018. Analyzing assumptions in conversation disentanglement research through the lens of a new dataset and model. arXiv preprint arXiv:1810.11118. Diane J Litman and Scott Silliman. 2004. Itspoke: An intelligent tutoring spoken dialogue system. In Demonstration papers at HLT-NAACL 2004, pages 5–8. Association for Computational Linguistics. Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. arXiv preprint arXiv:1708.07149. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015a. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. CoRR, abs/1506.08909. Ryan Lowe, Nissan Pow, Iulian V. Serban, and Joelle Pineau. 2015b. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. Proceedings of the SIGDIAL 2015 Conference, page 285294. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Matthew E. 
Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. CoRR, abs/1802.05365. Dragomir R Radev, Hong Qi, Harris Wu, and Weiguo Fan. 2002. Evaluating web-based question answering systems. In LREC. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364. Du Tran, Lubomir D. Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. 2014. C3D: generic features for video analysis. CoRR, abs/1412.0767. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec, volume 99, pages 77– 82. Citeseer. Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404–413. Yu Wu, Wei Wu, Chen Xing, Zhoujun Li, and Ming Zhou. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. Proceedings ofthe 55th Annual Meeting ofthe Association for Computational Linguistics, pages 496–505. Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D’Haro, Lazaros Polymenakos, R. Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda AlAmri, Tim K. Marks, Devi Parikh, and Dhruv Batra. 2019. 50 Dialog system technology challenge 7. CoRR, abs/1901.03461. Rui Zhang, Honglak Lee, Lazaros Polymenakos, and Dragomir Radev. 2018. Addressee and response selection in multi-party conversations with speaker interaction rnns. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. Proceedings ofthe 56th Annual Meeting ofthe Association for Computational Linguistics, pages 1–10.
2019
6
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 629–639 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 629 Multi-Task Learning for Coherence Modeling Youmna Farag Helen Yannakoudakis Department of Computer Science and Technology The ALTA Institute University of Cambridge United Kingdom {youmna.farag,helen.yannakoudakis}@cl.cam.ac.uk Abstract We address the task of assessing discourse coherence, an aspect of text quality that is essential for many NLP tasks, such as summarization and language assessment. We propose a hierarchical neural network trained in a multitask fashion that learns to predict a documentlevel coherence score (at the network’s top layers) along with word-level grammatical roles (at the bottom layers), taking advantage of inductive transfer between the two tasks. We assess the extent to which our framework generalizes to different domains and prediction tasks, and demonstrate its effectiveness not only on standard binary evaluation coherence tasks, but also on real-world tasks involving the prediction of varying degrees of coherence, achieving a new state of the art. 1 Introduction Discourse coherence refers to the way textual units relate to one another and form a coherent whole. Coherence is an important aspect of text quality and therefore its modeling is essential in many NLP applications, including summarization (Barzilay et al., 2002; Parveen et al., 2016), question-answering (Verberne et al., 2007), question generation (Desai et al., 2018), and language assessment (Burstein et al., 2010; Somasundaran et al., 2014; Farag et al., 2018). A large body of work has investigated models for the assessment of inter-sentential coherence, that is, assessment in terms of transitions between adjacent sentences (Barzilay and Lapata, 2008; Yannakoudakis and Briscoe, 2012; Guinaudeau and Strube, 2013; Tien Nguyen and Joty, 2017; Joty et al., 2018). The properties of text that result in inter-sentential connectedness have been translated into a number of computational models – some of the most prominent ones include the entity-based approaches, inspired by Centering Theory (Grosz et al., 1995) and proposed in the pioneering work of Barzilay and Lapata (2005, 2008). Such approaches model local coherence in terms of entity transitions between adjacent sentences, where entities are represented by their syntactic role in the sentence (e.g., subject, object). Current state-of-the-art deep learning adaptations of the entity-based framework involve the use of Convolutional Neural Networks (CNNs) over an entity-based representation of text to discriminate between a coherent document and its incoherent variants containing a random reordering of the document’s sentences (Tien Nguyen and Joty, 2017); as well as lexicalized counterparts of such models that further incorporate lexical information regarding the entities, thereby distinguishing between different entities (Joty et al., 2018). In contrast to existing approaches, we propose a more generalized framework that allows neural models to encode information about the types of grammatical roles all words in a sentence participate in, rather than focusing only on the roles of entities within a sentence. 
Inspired by recent advances in Multi-Task Learning (MTL) (Rei and Yannakoudakis, 2017; Sanh et al., 2018), we propose a simple, yet effective hierarchical model trained in a multi-task fashion that learns to perform two tasks: scoring a document’s discourse coherence and predicting the type of grammatical role (GR) of a dependent with its head. We take advantage of inductive transfer between these tasks by giving a supervision signal at the bottom layers of a network with respect to the types of GRs, and a supervision signal at the top layers with respect to document-level coherence. Our contributions are four-fold: (1) We propose a MTL approach to coherence assessment and compare it against a number of baselines. We experimentally demonstrate that such a framework allows us to exploit more effectively the inter630 dependencies between the two prediction tasks and achieve state-of-the-art results in predicting document-level coherence; (2) We assess the extent to which the information encoded in the network generalizes to different domains and prediction tasks, and demonstrate the effectiveness of our approach not only on standard binary evaluation tasks on the Wall Street Journal (WSJ), but also on more realistic tasks involving the prediction of varying degrees of coherence in people’s everyday writing; (3) In contrast to existing work that has only investigated the impact of a specific set of grammatical roles (i.e., subject and object) on coherence, we instead investigate a large set of GR types, and train the model to predict the type of role dependents participate in. This allows the network to learn more generic patterns of language and composition, and a much richer set of representations than those induced by current approaches. In turn, this can be better exploited at the top layers of the network for predicting document-level coherence; (4) Finally, and contrary to previous work, our model does not rely on the availability of external linguistic tools at testing time as it directly learns to predict the GR types. 2 Related Work Several studies have proposed frameworks for modeling the textual properties that coherent texts exhibit. A popular approach is one based on the entity-grid (egrid) representation of texts, proposed by Barzilay and Lapata (2005, 2008) and inspired by Centering Theory (Grosz et al., 1995). In the egrid model, texts are represented as matrices of entities (columns) and sentences (rows). Entities in the matrix are represented by their grammatical role (i.e., subject, object, neither), and entity transitions across sentences are used as features for coherence assessment. A large body of work has utilized and extended the egrid approach (Elsner and Charniak, 2008; Burstein et al., 2010; Elsner and Charniak, 2011; Guinaudeau and Strube, 2013). Other features have also been leveraged, such as syntactic patterns (Louis and Nenkova, 2012) and discourse relations (Lin et al., 2011; Feng et al., 2014). Deep learning architectures have also been successfully applied to the task of coherence scoring, achieving state-of-theart results (Li and Jurafsky, 2017; Logeswaran et al., 2018; Cui et al., 2018). Some have exploited egrid features in a CNN model aimed at capturing long range entity transitions (Tien Nguyen and Joty, 2017; Joty et al., 2018); further details are provided in Section 4.2. 
Traditionally, coherence evaluation has been treated as a binary task, where a model is trained to distinguish between a coherent document and its incoherent counterparts created by randomly shuffling the sentences it contains. The news domain has been a popular source of well-written, coherent texts. Among the popular datasets are articles about EARTHQUAKES and AIRPLANES accidents (Barzilay and Lapata, 2008; Guinaudeau and Strube, 2013; Li and Jurafsky, 2017) and the Wall Street Journal (WSJ) portion of the Penn Treebank (Elsner and Charniak, 2008; Lin et al., 2011; Tien Nguyen and Joty, 2017). Elsner and Charniak (2008) argue that the WSJ documents are normal informative articles, whereas the AIRPLANES and EARTHQUAKES ones have a more constrained style. 3 Approach 3.1 Neural Single-Task Learning (STL) Our baseline model, shown in Figure 1, performs the single task of predicting an overall coherence score via a hierarchical model based on a Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997). A document is composed of a sequence of sentences {s1, s2, ..., sm} and, in turn, each sentence consists of a sequence of words {w1, w2, ..., wn}. The input words are initialized with vectors from a pre-trained embedding space. A bidirectional LSTM (Bi-LSTM) is applied to the words in each sentence to get contextualized representations, and the output vectors from both directions are concatenated: −→ hw t = LSTM(wt, −−→ hw t−1) ←− hw t = LSTM(wt, ←−− hw t+1) hw t = [−→ hw t , ←− hw t ] (1) To compose a sentence representation s, the hidden states {hw 1 , ..., hw n } of its words are combined with an attention mechanism: uw t = tanh(W whw t ) aw t = exp(vwuw t ) P t exp(vwuw t ) s = X t aw t hw t (2) 631 Figure 1: The hierarchical architecture of the STL and MTL models. The dotted red box is specific to the MTL framework. The dotted purple box is applied if the document contains paragraph boundaries (which is the case for the Grammarly Corpus in Section 4.1) in order to create paragraph representations prior to the document one. where W w and vw are learnable parameters. Attention allows the model to focus on the salient words for coherence and build better sentence representations. Constructing a document representation d is similar to the sentence one – a second Bi-LSTM is utilized over sentences {s1, s2, ..., sm} to generate contextually rich sentence representations: −→ hs i = LSTM(si, −−→ hs i−1) ←− hs i = LSTM(si, ←−− hs i+1) hs i = [−→ hs i , ←− hs i ] (3) Subsequently, attention is applied over the sentence embeddings {hs 1, ..., hs m} to allow the model to focus on sentences that contribute highly to the overall coherence of the document: us i = tanh(W shs i) as i = exp(vsus i) P i exp(vsus i) d = X i as ths i (4) where W s and vs are trainable weights in the network. If a document consists of paragraphs {p1, p2, ..., pl}, a third Bi-LSTM is stacked over the sentence vectors and the output is aggregated with another attention layer to compose the document vector d. Finally, the coherence score of a document is predicted by applying a linear transformation to the vector d followed by a sigmoid operation to bound the score in [0, 1]: ˆy = σ(W d d) (5) where W d ∈Rdim is the linear function weight and dim represents the dimensionality of the document vector. In a binary classification task, where the document is labeled as either coherent or incoherent, the model predicts one value for ˆy ∈[0, 1]. 
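As a concrete rendering of Equations 1-5, the following is a condensed PyTorch-style sketch of the hierarchical encoder. It is a sketch with illustrative dimensions; padding, masking and the third, paragraph-level Bi-LSTM used for GCDC are omitted, and the class and variable names are not taken from the released code.

```python
# Hedged sketch of the STL coherence scorer (Eqs. 1-5).
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                      # h: [batch, steps, dim]
        scores = self.v(torch.tanh(self.proj(h)))     # [batch, steps, 1]
        alpha = torch.softmax(scores, dim=1)          # attention weights
        return (alpha * h).sum(dim=1)                 # [batch, dim]

class STLCoherence(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, hid=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hid, hid, bidirectional=True, batch_first=True)
        self.word_attn = AttentionPool(2 * hid)
        self.sent_attn = AttentionPool(2 * hid)
        self.out = nn.Linear(2 * hid, 1)

    def forward(self, docs):                   # docs: [batch, n_sents, n_words] word ids
        b, m, n = docs.shape
        w, _ = self.word_lstm(self.emb(docs.view(b * m, n)))   # Eq. 1
        sents = self.word_attn(w).view(b, m, -1)               # Eq. 2
        s, _ = self.sent_lstm(sents)                           # Eq. 3
        doc = self.sent_attn(s)                                # Eq. 4
        return torch.sigmoid(self.out(doc)).squeeze(-1)        # Eq. 5
```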
In a multiclass classification setting where there are multiple classes $y \in C$ representing various degrees of coherence, a document is labeled with a one-hot vector of length $|C|$, with a value of 1 at the index of the correct class and 0 everywhere else. The model predicts $|C|$ scores, using Equation 5 with $W^d \in \mathbb{R}^{dim \times |C|}$, and learns to maximize the value corresponding to the gold label. For the binary task, the network's parameters are optimized to minimize the negative log-likelihood of the document's ground-truth label $y$, given the network's prediction $\hat{y}$: $L_1 = -y \log(\hat{y}) - (1 - y)\log(1 - \hat{y})$ (6). For the multiclass task, we use the mean squared error to minimize the discrepancy between the one-hot gold vector and the estimated one: $L_1 = \frac{1}{|C|} \sum_{j=1}^{|C|} (y_j - \hat{y}_j)^2$ (7). An alternative approach to the multiclass problem is to apply a softmax over the predictions instead of a sigmoid, and minimize the categorical cross-entropy; however, initial experiments on the development set showed that our formulation yields better results. 3.2 Neural Multi-Task Learning (MTL) The model described in 3.1 performs the single task of predicting a coherence score for a text; all model parameters are tuned to minimize the loss ($L_1$) in Equation 6 or 7 (depending on whether we are optimizing for a binary or a multiclass classification task respectively). We extend this model to an MTL framework by training it to optimize a secondary objective at the bottom layers of the network, along with the main one ($L_1$). Specifically, the model is trained to predict a document-level score along with word-level labels indicating the (predicted) GR type of dependents in the document. (Footnote 1: We make our code publicly available at https://github.com/Youmna-H/coherence_mtl.) The GRs are based on a predefined set R, generated from a dependency parser on the training set (Section 4.3). The set includes the types of GRs in which a word is a dependent (e.g., nsubj, amod, xcomp, iobj), and each type $r \in R$ is treated as a class (for the 'root' word, the type is root). In order to predict a probability distribution over R given a word representation $h^w_t$ (Equation 1), a linear operation normalized by a softmax function is applied: $P(y^r_t | h^w_t) = \mathrm{softmax}(W^r h^w_t)$ (8). The secondary objective, i.e., the word-level loss, is defined as the categorical cross-entropy, the negative log-probability of the correct labels: $L_2 = -\sum_t \sum_r y^r_t \log P(y^r_t | h^w_t)$ (9). Both the main ($L_1$) and secondary ($L_2$) objectives are optimized jointly ($L_{total}$), but with different weights to indicate the importance of each of these tasks during training: $L_{total} = \alpha L_1 + \beta L_2$ (10), where $\alpha, \beta \in [0, 1]$ are the loss weight hyperparameters. Figure 1 (red-dotted box) presents the complete MTL framework. MTL allows us to take advantage of inductive transfer between these tasks and learn a rich set of representations at the bottom layers that can be exploited by the top layers of the network for predicting a document-level coherence score.

Table 1: Statistics for the WSJ data. #Docs represents the number of original articles and #Synthetic Docs the number of original articles + their permuted versions.
        #Docs   #Synthetic Docs   Avg #Sents
Train   1,376   25,767            21.0
Test    1,090   20,766            21.9

Table 2: Statistics for the GCDC.
                 #Docs   Avg #Sents
Yahoo    Train   1000    7.5
         Test    200     7.5
Clinton  Train   1000    6.6
         Test    200     6.6
Enron    Train   1000    7.7
         Test    200     7.8

Current state-of-the-art approaches utilizing the entity-based framework (Joty et al., 2018) focus solely on the subject and object types.
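A minimal sketch of how the joint objective in Equations 8-10 can be wired on top of the word-level Bi-LSTM states is shown below (PyTorch-style). The names GRTagger and multitask_loss are illustrative, the default weights follow the WSJ row of Table 3, and the GR labels are assumed to come from a dependency parser at training time only.

```python
# Hedged sketch of the multi-task objective (Eqs. 8-10).
import torch.nn as nn
import torch.nn.functional as F

class GRTagger(nn.Module):
    def __init__(self, hidden_dim, num_gr_types):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_gr_types)

    def forward(self, word_states):            # [batch, n_words, hidden_dim]
        return self.proj(word_states)          # unnormalized GR-type scores (Eq. 8)

def multitask_loss(doc_pred, doc_gold, gr_logits, gr_labels, alpha=0.7, beta=0.3):
    # main task: binary coherence loss (Eq. 6); the multiclass case would use MSE (Eq. 7)
    l1 = F.binary_cross_entropy(doc_pred, doc_gold)
    # secondary task: GR-type cross-entropy over all words (Eq. 9);
    # padding positions can be masked with ignore_index
    l2 = F.cross_entropy(gr_logits.view(-1, gr_logits.size(-1)),
                         gr_labels.view(-1), ignore_index=-1)
    return alpha * l1 + beta * l2              # weighted sum (Eq. 10)
```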
To further assess the impact of our extended set of GR types, we re-train the same MTL model but now only utilize subject (S) and object (O) GR types as our secondary training signal. Following the current entity-based approaches, all other types are mapped to X, to represent ‘other’ roles; specifically, R = {S, O, X}. We refer to this baseline model as MTLsox. 4 Experiments 4.1 Data and Evaluation Metrics Synthetic Data. The Wall Street Journal (WSJ) portion of the Penn Treebank (Elsner and Charniak, 2008; Lin et al., 2011; Tien Nguyen and Joty, 2017) is one of the most popular datasets for (binary) coherence assessment, given its size and the nature of the texts it contains; i.e. long articles not constrained in style (Elsner and Charniak, 2008; Tien Nguyen and Joty, 2017). Following previous work (Tien Nguyen and Joty, 2017), we also use the WSJ and specifically sections 00 −13 for training and 14 −24 for testing (documents consisting of one sentence are removed). We create 20 permutations per document, making sure to exclude duplicates or versions that happen to have the same ordering of sentences as the original article. Table 1 presents the data statistics. To evaluate model performance on this dataset, 633 we again follow previous work (Barzilay and Lapata, 2008; Tien Nguyen and Joty, 2017) and calculate pairwise ranking accuracy (PRA) between an original text and its 20 permuted counterparts. Specifically, PRA calculates the fraction of correct pairwise rankings in the test data (i.e., a coherent/original text should be ranked higher than its permuted counterpart). Following Farag et al. (2018), we also report the total pairwise ranking accuracy (TPRA) that extends PRA to comparing each original text to all permuted texts in the test set rather than only its own set of permuted counterparts. Realistic Data. The Grammarly Corpus of Discourse Coherence (GCDC) is a newly-released dataset containing emails and reviews written with varying degrees of proficiency and care (Lai and Tetreault, 2018).2 In addition to the WSJ, we employ this dataset in order to assess the effectiveness of our coherence model for tasks involving the prediction of varying degrees of coherence in people’s everyday writing. Specifically, the dataset contains texts from four domains: Yahoo online forum posts, emails from Hillary Clinton’s office, emails from Enron and Yelp business reviews. As some of the reviews from the latter were subsequently removed by Yelp, we evaluate our model on each of the first three domains (Table 2). Annotators were instructed to rate each document with a score ∈{1, 2, 3}, representing low, medium and high levels of coherence respectively. For our experiments, we use the consensus rating of the expert scores as calculated by Lai and Tetreault (2018), and train the models to maximize the probability of the gold class within a multiclass classification framework (see Section 3). The gold label distribution is as follows: Yahoo 44.8% low, 17.9% medium, 37.25% high; Clinton 27.8% low, 20.3% medium, 51.8% high; Enron 30% low, 20.3% medium, 49.6% high. To evaluate model performance, we use three-way classification accuracy. 4.2 Models and Baselines CNN Egrid (Egrid CNNext). 
We replicate the model proposed by Tien Nguyen and Joty (2017) using their source code.3 The authors generate entity-grid representations of texts (i.e., matrices 2https://github.com/aylai/GCDC-corpus 3https://github.com/datienguyen/cnn_ coherence of entities as columns and sentences as rows, where entities are represented by their syntactic role: subject, object, or other) using the Brown coherence toolkit.4 They then employ a CNN over the entity transitions across sentences in order to capture high-level features and long-range transitions. Training is performed in a pairwise fashion where the model learns to rank a coherent document higher than its incoherent counterparts. To further improve performance, they extend the model by including three entity-specific features, attached to entities’ distributed representations: named entity type, salience (represented as the occurrence frequency of entities) and a binary feature indicating whether the entity has a proper mention. Lexicalized CNN Egrid (Egrid CNNlex). The aforementioned Egrid CNN model is agnostic to entities’ lexical properties, which are useful features for the task. To remedy this, Joty et al. (2018) further extend it with lexical information about the entities: they represent each entity with its lexical presentation and attach it to its syntactic role (S, O, X). For instance, if “Obama” appears as a subject and an object, there will be two different representations for it in the input embedding matrix: Obama-S and Obama-O. Joty et al. (2018) achieve state-of-the-art results on the WSJ, outperforming Egrid CNNext without including the three entity-specific features in their model. We also replicate their model using the authors’ source code.5 Local Coherence Model (LC). This model, initially proposed by Li and Hovy (2014), applies a window approach to assess a text’s local coherence. Sentences are encoded with a recurrent or recursive layer and a filter of weights is applied over each window of sentence vectors to extract “clique” scores that are aggregated to calculate the overall document coherence score. We use an improved variant that captures sentence representations via an LSTM and predicts an overall coherence score by averaging the local clique scores (Li and Jurafsky, 2017; Farag et al., 2018). Lai and Tetreault (2018) recently showed that the LC model achieves state-of-the-art results on the Clinton and Enron datasets. 4https://bitbucket.org/melsner/ browncoherence 5https://ntunlpsg.github.io/project/ coherence/n-coh-acl18/ 634 Paragraph sequence (PARSEQ). Lai and Tetreault (2018) implemented a hierarchical neural network consisting of three LSTMs to generate sentence, paragraph and document representations. The network’s architecture is similar to our STL model; the key difference is the attention mechanism we use for aggregation. The model was tested on the GCDC and was found to outperform other feature-engineered methods and give state-of-the-art results on the Yahoo dataset. Neural Single-Task Learning (STL). We implement the STL model as described in 3.1. For the WSJ data, the network utilizes two Bi-LSTMs to compose sentence and document representations. For the GCDC, we add a third Bi-LSTM, where sentence representations are aggregated via attention to form paragraph vectors. Given these paragraph vectors, we then apply a Bi-LSTM followed by attention to compose the document vectors that are to be scored for coherence. Neural Multi-Task Learning (MTL). We implement the MTL model as described in 3.2. 
The same architecture variants as the STL ones are applied on the different datasets. Neural S-O-X Multi-Task Learning (MTLSOX). As discussed in 3.2, we create another version of the MTL model where, for each word, we only predict subject (S), object (O) and ‘other’ (X) roles. GR types Concatenation Model (Concatgrs). Instead of learning to predict the GR types within a MTL framework, we incorporate them as input features to the model by concatenating them to the word representations in the STL framework. In this setup, we randomly initialize the types embedding matrix Egr ∈Rq×g, where g is the embedding size and q is the number of GR types in the training data. Each type is then mapped to a row in Egr and concatenated to its corresponding word at the model’s input layer. Here, the GRs are needed as input at both training and test time, unlike the MTL framework that only requires them during training. The concatgrs model allows us to further assess whether the MTL framework has an advantage over feeding the GR types as input features. 4.3 Experimental setup We extract the GR types of words using the Stanford Dependency Parser (v. 3.8) (Chen and Manword embed dim LSTM hidden dim α β hw hs hp WSJ 50 100 100 0.7 0.3 Yahoo 300 100 100 100 1 0.1 Clinton 300 100 200 100 1 0.1 Enron 300 100 100 100 1 0.2 Table 3: Model hypermarameters: w, s and p refer to word, sentence and paragraph hidden layers respectively; α is the main and β the secondary loss weight. ning, 2014) and obtain a total of 39 different types of Universal Dependencies and their subtypes (see Appendix A for the full list). For the MTLSOX model, we consider direct objects, indirect objects and subjects of passive verbs as objects (O). Our models are initialized with pre-trained GloVe embeddings (Pennington et al., 2014). We use minibatches of size 32, optimize the models using RMSProp (Tieleman and Hinton, 2012), and set the learning rate to 0.001. Dropout (Srivastava et al., 2014) is used for regularization with probability 0.5 and applied to the word embedding layer and the output of the Bi-LSTM sentence layer. Table 3 shows the different hyperparameters used for training.6 Training is done for 30 epochs and performance is monitored over the development set; the model with the highest performance (highest PRA on the synthetic data and highest classification accuracy on GCDC) on the development set is selected and applied at testing time. To reduce model variance, we run the WSJ experiments 5 times with different random initializations and the GCDC ones 10 times (following Lai and Tetreault (2018)), and average the predicted scores of the ensembles for the final evaluation. For the WSJ data, we use the same train/dev splits as Tien Nguyen and Joty (2017), and for GCDC, we follow Lai and Tetreault (2018) and split the training data with a 9:1 ratio for tuning. 5 Results and Discussion Binary Classification. Table 4 shows the results of the binary discrimination task on the WSJ. The results demonstrate the effectiveness of our MTL approach using a supervision signal at the bottom layers based on the words’ GR types, which significantly outperforms all other approaches and achieves state-of-the-art results on 6We note that hyperparameters are tuned per domain. 635 Model PRA TPRA Egrid CNNext 0.876 0.656 Egrid CNNlex 0.846 0.566 LC 0.741 0.728 STL 0.877 0.893 MTL 0.932* 0.941* MTLSOX 0.899 0.913 Concatgrs 0.896 0.908 Table 4: Results of the binary discrimination task on the WSJ. 
* indicates significance (p < 0.01) over all the other models based on the randomization test. Egrid models are significantly worse than MTLSOX and Concatgrs on the PRA metric and significantly worse than all models on TPRA.8 the WSJ (0.932 PRA and 0.941 TPRA).7 The performance of the Egrid neural models shows that despite their ability to rank a document higher than its incoherent counterparts (0.876 and 0.846 PRA), they do not generalize when documents are compared against counterparts from the whole test set (0.656 and 0.566 TPRA). This could be partly attributed to the pairwise training strategy adopted by these models and their inability to compare entity-transition patterns across different topics. The table also shows that models that utilize compositions over textual units to form document representations (the last four models) are significantly more effective than those explicitly utilizing only the local transitions between sentences (LC model). Furthermore, we observe that incorporating GR types (MTL, MTLSOX and Concatgrs) gives significantly better results compared to the STL model that is GR-agnostic. The superiority of the MTL model over Concatgrs and MTLSOX demonstrates that learning the GR types, within an MTL framework, allows the model to learn richer contextual representations (but also to be more efficient at testing time compared to e.g., Concatgrs since it does not require external linguistic tools). To further analyze performance, we calculate the Pearson correlation between: a) the similarity between a permuted document and its original counterpart in terms of the minimum number of adjacent transpositions needed to transform the former back to its original version (Lapata, 7Significance is calculated based on the randomization test (Yeh, 2000). 8Joty et al. (2018) reported 0.885 PRA for their Egrid CNNlex, which we were unable to replicate using their code; however, this is still lower compared to our results. Model Yahoo Clinton Enron LC 0.535 0.610 0.544 PARSEQ 0.549 0.602 0.532 STL 0.550 0.590 0.505 MTL 0.560 0.620* 0.560* MTLSOX 0.505 0.585 0.510 Concatgrs 0.455 0.570 0.460 Table 5: Model accuracy on the three-way classification task on GCDC. * indicates significance over STL with p < 0.01 using the randomization test. Results for PARSEQ and LC are those reported in Lai and Tetreault (2018) on the same data. Figure 2: F1 scores for subject and object predictions with the MTL and MTLSOX models over the first 20 epochs of training. Y-axis: F1 scores; x-axis: epochs. The graphs are based on the WSJ dev set. 2006), and b) the predicted coherence score for the permuted document. This allows us to investigate whether a higher similarity is linked to a higher coherence score. We observe that MTL, MTLSOX, Concatgrs and STL have the highest correlations (0.260, 0.232, 0.227, 0.225 respectively), followed by LC (0.076), Egrid CNNext (−0.0126) and Egrid CNNlex (−0.069).9 In order to further analyze the strengths of MTL, we plot in Figure 2 the F1 scores over the training epochs for predicting the subject and object types using MTL or MTLSOX. We can see that learning to predict a larger set of GR types enhances the model’s predictive power for the subject and object types, corroborating the value of entity-based properties for coherence. Three-way Classification. On GCDC (Table 5) we can see that MTL achieves state-of-the-art performance across all three datasets. 
Although different evaluation metrics are employed, we note that the numbers obtained on this dataset are quite low compared to those on the WSJ. Assessing varying degrees of coherence is a more challenging task: differences in coherence between different documents are less pronounced than when taking a document and randomly shuffling its sentences. When comparing MTL to STL, the former is consistently better across all datasets, with significant improvements for two of them.10 Interestingly, we observe that MTLSOX and Concatgrs do not generalize to the more realistic domain. As shown in Table 3, our best MTL model uses smaller β and higher α values on the GCDC compared to the WSJ. This could be attributed to the performance of the parser and/or the nature of the GCDC and the properties of (in)coherence it exhibits, compared to the WSJ data. MTL allows the model more flexibility and control with respect to the features it learns in order to enhance performance on the main task, in contrast to Concatgrs, where the GRs are given directly as input to the model (yielding the worst performance across all the GCDC datasets). The results on GCDC demonstrate that our main MTL approach generalizes to tasks involving the prediction of varying degrees of coherence in everyday writing. In general, however, we observe that, out of the three gold coherence labels (low, medium, high), both MTL and STL have difficulty in correctly classifying documents of medium coherence, which can be attributed to the smaller number of training examples for that class (Section 4.1).

9 We note that the low correlation is due to the nature of the task: binary evaluation rather than absolute scoring of coherence.

10 We also note that GR prediction is only required during training; therefore, at inference time, MTL uses the same number of parameters as STL.

Figure 3: Visualization of the model's gradients with respect to the input word embeddings for MTL and STL on the WSJ dev set. Words that contribute the most to coherence scoring (i.e., those with high gradient norms) are colored: the contribution of words decreases from dark red to lighter tones of orange.

Visualization. In an attempt to better understand what the models have learnt, we visualize the words that contribute the most to coherence prediction. We calculate the model's gradients with respect to the input word embeddings (similarly to Li et al. (2016)) to determine which words maximize the model's prediction (more influential words should have higher gradient norms). Figure 3 presents example visualizations obtained with STL and MTL. We observe that for MTL, important words are those that are considered the center of attention: in the first example (top two sentences), where the document is about seats in the stock exchange, "seat" and "Seats" are considered more important than the subject entities. On the other hand, the STL model considers the subject of the first sentence ("The American Stock Exchange") more important than the object "seat". In the second example (last two sentences), where the document is about a canceled show by the NBC, for the MTL model the name of the show (or part of it) in the first sentence ("Nutt") is considered important, as well as "comedy", which also refers to the show, in addition to "show" in the second sentence. On the other hand, STL fails to identify the name of the show as important.
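The gradient-norm saliency used for Figure 3 is straightforward to sketch. The snippet below is a minimal illustration only, not the authors' code: an arbitrary, untrained LSTM scorer stands in for the trained STL/MTL models, the toy vocabulary and layer sizes are invented, and the resulting ranking is meaningless except to show the mechanics of ranking tokens by the L2 norm of the gradient of the document score with respect to their input embeddings.

```python
# Minimal sketch of gradient-based saliency (assumed toy model, not the paper's).
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab = {"<pad>": 0, "the": 1, "american": 2, "stock": 3,
         "exchange": 4, "sold": 5, "a": 6, "seat": 7}
embed = nn.Embedding(len(vocab), 50)     # illustrative sizes
lstm = nn.LSTM(50, 64, batch_first=True)
scorer = nn.Linear(64, 1)

tokens = ["the", "american", "stock", "exchange", "sold", "a", "seat"]
ids = torch.tensor([[vocab[w] for w in tokens]])

emb = embed(ids)                                 # (1, T, 50)
emb.retain_grad()                                # keep gradients on this non-leaf tensor
_, (h, _) = lstm(emb)
score = torch.sigmoid(scorer(h[-1])).squeeze()   # document-level coherence score
score.backward()

# One gradient norm per token: higher norm = more influential word.
saliency = emb.grad.norm(dim=-1).squeeze(0)
for word, s in sorted(zip(tokens, saliency.tolist()), key=lambda p: -p[1]):
    print(f"{word:10s} {s:.4f}")
```

With a trained model in place of the toy scorer, the same gradient norms are what get mapped to the color intensities shown in Figure 3.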
In general, STL seems to be more distracted, focusing on words that do not necessarily contribute to coherence (e.g., determiners and prepositions), whereas MTL seems to be considering more informative parts of the text. Qualitative Analysis. Following previous work (Miltsakaki and Kukich, 2004; Li and Jurafsky, 2017), we perform a small-scale qualitative analysis: we apply our best model to a number of discourses that exhibit different types of coherence and investigate the predicted coherence scores. We observe that MTL can capture some aspects of lexical and centering/referential coherence: Mary ate some apples. She likes apples. 0.790 Mary ate some apples. She likes pears. 0.720 Mary ate some apples. She likes Paris. 0.742 She ate some apples. Mary likes apples. 0.747 637 John went to his favorite music store to buy a piano. He had frequented the store for many years. 0.753 John went to his favorite music store to buy a piano. It was a store John had frequented for many years. 0.743 On the other hand, it is not as good at recognizing temporal order and causal relationships; for example: Bret enjoys video games; therefore, he sometimes is late to appointments. 0.491 Bret sometimes is late to appointments; therefore, he enjoys video games. 0.499 6 Conclusion We have presented a hierarchical multi-task learning framework for discourse coherence that takes advantage of inductive transfer between two tasks: predicting the GR type of words at the bottom layers of the network and predicting a document-level coherence score at the top layers. We assessed the extent to which our framework generalizes to different domains and prediction tasks, and demonstrated its effectiveness against a number of baselines not only on standard binary evaluation coherence tasks, but also on tasks involving the prediction of varying degrees of coherence, achieving a new state of the art. As part of future work, we would like to investigate the use of contextualized embeddings (e.g., BERT, Devlin et al. (2018)) for coherence assessment – as such representations have been shown to carry syntactic information of words (Tenney et al., 2019) – and whether they allow multi-task learning frameworks to learn complementary aspects of language. Acknowledgments We thank Ted Briscoe and Marek Rei for their valuable suggestions and feedback. We also thank Paula Buttery, Andrew Caines, James Thorne, Christopher Bryant, Simone Teufel and the anonymous ACL reviewers for their insightful comments. We thank the NVIDIA Corporation for the donation of the Titan X Pascal GPU used in this research. We gratefully acknowledge our funding bodies: Youmna Farag was supported by the EPSRC and Cambridge Trust; Helen Yannakoudakis was supported by Cambridge Assessment, University of Cambridge. References Regina Barzilay, Noemie Elhadad, and Kathleen R. McKeown. 2002. Inferring strategies for sentence ordering in multidocument news summarization. J. Artif. Int. Res., 17(1):35–55. Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 141–148. Association for Computational Linguistics. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 3(1):1–34. Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 681–684. Association for Computational Linguistics. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750. Association for Computational Linguistics. Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei Zhang. 2018. Deep attentive sentence ordering network. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4340–4349. Association for Computational Linguistics. Takshak Desai, Parag Dakle, and Dan Moldovan. 2018. Generating questions for reading comprehension using coherence relations. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 1–10. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Micha Elsner and Eugene Charniak. 2008. Coreference-inspired coherence modeling. In Proceedings of ACL-08: HLT, Short Papers, pages 41–44. Association for Computational Linguistics. Micha Elsner and Eugene Charniak. 2011. Extending the entity grid with entity-specific features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 125–129. Association for Computational Linguistics. Youmna Farag, Helen Yannakoudakis, and Ted Briscoe. 2018. Neural automated essay scoring and coherence modeling for adversarially crafted input. In Proceedings of the 2018 Conference of the North 638 American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 263–271. Association for Computational Linguistics. Vanessa Wei Feng, Ziheng Lin, and Graeme Hirst. 2014. The impact of deep hierarchical discourse structures in the evaluation of text coherence. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 940–949. Dublin City University and Association for Computational Linguistics. Barbara J. Grosz, Scott Weinstein, and Aravind K. Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2). Camille Guinaudeau and Michael Strube. 2013. Graph-based local coherence modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 93–103. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Shafiq Joty, Muhammad Tasnim Mohiuddin, and Dat Tien Nguyen. 2018. Coherence modeling of asynchronous conversations: A neural entity grid approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 558–568. Association for Computational Linguistics. Alice Lai and Joel Tetreault. 2018. Discourse coherence in the wild: A dataset, evaluation and methods. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 214–223. Association for Computational Linguistics. Mirella Lapata. 2006. Automatic evaluation of information ordering: Kendall’s tau. Comput. Linguist., 32(4):471–484. 
Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681–691. Association for Computational Linguistics. Jiwei Li and Eduard Hovy. 2014. A model of coherence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2039–2048. Association for Computational Linguistics. Jiwei Li and Dan Jurafsky. 2017. Neural net models of open-domain discourse coherence. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 198–209. Association for Computational Linguistics. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 997–1006. Association for Computational Linguistics. Lajanugen Logeswaran, Honglak Lee, and Dragomir R. Radev. 2018. Sentence ordering and coherence modeling using recurrent neural networks. In AAAI, pages 5285–5292. AAAI Press. Annie Louis and Ani Nenkova. 2012. A coherence model based on syntactic patterns. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1157–1168. Association for Computational Linguistics. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Daraksha Parveen, Mohsen Mesgar, and Michael Strube. 2016. Generating coherent summaries of scientific articles using coherence patterns. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 772– 783. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Marek Rei and Helen Yannakoudakis. 2017. Auxiliary objectives for neural error detection models. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications. Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2018. A hierarchical multi-task approach for learning embeddings from semantic tasks. arXiv preprint arXiv:1811.06031. Swapna Somasundaran, Jill Burstein, and Martin Chodorow. 2014. Lexical chaining for measuring discourse coherence quality in test-taker essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 950–961. Dublin City University and Association for Computational Linguistics. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. 639 Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. arXiv preprint arXiv:1905.06316. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5 - rmsprop. Technical report. Dat Tien Nguyen and Shafiq Joty. 2017. 
A neural local coherence model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1320– 1330. Association for Computational Linguistics. Suzan Verberne, LWJ Boves, NHJ Oostdijk, and PAJM Coppen. 2007. Evaluating discourse-based answer extraction for why-question answering. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval., pages 735–736. Helen Yannakoudakis and Ted Briscoe. 2012. Modeling coherence in esol learner texts. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 33–43. Association for Computational Linguistics. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th conference on Computational linguistics-Volume 2, pages 947–953. Association for Computational Linguistics. A Grammatical roles Type Description acl [relcl] clausal modifier of noun (adjectival clause) advcl adverbial clause modifier advmod adverbial modifier amod adjectival modifier appos appositional modifier aux auxiliary auxpass passive auxiliary case case marking cc [preconj] coordinating conjunction ccomp clausal complement compound [prt] compound conj conjunct cop copula csubj clausal subject csubjpass clausal passive subject dep unspecified dependency det [predet] determiner discourse discourse element dobj direct object expl expletive iobj indirect object mark marker mwe multi-word expression neg negation modifier nmod [tmod, poss, npmod] nominal modifier nsubj nominal subject nsubjpass passive nominal subject nummod numeric modifier parataxis parataxis punct punctuation root root xcomp open clausal complement Table 6: The GR types (UDs) extracted from the WSJ training data. The text inside [] (left column) denotes the extracted subtypes (language specific types).a The total number of main types and their subtypes is 39.b aFor more details about subtypes please see http://universaldependencies.org/docsv1/ ext-dep-index.html. bFor the full list of UDs please see http: //universaldependencies.org/docsv1/u/ dep/index.html.
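To make the GR-type inventory in Table 6 concrete, the following is a small illustrative sketch of how one dependency label per word (and the collapsed S/O/X targets of the MTLSOX variant) can be obtained. The paper's pipeline used the Stanford Dependency Parser v3.8; spaCy is used here only as a convenient stand-in, and its label set and the S/O mapping below are assumptions that differ slightly from the exact UD subtypes listed above.

```python
# Illustrative sketch only: per-word GR labels and the collapsed S/O/X scheme.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

SUBJECT = {"nsubj", "csubj"}
# MTLSOX treats direct/indirect objects and passive subjects as objects (O).
OBJECT = {"dobj", "obj", "iobj", "nsubjpass", "csubjpass"}

def gr_labels(sentence):
    doc = nlp(sentence)
    full = [(tok.text, tok.dep_) for tok in doc]                  # fine-grained GR types
    sox = [(tok.text,
            "S" if tok.dep_ in SUBJECT else
            "O" if tok.dep_ in OBJECT else "X") for tok in doc]   # collapsed auxiliary targets
    return full, sox

full, sox = gr_labels("The exchange sold a seat to an investor.")
print(full)
print(sox)
```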
2019
60
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5985–5996 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5985 Towards Comprehensive Description Generation from Factual Attribute-value Tables Tianyu Liu1, Fuli Luo1, Pengcheng Yang1, Wei Wu1, Baobao Chang1,2 and Zhifang Sui1,2 1MOE Key Lab of Computational Linguistics, School of EECS, Peking University 2Peng Cheng Laboratory, Shenzhen, China {tianyu0421, luofuli, yang pc, wu.wei, chbb, szf}@pku.edu.cn Abstract The comprehensive descriptions for factual attribute-value tables, which should be accurate, informative and loyal, can be very helpful for end users to understand the structured data in this form. However previous neural generators might suffer from key attributes missing, less informative and groundless information problems, which impede the generation of high-quality comprehensive descriptions for tables. To relieve these problems, we first propose force attention (FA) method to encourage the generator to pay more attention to the uncovered attributes to avoid potential key attributes missing. Furthermore, we propose reinforcement learning for information richness to generate more informative as well as more loyal descriptions for tables. In our experiments, we utilize the widely used WIKIBIO dataset as a benchmark. Additionally we create WB-filter based on WIKIBIO to test our model in the simulated user-oriented scenarios, in which the generated descriptions should accord with particular user interests. Experimental results show that our model outperforms the state-of-the-art baselines on both automatic and human evaluation. 1 Introduction Generating descriptions for the factual attributevalue tables has attracted widely interests among NLP researchers especially in a neural end-to-end fashion (e.g. Lebret et al. (2016); Liu et al. (2018); Sha et al. (2018); Bao et al. (2018); Puduppully et al. (2018); Li and Wan (2018); Nema et al. (2018)) as shown in Fig 1a. For broader potential applications in this field, we also simulate useroriented generation, whose goal is to provide comprehensive generation for the selected attributes according to particular user interests like Fig 1b. However, we find that previous models might miss key information and generate less informaAttribute Value Birthplace Utah, America Position forward (soccer player) Comprehensive: A Utah soccer player who plays as forward Missing Key Attri.: A soccer player who plays as forward Groundless info: A Utah forward in the national team Less Informative: An American forward Table 1: An example for comprehensive generation. Suppose we only have two attribute-value tuples, the underlined content is groundless information not mentioned in source tables. tive and groundless content in its generated descriptions towards source tables. For example, in Table 1, the ‘missing key attribute’ case doesn’t mention where the player comes from (birthplace) while the ‘less informative’ one chooses American rather than Utah. The case with groundless information contains ‘in the national team’ which is not mentioned in the source attributes. Although the ‘key points missing’ problem exists in many text-to-text and data-to-text datasets, for largescale structured tables with vast heterogeneous attributes such as Wikipedia infoboxes, ‘Key attribute missing’ and ‘less informative’ problems might be even more challenging. 
As the key attributes, like the ‘position’ of a basketball player or the ‘political party’ of a senator, are very likely to be unique features to particular tables, which usually appear much less frequently and are seldomly mentioned than the common attributes like ‘Name’ and ‘Birthdate’. The ‘groundless information’, which is also known as the ‘hallucination’ problem, remains a long-standing problem in NLG. In this paper, we show that our model can generate more accurate and informative descriptions with less groundless content for tables. Firstly we design a force-attention (FA) method to encourage the decoder to pay more attention to the un5986 Attribute Value Name Dillon Sheppard Birthdate 27 Feb 1979 Birthplace Durban, South Africa Current Club Bidvest Wits Number 29 Height 1.80 m (5 ft 11 in) Position Left-winger (a) End-to-end (neural) Table-to-text Generation Table Encoder Description Decoder … …           (          )  (      (        ( )      (b) User-oriented Description Generation for the Tables User interests Attributes selected by users : Name ; Current Club ; Position Description Generation Name played as a Position in Current Club Wikipedia Infobox Figure 1: The end-to-end (a) and user-oriented table-totext generation (b) for an infobox (left) in WIKIBIO. covered attributes to avoid potential key attributes missing by both stepwise and global constraints. In addition, we define the ‘information richness’ measurement of the generated descriptions to the source tables. Based on that, we use the reinforcement learning to encourage the generator to cover infrequent and rarely mentioned attributes as well as generate more informative descriptions with less groundless content. We test our models on two settings: 1) For neural table-to-text generation like Fig 1a, we test our model on WIKIBIO (Lebret et al., 2016), a crawled dataset from Wikipedia with paired infoboxes and associated descriptions. It is a widely used benchmark dataset for description generation for factual attribute-value tables and also a quite meaningful testbed in the real-world scenarios with vast and heterogenous attributes. 2) To test our model in the user-oriented setting, we filter WIKIBIO to form WB-filter. In this setting, we suppose all attributes in the source tables of WB-filter are selected by users that should be covered in the corresponding descriptions. We try to make sure the gold descriptions in WB-filter cover all the attributes of the source tables in this condition. Details in Sec 4. Both automatic and human evaluation show that our model relieves the 3 problems (Table 1) and helps the generator to produce accurate, informative and loyal descriptions. We also achieve the state-of-the-art performance on the end-to-end table description and the user-oriented generation tasks. The remainder of this paper is organized as follows. We first introduce how we formulate tableto-text generation into encoder-decoder framework in Sec 2. After that, we discuss forceattention method (Sec 3.1) and richness-oriented reinforcement learning (Sec 3.2), which are motivated by the three goals we set up for comprehensive table descriptions (Table 1). Then we demonstrate how and why we create WB-filter (Sec 4.1) as well as evaluations (Sec 4.2), experimental configurations (Sec 4.3 and 4.4), case studies and visualizations (Sec 4.5) and error analysis (Sec 4.6). 
2 Background: Table-to-Description 2.1 Table Encoder Given a structured table like Fig 1 (left), we model the attribute-value tuples in the table as a sequence of words with related attribute names. After serializing all the words in the ‘Value’ columns, for the i-th word in the table xak i whose attribute is ak (the k-th attribute), we use the attribute name ak and the word’s position in that tuple to locate the word (Lebret et al., 2016). Specifically we utilize a triple zak i = {ak, pak i+, pak i−} to represent the structure information for word xak i , in which pak i+ and pak i−are the positions of xak i counted from the beginning and end of ak, respectively. For example, for the ‘Birthplace’ attribute in Fig 1 (left), we can use triples {birthplace,1,4} and {birthplace,4,1} to represent the structure information for the words ‘Durban’ 1 and ‘Africa’. We concatenate the word xt and its structure representation zt at the t-th time step and feed them into LSTM (Hochreiter and Schmidhuber, 1997) unit to encode the table. ht = LSTM([xt; zt], ht−1) is the t-th hidden state among the encoder states H = {ht}T t=1. In the following sections, we might omit the superscript of xak i if it is not necessary. 2.2 Description Decoder For the generated description y∗, the generated token y∗ t at the t-th time step is predicted based on all the previously generated tokens y∗ <t before y∗ t and the hidden states H of the table encoder: P(y∗ t |H, y∗ <t) = softmax(Ws⊙tanh(Wt[st, ct])) (1) where ⊙ is element-wise product, st = LSTM(y∗ t−1, st−1) is the t-th hidden state of the decoder. ct = PT i=1 αi thi is the context vector, which is the weighted sum of encoder hidden states according to the attention matrix α. αi t ∝eg(st,hi) is the attention element of the tth decoder state st and the i-th encoder state hi. 1More concretely, ‘Durban’ is the first word counted from the begining and also the fourth word counted from the end of birthplace attribute in Fig 1 (left). 5987 where g(st, hi) is a relevance score between st and hi. We use Bahdanau-style attention mechanism (Bahdanau et al., 2014) to calculate g(st, hi). g(st, hi) = tanh(Wphi + Wqst + b) (2) Ws, Wt, Wp, Wq are learnable parameters. 3 Comprehensive Table Description The problems listed in Table 1 not only prevent the generators to produce comprehensive descriptions for selected entries in the tables (Fig 1b), but also prevent the generator to produce informative, accurate and loyal table descriptions (Fig 1a). So we propose two methods: force-attention (FA) and richness-oriented reinforcement learning to produce accurate, informative and loyal descriptions. 3.1 Force-Attention Module For ‘missing key attributes’ problem (Table 1), we find that the generator usually focuses on particular attributes while the other attributes have relatively low attention values in the entire decoding procedure. So force attention method is proposed to guide the decoder to pay more attention to the previous uncovered attributes with low attention values to avoid potential key attribute missing. Note that FA method focuses on attributelevel coverage rather than word-level coverage (Tu et al., 2016) as our goal is to reduce the ‘missing key attributes’ phenomenons instead of building rigid word-by-word alignment between tables and descriptions. Stepwise Forcing Attention: We define attributelevel attention βak t = avg(P xi∈ak αi t) at the t-th step for attribute ak as the average value of the word-level attention values for the words in that attribute. 
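As a concrete illustration of this definition, the following minimal numpy sketch (with made-up attention weights and attribute assignments, purely for illustration) averages the word-level attention weights of a single decoding step over the words belonging to each attribute to obtain the attribute-level attention.

```python
# Illustrative sketch of attribute-level attention beta_t (not the paper's code).
import numpy as np

# Word-level attention at one decoding step t over 7 linearized table words.
alpha_t = np.array([0.30, 0.25, 0.05, 0.05, 0.05, 0.20, 0.10])
# Attribute id of each table word (e.g. 0 = Name, 1 = Birthdate, 2 = Birthplace).
attr_of_word = np.array([0, 0, 1, 1, 1, 2, 2])

# beta_t[k] = average of the word-level attention over the words of attribute k.
beta_t = np.array([alpha_t[attr_of_word == k].mean()
                   for k in np.unique(attr_of_word)])
print(beta_t)   # one averaged attention value per attribute
```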
The word-level coverage is defined as the sum of attention vector before the t-th step θi t = θi t−1 + αi t (Tu et al., 2016). In the similar way, we define the attribute-level coverage γak t = γak t−k + βak t as the overall attention for attribute ak before the t-th time step. The average word-level and attribute-level coverage are θi t = θi t/t and γak t = γak t /t, respectively. Then we propose stepwise attention forcing, which explicitly guides the decoder to pay more attention on the uncovered attributes by calculating a new context vector ect = πct + (1 −π)vt to make compensation for the ignored attributes in the previous time steps. π is a learnable vector.             Dillon Sheppard 27 February 1979 Durban South Africa Bidvest Wits leftwinger      Dillon Sheppard 27 February 1979 Durban South Africa Bidvest Wits leftwinger Decoderat 14th timestep:Dillon Sheppard born 27 february1979, DurbanSouth Africais a Average Word-level Coverage !"# %"# = '()(!"# +"# Compensation Values ',- %"# −%"# Name Birthdate Birthplace Currentclub Position Average Attribute-level Coverage +"# High Compensation Low Compensation                        Figure 2: Stepwise forcing attention at the 14-th step for the filtered version of the original infobox in Fig 1 in the WB-filter dataset (The next word is ‘leftwinger’). The uncovered attributes like ‘currentclub’ and ‘position’ (marked in orange and green) get high attention compensation (rightmost). Note that word ‘Sheppard’ does not get any compensation (rightmost) because it has got high attention in the previous steps. vt is a compensation vector for the low-coverage attributes: vt = T X i=1 (max(ζt) −ζi t)hi; ζi t = min(θi t, γak t ) (3) ζt is the modified average word-level coverage regarding the average attribute-level coverage as the upper bound to avoid excessive compensation. Fig 2 shows a running example. The motivation behind is that we want the decoder to pay enough attention to all the attributes in the whole decoding process, which prevents missing key attributes because of the low attention value on them. Thus we make compensation for the previous uncovered attributes (like ‘currentclub’ and ‘position’ in Fig 2 ) by vt at the t-th time step. Global Forcing Attention: Inspired by the softattention constraint of (Xu et al., 2015) which encourages the generator to pay equal attention to every part of the image while generating image captions, we propose global forcing attention to avoid insufficient or excessive attention on certain attributes by adding the following loss to the prime seq2seq loss. LFA = λ K X k=1 [γak −1 −1/K]2 (4) where K is the number of attributes in the table, λ is a hyper-parameter which is set to 0.3 based 5988 on evaluations on the validation data. γak −1 is the average attribute-level coverage for attribute ak at the last time step. 3.2 Reinforced Richness-oriented Learning We also propose a reinforcement learning framework which encourages the generator to cover rare and seldom mentioned words and attributes in the table. The experiments and case studies show its effectiveness to deal with the ‘groundless information’ and ‘less informative’ problems in Table 1. 3.2.1 Information Richness The information richness (Eq 5) is the multiplication of the attribute-level and word-level richness of the descriptions towards the source tables. Attribute-level Information Richness: Different tables which describe different objects are always featured by the unique attributes in the table. 
For example, a sportsman often has the attributes like ‘position’, ‘debutyear’. The information in the unique attributes is harder to capture than that in the common attributes like ‘name’, ‘birthdate’ as the latters are very frequent in the training set. We define the information richness for an attribute ai as f(ak) = [freq(ak)]−1 by calculating its frequency in the training set. Word-level Information Richness: The unique words in the tables are more likely to be informative, such as a specific location, name or book. To calculate the word-level information richness, we firstly lemmatize all the words in the tables and filter the words with a stop-words list which including prepositions, symbols and numbers, etc. Then we randomly sample 5 synonyms of the certain word from WordNet (Miller, 1995). Finally, we calculate the word-level richness w(xak i ) for the i-th word in attribute ak by averaging the tf-idf values of xak i and its synonyms in the training set. For a generated description y∗, we lemmatize all the words in y∗to get y∗. Then we calculate the information richness based on the related source table with T words and the gold description y, respectively. Rich(y∗) = PT i=1[f(ak) · w(xak i ) · 1{˜xak i ∈y∗}] PT i=1[f(ak) · w(xak i )] (5) in which ˜xak i represents any word among xak i and its synonyms in the table. The information richness measures the ratio of covered information in the table by the description. 3.2.2 Reinforcement Learning Reward Function: Different from previous models which only measures how well the generated sentences match the target sentences, we design a mixed reward Rmix which contains both the BLEU-4 scores and the information richness of the generated descriptions towards the source tables. Rmix = λRinfo + (1 −λ)RBLEU (6) λ is set to 0.4 and 0.6 for WIKIBIO and WB-filter based on evaluations on the validation data. Fig 6 shows how we choose λ. Training Algorithm: We use the REINFORCE algorithm (Williams, 1992) to learn an agent to maximize the reward function Rmix. The training loss of sequence generation is defined as the negative expected reward. LRL = −Eys∼pφ[r(ys) · log(Pφ(ys))] (7) where Pφ(ys) is the agent’s policy, i.e. the word distribution of description decoder (Eq 1), and r(·) is the reward function defined in Eq 6. In the implementation, ys is a sequence that can be sampled from Pφ by Monte-Carlo sampling ys = {ys 1, ys 2, · · · , ys |Y |}. The policy gradients for Eq 7 can be calculated as: ∇φLRL = λ∇φRinfo + (1 −λ)∇φRBLEU (8) We use self-critical sequence training method (Rennie et al., 2017; Paulus et al., 2017) to reduce the variance of gradients by subtracting a baseline reward for the mix reward in Eq 6. ∇φRBLEU ≈−[B(ys, y)−B(yg, y)]∇φlog(Pφ(ys)) (9) where B(a, b) is the BLEU score of sequence a compared with sequence b, yg is a generated sequence using greedy search. To calculate the information richness reward Rinfo for the lemmatized sampled sequence ys, we use the information richness (Eq 5) of the related lemmatized gold description y towards the source table as the baseline reward. ∇φRinfo ≈−[Rich(ys)−Rich(y)]∇φlog(Pφ(ys)) (10) For more technical details, we refer the interested readers to (Williams, 1992; Ranzato et al., 2015; Rennie et al., 2017). 5989 Dataset WIKIBIO WB-filter # instances 728321 88287 # Tokens per Bio 26.1 30.2 # Tokens per Table 53.1 20.8 # Attri. 
per Table 19.7 6.3 # Word overlap 9.5 12.1 Figure 3: The ‘coverage-frequency’ figure (left) (each point represents an attribute) shows that many attributes have very low coverage and low frequency in the WIKIBIO dataset. Due to our filtering, the attributes in WB-filter have 100% Hit-1 coverage (Sec 4.2) and more overlapping words with the original tables as shown in the data statistics (right). 4 Experiments 4.1 Datasets We use two datasets to test our model in the context of end-to-end table description generation and comprehensive generation for selected attributes in user-oriented scenario. For end-to-end description generation, we use WIKIBIO dataset (Lebret et al., 2016) as the benchmark dataset, which contains 728,321 articles from English Wikipedia (Sep 2015) and uses the first sentence of each article as the description. To test our model in the user-oriented scenario, we filtered the WIKIBIO dataset to form a new dataset WB-filter. To simulate the user interests, we first select the top 100 frequent 2 attributes in WIKIBIO. After that we manually filter irrelevant attributes (like ’caption’, ’website’ or ’signature’) and merge identical attributes (like ’article title’ and ’name’) to avoid repetition. Then we leave out all the remaining attributes in the tables and filter the instances in WIKIBIO whose descriptions can not cover the selected attributes to form WB-filter. To achieve this, we firstly lemmatize all the tokens in the infoboxes as well as those in the related gold biographies and filter them by a stop-words list, then we randomly retrieve 5 synonyms for every word in the infoboxes from WordNet. Finally we make sure the gold biographies cover at least one word (or its synonym) for every attribute-value tuple among the chosen attributes and filter the unqualified instances in 2In this setup, the reason of choosing high frequent attributes is to ensure enough training instances in WB-filter for data-driven methods. WIKIBIO. The ‘frequency-coverage’ figure in Fig 3 shows 1) The filtering ensures that the WB-filter dataset achieves 100% Hit-1 coverage. 2) The WIKIBIO dataset suffers from both ‘low frequency’ and ‘low coverage’ problems, which means some key attributes in the tables are seldom mentioned by the descriptions. The cause of ‘low coverage’ problem is the loosely alignment between structured data and related descriptions. The two datasets are divided in to training (80%), testing (10%) and validation (10%) sets. 4.2 Evaluation Metrics Automatic Metrics: Following the previous work (Lebret et al., 2016; Sha et al., 2018; Liu et al., 2018), we use BLEU-4 (Papineni et al., 2002) and ROUGE-4 (F measure) (Lin, 2004) for automatic evaluation. Furthermore, to evaluate how the generated biographies cover the key points in the infoboxes, we also use information richness (Eq 5) as one of our automatic evaluation. ‘Hit at least 1 word’ for an attribute means that a biography has at least one overlapping word with the words (or their synonyms) in that attribute, which are lemmatized and filtered by a stop-words list like the way we get WB-filter in Sec 4.1. ‘HIT-1 coverage’ for an attribute is the ratio of the instances involving that attribute whose biographies ‘Hit at least 1 word’ in that attribute. Human Evaluation: Since automatic evaluations like BLEU may not be reliable for NLG systems (Callison-Burch et al., 2006; Reiter and Belz, 2009; Reiter, 2018). 
We use human evaluation which involves the generation fluency, coverage (how much given information in the infobox is mentioned in the related biography) and correctness (how much false or irrelevant information is mentioned in the biography). We firstly sampled 300 generated biographies from the generators for human evaluation. After that, we hired 3 thirdparty crowd-workers who are equipped with sufficient background knowledge to rank the given biographies. We present the generated descriptions to the annotators in a randomized order and ask them to be objective and not to guess which system a particular generated case is from. Two biographies may have the same ranking if it is hard to decide which one is better. The Pearson correlations of inter-annotator agreement are 0.76 and 0.71 (Table 3) on WIKIBIO and WB-filter, re5990 spectively. 4.3 Experimental Details Following previous work (Liu et al., 2018). For WIKIBIO We select the most frequent 20,000 words and 1480 attributes in the training set as the word and attribute vocabulary. We tune the hyper-parameters based on the model performance on the validation set. The dimensions of word embedding, attribute embedding, position embedding and hidden unit are 500, 50, 600, 10 respectively. The batch size, learning rate and optimizer for both two datasets are 32, 5e4 and Adam (Kingma and Ba, 2014), respectively. We use Xavier initialization (Glorot and Bengio, 2010) for all the parameters in our model. The global constraint of force-attention (Eq 4) is adapted after 4 and 1.5 epochs of training to avoid hurting the primary loss for the WIKIBIO and WB-filter datasets, respectively. Before the richness-oriented reinforced training, the neural generator is pre-trained 8 and 4 epochs for the WIKIBIO and WB-filter datasets (with or without force-attention module), respectively. We replace UNK tokens with the most relevant token in the source table according to the attention matrix (Jean et al., 2015). 4.4 Baselines KN & Template KN: A template-based KneserNey (KN) language model (Heafield et al., 2013) The extracted template for Table 1 is “name 1 name 2 (born birthdate 1 · · · ”. During inference, the decoder is constrained to emit words from the vocabulary or the special tokens in the tables. Table NLM: Lebret et al. (2016) proposed a neural language model Table NLM taking the attribute information into consideration. Order-planning: Sha et al. (2018) proposed a link matrix to model the order for the attributevalue tuples while generating biographies. Struct-aware: Liu et al. (2018) proposed a structure-aware model using a modified LSTM unit and a specific attention mechanism to incorporate the attribute information. Word & Attribute level Coverage: we also implement the implicit coverage method (Tu et al., 2016) for comparison. For word-level coverage, we replace Eq 2 with g(st, hi) = tanh(Wphi + Wqst + Wmθt + b). 
For attribute-level coverage, we replace Eq 2 with g(st, hi) = tanh(Wphi + Models BLEU ROUGE KN 2.21 0.38 Template KN 19.80 10.70 NLM 4.17 1.48 Table NLM 34.70 25.80 Order-planning 43.91 37.15 Struct-aware 44.89 41.21 Word-level Coverage* 43.44 39.84 Attri-level Coverage* 42.87 38.95 Seq2seq 43.51 39.61 + Force-Attention 44.46 40.58 + Richness RL † 45.47 41.54 (a) Automatic evaluation on WIKIBIO Models BLEU ROUGE Struct-aware* 40.81 36.52 Word-level Coverage* 38.85 35.11 Attri-level Coverage* 38.34 34.92 Seq2seq 39.17 35.39 + Force Attention 41.21 36.71 + Richness RL † 42.03 37.55 (b) Automatic evaluation on WB-filter Table 2: BLEU and ROUGE scores on the WIKIBIO and WB-filter datasets. The baselines with * are based on our implementation while the others are reported by their authors. Models with † are trained using the RL criterion specified in Sec 3.2.2 while the remaining models are trained using the maximum likelihood estimate (MLE). Wqst + Wmγt + b). θt and γt are the word-level and attribute-level coverage defined in Sec 3.1. 4.5 Analysis of Experimental Results Automatic evaluations are shown in Table 2 for WIKIBIO and WB-filter. The proposed forceattention module achieves 1.11/0.98 and 2.04/1.32 BLEU/ROUGE increases on the WIKIBIO and WB-filter datasets, respectively. Although the proposed force attention method does not outperform the ‘struct-aware’ method in terms of BLEU and ROUGE in the WIKIBIO dataset. We show its advantages in the user-oriented scenario as well as its ability to cover the key attributes as shown in Table 4 and 5. The richness-oriented reinforced module further enhances the model performance, helping our model outperform the state-of-the-art system (Liu et al., 2018) by about 0.79 BLEU and 0.58 ROUGE. Note that the BLEU and ROUGE scores are lower in the WB-filter datasets because firstly, the WIKIBIO has much larger training set. Secondly, the gold biographies might con5991 Models Fluency Coverage Correctness Seq2seq 1.87 1.99 1.95 Struct-aware 1.61 1.80 1.71 Our best 1.54 1.46 1.61 (a) Human evaluation on WIKIBIO Models Fluency Coverage Correctness Seq2seq 2.02 1.88 1.93 Struct-aware 1.58 1.52 1.65 Our best 1.54 1.39 1.54 (b) Human evaluation on WB-filter Table 3: Average ranking (lower is better) of 3 systems. We calculate the Pearson correlation to show the interannotator agreement. Models BLEU Rich 1 seq2seq 43.51 28.21 2 + Stepwise (only) 43.69 30.01 3 + Global loss (only) 44.21 31.65 4 + Stepwise + Global loss 44.46 32.90 5 + Richness RL (only) 45.23 35.84 6 + All 45.47 37.64 (a) Ablation studies on WIKIBIO Models BLEU Rich 1 seq2seq 39.17 56.30 2 + Stepwise (only) 39.59 59.29 3 + Global loss (only) 40.83 61.12 4 + Stepwise + Global loss 41.21 62.81 5 + Richness RL (only) 41.66 63.89 6 + All 42.03 64.41 (b) Ablation studies on WB-filter Table 4: The ablation studies for our model. Models 2-4 are from the force-attention method. ‘Rich’ is the ‘information richness’ defined in Eq 5. tain information beyond the tables. Although this phenomenon also occurs in WIKIBIO, the filtering of WB-filter magnifies this issue. Human evaluations in Table 3 show our model achieves better generation coverage and correctness than all the baselines. Table 4 shows that the ablation studies of our model. As demonstrated in Table 5, we select an infobox from WIKIBIO and WB-filter respectively for case studies. 
By observing the generated description in WIKIBIO, we find that 1) compared with the vanilla seq2seq model, our force-attention module can cover the information in the 'Notableworks' attribute; 2) the richness-oriented module further helps our model to cover the 'Alma mater' and 'Notableworks' attributes, as they are infrequent (and thus more informative) attributes in the dataset. Additionally, due to the rareness of the word 'kiev', our model is able to cover the related information. Similarly, the generated description for WB-filter covers the information from 'Organization' and 'Birthplace' with the help of the proposed model.

Figure 4: The average attribute-level (green) and word-level (red) coverage of the seq2seq models with or without the force-attention module for an infobox in WB-filter (higher values are darker) in the last decoding step. The vanilla seq2seq model ignores the 'birthplace' and 'position' attributes because of its low coverage of them, while the FA module attracts enough attention to them while decoding.

Figure 5: Hit-1 coverage (Sec 4.2) for attributes on the test sets of WIKIBIO and WB-filter. For better visualization, we first select the attributes whose frequencies are larger than 0.1%, then rank the Hit-1 coverage of these attributes (214 attributes in WIKIBIO; 26 attributes in WB-filter) in descending order.

Fig 4 shows the effectiveness of the force-attention module: the decoder is guided to pay more attention to the uncovered attributes ('birthplace' and 'position') while decoding. Fig 5 shows that both proposed modules can boost the attribute-level coverage on the two datasets. Fig 6 (left) explains why our model can also improve end-to-end table description generation: attributes like 'position', 'battles' and 'political party' are key information for describing the infoboxes of sportsmen, soldiers and politicians. Fig 6 (right) shows the effect of λ in Eq 6.

4.6 Error Analysis
Although the proposed models achieve competitive performance, we also observe some failure cases. To sum up, the main failure case is irrelevant information in the generated descriptions with respect to the source tables.
For ex5992 Name:Ivan Ohienko Metropolitan Ilarion ; Birthdate:2 January 1882 ;Birthplace:Brusilov, Kiev governorate, Russian empire ; Deathdate:29 March 1972;Deathplace:Winnipeg, Canada; Occupation:cleric, historian, ethnographer, and scholar, writer, and translator; Language:Ukrainian; Nationality:Ukrainian; Alma mater:Kiev university Notableworks:translation of the bible into ukrainian Article title:Ilarion Ohienko Seq2seq: Ivan Ohienko Metropolitan ( January 2 , 1882 – March 29 , 1972 ) was a Ukrainian cleric , historian , ethnographer, writer , linguist , writer and scolar. +Force-Attention: Ivan Ohienko Metropolitan Ilarion ( 2 January 1882 in Brusilov – 29 march 1972 in Winnipeg ) was a Ukrainian linguist , ethnographer , and scholar , best known for his translation of the bible into ukrainian . +Richness-oriented RL: Ivan Ohienko Metropolitan Ilarion ( 2 January 1882 , Krusilov , Kiev governorate– 29 march 1972 , Winnipeg ) was a Ukrainian cleric, historian , ethnographer , and scholar of Kiev university , best known for his translation of the bible into ukrainian . Name:Rajendra Singh ; Birthdate:06 August 1959 ;Birthplace:Daula, Bagpat District, Uttar Pradesh ; Nationality: Indian; Organization:Tarun Bharat Sangh; Occupation:water conservationist Alma mater:Allahabad University Seq2seq: Rajendra Singh is an Indian water conservationist. +Force-Attention: Rajendra Singh (born 6 August 1959) is an Indian conservationist and a senior fellow of the Tarun Bharat Sangh. +Richness-oriented RL: Rajendra Singh (born 6 august 1959, Uttar Pradesh) is an Indian water conservationist and a member of the Tarun Bharat Sangh. Table 5: The generated cases in WIKIBIO (above) and WB-filter (below) datasets. The underlined texts, which are the key information of the source tables, are ignored by seq2seq model. Hit-1 coverage of some key attributes while summarizing WIKIBIO How we choose the ! in Eq 6 for WIKIBIO !: $%&' = !$&)*+ + (1 −!)$1234 Figure 6: Hit-1 Coverage (Sec 4.2) for some key attributes (left) on the test set of WIKIBIO shows that our model can help to cover some key attributes while describing the tables. The right figure is the analysis of λ (Eq 6) for ‘Seq2seq + RL’ model on the validation set of WIKIBIO. ample, a biography about a football player might contain ‘in the national football league’ although the related infobox does not mention this piece of information as the similar expression exists in many instances of the training set. Although our model could largely relieve this problem as shown in human evaluation (Table 3), it is still a general problem in NLG. As for the ability to cover important information in the tables, although our model is able to cover much more comprehensive information than the previous models (Table 2 and 3). Some implicitly expressed (like if a person is retired or not) or rarely covered (like ‘spouse’ or ‘high school’) attributes in the source tables might still be ignored in the descriptions generated by our model. Furthermore, those pieces of information which need some form of inference across several attributes (like a time span) may not be well represented by our model. 5 Related Work Data-to-text a language generation task to generate text for structured data. Table-to-text belongs to the data-to-text generation (Reiter and Dale, 2000). Many previous work (Barzilay and Lapata, 2005, 2006; Liang et al., 2009) treated the task as a pipelined systems, which viewed content selection and surface realization as two separate tasks. 
Duboue and McKeown (2002) proposed a clustering approach in the biography domain by scoring the semantic relevance of the text and paired knowledge base. In a similar vein, Barzilay and Lapata (2005) modeled the dependencies between the American football records and identified the bits of information to be verbalized. Liang et al. (2009); Angeli et al. (2010) extended the work of Barzilay and Lapata (2005) to soccer and weather domains by learning the alignment between data and text using hidden variable models. Androutsopoulos et al. (2013) and Duma and Klein (2013) focused on generating descriptive language for Ontologies and RDF triples. Most recent work utilize neural networks on data-to-text generation (Mahapatra et al., 2016; Wiseman et al., 2017; Laha et al., 2018; Kaffee et al., 2018; Freitag and Roy, 2018; Qader et al., 2018; Dou et al., 2018; Yeh et al., 2018; Jhamtani et al., 2018; Jain et al., 2018; Liu et al., 2017b, 2019; Peng et al., 2019; 5993 Duˇsek et al., 2019). Some closely relevant work also focused on the table-to-text generation. Mei et al. (2016) proposed an encoder-aligner-decoder framework for generating weather broadcast. Hachey et al. (2017) used a table-text and text-table autoencoder framework for table-to-text generation. Nema et al. (2018) proposed gated orthogonalization to avoid repetitions. Wiseman et al. (2018) used neural semi-HMM to generate template-like descriptions for structured data. Our work somewhat shares similar goals as Kiddon et al. (2016); Tu et al. (2016); Liu et al. (2017a); Gong et al. (2018) in the sense that they emphasis easily ignored (usually less frequent) features or bits of information in the training procedure by smoothing or regularization. The greatest difference between our work and theirs is that our method is tailored for covering the key information embedded in the attributes (entries) of the key-value tables rather than single words or labels. Although the deficient score of Tu et al. (2016) in Table 2 has demonstrated that word-level coverage oriented methods may not still be suitable to the structured tables, we assume other word-level constraints may easily transfer to the structured tables without losing efficiency. We leave the recognition of potential applicable word-level constraints to the future work. This paper focused on generating one-sentence biographies for infoboxes like many previous works (Lebret et al., 2016; Hachey et al., 2017; Liu et al., 2018; Bao et al., 2018; Nema et al., 2018; Puduppully et al., 2018; Cao et al., 2018). Perez-Beltrachini and Lapata (2018) used the first paragraph of the wikipedia pages as the gold biographies aiming at generating longer biographies. We tried the same setting and unfortunately found most generated biographies contain too much groundless information compared with the source infoboxes. This is because the related gold biographies from first paragraph contain too much groundless information beyond the source infoboxes. 6 Conclusion and Future Work We set up 3 goals for comprehensive description generation for attribute-value factual tables: accurate, informative and loyal. To achieve these goals, we propose force-attention method, which encourages the generator to pay more attention to previous uncovered attributes to avoid potential key attribute missing. Richness-oriented reinforcement learning is proposed to cover more informative contents in source tables, which help the generator to generate informative and accurate descriptions. 
The experiments on the WIKIBIO and WB-filter datasets show the merits of our model. In the future, we will explore the representation for the implicit information like whether a man is retired or not or how long a sportsman’s career is given starting and ending years, in the table by including some inference strategies. Acknowledgments We would like to thank the anonymous reviewers for their valuable suggestions. This work is supported by the National Science Foundation of China under Grant No. 61876004, No. 61772040. The corresponding authors of this paper are Baobao Chang and Zhifang Sui. References Ion Androutsopoulos, Gerasimos Lampouras, and Dimitrios Galanis. 2013. Generating natural language descriptions from owl ontologies: the naturalowl system. Journal of Artificial Intelligence Research, 48:671–715. Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In EMNLP 2010, pages 502–512. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Jun-Wei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, Ming Zhou, and Tiejun Zhao. 2018. Tableto-text: Describing table region with natural language. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5020–5027. Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 331–338. Association for Computational Linguistics. Regina Barzilay and Mirella Lapata. 2006. Aggregation via set partitioning for natural language generation. In NAACL, pages 359–366. Association for Computational Linguistics. 5994 Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluation the role of bleu in machine translation research. In EACL 2006. Juan Cao, Junpeng Gong, and Pengzhou Zhang. 2018. Open-domain table-to-text generation based on seq2seq. In Proceedings of the 2018 International Conference on Algorithms, Computing and Artificial Intelligence, page 72. ACM. Longxu Dou, Guanghui Qin, Jinpeng Wang, Jin-Ge Yao, and Chin-Yew Lin. 2018. Data2text studio: Automated text generation from structured data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 13–18. Pablo A Duboue and Kathleen R McKeown. 2002. Content planner construction via evolutionary algorithms and a corpus-based fitness function. In Proceedings of INLG 2002, pages 89–96. Daniel Duma and Ewan Klein. 2013. Generating natural language from linked data: Unsupervised template extraction. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013)–Long Papers, pages 83–94. Ondˇrej Duˇsek, Jekaterina Novikova, and Verena Rieser. 2019. Evaluating the state-of-the-art of end-to-end natural language generation: The e2e nlg challenge. arXiv preprint arXiv:1901.07931. Markus Freitag and Scott Roy. 2018. Unsupervised natural language generation with denoising autoencoders. arXiv preprint arXiv:1804.07899. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. 
In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. Chengyue Gong, Xu Tan, Di He, and Tao Qin. 2018. Sentence-wise smooth regularization for sequence to sequence learning. arXiv preprint arXiv:1812.04784. Ben Hachey, Will Radford, and Andrew Chisholm. 2017. Learning to generate one-sentence biographies from wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 633–642. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H Clark, and Philipp Koehn. 2013. Scalable modified kneser-ney language model estimation. In ACL (2), pages 690–696. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Preksha Nema, Mitesh M Khapra, and Shreyas Shetty. 2018. A mixed hierarchical attention based encoder-decoder approach for standard table summarization. arXiv preprint arXiv:1804.07790. S´ebastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1–10. Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Learning to generate move-by-move commentary for chess games from large-scale social forum data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1661–1671. Lucie-Aim´ee Kaffee, Hady ElSahar, Pavlos Vougiouklis, Christophe Gravier, Fr´ed´erique Laforest, Jonathon S. Hare, and Elena Simperl. 2018. Learning to generate wikipedia summaries for underserved languages from wikidata. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 640–645. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In EMNLP 2016, pages 329–339. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Anirban Laha, Parag Jain, Abhijit Mishra, and Karthik Sankaranarayanan. 2018. Scalable micro-planned generation of discourse from structured data. CoRR, abs/1810.02889. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In EMNLP 2016, pages 1203–1213. Liunian Li and Xiaojun Wan. 2018. Point precisely: Towards ensuring the precision of data in generated texts using delayed copy mechanism. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1044–1055. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 91–99. Association for Computational Linguistics. 5995 Chin-Yew Lin. 2004. 
Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Tianyu Liu, Fuli Luo, Qiaolin Xia, Shuming Ma, Baobao Chang, and Zhifang Sui. 2019. Hierarchical encoder with auxiliary supervision for neural tableto-text generation: Learning better representation for tables. In Proceedings of AAAI. Tianyu Liu, Kexiang Wang, Baobao Chang, and Zhifang Sui. 2017a. A soft-label method for noisetolerant distantly supervised relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1790–1795. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4881– 4888. Tianyu Liu, Bingzhen Wei, Baobao Chang, and Zhifang Sui. 2017b. Large-scale simple question generation by template-based seq2seq learning. In Natural Language Processing and Chinese Computing - 6th CCF International Conference, NLPCC 2017, Dalian, China, November 8-12, 2017, Proceedings, pages 75–87. Joy Mahapatra, Sudip Kumar Naskar, and Sivaji Bandyopadhyay. 2016. Statistical natural language generation from tabular non-textual data. In Proceedings of the 9th International Natural Language Generation conference, pages 143–152. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In NAACL HLT 2016, pages 720–730. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Preksha Nema, Shreyas Shetty, Parag Jain, Anirban Laha, Karthik Sankaranarayanan, and Mitesh M Khapra. 2018. Generating descriptions from structured data using a bifocal attention mechanism and gated orthogonalization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1539–1550. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL2002, pages 311–318. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, and Das Dipanjan. 2019. Text generation with exemplar-based adaptive decoding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping generators from noisy data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1516–1527. Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. arXiv preprint arXiv:1809.00582. Raheel Qader, Khoder Jneid, Franc¸ois Portet, and Cyril Labb´e. 2018. Generation of company descriptions using concept-to-text and text-to-text deep models: dataset collection and systems evaluation. 
In Proceedings of the 11th International Conference on Natural Language Generation, pages 254–263. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. CoRR, abs/1511.06732. Ehud Reiter. 2018. A structured review of the validity of bleu. Computational Linguistics, 44(3):393–401. Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529–558. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge university press. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In CVPR, volume 1, page 3. Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Order-planning neural text generation from structured data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5414–5421. 5996 Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In ACL 2016, Volume 1: Long Papers. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Reinforcement Learning, pages 5–32. Springer. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2253–2263. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3174–3187. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057. Shyh-Horng Yeh, Hen-Hsen Huang, and Hsin-Hsi Chen. 2018. Precise description generation for knowledge base entities with local pointer network. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 214–221. IEEE.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5997–6007 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5997 Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation Ning Dai, Jianze Liang, Xipeng Qiu∗, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {ndai16,jzliang18,xpqiu,xjhuang}@fudan.edu.cn Abstract Disentangling the content and style in the latent space is prevalent in unpaired text style transfer. However, two major issues exist in most of the current neural models. 1) It is difficult to completely strip the style information from the semantics for a sentence. 2) The recurrent neural network (RNN) based encoder and decoder, mediated by the latent representation, cannot well deal with the issue of the long-term dependency, resulting in poor preservation of non-stylistic semantic content. In this paper, we propose the Style Transformer, which makes no assumption about the latent representation of source sentence and equips the power of attention mechanism in Transformer to achieve better style transfer and better content preservation. Source code will be available on Github1. 1 Introduction Text style transfer is the task of changing the stylistic properties (e.g., sentiment) of the text while retaining the style-independent content within the context. Since the definition of the text style is vague, it is difficult to construct paired sentences with the same content and differing styles. Therefore, the studies of text style transfer focus on the unpaired transfer. Recently, neural networks have become the dominant methods in text style transfer. Most of the previous methods (Hu et al., 2017; Shen et al., 2017; Fu et al., 2018; Carlson et al., 2017; Zhang et al., 2018b,a; Prabhumoye et al., 2018; Jin et al., 2019; Melnyk et al., 2017; dos Santos et al., 2018) formulate the style transfer problem into the “encoder-decoder” framework. The encoder maps the text into a style-independent latent ∗Corresponding author 1https://github.com/fastnlp/fastNLP representation (vector representation), and the decoder generates a new text with the same content but a different style from the disentangled latent representation plus a style variable. These methods focus on how to disentangle the content and style in the latent space. The latent representation needs better preserve the meaning of the text while reducing its stylistic properties. Due to lacking paired sentence, an adversarial loss (Goodfellow et al., 2014) is used in the latent space to discourage encoding style information in the latent representation. Although the disentangled latent representation brings better interpretability, in this paper, we address the following concerns for these models. 1) It is difficult to judge the quality of disentanglement. As reported in (Elazar and Goldberg, 2018; Lample et al., 2019), the style information can be still recovered from the latent representation even the model has trained adversarially. Therefore, it is not easy to disentangle the stylistic property from the semantics of a sentence. 2) Disentanglement is also unnecessary. Lample et al. (2019) reported that a good decoder can generate the text with the desired style from an entangled latent representation by “overwriting” the original style. 
3) Due to the limited capacity of vector representation, the latent representation is hard to capture the rich semantic information, especially for the long text. The recent progress of neural machine translation also proves that it is hard to recover the target sentence from the latent representation without referring to the original sentence. 4) To disentangle the content and style information in the latent space, all of the existing approaches have to assume the input sentence is encoded by a fix-sized latent vector. As a result, these approaches can not directly apply the attention mechanism to enhance the ability to preserve 5998 the information in the input sentence. 5) Most of these models adopt recurrent neural networks (RNNs) as encoder and decoder, which has a weak ability to capture the long-range dependencies between words in a sentence. Besides, without referring the original text, RNN-based decoder is also hard to preserve the content. The generation quality for long text is also uncontrollable. In this paper, we address the above concerns of disentangled models for style transfer. Different from them, we propose Style Transformer, which takes Transformer (Vaswani et al., 2017) as the basic block. Transformer is a fully-connected selfattention neural architecture, which has achieved many exciting results on natural language processing (NLP) tasks, such as machine translation (Vaswani et al., 2017), language modeling (Dai et al., 2019), text classification (Devlin et al., 2018). Different from RNNs, Transformer uses stacked self-attention and point-wise, fully connected layers for both the encoder and decoder. Moreover, Transformer decoder fetches the information from the encoder part via attention mechanism, compared to a fixed size vector used by RNNs. With the strong ability of Transformer, our model can transfer the style of a sentence while better preserving its meaning. The difference between our model and the previous model is shown in Figure 1. Our contributions are summarized as follows: • We introduce a novel training algorithm which makes no assumptions about the disentangled latent representations of the input sentences, and thus the model can employ attention mechanisms to improve its performance further. • To the best of our knowledge, this is the first work that applies the Transformer architecture to style transfer task. • Experimental results show that our proposed approach generally outperforms the other approaches on two style transfer datasets. Specifically, to the content preservation, Style Transformer achieves the best performance with a significant improvement. 2 Related Work Recently, many text style transfer approaches have been proposed. Among these approaches, there is a line of works aims to infer a latent representation for the input sentence, and manipulate the style of the generated sentence based on this learned latent representation. Shen et al. (2017) propose a cross-aligned auto-encoder with adversarial training to learn a shared latent content distribution and a separated latent style distribution. Hu et al. (2017) propose a new neural generative model which combines variational auto-encoders and holistic attribute discriminators for the effective imposition of semantic structures. Following their work, many methods (Fu et al., 2018; John et al., 2018; Zhang et al., 2018a,b) has been proposed based on standard encoder-decoder architecture. 
Although, learning a latent representation will make the model more interpretable and easy to manipulate, the model which is assumed a fixed size latent representation cannot utilize the information from the source sentence anymore. On the other hand, there are also some approaches without manipulating latent representation are proposed recently. Xu et al. (2018) propose a cycled reinforcement learning method for unpaired sentiment-to-sentiment translation task. Li et al. (2018) propose a three-stage method. Their model first extracts content words by deleting phrases a strong attribute value, then retrieves new phrases associated with the target attribute, and finally uses a neural model to combine these into a final output. Lample et al. (2019) reduce text style transfer to unsupervised machine translation problem (Lample et al., 2018). They employ Denoising Auto-encoders (Vincent et al., 2008) and back-translation (Sennrich et al., 2016) to build a translation style between different styles. However, both lines of the previous models make few attempts to utilize the attention mechanism to refer the long-term history or the source sentence, except Lample et al. (2019). In many NLP tasks, especially for text generation, attention mechanism has been proved to be an essential technique to enable the model to capture the longterm dependency (Bahdanau et al., 2014; Luong et al., 2015; Vaswani et al., 2017). In this paper, we follow the second line of work and propose a novel method which makes no assumption about the latent representation of source sentence and takes the proven self-attention network, Transformer, as a basic module to train a style transfer system. 5999 x Encoder z Decoder y s (a) Disentangled Style Transfer x Transformer y s (b) Style Transformer Figure 1: General illustration of previous models and our model. z denotes style-independent content vector and s denotes the style variable. 3 Style Transformer To make our discussion more clearly, in this section, we will first give a brief introduction to the style transfer task, and then start to discuss our proposed model based on our problem definition. 3.1 Problem Formalization In this paper, we define the style transfer problem as follows: Considering a bunch of datasets {Di}K i=1, and each dataset Di is composed of many natural language sentences. For all of the sentences in a single dataset Di , they share some specific characteristic (e.g. they are all the positive reviews for a specific product), and we refer this shared characteristic as the style of these sentences. In other words, a style is defined by the distribution of a dataset. Suppose we have K different datasets Di, then we can define K different styles, and we denote each style by the symbol s(i). The goal of style transfer is that: given a arbitrary natural language sentence x and a desired style bs ∈{s(i)}K i=1, rewrite this sentence to a new one bx which has the style bs and preserve the information in original sentence x as much as possible. 3.2 Model Overview To tackle the style transfer problem we defined above, our goal is to learn a mapping function fθ(x, s) where x is a natural language sentence and s is a style control variable. The output of this function is the transferred sentence bx for the input sentence x. A big challenge in the text style transfer is that we have no access to the parallel corpora. Thus we can’t directly obtain supervision to train our transfer model. 
In section 3.4, we employ two discriminator-based approaches to create supervision from non-parallel corpora. Finally, we will combine the Style Transformer network and discriminator network via an overall learning algorithm in section 3.5 to train our style transfer system. 3.3 Style Transformer Network Generally, Transformer follows the standard encoder-decoder architecture. Explicitly, for a input sentence x = (x1, x2, ..., xn), the Transformer encoder Enc(x; θE) maps inputs to a sequence of continuous representations z = (z1, z2, ..., zn). And the Transformer decoder Dec(z; θD) estimates the conditional probability for the output sentence y = (y1, y2, ..., yn) by auto-regressively factorized its as: pθ(y|x) = m Y t=1 pθ(yt|z, y1, ..., yt−1). (1) At each time step t, the probability of the next token is computed by a softmax classifier: pθ(yt|z, y1, ..., yt−1) = softmax(ot), (2) where ot is logit vector outputted by decoder network. To enable style control in the standard Transformer framework, we add a extra style embedding as input to the Transformer encoder Enc(x, s; θE). Therefore the network can compute the probability of the output condition both on the input sentence x and the style control variable s. Formally, this can be expressed as: pθ(y|x, s) = m Y t=1 pθ(yt|z, y1, ..., yt−1), (3) and we denote the predicted output sentence of this network by fθ(x, s). 3.4 Discriminator Network Suppose we use x and s to denote the sentence and its style from the dataset D. Because of the absence of the parallel corpora, we can’t directly obtain the supervision for the case fθ(x,bs) where s ̸= bs. Therefore, we introduce a discriminator network to learn this supervision from the nonparallel copora. The intuition behind the training of discriminator is based on the assumption below: As we mentioned above, we only have the supervision for the case fθ(x, s). In this case, because of the input sentence x and chosen style s are both come from 6000 the same dataset D, one of the optimum solutions, in this case, is to reproduce the input sentence. Thus, we can train our network to reconstruct the input in this case. In the case of fθ(x, s) where s ̸= bs, we construct supervision from two ways. 1) For the content preservation, we train the network to reconstruct original input sentence x when we feed transferred sentence by = fθ(x,bs) to the Style Transformer network with the original style label s. 2) For the style controlling, we train a discriminator network to assist the Style Transformer network to better control the style of the generated sentence. In short, the discriminator network is another Transformer encoder, which learns to distinguish the style of different sentences. And the Style Transformer network receives style supervision from this discriminator. To achieve this goal, we experiment with two different discriminator architectures. Conditional Discriminator In a setting similar to Conditional GANs (Mirza and Osindero, 2014), discriminator makes decision condition on a input style. Explicitly, a sentence x and a proposal style s are feed into discriminator dφ(x, s), and the discriminator is asked to answer whether the input sentence has the corresponding style. In discriminator training stage, the real sentence from datasets x, and the reconstructed sentence y = fθ(x, s) are labeled as positive, and the transferred sentences by = fθ(x,bs) where s ̸= bs, are labeled as negative. 
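To make this labeling scheme concrete, the sketch below assembles the training triples of the conditional discriminator for a single source sentence, following the protocol that Algorithm 1 formalizes below. Here `f_theta` stands for the Style Transformer network and is assumed to be defined elsewhere; all names are illustrative.

```python
def conditional_discriminator_examples(x, s, s_hat, f_theta):
    """Assemble (sentence, proposal style, label) triples for one source sentence.

    x      : a sentence drawn from dataset D_i with style s
    s_hat  : a randomly sampled target style with s_hat != s
    f_theta: the Style Transformer network (assumed to exist; not defined here)
    """
    y = f_theta(x, s)          # reconstructed sentence, original style
    y_hat = f_theta(x, s_hat)  # transferred sentence, target style
    return [
        (x, s, 1),         # real sentence with its true style      -> positive
        (y, s, 1),         # reconstruction with its true style     -> positive
        (x, s_hat, 0),     # real sentence with a mismatched style  -> negative
        (y_hat, s_hat, 0), # transferred sentence with style s_hat  -> negative
    ]
```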
In Style Transformer network training stage, the network fθ is trained to maximize the probability of positive when feed fθ(x,bs) and bs to the discriminator. Multi-class Discriminator Different from the previous one, in this case, only one sentence is feed into discriminator dφ(x), and the discriminator aims to answer the style of this sentence. More concretely, the discriminator is a classifier with K + 1 classes. The first K classes represent K different styles, and the last class is stand for the generated data from fθ(x,bs) , which is also often referred as fake sample. In discriminator training stage, we label the real sentences x and reconstructed sentences y = fθ(x, s) to the label of the corresponding style. And for the transferred sentence by = fθ(x,bs) where s ̸= bs, is labeled as the class 0. In Style Transformer network learning stage, we train the network fθ(x,bs) to maximize x fθ(x, s) y s fθ(by, s) y fθ(x,bs) x by bs dφ(by) Lself Lstyle Lcycle Figure 2: The training process for Style Transformer network. The input sentence x and input style s(bs) is feed into Transformer network fθ. If the input style s is the same as the style of sentence x, generated sentence y will be trained to reconstruct x. Otherwise, the generated sentence by will be feed into Transformer fθ and discriminator dφ to reconstruct input sentence x and input style bs respectively. the probability of the class which is stand for style bs. 3.5 Learning Algorithm In this section, we will discuss how to train these two networks. And the training algorithm of our model can be divided into two parts: the discriminator learning and Style Transformer network learning. The brief illustration is shown in Figure 2. 3.5.1 Discriminator Learning Loosely speaking, in the discriminator training stage, we train our discriminator to distinguish between the real sentence x and reconstructed sentence y = fθ(x, s) from the transferred sentence by = fθ(x,bs). The loss function for the discriminator is simply the cross-entropy loss of the classification problem. For the conditional discriminator: Ldiscriminator(φ) = −pφ(c|x, s). (4) And for the multi-class discriminator: Ldiscriminator(φ) = −pφ(c|x). (5) According to the difference of discriminator architecture, there is a different protocol for how to label these sentences, and the details can be found in Algorithm 1. 6001 Algorithm 1: Discriminator Learning Input: Style Transformer fθ, discriminator dφ, and a dataset Di with style s 1 Sample a minibatch of m sentences {x1, x2, ...xm} from Di. ; 2 foreach x ∈{x1, x2, ...xm} do 3 Randomly sample a style bs(s ̸= bs); 4 Use fθ to generate two new sentence 5 y = fθ(x, s) 6 by = fθ(x,bs) ; 7 if dφ is conditional discriminator then 8 Label {(x, s), (y, s)} as 1 ; 9 Label {(x,bs), (by,bs)} as 0 ; 10 else 11 Label {x, y} as i ; 12 Label {by} as 0 ; 13 end 14 Compute loss for dφ by Eq. (4) or (5) . 15 end 3.5.2 Style Transformer Learning The training of Style Transformer is developed according to the different cases of fθ(x,bs) where s = bs or s ̸= bs. Self Reconstruction For the case s = bs , or equivalently, the case fθ(x, s). As we discussed before, the input sentence x and the input style s comes from the same dataset , we can simply train our Style Transformer to reconstruct the input sentence by minimizing negative log-likelihood: Lself(θ) = −pθ(y = x|x, s). (6) For the case s ̸= bs, we can’t obtain direct supervision from our training set. So, we introduce two different training loss to create supervision indirectly. 
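Before detailing those two indirect losses, note that the self-reconstruction term in Eq. (6) reduces to ordinary teacher-forced cross-entropy against the input tokens. A minimal PyTorch-style sketch is given below; the tensor names and the assumption that the decoder returns per-token logits are ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

def self_reconstruction_loss(logits, x_ids, pad_id):
    """Negative log-likelihood of reconstructing the input sentence (Eq. 6).

    logits: (batch, seq_len, vocab) decoder outputs for f_theta(x, s), teacher-forced on x
    x_ids : (batch, seq_len) token ids of the input sentence x
    pad_id: id of the padding token, ignored in the loss
    """
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten batch and time dimensions
        x_ids.reshape(-1),                    # the gold targets are the input tokens themselves
        ignore_index=pad_id,
    )
```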
Cycle Reconstruction To encourage generated sentence preserving the information in the input sentence x, we feed the generated sentence by = fθ(x,bs) to the Style Transformer with the style of x and training our network to reconstruct original input sentence by minimizing negative loglikelihood: Lcycle(θ) = −pθ(y = x|fθ(x,bs), s). (7) Style Controlling If we only train our Style Transformer to reconstruct the input sentence x from transferred sentence by = fθ(x,bs), the network can only learn to copy the input to the output. To handle this degeneration problem, we further add a style controlling loss for the generated sentence. Namely, the network generated sentence by is feed into discriminator to maximize the probability of style bs. For the conditional discriminator, the Style Transformer aims to minimize the negative loglikelihood of class 1 when feed to the discriminator with the style label bs: Lstyle(θ) = −pφ(c = 1|fθ(x,bs),bs). (8) And in the case of the multi-class discriminator, the Style Transformer is trained to minimize the the negative log-likelihood of the corresponding class of style bs: Lstyle(θ) = −pφ(c = bs|fθ(x,bs)). (9) Combining the loss function we discussed above, the training procedure of the Style Transformer is summarized in Algorithm 2. Algorithm 2: Style Transformer Learning Input: Style Transformer fθ, discriminator dφ, and a dataset Di with style s 1 Sample a minibatch of m sentences {x1, x2, ...xm} from Di. ; 2 foreach x ∈{x1, x2, ...xm} do 3 Randomly sample a style bs(s ̸= bs); 4 Use fθ to generate two new sentence 5 y = fθ(x, s) 6 by = fθ(x,bs) ; 7 Compute Lself(θ) for y by Eq. (6) ; 8 Compute Lcycle(θ) for by by Eq. (7) ; 9 Compute Lstyle(θ) for by by Eq. (8) or (9) ; 10 end 3.5.3 Summarization and Discussion Finally, we can construct our final training algorithm based on discriminator learning and Style Transformer learning steps. Similar to the training process of GANs (Goodfellow et al., 2014), in each training iteration, we first perform nd steps discriminator learning to get a better discriminator, and then train our Style Transformer nf steps to improve its performance. The training process is summarized in Algorithm 3. Before finishing this section, we finally discuss a problem which we will be faced with in the training process. Because of the discrete nature of the natural language, for the generated sentence by = fθ(x,bs), we can’t directly propagate gradients from the discriminator through the discrete samples. To handle this problem, one can use REINFORCE (Williams, 1992) or the Gumbel-Softmax trick (Kusner and Hern´andez-Lobato, 2016) to estimates gradients from the discriminator. However, these two approaches are faced with high 6002 Algorithm 3: Training Algorithm Input: A bunch of datasets {Di}K i=1, and each represent a different style s(i) 1 Initialize the Style Transformer network fθ, and the discriminator network dφ with random weights θ, φ ; 2 repeat 3 for nd step do 4 foreach dataset Di do 5 Accumulate loss by Algorithm 1 6 end 7 Perform gradient decent to update dφ. 8 end 9 for nf step do 10 foreach dataset Di do 11 Accumulate loss by Algorithm 2 12 end 13 Perform gradient decent to update fθ. 14 end 15 until network fθ(x, s) converges; variance problem, which will make the model hard to converge. In our experiment, we also observed that the Gumbel-Softmax trick would slow down the model converging, and didn’t bring much performance improvement to the model. 
For the reasons above, empirically, we view the softmax distribution generated by fθ as a “soft” generated sentence and feed this distribution to the downstream network to keep the continuity of the whole training process. When this approximation is used, we also switch our decoder network from greedy decoding to continuous decoding. Which is to say, at every time step, instead of feed the token that has maximum probability in previous prediction step to the network, we feed the whole softmax distribution (Eq. (2)) to the network. And the decoder uses this distribution to compute a weighted average embedding from embedding matrix for the input. 4 Experiment 4.1 Datasets We evaluated and compared our approach with several state-of-the-art systems on two review datasets, Yelp Review Dataset (Yelp) and IMDb Movie Review Dataset (IMDb). The statistics of the two datasets are shown in Table 1. Yelp Review Dataset (Yelp) The Yelp dataset is provided by the Yelp Dataset Challenge, consisting of restaurants and business reviews with sentiment labels (negative or positive). Following previous work, we use the possessed dataset provided by Li et al. (2018). Additionally, it also provides human reference sentences for the test set. Dataset Yelp IMDb Positive Negative Positive Negative Train 266,041 177,218 178,869 187,597 Dev 2,000 2,000 2,000 2,000 Test 500 500 1,000 1,000 Avg. Len. 8.9 18.5 Table 1: Datasets statistic. IMDb Movie Review Dataset (IMDb) The IMDb dataset consists of movie reviews written by online users. To get a high quality dataset, we use the highly polar movie reviews provided by Maas et al. (2011). Based on this dataset, we construct a highly polar sentence-level style transfer dataset by the following steps: 1) fine tune a BERT (Devlin et al., 2018) classifier on original training set, which achieves 95% accuracy on test set; 2) split each review in the original dataset into several sentences; 3) filter out sentences with confidence threshold below 0.9 by our fine-tuned BERT classifier; 4) remove sentences with uncommon words. Finally, this dataset contains 366K, 4k, 2k sentences for training, validation, and testing, respectively. 4.2 Evaluation A goal transferred sentence should be a fluent, content-complete one with target style. To evaluate the performance of the different model, following previous works, we compared three different dimensions of generated samples: 1) Style control, 2) Content preservation and 3) Fluency. 4.2.1 Automatic Evaluation Style Control We measure style control automatically by evaluating the target sentiment accuracy of transferred sentences. For an accurate evaluation of style control, we trained two sentiment classifiers on the training set of Yelp and IMDb using fastText (Joulin et al., 2017). Content Preservation To measure content preservation, we calculate the BLEU score (Papineni et al., 2002) between the transferred sentence and its source input using NLTK. A higher BLEU score indicates the transferred sentence can achieve better content preservation by retaining more words from the source sentence. If a human reference is available, we will calculate the BLEU score between the transferred sentence and corresponding reference as well. 
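A per-sentence computation with NLTK might look like the following sketch; the paper does not specify its tokenization or smoothing settings, so those choices are assumptions.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1  # short sentences benefit from smoothing; the method choice is an assumption

def self_bleu(source, transferred):
    """BLEU of the transferred sentence against its source input."""
    return sentence_bleu([source.split()], transferred.split(), smoothing_function=smooth)

def ref_bleu(reference, transferred):
    """BLEU of the transferred sentence against a human reference."""
    return sentence_bleu([reference.split()], transferred.split(), smoothing_function=smooth)

print(self_bleu("the food is ok", "the food is delicious"))
```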
Two BLEU score metrics are referred to as self-BLEU and ref-BLEU 6003 Model Yelp IMDb ACC ref-BLEU self-BLEU PPL ACC self-BLEU PPL Input Copy 3.3 23 100 11 5.2 100 5 RetrieveOnly (Li et al., 2018) 92.9 0.4 0.7 10 N/A N/A N/A TemplateBased (Li et al., 2018) 84.2 13.7 44.1 67 N/A N/A N/A DeleteOnly (Li et al., 2018) 85.5 9.7 28.6 79 N/A N/A N/A DeleteAndRetrieve (Li et al., 2018) 88.0 10.4 29.1 61 58.7 55.4 18 ControlledGen (Hu et al., 2017) 88.9 14.3 45.7 201 93.9 62.1 58 CrossAlignment (Shen et al., 2017) 76.3 4.3 13.2 90 N/A N/A N/A MultiDecoder (Fu et al., 2018) 49.9 9.2 37.9 127 N/A N/A N/A CycleRL(Xu et al., 2018) 88.0 2.8 7.2 204 97.6 4.9 246 Ours (Conditional) 93.6 17.1 45.3 78 86.8 66.2 38 Ours (Multi-Class) 87.6 20.3 54.9 50 79.7 70.5 29 Table 2: Automatic evaluation results on Yelp and IMDb datset respectively. Fluency Fluency is measured by the perplexity of the transferred sentence, and we trained a 5-gram language model on the training set of two datasets using KenLM (Heafield, 2011). 4.2.2 Human Evaluation Due to the lack of parallel data in style transfer area, automatic metrics are insufficient to evaluate the quality of the transferred sentence. Therefore we also conduct human evaluation experiments on two datasets. We randomly select 100 source sentences (50 for each sentiment) from each test set for human evaluation. For each review, one source input and three anonymous transferred samples are shown to a reviewer. And the reviewer is asked to choose the best sentence for style control, content preservation, and fluency respectively. • Which sentence has the most opposite sentiment toward the source sentence? • Which sentence retains most content from the source sentence? • Which sentence is the most fluent one? To avoid interference from similar or same generated sentences, ”no preference.” is also an option answer to these questions. 4.3 Training Details In all of the experiment, for the encoder, decoder, and discriminator, we all use 4-layer Transformer with four attention heads in each layer. The hidden size, embedding size, and positional encoding size in Transformer are all 256 dimensions. Another embedding matrix with 256 hidden units is used to represent different style, which is feed into encoder as an extra token of the input sentence. And the positional encoding isn’t used for the style token. For the discriminator, similar to Radford et al. (2018) and Devlin et al. (2018), we further add a <cls> token to the input, and the output vector of the corresponding position is feed into a softmax classifier which represents the output of discriminator. In the experiment, we also found that preforming random word dropout for the input sentence when computing the self reconstruction loss (Eq. (6)) can help model more easily to converge to a reasonable performance. On the other hand, by adding a temperature parameter to the softmax layer (Eq. (2)) and using a sophisticated temperature decay schedule can also help the model to get a better result in some case. 4.4 Experimental Results Results using automatic metrics are presented in Table 2. Comparing to previous approaches, our models achieve competitive performance overall and get better content preservation at all of two datasets. Our conditional model can achieve a better style controlling compared to the multi-class model. Both our models are able to generate sentences with relatively low perplexity. 
For those previous models performing the best on a single metric, an obvious drawback can always be found on another metric. For the human evaluation, we choose two of the most well-performed models according to the automatic evaluation results as competitors: DeleteAndRetrieve (DAR) (Li et al., 2018) and 6004 Model Yelp IMDb Style Content Fluency Style Content Fluency CtrlGen 16.8 23.6 17.7 30.0 19.5 22.0 DAR 13.6 15.5 21.4 21.0 27.0 25.0 Ours 48.6 36.8 41.4 29.5 35.0 31.5 No Preference 20.9 24.1 19.5 19.5 18.5 21.5 Table 3: Human evaluation results on two datasets. Each cell indicates the proportion of being preferred. Controlled Generation (CtrlGen) (Hu et al., 2017). And the generated outputs from multi-class discriminator model is used as our final model. We have performed over 400 human evaluation reviews. Results are presented in Table 3. The human evaluation results are mainly conformed with our automatic evaluation results. And it also shows that our models are better in content preservation, compared to two competitor model. Finally, to better understand the characteristic of different models, we sampled several output sentences from the Yelp dataset, which are shown in Table 4. 4.5 Ablation Study To study the impact of different components on overall performance, we further did an ablation study for our model on Yelp dataset, and results are reported in Table 5. For better understanding the role of different loss functions, we disable each loss function by turns and retrain our model with the same setting for the rest of hyperparameters. After we disable self-reconstruction loss (Eq. (6)), our model failed to learn a meaningful output and only learned to generate a single word for any combination of input sentence and style. However, when we don’t use cycle reconstruction loss (Eq. (7)), it’s also possible to train the model successfully, and both of two models converge to reasonable performance. And comparing to the full model, there is a small improvement in style accuracy, but a significant drop in BLEU score. As our expected, the cycle reconstruction loss is able to encourage the model to preserve the information from the input sentence. At last, when the discriminator loss (Eq. (8) and (9)) is not used, the model quickly degenerates to a model which is only copying the input sentence to output without any style modification. This behaviour also conforms with our intuition. If the model is only asked to minimize the self-reconstruction loss and cycle reconstruction loss, directly copying input is one of the optimum solutions which is the easiest to achieve. In summary, each of these loss plays an important role in the Style Transformer training stage: 1) the self-reconstruction loss guides the model to generate readable natural language sentence. 2) the cycle reconstruction loss encourages the model to preserve the information in the source sentence. 3) the discriminator provides style supervision to help the model control the style of generated sentences. Another group of study is focused on the different type of samples used in the discriminator training step. In Algorithm 1, we used a mixture of real sentence x and generated sentence y as the positive training samples for the discriminator. By contrast, in the ablation study, we trained our model with only one of them. As the result shows, the generated sentence is the key component in discriminator training. 
When we remove the real sentence from the training data of discriminator, our model can also achieve a competitive result as the full model with only a small performance drop. However, if we only use the real sentence the model will lose a significant part of the ability to control the style of the generated sentence, and thus yields a bad performance in style accuracy. However, the model can still perform a style control far better than the input copy model discussed in the previous part. For the reasons above, we used a mixture of real sample and generated sample in our final version. 5 Conclusions and Future Work In this paper, we proposed the Style Transformer with a novel training algorithm for text style transfer task. Experimental results on two text style transfer datasets have shown that our model achieved a competitive or better performance compared to previous state-of-the-art approaches. Especially, because our proposed approach doesn’t assume a disentangled latent representation for manipulating the sentence style, our model can get better content preservation on both of two datasets. In the future, we are planning to adapt our Style Transformer to the multiple-attribute setting like Lample et al. (2019). On the other hand, the backtranslation technique developed in Lample et al. (2019) can also be adapted to the training process of Style Transformer. How to combine the back6005 negative to positive Input the food ’s ok , the service is among the worst i have encountered . DAR the food ’s ok , the service is among great and service among . CtrlGen the food ’s ok , the service is among the randy i have encountered . Ours the food ’s delicious , the service is among the best i have encountered . Human the food is good , and the service is one of the best i ’ve ever encountered . Input this is the worst walmart neighborhood market out of any of them . DAR walmart market is one of my favorite places in any neighborhood out of them . CtrlGen fantastic is the randy go neighborhood market out of any of them . Ours this is the best walmart neighborhood market out of any of them . Human this is the best walmart out of all of them . Input always rude in their tone and always have shitty customer service ! DAR i always enjoy going in always their kristen and always have shitty customer service ! CtrlGen always good in their tone and always have shitty customer service ! Ours always nice in their tone and always have provides customer service ! Human such nice customer service , they listen to anyones concerns and assist them with it . positive to negative Input everything is fresh and so delicious ! DAR small impression was ok , but lacking i have piss stuffing night . CtrlGen everything is disgrace and so bland ! Ours everything is overcooked and so cold ! Human everything was so stale . Input these two women are professionals . DAR these two scam women are professionals . CtrlGen shame two women are unimpressive . Ours these two women are amateur . Human these two women are not professionals . Input fantastic place to see a show as every seat is a great seat ! DAR there is no reason to see a show as every seat seat ! CtrlGen unsafe place to embarrassing lazy run as every seat is lazy disappointment seat ! Ours disgusting place to see a show as every seat is a terrible seat ! Human terrible place to see a show as every seat is a horrible seat ! Table 4: Case study from Yelp dataset. 
The red words indicate good transfer; the blue words indicate bad transfer; the brown words indicate grammar error. Conditional Multi-class Model ACC BLEU PPL ACC BLEU PPL Style Transformer 93.6 17.1 78 87.6 20.3 50 - self reconstruction 50.0 0 N/A 20.7 0 N/A - cycle reconstruction 94.2 8.6 56 93.2 8.7 40 - discriminator 3.3 22.9 11 3.3 22.9 11 - real sample 89.7 17.4 75 83.8 19.4 55 - generated sample 46.3 21.6 34 35.6 22.0 33 Table 5: Model ablation study results on Yelp dataset translation with our training algorithm is also a good research direction that is worth to explore. Acknowledgment We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by National Natural Science Foundation of China (No. 61751201 and 61672162), Shanghai Municipal Science and Technology Commission (16JC1420401 and 17JC1404100), Shanghai Municipal Science and Technology Major Project(No.2018SHZDZX01)and ZJLab. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Keith Carlson, Allen Riddell, and Daniel N. Rockmore. 2017. Zero-shot style transfer in text using recurrent neural networks. CoRR, abs/1711.04731. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. CoRR, abs/1901.02860. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. 6006 Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 11– 21. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680. Kenneth Heafield. 2011. Kenlm: Faster and smaller language model queries. In Proceedings of the sixth workshop on statistical machine translation, pages 187–197. Association for Computational Linguistics. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine LearningVolume 70, pages 1587–1596. JMLR. org. Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, and Enrico Santus. 2019. Unsupervised Text Style Transfer via Iterative Matching and Translation. arXiv e-prints, page arXiv:1901.11333. Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2018. Disentangled representation learning for text style transfer. arXiv preprint arXiv:1808.04339. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Association for Computational Linguistics. Matt J. Kusner and Jos´e Miguel Hern´andez-Lobato. 2016. 
GANS for sequences of discrete elements with the gumbel-softmax distribution. CoRR, abs/1611.04051. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 5039–5049. Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and YLan Boureau. 2019. Multiple-attribute text rewriting. In International Conference on Learning Representations. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. arXiv preprint arXiv:1804.06437. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412–1421. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Igor Melnyk, C´ıcero Nogueira dos Santos, Kahini Wadhawan, Inkit Padhi, and Abhishek Kumar. 2017. Improved neural text attribute transfer with nonparallel data. CoRR, abs/1711.09395. Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W. Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 866–876. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. C´ıcero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 189–194. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. 6007 Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830–6841. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000–6010. 
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, pages 1096–1103. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Jingjing Xu, SUN Xu, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 979–988. Ye Zhang, Nan Ding, and Radu Soricut. 2018a. SHAPED: shared-private encoder-decoder for text style adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1528–1538. Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018b. Style transfer as unsupervised machine translation. CoRR, abs/1808.07894.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6008–6019 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6008 Generating Sentences from Disentangled Syntactic and Semantic Spaces Yu Bao1∗ Hao Zhou2∗ Shujian Huang1† Lei Li2 Lili Mou3 Olga Vechtomova3 Xinyu Dai1 Jiajun Chen1 1National Key Laboratory for Novel Software Technology, Nanjing University, China 2ByteDance AI Lab, Beijing, China 3University of Waterloo, Canada {baoy,huangsj,dxy,chenjj}@nlp.nju.edu.cn {zhouhao.nlp,lileilab}@bytedance.com [email protected], [email protected] Abstract Variational auto-encoders (VAEs) are widely used in natural language generation due to the regularization of the latent space. However, generating sentences from the continuous latent space does not explicitly model the syntactic information. In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces. Our proposed method explicitly models syntactic information in the VAE’s latent space by using the linearized tree sequence, leading to better performance of language generation. Additionally, the advantage of sampling in the disentangled syntactic and semantic latent spaces enables us to perform novel applications, such as the unsupervised paraphrase generation and syntaxtransfer generation. Experimental results show that our proposed model achieves similar or better performance in various tasks, compared with state-of-the-art related work. ‡ 1 Introduction Variational auto-encoders (VAEs, Kingma and Welling, 2014) are widely used in language generation tasks (Serban et al., 2017; Kusner et al., 2017; Semeniuta et al., 2017; Li et al., 2018b). VAE encodes a sentence into a probabilistic latent space, from which it learns to decode the same sentence. In addition to traditional reconstruction loss of an autoencoder, VAE employs an extra regularization term, penalizing the Kullback– Leibler (KL) divergence between the encoded posterior distribution and its prior. This property enables us to sample and generate sentences from the continuous latent space. Additionally, we can ∗Equal contributions. †Corresponding author. ‡We release the implementation and models at https:// github.com/baoy-nlp/DSS-VAE even manually manipulate the latent space, inspiring various applications such as sentence interpolation (Bowman et al., 2016) and text style transfer (Hu et al., 2017). However, the continuous latent space of VAE blends syntactic and semantic information together, without modeling the syntax explicitly. We argue that it may be not necessarily the best in the text generation scenario. Recently, researchers have shown that explicitly syntactic modeling improves the generation quality in sequence-tosequence models (Eriguchi et al., 2016; Zhou et al., 2017; Li et al., 2017; Chen et al., 2017). It is straightforward to adopt such idea in the VAE setting, since a vanilla VAE does not explicitly model the syntax. A line of studies (Kusner et al., 2017; G´omez-Bombarelli et al., 2018; Dai et al., 2018) propose to impose context-free grammars (CFGs) as hard constraints in the VAE decoder, so that they could generate syntactically valid outputs of programs, molecules, etc. However, the above approaches cannot be applied to syntactic modeling in VAE’s continuous latent space, and thus, we do not enjoy the two benefits of VAE, namely, sampling and manipulation, towards the syntax of a sentence. 
In this paper, we propose to generate sentences from a disentangled syntactic and semantic spaces of VAE (called DSS-VAE). DSS-VAE explicitly models syntax in the continuous latent space of VAE, while retaining the sampling and manipulation benefits. In particular, we introduce two continuous latent variables to capture semantics and syntax, respectively. To separate the semantic and syntactic information from each other, we borrow the adversarial approaches from the text style-transfer research (Hu et al., 2017; Fu et al., 2018; John et al., 2018), but adapt it into our scenario of syntactic modeling. We also observe that syntax and semantics are highly interwoven, 6009 and therefore further propose an adversarial reconstruction loss to regularize the syntactic and semantic spaces. Our proposed DSS-VAE takes following advantages: First, explicitly syntactic modeling in VAE’s latent space improves the quality of unconditional language generation. Experiments show that, compared with traditional VAE, DSS-VAE generates more fluent sentences (lower perplexity), while preserving more amount of encoded information (higher BLEU scores for reconstruction). Comparisons with a state-of-the-art syntactic language model (Shen et al., 2017) are also included. Second, the advantage of manipulation in the syntactic and semantic spaces of DSS-VAE provides a natural way of unsupervised paraphrase generation. If we sample a vector in the syntactic space but perform max a posterior (MAP) inference in the semantic space, we are able to generate a sentence with the same meaning but different syntax. This is known as unsupervised paraphrase generation, as no parallel corpus is needed during training. Experiments show that DSS-VAE outperforms the traditional VAE as well as a state-of-theart Metropolis-Hastings sampling approach (Miao et al., 2019) in this task. Additionally, with the disentangled syntactic and semantic latent spaces, we propose an interesting application that transfers the syntax of one sentence to another. Both qualitative and quantitative experimental results show that DSS-VAE could graft the designed syntax to another sentence under certain circumstances. 2 Related Work The variational auto-encoders (VAEs) is proposed by Kingma and Welling (2014) for image generation. Bowman et al. (2016) successfully applied VAE in the NLP domain, showing that VAE improves recurrent neural network (RNN)-based language modeling (RNN-LM, Mikolov et al., 2010); that VAE allows sentence sampling and sentence interpolation in the continuous latent space. Later, VAE is widely used in various natural language generation tasks (Gupta et al., 2018; Kusner et al., 2017; Hu et al., 2017; Deriu and Cieliebak, 2018). Syntactic language modeling, to the best of our knowledge, could be dated back to Chelba (1997). Charniak (2001) and Clark (2001) propose to utilize a top-down parsing mechanism for language modeling. Dyer et al. (2016) and Kuncoro et al. (2017) introduce the neural network to this direction. The Parsing-Reading-Predict Network (PRPN, Shen et al., 2017), which reports a state-of-the-art results on syntactic language modeling, learns a latent syntax by training with a language modeling objective. Different from their work, our approach models syntax in a continuous space, facilitating sampling and manipulation of syntax. Our work is also related to style-transfer text generation (Fu et al., 2018; Li et al., 2018a; John et al., 2018). 
In previous work, the style is usually defined by categorical features such as sentiment. We move one step forward, extending their approach to the sequence level and dealing with more complicated, non-categorical syntactic spaces. Due to the complication of syntax, we further design adversarial reconstruction losses to encourage the separation of syntax and semantics. 3 Approach In this section, we present our proposed DSS-VAE in detail. We first introduce the variational autoencoder in §3.1. Then, we describe the general architecture of DSS-VAE in §3.2, where we explain how we generate sentences from disentangled syntactic and semantic latent spaces and how we disentangle information from the two separated spaces. Model training is discussed in §3.3. 3.1 Variational Autoencoder A traditional VAE employs a probabilistic latent variable z to encode the information of a sentence x, and then decodes the original x from z. The probability of a sentence x could be computed as: p(x) = Z p(z)p(x|z) dz (1) where p(z) is the prior, and p(x|z) is given by the decoder. VAE is trained by maximizing the evidence lower bound (ELBO): log p(x) ≥ELBO = E q(z|x)  log p(x|z)  −KL q(z|x) p(z)  (2) 3.2 Proposed Method: DSS-VAE Our DSS-VAE is built upon the vanilla VAE, but extends Eqn. (1) by adopting two separate latent variables zsem and zsyn to capture semantic and 6010 syntactic information, respectively. Specifically, we assume that the probability of a sentence x in DSS-VAE could be computed as: p(x) = Z p(zsem, zsyn)p(x|zsem, zsyn) dzsem dzsyn = Z p(zsem)p(zsyn)p(x|zsem, zsyn) dzsem dzsyn where p(zsem) and p(zsyn) are the priors; both are set to be independent multivariate Gaussian N(0, I). Similar to (2), we optimize the evidence lower bound (ELBO) for training: log p(x) ≥ELBO = E q(zsem|x)q(zsyn|x)  log p(x|zsem, zsyn)  −KL q(zsem|x) p(zsem)  −KL q(zsyn|x) p(zsyn)  where q(zsem|x) and q(zsyn|x) are posteriors for the two latent variables. We further assume the variational posterior families, q(zsem|x) and q(zsyn|x), are independent, taking the form N(µsem, σ2 sem) and N(µsyn, σ2 syn), respectively, We use RNN to parameterize the posteriors (also called the encoder). Here, µsem, σsem, µsyn, and σsyn are predicted by the encoder network, described as follows. Encoding In the encoding phase, we first obtain the sentence representation rx by an RNN with the gated recurrent units (GRUs, Cho et al., 2014); then, rx is evenly split into two spaces rx = [rsem x ; rsyn x ]. For the semantic encoder, we compute the mean and variance of q(zsem|x) from rsem x as:  µsem σsem  =  W µ sem W σ sem  ReLU(Wsemrsem x + bsem) where the activation function is the rectified linear unit (ReLU, Nair and Hinton, 2010). W µ sem,W σ sem,Wsem, and bsem are the parameters of the semantic encoder. Likewise, a syntactic encoder predicts µsyn and σsyn for q(zsyn|x) in the same way, with parameters W µ syn,W σ syn,Wsyn, and bsyn. Decoding in the Training Phase We first sample from the posterior distributions by the reparameterization trick (Kingma and Welling, 2014), S NP VP (.,.) (PRP,This) (VBZ,is) NP (DT,an) (JJ,interesting) (NN,idea) Constituency parse tree Linearized representation S NP PRP /NP VP VBZ NP DT JJ NN /NP /VP . /S Figure 1: The parse tree and its linearized tree sequence of a sentence “This is an interesting idea.” obtaining sampled semantic and syntactic representations, zsem and zsyn; then, they are concatenated as z = [zsem; zsyn] and fed as the initial state of the decoder for reconstruction. 
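To make the encoding and training-time decoding steps above concrete, the following PyTorch-style sketch splits the bidirectional GRU representation into the two subspaces, predicts a Gaussian posterior for each, reparameterizes, and concatenates the two samples into the decoder's initial state. It is a minimal illustration written from the description in this section, not the authors' released code; the exact split of r_x, the use of log-variance instead of σ, and all module names are assumptions.

```python
import torch
import torch.nn as nn

class DSSVAEEncoder(nn.Module):
    """Minimal sketch of DSS-VAE encoding: r_x -> (z_sem, z_syn) -> z = [z_sem; z_syn]."""
    def __init__(self, vocab_size, emb_size=300, hidden_size=100, latent_size=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.gru = nn.GRU(emb_size, hidden_size, bidirectional=True, batch_first=True)
        # r_x is evenly split into r_sem and r_syn, each of `hidden_size` dimensions
        self.sem_hidden = nn.Linear(hidden_size, hidden_size)   # W_sem, b_sem
        self.sem_mu = nn.Linear(hidden_size, latent_size)       # W^mu_sem
        self.sem_logvar = nn.Linear(hidden_size, latent_size)   # W^sigma_sem (log-variance here)
        self.syn_hidden = nn.Linear(hidden_size, hidden_size)
        self.syn_mu = nn.Linear(hidden_size, latent_size)
        self.syn_logvar = nn.Linear(hidden_size, latent_size)

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, tokens):
        _, h_n = self.gru(self.embed(tokens))            # h_n: (2, batch, hidden_size)
        r_x = torch.cat([h_n[0], h_n[1]], dim=-1)        # sentence representation r_x
        r_sem, r_syn = torch.chunk(r_x, 2, dim=-1)       # even split into the two spaces
        h_sem = torch.relu(self.sem_hidden(r_sem))
        h_syn = torch.relu(self.syn_hidden(r_syn))
        mu_sem, logvar_sem = self.sem_mu(h_sem), self.sem_logvar(h_sem)
        mu_syn, logvar_syn = self.syn_mu(h_syn), self.syn_logvar(h_syn)
        z_sem = self.reparameterize(mu_sem, logvar_sem)
        z_syn = self.reparameterize(mu_syn, logvar_syn)
        z = torch.cat([z_sem, z_syn], dim=-1)            # initial state of the reconstruction decoder
        return z, (mu_sem, logvar_sem), (mu_syn, logvar_syn)
```

At training time, the two KL terms of the ELBO are computed separately from the two (μ, log σ²) pairs, so each subspace is pulled toward its own standard-normal prior.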
Decoding in the Test Phase The treatment depends on applications. If we would like to synthesize a sentence from scratch, both zsyn and zsem are sampled from prior. If we would like to preserve/vary semantics/syntax, max a posterior (MAP) inference or sampling could be applied in respective spaces. Details are provided in § 4. In the following part, we will introduce how syntax is modeled in our approach and how syntax and semantics are ensured to be separated. 3.2.1 Modeling Syntax by Predicting Linearized Tree Sequence While previous studies have tackled the problem of categorical sentiment modeling in the latent space (Hu et al., 2017; Fu et al., 2018), syntax is much more complicated and not finitely categorical. We propose to adopt the linearized tree sequence to explicitly model syntax in the latent space of VAE. Figure 1 shows the constituency parse tree of the sentence “This is an interesting idea.” The linearized tree sequence can be obtained by traversing the syntactic tree in a top-down order; if the node is non-terminal, we add a backtracking node (e.g., /NP) after its child nodes are traversed. We ensure that zsyn contains syntactic information by predicting the linearized tree sequence. In training, the parse tree for sentences are obtained by the ZPar1 toolkit, and serves as the groundtruth training signals; in testing, we do not need external syntactic trees. We build an RNN 1https://www.sutd.edu.sg/cmsresource/faculty/yuezhang/ zpar.html 6011 S NP .. . /S This is ... This is ... S NP .. . /S This is ... This is ... Figure 2: Overview of our DSS-VAE. Forward dashed arrows are multi-task losses; backward dashed arrows are adversarial losses. (independent of the VAE’s decoder) to predict such linearized parse trees, where each parsing token is represented by an embedding (similar to a traditional RNN decoder). Notice that, a node and its backtracking, e.g., NP and /NP, have different embeddings. The linearized tree sequence has achieved promising parsing results in a traditional constituency parsing task (Vinyals et al., 2015; Liu et al., 2018; Vaswani et al., 2017), which shows its ability of preserving syntactic information. Additionally, the linearized tree sequence works in a sequence-to-sequence fashion, so that it can be used to regularize the latent spaces. 3.2.2 Disentangling Syntax and Semantics into Different Latent Spaces Having solved the problem of syntactic modeling, we now turn to the question: how could we disentangle syntax and semantics from each other? We are inspired by the research in text style transfer and apply auxiliary losses to regularize the latent space (Hu et al., 2017; Fu et al., 2018). In particular, we adopt the multi-task and adversarial losses in John et al. (2018), but extend it to the sequence level. In §3.2.3, we further propose two adversarial reconstruction losses to discourage the model to encode a sentence from a single subspace. Multi-Task Loss Intuitively, a multi-task loss ensures that each space (zsyn or zsem) should capture respective information. For the semantic space, we predict the bag-ofwords (BoW) distribution of a sentence from zsem with softmax, whose objective is the cross-entropy loss against the groundtruth distribution t, given by: L(mul) sem = − X w∈V tw log p(w|zsem) (3) where p(w|zsyn) is the predicted distribution. BoW has been explored by previous work (Weng et al., 2017; John et al., 2018), showing good ability of preserving semantics. 
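The syntactic side of the multi-task loss, described next, asks an auxiliary RNN to predict exactly the linearized tree sequence of §3.2.1 from zsyn. To make that prediction target concrete, here is a small, self-contained sketch of the top-down linearization. The nested (label, children) tree encoding is my own convention, and the decision to drop terminal words and to leave POS pre-terminals without backtracking tokens is inferred from the example in Figure 1 rather than stated explicitly in the text.

```python
def linearize(tree):
    """Linearize a constituency tree into the token sequence used as the syntactic target.

    A node is a (label, children) pair; a pre-terminal has a single string child (the word).
    Phrase-level nodes are closed with a backtracking token "/LABEL"; words are dropped.
    """
    label, children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [label]                       # pre-terminal: keep the POS tag, drop the word
    tokens = [label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append("/" + label)               # backtracking node after all children are traversed
    return tokens


tree = ("S", [("NP", [("PRP", ["This"])]),
              ("VP", [("VBZ", ["is"]),
                      ("NP", [("DT", ["an"]), ("JJ", ["interesting"]), ("NN", ["idea"])])]),
              (".", ["."])])
print(" ".join(linearize(tree)))
# -> S NP PRP /NP VP VBZ NP DT JJ NN /NP /VP . /S   (the sequence shown in Figure 1)
```

In the model, each token of this sequence, including the backtracking tokens, carries its own embedding and is predicted step by step by the syntactic RNN conditioned on zsyn.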
For the syntactic space, the multi-task loss trains a model to predict syntax on zsyn. Due to our proposal in §3.2.1, we could build a dedicated RNN, predicting the tokens in the linearized parse tree sequence, whose loss is: L(mul) syn = − Xn i=1 log p(si|s1 · · · si−1, zsyn) (4) where si is a token in the linearized parse tree (with a total length of n). Adversarial Loss The adversarial loss is widely used for aligning samples from different distributions. It has various applications, including style transfer (Hu et al., 2017; Fu et al., 2018; John et al., 2018) and domain adaptation (Tzeng et al., 2017). To apply adversarial losses, we add extra model components (known as adversaries) to predict semantic information tw based on the syntactic space zsyn, but to predict syntactic information s1 · · · sn−1 based on the semantic space zsem. They are denoted by padv(w|zsyn) and padv(si|s1 · · · si−1, zsem). The training of these adversaries are similar to (3) and (4), except that the gradient only trains the adversaries themselves, and does not backpropagate to VAE. Then, VAE is trained to “fool” the adversaries by maximizing their losses, i.e., minimizing the following terms: L(adv) sem = X w∈V tw log padv(w|zsyn) (5) L(adv) syn = Xn i=1 log padv(si|s1 · · · si−1, zsem) (6) In this phase, the adversaries are fixed and their parameters are not updated. 3.2.3 Adversarial Reconstruction Loss Our next intuition is that syntax and semantics are more interwoven to each other than other information such as style and content. Suppose, for example, the syntax and semantics have been perfectly separated by the losses in 6012 §3.2.2, where zsem could predict BoW well, but does not contain any information about the syntactic tree. Even in this ideal case, the decoder can reconstruct the original sentence from zsem by simply learning to re-order words (as zsem does contain BoW). Such word re-ordering knowledge is indeed learnable (Ma et al., 2018), and does not necessarily contain the syntactic information. Therefore, the multi-task and adversarial losses for syntax and semantics do not suffice to regularize DSS-VAE. We now propose an adversarial reconstruction loss to discourage the sentence being predicted by a single subspace zsyn or zsem. When combined, however, they should provide a holistic view of the entire sentence. Formally, let zs be a latent variable (zs = zsyn or zsem). A decoding adversary is trained to predict the sentence based on zs, denoted by prec(xi|x1 · · · xi−1, zs). Then, the adversarial reconstruction loss is imposed by minimizing L(adv) rec (zs) = XM i=1 log prec(xi|x<i, zs) (7) Such adversarial reconstruction loss is applied to both the syntactic and semantic spaces, shown by black bashed arrows in Figure 2. 3.3 Training Details Overall Training Objective The overall training loss is a combination of the VAE loss (2), the multi-task and adversarial losses for syntax and semantics (3–6), as well as the adversarial reconstruction losses (7), , i.e., minimizing L = Lvae + Laux = − E q(zsem|x)q(zsyn|x) log  p(x|zsem, zsyn)  + λKL sem KL q(zsem|x) p(zsem)  + λKL syn KL q(zsyn|x) p(zsyn)  + λmul semL(mul) sem + λadv semL(adv) sem + λrec semL(adv) rec (zsem) + λmul syn L(mul) syn + λadv synL(adv) syn + λrec synL(adv) rec (zsyn) (8) where the λKL sem, λKL syn, λmul sem, λadv sem, λrec sem, λmul syn , λadv syn, and λrec syn are the hyperparameters to adjust the importance of each loss in overall objective. 
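The gradient routing described above — adversaries trained on the latent codes without updating the VAE, and the VAE trained to fool frozen adversaries — is easy to get wrong in an implementation, so a hedged PyTorch-style sketch of the bag-of-words adversary on zsyn (Eq. 5) may help; the syntactic and reconstruction adversaries follow the same pattern, and the final function assembles the weighted objective of Eq. (8). Function names, the optimizer wiring, and the dictionary of λ weights are assumptions for illustration, not the authors' code.

```python
import torch.nn.functional as F

def bow_cross_entropy(logits, bow_target):
    """Cross-entropy of the bag-of-words target distribution t against the prediction
    (the form shared by the multi-task loss of Eq. 3 and the adversary of Eq. 5)."""
    return -(bow_target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

def adversary_update(bow_adversary, adv_optimizer, z_syn, bow_target):
    """Adversary phase: z_syn is detached, so this gradient never reaches the VAE encoder."""
    loss = bow_cross_entropy(bow_adversary(z_syn.detach()), bow_target)
    adv_optimizer.zero_grad()
    loss.backward()
    adv_optimizer.step()
    return loss.item()

def vae_adversarial_term(bow_adversary, z_syn, bow_target):
    """VAE phase: the adversary's parameters are not in the VAE optimizer, so only the
    encoder (through z_syn) receives a useful gradient; minimizing this term raises the
    adversary's cross-entropy, i.e. the VAE learns to fool it (the L^adv_sem term)."""
    return -bow_cross_entropy(bow_adversary(z_syn), bow_target)

def total_loss(rec, kl_sem, kl_syn, mul_sem, mul_syn,
               adv_sem, adv_syn, advrec_sem, advrec_syn, lam):
    """Weighted combination of Eq. (8); `lam` maps each lambda name to its value."""
    return (rec
            + lam["kl_sem"] * kl_sem + lam["kl_syn"] * kl_syn
            + lam["mul_sem"] * mul_sem + lam["adv_sem"] * adv_sem + lam["rec_sem"] * advrec_sem
            + lam["mul_syn"] * mul_syn + lam["adv_syn"] * adv_syn + lam["rec_syn"] * advrec_syn)
```

Each training iteration thus alternates between an adversary phase and a VAE phase, mirroring the "fix the adversaries, then fool them" schedule described above.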
Hyperparameter Tuning We select the parameter values with the lowest ELBO value on the validation set in all experiments. They are tuned by (grouped) grid search on the validation set, but due to the large hyperparameter space, we conduct tuning mostly for sensitive hyperparameters and admit that it is empirical. We choose the VAE as our baseline, and the KL weight of VAE is tuned in the same way. We list the hyperparameters in Appendix A. The training objective is optimized by Adam (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.995, and the initial learning rate is 0.001. Word embeddings are 300-dimensional and initialized randomly. The dimension of each latent space (namely, zsyn and zsem) is 100. KL Annealing and Word Dropout We adopt the tricks of KL annealing and word dropout from Bowman et al. (2016) to avoid KL collapse. We anneal λKL syn and λKL syn from zero to predefined values in a sigmoid manner. Besides, the word dropout trick randomly replaces the ground-truth token with <unk> with a fixed probability of 0.50 at each time step of the decoder during training. 4 Experiments We evaluate our method on reconstruction and unconditional language generation (§4.1). Then, we apply it two applications, namely, unsupervised paraphrase generation (§4.2) and syntax-transfer generation (§4.3). 4.1 Reconstruction and Unconditional Language Generation First, we compare our model in reconstruction and unconditional language generation with a traditional VAE and a syntactic language model (PRPN, Shen et al., 2017). Dataset We followed previous work (Bowman et al., 2016) and used a standard benchmark, the WSJ sections in the Penn Treebank (PTB) (Marcus et al., 1993). We also followed the standard split: Sections 2–21 for training, Section 24 for validation, and Section 23 for test. Settings We trained VAE and DSS-VAE, both with 100-dimensional RNN states. For the vocabulary, we chose 30k most frequent words. We trained PRPN with the default parameter in the code base.2 Evaluation We evaluate model performance with the following metrics: 2https://github.com/yikangshen/PRPN 6013 KL-Weight BLEU↑ Forward PPL↓ 1.3 7.26 34.01 1.2 7.41 35.00 1.0 8.19 36.53 0.7 8.98 42.44 0.5 9.07 44.11 0.3 9.26 48.70 0.1 9.36 49.73 Table 1: BLEU and Forward PPL of VAE with varying KL weights on the PTB test set. The larger↑(or lower↓), the better. 1. Reconstruction BLEU. The reconstruction task aims to generate the input sentence itself. In the task, both syntactic and semantic vectors are chosen as the predicted mean of the encoded distribution. We evaluate the reconstruction performance by the BLEU score (Papineni et al., 2002) with input as the reference.3 It reflects how well the model could preserve input information, and is crucial for representation learning and “goal-oriented” text generation. 2. Forward PPL. We then perform unconditioned generation, where both syntactic and semantic vectors are sampled from prior. Forward perplexity (PPL) (Zhao et al., 2018) is the generated sentences’ perplexity score predicted by a pertained language model.4 It shows the fluency of generated sentences from VAE’s prior. We computed Forward PPL based on 100K sampled sentences. 3. Reverse PPL. Unconditioned generation is further evaluated by Reverse PPL (Zhao et al., 2018). It is obtained by first training a language model5 on 100K sampled sentences from a generation model; then, Reverse PPL is the perplexity of the PTB test sets with the trained language model. 
Reverse PPL evaluates the diversity and fluency of sampled sentences from a language generation model. If sampled sentences are of low diversity, the language model would be trained only on similar sentences; if the sampled sentences are of low fluency, the language model would 3We evaluate the corpus BLEU implemented in https:// www.nltk.org/ modules/nltk/translate/bleu score.html 4We used an LSTM language model trained on the One-Billion-Word Corpus (http://www.statmt.org/ lm-benchmark). 5Tied LSTM-LM with 300 dimensions and two layers, implemented in https://github.com/pytorch/examples/ tree/master/word language model 47.33 8.98 45.6 9.6 49.73 9.36 49.79 11.09 syntax-VAE BLEU 7 8 9 10 11 12 Forward-PPL 31 36 41 46 51 VAE DSS-VAE !1 Figure 3: Comparing DSS-VAE and VAE in language generation with different KL weight. We performed linear regression for each model to show the trend. The upper-left corner (larger BLEU but smaller PPL) indicates a better performance. be trained on unfluent sentences. Both will lead to higher Reverse PPL. For comparing VAE and DSS-VAE, we sample latent variables from the prior, and feed them to the decoder for generation; for LSTM-LM, we first feed the start sentence token <s> to the decoder, and sample the word at each time step by predicted probabilities (i.e., forward sampling). Results We see in Table 1 that BLEU and PPL are more or less contradictory. Usually, a smaller KL weight makes the autoencoder less “variational” but more “deterministic,” leading to less fluent sampled sentences but better reconstruction. If the trade-off is not analyzed explicitly, the VAE variant could have arbitrary results based on KLweight tuning, which is unfair. We therefore present the scatter plot in Figure 3, showing the trend of forward PPL and BLEU scores with different KL weights. Clearly, DSSVAE outperforms a plain VAE in BLEU if Forward PPL is controlled, and in Forward PPL if BLEU is controlled. The scatter plot shows that our proposed DSS-VAE outperforms the original counterpart in language generation with different KL weights. In terms of Reverse PPL (Table 2), DSS-VAE also achieves better Reverse PPL than a traditional VAE. Since DSS-VAE leverages syntax to improve the sentence generation, we also include a state-of-the-art syntactic language model (PRPNLM, Shen et al., 2017) for comparison. Results show that DSS-VAE has achieved a Reverse PPL comparable to (and slightly better than) 6014 Model Reverse PPL↓ Real data 70.76 LSTM-LM 132.46 PRPN-LM 116.67 VAE 125.86 DSS-VAE 116.23 Table 2: Reverse PPL reflect the diversity and fluency of sampling data, the lower↓, the better. Training on the model sampled and evaluated on the real test set. We set the same KL weight for DSS-VAE and VAE here.(KL weight=1.0) PRPN-LM. It is also seen that explicitly modeling syntactic structures does yield better generation results—DSS-VAE and PRPN consistently outperform VAE and LSTM-LM in sentence generation. We also include the Reverse PPL of the real training sentences. As expected, training a language model on real data outperforms training on sampled sentences from a generation model, showing that there is still much room for improvement for all current sentence generators. 4.2 Unsupervised Paraphrase Generation Given an input sentence, paraphrase generation aims to synthesize a sentence that appears different from the input, but conveys the same meaning. We propose a novel approach to unsupervised paraphrase generation with DSS-VAE. 
Suppose a DSS-VAE is well trained according to §3.3, our approach works in the inference stage. For a particular input sentence x∗, let q(zsyn|x∗) and q(zsem|x∗) be the encoded posterior distributions of the syntactic and semantic spaces, respectively. The inferred latent vectors are: z∗ sem = argmaxzsem q(zsem|x∗) (9) z∗ syn ∼q(zsyn|x∗) (10) and are further combined as: z∗=  z∗ syn; z∗ sem  (11) Finally, z∗is fed to the decoder and perform a greedy decoding for paraphrase generation. The intuition behind is that, when generating the paraphrase, semantics should remain the same, but the syntax of a paraphrase could (and should) vary. Therefore, we sample a z∗ syn vector from its probabilistic distribution, while fixing z∗ sem. Model BLEU-ref↑ BLEU-ori↓ Origin Sentence† 30.49 100 VAE-SVG-eq (supervised)‡ 22.90 – VAE (unsupervised)† 9.25 27.23 CGMH† 18.85 50.18 DSS-VAE 20.54 52.77 Table 3: Performance of paraphrase generation. The larger↑(or lower↓), the better. Some results are quoted from †Miao et al. (2019) and ‡Gupta et al. (2018). Dataset We used the established Quora dataset6 to evaluate paraphrase generation, following previous work (Miao et al., 2019). The dataset contains 140k pairs of paraphrase sentences and 260k pairs of non-paraphrase sentences. In the standard dataset split, there are 3k and 30k held-out validation and test sets, respectively. In this experiment, we consider the unsupervised setting as Miao et al. (2019), using all non-paraphrase sentences as training samples. It is also noted that we only valid our model on the non-paraphrase held-out validation set by selecting with the lowest validation ELBO. Evaluation Since the test set contains a reference paraphrase for each input, it is straightforward to compute the BLEU against the reference, denoted by BLEU-ref. However, this metric alone does not model whether the generated sentence is different from the input, and thus, Miao et al. (2019) propose to measure this by computing BLEU against the original sentence (denoted as BLEU-ori), which ideally should be low. We only consider the DSS-VAE that yields a BLEUori lower than 55, which is empirically suggested by Miao et al. (2019) that ensures the obtained sentence is different from the original to at least a certain degree. Results Table 3 shows the performance of unsupervised paraphrase generation. In the first row of Table 3, simply copying the original sentences yields the highest BLEU-ref, but is meaningless as it has a BLEU-ori score of 100. We see that DSS-VAE outperforms the CGMH and the original VAE in BLEU-ref. Especially, DSS-VAE achieves a closer BLEU-ref compared with supervised paraphrase methods (Gupta et al., 2018). We admit that it is hard to present the trade-off by listing a single score for each model in the Table 3. We therefore have the scatter plot in Fig6https://www.kaggle.com/c/quora-question-pairs/data 6015 BLEU-ref 8 11.5 15 18.5 22 BLEU-ori 20 25 30 35 40 45 50 55 50.18, 18.85 27.23, 9.25 DSS-VAE VAE CGMH !1 Figure 4: Trade-off between BLEU-ori (the lower, the better) and BLEU-ref (the larger, the better) in unsupervised paraphrase generation. Again, the upper-left corner indicates a better performance. ure 4 to further compare these methods. As seen, the trade-off is pretty linear and less noisy compared with Figure 3. It is seen that the line of DSS-VAE is located to the upper-left of the competing methods. 
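Concretely, Eqs. (9)–(11) amount to taking the posterior mean in the semantic space (the argmax of a Gaussian is its mean) while drawing a fresh sample in the syntactic space. A minimal sketch of this inference step, reusing the interface of the earlier encoder sketch and assuming a hypothetical `decoder.generate` greedy-decoding helper:

```python
import torch

@torch.no_grad()
def paraphrase(encoder, decoder, tokens, max_len=40):
    """Unsupervised paraphrase generation at inference time:
    MAP for semantics (Eq. 9), sampling for syntax (Eq. 10), then greedy decoding (Eq. 11)."""
    _, (mu_sem, _), (mu_syn, logvar_syn) = encoder(tokens)
    z_sem = mu_sem                                            # argmax of q(z_sem | x*) is its mean
    z_syn = mu_syn + torch.exp(0.5 * logvar_syn) * torch.randn_like(mu_syn)  # sample q(z_syn | x*)
    z = torch.cat([z_sem, z_syn], dim=-1)   # concatenated in the same order as at training time
    return decoder.generate(z, max_len=max_len)               # hypothetical greedy decoder
```

Because only z_syn varies between runs, repeating the syntactic sampling yields different surface realizations of the same meaning.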
In other words, the plain VAE and CGMH are “inadmissible,” meaning that DSSVAE simultaneously outperforms them in both BLEU-ori and BLEU-ref, indicating that DSSVAE outperforms previous state-of-the-art methods in unsupervised paraphrase generation. 4.3 Syntax-Transfer Generation In this experiment, we propose a novel application of syntax-transfer text generation, inspired by previous sentiment-style transfer studies (Hu et al., 2017; Fu et al., 2018; John et al., 2018). Consider two sentences: x1: There is a dog behind the door. x2: The child is playing in the garden. If we would like to generate a sentence having the syntax of “there is/are” as x1 but conveying the meaning of x2, we could graft the respective syntactic and semantic vectors as: z∗ sem = argmaxzsem q(zsem|x2) z∗ syn = argmaxzsyn q(zsyn|x1) z =  z∗ sem; z∗ syn  and then feed z to the decoder to obtain a syntaxtransferred sentence. Dataset and Evaluation To evaluate this task, we constructed a subset of the Stanford Natural Language Inference (SNLI), containing 1000 nonparaphrase pairs. SNLI sentences can be thought of as a simple domain-specific corpus, but were all written by humans. In each pair we constructed, one sentence serves as the semantic provider (denoted by Refsem), and the other serves as the syntactic provider (denoted by Refsyn). The goal of syntax-transfer text generation is to synthesize a sentence that resembles Refsem but not Refsyn in semantics, and resembles Refsyn but not Refsem in syntax. For the semantic part, we use the traditional word-based BLEU scores to evaluate how the generated sentence is close to Refsem but different from Refsyn. For syntactic similarity, we use the zss package7 to calculate the Tree Edit Distance (TED, Zhang and Shasha, 1989). TED is essentially the minimum-cost sequence of node edit operations (namely, delete, insert, and rename) between two trees, which reflects the difference of two syntactic trees. Since we hope the generated sentence has a higher word-BLEU score compared with Refsem but a lower word-BLEU score compared with Refsyn, we compute their difference, denoted by ∆word-BLEU, to consider both. Likewise, ∆TED is also computed. We further take the geometric mean of ∆word-BLEU and ∆TED to take both into account. Results We see from Table 4 that a traditional VAE cannot accomplish the task of syntax transfer. This is because Refsyn and Refsem—even if we artificially split the latent space into two parts—play the same role in the decoder. With the multi-task and adversarial losses for syntactic and semantic latent spaces, the total difference is increased by 12.09, which shows the success of syntax-transfer sentence generation. This further implies that explicitly modeling syntax is feasible in the latent space of VAE. We incrementally applied the adversarial reconstruction loss, proposed in § 3.2.3. As seen, an adversarial reconstruction loss drastically strengthens the role of the other space. For example, +L(adv) rec (zsem) repels information to the syntactic space and achieves the highest ∆TED. When applying the adversarial reconstruction losses to both semantic and syntactic spaces, we have a balance between ∆word-BLEU and ∆TED, both ranking second in the respective columns. 
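For reproducibility, the evaluation just described can be scripted with off-the-shelf tools: NLTK for the word-level BLEU scores and the zss package for the Zhang–Shasha tree edit distance. The sketch below computes the two differences for a single example plus a geometric mean; note that the paper reports corpus-level word-BLEU and per-sentence average TED, whereas this sketch works sentence by sentence, and reading "Geo Mean ∆" as the square root of ∆word-BLEU × ∆TED is my reconstruction from Table 4 rather than a quoted formula.

```python
import math
from nltk.translate.bleu_score import sentence_bleu
from zss import simple_distance          # Zhang-Shasha tree edit distance

def syntax_transfer_scores(hyp_tokens, ref_sem_tokens, ref_syn_tokens,
                           hyp_tree, ref_sem_tree, ref_syn_tree):
    """A good transfer is lexically close to Ref_sem (not Ref_syn) and syntactically
    far from Ref_sem (close to Ref_syn): both deltas should be large."""
    delta_bleu = (sentence_bleu([ref_sem_tokens], hyp_tokens)
                  - sentence_bleu([ref_syn_tokens], hyp_tokens))
    # trees are zss.Node objects built from the constituency parses
    delta_ted = (simple_distance(hyp_tree, ref_sem_tree)
                 - simple_distance(hyp_tree, ref_syn_tree))
    geo_mean = math.sqrt(max(delta_bleu, 0.0) * max(delta_ted, 0.0))
    return delta_bleu, delta_ted, geo_mean
```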
Eventually, we achieve the highest total difference, showing that our full DSS-VAE model 7https://github.com/timtadh/zhang-shasha 6016 Model word-BLEU (corpus) ∆word-BLEU↑ Average TED (per sentence) ∆TED↑ Geo Mean ∆↑ Refsem↑ Refsyn↓ Refsem↑ Refsyn↓ VAE 6.81 6.68 0.13 149.22 148.59 0.63 0.29 L(mul) sem + L(mul) syn + L(adv) sem + L(adv) syn 12.14 6.22 5.92 159.51 134.80 24.71 12.09 +L(adv) rec (zsem) 11.83 6.60 5.23 163.40 131.27 32.13 12.96 +L(adv) rec (zsyn) 14.33 6.07 8.26 159.20 134.22 24.98 14.36 +L(adv) rec (zsyn) + L(adv) rec (zsem) 13.74 6.15 7.59 161.94 131.09 30.85 15.30 Table 4: Performance of syntax-transfer generation. The larger↑(or lower↓), the better. The results of VAE are obtained by averaging interpolation. ∆word-BLEU = word-BLEU(Refsem)−word-BLEU(Refsyn). We also compute the difference as ∆TED = TED(Refsem) −TED(Refsyn) to measure if the generated sentence is syntactically similar to Refsyn but not Refsem. Due to the difference of scale between BLEU and TED, we compute the geometric mean of ∆word-BLEU and ∆TED reflect the total differences. achieves the best performance of syntax-transfer generation. Discussion on syntax transfer between incompatible sentences We provide a few case studies of syntax-transfer generation in Appendix B. We empirically find that the syntactic transfer between “compatible” sentences give more promising results than transfer between “incompatible” sentences. Intuitively, this is reasonable because it may be hard to transfer a sentence with a length of 5, say, to a sentence with a length of 50. 5 Conclusion In this paper, we propose a novel DSS-VAE model, which explicitly models syntax in the distributed latent space of VAE and enjoys the benefits of sampling and manipulation in terms of the syntax of a sentence. Experiments show that DSS-VAE outperforms the VAE baseline in reconstruction and unconditioned language generation. We further make use of the sampling and manipulation advantages of DSS-VAE in two novel applications, namely unsupervised paraphrase and syntax-transfer generation. In both experiments, DSS-VAE achieves promising results. Acknowledgments We would like to thank the anonymous reviewers for their insightful comments. This work is supported by the National Science Foundation of China (No. 61772261 and No. 61672277) and the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074). References Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL, pages 10–21. Eugene Charniak. 2001. Immediate-head parsing for language models. In ACL, pages 124–131. Ciprian Chelba. 1997. A structured language model. In ACL, pages 498–500. Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017. Improved neural machine translation with a syntax-aware encoder and decoder. In ACL, pages 1936–1945. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP, pages 1724–1734. Alexander Clark. 2001. Unsupervised induction of stochastic context-free grammars using distributional clustering. In Proceedings of the ACL 2001 Workshop on Computational Natural Language Learning. Hanjun Dai, Yingtao Tian, Bo Dai, Steven Skiena, and Le Song. 2018. Syntax-directed variational autoencoder for structured data. In ICLR. Jan Milan Deriu and Mark Cieliebak. 2018. 
Syntactic manipulation for generating more diverse and interesting texts. In Proceedings of the 11th International Conference on Natural Language Generation, pages 22–34. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. In NAACL, pages 199–209. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In ACL, pages 823–833. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In AAAI, pages 663–670. 6017 Rafael G´omez-Bombarelli, Jennifer N Wei, David Duvenaud, Jos´e Miguel Hern´andez-Lobato, Benjam´ın S´anchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Al´an Aspuru-Guzik. 2018. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 4(2):268–276. Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In AAAI, pages 5149–5156. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In ICML, pages 1587– 1596. Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2018. Disentangled representation learning for text style transfer. arXiv preprint arXiv:1808.04339. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Diederik P Kingma and Max Welling. 2014. Autoencoding variational Bayes. In ICLR. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A Smith. 2017. What do recurrent neural network grammars learn about syntax? In EACL, pages 1249–1258. Matt J. Kusner, Brooks Paige, and Jos´e Miguel Hern´andez-Lobato. 2017. Grammar variational autoencoder. In ICML, pages 1945–1954. Juncen Li, Robin Jia, He He, and Percy Liang. 2018a. Delete, Retrieve, Generate: a simple approach to sentiment and style transfer. In ACL, pages 1865– 1874. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In ACL, pages 688–697. Juntao Li, Yan Song, Haisong Zhang, Dongmin Chen, Shuming Shi, Dongyan Zhao, and Rui Yan. 2018b. Generating classical chinese poems via conditional variational autoencoder and adversarial training. In EMNLP, pages 3890–3900. Lemao Liu, Muhua Zhu, and Shuming Shi. 2018. Improving sequence-to-sequence constituency parsing. In AAAI, pages 4873–4880. Shuming Ma, Xu Sun, Yizhong Wang, and Junyang Lin. 2018. Bag-of-words as target for neural machine translation. In ACL, pages 332–338. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The Penn Treebank. Computational linguistics, 19(2):313–330. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In AAAI. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In ICML, pages 807–814. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In ACL, pages 311–318. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. 
A hybrid convolutional variational autoencoder for text generation. In EMNLP, pages 627–637. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2017. Neural language modeling by jointly learning syntax and lexicon. In ICLR. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In CVPR, pages 7167–7176. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998–6008. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In NIPS, pages 2773– 2781. Rongxiang Weng, Shujian Huang, Zaixiang Zheng, XIN-YU DAI, and CHEN Jiajun. 2017. Neural machine translation with word predictions. In EMNLP, pages 136–145. Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM journal on computing, 18(6):1245–1262. Junbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. 2018. Adversarially regularized autoencoders. In ICML, pages 5897– 5906. Hao Zhou, Zhaopeng Tu, Shujian Huang, Xiaohua Liu, Hang Li, and Jiajun Chen. 2017. Chunk-based biscale decoder for neural machine translation. In ACL, pages 580–586. 6018 A Hyperparameter Details We list the hyperparaemters in Tables 5 and 6. Every 500 batch, we save the model if it achieves a lower evidence lower bound (ELBO) on the validation set. B Case Study of Syntax Transfer We provide a few examples in Table 7. We see in all cases that a plain VAE “interpolates” two sentences without the consideration of syntax and semantics, whereas our DSS-VAE is able to transfer the syntax without changing the meaning much. In the first example, DSS-VAE successfully transfer a “subject-be-predicative” sentence to a “there is/are” sentence. For the second example, the semantic reference has the same syntactic structure as the syntax reference, and as a result, DSS-VAE generates the same sentence as Refsem. For the last example, we transfer a “there is/are“ sentence to a “subject-be-predicative“ sentence, and our DSSVAE is also able to generate the desired syntax. 6019 Hyper-parameters Value λKL sem 1.0 λKL syn 1.0 λmul sem 0.5 λmul syn 0.5 λadv sem 0.5 λadv syn 0.5 λrec sem 0.5 λrec syn 0.5 Batch size 32 GRU Dropout 0.1 Table 5: The hyper-parameters we used in PTB dataset Hyper-parameters Value λKL sem 1/3 λKL syn 2/3 λmul sem 5.0 λmul syn 1.0 λadv sem 0.5 λadv syn 0.5 λrec sem 1.0 λrec syn 0.05 Batch size 50 GRU Dropout 0.3 Table 6: The hyper-parameters we used in Quora dataset. Semantic and Syntactic Providers Syntax-Transfer Output Refsyn: There is an apple on the table. Refsem: The airplane is in the sky. VAE: The man is in the kitchen. DSS-VAE: There is a airplane in the sky. Refsyn: The shellfish was cooked in a wok. Refsem: The stadium was packed with people. VAE: The man was filled with people. DSS-VAE: The stadium was packed with people. Refsyn: The child is playing in the garden. Refsem: There is a dog behind the door. VAE: There is a person in the garden. DSS-VAE: A dog is walking behind the door. Table 7: Case studies of syntax transfer generation.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6020–6026 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6020 Learning to Control the Fine-grained Sentiment for Story Ending Generation Fuli Luo1⇤, Damai Dai1,3⇤, Pengcheng Yang1,2, Tianyu Liu1, Baobao Chang1,3, Zhifang Sui1,3, Xu Sun1,2 1Key Lab of Computational Linguistics, School of EECS, Peking University 2Deep Learning Lab, Beijing Institute of Big Data Research, Peking University 3Peng Cheng Laboratory, China {luofuli,daidamai,yang pc,tianyu0421,chbb,szf,xusun}@pku.edu.cn Abstract Automatic story ending generation is an interesting and challenging task in natural language generation. Previous studies are mainly limited to generate coherent, reasonable and diversified story endings, and few works focus on controlling the sentiment of story endings. This paper focuses on generating a story ending which meets the given fine-grained sentiment intensity. There are two major challenges to this task. First is the lack of story corpus which has fine-grained sentiment labels. Second is the difficulty of explicitly controlling sentiment intensity when generating endings. Therefore, we propose a generic and novel framework which consists of a sentiment analyzer and a sentimental generator, respectively addressing the two challenges. The sentiment analyzer adopts a series of methods to acquire sentiment intensities of the story dataset. The sentimental generator introduces the sentiment intensity into decoder via a Gaussian Kernel Layer to control the sentiment of the output. To the best of our knowledge, this is the first endeavor to control the fine-grained sentiment for story ending generation without manually annotating sentiment labels. Experiments show that our proposed framework can generate story endings which are not only more coherent and fluent but also able to meet the given sentiment intensity better.1 1 Introduction Story ending generation aims at completing the plot and concluding a story given a story context. Previous works mainly study on how to generate a coherent, reasonable and diversified story ending (Li et al., 2018; Guan et al., 2018; Xu et al., 2018). However, few of them focus on controllable story ending generation, especially ⇤Equal Contribution. 1Our code and data can be found at https://github. com/luofuli/sentimental-story-ending Target Sentiment Generated Story Endings 0.1 She still lost the game and was very upset. 0.3 She almost won the game, but eventually lost. 0.5 The game ended with a draw. 0.7 She eventually won the game. 0.9 She won the game and was very proud of her team. Story context: Sally really loves to play soccer. She joined a team with her friends and she plays everyday. Her coach and her teammates are all really fun. Sally practiced extra hard for her first match. Figure 1: An example of the input story context and output story endings for this task. All of the story endings are coherent with the story context but express different sentiment intensities. controlling the sentiment for story ending generation. Yao et al. (2018b) is the only work on controlling the sentiment for story ending generation. However, their work needs manually label the story dataset with sentiment labels (happy, sad, unknown), which is time-consuming and laborintensive. What’s more, they only focus on coarsegrained sentiment. 
Different from previous work, we propose the task of controlling the sentiment for story ending generation at a fine-grained level, without any human annotation of story dataset2. Take Figure 1 as an example, given the same story context, our goal is to generate a story ending that satisfies the given sentiment intensity, where 0 denotes the most negative and 1 denotes the most positive, following the setting of sentiment intensity on sentiment intensity prediction task (Abdou et al., 2018; Akhtar et al., 2018). To the proposed task, there are two major challenges. First, how to annotate story corpus with sentiment intensities. Second, how to incorporate the fine-grained sentiment control into a generative model. 2Fine-grained sentiment is equivalent to sentiment intensity in this paper. 6021 Encoder Decoder Rule-Based Regression Model Domain Adversarial 𝑆𝑆 ො𝑦𝑦 𝑥𝑥 Sentiment Analyzer Sentimental Generator 𝑦𝑦 𝑦𝑦 𝑦𝑦 Training Stage Testing Stage User Input Figure 2: The overview of the proposed framework, which consists of a sentiment analyzer and a sentimental generator. During training, the target sentiment intensity s is computed by the sentiment analyzer. During testing, users can input any sentiment intensity to control the sentiment for story ending generation. In this work, we propose a framework which consists a sentiment analyzer and a sentimental generator. To address the first challenge, the sentiment analyzer adopts three methods including an unsupervised rule-based method, a regression model, and a domain-adversarial regression model to acquire sentiment intensities of the story training corpus. To address the second challenge, the sentimental generator uses a sentiment intensity controlled sequence-to-sequence model (SICSeq2Seq) to generate a story ending which expresses the given sentiment intensity. It introduces an explicit sentiment intensity control variable into the Seq2Seq model via a Gaussian Kernel Layer to guide the generation. Experiments show the effectiveness and generality of the proposed framework, since it can generate story endings which are not only coherent and fluent but also able to better meet the given sentiment intensity. 2 Proposed Model 2.1 Overview Here we formulate the task of fine-grained sentiment controllable story ending generation. Given the story context x = (x1, · · · , xm) which consists of m sentences, and the target sentiment intensity s, the goal of this task is to generate a story ending y that is coherent to story context x and expresses the target sentiment intensity s. Note that the sentiment intensity s 2 [0, 1]. Although existing datasets for story ending generation can provide paired data (x, y), the true sentiment s of y is not observable. To remedy this, the sentiment analyzer S employs several methods to acquire the sentiment intensity s of y. Then the sentimental generator G takes the story context x and the sentiment of the story ending s as input to generate the story ending y. The overview of our proposed framework is presented in Figure 2, which is composed of two modules: a sentiment analyzer S and a sentimental generator G. The next two sections will show detailed configurations in each module. 2.2 Sentiment Analyzer The sentiment analyzer S aims to predicting the sentiment intensity s of the gold story ending y to construct paired data (x, s; y). As the first attempt to solve the proposed task, we explore three kinds of sentiment analyzers as follows. 
Rule-based (RB): VADER (Hutto and Gilbert, 2014) is an rule-based unsupervised model for sentiment analysis. We use it to extract the sentiment intensity s of y and then scale s to [0, 1]. Regression Model (RM): We first train a linear regression model R on the Stanford Sentiment Treebank (SST) (Socher et al., 2013) dataset, which is widely-used for sentiment analysis. Then we use R to acquire the sentiment intensity of y. Domain-Adversarial (DA): In the absence of sentiment annotations for the story dataset, domain adaptation can provide an effective solution since there exists some labeled datasets of a similar task but from a different domain. We use adversarial learning (Ganin and Lempitsky, 2015) to extract a domain-independent feature which not only performs well in the SST sentiment regression task but also misleads the domain discriminator. Finally, we use the adapted regression model to acquire the sentiment intensity s of y. 2.3 Sentimental Generator The sentimental generator G aims to generate story endings that match the target sentiment intensities s. It consists of an encoder and a decoder equipped with a Gaussian Kernel Layer. The encoder is to map the input story context x into a compact vector that can capture its essential context features. Specifically, we use a normal bi-directional LSTM as the encoder. All context words xi are represented by their semantic embeddings E as the input and we use the concatenation of final forward and backward hidden states as the initial hidden state of the decoder. 6022 ℎ𝑡𝑡−1 Gaussian Kernel Layer Target Sentiment Intensity Sentiment Embeddings Semantic Embeddings 𝑃𝑃y𝑡𝑡 = 𝛼𝛼𝑃𝑃𝑅𝑅𝑦𝑦𝑡𝑡 + 𝛽𝛽𝑃𝑃𝑆𝑆𝑦𝑦𝑡𝑡 𝑐𝑐𝑡𝑡 Decoder Figure 3: The decoder of the sentimental generator. A Gaussian Kernel Layer is introduced to make use of the target sentiment intensity. The decoder aims to generate a story ending which accords with the target sentiment intensity s. As shown in Figure 3, the probability of generating a target word P is composed of two probabilities: P(yt) = ↵PR(yt) + βPS(yt) (1) where PR(yt) denotes the semantic generation probability, PS(yt) denotes the sentiment generation probability, ↵and β are trainable coefficients. Specifically, PR(yt) is defined as follow: PR(yt = w) = wT (WR · hyt + bR), (2) ht = LSTM(yt−1, ht−1, ct) (3) where w is a one-hot indicator vector of word w, WR and bR are trainable parameters, ht is the t-th hidden state of the LSTM decoder with attention mechanism (Luong et al., 2015). PS(yt) measures the generation probability of the target word given the target sentiment intensity s. For all words, beyond their semantic embeddings, they also have sentiment embeddings U. The sentiment embeddings of words reflect their sentiment properties. A Gaussian Kernel Layer (Luong et al., 2015; Zhang et al., 2018) is used to encourage words with sentiment intensity near to target sentiment s, and PS(yt) is defined as follow: PS(yt = w) = 1 p 2⇡σ exp ✓ −(ΦS(Uw) −s)2 2σ2 ◆ (4) ΦS(U, w) = sigmoid(wT (U · WU + bU)) (5) where σ2 is the variance, ΦS maps the sentiment embedding into a real value, the target sentiment intensity s is the mean of the Gaussian distribution, WU and bU are trainable parameters. 3 Experiment 3.1 Dataset We choose the widely-used ROCStories corpus (Mostafazadeh et al., 2016) which consists of 100k five-sentence stories. We split the data into a training set with 93,126 stories, a validation set with 5,173 stories and a test set with 5,175 stories. 
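Before moving to the experiments, the Gaussian Kernel Layer of §2.3 is worth sketching, since it is the piece that injects the numeric intensity s into the decoder. The PyTorch-style module below follows Eqs. (1) and (4)–(5): a semantic softmax over the vocabulary mixed with a Gaussian kernel centred on s over each word's projected sentiment value. Adding an explicit softmax to Eq. (2) for normalization and treating α, β as two learned scalars are my assumptions, not details given in the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianKernelOutput(nn.Module):
    """Sketch of the sentiment-aware output layer: P(y_t) = alpha*P_R(y_t) + beta*P_S(y_t)."""
    def __init__(self, vocab_size, hidden_size=512, senti_emb_size=256, sigma=1.0):
        super().__init__()
        self.senti_emb = nn.Embedding(vocab_size, senti_emb_size)   # sentiment embeddings U
        self.senti_proj = nn.Linear(senti_emb_size, 1)              # W_U, b_U in Phi_S
        self.out_proj = nn.Linear(hidden_size, vocab_size)          # W_R, b_R in Eq. (2)
        self.sigma = sigma
        self.mix = nn.Parameter(torch.tensor([0.5, 0.5]))           # trainable alpha, beta

    def forward(self, h_t, s):
        """h_t: decoder state (batch, hidden); s: target intensity in [0, 1], shape (batch, 1)."""
        p_r = F.softmax(self.out_proj(h_t), dim=-1)                 # semantic probability P_R
        # Phi_S(U_w): one sentiment value in [0, 1] per vocabulary word
        phi = torch.sigmoid(self.senti_proj(self.senti_emb.weight)).squeeze(-1)
        p_s = torch.exp(-(phi.unsqueeze(0) - s) ** 2 / (2 * self.sigma ** 2))   # Eq. (4)
        p_s = p_s / (math.sqrt(2 * math.pi) * self.sigma)
        alpha, beta = self.mix[0], self.mix[1]
        return alpha * p_r + beta * p_s                             # Eq. (1); renormalize if needed
```

Words whose Φ_S value lies near s receive high P_S, so the decoder is nudged toward vocabulary whose learned sentiment value matches the requested intensity, without restricting s to a small set of discrete labels.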
3.2 Baselines Since there is no direct related work of this task, we design an intuitive pipeline (generate-andmodify) as baseline. It first generates a story ending using a general sequence-to-sequence model with attention (Luong et al., 2015), and then modifies the sentiment of the story ending towards the target sentiment intensity via a fine-grained sentiment modification method (Liao et al., 2018). We call this baseline Seq2Seq + SentiMod. 3.3 Experiment Settings We tune hyper-parameters on the validation set. For the RM and DA sentiment analyzer, we implement the encoder as a 3-layer bidirectional LSTM with a hidden size of 512. We implement the regression module as a MLP with 1 hidden layer of size 32. For domain adaption, we implement a domain discriminator as a MLP with 1 hidden layer of size 32. A Gradient Reversal Layer is added into the domain discriminator. For the sentimental generator, both the semantic and sentiment embeddings are 256 dimensions and randomly initialized. We implement both encoder and decoder as 1-layer bidirectional LSTM with a hidden size of 512. The variance σ2 of Gaussian Kernel Layer is set as 1. The batch size is 32 and the dropout (Srivastava et al., 2014) is 0.5. We use the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.0003. 3.4 Evaluation Metrics For the proposed task, there are no existing accepted metrics. We propose both automatic evaluation and human evaluation for this task. 3.4.1 Automatic Evaluation Sentiment Consistency: We propose the pairwise sentiment consistency (SentiCons) to evaluate the consistency of two lists of sentiment intensities. For two lists A and B with the same length, 6023 Model H-M SentiCons Rule-Based (RB) 0.936 Regression Model (RM) 0.846 Domain Adversarial (DA) 0.747 Table 1: Automatic evaluation of sentiment analyzers. SentiCons(A, B) is calculated by P 1i<jn I(AiAj^BiBj)_(Ai≥Aj^Bi≥Bj) C2n , (6) where n is the length of the list and I is the indicator function. To evaluate the performance of sentiment analyzer, we calculate SentiCons of human-annotated sentiment intensities and modelpredicted sentiment intensities of gold story endings in the test set (H-M SentiCons). To evaluate the performance of sentimental generator, for each story context in the test set, we generate five story endings with five target sentiment intensity ranging from [0, 1]. Then we calculate SentiCons of input target sentiment intensities and sentiment intensities of the outputs predicted by the best sentiment analyzer (I-O SentiCons). BLEU: For each story in the test set, we take the context x and the human-annotated sentiment intensity s of the gold story ending y as input. The corresponding output is ˆy. Then we calculate the BLEU (Papineni et al., 2002) score of y and ˆy as the overall quality of the generated story endings. 3.4.2 Human Evaluation We hire two evaluators who are skilled in English to evaluate the generated story endings. For each story in the test set, we distribute the story context, five target sentiment intensities and corresponding generated story endings to the evaluators. Evaluators are required to score the generated endings from 1 to 5 in terms of three criteria: Coherency, Fluency and Sentiment. Coherency measures whether the endings are coherent with the context. Fluency measures whether the endings are fluent. Sentiment measures how much the endings express the target sentiment intensities. 
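The SentiCons metric above is simple to compute directly; the helper below counts the fraction of index pairs whose relative order agrees between the two intensity lists. The non-strict comparisons (≤/≥) are reconstructed from the garbled rendering of Eq. (6), so treat the tie-handling as my reading of the formula.

```python
from itertools import combinations

def senticons(a, b):
    """Pairwise sentiment consistency of two equal-length intensity lists (Eq. 6)."""
    assert len(a) == len(b) and len(a) > 1
    pairs = list(combinations(range(len(a)), 2))
    agree = sum(1 for i, j in pairs
                if (a[i] <= a[j] and b[i] <= b[j]) or (a[i] >= a[j] and b[i] >= b[j]))
    return agree / len(pairs)

# Example: perfect ordering agreement despite different scales
print(senticons([0.1, 0.3, 0.5, 0.7, 0.9], [0.2, 0.4, 0.5, 0.8, 0.95]))  # -> 1.0
```

H-M SentiCons plugs in the human-annotated and model-predicted intensities of the gold endings, while I-O SentiCons plugs in the requested input intensities and the analyzer's scores of the generated endings.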
3.5 Evaluation Results Table 1 shows the automatic evaluation results of three sentiment analyzers. We find that: (1) The rule-based method RB performs the best. This accords with the fact that story endings in the ROCStories corpus are simple and have relatively obvious emotional words. (2) DA can not improve Model BLEU-1 BLEU-2 I-O SentiCons Seq2Seq + SentiMod 10.7 3.2 0.788 SIC-Seq2Seq + RB 19.3 6.3 0.879 SIC-Seq2Seq + RM 19.5 6.2 0.830 SIC-Seq2Seq + DA 19.8 6.7 0.794 Table 2: Automatic evaluation of generation models. Model Coherency Fluency Sentiment Seq2Seq + SentiMod 1.50 2.50 3.68 SIC-Seq2Seq + RB 2.65 4.75 4.09 SIC-Seq2Seq + RM 2.15 4.60 3.65 SIC-Seq2Seq + DA 2.20 4.50 3.71 Table 3: Human evaluation of generation models. the performance of sentiment analysis in our task compared to RM. We hypothesize that is because the domains of labeled SST corpus and ROCStories corpus differ too much that affects the performance of domain adaptation. The automatic and human evaluation results of four generation models are shown in Table 2 and Table 3 respectively. We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU, Coherency, and Fluency. Meanwhile, all of them can largely outperform the Seq2Seq+SentiMod baseline which does not follow our framework. Thus it shows the effectiveness of the proposed framework. (2) HM SentiCons which measures the performance of sentiment analyzer is marginally consistent with the I-O SentiCons and Sentiment which measure the performance of sentimental generator. This accords with our expectations because the sentimental generator takes the sentiment intensity predicted by the sentiment analyzer as the input signal for controlling the sentiment of the output. From a comprehensive perspective, our framework can better control the sentiment while guaranteeing the coherency and fluency. 4 Case Study We provide an example of story ending generation with five different target sentiment intensities in Table 4. This demonstrates that our proposed framework can generate more fluent and coherent story endings than the Seq2Seq + SentiMod baseline which does not follow our framework. More importantly, at the same time, our framework has better control over the sentiment tenden6024 Story Context Madison really wanted to buy a new car. She applied to work at different restaurants around town. One day a local restaurant hired her to be their new waitress! Molly worked very hard as a waitress and earned a lot of tips. Outputs Seq2Seq + SentiMod s = 0.1 Dates sangria and drinks went loved the drinks! s = 0.3 Madison was never in once some showed up. s = 0.5 Madison’s finally cut and delicious wine. s = 0.7 Madison was happy so new great hospital! s = 0.9 Tom and satisfied big meal and sweet! Outputs SIC-Seq2Seq + RB s = 0.1 Madison got in trouble for not buying the car again. s = 0.3 Madison was so embarrassed that she threw her car out. s = 0.5 Madison was able to buy her car. s = 0.7 Madison was so excited to be able to buy her car! s = 0.9 Madison was happy to have a new car and be happy with her new car! Table 4: Example outputs with five different target sentiment intensities s ranging from 0 to 1. The generated story endings of the baseline (Seq2Seq + SentiMod) are shown at the top. The generated story endings of the best proposed model (SIC-Seq2Seq + RB) are shown at the bottom. cies of generated story endings, e.g. “in trouble” ! “embarrassed” ! “able to” ! “excited” ! 
“happy” and “new car”. 5 Related Work Story generation Automatic story generation has attracted interest over the past few years. Recently, many approaches are proposed to generate a better story in terms of coherence (Jain et al., 2017; Xu et al., 2018), rationality (Li et al., 2018), topic-consistence (Yao et al., 2018a). However, most of story generation methods lack the ability to receive guidance from users to achieve a specific goal. There are only a few works focus on the controllability of story generation, especially on sentiment. Tambwekar et al. (2018) introduces a policy gradient learning approach to ensure that the model ends with a specific type of event given in advance. Yao et al. (2018b) uses manually annotated story data to control the ending valence and storyline of story generation. Different from them, our proposed framework can acquire distant sentiment labels without the dependence on the human annotations. Sentimental Text Generation Generating sentimental and emotional texts is a key step towards building intelligent and controllable natural language generation systems. To date several works of dialogue generation (Zhou et al., 2018; Huang et al., 2018; Zhou and Wang, 2018) and text sentiment transfer task (Li et al.; Luo et al., 2019) have studied on generating emotional or sentimental text. They always pre-define a binary sentiment label (positive/negative) or a small limited set of emotions, such as “anger”, “love”. Different from them, controlling the fine-grained sentiment (a numeric value) for story ending generation is not limited to several emotional labels, thus we can not embed each sentiment label into a separate vector as usual. Therefore, we propose to introduce the numeric sentiment value via a Gaussian Kernel Layer. 6 Conclusion and Future Work In this paper, we make the first endeavor to control the fine-grained sentiment for story ending generation. The proposed framework is generic and novel, and does not need any human annotation of story dataset. Experiments show the effectiveness of the proposed framework to control the sentiment intensity on both automatic evaluation and human evaluation. Future work can combine the analyzer and generator via joint training, hopefully to achieve better results. Acknowledgments This paper is supported by NSFC project 61772040 and 61876004. The contact authors are Baobao Chang and Zhifang Sui. 6025 References Mostafa Abdou, Artur Kulmizev, and Joan Gin´es i Ametll´e. 2018. Affecthor at semeval-2018 task 1: A cross-linguistic approach to sentiment intensity quantification in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT. Md. Shad Akhtar, Deepanway Ghosal, Asif Ekbal, and Pushpak Bhattacharyya. 2018. A multi-task ensemble framework for emotion, sentiment and intensity prediction. In arXiv preprint arXiv:1808.01216. Yaroslav Ganin and Victor S. Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In ICML. Jian Guan, Yansen Wang, and Minlie Huang. 2018. Story ending generation with incremental encoding and commonsense knowledge. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18. Chenyang Huang, Osmar Zaiane, Amine Trabelsi, and Nouha Dziri. 2018. Automatic dialogue generation with expressed emotions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Clayton J Hutto and Eric Gilbert. 2014. 
Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth international AAAI conference on weblogs and social media. Parag Jain, Priyanka Agrawal, Abhijit Mishra, Mohak Sukhwani, Anirban Laha, and Karthik Sankaranarayanan. 2017. Story generation from sequence of independent short descriptions. SIGKDD. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Juncen Li, Robin Jia, He He, and Percy Liang. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018. Zhongyang Li, Xiao Ding, and Ting Liu. 2018. Generating reasonable and diversified story ending using sequence to sequence model with adversarial training. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018. Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, and Tong Zhang. 2018. Quase: Sequence editing under quantifiable guidance. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. CoRR, abs/1905.10060. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research. Pradyumna Tambwekar, Murtaza Dhuliawala, Animesh Mehta, Lara J. Martin, Brent Harrison, and Mark O. Riedl. 2018. Controllable neural story generation via reinforcement learning. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18. Jingjing Xu, Xuancheng Ren, Yi Zhang, Qi Zeng, Xiaoyan Cai, and Xu Sun. 2018. A skeleton-based model for promoting coherence among sentences in narrative story generation. In Proceedings of EMNLP. Lili Yao, Nanyun Peng, Ralph M. Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2018a. Planand-write: Towards better automatic storytelling. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18. Lili Yao, Nanyun Peng, Ralph M. Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2018b. Towards controllable story generation. NAACL Workshop. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018. Learning to control the specificity in neural response generation. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 6026 Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18. Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018.
2019
603
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6027–6032 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6027 Self-Attention Architectures for Answer-Agnostic Neural Question Generation Thomas Scialom LIP6 - Sorbonne Universit´es reciTAL [email protected] Benjamin Piwowarski CNRS LIP6 - Sorbonne Universit´es UPMC Univ Paris 06 UMR 7606 [email protected] Jacopo Staiano reciTAL [email protected] Abstract Neural architectures based on self-attention, such as Transformers, recently attracted interest from the research community, and obtained significant improvements over the state of the art in several tasks. We explore how Transformers can be adapted to the task of Neural Question Generation without constraining the model to focus on a specific answer passage. We study the effect of several strategies to deal with out-of-vocabulary words such as copy mechanisms, placeholders, and contextual word embeddings. We report improvements obtained over the state-of-the-art on the SQuAD dataset according to automated metrics (BLEU, ROUGE), as well as qualitative human assessments of the system outputs. 1 Introduction The Machine Reading Comprehension (MRC) community focuses on the development of models and algorithms allowing machines to correctly represent the meaning imbued in natural sentences, in order to perform useful and valuable high-level downstream tasks such as providing answers to questions, generate summaries, and generate relevant questions given a piece of text. Performance on those downstream tasks is indicative of the extent to which the different proposed architectures are able to capture meaning from natural language input. Recently, neural architectures based on selfattention have obtained significant improvements over the state of the art in several tasks such as language modelling and machine translation, for which abundant data is available. Yet, they have not been thoroughly evaluated on problems for which relatively scarcer datasets are available. We thus investigate the application of Transformers to the task of Neural Question Generation (NQG): given a text snippet, the model is called to generate relevant and meaningful questions about it. Question Generation (QG) is an active field of research within the context of machine reading. it matches human behavior when assessing comprehension on a given topic: an expert is able to ask the relevant questions to others to assess their competences. Its potential applications cover a broad range of scenarios, such as Information Retrieval, chat-bots, AI-supported learning technologies. Furthermore, it can be used as a strategy for data augmentation in the context of Question Answering systems. The QG task has been originally tackled using rule-based systems (Rus et al., 2010), with the research community turning to neural approaches in recent years. In its most popular declination, the task is answer-aware, i.e. the target answer within the source text is known and given as input to the QG model (Zhou et al., 2017). Under this scenario, Song et al. (2017) proposed a generative model, jointly trained for question generation and answering. More recently, Zhao et al. (2018) obtained state-of-the-art results using a gated selfattention encoder and a maxout pointer decoder. All these works employ the SQuAD (Rajpurkar et al., 2016) Question Answering dataset, thus directly leveraging the provided answer spans. 
Conversely, the answer-agnostic scenario lifts the constraint of knowing the target answers before generating the questions; Du et al. (2017) proposed an end-to-end sequence to sequence approach, based on a RNN encoder-decoder architecture with a global attention mechanism. While casting NQG as answer-aware is certainly relevant and useful (for instance, as a data-augmentation strategy for question answering data), the ability of generating questions without such constraint is very attractive. Indeed, removing the dependency on an answer-selection 6028 component allows to reduce the bias towards named entities, thus increasing the model’s degrees of freedom. This makes the task more challenging, but potentially more useful for certain applications – e.g. those requiring a natural interaction with a final user. In this work we follow the task as originally defined by Du et al. (2017): we avoid constraining the generation based on a specific answer, effectively operating in an end-to-end answer-agnostic scenario. To adapt Transformers to the NQG task, we complement the base architecture with a copying mechanism, placeholders, and contextual word embeddings: those mechanisms are useful for the treatment of out-of-vocabulary words, which are more likely to affect performance in data-scarce tasks. We study the effect of each of those mechanisms on architectures based on self-attention, reporting improvements over the state-of-the-art systems. 2 Architecture Neural sequence-to-sequence models often rely on Encoder-Decoder architectures: indeed, Recurrent Neural Networks (RNNs) have consistently provided state-of-the-art results for Natural Language Processing tasks such as summarization (Chopra et al., 2016) and translation (Sutskever et al., 2014). Drawbacks of RNN models include the inherent obstacles to parallelism and the consequent computational cost as well as the difficulties in handling long-range dependencies. The recently proposed Transformer model (Vaswani et al., 2017) has proved to be very effective on several tasks (Devlin et al., 2018; Radford et al., 2018), overcoming such issues by not relying on any recurrent gate: it can be briefly described as a sequence-to-sequence model with a symmetric encoder and decoder based on a self-attention mechanism. For an exhaustive description, we refer the reader to (Vaswani et al., 2017) or high-quality blog posts (e.g. “The annotated Transformer”1). Implementation-wise, we used a smaller architecture, with the following hyper-parameters: N = 2 (number of blocks), d model = 256 (hidden state dimension), d ff = 512 (position-wise feed-forward networks dimension), h = 2 (number of attention heads). Experiments run with the original hyper-parameters as proposed by Vaswani 1http://nlp.seas.harvard.edu/2018/04/ 03/attention.html et al. (2017)2 obtained consistent and numerically similar results. Throughout our experiments, we used the spaCy 2.0 library3 for Named Entity Recognition (NER), Part-of-Speech (POS) tagging, and tokenization. 3 Experiments In a preliminary experiment, we observed poor performances when applying a Vanilla Transformer architecture to the NQG task: we thus investigate how several mechanisms can be exploited within a Transformer architecture and how they affect the performances on the task. In the following, we describe and evaluate the benefits of augmenting the base Transformer architecture with: • a copying mechanism; • a placeholding strategy; • and, contextualized word embeddings. 
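As a concrete reference point for the reduced architecture described above (N = 2, d_model = 256, d_ff = 512, h = 2), the configuration can be written down with PyTorch's generic nn.Transformer. This is only an illustrative sketch under that assumption: the paper does not state which framework was used, and the copying, placeholding, and contextual-embedding components studied next are not part of this module.

```python
import torch.nn as nn

# Hypothetical instantiation of the reduced hyper-parameters from Section 2;
# a sketch for reference, not the authors' implementation.
small_transformer = nn.Transformer(
    d_model=256,            # hidden state dimension (d_model)
    nhead=2,                # number of attention heads (h)
    num_encoder_layers=2,   # number of encoder blocks (N)
    num_decoder_layers=2,   # number of decoder blocks (N)
    dim_feedforward=512,    # position-wise feed-forward dimension (d_ff)
)
```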
3.1 Data We resort to the widely used Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016): it contains roughly 100,000 questions posed by crowd-workers on selected Wikipedia articles; each question is associated with the corresponding answer, and with the reading passage (the context) that contains it. In our experiments, we only use the question-context pairs. We evaluate performances through the commonly used BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), and compare with the current state-of-the-art answer-agnostic NQG model described in (Du et al., 2017), considering the question context at sentence-level and using exactly the same splits provided by the authors4. 3.2 Context-free Word Representations To deal with rare/unseen words, the Transformer (Vaswani et al., 2017) architecture leverages large amounts of data and sub-word tokenization; in Table 1 we show how the performance obtained with a Vanilla Transformer is not satisfactory on the NQG task. 2N=6, d model=512, d ff=2048, h=8. 3http://spacy.io 4https://github.com/xinyadu/nqg/tree/ master/data/raw 6029 BLEU1 BLEU2 BLEU3 BLEU4 ROUGE-L copy% Vanilla Transformer 36.13 17.77 10.04 6.04 33.17 4.2 Transformer base 38.74 20.54 12.26 7.66 35.69 5.7 +Copying 39.81 22.47 14.25 9.32 37.28 9.1 +ELMO 40.44 23.87 15.74 10.62 38.32 6.5 +Copying+ELMO 41.72 25.07 16.77 11.58 39.22 10.4 +Placeholding 41.54 25.52 17.56 12.49 39.26 48.4 +Placeholding+ELMO 42.2 26.2 18.14 12.92 40.23 49.4 +Placeholding+Copying 42.72 26.52 18.28 13.0 39.63 50.9 +Placeholding+Copying+ELMO 43.33 26.27 18.32 13.23 40.22 51.7 Du et al. (2017) 43.09 25.96 17.50 12.28 39.75 Table 1: Comparison with SOTA; the last column reports the percentage of OOV/placeholders tokens propagated correctly (according to the ground truth) from the source contexts to the generated questions. To assess model stability, we independently trained 10 models with our best architecture, and computed the standard deviation of their BLUE4 performances on the test set: std < 0.009. We hypothesize that this is a consequence of the relatively small size of the task-specific data. Therefore, in our experiments, we use word-level tokenization and GloVe (Pennington et al., 2014) as context-free pre-trained word vectors5. Further, consistently with (Chen and Manning, 2014; Zhou et al., 2017), we augment the word representation using learned POS embeddings. The Transformer base architecture, upon which all subsequent models are built, uses word-level tokenization and pre-trained GloVe embeddings instead of sub-word tokenization as in the Vanilla Transformer. 3.3 Placeholding Strategy One method to help the model deal with rare/unseen words is to replace specific tokens with fixed placeholder keywords. Such mechanism is often used in industry-grade Neural Machine Translation systems (Crego et al., 2016; Levin et al., 2017), to enforce the copy of named entities from the source to the target language. Recognizing that named entities are also likely to be among rare/unseen tokens, we resort to such strategy and replace them with fixed tokens: all tokens in the context that are marked as named entity by the NER model are replaced with a token indicating their entity type and order of appearance, with the mapping kept in memory. For instance, “Nikola Tesla was born in 1856.” becomes “Person 1 Person 2 was born in Date 1”. At training time, the same procedure is 5http://nlp.stanford.edu/data/glove. 
840B.300d.zip applied to the target questions; at inference time, the placeholders are replaced by the corresponding named entities as a post-processing step. This means that a different, randomly initialized, learnable vector is used as embedding for each placeholder, in place of the GloVe representation corresponding to the original token (or to OOV). As shown in Table 1, this mechanism alone allows the Transformer base architecture to achieve state-of-the-art results. Further, it provides the biggest relative improvement wrt the base architecture. This can be explained by the nature of the SQuAD dataset, in which more than 50% of the answers are named entities (see Table 2 in Rajpurkar et al. (2016)), consistently with the percentage of tokens copied by the placeholding mechanism alone. Moreover, placeholding allows for a significant reduction of the vocabulary size (∼30%). Nonetheless, a strong limitation of placeholding lies in its full dependency on the NER tagger: if the latter fails to recognize an entity, placeholding has no effect – which is especially damaging when a word was not frequent enough to be included in the vocabulary. 3.4 Copying Mechanism As the questions generated from a given context usually tend to refer to specific phrasing or entities appearing therein, Gulcehre et al. (2016) propose using a pointing mechanism (called pointersoftmax) to select words to be copied from the source sentence; intuitively, such method is of particular use in the case of rare or unknown words. 6030 Correctness Fluency Soundness Answerability Relevance Transformer base 4.49 4.02 3.33 1.7 2.51 +Placeholding+Copying+ELMO 4.5 4.12 3.78 2.87** 3.59* Du et al. (2017) 4.53 4.15 3.64 2.45 3.27 Table 2: Human assessment: two-tailed t-test results are reported for our best method compared to Du et al. (2017) (∗: p < 0.05, ∗∗: p < 0.005). The generation probability pgen ∈[0, 1] at timestep t is calculated as: pgen = σ(W · (h∗⊕st ⊕xt)) where W is a learnable parameter vector, h∗ represents the context and is computed through attention (i.e. as a linear combination of the final encoder representations [h1, . . . , ht]), st is the decoder state, and xt the decoder input. We tested several attention mechanisms to enable the copying, including global attention (Luong et al., 2015); since no significant differences were observed, for our experiments we used the raw attention scores of the Transformer, thus avoiding the addition of more trainable parameters. The results reported in Table 1 show how the addition of copying benefits the model performance, and particularly how it allows the amount of tokens copied to increase, complementing the placeholding mechanisms when the named entities are not correctly recognized. The following example from SQuAD exemplifies the contribution of the copying mechanism: given the context “Beyonc´e attended St. Mary’s elementary school in Fredericksburg, Texas, where [...]”, for which the NER fails to mark Beyonc´e as named entity (moreover, Beyonc´e is not in the vocabulary) the Transformer + placeholding produces where did madonna attend st. mary ’s school ?, while the addition of copying allows to correctly recover the correct entity and allows the model to emit a correct question: where did beyonc´e attend school ? 
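A minimal sketch of this copying step is given below. The gate follows the formula above, p_gen = σ(W · (h* ⊕ s_t ⊕ x_t)); how the copy distribution is scattered over source token ids and mixed with the vocabulary softmax is a common pointer-generator formulation and is an assumption here, not necessarily the authors' exact variant.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyGate(nn.Module):
    """Sketch of the copying mechanism: a scalar gate p_gen mixes the decoder's
    vocabulary distribution with a copy distribution built from attention weights."""

    def __init__(self, d_ctx, d_state, d_emb):
        super().__init__()
        self.w = nn.Linear(d_ctx + d_state + d_emb, 1)

    def forward(self, h_star, s_t, x_t, vocab_logits, attn, src_ids):
        # h_star: (B, d_ctx) attention-weighted encoder context
        # s_t:    (B, d_state) decoder state;  x_t: (B, d_emb) decoder input embedding
        # vocab_logits: (B, V); attn: (B, S) attention over source; src_ids: (B, S) long
        p_gen = torch.sigmoid(self.w(torch.cat([h_star, s_t, x_t], dim=-1)))   # (B, 1)
        p_vocab = F.softmax(vocab_logits, dim=-1)
        p_copy = torch.zeros_like(p_vocab).scatter_add_(1, src_ids, attn)      # copy mass
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```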
3.5 Contextualized Embeddings Contextualized representation approaches allow to compute the embedding of a given token depending on the context it appears in, as opposed to the fixed, context-free vectors provided by GloVe, therefore allowing to capture more information for OOV tokens. The placeholding strategy described above has the downside of depriving the input text representation of any semantic information besides the entity type. For instance, two entiFigure 1: Percentage of OOV tokens copied by the different mechanisms and combinations thereof, over all OOV tokens copied. ties such as Tesla and Edison could have close representations in the word embedding space, within a scientific-related subset of tokens: the use of a placeholder would thus prevent the use of such information. Therefore, we concatenate the contextfree vectors (see 3.2) for a specific token with the corresponding ELMO (Peters et al., 2018) representation at the encoding stage. In our experiments, those are only used in the encoding stage since they can only have a meaning when applied to full sentences. Combined with the previously described mechanism, contextualized embeddings allow to further improve the performances, obtaining a BLEU4 score of 13.23, almost one absolute point above the current state-of-the-art in the answer-agnostic task. As depicted in Figure 1, they also contribute to the selection of relevant OOV tokens to copy from the context to the generated question. 4 Human Assessment Finally, we proceeded to a qualitative evaluation of the generated outputs, by randomly sampling 100 context-question pairs from the test set. Three professional English speakers were asked to evaluate, the questions generated by: a) Transformer base, b) our best performing model, and c) the state-ofthe-art model by Du et al. (2017)6. 6To reproduce the outputs of Du et al. (2017) we used the code from https://github.com/xinyadu/nqg. 6031 The questions generated by the different models were shuffled before the assessment. Ratings were collected on a 1-to-5 likert scale, to measure to what extent the generated questions were: • answerable, by looking at their context (Answerability); • relevant to their context (Relevance); • grammatically correct (Correctness); • semantically sound (Soundness); • and, well-posed and natural (Fluency). As shown in Table 2, this analysis indicates a qualitative improvement over SOTA in terms of Relevance and Answerability. Below, we report a few sample outputs, randomly selected, generated by the proposed model. 5 Conclusions and Future Work We have described a preliminary study on the adaptation of Transformer architectures to Neural Question Generation. The results obtained show the contribution of auxiliary techniques such as copying mechanism, placeholding, and contextualized embeddings, which complement each other. The best performance is obtained when using the three mechanisms altogether, reaching an improvement of almost one BLEU4 point (and of 0.5 for ROUGE-L) over the current state-of-the-art approaches. Furthermore, a qualitative assessment indicated improvements in terms of Relevance and Answerability. We are extending the proposed approach to other QA datasets, and adapting it to use pretrained language models such as BERT (Devlin et al., 2018), to evaluate the consistency of the mechanisms introduced. Sentence 1: Under Rockne, the Irish would post a record of 105 wins, 12 losses, and five ties. 
Human: What was the amount of wins Knute Rockne attained at Notre Dame while head coach? Ours: how many losses did the irish have ? Sentence 2: Chopin was of slight build, and even in early childhood was prone to illnesses. Human: What was Fr´ed´eric prone to during early childhood as a result of his slight build? Ours: what type of disease did chopin have ? Sentence 3: Montana contains thousands of named rivers and creeks, 450 miles (720 km) of which are known for ”blue-ribbon” trout fishing. Human: How many miles of rivers are known for high class trout? Ours: how many miles of rivers does montana contain ? Sentence 4: In 1648 before the term genocide had been coined, the Peace of Westphalia was established to protect ethnic, national, racial and in some instances religious groups. Human: What year was the Peace of Westphalia signed? Ours: when was the peace of westphalia established ? Sentence 5: A bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. Human: What besides ongoing metabolic activity is required in bactericidal activity? Ours: what type of activity do antibacterials depend on ? Sentence 6: The Montana Act led to the arrest of over 200 individuals and the conviction of 78, mostly of German or Austrian descent. Human: How many people were arrested from the Montana Act? Ours: how many individuals were killed in the montana act ? 6032 References Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740–750. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, et al. 2016. Systran’s pure neural machine translation systems. arXiv preprint arXiv:1610.05540. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1342–1352. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 140–149. Pavel Levin, Nishikant Dhanuka, Talaat Khalil, Fedor Kovalev, and Maxim Khalilov. 2017. Toward a full-scale neural machine translation in production: the booking. com use case. arXiv preprint arXiv:1709.05820. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. 
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pretraining. URL https://s3-us-west-2. amazonaws. com/openai-assets/research-covers/languageunsupervised/language understanding paper. pdf. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. The first question generation shared task evaluation challenge. In Proceedings of the 6th International Natural Language Generation Conference, INLG ’10, pages 251–257, Stroudsburg, PA, USA. Association for Computational Linguistics. Linfeng Song, Zhiguo Wang, and Wael Hamza. 2017. A unified query-based generative model for question generation and question answering. arXiv preprint arXiv:1709.01058. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proc. NIPS, Montreal, CA. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662–671. Springer.
2019
604
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6033–6039 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6033 Unsupervised Paraphrasing without Translation Aurko Roy Google Research [email protected] David Grangier Google Research [email protected] Abstract Paraphrasing exemplifies the ability to abstract semantic content from surface forms. Recent work on automatic paraphrasing is dominated by methods leveraging Machine Translation (MT) as an intermediate step. This contrasts with humans, who can paraphrase without being bilingual. This work proposes to learn paraphrasing models from an unlabeled monolingual corpus only. To that end, we propose a residual variant of vector-quantized variational auto-encoder. We compare with MT-based approaches on paraphrase identification, generation, and training augmentation. Monolingual paraphrasing outperforms unsupervised translation in all settings. Comparisons with supervised translation are more mixed: monolingual paraphrasing is interesting for identification and augmentation; supervised translation is superior for generation. 1 Introduction Many methods have been developed to generate paraphrases automatically (Madnani and J. Dorr, 2010). Approaches relying on Machine Translation (MT) have proven popular due to the scarcity of labeled paraphrase pairs (Callison-Burch, 2007; Mallinson et al., 2017; Iyyer et al., 2018). Recent progress in MT with neural methods (Bahdanau et al., 2014; Vaswani et al., 2017) has popularized this latter strategy. Conceptually, translation is appealing since it abstracts semantic content from its linguistic realization. For instance, assigning the same source sentence to multiple translators will result in a rich set of semantically close sentences (Callison-Burch, 2007). At the same time, bilingualism does not seem necessary to humans to generate paraphrases. This work evaluates if data in two languages is necessary for paraphrasing. We consider three settings: supervised translation (parallel bilingual data is used), unsupervised translation (nonparallel corpora in two languages are used) and monolingual (only unlabeled data in the paraphrasing language is used). Our comparison devises comparable encoder-decoder neural networks for all three settings. While the literature on supervised (Bahdanau et al., 2014; Cho et al., 2014; Vaswani et al., 2017) and unsupervised translation (Lample et al., 2018a; Artetxe et al., 2018; Lample et al., 2018b) offer solutions for the bilingual settings, monolingual neural paraphrase generation has not received the same attention. We consider discrete and continuous autoencoders in an unlabeled monolingual setting, and contribute improvements in that context. We introduce a model based on Vector-Quantized AutoEncoders, VQ-VAE (van den Oord et al., 2017), for generating paraphrases in a purely monolingual setting. Our model introduces residual connections parallel to the quantized bottleneck. This lets us interpolate from classical continuous autoencoder (Vincent et al., 2010) to VQ-VAE. Compared to VQ-VAE, our architecture offers a better control over the decoder entropy and eases optimization. Compared to continuous auto-encoder, our method permits the generation of diverse, but semantically close sentences from an input sentence. We compare paraphrasing models over intrinsic and extrinsic metrics. Our intrinsic evaluation evaluates paraphrase identification, and generations. 
Our extrinsic evaluation reports the impact of training augmentation with paraphrases on text classification. Overall, monolingual approaches can outperform unsupervised translation in all settings. Comparison with supervised translation shows that parallel data provides valuable information for paraphrase generation compared 6034 to purely monolingual training. 2 Related Work Paraphrase Generation Paraphrases express the same content with alternative surface forms. Their automatic generation has been studied for decades: rule-based (McKeown, 1980; Meteer and Shaked, 1988) and data-driven methods (Madnani and J. Dorr, 2010) have been explored. Data-driven approaches have considered different source of training data, including multiple translations of the same text (Barzilay and McKeown, 2001; Pang et al., 2003) or alignments of comparable corpora, such as news from the same period (Dolan et al., 2004; Barzilay and Lee, 2003). Machine translation later emerged as a dominant method for paraphrase generation. Bannard and Callison-Burch (2005) identify equivalent English phrases mapping to the same non-English phrases from an MT phrase table. Kok and Brockett (2010) performs random walks across multiple phrase tables. Translation-based paraphrasing has recently benefited from neural networks for MT (Bahdanau et al., 2014; Vaswani et al., 2017). Neural MT can generate paraphrase pairs by translating one side of a parallel corpus (Wieting and Gimpel, 2018; Iyyer et al., 2018). Paraphrase generation with pivot/round-trip neural translation has also been used (Mallinson et al., 2017; Yu et al., 2018). Although less common, monolingual neural sequence models have also been proposed. In supervised settings, Prakash et al. (2016); Gupta et al. (2018) learn sequence-to-sequence models on paraphrase data. In unsupervised settings, Bowman et al. (2016) apply a VAE to paraphrase detection while Li et al. (2017) train a paraphrase generator with adversarial training. Paraphrase Evaluation Evaluation can be performed by human raters, evaluating both text fluency and semantic similarity. Automatic evaluation is more challenging but necessary for system development and larger scale statistical analysis (Callison-Burch, 2007; Madnani and J. Dorr, 2010). Automatic evaluation and generation are actually linked: if an automated metric would reliably assess the semantic similarity and fluency of a pair of sentences, one would generate by searching the space of sentences to maximize that metric. Automated evaluation can report the overlap with a reference paraphrase, like for translation (Papineni et al., 2002) or summarization (Lin, 2004). BLEU, METEOR and TER metrics have been used (Prakash et al., 2016; Gupta et al., 2018). These metrics do not evaluate whether the generated paraphrase differs from the input sentence and large amount of input copying is not penalized. Galley et al. (2015) compare overlap with multiple references, weighted by quality; while Sun and Zhou (2012) explicitly penalize overlap with the input sentence. Grangier and Auli (2018) alternatively compare systems which have first been calibrated to a reference level of overlap with the input. We follow this strategy and calibrate the generation overlap to match the average overlap observed in paraphrases from humans. In addition to generation, probabilistic models can be assessed through scoring. 
For a sentence pair (x, y), the model estimate of P(y|x) can be used to discriminate between paraphrase and non-paraphrase pairs (Dolan and Brockett, 2005). The correlation of model scores with human judgments (Cer et al., 2017) can also be assessed. We report both types of evaluation. Finally, paraphrasing can also impact downstream tasks, e.g. to generate additional training data by paraphrasing training sentences (Marton et al., 2009; Zhang et al., 2015; Yu et al., 2018). We evaluate this impact for classification tasks. 3 Residual VQ-VAE for Unsupervised Monolingual Paraphrasing Auto-encoders can be applied to monolingual paraphrasing. Our work combines Transformer networks (Vaswani et al., 2017) and VQVAE (van den Oord et al., 2017), building upon recent work in discrete latent models for translation (Kaiser et al., 2018; Roy et al., 2018). VQVAEs, as opposed to continuous VAEs, rely on discrete latent variables. This is interesting for paraphrasing as it equips the model with an explicit control over the latent code capacity, allowing the model to group multiple related examples under the same latent assignment, similarly to classical clustering algorithms (Macqueen, 1967). This is conceptually simpler and more effective than rate regularization (Higgins et al., 2016) or denoising objectives (Vincent et al., 2010) for continuous auto-encoders. At the same time, training auto-encoder with discrete bottleneck is difficult (Roy et al., 2018). We address this difficulty with an hybrid model using a continuous residual 6035 connection around the quantization module. We modify the Transformer encoder (Vaswani et al., 2017) as depicted in Figure 1. Our encoder maps a sentence into a fixed size vector. This is simple and avoids choosing a fixed length compression rate between the input and the latent representation (Kaiser et al., 2018). Our strategy to produce a fixed sized representation from transformer is analogous to the special token employed for sentence classification in (Devlin et al., 2018). At the first layer, we extend the input sequences with one or more fixed positions which are part of the self-attention stack. At the output layer, the encoder output is restricted to these special positions which constitute the encoder fixed sizedoutput. As in (Kaiser et al., 2018), this vector is split into multiple heads (sub-vectors of equal dimensions) which each goes through a quantization module. For each head h, the encoder output eh is quantized as, qh(eh) = ck, where k = argmin i ∥eh −ci∥2 where {ci}K i=0 denotes the codebook vectors. The codebook is shared across heads and training combines straight-through gradient estimation and exponentiated moving averages (van den Oord et al., 2017). The quantization module is completed with a residual connection, with a learnable weight α, zh(eh) = αeh + (1 −α)qh(eh). One can observe that residual vectors and quantized vectors always have similar norms by definition of the VQ module. This is a fundamental difference with classical continuous residual networks, where the network can reduce activation norms of some modules to effectively rely mostly on the residual path. This makes α an important parameter to trade-off continuous and discrete auto-encoding. Our learning encourages the quantized path with a squared penalty α2. After residual addition, the multiple heads of the resulting vector are presented as a matrix to which a regular transformer decoder can attend. 
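A minimal PyTorch sketch of one quantization head with the residual connection is shown below. It follows the two equations above and includes the straight-through estimator and the learnable weight α; the EMA codebook updates and the α² penalty (which would simply be added to the training loss) are only noted in comments. Shapes and the single-head simplification are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualVQHead(nn.Module):
    """One head of the residual vector-quantization bottleneck:
    q(e) = nearest codebook vector, z(e) = alpha * e + (1 - alpha) * q(e)."""

    def __init__(self, num_codes, dim):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)   # codebook shared across heads
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learnable residual weight

    def forward(self, e):                              # e: (B, dim) one encoder head
        dists = torch.cdist(e, self.codebook.weight)   # (B, K) distances to all codes
        k = dists.argmin(dim=-1)                       # index of the nearest code
        q = self.codebook(k)                           # quantized vectors c_k
        q = e + (q - e).detach()                       # straight-through gradient
        z = self.alpha * e + (1.0 - self.alpha) * q    # residual mix
        # training would also add self.alpha ** 2 to the loss to encourage the
        # quantized path, and update the codebook with exponentiated moving averages
        return z, k
```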
Models are trained to maximize the likelihood of the training set with Adam optimizer using the learning schedule from (Vaswani et al., 2017). 4 Experiments & Results We compare neural paraphrasing with and without access to bilingual data. For bilingual settings, we consider supervised and unsupervised translation using round-trip translation (Mallinson Token + Position Embeddings Fixed Position Embeddings Self Attention N Quantization Self Attention 1 Fixed Truncated Encoding Figure 1: Encoder Architecture et al., 2017; Yu et al., 2018) with German as the pivot language. Supervised translation trains the transformer base model (Vaswani et al., 2017) on the WMT’17 English-German parallel data (Bojar et al., 2017). Unsupervised translation considers a pair of comparable corpora for training, German and English WMT-Newscrawl corpora, and relies on the transformer models from Lample et al. (2018b). Both MT cases train a model from English to German and from German to English to perform round-trip MT. For each model, we also distill the round-trip model into a single artificial English to English model by generating a training set from pivoted data. Distillation relies on the billion word corpus, LM1B (Chelba et al., 2013). Monolingual Residual VQ-VAE is trained only on LM1B with K = 216, with 2 heads and fixed window of size 16. We also evaluate plain VQVAE α = 0 to highlight the value of our residual modification. We further compare with a monolingual continuous denoising auto-encoder (DN-AE), with noising from Lample et al. (2018b). Paraphrase Identification For classification of sentence pairs (x, y) over Microsoft Research Paraphrase Corpus (MRPC) from Dolan and Brockett (2005), we train logistic regression on P(y|x) and P(x|y) from the model, complemented with encoder outputs in fixed context settings. We also perform paraphrase quality regression on Semantic Textual Similarity (STS) from Cer et al. (2017) by training ridge regression on the same features. Finally, we perform paraphrase ranking on Multiple Translation Chinese (MTC) from Huang et al. 6036 Parapharase Identification Generation MRPC STS MTC BLEU Pref. Supervised Translation 70.6 46.0 78.6 8.73 36.8 + Distillation 66.5 60.0 55.6 7.08 – Unsupervised Translation 66.0 13.2 65.8 6.59 28.1 + Distillation 66.9 45.0 52.0 6.45 – Mono. DN-AE 66.8 46.2 91.6 5.13 – Mono. VQVAE 66.3 10.6 69.0 3.85 – + Residual 73.3 59.8 94.0 7.26 31.9 + Distillation 71.3 54.3 88.4 6.88 – Table 1: Paraphrase Identification & Generation. Identification is evaluated with accuracy on MRPC, Pearson Correlation on STS and ranking on MTC. Generation is evaluated with BLEU and human preferences on MTC. SST-2 TREC Acc. F1 Acc F1 NB-SVM (trigram) 81.93 83.15 89.77 84.81 Supervised Translation 81.55 82.75 90.78 85.44 + Distillation 81.16 66.59 90.38 86.05 Unsupervised Translation 81.87 83.18 88.17 83.42 + Distillation 81.49 82.78 89.18 84.41 Mono. DN-AE 81.11 82.48 89.37 84.08 Mono. VQ-VAE 81.98 82.95 89.17 83.64 + Residual 82.12 83.23 89.98 84.31 + Distillation 81.60 82.81 89.78 84.31 Table 2: Paraphrasing for Data Augmentation: Accuracy and F1-scores of a Naive Bayes-SVM classifier on sentiment (SST-2) and question (TREC) classification. (2002). MTC contains English paraphrases collected as translations of the same Chinese sentences from multiple translators (Mallinson et al., 2017). We pair each MTC sentence x with a paraphrase y and 100 randomly chosen nonparaphrases y′. 
We compare the paraphrase score P(y|x) to the 100 non-paraphrase scores P(y′|x) and report the fraction of comparisons where the paraphrase score is higher. Table 1 (left) reports that our residual model outperforms alternatives in all identification setting, except for STS, where our Pearson correlation is slightly under supervised translation. Paraphrases for Data Augmentation We augment the training set of text classification tasks for sentiment analysis on Stanford Sentiment Treebank (SST-2) (Socher et al., 2013) and question classification on Text REtrieval Conference (TREC) (Voorhees and Tice, 2000). In both cases, we double training set size by paraphrasing each sentence and train Support Vector Machines with Naive Bayes features (Wang and Manning, 2012). In Table 2, augmentation with monolingual models yield the best performance for SST-2 sentiment classification. TREC question classification is better with supervised translation augmentation. Unfortunately, our monolingual training set LM1B does not contain many question sentences. Future work will revisit monolingual training on larger, more diverse resources. Paraphrase Generation Paraphrase generation are evaluated on MTC. We select the 4 best translators according to MTC documentation and paraphrase pairs with a length ratio under 1.2. Our evaluation prevents trivial copying solutions. We select sampling temperature for all models such that their generation overlap with the input is 20.9 BLEU, the average overlap between humans on MTC. We report BLEU overlap with the target and run a blind human evaluation where raters pick the best generation among supervised translation, unsupervised translation and monolingual. Table 3 shows examples. Table 1 (right) reports that monolingual paraphrasing compares favorably with unsupervised translation while supervised translation is the best technique. This high6037 In: a worthy substitute Out: A worthy replacement. In: Local governments will manage the smaller enterprises. Out: Local governments will manage smaller companies. In: Inchon is 40 kilometers away from the border of North Korea. Out: Inchon is 40 km away from the North Korean border. In: Executive Chairman of Palestinian Liberation Organization, Yasar Arafat, and other leaders are often critical of aiding countries not fulfilling their promise to provide funds in a timely fashion. Out: Yasar Arafat , executive chairman of the Palestinian Liberation Organization and other leaders are often critical of helping countries meet their pledge not to provide funds in a timely fashion. Table 3: Examples of generated paraphrases from the monolingual residual model (Greedy search). lights the value of parallel data for paraphrase generation. 5 Discussions Our experiments highlight the importance of the residual connection for paraphrase identification. From Table 1, we see that a model without the residual connection obtains 66.3%, 10.6% and 69.0% accuracy on MRPC, STS and MTC. Adding the residual connection improves this to 73.3%, 59.8% and 94.0% respectively. The examples in Table 3 show paraphrases generated by the model. The overlap with the input from these examples is high. It is possible to generate sentences with less overlap at higher sampling temperatures, we however observe that this strategy impairs fluency and adequacy. We plan to explore strategies which allow to condition the decoding process on an overlap requirement instead of varying sampling temperatures (Grangier and Auli, 2018). 
6 Conclusion We compared neural paraphrasing with and without access to bilingual data. Bilingual settings considered supervised and unsupervised translation. Monolingual settings considered autoencoders trained on unlabeled text and introduced continuous residual connections for discrete autoencoders. This method is advantageous over both discrete and continuous auto-encoders. Overall, we showed that monolingual models can outperform bilingual ones for paraphrase identification and data-augmentation through paraphrasing. We also reported that generation quality from monolingual models can be higher than model based on unsupervised translation but not supervised translation. Access to parallel data is therefore still advantageous for paraphrase generation and our monolingual method can be a helpful resource for languages where such data is not available. Acknowledgments We thanks the anonymous reviewers for their suggestions. We thank the authors of the Tensor2tensor library used in our experiments (Vaswani et al., 2018). References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proceedings of the Sixth International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 597–604. Association for Computational Linguistics. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 16– 23. Association for Computational Linguistics. Regina Barzilay and Kathleen R McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th annual meeting on Association for Computational Linguistics, pages 50–57. Association for Computational Linguistics. 6038 Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (wmt17). In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 169–214, Copenhagen, Denmark. Association for Computational Linguistics. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the SIGNLL’16, pages 10– 21. Christopher Callison-Burch. 2007. Paraphrasing and translation. Ph.D. thesis, University of Edinburgh Edinburgh. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. CoRR, abs/1312.3005. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. 
On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, page 350. Association for Computational Linguistics. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltableu: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863. David Grangier and Michael Auli. 2018. Quickedit: Editing text & translations by crossing words out. In Proc. of NAACL. Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2016. beta-vae: Learning basic visual concepts with a constrained variational framework. Shudong Huang, David Graff, and George Doddington. 2002. Multiple-translation Chinese corpus. Linguistic Data Consortium, University of Pennsylvania. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of NAACL-HLT, pages 1875–1885. Łukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Pamar, Samy Bengio, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. arXiv preprint arXiv:1803.03382. Stanley Kok and Chris Brockett. 2010. Hitting the right paraphrases in good time. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 145–153. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations (ICLR). Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755. Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2017. Paraphrase generation with deep reinforcement learning. arXiv preprint arXiv:1711.00279. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. J. B. Macqueen. 1967. Unified techniques for vector quantization and hidden markov modeling using semi-continuous models. Nitin Madnani and Bonnie J. Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36:341–387. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. 
In Proceedings of the 15th Conference of the European Chapter of the Association 6039 for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 881–893. Yuval Marton, Chris Callison-Burch, and Philip Resnik. 2009. Improved statistical machine translation using monolingually-derived paraphrases. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 381–390. Association for Computational Linguistics. Kathleen R McKeown. 1980. Paraphrasing using given and new information in a question-answer system. Technical Reports (CIS), page 723. Marie Meteer and Varda Shaked. 1988. Strategies for effective paraphrasing. In Proceedings of the 12th conference on Computational linguistics-Volume 2, pages 431–436. Association for Computational Linguistics. A¨aron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. CoRR, abs/1711.00937. Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Aaditya Prakash, Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual lstm networks. arXiv preprint arXiv:1610.03098. Aurko Roy, Ashish Vaswani, Arvind Neelakantan, and Niki Parmar. 2018. Theory and experiments on vector quantized autoencoders. arXiv preprint arXiv:1805.11063. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Hong Sun and Ming Zhou. 2012. Joint learning of a dual smt system for paraphrase generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short PapersVolume 2, pages 38–42. Association for Computational Linguistics. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2tensor for neural machine translation. arXiv preprint arXiv:1803.07416. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408. Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 200–207. ACM. Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. 
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 90–94. Association for Computational Linguistics. John Wieting and Kevin Gimpel. 2018. Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 451–462. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657.
2019
605
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6040–6046 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6040 Storyboarding of Recipes: Grounded Contextual Generation Khyathi Raghavi Chandu Eric Nyberg Alan W Black Language Technologies Institute, Carnegie Mellon University {kchandu, ehn, awb}@cs.cmu.edu Abstract Information need of humans is essentially multimodal in nature, enabling maximum exploitation of situated context. We introduce a dataset for sequential procedural (how-to) text generation from images in cooking domain. The dataset consists of 16,441 cooking recipes with 160,479 photos associated with different steps. We setup a baseline motivated by the best performing model in terms of human evaluation for the Visual Story Telling (ViST) task. In addition, we introduce two models to incorporate high level structure learnt by a Finite State Machine (FSM) in neural sequential generation process by: (1) Scaffolding Structure in Decoder (SSiD) (2) Scaffolding Structure in Loss (SSiL). Our best performing model (SSiL) achieves a METEOR score of 0.31, which is an improvement of 0.6 over the baseline model. We also conducted human evaluation of the generated grounded recipes, which reveal that 61% found that our proposed (SSiL) model is better than the baseline model in terms of overall recipes. We also discuss analysis of the output highlighting key important NLP issues for prospective directions. 1 Introduction Interpretation is heavily conditioned on context. Real world interactions provide this context in multiple modalities. In this paper, the context is derived from vision and language. The description of a picture changes drastically when seen in a sequential narrative context. Formally, this task is defined as: given a sequence of images I = {I1, I2, ..., In} and pairwise associated textual descriptions, T = {T1, T2, ..., Tn}; for a new sequence I ′, our task is to generate the corresponding T ′. Figure 1 depicts an example for making vegetable lasagna, where the input is the first row and the output is the second row. We call this a ‘storyboard’, since it unravels the most important steps of a procedure associated with corresponding natural language text. The sequential context differLasagna ingredients: tomato sauce or canned tomatoes for making sauce - at least 4-6 cups, one box no boil lasagna noodles, one zucchini, one yellow squash, one jalapeno (or bell pepper!), 1/2 an onion, pinch of oregano, pinch of basil. To make the sauce, cook diced onion in olive oil, and then add the ground beef, garlic and tomato paste. Stir until fragrant and then meat starts to brown and break up, and then add the crushed tomatoes. Pour some water into tomato can and swish it around and then pour that into the pot. Stir well and let simmer while the veg continue to lose moisture. Spoon your ricotta into a bowl and add a good pinch of Italian seasoning and crushed red pepper. I like to add a little black pepper too. Mix around until well combined. Shred your mozzarella or cut into small slices. This is the way I layered: spoonful of sauce on the bottom of the pan, lasagna noodles, 1/2 the ricotta cheese, 1/2 the sauteed vegetables, mozzarella cheese, sauce to cover. Do that twice and then sprinkle parmesan cheese on the top. Bake in a 400 F oven for 30-40 minutes or until you can easily pierce through the noodles with a knife and the top is lightly browned. Try not to eat it all at once. 
The boy and I have eaten 1/2 of it, and it's only been a day since I made it. :D Figure 1: Storyboard for the recipe of vegetable lasagna entiates this task from image captioning in isolation. The dataset is similar to that of ViST (Huang et al., 2016) with an apparent difference between stories and instructional in-domain text which is the clear transition in phases of the narrative. This task supplements the task of ViST with richer context of goal oriented procedure (how-to). Numerous online blogs and videos depict various categories of how-to guides for games, do-it-yourself (DIY) crafts, technology etc. This task lays initial foundations for full fledged storyboarding of a given video, by selecting the right junctions/clips to ground significant events and generate sequential textual descriptions. We are going to focus on the domain of cooking recipes in the rest of this paper.In this paper, we discuss our approach in generating more structural/coherent cooking recipes by explicitly modeling the state transitions between different stages of cooking (phases). We introduce a framework to apply traditional FSMs to incorporate more structure in neural generation. The two main contributions of this paper are: (1) A dataset of 16k recipes targeted for sequential multimodal procedural text generation, (2) Two models (SSiD: Structural Scaffolding in Decoder ,and SSiL: Structural Scaffolding in Loss) for incorporating high level structure learnt by an FSM into a neural text generation model to improve structure/coherence. 6041 2 Related Work Why domain constraint? Martin et al. (2017) and Khalifa et al. (2017) demonstrated that the predictive ability of a seq2seq model improves as the language corpus is reduced to a specialized domain with specific actions. Our choice of restricting domain to recipes is inspired from this, where the set of events are specialized (such as ‘cut’, ‘mix’, ‘add’) although we are not using event representations explicitly. These specialized set of events are correlated to phases of procedural text as described in the following sections. Planning while writing content: A major challenge faced by neural text generation (Lu et al., 2018) while generating long sequences is the inability to maintain structure, contravening the coherence of the overall generated text. This aspect was also observed in various tasks like summarization (Liu et al., 2018), story generation (Fan et al., 2019). Pre-selecting content and planning to generate accordingly was explored by Puduppully et al. (2018) and Lukin et al. (2015) in contrast to generate as you proceed paradigm. Fan et al. (2018) adapt a hierarchical approach to generate a premise and then stories to improve coherence and fluency. Yao et al. (2018) experimented with static and dynamic schema to realize the entire storyline before generating. However, in this work we propose a hierarchical multi task approach to perform structure aware generation. Comprehending Food: Recent times have seen large scale datasets in food, such as Recipe1M (Marin et al., 2018), Food-101 (Bossard et al., 2014).Food recognition (Arora et al., 2019) addresses understanding food from a vision perspective. Salvador et al. (2018) worked on generating cooking instructions by inferring ingredients from an image. Zhou et al. (2018) proposed a method to generate procedure segments for YouCook2 data. 
In NLP domain, this is studied as generating procedural text by including ingredients as checklists (Kiddon et al., 2016) or treating the recipe as a flow graph (Mori et al., 2014). Our work is at the intersection of two modalities (language and vision) by generating procedural text for recipes from a sequence of images. (Bosselut et al., 2017) worked on reasoning non-mentioned causal effects thereby improving the understanding and generation of procedural text for cooking recipes. This is done by dynamically tracking entities by modeling actions using state transformers. Visual Story Telling: Research at the intersection of language and vision is accelerating with tasks like image captioning (Hossain et al., 2019), visual question answering (Wu et al., 2017), visual dialog (Das et al., 2017; Mostafazadeh et al., 2017; De Vries et al., 2017; de Vries et al., 2018). ViST (Huang et al., 2016) is a sequential vision to language task demonstrating differences between descriptions in isolation and stories in sequences. Similarly, Gella et al. (2018) created VideoStory dataset from videos on social media with the task of generating a multi-sentence story captions for them. Smilevski et al. (2018) proposed a late fusion based model for ViST challenge. Kim et al. (2018) attained the highest scores on human readability in this task by attending to both global and local contexts. We use this as our baseline model and propose two techniques on top of this baseline to impose structure needed for procedural text. 3 Data Description We identified two how-to blogs from: instructables.comand snapguide.com, comprising stepwise instructions (images and text) of various how-to activities like games, crafts etc,. We gathered 16,441 samples with 160,479 photos for food, dessert and recipe topics. We used 80% for training, 10% for validation and 10% for testing our models. In some cases, there are multiple images for the same step and we randomly select an image from the set of images. We indicate that there is a potential space for research here, in selecting most distinguishing/representative/meaningful image. Details of the datasets are presented in Table 1. The data and visualization of distribution of topics is here1. A trivial extension could be done on other domains like gardening, origami crafts, fixing guitar strings etc, which is left for future work. 4 Model Description We first describe a baseline model for the task of storyboarding cooking recipes in this section. We then propose two models with incremental improvements to incorporate the structure of procedural text in the generated recipes : SSiD (Scaffolding Structure in Decoder) and SSiL (Scaffolding Structure in Loss). The architecture of scaffolding structure is presented in Figure 2, of which different aspects are described in the following subsections. 4.1 Baseline Model (Glocal): The baseline model is inspired from the best performing system in ViST challenge with respect to 1https://storyboarding.github.io/ story-boarding/ 6042 ResNet ResNet ResNet ResNet LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM Image Representation Context Representation Structure Representation Textual Recipes FSM Clustering Figure 2: Architecture for incorporating high level structure in neural recipe generation Data Sources # Recipes # Avg Steps instructables 9,101 7.14 snapguide 7,340 13.01 Table 1: Details of dataset for storyboarding recipes human evaluation (Kim et al., 2018). The images are first resized into 224 X 224. 
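A minimal sketch of this image pipeline, covering the resizing just mentioned and the penultimate-layer ResNet-152 feature extraction described in the next sentences, using torchvision. The normalization constants are the usual ImageNet defaults and the weight identifier is an assumption on top of what the text specifies, not a detail taken from the paper.

```python
# Sketch: resize recipe-step images to 224 x 224 and take pooled 2048-d
# features from the penultimate layer of a pre-trained ResNet-152.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet defaults (assumed)
                         std=[0.229, 0.224, 0.225]),
])

# Older torchvision versions use models.resnet152(pretrained=True) instead.
resnet = models.resnet152(weights="IMAGENET1K_V1")
# Drop the final classification layer to expose the penultimate representation.
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

def step_features(image_paths):
    """Return a (num_steps, 2048) tensor, one feature vector per recipe step."""
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                             for p in image_paths])
        return feature_extractor(batch).flatten(start_dim=1)
```

The affinity layer projecting to 1024 dimensions and the two-layer Bi-LSTM for global context described below would then be applied on top of these per-step features.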
Image features for each step are extracted from the penultimate layer of pre-trained ResNet-152 (He et al., 2016). These features are then passed through an affinity layer to obtain an image feature of dimension 1024. To maintain the context of the entire recipe (global context), the sequence of these image features are passed through a two layered Bi-LSTM with a hidden size of 1024. To maintain specificity of the current image (local context), the image features for the current step are concatenated using a skip connection to the output of the Bi-LSTM to obtain glocal representation. Dropout of 0.5 is applied systematically at the affinity layer to obtain the image feature representation and after the Bi-LSTM layer. Batch normalization is applied with a momentum 0.01. This completes the encoder part of the sequence to sequence architecture. These glocal vectors are used for decoding each step. These features are passed through a fully connected layer to obtain a representation of 1024 dimension followed by a non-linear transformation using ReLU. These features are then passed through a decoder LSTM for each step in the recipe which are trained by teacher forcing. The overall coherence in generation is addressed by feeding the decoder state of the previous step to the next one. This is a seq2seq model translating one modality into another. The model is optimized using Adam with a learning rate of 0.001 and weight decay of 1e-5. The model described above does not explicitly cater to the structure of the narration of recipes in the generation process. However, we know that procedural text has a high level structure that carries a skeleton of the narrative. In the subsequent subsections, we present two models that impose this high level narrative structure as a scaffold. While this scaffold lies external to the baseline model, it functions on imposing the structure in decoder (SSiD) and in the loss term (SSiL). 4.2 Scaffolding Structure in Decoder (SSiD): There is a high level latent structure involved in a cooking recipe that adheres to transitions between steps, that we define as phases. Note that the steps and phases are different here. To be specific, according to our definition, one or more steps map to a phase (this work does not deal with multiple phases being a part of a single step). Phases may be ‘listing ingredients’, ‘baking’, ‘garnishing’ etc., The key idea of the SSiD model is to incorporate the sequence of phases in the decoder to impose structure during text generation There are two sources of supervision to drive the model: (1) multimodal dataset M = {I, T} from Section 3, (2) unimodal textual recipes2 U to learn phase sequences. Finer phases are learnt using clustering followed by an FSM. Clustering: K-Means clustering is performed on the sentence embeddings with compositional ngram features (Pagliardini et al., 2018) on each step of the recipe in U. Aligning with our intu2www.ffts.com/recipes.htm 6043 ition, when k is 3, it is observed that these clusters roughly indicate categories of desserts, drinks and main course foods (pizza, quesadilla etc,). However, we need to find out finer categories of the phases corresponding to the phases in the recipes. We use k-means clustering to obtain the categories of these phases. We experimented with different number of phases P as shown in Table 2. For example, let an example recipe comprise of 4 steps i.e, a sequence of 4 images. At this point, each recipe can be represented as a hard sequence of phases r = ⟨p1, p2, p3, p4 ⟩. 
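A minimal sketch of this clustering step, assuming plain Python lists of step strings. TF-IDF vectors stand in for the sent2vec compositional n-gram embeddings used in the paper, and the number of phases P is left as a free parameter.

```python
# Sketch of learning phases by clustering recipe steps, then mapping a recipe
# to its hard phase sequence <p1, ..., pn>. TF-IDF stands in for sent2vec.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def learn_phase_model(recipes, num_phases=40):
    """recipes: list of recipes, each a list of step strings (corpus U)."""
    steps = [step for recipe in recipes for step in recipe]
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    step_vectors = vectorizer.fit_transform(steps)
    phase_model = KMeans(n_clusters=num_phases, random_state=0).fit(step_vectors)
    return vectorizer, phase_model

def phase_sequence(recipe, vectorizer, phase_model):
    """Map one recipe (list of steps) to its hard phase sequence."""
    return phase_model.predict(vectorizer.transform(recipe)).tolist()

# Usage: vectorizer, phase_model = learn_phase_model(training_recipes, num_phases=40)
#        phases = phase_sequence(["Mix the dry ingredients.", "Bake for 30 min."],
#                                vectorizer, phase_model)
```

The resulting phase sequences are what the FSM described next consumes to produce hard and soft state representations.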
FSM: The phases learnt through clustering are not ground truth phases. We explore the usage of an FSM to individually model hard and a softer representation of the phase sequences by leveraging the states in an FSM. We first describe how the hard representation is modeled. The algorithm was originally developed for building language models for limited token sets in grapheme to phoneme prediction. The iterative algorithm starts with an ergodic state for all phase types and uses entropy to find the best state split that would maximize the prediction. As opposed to phase sequences, each recipe is now represented as a state sequence (decoded from FSM) i.e, r = ⟨s1, s2, s3, s4⟩(hard states). This is a hard representation of the sequence of states. We next describe how a soft representation of these states is modeled. Since the phases are learnt in an unsupervised fashion and the ground truth of the phases is not available, we explored a softer representation of the states. We hypothesize that a soft representation of the states might smooth the irregularities of phases learnt. From the output of the FSM, we obtain the state transition probabilities from each state to every other state. Each state si can be represented as ⟨qij ∀j ∈S⟩(soft states), where qij is the state transition probability from si to sj and S is the total number of states. This is the soft representation of state sequences. The structure in the recipe is learnt as a sequence of phases and/or states (hard or soft). This is the structural scaffold that we would like to incorporate in the baseline model. In SSiD model, for each step in the recipe, we identify which phase it is in using the clustering model and use the phase sequence to decode state transitions from the FSM. The state sequences are concatenated to the decoder in the hard version and the state transition probabilities are concatenated in the decoder in the soft version at every time step. At this point, we have 2 dimensions, one is the complexity of the phases (P) and the other is the FST Complexity 1 20 40 60 80 100 120 20 Phases 11.27 11.60 12.31 13.71 12.32 12.51 12.36 40 Phases 12.03 12.44 11.48 12.58 12.50 13.91 11.82 60 Phases 11.13 11.18 12.74 12.26 12.47 12.98 11.47 Table 2: BLEU Scores for different number of phases (P) and states(S) complexity of the states in FSM (S). Comprehensive results of searching this space is presented in Table 2. We plan to explore the usage of hidden markov model in place of FSM in future. 4.3 Scaffolding Structure in Loss (SSiL): In addition to imposing structure via SSiD, we explored measuring the deviation of the structure learnt through phase/state sequences from the original structure. This leads to our next model where the deviation of the structure in the generated output from that of the original structure is reflected in the loss. The decoded steps are passed through the clustering model to get phase sequences and then state transition probabilities are decoded from FSM for the generated output. We go a step further to investigate the divergence between the phases of generated and original steps. This can also be viewed as hierarchical multitask learning (Sanh et al., 2018). The first task is to decode each step in the recipe (which uses a cross entropy criterion, L1). 
The second task uses KL divergence between phase sequences of decoded and original steps to penalize the model (say, L2).When there are τ steps in a recipe, we obtain o(sτ 1) and g(sτ 1) as the distributions of phases comprising of soft states for the original and generated recipes respectively. We measure the KL divergence(DKL) between these distributions: DKL(o(sτ 1)||g(sτ 1)) = τ X i=1 S X j=1 o(si[j])log o(si[j]) g(si[j]) Each task optimizes different functions and we minimize the combination of the two losses. P I,T∈I,T L1(I, T) + α P U∈U L2(U) This combined loss is used to penalize the model. Here, α is obtained from KL annealing (Bowman et al., 2015) function that gradually increases the weight of KL term from 0 to 1 during train time. 5 Results and Discussion The two dimensions explored in FSM are P and S and exhaustive results are presented in Table 6044 Models Phenomena Glocal Model This is a simple recipe for making a delicious chicken salad. You will need: a butter knife a plate of bread flour a little bit of salt a dash of pepper flakes a couple of tablespoons of olive oil a pinch of sugar. Add butter evenly on the pan. Put the chicken on the grill and set aside. Ingredients phase wrongly identified. Wrong ingredients. Improper conclusion. SSiD Model This is a simple recipe for making a delicious and easy dish. Ingredients: 4 pounds chicken 2 tsp salt, ½ tsp sugar, marinara sauce, mozzarella cheese ( i used provolone ). Tools: a knife, an oven for the chicken, tongs. Mix all ingredients in a bag. Add butter evenly on the pan. Serve the baked chicken wings and enjoy the evening! Learnt majority structure (step 1) + Got ‘tongs’ right because of separate tools mention. The action of baking is not explicitly mentioned (before ‘baked’ wings). SSiL Model You will need: 5 pounds of chicken wings, ½ cup all purpose flour, ½ tsp salt, 2 tsp of paprika, melted butter, silicon mat, baking pan. Preheat oven to 450 F. Mix dry ingredients in the dry ziplock bag. Place a mat on the baking pan and spread butter evenly on it. Spread the chicken pieces on butter on the baking pan. Bake until crispy for 30 minutes. Serve and enjoy! + Global context of baking maintained in preheating. + Non-repetitive ingredients phase. + Referring expressions (baking pan -> it). Not mentioned tools (tongs). Figure 3: Comparison of generated storyboards for Easy Oven Baked Crispy Chicken Wings Models BLEU METEOR ROUGE-L Glocal 10.74 0.25 0.31 SSiD (hard phases) 11.49 0.24 0.31 SSiD (hard states) 11.93 0.25 0.31 SSiD (soft phases) 13.91 0.29 0.32 SSiL (soft phases) 16.38 0.31 0.34 Table 3: Evaluation of storyboarding recipes 2. The BLEU score (Papineni et al., 2002) is the highest when P is 40 and S is 100. Fixing these values, we compare the models proposed in Table 3. The models with hard phases and hard states are not as stable as the one with soft phases since backprop affects the impact of the scaffolded phases. Upon manual inspection, a key observation is that for SSiD model, most of the recipes followed a similar structure. It seemed to be conditioned on a global structure learnt from all recipes rather than the current input. However, SSiL model seems to generate recipe that is conditioned on the structure of that particular example. Human Evaluation: We have also performed human evaluation by conducting user preference study to compare the baseline with our best performing SSiL model. 
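Returning to the SSiL objective defined earlier in this section, the following is a minimal PyTorch sketch of the combined loss. It assumes the per-step cross-entropy L1 has already been computed, that o and g are (τ × S) tensors of soft phase/state distributions for the original and generated steps, and that a simple linear ramp stands in for the KL-annealing schedule of Bowman et al. (2015).

```python
# Sketch of the SSiL objective: per-step cross-entropy plus a KL-annealed
# divergence between phase/state distributions of original (o) and generated
# (g) steps. o and g are (num_steps, num_states) rows summing to 1.
import torch

def phase_kl(o, g, eps=1e-8):
    """D_KL(o || g) summed over steps and states, mirroring the formula above."""
    return (o * (torch.log(o + eps) - torch.log(g + eps))).sum()

def kl_anneal_weight(step, total_steps):
    """Linear 0-to-1 ramp standing in for the annealing schedule."""
    return min(1.0, step / max(1, total_steps))

def ssil_loss(generation_loss, o, g, step, total_steps):
    """generation_loss plays the role of L1; phase_kl plays the role of L2."""
    alpha = kl_anneal_weight(step, total_steps)
    return generation_loss + alpha * phase_kl(o, g)
```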
We randomly sampled generated outputs of 20 recipes and asked 10 users to answer two preferences: (1) overall recipe based on images, (2) structurally coherent recipe. Our SSiL model was preferred 61% and 72.5% for overall and structural preferences respectively. This shows that while there is a viable space to improve structure, generating an edible recipe needs to be explored to improve the overall preference. 5.1 Qualitative Analysis: Figure 3 presents the generated text from the three models with an analysis described below. Coherence of Referring Expressions: Introducing referring expressions is a key aspect of coherence (Dale, 2006, 1992), as seen in the case of ‘baking pan’ being referred as ‘it’ in the SSiL model. Context Maintenance: Maintaining overall context explicitly affects generating each step. This is seen in SSiL model where ‘preheating’ in the second step is learnt from baking step that appears later although the image does not show an oven. Schema for Procedural Text: Explicit modeling of structure has enabled SSiD and SSiL models to conclude the recipe by generating words like ‘serve’ and ‘enjoy’. Lacking this structure, glocal model talks about ‘setting aside’ at the end. Precision of Entities and Actions: SSiD model introduces ‘sugar’ in ingredients after generating ‘salt’. A brief manual examination revealed that this co-occurrence is a common phenomenon. SSiL model misses ‘tongs’ in the first step. 6 Conclusions Our main focus in this paper is instilling structure learnt from FSMs in neural models for sequential procedural text generation with multimodal data. We gather a dataset of 16k recipes where each step has text and associated images. We setup a baseline inspired from the best performing model in ViST. We propose two ways of imposing structure from phases and states of a recipe derived from FSM. The first model imposes structure on the decoder and the second model imposes structure on the loss function by modeling it as a hierarchical multi-task learning problem. We show that our proposed approach improves upon the baseline and achieves a METEOR score of 0.31. We plan to explore explicit evaluation of the latent structure learnt. We plan on exploring backpropable variants as a scaffold for structure and also extend the techniques to other how-to domains in future. 6045 References Sandhya Arora, Gauri Chaware, Devangi Chinchankar, Eesha Dixit, and Shevi Jain. 2019. Survey of different approaches used for food recognition. In Information and Communication Technology for Competitive Strategies, pages 551–560. Springer. Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. 2014. Food-101–mining discriminative components with random forests. In European Conference on Computer Vision, pages 446–461. Springer. Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2017. Simulating action dynamics with neural process networks. arXiv preprint arXiv:1711.05313. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349. Robert Dale. 1992. Generating referring expressions: Constructing descriptions in a domain of objects and processes. The MIT Press. Robert Dale. 2006. Generating referring expressions. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2. Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In CVPR, volume 1, page 3. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833. Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. arXiv preprint arXiv:1902.01109. Spandana Gella, Mike Lewis, and Marcus Rohrbach. 2018. A dataset for telling the stories of social media videos. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 968–974. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. MD Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, and Hamid Laga. 2019. A comprehensive survey of deep learning for image captioning. ACM Computing Surveys (CSUR), 51(6):118. Ting-Hao Kenneth Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1233–1239. Ahmed Khalifa, Gabriella AB Barros, and Julian Togelius. 2017. Deeptingle. arXiv preprint arXiv:1705.03557. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 329–339. Taehyeong Kim, Min-Oh Heo, Seonil Son, KyoungWha Park, and Byoung-Tak Zhang. 2018. Glac net: Glocal attention cascading networks for multiimage cued story generation. arXiv preprint arXiv:1805.10973. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198. Sidi Lu, Yaoming Zhu, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Neural text generation: past, present and beyond. arXiv preprint arXiv:1803.07133. Stephanie M Lukin, Lena I Reed, and Marilyn A Walker. 2015. Generating sentence planning variations for story telling. In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 188. Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, and Antonio Torralba. 2018. Recipe1m: A dataset for learning cross-modal embeddings for cooking recipes and food images. arXiv preprint arXiv:1810.06553. Lara J Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O Riedl. 2017. Event representations for automated story generation with deep neural nets. arXiv preprint arXiv:1706.01331. Shinsuke Mori, Hirokuni Maeta, Yoko Yamakata, and Tetsuro Sasada. 2014. Flow graph corpus from recipe texts. In LREC, pages 2370–2377. Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios P Spithourakis, and Lucy Vanderwende. 2017. Imagegrounded conversations: Multimodal context for natural question and response generation. arXiv preprint arXiv:1701.08251. 6046 Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. 
Unsupervised learning of sentence embeddings using compositional n-gram features. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 528–540. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. arXiv preprint arXiv:1809.00582. Amaia Salvador, Michal Drozdzal, Xavier Giro-i Nieto, and Adriana Romero. 2018. Inverse cooking: Recipe generation from food images. arXiv preprint arXiv:1812.06164. Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2018. A hierarchical multi-task approach for learning embeddings from semantic tasks. arXiv preprint arXiv:1811.06031. Marko Smilevski, Ilija Lalkovski, and Gjorgi Madzarov. 2018. Stories for images-in-sequence by using visual and narrative components. arXiv preprint arXiv:1805.05622. Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, and Douwe Kiela. 2018. Talk the walk: Navigating new york city through grounded dialogue. arXiv preprint arXiv:1807.03367. Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding, 163:21–40. Lili Yao, Nanyun Peng, Weischedel Ralph, Kevin Knight, Dongyan Zhao, and Rui Yan. 2018. Planand-write: Towards better automatic storytelling. arXiv preprint arXiv:1811.05701. Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018. Towards automatic learning of procedures from web instructional videos. In Thirty-Second AAAI Conference on Artificial Intelligence.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6047–6052 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6047 Negative Lexically Constrained Decoding for Paraphrase Generation Tomoyuki Kajiwara Institute for Datability Science Osaka University, Osaka, Japan [email protected] Abstract Paraphrase generation can be regarded as monolingual translation. Unlike bilingual machine translation, paraphrase generation rewrites only a limited portion of an input sentence. Hence, previous methods based on machine translation often perform conservatively to fail to make necessary rewrites. To solve this problem, we propose a neural model for paraphrase generation that first identifies words in the source sentence that should be paraphrased. Then, these words are paraphrased by the negative lexically constrained decoding that avoids outputting these words as they are. Experiments on text simplification and formality transfer show that our model improves the quality of paraphrasing by making necessary rewrites to an input sentence. 1 Introduction Paraphrase generation is a generic term for tasks that generate sentences semantically equivalent to input sentences. These techniques make it possible to control information other than the meaning of the text. Typical paraphrase generation tasks include subtasks such as text simplification to control complexity, formality transfer to control formality, grammatical error correction to control fluency, and sentence compression to control sentence length. These paraphrase generation applications not only support communication and language learning but also contribute to the performance improvement of other natural language processing applications (Evans, 2011; ˇStajner and Popovi´c, 2016). Paraphrase generation can be considered as a monolingual machine translation problem. Sentential paraphrases with different complexities (Coster and Kauchak, 2011; Xu et al., 2015) and formalities (Rao and Tetreault, 2018) were created manually, and parallel corpora specialized for each subtask were constructed. As in the field of machine translation, phrasebased (Coster and Kauchak, 2011; Xu et al., 2012) and syntax-based (Zhu et al., 2010; Xu et al., 2016) methods were proposed early. In recent years, the encode-decoder model based on the attention mechanism (Nisioi et al., 2017; Zhang and Lapata, 2017; Jhamtani et al., 2017; Niu et al., 2018) has been studied, inspired by the success of neural machine translation (Bahdanau et al., 2015). In machine translation, all words appearing in an input sentence must be rewritten in the target language. However, paraphrase generation does not require rewriting of all words. When some criteria are provided, words not satisfying the criteria in the input sentence are identified and rewritten. For example, the criterion for text simplification is the textual complexity, and rewrites complex words to simpler synonymous words. Owing to the characteristics of the task where only a limited portion of an input sentence needs to be rewritten, previous methods based on machine translation often perform conservatively and fail to produce necessary rewrites (Zhang and Lapata, 2017; Niu et al., 2018). To solve the problem of conservative paraphrasing that copies many parts of the input sentence, we propose a neural model for paraphrase generation that first identifies words in the source sentence requiring paraphrasing. 
Subsequently, these words are paraphrased by the negative lexically constrained decoding that avoids outputting them as they are. We evaluate the performance of the proposed method with two major paraphrase generation tasks. Experiments on text simplification (Xu et al., 2015) and formality transfer (Rao and Tetreault, 2018) show that our model improves the quality of paraphrasing by performing necessary rewrites to an input sentence. 6048 2 Proposed Method To improve the conservative rewriting of the neural paraphrase generation, we first identify the words to be paraphrased for a given input sentence (Section 2.1). Next, we paraphrase the input sentence using the pretrained paraphrase generation model. Here, we select sentences not including those words by adding negative lexically constrained decoding to the beam search (Section 2.2). Because our method only changes the beam search, it can be applied to various paraphrase generation models and model retraining is not necessary. 2.1 Identification of Word to be Paraphrased We extract words strongly related to the source style included in the input sentence si as vocabulary Vi to be paraphrased. Point-wise mutual information is used to estimate the relatedness between each word w ∈si and style z ∈ {x, y} (Pavlick and Nenkova, 2015). Here, x and y are the source style (e.g. informal) and the target style (e.g. formal), respectively. PMI(w, z) = log p(w, z) p(w)p(z) = log p(w|z) p(w) (1) We define the vocabulary Vi to be paraphrased using the threshold θ as follows. Vi = {w | w ∈si ∧PMI(w, x) ≥θ} (2) After extracting the vocabulary Vi to be paraphrased for each input sentence si, we generate paraphrase sentences using it as a hard constraints. Note that PMI score is calculated using a training parallel corpus for paraphrase generation. 2.2 Negative Lexically Constrained Decoding Lexically constrained decoding (Anderson et al., 2017; Hokamp and Liu, 2017; Post and Vilar, 2018) adds constraints to the beam search to force the output text to include certain words. The effectiveness of these methods are demonstrated in image captioning using given image tags (Anderson et al., 2017) and in the post-editing of machine translation (Hokamp and Liu, 2017). In paraphrase generation, there is no situation that words to be included in the output sentence are given. Therefore, positive lexical constraints used in the image captioning and post-editing of machine translation cannot be applied to this task Train Dev Test Newsela 94,208 1,129 1,077 GYAFC-E&M 52,595 2,877 1,416 GYAFC-F&R 51,967 2,788 1,332 Table 1: Number of sentence pairs for each dataset. as they are. Meanwhile, negative lexical constraints that are forced to not include certain words in output sentence are promising for paraphrase generation. This is because, for example, text simplification is a task of generating sentential paraphrase without using complex words that appear in the source sentence. In this study, we add negative lexical constraints to beam search using dynamic beam allocation (Post and Vilar, 2018), which is the fastest lexically constrained decoding algorithm. In negative lexical constraints, we exclude hypotheses including the given words during beam search. Consequently, the words identified in Section 2.1 will not appear in our generated sentences. 3 Experiment We evaluate the performance of the proposed method on two major paraphrase generation tasks. We conduct experiments on text simplification and formality transfer using datasets shown in Table 1. 
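As a concrete reference for Sections 2.1 and 2.2, the sketch below estimates PMI(w, x) from a style-labelled training corpus, extracts the constraint vocabulary V_i for an input sentence, and then bans those words by masking their scores at decoding time. The masking step is a deliberate simplification: the paper applies negative constraints on subword sequences via dynamic beam allocation (Post and Vilar, 2018), not by unigram logit masking.

```python
# Sketch of PMI-based constraint extraction (Eqs. 1-2) plus a simplified
# negative constraint applied as score masking during decoding.
import math
from collections import Counter

def pmi_scores(corpus_by_style):
    """corpus_by_style: {style: list of tokenized sentences}.
    Returns {(word, style): PMI} with PMI(w, z) = log p(w|z) - log p(w)."""
    per_style_counts, word_counts, totals = {}, Counter(), {}
    for style, sentences in corpus_by_style.items():
        counts = Counter(w for sent in sentences for w in sent)
        per_style_counts[style] = counts
        word_counts.update(counts)
        totals[style] = sum(counts.values())
    total = sum(totals.values())
    scores = {}
    for style, counts in per_style_counts.items():
        for w, c in counts.items():
            scores[(w, style)] = math.log((c / totals[style]) / (word_counts[w] / total))
    return scores

def constraint_vocab(sentence_tokens, scores, source_style, theta):
    """V_i = {w in s_i : PMI(w, source_style) >= theta}."""
    return {w for w in sentence_tokens
            if scores.get((w, source_style), float("-inf")) >= theta}

def mask_banned_logits(logits, banned_ids):
    """Simplified negative constraint: banned token ids can never be emitted.
    Works on a torch or numpy logits array; a stand-in for beam-level pruning."""
    if banned_ids:
        logits[..., list(banned_ids)] = float("-inf")
    return logits
```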
For text simplification, we identify complex words in the input sentence and generate simple paraphrase sentence without using these complex words. Similarly, for formality transfer, we identify informal words in the input sentence and generate formal paraphrase sentence without using these informal words. 3.1 Setup For text simplification, we used the Newsela dataset (Xu et al., 2015) split and tokenized with the same settings as the previous study (Zhang and Lapata, 2017). For formality transfer, we used the GYAFC dataset (Rao and Tetreault, 2018) normalized and tokenized using Moses toolkit.1 For each task, we used byte-pair encoding2 (Sennrich et al., 2016) to limit the number of token types to 16, 000. In the GYAFC dataset, it is reported that a correlation exists between manual evaluation 1https://github.com/moses-smt/mosesdecoder 2https://github.com/rsennrich/subword-nmt 6049 Newsela GYAFC-E&M GYAFC-F&R Add Keep Del BLEU SARI Add Keep Del BLEU Add Keep Del BLEU RNN-Base 1.8 60.8 22.3 24.1 17.4 31.9 90.0 57.5 71.2 32.9 90.5 61.1 74.7 RNN-PMI 2.8 61.1 36.5 24.7 22.8 33.5 90.0 59.9 71.7 34.3 90.9 63.1 75.9 RNN-Oracle 10.4 82.9 89.9 36.4 40.0 34.8 92.7 72.4 75.2 35.7 93.2 74.6 79.3 SAN-Base 1.8 60.9 23.8 24.0 17.8 34.4 90.0 59.9 71.8 34.5 91.1 63.2 76.7 SAN-PMI 2.5 61.3 38.0 24.6 23.3 35.2 90.0 61.2 72.1 35.3 91.1 64.0 77.0 SAN-Oracle 10.1 82.0 89.4 35.9 39.9 36.6 92.4 71.4 75.1 36.6 92.9 73.7 79.8 Table 2: Performance of our paraphrase generation models on text simplification (complex →simple) in Newsela dataset and formality transfer (informal →formal) in GYAFC dataset. For both RNN and SAN models, our method consistently improves BLEU and SARI scores across styles or domains. In addition, a consistent improvement on Add and Del means that our method promotes active rewriting. and automatic evaluation using BLEU only when paraphrasing from an informal style to formal style (Rao and Tetreault, 2018). Therefore, we will only experiment with this setting. For lexical constraints, we identified words with a PMI score above the threshold θ. We selected a threshold θ ∈{0.0, 0.1, 0.2, ..., 0.7} that maximizes the BLEU score between the output sentence and the reference sentence in the development dataset. We calculated PMI scores using each training dataset shown in Table 1. As a paraphrase generation model, we constructed the recurrent neural network (RNN) and self-attention network (SAN) models using the Sockeye toolkit (Hieber et al., 2017).3 Our RNN model uses a single LSTM with a layer size of 512 for both the encoder and decoder, and MLP attention with a layer size of 512. Our SAN model uses a six-layer transformer with a model size of 512 and a single attention head. We used word embeddings in 512 dimensions tying the source, target, and the output layer’s weight matrix. We added dropout to the embeddings and hidden layers with probability 0.2. In addition, we used layer-normalization and label-smoothing for regularization. We trained using the Adam optimizer (Kingma and Ba, 2014) with a batch size of 4,096 tokens and checkpoint the model every 1,000 updates. The training stopped after five checkpoints without improvement in validation perplexity. BLEU (Papineni et al., 2002) is primarily used for our evaluation metrics; SARI (Xu et al., 2016) is also used for text simplification. 
For a more detailed comparison of the models, we evaluated the F1 score of the words that are added (Add), kept 3https://github.com/awslabs/sockeye (Keep), and deleted (Del) by the models.4 Our proposed method is compared with previous methods trained only on the dataset shown in Table 1. For detailed analysis, we chose the methods whose model outputs are published. Among these, Dress-LS (Zhang and Lapata, 2017) and BiFT-Ens (Niu et al., 2018) with the highest BLEU score in each task are compared with our model. Following BiFT-Ens, we also used a bidirectional domain-mixed ensemble model for formality transfer task. We also experimented with Oracle settings that can properly identify words to be paraphrased. In this setting, we used all words that did not appear in the reference sentence among the words included in the input sentence as lexical constraints. 3.2 Results The experimental results are shown in Table 2. These results in both RNN and SAN architectures and three datasets showed that our PMI-based method consistently improves the Base method that does not use constraints in both BLEU and SARI metrics. As a result of a detailed analysis of the model outputs, our PMI method always improves the Base method in terms of Add and Del in both model architectures. These results mean that our proposed method promotes active rewriting as expected. In addition, since Oracle method shows higher performance, it is worthwhile to further improve PMI-based identification. In this study, we identified words to be paraphrased using the training corpus for paraphrase generation. In future work, we plan to identify these words using not only a parallel corpus but also larger data. 4Because the test dataset of GYAFC is multi-reference, the F1 scores of each reference sentence does not reach 100. 6050 Newsela GYAFC-E&M GYAFC-F&R Add Keep Del BLEU SARI Add Keep Del BLEU Add Keep Del BLEU Source 0.0 60.3 0.0 21.4 2.8 0.0 85.4 0.0 49.1 0.0 85.8 0.0 51.0 Reference 100 100 100 100 70.3 57.2 82.9 61.2 100 56.5 82.7 60.6 100 Dress-LS 2.4 60.7 44.9 24.3 26.6 BiFT-Ens 32.1 90.0 58.2 71.4 32.6 90.6 60.9 74.5 Ours (RNN) 2.8 61.1 36.5 24.7 22.8 33.5 90.0 59.9 71.7 34.3 90.9 63.1 75.9 Ours (SAN) 2.5 61.3 38.0 24.6 23.3 35.2 90.0 61.2 72.1 35.3 91.1 64.0 77.0 Table 3: Comparison with previous models on text simplification in Newsela dataset and formality transfer in GYAFC dataset. Our models achieved the best BLEU scores across styles and domains. GYAFC-E&M: Informal →Formal Source mama so ugly, she scares buzzards off of a meat wagon. Reference Your mother is so unattractive she scared buzzards off of a meat wagon. SAN-BASE mama is so ugly, she scares buzzards off of a meat wagon. SAN-PMI The mother is so unattractive that she scares buzzards off of a meat wagon. GYAFC-F&R: Informal →Formal Source Well, if the one boy picks on you, why like him? Reference Well, if that one boy bullies you, why the attraction to him? SAN-BASE If the one boy picks on you, why like him? SAN-PMI Well, if the one boy teases you, why like him? Table 4: Examples of formality transfer. Bolded words are words that are identified as the source style (informal). We succeeded in paraphrasing as follows: mama →mother, picks on →teases. Table 3 shows a comparison between our models and comparative models. Whereas Dress-LS has a higher SARI score because it directly optimizes SARI using reinforcement learning, our models achieved the best BLEU scores across styles and domains. 
Table 4 shows examples of generated paraphrases in formality transfer task. We succeeded in identifying informal expressions of mama and picks, and successfully paraphrased them. Our proposed method avoids these informal words during beam search, and outputs their synonymous formal expressions, i.e., mother and teases. Figure 1 shows the sensitivity of the quality of generated paraphrases to PMI threshold θ on the development dataset. Too low thresholds cause a large amount of constraints, which adversely affect paraphrase quality. However, with a high threshold, the proposed method can achieve high performance stably. Finally, we used a threshold of θ = 0.5 to maximize the BLEU score on the development dataset for formality transfer tasks. Similarly, in the text simplification task, we used a threshold of θ = 0.2. 40 45 50 55 60 65 70 0.1 0.2 0.3 0.4 0.5 0.6 0.7 BLEU (Dev) θ RNN (E&M) RNN (F&R) SAN (E&M) SAN (F&R) Figure 1: Thresholds of PMI and quality of generated paraphrases on the development dataset. 4 Related Work 4.1 Style-Sensitive Paraphrase Acquisition Pavlick and Nenkova (2015) worked on a stylesensitive paraphrase acquisition. They used a large-scale raw corpus in each style to calculate PMI scores for each word or phrase and assigned style scores to paraphrase pairs in the paraphrase database (Ganitkevitch et al., 2013; Pavlick et al., 6051 2015). Pavlick and Callison-Burch (2016) further improved style-sensitive paraphrase acquisition based on supervised learning with additional features such as frequency and word embeddings. In this study, as in these previous studies, we have identified words that are strongly related to a particular style. Furthermore, we used these words to control the neural paraphrase generation model and improved the performance of sentential paraphrase generation. 4.2 Lexically Constrained Paraphrasing Hu et al. (2019b) automatically constructed a large-scale paraphrase corpus5 via lexically constrained machine translation. In a Czech–English bilingual corpus, sentence pairs of a Czech-toEnglish machine translation and an English reference can be regarded as automatically generated sentential paraphrase pairs (Wieting and Gimpel, 2018). They used words in reference sentences as positive or negative constraints and succeeded in generating diverse paraphrases via machine translation. In addition, recent work (Hu et al., 2019a) has used lexically constrained paraphrase generation for data augmentation and improve performance in some NLP applications. Unlike these previous studies, we focused on the paraphrase generation as an application. Furthermore, we have shown that negative lexical constraints consistently improve the performance of paraphrase generation applications such as text simplification and formality transfer. 5 Conclusion To improve the conservative rewriting of the paraphrase generation model, we proposed the identification of words to be paraphrased and the addition of negative lexical constraints on beam search. Experimental results on English text simplification and formality transfer indicated that the proposed method consistently improved the quality of paraphrase generation for both RNN and SAN models across styles or domains. Our proposed method deleted complex or informal words appearing in source sentences and promoted the addition of simple or formal words to paraphrased sentences. Acknowledgments We are grateful to Atsushi Fujita, Yuki Arase and Chenhui Chu for helpful discussions. 
We also 5http://decomp.io/projects/parabank/ thank anonymous reviewers for their constructive comments. This work was supported by JST, ACT-I Grant Number JPMJPR18UB, Japan. References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided Open Vocabulary Image Captioning with Constrained Beam Search. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 936–945. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations, pages 1–15. William Coster and David Kauchak. 2011. Simple English Wikipedia: A New Text Simplification Task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 665–669. Richard J. Evans. 2011. Comparing Methods for the Syntactic Simplification of Sentences in Information Extraction. Literary and Linguistic Computing, 26(4):371–388. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758–764. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A Toolkit for Neural Machine Translation. arXiv:1712.05690. Chris Hokamp and Qun Liu. 2017. Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1535–1546. J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019a. Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–850. J. Edward Hu, Rachel Rudinger, Matt Post, and Benjamin Van Durme. 2019b. ParaBank: Monolingual Bitext Generation and Sentential Paraphrasing via Lexically-constrained Neural Machine Translation. arXiv:1901.03644. 6052 Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models. In Proceedings of the Workshop on Stylistic Variation, pages 10–19. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring Neural Text Simplification Models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 85–91. Xing Niu, Sudha Rao, and Marine Carpuat. 2018. Multi-Task Neural Models for Translating Between Styles Within and Across Languages. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1008–1021. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Ellie Pavlick and Chris Callison-Burch. 2016. Simple PPDB: A Paraphrase Database for Simplification. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 143– 148. Ellie Pavlick and Ani Nenkova. 2015. Inducing Lexical Style Properties for Paraphrase and Genre Differentiation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 218–224. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better Paraphrase Ranking, Finegrained Entailment Relations, Word Embeddings, and Style Classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 425–430. Matt Post and David Vilar. 2018. Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1314–1324. Sudha Rao and Joel Tetreault. 2018. Dear Sir or Madam, May I Introduce the GYAFC Dataset: Corpus, Benchmarks and Metrics for Formality Style Transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 129–140. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715–1725. Sanja ˇStajner and Maja Popovi´c. 2016. Can Text Simplification Help Machine Translation? Baltic Journal of Modern Computing, 4(2):230–242. John Wieting and Kevin Gimpel. 2018. ParaNMT50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 451– 462. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in Current Text Simplification Research: New Data Can Help. Transactions of the Association for Computational Linguistics, 3:283– 297. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing Statistical Machine Translation for Text Simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Wei Xu, Alan Ritter, William B. Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for Style. In Proceedings of the 24th International Conference on Computational Linguistics, pages 2899– 2914. Xingxing Zhang and Mirella Lapata. 2017. Sentence Simplification with Deep Reinforcement Learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584–594. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A Monolingual Tree-based Translation Model for Sentence Simplification. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1353–1361.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6053–6058 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6053 Large-Scale Transfer Learning for Natural Language Generation Sergey Golovanov *1, Rauf Kurbanov *1, Sergey Nikolenko *12, Kyryl Truskovskyi *1, Alexander Tselousov *, and Thomas Wolf *3 1Neuromation OU, Liivalaia tn 45, 10145 Tallinn, Estonia 2Steklov Mathematical Institute at St. Petersburg, nab. r. Fontanki 27, St. Petersburg 191023, Russia 3Huggingface Inc., 81 Prospect St. Brooklyn, New York 11201, USA [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] *All authors contributed equally, names in alphabetical order. Abstract Large-scale pretrained language models define state of the art in natural language processing, achieving outstanding performance on a variety of tasks. We study how these architectures can be applied and adapted for natural language generation, comparing a number of architectural and training schemes. We focus in particular on open-domain dialog as a typical high entropy generation task, presenting and comparing different architectures for adapting pretrained models with state of the art results. 1 Introduction Over the past few years, the field of natural language processing (NLP) has witnessed the emergence of transfer learning methods which have significantly improved the state of the art (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018). These methods depart from classical supervised machine learning where a predictive model for a given task is trained in isolation on a single dataset. Here, a model is pretrained on large text corpora and then fine-tuned on the target task. Such models are usually evaluated on natural language understanding (NLU) tasks such as text classification or question answering (Wang et al.; Rajpurkar et al., 2016), but natural language generation (NLG) tasks such as summarization, dialog, or machine translation remain relatively underexplored. At first glance, large-scale pretrained models appear to be a natural fit for NLG since their pretraining objectives are often derived from language modeling. However, interesting questions and problems still arise. We consider a text-only NLG task where the generation of an output sequence of symbols y = (y1,...,ym) is conditioned on a context X = (x1,...,xK) composed of one or several sequences of symbols xk = (xk 1,...,xk n). Several types of contexts may warrant different treatment in the model. E.g., in case of dialog generation they may include: (i) facts from a knowledge base, (ii) dialog history, and (iii) the sequence of already generated output tokens (y1,...,ym−1). Thus, there arises a general question of how to adapt a singleinput pretrained model to a multi-input downstream generation task. In this work, we study two general schemes to adapt a pretrained language model to an NLG task. In the single-input setting, contexts are concatenated to create a sequence prefix from which the output is decoded as a continuation by the pretrained language model following Radford et al. (2018, 2019). The model can be used as is or with a small number of special token embeddings added to the vocabulary to identify the contexts. 
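A minimal sketch of the single-input construction: persona facts and dialog history are concatenated into one flat prefix, together with a parallel sequence of context-type ids that can later be mapped to the context-type embeddings discussed below. The separator token names and the whitespace tokenizer are illustrative placeholders, not the exact vocabulary or BPE used in the paper.

```python
# Sketch: build a single concatenated prefix plus context-type ids from
# persona facts and dialog history. Token names are illustrative.
SPECIAL = {"info": "<info>", "p1": "<p1>", "p2": "<p2>"}
TYPE_ID = {"info": 0, "p1": 1, "p2": 2}

def build_single_input(persona_facts, history, tokenize):
    """history: list of (speaker, utterance) pairs with speaker in {"p1", "p2"}."""
    tokens, type_ids = [], []
    for fact in persona_facts:
        segment = [SPECIAL["info"]] + tokenize(fact)
        tokens += segment
        type_ids += [TYPE_ID["info"]] * len(segment)
    for speaker, utterance in history:
        segment = [SPECIAL[speaker]] + tokenize(utterance)
        tokens += segment
        type_ids += [TYPE_ID[speaker]] * len(segment)
    return tokens, type_ids

# Example, with whitespace tokenization standing in for BPE:
toks, types = build_single_input(
    ["i like to ski", "i hate mexican food"],
    [("p1", "hi"), ("p2", "hello ! how are you today ?")],
    tokenize=str.split,
)
```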
In the multi-input setting, the pretrained model is duplicated to form an encoder-decoder structure where the encoder processes contexts while the decoder generates the output. 2 Related work Unsupervised pretraining for transfer learning has a long history in natural language processing, and a common thread has been to reduce the amount of task-specific architecture added on top of pretrained modules. Most early methods (Mikolov et al., 2013; Pennington et al., 2014) focused on learning word representations using shallow models, with complex recurrent or convolutional networks later added on top for specific tasks. With 6054 Persona for Speaker 1 (P1) I like to ski My wife does not like me anymore I have went to Mexico 4 times this year I hate Mexican food I like to eat cheetos P1: Hi P2: Hello! How are you today? P1: I am good thank you, how are you. P2: Great, thanks! My children and I were just about to watch Game of Thrones. P1: Nice! How old are your children? P2: I have four that range in age from 10 to 21. You? P1: I do not have children at the moment. P2: That just means you get to keep all the popcorn for yourself. P1: And Cheetos at the moment! P2: Good choice. Do you watch Game of Thrones? P1: No, I do not have much time for TV. P2: I usually spend my time painting: but, I love the show. Table 1: Sample dialogue from PersonaChat with persona facts for Speaker 1 (P1). Speaker 2 (P2) also has a random persona (not shown). increased computing capacities, it has now become feasible to pretrain deep neural language models. Dai and Le (2015); Ramachandran et al. (2016) proposed unsupervised pretraining of a language model for transfer learning and to initialize encoder and decoder in a seq2seq model for machine translation tasks. Works in zero-shot machine translation used large corpora of monolingual data to improve performances for lowresource languages (Johnson et al., 2017; Wada and Iwata, 2018; Lample and Conneau, 2019). Most of the work transfering large-scale language models from and for monolingual NLG tasks focus on classification and natural language understanding (Kiros et al., 2015; Jozefowicz et al., 2016). Recently, Radford et al. (2019) studied large-scale language models for various generation tasks in the zero-shot setting focusing on summarization and translation and Wolf et al. (2019) presented early work on chit-chat. 3 Problem setting and dataset NLG tasks can be divided into high entropy (story generation, chit-chat dialog) and low entropy (summarization, machine translation) tasks. We focus on the high entropy task of chit-chat dialog to study the use and effect of various types of contexts: facts, history and previous tokens. Table 1 shows a typical dialog from PersonaChat (Zhang et al., 2018b), one of the largest multi-turn open-domain dialog dataset available. PersonaChat consists of crowdsourced conversations between real human beings who were asked to chit-chat. Each participant was given a set of 4-5 profile sentences that define his/her persona (a) Single-input model Current prefix Dialog history Persona facts Single input (concatenated) Decoder Linear Output (b) Multi-input model Current prefix Dialog history Persona facts Encoder Decoder Linear Output Figure 1: General model architectures: (a) single-input model; (b) multi-input model. Token embeddings Positional embeddings Context type embeddings (a) Single-input model I am an artist Hi Hello ! How are you today ? + + (b) Multi-input model < i > I am an artist < /i > < p1 > Hi < /p1 > < p2 > Hello ! 
How are you today ? < /p2 > + + Figure 2: Token embeddings: (a) single-input model with CTE; (b) multi-input model with start/end tokens. ×N Input Embedding Multi-Head Attention Layer Normalization Feedforward Layer Layer Normalization Output + + Figure 3: OpenAI GPT ×N Dialog history embed. Current state embed. Persona info embed. Multihead att. Multihead att. Multihead att. Avg Layer Normalization + Feedforward layer + Layer Normalization ... ... ... Figure 4: Multi-input Transformer-based architecture. 6055 for the conversation and asked to chitchat naturally and try to get to know each other. The dataset contains 162,064 utterances over 10,907 dialogs with 1,155 possible personas and 7 speaker turns per dialogue on average. Although it is one of the largest multi-turn dialogue datasets, PersonaChat is still too small to train a large-scale model; state of the art models trained directly on PersonaChat are very prone to overfitting (Dinan et al., 2019), hence the motivation for the present work. 4 Single- and multi-input adaptation While we expect many more large-scale pretrained language models to become publicly available soon (Radford et al., 2019), our work is based on the only large-scale pretrained language model that was available at the time of this study, the OpenAI GPT (Radford et al., 2018). We refer to this publication for the details of the model, which is a 12-layer decoder-only Transformer (Vaswani et al., 2017) with masked multi-head attention. The model uses a bytepair encoding (BPE) vocabulary (Sennrich et al., 2015) with 40,000 merges and learned positional embeddings for sequences with at most 512 positions. We now detail the various adaptation schemes we used to adapt this model to the task of opendomain dialogue. More specifically, in our target task the inputs to the model are: (i) a set of personality sentences, (ii) a dialog history involving two speakers, and (iii) the history of previously generated tokens for auto-regressive generation. In the first adaptation setting, which we call the single-input model, the pretrained language model is used as is to generate an output sequence y = (y1,...,ym) without any architectural modifications. Contexts are concatenated to create a sequence prefix from which the output is then decoded as a continuation. In this direction, several ways to construct prefixes from heterogeneous contexts can be investigated: (i) concatenating contexts with natural separators to make the test data distribution close to the training data (Radford et al., 2019) (in our case we added double quotes to each utterance to mimic dialog punctuation); (ii) concatenating contexts with additional spatial-separator tokens (fine-tuned on the target task) to build an input sequence (Radford et al., 2018); (iii) concatenating contexts and supplementing the input sequence with a parallel sequence of context-type embeddings (CTE) to be added to the token and positional embeddings (Devlin et al., 2018). Each CTE shows the context type for its input token as shown on Fig. 2a: winfo CTE for persona info, wp1 CTE for dialog history coming from person 1, and wp2 CTE for person 2. These vectors are also fine-tuned on the target task. In the second adaptation scheme, the multiinput model, the pretrained language model is duplicated in an encoder-decoder architecture (Fig. 1b). 
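Before turning to the encoder-decoder details, a small PyTorch sketch of how the context-type embeddings of option (iii) combine with the token and positional embeddings: one embedding table per role, summed position-wise. The 512-dimensional sizes follow the GPT setup described above; the module itself is illustrative rather than the authors' exact code.

```python
# Sketch of the option (iii) input layer: token + positional + context-type
# embeddings summed per position.
import torch
import torch.nn as nn

class ContextTypeInputEmbedding(nn.Module):
    def __init__(self, vocab_size, num_context_types=3,
                 max_positions=512, dim=512):
        super().__init__()
        self.tokens = nn.Embedding(vocab_size, dim)
        self.positions = nn.Embedding(max_positions, dim)
        self.context_types = nn.Embedding(num_context_types, dim)

    def forward(self, token_ids, type_ids):
        # token_ids, type_ids: (batch, seq_len) LongTensors, e.g. built from
        # the prefix-construction sketch above.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return (self.tokens(token_ids)
                + self.positions(positions)[None, :, :]
                + self.context_types(type_ids))
```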
Similar to the single-input model, natural separators, spatial-separator tokens or contexttype embeddings can be added for each persona fact and dialog utterance, surrounding the corresponding text with these tokens as preprocessing, as shown on Fig. 2b. Persona information and dialogue history are successively processed in the encoder (Fig. 4) to obtain two respective sequences of vector representations to be used as input to the decoder model. The multi-head attention layers of the decoder are modified to process the three inputs as follows (see Fig. 4). We copy the multi-headed attention layer of the decoder three times—for the embeddings of the current state, persona facts, and dialog history—averaging the results (Zhang et al., 2018a). The weights in both encoder and decoder are initialized from the pretrained model. Using both encoder and decoder allows to separate the contexts (dialogue history and persona information) and alleviate the maximal length constraint of 512 tokens. Weight sharing between encoder and decoder reduces the total number of model parameters and allows for multi-task learning. On the other hand, untying the decoder and encoder lets the attention heads and architectures specialize for each task. 5 Results We have performed a series of quantitative evaluation on the test subset of the PersonaChat dataset as well as a few quantitative evaluations. Following the recommendations of the Endto-End conversation Modeling Task at DSTC-7 Workshop (Michel Galley and Gao), we evaluated the models on the following set of metrics: METEOR (Lavie and Agarwal, 2007), NIST-4, BLEU (Papineni et al., 2002) as well as diversity metrics: Entropy-4, Distinct-2, and the average length of the generated utterances. Table 2 illustrates the results for three typical models: the single-input model in the zero-shot set6056 Model METEOR NIST-4 BLEU Entropy-4 Distinct-2 Average Length Single-input (zero-shot) 0.07727 1.264 2.5362 9.454 0.1759 9.671 Single-input (additional embeddings) 0.07641 1.222 2.5615 9.234 0.1614 9.43 Multi-input 0.07878 1.278 2.7745 9.211 0.1546 9.298 Table 2: Selected evaluation results and statistics. 1 1.5 2 2.5 3 3.5 4 8·10−2 0.1 0.12 0.14 0.16 Training epochs Word counts SIM, persona MIM, persona SIM, history MIM, history SIM, both MIM, both 1 2 3 4 2.4 2.5 2.6 Training epochs BLEU SIM, BLEU MIM, BLEU 0.071 0.073 0.075 0.077 0.079 METEOR SIM, METEOR MIM, METEOR Figure 5: Results for single- (SIM) and multi-input (MIM) models; left: word statistics; right: evaluation metrics. ting (no modification) and with additional embeddings fine-tuned on the target task, and the multi-input model in which the encoder and decoder are not shared, which is thus a high-capacity model in comparison to the previous two models. We can see that both approaches reach comparable performances on the automatic metrics with the multi-input model performing better on METEOR, NIST-4 and BLEU. We investigated in greater detail the evolution of the single-input and multi-input models during training to understand the origin of their differences. To this aim, we tagged the words generated by each model according to four categories: (i) content words that were mentioned in the persona facts, (ii) content words that were mentioned in the dialog history, (iii) content words that were mentioned in both, and (iv) all other generated words. Fig. 5 shows the statistics of these types of words along a representative training run obtained using compare-mt (Neubig et al., 2019). 
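As a rough illustration of this tagging scheme, the bucket counts behind Fig. 5 can be recomputed with a few lines of plain Python. The stop-word list and whitespace tokenization below are simplifications of ours, not the exact filtering used by compare-mt.

```python
# Sketch of the word-category statistics of Fig. 5: each generated content word
# is assigned to one of four buckets depending on whether it also appears in
# the persona facts, the dialog history, both, or neither.
from collections import Counter

STOP_WORDS = {"i", "you", "the", "a", "an", "to", "and", "is", "are", "do", "not"}

def content_words(texts):
    return {w for t in texts for w in t.lower().split() if w not in STOP_WORDS}

def tag_generated_words(generated, persona_facts, history):
    persona_voc = content_words(persona_facts)
    history_voc = content_words(history)
    counts = Counter()
    for w in generated.lower().split():
        if w in STOP_WORDS:
            counts["other"] += 1
        elif w in persona_voc and w in history_voc:
            counts["both"] += 1
        elif w in persona_voc:
            counts["persona"] += 1
        elif w in history_voc:
            counts["history"] += 1
        else:
            counts["other"] += 1
    return counts

print(tag_generated_words("i love painting and game of thrones",
                          ["i usually spend my time painting"],
                          ["do you watch game of thrones ?"]))
```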
An interesting observation is that single-input and multi-input models adopt differing behaviors which can be related to an intrinsic difference between two contextual inputs: dialog history and personality facts. While dialog history is very related to sequentiality, personality facts are not sequential in essence: they are not ordered, a welltrained model should be invariant to the ordering of the facts. Moreover, a personality fact can be relevant anywhere in a dialog. On the contrary, dialog history is sequential; it cannot be reordered freely without changing the meaning of the dialog and the relevance of a particular utterance of the dialog history is strongly dependent on its location in the dialog: older history becomes less relevant. This difference in nature can be related to differences in the models. Single-input adaptation is closer to a bare language-model and the comparison with multi-input model shows that the former tends to stay closer to the dialog history and consistently uses more words from the history than multi-input model. On the other hand, splitting encoder and decoder makes persona facts available to the multi-input model in a non-sequential manner and we can see that the multi-input model use more and more persona facts as the training evolves, out-performing the single-input model when it comes to reusing words from persona facts. We also note that the multi-input model, with its unshared encoder and decoder, may be able to specialize his sub-modules. 6 Conclusion In this work, we have presented various ways in which large-scale pretrained language models can be adapted to natural language generation tasks, comparing single-input and multi-input solutions. This comparison sheds some light on the characteristic features of different types of contextual inputs, and our results indicate that the various archi6057 tectures we presented have different inductive bias with regards to the type of input context. Further work on these inductive biases could help understand how a pretrained transfer learning model can be adapted in the most optimal fashion to a given target task. References Andrew M. Dai and Quoc V. Le. 2015. Semisupervised Sequence Learning. arXiv:1511.01432 [cs]. ArXiv: 1511.01432. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098. Jeremy Howard and Sebastian Ruder. 2018. Finetuned language models for text classification. CoRR, abs/1801.06146. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vigas, Martin Wattenberg, and Greg Corrado. 2017. Googles multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv:1602.02410 [cs]. ArXiv: 1602.02410. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-Thought Vectors. arXiv:1506.06726 [cs]. ArXiv: 1506.06726. Guillaume Lample and Alexis Conneau. 2019. 
Cross-lingual Language Model Pretraining. arXiv:1901.07291 [cs]. ArXiv: 1901.07291. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 228–231, Stroudsburg, PA, USA. Association for Computational Linguistics. Xiang Gao Bill Dolan Michel Galley, Chris Brockett and Jianfeng Gao. End-to-end conversation modeling: Dstc7 task 2 description. In DSTC7 workshop (forthcoming). Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. pages 3111–3119. Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, and Xinyi Wang. 2019. compare-mt: A tool for holistic comparison of language generation systems. In Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL) Demo Track, Minneapolis, USA. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. CoRR, abs/1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. page 24. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv:1606.05250 [cs]. ArXiv: 1606.05250. Prajit Ramachandran, Peter J. Liu, and Quoc V. Le. 2016. Unsupervised Pretraining for Sequence to Sequence Learning. arXiv:1611.02683 [cs]. ArXiv: 1611.02683. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Takashi Wada and Tomoharu Iwata. 2018. Unsupervised cross-lingual word embedding by multilingual neural language models. arXiv preprint arXiv:1809.02306. 6058 Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. page 14. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents. arXiv:1901.08149 [cs]. ArXiv: 1901.08149. Biao Zhang, Deyi Xiong, and Jinsong Su. 2018a. Accelerating neural transformer via an average attention network. CoRR, abs/1805.00631. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018b. 
Personalizing dialogue agents: I have a dog, do you have pets too? CoRR, abs/1801.07243.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6059–6064 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6059 Automatic Grammatical Error Correction for Sequence-to-sequence Text Generation: An Empirical Study Tao Ge Xingxing Zhang Furu Wei Ming Zhou Microsoft Research Asia {tage, xizhang, fuwei, mingzhou}@microsoft.com Abstract Sequence-to-sequence (seq2seq) models have achieved tremendous success in text generation tasks. However, there is no guarantee that they can always generate sentences without grammatical errors. In this paper, we present a preliminary empirical study on whether and how much automatic grammatical error correction can help improve seq2seq text generation. We conduct experiments across various seq2seq text generation tasks including machine translation, formality style transfer, sentence compression and simplification. Experiments show the state-of-the-art grammatical error correction system can improve the grammaticality of generated text and can bring taskoriented improvements in the tasks where target sentences are in a formal style. 1 Introduction Sequence-to-sequence (seq2seq) text generation (Cho et al., 2014; Sutskever et al., 2014) has attracted growing attention in natural language processing (NLP). Despite various advantages of seq2seq models, they tend to have a weakness: there is no guarantee that they can always generate sentences without grammatical errors. Table 1 shows examples generated by seq2seq models in various tasks with grammatical errors. One valid solution to this challenge is conducting grammatical error correction (GEC) for machine generated sentences. Recent GEC systems (Chollampatt and Ng, 2018; Junczys-Dowmunt et al., 2018; Grundkiewicz and Junczys-Dowmunt, 2018; Ge et al., 2018a,b) can achieve human-level performance in GEC benchmarks. We are curious whether they can help improve seq2seq based natural language generation (NLG) models. We therefore propose an empirical study on GEC post editing for various text generation tasks (i.e., machine translation, style transfer, sentence compresTasks Examples Machine Translation Das Team-Ereignis ist immer am besten. →The team event is always (the) best. Style Transfer = ) who do u thinks better? →Who do you think (is) better? Sentence Compression Mickey Rooney died yesterday age 93 at his home in Studio City, California... →Mickey Rooney died yesterday (at) age 93. Table 1: Seq2seq model outputs for German-English translation, formality style transfer and sentence compression. The texts in round brackets are edits by GEC. sion and simplification) using both automatic and human evaluation methods. Experimental results demonstrate that a state-of-the-art GEC system is helpful for improving the grammaticality of generated text and that it can bring task-oriented improvements in the tasks where target sentences are in a formal style. The contributions of this paper are twofold: • We present an empirical study on GEC post editing for seq2seq text generation. To the best of our knowledge, it is the first work to study improving seq2seq based NLG models using GEC. • We show some interesting results by thoroughly comparing and analyzing GEC post editing for various seq2seq text generation tasks, shedding light on the potential of GEC for NLG. 2 Background 2.1 Sequence-to-sequence Text Generation The sequence-to-sequence (seq2seq) framework has been proven to be successful for many NLP tasks. 
Given a source sentence xs, a seq2seq model learns to predict its target sentence xt. It usually has an encoder to learn the representation of xs and a decoder to generate xt based on the encoded representation of xs. The model is 6060 usually trained by minimizing the negative loglikelihood of the training source-target sentence pairs. During inference, an output sequence xo is generated (one token at a time) with beam search by maximizing PΘ(xo|xs). 2.2 Automatic Grammatical Error Correction Most recent GEC systems are based on the seq2seq framework and are trained with errorcorrected sentence pairs. Due to massive training data, the state-of-the-art GEC system (Grundkiewicz and Junczys-Dowmunt, 2018; Ge et al., 2018b) can achieve human-level performance in GEC benchmarks and be practically used for correcting grammatical errors. 3 Experiments and Evaluation We use the state-of-art GEC system (Ge et al., 2018b) as our GEC model which is a 7-layer convolutional seq2seq model trained with a fluency boost learning strategy on both original GEC training data and augmented fluency boost sentence pairs. We use the GEC model to do post editing for sentences decoded by a seq2seq model to test if GEC improves the results. We choose machine translation, style transfer, sentence compression and simplification as typical seq2seq text generation tasks. Due to the page limit, the detailed configuration of the models we implemented in this section are put in the supplementary notes. 3.1 Machine translation We take Machine translation (MT) as the main task to study whether GEC helps improve translation quality. We conduct experiments by using GEC to edit the results of the state-of-the-art neural machine translation (NMT) system (Google Translate) on the French-English (FR-EN) in WMT14, German-English (DE-EN) and ChineseEnglish (ZH-EN) news test sets in WMT17. Table 2 shows BLEU with/without post-editing by the GEC system. Although GEC post-editing does not improve BLEU much, when we look into the results by analyzing the sentences edited by GEC, we observe only a small proportion of sentences are modified by the GEC system – approximately 5% in FR-EN and DE-EN, while 10% in ZH-EN test sets. The sentence-level BLEU of around 50% of the edited sentences are improved, NMT NMT+GEC #edited FR-EN 38.70 38.69 (−0.01) 131 (63↑68↓) out of 3,003 DE-EN 35.45 35.48 (+0.03) 141 (65↑76↓) out of 3,004 ZH-EN 28.85 28.96 (+0.11) 271 (148↑123↓) out of 2,001 Table 2: BLEU with/without post editing by GEC. #edited shows the number of sentences modified by GEC, where ↑and ↓indicate the number of sentences whose BLEU improves or decreases. MT MT+GEC Unsupervised SMT 27.09 27.33 (+0.24) Unsupervised NMT 28.30 28.52 (+0.22) Google Translate 38.70 38.69 (−0.01) Table 3: BLEU of the unsupervised SMT and NMT systems in the WMT14 FR-EN test set. while the remaining suffer a BLEU decrease. To understand the reasons for the BLEU changes, we manually check each sentence edited by GEC in WMT14 FR-EN dataset and show the results in Table 4. The main reason (90.5% cases) for a BLEU improvement is that GEC corrects errors in NMT’s results and improves the translation quality. In contrast, the reasons why BLEU decreases are various. First, the correction of grammatical errors by GEC may decrease BLEU though it improves the sentence’s grammaticality, as shown in Table 4. Second, the GEC system is not perfect: it sometimes edits a sentence without grammatical errors. 
Even though such edits usually bring no adverse effects, it is likely to decrease BLEU. Last, we find reference sentences occasionally have grammatical errors, as Reference Error in Table 4 shows. When GEC fixes the errors in such cases, BLEU decreases. Moreover, we test the effects of GEC on MT in a low resource setting. We use the state-of-the-art unsupervised SMT and NMT model in Ren et al. (2019) and use the GEC system to edit their results. According to the results shown in Table 3, the unsupervised MT systems benefit more from GEC than the state-of-the-art supervised NMT (i.e, Google translate) because they are more likely to generate sentences that are not fluent than the supervised MT models, which can be addressed by GEC. We also conduct experiments on the WMT17 Automatic Post-Editing (APE) task. However, we observe a large number of grammatical errors in the references which make the automatic evaluation less reliable. We include the results in the supplementary notes due to the page limit. 6061 BLEU change Reasons Examples BLEU↑(63) Correction (90.5%) NMT: They know their business better than anyone. (76.7) GEC: They know their business better than anyone else. (100) REF: They know their business better than anyone else. Accidental (9.5%) NMT: But this pacified identity only had a time. (12.1) GEC: But this pacified identity only had time. (12.2) REF: Yet, this pacified identity has had its day. BLEU↓(68) Correction (52.9%) NMT: It’s good child, it’s cool. (51.3) GEC: It’s a good child, it’s cool. (45.2) REF: It’s relaxed, it’s cool. GEC Error (30.9%) NMT: At the piano, dancers take turns to play the scores. (100) GEC: At the piano, dancers take turns playing the scores. (64.1) REF: At the piano, dancers take turns to play the scores. Reference Error (16.2%) NMT: FAA may lift ban on certain electronic devices during take-off and landing (46.6) GEC: FAA may lift a ban on certain electronic devices during take-off and landing (16.3) REF: FAA may lift ban on some electronic devices during takeoff and landing Table 4: Reasons for BLEU changes in WMT14 FR-EN dataset. The numbers in the round brackets following example sentences are sentence-level BLEU. Informal→Formal Formal→Informal BLEU Acc BLEU Acc Transformer 73.79 83.0 38.49 68.7 Transformer+GEC 74.84 84.2 38.85 47.1 State-of-the-art 75.37 39.09 Table 5: Results for GEC post-editing on formality style transfer on the GYAFC test set in “Family & Relationships” domain, containing about 1,000 sentences. Acc is evaluated with the help of a CNN model for style classification. The state-of-the-art (Niu et al., 2018) is an ensemble model trained with additional data. 3.2 Formality style transfer In addition to MT, we test GEC on the text style transfer task. We study formality style transfer which transfers an informal (formal) sentence to a formal (informal) style and choose GYAFC corpus (Rao and Tetreault, 2018) as our testbed. We use a 2-layer transformer model as our base model and train a model with approximately 100K parallel sentences in the GYAFC corpus for informal→ informal and formal→informal respectively. We use the GEC model to edit the base models’ outputs, and show the result in Table 5. While GEC improves BLEU in both transfer directions, we observe differences when we look into style accuracy. 
For Informal→Formal transfer, accuracy is improved (83.0% →84.2%) after GEC post editing; while for Formal→Informal transfer, it decreases (68.7% →47.1%) because grammaticality improvements by GEC may make a sentence become less like an informal sentence. 3.3 Sentence compression and simplification We also test effects of GEC post-editing on sentence compression and simplification. For sentence compression, following Filippova et al. (2015), we train a 2-layer LSTM seq2seq model, which generates a 0/1 sequence to indicate whether to delete a word, as our base model and test on Google’s sentence compression dataset1 (GoogComp). For sentence simplification, we use the state-of-the-art deep reinforcement model DRESS (Zhang and Lapata, 2017) as our base model and test on Newsela text simplification dataset. Table 6 shows the results for the effects of GEC on sentence compression and simplification. For sentence compression, BLEU decreases from 60.38 to 58.77 after GEC post editing. We manually analyze the results and find there are many grammatical errors in the reference sentences. This is not surprising, since the reference sentences are constructed with an automatic approach (Filippova and Altun, 2013). The grammatical errors in the references affect the BLEU evaluation and make it less reliable. The BLEU decrease is also observed in sentence simplification task but for a different reason. In the Newsela dataset, the reference sentences are written by humans and therefore have much fewer grammatical errors compared to GoogComp. In contrast to sentence compression where reference errors are the main reason for the BLEU decrease, the BLEU decrease in sentence simplification usually happens in the cases where the correction of grammatical errors reduces the sentence’s n-gram overlap with the reference sentence, as shown in Table 6 (similar to the phenomenon observed in the experiments for MT; see Table 4). In addition, GEC errors and occasional errors in reference sen1https://github.com/google-research-datasets/sentencecompression 6062 Tasks #edited BLEU BLEU change Reasons Examples Sentence Compression 110 60.38 ↓ 58.77 10↑ Accidental (100%) Base: Domestic flights were cancelled Sunday. (9.4) GEC: Domestic flights were cancelled on Sunday. (9.9) REF: Several domestic flights were cancelled due to the bad weather. 100↓ Reference Error (45.0%) Base: An tanker caught fire in a garage. (100) GEC: A tanker caught fire in a garage. (84.1) REF: An tanker caught fire in a garage. Correction (37.0%) Base: A undersea earthquake shook eastern Indonesia. (72.9) GEC: An undersea earthquake shook eastern Indonesia. (70.1) REF: A strong undersea earthquake shook eastern Indonesia. GEC Error (18.0%) Base: Nine persons were arrested over the weekend. (25.1) GEC: Nine people were arrested over the weekend. (4.8) REF: Nine persons were arrested after a series of drug finds. Sentence Simplification 96 22.64 ↓ 22.54 41↑ Correction (51.2%) Base: She also speak to younger women who are interested in science and math. (77.4) GEC: She also speaks to younger women who are interested in science and math. (85.6) REF: She speaks to younger women who are interested in science and math. Accidental (48.8%) Base: For mining, there’s the International Seabed Authority. (11.9) GEC: For mining, there is the International Seabed Authority. (12.4) REF: The International Seabed Authority is for mining. 55↓ Correction (58.2%) Base: The rocks moves forward for a few days. (1.3) GEC: The rocks move forward for a few days. 
(1.2) REF: The lava moves for a few days, then stops for weeks before starting again. GEC Error (36.4%) Base: In 2010, a group of chimpanzees was sent from the Netherlands to a zoo in Scotland. (51.7) GEC: In 2010, a group of chimpanzees were sent from the Netherlands to a zoo in Scotland. (45.0) REF: In 2010, a group of chimpanzees was taken from a zoo in the Netherlands. Reference Error (5.5%) Base: Richie wrote the winning word “magician.” (35.5) GEC: Richie wrote the winning word “magician”. (7.9) REF: The winning word was “magician.” Table 6: Results for sentence compression and sentence simplification. As in Table 4, the numbers in the round brackets following the example sentences are sentence-level BLEU. tences lead to a decrease of BLEU after GEC post editing. 3.4 Human Evaluation In addition to automatic evaluation (e.g., BLEU), we present human evaluation results for GEC post editing on the tasks. The evaluation includes two aspects: First, we evaluate how much helpful GEC is for improving the grammaticality of sentences generated by the seq2seq models, which is independent to a specific task; Second, we evaluate if GEC’s edits bring task-oriented improvements. The evaluation is done by a human judge through comparing the results with/without GEC’s edits. Table 7 shows the human evaluation results. For most sentences edited by GEC, their grammaticality is improved; while the bad cases are only in a small proportion (≤10%) in all the six tasks. In contrast, the task-oriented improvements vary across the tasks. For example, for Informal→Formal style transfer, GEC performs well because most of its edits improve the sentences’ grammaticality and make the sentences become more formal; in contrast, for Formal→Informal style transfer, GEC improves sentences’ grammaticality but affects their styles, making them become less informal. Moreover, it is observed that GEC is more beneficial to the seq2seq models trained in a low resource setting, by comparing the results of supervised and unsupervised MT, which is consistent with results in Table 3. For sentence compression and simplification, many grammatical improvements do not bring task-oriented improvements. The reason is that the parts GEC edits are not the content that should be kept in the results. Also, it is notable that except for Formal→Informal style transfer whose target sentences should be in an informal style, GEC brings much more improvements than adverse effects on the tasks, demonstrating the potential of GEC for NLG. 4 Related Work and Discussion The most related work to ours is the automatic post editing (APE) (Bojar et al., 2016) which has been extensively studied for MT (e.g., (Pal et al., 2016, 2017; Chatterjee et al., 2017; Hokamp, 2017; Tan et al., 2017)) in the past few years. These APE approaches are usually trained with source language input data, target language MT output and target language post editing (PE) data. Although these APE models and systems have proven to be successful in improving MT results, they are taskspecific and cannot be used for other NLG tasks. In contrast, we propose a general post editing approach by applying the current state-of-the-art GEC system to editing the outputs of NLG systems. To the best of our knowledge, this is the first attempt to explore improving seq2seq based NLG models with a state-of-the-art neural GEC system despite some early studies on post-processing SMT outputs using a (mainly rule-based) grammar checker (Stymne and Ahrenberg, 2010). 
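To make this task-agnostic setup concrete, the post-editing loop used throughout the experiments above can be sketched as follows. Here `nlg_generate` and `gec_correct` are placeholders standing in for an arbitrary seq2seq system and a GEC system; they are not real APIs, and the toy behaviors are only meant to show the wiring.

```python
# Sketch of the general post-editing setup: run any NLG system, pass its output
# through a GEC system, and keep track of which outputs were actually edited
# (the edited subset is what the per-sentence BLEU analyses above are based on).

def nlg_generate(source):
    # Placeholder for an NLG model (MT, style transfer, compression, ...).
    return source.lower()

def gec_correct(sentence):
    # Placeholder for a GEC model; here it fixes one article error.
    return sentence.replace("a undersea", "an undersea")

def post_edit_corpus(sources):
    outputs, n_edited = [], 0
    for src in sources:
        hyp = nlg_generate(src)
        corrected = gec_correct(hyp)
        if corrected != hyp:
            n_edited += 1
        outputs.append(corrected)
    return outputs, n_edited

outs, n_edited = post_edit_corpus(["A undersea earthquake shook eastern Indonesia."])
print(n_edited, outs)
```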
Experiments show GEC post editing can effectively improve the grammaticality of generated text and lead to a task-oriented improvement in the NLG 6063 Tasks #edited / #all Grammaticality Task-oriented ↑ ↓ → ↑ ↓ → Supervised FR-EN NMT 131 / 3,003 79% 10% 11% 63% 10% 27% Unsupervised FR-EN NMT 474 / 3,003 85% 4% 11% 80% 4% 16% Informal→Formal 143 / 1,332 74% 6% 20% 61% 6% 33% Formal→Informal 259 / 1,019 91% 2% 7% 4% 79% 17% Sentence compression 110 / 2,000 75% 10% 15% 44% 13% 44% Sentence simplification 96 / 1,077 79% 9% 12% 47% 12% 41% Table 7: Human evaluation results for the sentences edited by GEC. ↑, ↓and →denote GEC makes a sentence better, worse and neither better nor worse. The percentages are the proportion of the corresponding cases. tasks where target sentences are in a formal style, especially in a low-resource setting. Acknowledgments We thank the anonymous reviewers for their valuable comments. Specially, we thank Shujie Liu for the discussion and constructive suggestions to this paper. We also thank Shuo Ren, Shuangzhi Wu, Zhirui Zhang and Yi Zhang for their help with the evaluation in the machine translation and formality style transfer task. References Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In ACL 2016 FIRST CONFERENCE ON MACHINE TRANSLATION (WMT16), pages 131–198. The Association for Computational Linguistics. Rajen Chatterjee, M Amin Farajian, Matteo Negri, Marco Turchi, Ankit Srivastava, and Santanu Pal. 2017. Multi-source neural automatic post-editing: Fbks participation in the wmt 2017 ape shared task. In Proceedings of the Second Conference on Machine Translation, pages 630–638. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural network for grammatical error correction. arXiv preprint arXiv:1801.08831. Katja Filippova, Enrique Alfonseca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360–368. Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1481–1491. Tao Ge, Furu Wei, and Ming Zhou. 2018a. Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1055–1065. Tao Ge, Furu Wei, and Ming Zhou. 2018b. Reaching human-level performance in automatic grammatical error correction: An empirical study. arXiv preprint arXiv:1807.01270. Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2018. Near human-level performance in grammatical error correction with hybrid machine translation. arXiv preprint arXiv:1804.05945. Chris Hokamp. 2017. Ensembling factored neural machine translation models for automatic postediting and quality estimation. arXiv preprint arXiv:1706.05083. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 
2018. Approaching neural grammatical error correction as a low-resource machine translation task. arXiv preprint arXiv:1804.05940. Xing Niu, Sudha Rao, and Marine Carpuat. 2018. Multi-task neural models for translating between styles within and across languages. arXiv preprint arXiv:1806.04357. Santanu Pal, Sudip Kumar Naskar, Mihaela Vela, and Josef van Genabith. 2016. A neural network based approach to automatic post-editing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 281–286. Santanu Pal, Sudip Kumar Naskar, Mihaela Vela, Qun Liu, and Josef van Genabith. 2017. Neural automatic post-editing using prior alignment and reranking. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 349–355. 6064 Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may i introduce the gyafc dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 129–140. Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. arXiv preprint arXiv:1901.04112. Sara Stymne and Lars Ahrenberg. 2010. Using a grammar checker for evaluation and postprocessing of statistical machine translation. In LREC. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Yiming Tan, Zhiming Chen, Liu Huang, Lilin Zhang, Maoxi Li, and Mingwen Wang. 2017. Neural postediting based on quality estimation. In Proceedings of the Second Conference on Machine Translation, pages 655–660. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. arXiv preprint arXiv:1703.10931.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 640–645 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 640 Data Programming for Learning Discourse Structure Sonia Badene1,2, Kate Thompson1,3, Jean-Pierre Lorr´e2, Nicholas Asher1,3 1IRIT, 2Linagora, 3Universit´e Toulouse III & CNRS {sonia.badene,kate.thompson,nicholas.asher}@irit.fr, {sbadene,jplorre}@linagora.com Abstract This paper investigates the advantages and limits of data programming for the task of learning discourse structure. The data programming paradigm implemented in the Snorkel framework allows a user to label training data using expert-composed heuristics, which are then transformed via the “generative step” into probability distributions of the class labels given the training candidates. These results are later generalized using a discriminative model. Snorkel’s attractive promise to create a large amount of annotated data from a smaller set of training data by unifying the output of a set of heuristics has yet to be used for computationally difficult tasks, such as that of discourse attachment, in which one must decide where a given discourse unit attaches to other units in a text in order to form a coherent discourse structure. Although approaching this problem using Snorkel requires significant modifications to the structure of the heuristics, we show that weak supervision methods can be more than competitive with classical supervised learning approaches to the attachment problem. 1 Introduction Discourse structures for texts represent relational semantic structures that convey causal, topical, argumentative relations inter alia or more generally coherence relations. Following (Muller et al., 2012; Li et al., 2014; Morey et al., 2018), we represent them as dependency structures or graphs containing a set of nodes that represent discourse units (DUs), or instances of propositional content, and a set of labelled arcs that represent coherent relations between DUs. For dialogues with multiple interlocutors, extraction of their discourse structures could provide useful semantic information to the “downstream” models used, for example, in the production of intelligent meeting managers or the analysis of user interactions in online fora. However, despite considerable efforts on computational discourse-analysis (Duverle and Prendinger, 2009; Joty et al., 2013; Ji and Eisenstein, 2014; Surdeanu et al., 2015; Yoshida et al., 2014; Li et al., 2016), we are still a long way from usable discourse models, especially for dialogue. The problem of extracting full discourse structures is difficult: standard supervised models struggle to capture the sparse long distance attachments, even when relatively large annotated corpora are available. In addition, the annotation process is time consuming and often fraught with errors and disagreements, even among expert annotators. This motivated us to explore a weak supervision approach, data programming (Ratner et al., 2016), in which we exploit expert linguistic knowledge in a more compact and consistent rule-based form. In our study, we restrict the structure learning problem to predicting edges or attachments between DU pairs in the dependency graph. After training a supervised deep learning algorithm to predict attachments on the STAC corpus1, we then constructed a weakly supervised learning system in which we used 10% of the corpus as a development set. 
Experts on discourse structure wrote a set of attachment rules, Labeling Functions (LFs), with reference to this development set. Although the whole of the STAC corpus is annotated, we treated the remainder of the corpus as unseen/unannotated data in order to simulate the conditions in which the snorkel framework is meant to be used, i.e. where there is a large amount of unlabeled data but where it is only feasible to hand label a relatively small portion of it. Accordingly, we applied the completed LFs to our “unseen” training set, 80% of the corpus, and used the final 10% as our test set. 1https://www.irit.fr/STAC/ 641 After applying the LFs to the unannotated data and training the generative model, the F1 score for attachment was 4 points higher than that for the supervised method, showing that hybrid learning architectures combining expert linguistic conceptual knowledge with data-driven techniques can be highly competitive with standard learning approaches. 2 State of the Art Given that our interest lies in the analysis of multiparty dialogue, we followed (Afantenos et al., 2015; Perret et al., 2016) and used the STAC corpus, in which dialogue structures are assumed to be directed acyclical graphs (DAG) as in SDRT2 (Asher and Lascarides, 2003; Asher et al., 2016). An SDRT discourse structure is a graph, ⟨V, E1, E2, ℓ, Last⟩, where: V is a set of nodes or discourse units (DUs); E1 ⊆V 2 is a set of edges between DUs representing coherence relations; E2 ⊆V 2 represents a dependency relation between DUs; ℓ: E1 →R is a labeling function that assigns a semantic type to an edge in E1 from a set R of discourse relation types, and Last is a designated element of V giving the last DU relative to textual or temporal order. E2 is used to represent Complex Discourse Units (CDUs), which are clusters of two or more DUs which are connected as an ensemble to other DUs in the graph. As learning this recursive structure presents difficulties beyond the scope of this paper, we followed a “flattening” strategy similar to (Muller et al., 2012) to remove CDUs. This process yields a set V ∗, which is V without CDUs, and a set E∗1, a flattened version of E1. To build an SDRT discourse structure, we need to: (i) segment the text into DUs; (ii) predict the attachments between DUs, i.e. identify the elements in E1; (iii) predict the semantic type of the edge in E1. This paper focuses on step (ii). Our dialogue structures are thus of the form ⟨V ∗, E∗1, Last⟩. Step (ii) is a difficult problem for automatic processing: attachments are theoretically possible between any two DUs in a dialogue or text, and often graphs include long-distance relations. (Muller et al., 2012) is the first paper we know of on the discourse parsing attachment problem for monologue. It also targets a restricted version of an SDRT graph. It trains a simple MaxEnt algorithm to produce probability distributions over pairs of 2Segmented Discourse Representation Theory elementary discourse units, a “local model”, with a positive F1 attachment score of 63.5; global decoding constraints produce a slight improvement in attachment scores. (Afantenos et al., 2015) uses a similar strategy on an early version of the STAC corpus. (Perret et al., 2016) targets a more elaborate approximation of SDRT graphs on a later version of the STAC corpus and reports a local model F1 attachment of .483. It then uses Integer Linear Programming (ILP) to encode global decoding constraints to improve the F1 attachment score (0.689). 
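To make the target representation concrete before turning to the learning framework, the flattened structures ⟨V∗, E∗1, Last⟩ defined above and the candidate DU pairs used for attachment prediction can be sketched as below. The field names are illustrative choices of ours, and the max_distance parameter anticipates the ≤10 DU restriction described in the data preparation below.

```python
# Sketch of the flattened SDRT structures <V*, E1*, Last> and of candidate
# (DU, DU) pair extraction for the attachment task. Field names are ours.
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class DiscourseUnit:
    index: int        # position in the dialogue
    speaker: str
    text: str
    linguistic: bool  # player utterance vs. server / interface message

@dataclass
class DialogueGraph:
    units: list                              # V*
    edges: set = field(default_factory=set)  # E1*: set of (head_idx, dep_idx)

    @property
    def last(self):                          # the designated Last element
        return self.units[-1]

def extract_candidates(graph, max_distance=10):
    """All ordered DU pairs within max_distance, i.e. the attachment candidates."""
    return [(a, b) for a, b in combinations(graph.units, 2)
            if b.index - a.index <= max_distance]

dus = [DiscourseUnit(i, s, t, l) for i, (s, t, l) in enumerate(
    [("srv", "A rolled a 6 and a 3", False),
     ("A", "anyone want sheep?", True),
     ("B", "sure, for wood?", True)])]
print(len(extract_candidates(DialogueGraph(dus))))  # 3 candidate pairs
```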
(Ratner et al., 2016) introduced the data programming paradigm, along with a framework, Snorkel (Ratner et al., 2017), which uses a weak supervision method (Zhou, 2017), to apply labels to large data sets by way of heuristic labeling functions that can access distant, disparate knowledge sources. These labels are then used to train classic data-hungry machine learning (ML) algorithms. The crucial step in the data programming process uses a generative model to unify the noisy labels by generating a probability distribution for all labels for each data point. This set of probabilities replaces the ground-truth labels in a standard discriminative model outfitted with a noise-aware loss function and trained on a sufficiently large data set. 3 The STAC Annotated Corpus 3.1 Overview While earlier versions only included linguistic moves by players, the latest version of STAC is a multi-modal corpus of multi-party chats between players of an online game (Asher et al., 2016; Hunter et al., 2018). It includes 2,593 dialogues (each with a weakly connected DAG discourse structure), 12,588 “linguistic” DUs, 31,811 “nonlinguistic” DUs and 31,251 semantic relations. A dialogue begins at the beginning of a player’s turn, and ends at the end of that player’s turn. In the interim, players can bargain with each other or make spontaneous conversation. These player utterances are the “linguistic” turns. In addition the corpus contains information given visually in the game interface but transcribed in the corpus into Server or interface messages, “non-linguistic” turns (Hunter et al., 2018). All turns are segmented into DUs, and these units are then connected by semantic relations. 642 3.2 Data Preparation To concentrate on the attachment task, we implemented the following simplifying measures on the corpus used: 1. Roughly 56% of the dialogues in the corpus contain only non-linguistic DUs. The discourse structure of these dialogues are more regular and thus less challenging; so we ignore these dialogues for our prediction task. 2. 98% of the discourse relations in our development corpus span 10 DUs or less. To reduce class imbalance, we restricted the relations we consider to a distance of ≤10. 3. Following (Muller et al., 2012; Perret et al., 2016) we “flatten” CDUs by connecting all relations incoming or outgoing from a CDU to the “head” of the CDU, or its first DU. The STAC corpus as we use it in our learning experiments thus includes 1,130 dialogues, 13,734 linguistic DUs, 18,767 non-linguistic DUs and 22,098 semantic relations. 4 Data Programming Experiments 4.1 Candidates and Labeling Functions Our weak supervision approach follows the Snorkel implementation of the data programming paradigm. The first step is candidate extraction, followed by LF creation. Candidates are the units of data for which labels will be predicted: all pairs of DUs in a dialogue for attachment problem in discourse. LFs are expert-composed functions that make an attachment prediction for a given candidate: each LF returns a 1, a 0 or a -1 (“attached”/“do not know”/“not attached”). The LFs should have maximal and if possible overlapping coverage of the candidates to optimize the results of the generative model. 
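To show the interface such heuristics follow, a simplified, hypothetical question–answer-pair rule is sketched below using the DiscourseUnit fields from the earlier sketch; the actual rules (listed in Table 1 and at the project page cited below) are richer and relation-specific. Each labeling function returns 1, 0 or -1 for a candidate pair.

```python
# A hypothetical labeling function in the spirit of QAP LL: attach a linguistic
# question to a nearby linguistic turn by a different speaker, abstain otherwise.
ATTACHED, ABSTAIN, NOT_ATTACHED = 1, 0, -1

def lf_question_answer_pair(candidate):
    du1, du2 = candidate                      # a (DiscourseUnit, DiscourseUnit) pair
    if not (du1.linguistic and du2.linguistic):
        return ABSTAIN                        # this rule only covers LL endpoints
    if not du1.text.rstrip().endswith("?"):
        return ABSTAIN                        # first DU must look like a question
    if du1.speaker == du2.speaker:
        return NOT_ATTACHED                   # answers come from another speaker
    if du2.index - du1.index > 3:
        return ABSTAIN                        # distant answers left to other rules
    return ATTACHED
```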
To predict dialogue attachment, our LFs exploit information about candidates including whether they are linguistic or non-linguistic DUs, the dialogue acts they express, their speaker identities, lexical content and grammatical category, as well as the distance between DUs: all features also used in supervised learning methods (Perret et al., 2016; Afantenos et al., 2015; Muller et al., 2012). Furthermore, our LFs take into account the particular behavior of each relation type, information that expert annotators consider when deciding whether two DUs are attached. Thus the LFs were divided among the 9 relation types as well as the combination of DU endpoints for each type, e.g. linguistic/non-linguistic. We also fix the order in which each LF "sees" the candidates such that it considers adjacent DUs before distant DUs. This allows LFs to exploit information about previously predicted attachments and dialogue history in new predictions. Our complete rule set, along with descriptions of each of the relation types, is available here: https://tizirinagh.github.io/acl2019/. In Table 1 we list the rules and their performances on the portion of the development set to which they apply. For example, the LF for Question-answer-pair between two linguistic endpoints (QAP LL) has a coverage of 32% – the proportion of the development set containing relations between two linguistic endpoints – and an accuracy of 89%.

4.2 The Generative Model

Once the LFs are applied to all the candidates, we have a matrix of labels $\Lambda$, with one label from each LF for each candidate. The generative model in (1) defines a distribution over $\Lambda$ and the true labels $Y$ through $n$ accuracy dependencies, one per LF $\lambda_j$, where $\Lambda_{ij}$ is the output of LF $\lambda_j$ on candidate $x_i$, $y_i$ is the true label of $x_i$, and the $\theta_j$ are parameters, with $\phi_j(\Lambda_i, y_i) := y_i \Lambda_{ij}$:

\[ p_\theta(\Lambda, Y) \propto \exp\Big(\sum_{i=1}^{m}\sum_{j=1}^{n} \theta_j\,\phi_j(\Lambda_i, y_i)\Big) \quad (1) \]

The parameters are estimated by minimizing the negative log marginal likelihood of the observed label matrix $\Lambda$, as in (2):

\[ \hat{\theta} = \arg\min_\theta \, -\log \sum_{Y} p_\theta(\Lambda, Y) \quad (2) \]

The generative model thus estimates the accuracy of each LF, a marginal probability for each label, and consequently a probability for positive attachment. In this model, the true class labels $y_i$ are latent variables that generate the labeling function outputs. The model in (1) presupposes that the LFs are independent, but this assumption doesn't always hold: one LF might be a variation of another or they might depend on a common source of information (Mintz et al., 2009). We will look at dependencies between LFs in future work.

Individual LF performances:
LF | Coverage | True Pos | True Neg | False Pos | False Neg | Accuracy
QAP LL | 0.32 | 282 | 9397 | 239 | 150 | 0.8928
QAP NLNL | 0.31 | 84 | 9476 | 4 | 0 | 0.9995
Result NLNL | 0.31 | 758 | 8636 | 134 | 36 | 0.9822
Result LNL | 0.16 | 13 | 4596 | 319 | 97 | 0.9117
Result LL | 0.32 | 21 | 9371 | 617 | 41 | 0.9345
Result NLL | 0.21 | 2 | 6535 | 0 | 2 | 0.9996
Continuation LL | 0.32 | 16 | 9818 | 110 | 106 | 0.9785
Continuation NLNL | 0.31 | 613 | 8867 | 83 | 1 | 0.9912
Sequence NLL | 0.21 | 82 | 6351 | 84 | 22 | 0.9837
Sequence NLNL | 0.31 | 236 | 8199 | 1053 | 76 | 0.8819
Comment LL | 0.32 | 123 | 8632 | 1140 | 0 | 0.8847
Comment NLL | 0.21 | 12 | 6369 | 57 | 101 | 0.9758
Conditional LL | 0.32 | 9 | 10026 | 7 | 0 | 0.9993
Elaboration LL | 0.32 | 67 | 9694 | 214 | 75 | 0.9712
Elaboration NLNL | 0.31 | 48 | 9420 | 96 | 0 | 0.9899
Acknowledgement LL | 0.32 | 50 | 9612 | 251 | 137 | 0.9613
Contrast LL | 0.32 | 14 | 9978 | 11 | 47 | 0.9942
Table 1: Performances of each LF on the development set.
”Coverage” describes the percentage of the development set to which the LF applies, and is determined by the types of endpoints of the relation. Generative Model Discriminative Model on Test Dev Train Test with Marginals with Gold annotations Precision 0.45 0.50 0.40 0.28 0.33 Recall 0.70 0.74 0.72 0.59 0.80 F1 score 0.55 0.59 0.51 0.38 0.47 Accuracy 0.87 0.88 0.84 0.74 0.75 Table 2: Evaluations of attachment with the weakly supervised and supervised approaches. 4.3 Discriminative Model The standard Snorkel approach inputs the marginal probabilities from the generative step directly into a discriminative model, which is trained on those probabilities using a noise-aware loss function (Ratner et al., 2016). Ideally, this step generalizes the LFs by augmenting the feature representation - from, say, dozens of LFs to a high dimensional feature space - and allows the model to predict labels for more new data. Thus the precision potentially lost in the generalization is offset by a larger increase in recall. The discriminative model we use in our study is a single layer BI-LSTM with 300 neurons, which takes as input 100 dimensional-embeddings for the text of each EDU in the candidate pair. We concatenated the outputs of the BI-LSTM and fed them to a simple perceptron with one hidden layer and Rectified Linear Unit (ReLU) (Hahnloser et al., 2000; Jarrett et al., 2009; Nair and Hinton, 2010) activation and optimized with Adam (Kingma and Ba, 2014). Given that our data is extremely unbalanced in favor of the “unattached” class (“attached” candidates make up roughly 13% of the development set), we also implement a class-balancing method inspired by (King and Zeng, 2001) which maps class indices to weight values used for weighting the loss function during training. In order to use this method, we had to binarize the marginals before moving to the discriminative step using a threshold of p > .5 (the threshold that gave us the best F1 score on the development corpus). Though this marks a departure from the standard Snorkel approach, we found that our discriminative model results were higher when the marginals were binarized and when the class rebalancing was used, albeit much lower than expected overall. 644 5 Results and Analysis We first evaluated our LFs individually on the development corpus, which permitted us to measure their coverage and accuracy on a subset of the data3. We then evaluated the generative model and the generative + discriminative model with the Snorkel architecture on the test set with the results in Table 2. While our supervised discriminative model gave results on a par with the local model of (Perret et al., 2016) (which had an F1 of 0.483), our generative model (using a threshold value of p > .5 for positive attachment) had significantly better results, competitive with those in the literature on the attachment problem. Our models show strong recall but weaker precision than (Perret et al., 2016), and we believe this is in part because our LFs were expressly written to broadly cover relations and we have written very few rules on non-attachments. The Snorkel coupling of generative and discriminative models did not produce the anticipated improvement over the results of generative model. When we trained the discriminative model directly on the marginals, we got a score of 0.26 for F1. To improve these results (column 4 in the Table 2), we used the class re-balancing method above. 
However in order to do this, we had to binarize the outputs of the generative model before training the discriminative model, which also contributed to lower precision scores by effectively reducing the total information available to the model. 6 Conclusions and Future Work We have compared a weak supervision approach, as implemented in Snorkel, with a standard supervised model on the difficult task of discursive attachment. The results of the model from Snorkel’s generative step surpass those of a standard supervised learning approach, showing it to be a promising method for generating a lot of annotated data in a very short time relative to what is needed for a traditional approach: from (Asher et al., 2016) we infer that the STAC corpus took at least 4 years to build; we created and refined our label functions in 2 months. Still it is clear that we must further investigate the interaction of the generative and discriminative models in order to eventually leverage the power of generalization 3https://tizirinagh.github.io/acl2019/ a discriminative model is supposed to afford. In future work, we will enrich our weak supervision system by giving the LFs access to more sophisticated contexts that take into account global structuring constraints in order to see how they compare to exogenous decoding constraints applied in (Muller et al., 2012; Perret et al., 2016). We hope such experiments with the weak supervision paradigm will eventually lead us to understand how weakly supervised methods might effectively capture the global structural constraints on discourse structures without decoding or more elaborate learning architectures. References Stergos Afantenos, Eric Kow, Nicholas Asher, and J´er´emy Perret. 2015. Discourse parsing for multiparty chat dialogues. In Association for Computational Linguistics (ACL). Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara, and Stergos D Afantenos. 2016. Discourse structure and dialogue acts in multiparty dialogue: the stac corpus. In LREC. Nicholas Asher and Alex Lascarides. 2003. Logics of conversation. Cambridge University Press. David A Duverle and Helmut Prendinger. 2009. A novel discourse parser based on support vector machine classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2Volume 2, pages 665–673. Association for Computational Linguistics. Richard HR Hahnloser, Rahul Sarpeshkar, Misha A Mahowald, Rodney J Douglas, and H Sebastian Seung. 2000. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947. Julie Hunter, Nicholas Asher, and Alex Lascarides. 2018. A formal semantics for situated conversation. Semantics and Pragmatics, 11. DOI: http://dx.doi.org/10.3765/sp.11.10. Kevin Jarrett, Koray Kavukcuoglu, Yann LeCun, et al. 2009. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision (ICCV), pages 2146–2153. IEEE. Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13–24, Baltimore, Maryland. Association for Computational Linguistics. 645 Shafiq R Joty, Giuseppe Carenini, Raymond T Ng, and Yashar Mehdad. 2013. Combining intra-and multisentential rhetorical parsing for document-level discourse analysis. 
In ACL (1), pages 486–496. Gary King and Langche Zeng. 2001. Logistic regression in rare events data. Political analysis, 9(2):137– 163. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural networks. In EMNLP, pages 362–371. Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014. Text-level discourse dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 25–35. Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Mathieu Morey, Philippe Muller, and Nicholas Asher. 2018. A dependency perspective on rst discourse parsing and evaluation. Computational Linguistics, pages 198–235. Philippe Muller, Stergos Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained decoding for text-level discourse parsing. Proceedings of COLING 2012, pages 1883–1900. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814. J´er´emy Perret, Stergos Afantenos, Nicholas Asher, and Mathieu Morey. 2016. Integer linear programming for discourse parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 99–109. Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R´e. 2017. Snorkel: Rapid training data creation with weak supervision. Proceedings of the VLDB Endowment, 11(3):269–282. Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher R´e. 2016. Data programming: Creating large training sets, quickly. In Advances in neural information processing systems, pages 3567–3575. Mihai Surdeanu, Thomas Hicks, and Marco A Valenzuela-Esc´arcega. 2015. Two practical rhetorical structure theory parsers. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 1–5. Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based discourse parser for single-document summarization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1834–1839, Doha, Qatar. Association for Computational Linguistics. Zhi-Hua Zhou. 2017. A brief introduction to weakly supervised learning. National Science Review, 5(1):44–53.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6065–6075 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6065 Improving the Robustness of Question Answering Systems to Question Paraphrasing Wee Chung Gan Department of Computer Science National University of Singapore gan [email protected] Hwee Tou Ng Department of Computer Science National University of Singapore [email protected] Abstract Despite the advancement of question answering (QA) systems and rapid improvements on held-out test sets, their generalizability is a topic of concern. We explore the robustness of QA models to question paraphrasing by creating two test sets consisting of paraphrased SQuAD questions. Paraphrased questions from the first test set are very similar to the original questions designed to test QA models’ over-sensitivity, while questions from the second test set are paraphrased using context words near an incorrect answer candidate in an attempt to confuse QA models. We show that both paraphrased test sets lead to significant decrease in performance on multiple state-of-the-art QA models. Using a neural paraphrasing model trained to generate multiple paraphrased questions for a given source question and a set of paraphrase suggestions, we propose a data augmentation approach that requires no human intervention to re-train the models for improved robustness to question paraphrasing. 1 Introduction With the release of large-scale, high-quality, and increasingly challenging question answering (QA) datasets (Rajpurkar et al., 2016; Nguyen et al., 2016; Joshi et al., 2017; Reddy et al., 2018), the research community has made rapid progress on QA systems. On the popular SQuAD dataset (Rajpurkar et al., 2016), top QA models have achieved higher evaluation scores compared to human. However, since the test set is typically a randomly selected subset of the whole set of data collected, and thus follows the same distribution as the training and development sets, the performance of models on the test set tends to overestimate the models’ ability to generalize to other unseen test data. It is thus important for QA models to be evaluated on other unseen test data for a Context: ... commentators had debated whether the figure could be reached as the growth in subscriber numbers elsewhere in Europe flattened. Original Question: What was happening to subscriber numbers in other areas of Europe? Prediction: flattened Paraphrased Question: What was going on with subscriber numbers in other areas of Europe? Prediction: growth Context: ... According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases. Original Question: What is the law of thermodynamics associated with closed system heat exchange? Prediction: Second law of thermodynamics Paraphrased Question: What is the law of thermodynamics related to closed system heat exchange? Prediction: nonconservative forces Figure 1: Examples of brittleness to paraphrasing. Both examples show an initially correct prediction turning into a wrong prediction after small changes in the question. better indication of their generalization ability. In this paper, we explore QA models’ robustness to question paraphrasing. 
Our motivation stems from the observation that when a question is phrased in a slightly different but semantically similar way, QA models can output a wrong prediction despite being able to answer the original question correctly. Figure 1 shows two such examples. Sensitivity to such paraphrasing needs to 6066 be improved for better reliability of QA models on unseen test questions. We focus on the SQuAD QA task in this paper. SQuAD was created by getting crowd workers to create questions and answers from Wikipedia paragraphs. SQuAD serves as a benchmark for QA systems, taking as input a question and a context to predict the correct answer. Two evaluation metrics are used: exact match (EM) and F1. Since an answer must be a span from the context, most models output a probability distribution for the start and end token separately, and constrain the end token to be after the start token. Despite the availability of SQuAD 2.0 (Rajpurkar et al., 2018) which requires models to additionally decide whether a question is unanswerable, we focus on the original version of SQuAD (Rajpurkar et al., 2016). This is due to the simpler task of the original SQuAD which allows us to concentrate on robustness of models to question paraphrasing. We created two paraphrased test sets by paraphrasing SQuAD questions so as to evaluate the robustness of models to question paraphrasing. Using a neural paraphrasing model trained to generate a paraphrased question given a source question and a paraphrase suggestion, we created a non-adversarial paraphrased test set from SQuAD development questions which is subsequently verified by human annotators. We also created an adversarial paraphrased test set by re-writing the original question using words in the context near a confusing answer candidate of the same type as the correct answer. Both test sets lead to significant decrease in the performance of QA models. We hypothesize that exposing a model to various ways of asking the same question during training will improve its robustness to question paraphrasing. To this end, we use the trained paraphrasing model to introduce additional training examples containing paraphrased training questions to augment the original training data for retraining. The contributions of this paper are as follows: • We introduce a novel method to generate diverse paraphrased questions by guiding the model with paraphrase suggestions. • We release two paraphrased test sets1 using SQuAD development questions for eval1The two test sets are available at https://github.com/nusnlp/paraphrasing-squad. uation of QA models’ robustness to question paraphrasing. The non-adversarial paraphrased test set consists of 1,062 questions paraphrased with slight perturbations from the original questions. The adversarial paraphrased test set consists of 56 questions paraphrased using context words near a confusing answer candidate. • We show that all three state-of-the-art QA models that we experimented with, including one that outperforms human on SQuAD, have worse performance on the nonadversarial paraphrased test set even though they are semantically and syntactically similar to the original questions. All three QA models have drastically lower performance on the adversarial paraphrased test set. • We show that it is possible to improve the robustness of QA models to paraphrased questions for both paraphrased test sets, using a fully automatic approach to augment the training set and retraining the model on the augmented training set. 
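As context for the span-extraction scheme mentioned above (separate start- and end-token distributions with the end constrained to follow the start), the following is a generic illustration; it is not code from any of the QA models evaluated in this paper, and the bound max_span_len is an assumed hyperparameter.

```python
# Illustrative sketch (not from any evaluated system): pick the answer span that
# maximizes start_prob * end_prob subject to end >= start.
import numpy as np

def best_span(start_probs, end_probs, max_span_len=15):
    """Return ((start, end), score) with end >= start and a bounded span length."""
    start_probs = np.asarray(start_probs)
    end_probs = np.asarray(end_probs)
    best_score, best = -1.0, (0, 0)
    for i in range(len(start_probs)):
        # only consider end positions within a window after the start position
        for j in range(i, min(i + max_span_len, len(end_probs))):
            score = start_probs[i] * end_probs[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score

# Toy distributions over a six-token paragraph
span, score = best_span([0.1, 0.6, 0.1, 0.1, 0.05, 0.05],
                        [0.05, 0.1, 0.55, 0.1, 0.1, 0.1])
print(span, round(score, 3))  # (1, 2): a two-token answer span
```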
2 Paraphrase-Guided Paraphrasing Network In this section, we introduce our method to train a neural network that is able to take as input a source question together with a paraphrase suggestion (a word or phrase) to generate a paraphrased question. To do so, we require a training dataset where each training example is of the form (source question, paraphrase suggestion, target question). Since we want the generated paraphrase to contain the paraphrase suggestion provided, the suggestion given during training must be part of the target question. We elaborate on the construction of our training dataset in Section 2.2. By training our model to make use of a paraphrase suggestion to paraphrase a source question, we are able to leverage a database of word and phrasal paraphrases (Section 3.1.1) to generate multiple paraphrases for a given SQuAD question. This is useful for the creation of the nonadversarial paraphrased test set (Section 3.1) and additional training data for improvement on this test set (Section 4.2.1). This model is also useful for training data augmentation for improvement on the adversarial paraphrased test set (Section 4.2.2). 6067 2.1 Model Architecture We use the transformer model from Vaswani et al. (2017) which is an encoder-decoder architecture that relies mainly on a self-attention mechanism. We extend the decoder using the copy mechanism of See et al. (2017) which allows tokens to be copied from the source question. This is achieved by augmenting the probability distribution of the output vocabulary to include tokens from the source question. The input to the encoder is the concatenation of a paraphrase suggestion and the source question separated by a special token: “<suggestion> <sep> <source question>”, tokenized using the subword tokenizer SentencePiece by Kudo and Richardson (2018). 2.2 Dataset Preparation We use a combination of the WikiAnswers paraphrase corpus (Fader et al., 2013) and the Quora Question Pairs dataset2 for training. The two questions in a question pair in the Quora dataset are typically very similar in meaning. In contrast, the WikiAnswers paraphrase corpus tends to be noisier but one source question is paired with multiple target questions. This allows the model to be trained to output different target questions depending on the paraphrase suggestion given. A combination of these two datasets thus provides a balance between good paraphrasing and using a paraphrase suggestion to generate a paraphrase. 2.2.1 Obtaining Source and Target Questions WikiAnswers dataset: This paraphrase corpus contains over 22 million question pairs. We use only a small portion of this dataset so as not to overwhelm the Quora dataset. We only keep a question pair if each question is at least 7 tokens long, since training on longer sentences is more helpful. We also attempt to filter out erroneous question pairs by removing all question pairs with paraphrase similarity scores below 0.7 using a pre-trained model by Wieting and Gimpel (2018). Then, we randomly sample source questions to obtain about 350,000 question pairs. Quora dataset: For the Quora dataset, we use a pair of questions as two training examples by including both source question to target question and vice versa in the training set, i.e., we include QuestionA →QuestionB and QuestionB →QuestionA 2https://data.quora.com/First-Quora-Dataset-ReleaseQuestion-Pairs in the training set. A total of about 280,000 training examples come from the Quora dataset. 
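The pairing and filtering steps just described, together with the encoder input format from Section 2.1, can be sketched as follows. This is a minimal illustration: similarity_score stands in for the pre-trained scorer of Wieting and Gimpel (2018), and the helper names are placeholders rather than the authors' implementation.

```python
# Hypothetical sketch of the training-pair preparation described above.
def filter_wikianswers(pairs, similarity_score, min_tokens=7, threshold=0.7):
    """Keep (source, target) question pairs that are long enough and similar enough."""
    kept = []
    for src, tgt in pairs:
        if len(src.split()) < min_tokens or len(tgt.split()) < min_tokens:
            continue  # short questions are less useful for training
        if similarity_score(src, tgt) < threshold:
            continue  # drop likely-erroneous pairs
        kept.append((src, tgt))
    return kept

def expand_quora(pairs):
    """Use each Quora question pair in both directions, as described in the text."""
    examples = []
    for q_a, q_b in pairs:
        examples.append((q_a, q_b))
        examples.append((q_b, q_a))
    return examples

def encoder_input(suggestion, source_question, sep="<sep>"):
    """Concatenate a paraphrase suggestion and the source question for the encoder."""
    return f"{suggestion} {sep} {source_question}"
```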
2.2.2 Obtaining Paraphrase Suggestions WikiAnswers dataset: For each source and target question pair, we use word alignments that come with the dataset to match words and phrases from the source to target question to obtain phrase alignment pairs. The alignment pairs are filtered to keep phrases that occur in the target question but are not in the source question. Given a source and target question pair, we thus have a set of possible paraphrase suggestions to choose from. We show an example in Figure 2. Since most source questions have multiple target questions in this dataset, given one source question q and all of its corresponding target questions t1, t2, ..., tk, we thus have k sets of possible paraphrase suggestions S1, S2, ..., Sk. From each set of possible paraphrase suggestions Si, we select one suggestion si ∈Si to construct a training example (q, si, ti). We constrain the selection such that all paraphrase suggestions chosen are unique, i.e., ∀i, j(i ̸= j ⇒si ̸= sj). This is to ensure that there are no duplicate (q, si) input pairs in the training dataset which will result in the model being trained on different targets given the same input. Furthermore, to enable the model to paraphrase even without a suggestion given, some paraphrase suggestions are randomly selected to be replaced with a special empty token. Quora dataset: Since the Quora dataset does not come with word alignments, we first use TextRank (Mihalcea and Tarau, 2004) to obtain question keywords from both source and target questions. Then, the paraphrase suggestion is the highest ranked key phrase in the target question that is not in the source question. We do not allow stopwords to be selected as a paraphrase suggestion. Similarly, a random subset of the paraphrase suggestions is replaced with the special empty token. We show an example of obtaining paraphrase suggestions for this dataset in Figure 3. 2.3 Implementation We train our paraphrasing model using the implementation by OpenNMT (Klein et al., 2018), following the hyper-parameters of Vaswani et al. (2017). We lowercase all data for training and create a tokenized vocabulary of size 8k from SentencePiece (Kudo and Richardson, 2018). 6068 Phrase Alignments what nutrients do green peppers have in them ? what nutrients does a green pepper contain ? (what, what)  (green, a green)  (have in them, contain)  ... Candidate  Suggestions a green,  pepper,  contain Source Target Word Alignments Question Figure 2: An example of finding possible paraphrase suggestions for a source and target question pair from the WikiAnswers dataset. Since there can be multiple target questions for a given source question, we ensure that there are no duplicates in the suggestions chosen for the same source question. Question Keywords Candidate Suggestions Selected Suggestion Source how can i find out how many devices are connected to my wifi? wifi, connected, many devices, devices, find wifi network, network, know wifi network Target how can i know how many devices are connected to my wifi network? wifi network, network, wifi, connected, many devices, devices, know Figure 3: An example of obtaining a paraphrase suggestion for a source and target question pair from the Quora dataset. Keywords from the questions are obtained from TextRank. 
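To make the suggestion-selection step illustrated in Figures 2 and 3 concrete, one possible implementation is sketched below. extract_keywords stands in for TextRank, the stopword set is a toy placeholder, and the functions are illustrative rather than the authors' code; the random replacement of some suggestions with the empty token is omitted here.

```python
# Illustrative sketch of choosing paraphrase suggestions as described above.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "in", "how", "what"}

def quora_suggestion(source, target, extract_keywords):
    """Highest-ranked target key phrase that does not occur in the source question."""
    for phrase in extract_keywords(target):      # keywords ranked by TextRank score
        if phrase.lower() in STOPWORDS:
            continue                             # stopwords are never used as suggestions
        if phrase.lower() not in source.lower():
            return phrase
    return "<empty>"                             # special token: paraphrase without a hint

def wikianswers_suggestions(candidate_sets):
    """Pick one suggestion per target question, keeping all chosen suggestions unique."""
    chosen, used = [], set()
    for candidates in candidate_sets:            # one candidate set S_i per target t_i
        pick = next((c for c in candidates if c not in used), "<empty>")
        used.add(pick)
        chosen.append(pick)
    return chosen
```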
Since our model is not directly comparable to other neural paraphrasing models in the literature, we do not perform automatic evaluation and instead leave the evaluation of our model’s performance to Section 3.1.2, where we employ human annotators to evaluate the paraphrasing quality of our model on SQuAD questions. 3 Paraphrasing SQuAD Questions In this section, we discuss the creation of two paraphrased test sets using SQuAD development questions for the evaluation of the robustness of QA models to question paraphrasing. 3.1 Non-Adversarial Paraphrased Test Set We use the trained paraphrasing model from Section 2 to create a non-adversarial paraphrased test set. We employ human annotators to ensure the quality of the questions for this test set, which also serves as evaluation for our paraphrasing model. In contrast to methods that query the model to create adversarial examples, this dataset is created in a completely model-independent way designed to provide a better indication on performance during actual use. 3.1.1 Paraphrasing Process To obtain paraphrase suggestions for input to our paraphrasing model to paraphrase SQuAD questions, we rely on the paraphrase database PPDB (Pavlick et al., 2015), which is an automatically extracted database consisting of millions of paraphrase pairs. The paraphrase pairs can contain a single word or multiple words. PPDB comes in 6 different sizes, with larger sizes having greater coverage but are less accurate. First, we obtain all n-grams (up to 6-grams) from the source question and remove unigrams that are stopwords. Next, we search the PPDB (XL size) for paraphrases of the remaining n-grams with equivalence score above 0.25. This gives us a set of paraphrase suggestions for the model to generate paraphrased questions. We use a threshold of 0.25 for a balance between having a larger set of paraphrase suggestions and having a less noisy set of suggestions. After paraphrase generation, we perform postprocessing to remove semantically dissimilar paraphrases. Similar to filtering question pairs from the WikiAnswers corpus, we use the pretrained model by Wieting and Gimpel (2018) to obtain paraphrase similarity score for the generated questions and keep only those scoring above 0.95. This is required due to noisiness of the paraphrase suggestions obtained from PPDB and to ensure that a larger number of paraphrased questions are semantically similar to the original question. We summarise the paraphrasing process in Fig6069 Figure 4: Process to paraphrase SQuAD questions. We first use PPDB to obtain paraphrase suggestions before passing both the original question and the suggestions to our paraphrasing model to generate paraphrases. A generated paraphrase is accepted if its similarity score with the original question is above 0.95. L refers to the use of the original SQuAD question and the previous output as inputs to the next step. Original Question the european court of justice cannot uphold measures that are incompatible with what? Paraphrased Questions 1. the european court of justice cannot uphold a number of measures that are incompatible with what? 2. the european court of justice cannot uphold measures that are inconsistent with what? 3. the european court of justice cannot uphold measures which are not compatible with what? 4. the european court of justice has not been able to uphold measures that are incompatible with what? Figure 5: Examples of generated paraphrases. 
ure 4 and show four example paraphrases generated by our model from the same question in Figure 5. 3.1.2 Human Evaluation To evaluate the quality of the automatically generated paraphrases, we employ human annotators from Amazon Mechanical Turk (AMT) to rate the semantic equivalence and fluency of the paraphrased questions. We paraphrase questions from the SQuAD development set and randomly select 3,000 generated paraphrases, containing between 2 and 3 paraphrased questions for each original question. For each pair of questions, we ask 2 annotators from AMT to state how well they agree with the following two statements, on a scale of one to five (strongly disagree, disagree, neutral, agree, or strongly agree): 1. The paraphrased question has the same meaning as the original question (i.e., both the paraphrased and the original question are expected to yield the same answer). 2. The paraphrased question is written in fluent English. For better annotation quality, we employ two annotators to annotate each paraphrased question and require the annotators to have at least 99% approval rate with at least 1,000 approved HITs. The evaluation results are shown in Figures 6 and 7, where we plot the number of annotations against the scores assigned by the annotators, which are between 1 (Strongly Disagree) to 5 (Strongly Agree). 78.1% of the generated paraphrases are judged to be semantically equivalent and 78.6% are judged to be fluent, where annotators agree or strongly agree to questions 1 and 2 respectively. 3.1.3 Test Set Creation We only include a generated paraphrased question into the test set if both annotators agree or strongly agree that the paraphrased question and the original question are semantically equivalent. To ensure that no question is over-represented, if there are multiple accepted paraphrased questions from an original question, we randomly select only one of the paraphrased questions to be included in the test set. A total of 1,062 paraphrased questions are produced. 6070 1 2 3 4 5 0 1,000 2,000 3,000 Figure 6: Semantic equivalence ratings 1 2 3 4 5 0 1,000 2,000 3,000 Figure 7: Fluency ratings 3.2 Adversarial Paraphrased Test Set Motivated by the observation that QA models trained on SQuAD tend to perform string matching to return an answer of an appropriate type near a region of significant word overlap between the context and the question (Jia and Liang, 2017; Rondeau and Hazen, 2018), we create a test set to exploit this weakness of the models. In the context of question paraphrasing, we can simply paraphrase the question by using words in the context near a wrong answer candidate of the same type to generate a natural adversarial example. We show in Figure 8 an example of producing such a paraphrased question. Since the correct answer “2009” is a year, we locate another year “1963” in the context and use the nearby context words “been televised” to paraphrase the original question. We perform such paraphrasing manually by going through question and context pairs from the SQuAD development set and re-writing the question using context words near a confusing answer candidate if such a candidate exists and there are suitable nearby context words for use in paraphrasing. We create a total of 56 paraphrased questions for the adversarial test set. Context: 826 Doctor Who instalments have been televised since 1963 ... Starting with the 2009 special “Planet of the Dead”, the series was filmed in 1080i for HDTV ... 
Original Question: In what year did Doctor Who begin being shown in HDTV? Prediction: 2009 Paraphrased Question: Since what year has Doctor Who been televised in HDTV? Prediction: 1963 Figure 8: An example of paraphrasing question using context words (underlined) near a confusing answer candidate to generate a natural adversarial example. 4 Experiments on QA Models We conduct experiments on three state-of-the-art QA models: BERT (Devlin et al., 2018)3, DrQA4 (Chen et al., 2017), and BiDAF5 (Seo et al., 2016). BERT, in particular, outperforms human on the SQuAD task. 4.1 Evaluating Performance on the Two Paraphrased Test Sets For each paraphrased test set, we compare the performance of the three QA models on the original questions from the SQuAD development set and the corresponding paraphrased questions. 4.1.1 Non-Adversarial Paraphrased Test Set The performance of the QA models on the original and paraphrased questions for the non-adversarial paraphrased test set is given in Table 1. Despite the paraphrased set being semantically similar, and no model querying is performed to intentionally locate weaknesses of the QA models, all three models suffer a significant drop in performance. This highlights the brittleness of the trained models to question paraphrasing. 4.1.2 Adversarial Paraphrased Test Set We compare the performance of QA models on the original and paraphrased questions for the adversarial paraphrased test set in Table 2. 3We used the PyTorch re-implementation available at https://github.com/huggingface/pytorch-pretrained-BERT 4We used the re-implementation focusing on the reader module available at https://github.com/hitvoice/DrQA 5We used the original implementation available at https://github.com/allenai/bi-att-flow 6071 Model EM Score F1 Score Orig Q Para Q Orig Q Para Q BERT 83.62 79.85 90.78 87.63 DrQA 67.33 65.25 76.25 74.25 BiDAF 67.80 63.84 76.85 73.51 Table 1: Performance of QA models on the original questions (Orig Q) compared to non-adversarial paraphrased questions (Para Q). Model EM Score F1 Score Orig Q Adv Q Orig Q Adv Q BERT 82.14 57.14 89.31 63.18 DrQA 71.43 39.29 81.02 48.94 BiDAF 75.00 30.36 81.55 38.30 Table 2: Performance of QA models on the original questions (Orig Q) compared to adversarial paraphrased questions (Adv Q). The adversarial paraphrased test set is able to exploit the reliance of QA models on string matching to cause drastic decrease in the models’ performance. BiDAF demonstrated the weakest resilience to such a deliberate attack with a decrease of 43.25 F1, while BERT and DrQA suffered a decrease of 26.13 F1 and 32.08 F1 respectively. This sharp drop in performance highlights a serious flaw in QA models trained on the SQuAD dataset: if we ask a question that matches the context words near a confusing answer candidate, we are likely to get a wrong answer. 4.2 Re-Training Using Training Data Augmentation Our evaluation suggests that the original training dataset does not contain sufficiently diverse question phrasing. This leads to the models not learning to respond correctly to various ways of asking the same question. A natural way to improve the robustness of QA models to question paraphrasing would thus be to expose them to more diverse question phrasing. We attempt to achieve this by using our paraphrasing model to paraphrase the training set of questions. 
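The augmentation strategy just outlined can be summarized in a short sketch. This is a hypothetical illustration: paraphrase() and similarity_score() stand in for the components of Sections 2 and 3, while the 0.9 threshold and the 25,000-question sample anticipate the values reported in Section 4.2.1 (the adversarial variant in Section 4.2.2 uses context-derived suggestions and a lower threshold).

```python
# Hypothetical sketch: paraphrase training questions, keep sufficiently similar
# paraphrases, and mix a sample of them into the original training data.
import random

def augment_training_set(train_examples, paraphrase, similarity_score,
                         threshold=0.9, sample_size=25000):
    augmented = []
    for context, question, answer in train_examples:
        for new_q in paraphrase(question):
            # keep only paraphrases that stay close to the original question
            if similarity_score(question, new_q) >= threshold:
                augmented.append((context, new_q, answer))
    random.shuffle(augmented)
    return train_examples + augmented[:sample_size]
```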
4.2.1 Non-Adversarial Paraphrased Test Set For improvements on the non-adversarial paraphrased test set, we use the same approach described in Section 3.1.1 to automatically generate paraphrased questions from the training set of questions and keep paraphrased questions with Model EM Score F1 Score Before After Before After BERT 79.85 80.89 87.63 88.62 DrQA 65.25 67.33 74.25 75.00 BiDAF 63.84 66.20 73.51 75.94 Table 3: Performance on the non-adversarial paraphrased test set before and after re-training. Model EM Score F1 Score Before After Before After BERT 84.02 83.76 91.00 90.88 DrQA 69.04 68.74 78.38 77.86 BiDAF 67.67 67.49 77.46 77.10 Table 4: Performance on the original development set before and after re-training. similarity score above 0.9. This acceptance threshold is lower than that used in Section 3.1.1 in order to create more diverse paraphrased questions as training data (as a result, these questions are expected to be noisier). No human annotator is employed to check the semantic equivalence of the paraphrased questions and the original questions. We randomly sample 25,000 paraphrased questions to be used as additional training data. We retrain all three QA models using the original training data and the additional 25,000 paraphrased questions. The performance of the three QA models on the paraphrased test set before and after retraining is shown in Table 3. Even though the augmented training dataset is noisy (since not all generated questions are true paraphrases), all QA models still show improvement on the paraphrased test set after retraining. Furthermore, re-training causes only a negligible drop to the performance of QA models on the original development set, as shown in Table 4. 4.2.2 Adversarial Paraphrased Test Set In contrast to using PPDB to obtain paraphrase suggestions for the neural paraphrasing model, we now require the paraphrase suggestions to be from the context of the associated question. We use Flair6 (Akbik et al., 2018) trained on the Ontonotes dataset7 which contains 12 named entity classes to label which named entity class, if any, that the answer belongs to. Then, we extract 6Pre-trained models available at https://github.com/zalandoresearch/flair 7https://catalog.ldc.upenn.edu/docs/LDC2013T19/ OntoNotes-Release-5.0.pdf 6072 Model EM Score F1 Score Before After Before After BERT 57.14 69.64 63.18 73.85 DrQA 39.29 41.07 48.94 49.86 BiDAF 30.36 39.29 38.30 47.49 Table 5: Performance of QA models on the adversarial test set before and after re-training. Model EM Score F1 Score Before After Before After BERT 84.02 83.33 91.00 90.49 DrQA 69.04 67.93 78.38 77.45 BiDAF 67.67 66.23 77.46 76.19 Table 6: Performance on the original development set before and after re-training. sentences from the context containing named entities of the same type if the named entity contains no overlapping words with the answer. We perform syntactic chunking on the extracted sentences using Flair trained on the CoNLL-2000 dataset (Sang and Buchholz, 2000). We use the noun and verb phrases from the result of chunking to form the set of paraphrase suggestions for the given question. We ensure that each suggestion obtained contains at least two words and does not overlap with the answer. After using the paraphrasing model to paraphrase questions from the SQuAD training set using context words as suggestions, we keep only paraphrased questions with paraphrase similarity score above 0.83. 
This similarity threshold is set lower than the previous selection criterion since we want to allow context words that could be very different from the question words to appear in the generated paraphrase. We similarly re-train all three QA models with an additional 25,000 paraphrased training examples. The results are shown in Table 5. We see that re-training leads to a significant improvement in the performance of BERT and BiDAF on the adversarial paraphrased test set, although it still falls short of the performance on the corresponding original questions. However, re-training is only able to improve DrQA’s performance slightly. In all cases, re-training also only causes a slight decrease in performance on the original SQuAD development set (Table 6). 5 Related Work We present related work in this section, divided into three sub-topics. 5.1 Adversarial Examples for Question Answering Jia and Liang (2017) showed that QA models can be confused by appending a distracting sentence to the end of a passage. While this highlighted an important weakness of trained models, the adversarial examples created are unnatural and not expected to be present in naturally occurring passages. In contrast, semantic preserving changes to an input question that lead to returning the wrong answers present more relevant failure cases that occur in practice. Some previous work used question paraphrasing to create more natural adversarial examples. Ribeiro et al. (2018) made use of back translation to obtain paraphrasing rules that were subsequently filtered by human annotators. Examples of rules obtained include “What VERB → So what VERB” and “What NOUN →Which NOUN”. Rychalska et al. (2018) replaced the most important question word identified using the LIME framework with a synonym from WordNet and ELMo embeddings, which was verified by human annotators. These replacements are expected to maintain the meaning of the questions but can sometimes change initially correct answers. In contrast, we do not restrict ourselves to specific types of paraphrasing when creating the nonadversarial paraphrased test set. Our paraphrasing model can produce paraphrases including but not limited to those in the above two methods. Furthermore, we do not perform any model querying when creating the test set. The ability of our generic approach to decrease the performance of all evaluated state-of-the-art QA models demonstrates the need to improve the robustness of current QA models. The creation of the adversarial paraphrased test set which aims to trick QA models intentionally also contrasts with the approach by Jia and Liang (2017), as the examples created in this work are natural and coherent. 5.2 Neural Paraphrasing Networks There are a number of neural architectures introduced to automatically generate a paraphrase given an input sentence (Prakash et al., 2016; 6073 Huang et al., 2018; Wang et al., 2018). One conceptually simple approach that does not require a paraphrase corpus is to carry out back translation (Lapata et al., 2017), by first translating the source sentence to a pivot foreign language and back. Besides single paraphrase generation, the value of generating multiple paraphrases for a given input sentence has also been explored. Gupta et al. (2018) achieved this by using a variational autoencoder (VAE) with a long short-term memory (LSTM) network. Xu et al. (2018) assumed that different paraphrasing styles used different rewriting patterns, which were represented as latent embeddings. 
These embeddings were used to augment the decoder’s hidden state to generate different paraphrases. In contrast to previous work, we introduce a more guided approach to generate diverse paraphrases, by using a paraphrase suggestion together with a source question to generate a paraphrased question. Given k suggestions, our model is thus able to generate up to k paraphrased questions. 5.3 Paraphrasing as an Intermediate Task to Question Answering Some previous work considers question reformulation as a subtask of question answering. The intuition for doing this is to reduce the space of question paraphrases that the QA model is required to understand. Models trained by this approach are expected to be more robust to various question paraphrases since the model can paraphrase a question to one which it understands. Dong et al. (2017) first generated multiple paraphrases for a given question and used a neural network to score the quality of each paraphrase. The probability distribution of the answer was then generated for each paraphrased question, which was subsequently weighted by the score of each paraphrased question to compute the overall conditional probability of the answer given the question. Buck et al. (2017) formulated QA as a reinforcement learning problem and introduced a paraphrasing agent trained to paraphrase a question to one that was able to get the best answer from the QA model. Similarly, multiple question paraphrases were generated to obtain multiple answers from the QA model before answer selection was performed. In contrast to previous work, we consider question paraphrasing as a separate task instead of a subtask. Our approach is conceptually simpler since it only augments the training data to expose models to various question paraphrases and requires no change to the system during test time. Furthermore, the previous approaches require multiple queries to the QA model for a single question, resulting in longer inference time. 6 Conclusion In this paper, we propose a novel approach to train a neural paraphrasing network to paraphrase questions utilizing paraphrase suggestions. We use the approach to construct a test set of paraphrased SQuAD questions containing questions similar to the original to test models’ robustness to question paraphrasing. We also create an adversarial paraphrased test set to test models’ reliance on string matching. We show that all three state-of-the-art QA models give poorer performance on the first test set and drastically reduced performance on the second test set. We also show that a completely automatic approach to augment the training data can improve the robustness of the QA models to the paraphrased questions, while still retaining performance on the original questions. Our experiments highlight the need for separate adversarial testing and the importance of improving the robustness of QA models to question paraphrasing for better reliability when tested on future unseen test questions. There are several possible future directions stemming from this work. As post-processing is required to remove semantically dissimilar paraphrased questions, there is scope for developing better techniques for semantic similarity scoring. There is also scope for better techniques to generate more coherent question paraphrasing when significant question re-writing is required, such as for the situation when we want to paraphrase the question using context words. In addition, we have only considered paraphrasing the question in this paper. 
Paraphrasing the context is another area to explore but poses significant technical challenge, since it requires altering words over multiple sentences while still retaining the original meaning of the context. Acknowledgments This research is supported by the National Research Foundation Singapore under its AI Singapore Programme AISG-RP-2018-007. 6074 References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Andrea Gesmundo, Neil Houlsby, Wojciech Gajewski, and Wei Wang. 2017. Ask the right questions: Active question reformulation with reinforcement learning. CoRR, abs/1705.07830. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1870–1879. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875–886. Anthony Fader, Luke S. Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1608–1618. Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, pages 5149–5156. Shaohan Huang, Yu Wu, Furu Wei, and Ming Zhou. 2018. Dictionary-guided editing networks for paraphrase generation. CoRR, abs/1806.08077. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1601–1611. Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean Senellart, and Alexander M. Rush. 2018. OpenNMT: neural machine translation toolkit. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas, pages 177–184. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (System Demonstrations), pages 66–71. Mirella Lapata, Rico Sennrich, and Jonathan Mallinson. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 881–893. Rada Mihalcea and Paul Tarau. 2004. TextRank: bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. 
MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 425–430. Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek V. Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of the 26th International Conference on Computational Linguistics, pages 2923–2934. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 784–789. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. CoRR, abs/1808.07042. Marco T´ulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 856–865. Marc-Antoine Rondeau and Timothy J. Hazen. 2018. Systematic error analysis of the Stanford question 6075 answering dataset. In Proceedings of the ACL Workshop on Machine Reading for Question Answering, pages 12–20. Barbara Rychalska, Dominika Basaj, and Przemyslaw Biecek. 2018. Are you tough enough? Framework for robustness validation of machine comprehension systems. CoRR, abs/1812.02205. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task chunking. In Proceedings of the Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop, pages 127– 132. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Thirty-First Conference on Neural Information Processing Systems. Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2018. A task in a suit and a tie: paraphrase generation with semantic augmentation. CoRR, abs/1811.00119. John Wieting and Kevin Gimpel. 2018. ParaNMT50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 451– 462. Qiongkai Xu, Juyan Zhang, Lizhen Qu, Lexing Xie, and Richard Nock. 2018. D-PAGE: Diverse paraphrase generation. CoRR, abs/1808.04364.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6076–6085 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6076 RankQA: Neural Question Answering with Answer Re-Ranking Bernhard Kratzwald♠ Anna Eigenmann♦ Stefan Feuerriegel♠ ♠Chair of Management Information Systems, ETH Zurich ♦Department of Mathematics, ETH Zurich {bkratzwald, eianna, sfeuerriegel}@ethz.ch Abstract The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process: first, relevant text passages are retrieved and, subsequently, a neural network for machine comprehension extracts the likeliest answer. However, both stages are largely isolated in the status quo and, hence, information from the two phases is never properly fused. In contrast, this work proposes RankQA1: RankQA extends the conventional two-stage process in neural QA with a third stage that performs an additional answer reranking. The re-ranking leverages different features that are directly extracted from the QA pipeline, i. e., a combination of retrieval and comprehension features. While our intentionally simple design allows for an efficient, data-sparse estimation, it nevertheless outperforms more complex QA systems by a significant margin: in fact, RankQA achieves stateof-the-art performance on 3 out of 4 benchmark datasets. Furthermore, its performance is especially superior in settings where the size of the corpus is dynamic. Here the answer reranking provides an effective remedy against the underlying noise-information trade-off due to a variable corpus size. As a consequence, RankQA represents a novel, powerful, and thus challenging baseline for future research in content-based QA. 1 Introduction Question answering (QA) has recently experienced considerable success in variety of benchmarks due to the development of neural QA (Chen et al., 2017; Wang et al., 2018). These systems largely follow a two-stage process. First, a module for information retrieval selects text passages which appear relevant to the query from the cor1Code is available from https://github.com/ bernhard2202/rankqa pus. Second, a module for machine comprehension extracts the final answer, which is then returned to the user. This two-stage process is necessary for condensing the original corpus to passages and eventually answers; however, the dependence limits the extent to which information is passed on from one stage to the other. Extensive efforts have been made to facilitate better information flow between the two stages. These works primarily address the interface between the stages (Lee et al., 2018; Lin et al., 2018), i. e., which passages and how many of them are forwarded from information retrieval to machine comprehension. For instance, the QA performance is dependent on the corpus size and the number of top-n passages that are fed into the module for machine comprehension (Kratzwald and Feuerriegel, 2018). Nevertheless, machine comprehension in this approach makes use of only limited information (e. g., it ignores the confidence or similarity information computed during retrieval). State-of-the-art approaches for selecting better answers engineer additional features within the machine comprehension model with the implicit goal of considering information retrieval. For instance, the DrQA architecture of Chen et al. (2017) includes features pertaining to the match between question words and words in the paragraph. 
Certain other works also incorporate a linear combination of paragraph and answer scores (Lee et al., 2018). Despite that, the use is limited to simplistic features and the potential gains of re-ranking remain untapped.

Prior literature has recently hinted at potential benefits from answer re-ranking, albeit in a different setting (Wang et al., 2017): the authors studied multi-paragraph machine comprehension at sentence level, instead of a complete QA pipeline involving an actual information retrieval module over a full corpus of documents. However, when adapting it from a multi-paragraph setting to a complete corpus, this type of approach is known to become computationally infeasible (cf. discussion in Lee et al., 2018). In contrast, answer re-ranking as part of an actual QA pipeline has not been previously studied.

Figure 1: The RankQA system consisting of three modules for information retrieval, machine comprehension, and our novel answer re-ranking. RankQA fuses information from the information retrieval and machine comprehension phase to re-rank answer candidates within a full neural QA pipeline.

Proposed RankQA: This paper proposes a novel paradigm for neural QA. That is, we augment the conventional two-stage process with an additional third stage for efficient answer re-ranking. This approach, named “RankQA”, overcomes the limitations of a two-stage process in the status quo whereby both stages operate largely in isolation and where information from the two is never properly fused. In contrast, our module for answer re-ranking fuses features that stem from both retrieval and comprehension. Our approach is intentionally light-weight, which contributes to an efficient estimation, even when directly integrated into the full QA pipeline. We show the robustness of our approach by demonstrating significant performance improvements over different QA pipelines.

Contributions: To the best of our knowledge, RankQA represents the first neural QA pipeline with an additional third stage for answer re-ranking. Despite the light-weight architecture, RankQA achieves state-of-the-art performance across 3 established benchmark datasets. In fact, it even outperforms more complex approaches by a considerable margin. This particularly holds true when the corpus size is variable and the resulting noise-information trade-off requires an effective remedy. Altogether, RankQA yields a strong new baseline for content-based question answering.

2 RankQA

RankQA is designed as a pipeline of three consecutive modules (see Fig. 1), as detailed in the following. Our main contribution lies in the design of the answer re-ranking component and its integration into the full QA pipeline. In order to demonstrate the robustness of our approach, we later experiment with two implementations in which we vary module 2.

2.1 Module 1: Information Retrieval

For a given query, the information retrieval module retrieves the top-n (here: n = 10) matching documents from the content repository and then splits these articles into paragraphs. These paragraphs are then passed on to the machine comprehension component. The information retrieval module is implemented analogously to the default specification of Chen et al. (2017), scoring documents by hashed bi-gram counts.
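A toy sketch of retrieval by hashed bigram counts, in the spirit of the module just described, is shown below. It is illustrative only and does not reproduce the actual DrQA retriever; the hash function, bucket count, and raw-count scoring (rather than TF-IDF weighting) are simplifying assumptions.

```python
# Minimal, illustrative retrieval by hashed bigram counts (not the DrQA retriever).
import hashlib
from collections import Counter

NUM_BUCKETS = 2 ** 20  # assumed size of the hash space

def hashed_bigrams(text):
    """Map a text to a multiset of hashed bigram buckets."""
    tokens = text.lower().split()
    bigrams = [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]
    buckets = [int(hashlib.md5(b.encode()).hexdigest(), 16) % NUM_BUCKETS
               for b in bigrams]
    return Counter(buckets)

def score(question, document):
    """Overlap of hashed bigram counts between question and document."""
    q, d = hashed_bigrams(question), hashed_bigrams(document)
    return sum(min(q[b], d[b]) for b in q)

def top_n(question, documents, n=10):
    """Return the n documents with the highest bigram-overlap score."""
    ranked = sorted(documents, key=lambda doc: score(question, doc), reverse=True)
    return ranked[:n]
```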
2.2 Module 2: Machine Comprehension The machine comprehension module extracts and scores one candidate answer for every paragraph of all top-n documents. Hence, this should result in ≫n candidate answers; however, out of these, the machine comprehension module selects only the top-k candidate answers [c1, . . . , ck], which are then passed on to the re-ranker. The size k is a hyperparameter (here: k = 40). We choose two different implementations for the machine comprehension module in order to show the robustness of our approach. Implementation 1 (DrQA): Our first implementation is based on the DrQA document reader (Chen et al., 2017). This is the primary system in our experiments for two reasons. First, in neu6078 ral QA, DrQA is a well-established baseline. Second, DrQA has become a widespread benchmark with several adaptations, which lets us compare our approach for answer re-ranking with other extensions that improve the retrieval of paragraphs (Lee et al., 2018) or limit the information flow between the retrieval and comprehension phases (Kratzwald and Feuerriegel, 2018). Implementation 2 (BERT-QA): QA systems whose machine comprehension module is based on BERT are gaining in popularity (Yang et al., 2019a,b). Following this, we implement a second QA pipeline where the document reader from DrQA is replaced with BERT (Devlin et al., 2019).2 We call this system BERT-QA and use it as a second robustness check in our experiments. 2.3 Module 3: Answer Re-Ranking Our re-ranking module receives the top-k candidate answers [c1, . . . , ck] from the machine comprehension module as input. Each candidate ci, i = 1, . . . , k, consists of the actual answer span si (i. e., the textual answer) and additional metainformation φi such as the document ID and paragraph ID from which it was extracted. Our module follows a three-step procedure in order to re-rank answers: (i) Feature extraction: First, we extract a set of information retrieval and machine comprehension features for every answer candidate directly from the individual modules of the QA pipeline. (ii) Answer aggregation: It is frequently the case that several answer candidates ci are duplicates and, hence, such identical answers are aggregated. This creates additional aggregation features, which should be highly informative and thus aid the subsequent reranking. (iii) Re-ranking network: Every top-k answer candidate is re-ranked based on the features generated in (i) and (ii). 2.3.1 Feature Extraction During this step, we extract several features from the information retrieval and machine comprehension modules for all top-k answer candidates, 2We used the official implementation from https:// github.com/google-research/bert which can later be fused; see a detailed overview in Tbl. 1. These features are analogously computed by most neural QA systems, albeit for other purposes than re-ranking. Nevertheless, this fact should highlight that such features can be obtained without additional costs. The actual set of features depends on the implementation of the QA system (e. g., DrQA extracts additional named entity features, as opposed to BERT-QA). From the information retrieval module, we obtain: (i) the document-question similarity; (ii) the paragraph-question similarity; (iii) the paragraph length; (iv) the question length; and (v) indicator variables that specify with which word a question starts (e. g., “what”, “who”, “when”, etc.). 
From the machine comprehension module, we extract: (i) the original score of the answer candidate; (ii) the original rank of the candidate answer; (iii) part-of-speech tags of the answer; and (iv) named entity features of the answer. The latter two are extracted only for DrQA and encoded via indicator variables that specify whether the answer span contains a named entity or part-of-speech tag (e. g., PERSON=1 or NNS=1). 2.3.2 Answer Aggregation It is frequently the case that several candidate answers are identical and, hence, we encode this knowledge as a set of additional features. The idea of answer aggregation is similar to Lee et al. (2018) and Wang et al. (2017), although there are methodological differences: the previous authors sum the probability scores for identical answers, whereas the aim in RankQA is to generate a rich set of aggregation features. That is, we group all answer candidates with an identical answer span. Formally, we merge two candidate answers ci and cj if their answer span is equal, i. e., si = sj. We keep the information retrieval and machine comprehension features of the initially higher-ranked candidate cmin{i,j}. In addition, we generate further aggregation features as follows: (i) the number of times a candidate with an equal answer span appears within the topk candidates; (ii) the rank of its first occurrence; (iii) the sum, mean, minimum, and maximum of the span scores; and (iv) the sum, mean, minimum, and maximum of the document-question similarity scores. Altogether, this results, for each candidate answer ci, in a vector xi containing all features from information retrieval, machine comprehension, and answer aggregation. 6079 Feature Group Description Aggregation Impl. INFORMATION RETRIEVAL FEATURES Document-query similarity Similarity between the question and the full document the answer was extracted from. min, max, avg, sum both Paragraph-query similarity Similarity between the question and the paragraph the answer was extracted from. — both Length features Length of the document, length of the paragraph, and length of the question. — both Question type The question type is a 13-dimensional vector indicating weather the questions started with the words What was, What is, What, In what, In which, In, When, Where, Who, Why, Which, Is, or <other>. — both MACHINE COMPREHENSION FEATURES Span features The score of the answer candidate as assigned directly from the MC module, proportional to the probability of the answer given the paragraph, i. e., ∝p(a|p). min, max, avg, sum both Named entity features A 13-dimensional vector indicating whether one of following 13 named entities is contained within the answer span: location, person, organization, money, percent, date, time, set, duration, number, ordinal, misc, and <other>. — only 1 Part-of-speech features A 45-dimensional vector indicating which part-ofspeech tag is contained within the answer span. We use the Penn Treebank PoS tagset. (Marcus et al., 1993). — only 1 Ranking Original ranking of the answer candidate. number of occurrences both Table 1: Detailed description of all features used in our answer re-ranking component. 2.3.3 Re-Ranking Network Let xi ∈Rd be the d-dimensional feature vector for the answer candidate ci, i = 1, . . . , k. We score each candidate via the following ranking network, i. 
e., a two-layer feed-forward network f(x_i) that is given by

f(x_i) = ReLU(x_i A^T + b_1) B^T + b_2,  (1)

where A ∈ ℝ^{m×d} and B ∈ ℝ^{1×m} are trainable weight matrices and where b_1 ∈ ℝ^m and b_2 ∈ ℝ are linear offset vectors. During our experiments, we tested various ranking mechanisms, even more complicated architectures such as recurrent neural networks that read answers, paragraphs, and questions. Despite their additional complexity, the resulting performance improvements over our straightforward re-ranking mechanisms were only marginal and, oftentimes, we even observed a decline.

2.4 Estimation: Custom Loss/Sub-Sampling

The parameters in f(·) are not trivial to learn. We found that sampling negative (incorrect) and positive (correct) candidates, in combination with a binary classification loss or a regression loss, was not successful. As a remedy, we propose the following combination of ranking loss and sub-sampling, which proved beneficial in our experiments.

We implement a loss L, which represents a combination of a pair-wise ranking loss L_rank and an additional regularization L_reg, in order to train our model. Given two candidate answers i, j with i ≠ j for a given question, the binary variables y_i and y_j denote whether the respective candidate answers are correct or incorrect. Then we minimize the following pair-wise ranking loss adapted from Burges et al. (2005), i.e.,

L_rank(x_i, x_j) = (y_i − σ(f(x_i) − f(x_j)))^2.  (2)

Here f(·) denotes our previous ranking network and σ(·) the sigmoid function. An additional penalty is used to regularize the parameters and prevent the network from overfitting. It is given by

L_reg = ∥A∥_1 + ∥B∥_1 + ∥b_1∥_1 + ∥b_2∥_1.  (3)

Finally, we optimize L = L_rank + λ L_reg using mini-batch gradient descent with λ as a tuning parameter.

We further implement a customized sub-sampling procedure, since the majority of candidate answers generated during training are likely to be incorrect. To address the pair-wise loss during sub-sampling, we proceed as follows: we first generate a list of answer candidates for every question in our training set using the feature extraction and aggregation mechanisms from our re-ranking. Then we iterate through this list and sample a pair of candidate answers (x_i, x_j) if and only if they are at adjacent ranks (i is ranked directly before j, i.e., iff j = i + 1). We specifically let our training focus on pairs that are originally ranked high, i.e., j < 4, and ignore training pairs ranked lower. During inference, we still score all top-10 answer candidates and select the best-scoring answer.

3 Experimental Design

3.1 Content Base and Datasets

Following earlier research, our content base comprises documents from the English Wikipedia. For comparison purposes, we use the same dump as in prior work (e.g., Chen et al., 2017; Lee et al., 2018), downloaded from https://github.com/facebookresearch/DrQA. We do not use pre-selected documents or other textual content in order to answer questions. We base our experiments on four well-established datasets.

SQuAD: The Stanford Question and Answer Dataset (SQuAD) contains more than 100,000 question-answer-paragraph triples (Rajpurkar et al., 2016). We use SQuADOPEN, which ignores the paragraph information.

WikiMovies: This dataset contains several thousand question-answer pairs from the movie industry (Miller et al., 2016). It is designed such that all questions can be answered by a knowledge base (i.e., Open Movie Database) or full-text content (Wikipedia).

CuratedTREC: This dataset is a collection of question-answer pairs from four years of Text Retrieval Conference (TREC) QA challenges (Baudiš and Šedivý, 2015).

WebQuestions: The answers to questions in this dataset are entities in the Freebase knowledge base (Berant et al., 2013). We use the adapted version of Chen et al. (2017), who replaced the Freebase IDs with textual answers.

3.2 Training Details

Our source code and pre-trained model are available at https://github.com/bernhard2202/rankqa.

RankQA: The information retrieval module is based on the official implementation of Chen et al. (2017), available at https://github.com/facebookresearch/DrQA. The same holds true for the pre-trained DrQA-DS model, which we used without alterations. For BERT-QA, we use the uncased BERT base model, available at https://github.com/google-research/bert, and fine-tune it for three epochs on the SQuAD training split with the default parameters.

Datasets: We use the training splits of SQuAD, CuratedTREC, WikiMovies, and WebQuestions for training and model selection. In order to balance differently-sized datasets, we use 10% of the smallest training split for model selection and 90% for training. For every other dataset, we take the same percentage of samples for model selection and all other samples for training. We monitor the loss on the model selection data and stop training if it did not decrease within the last 10 epochs or after a total of 100 epochs. Finally, we use the model with the lowest error on the model selection data for evaluation. Analogous to prior work, we use the test splits of CuratedTREC, WikiMovies, and WebQuestions, as well as the development split for SQuAD, though only for the final evaluation. In order to account for different characteristics in the datasets, we train a task-specific model individually for every dataset following the same procedure.

Parameters: During training, we use Adam (Kingma and Ba, 2014) with a learning rate of 0.0005 and a batch size of 256. The hidden layer is set to m = 512 units. We set the number of top-n documents to n = 10 and the number of top-k candidate answers that are initially generated to k = 40. We optimize λ over λ ∈ {5·10^−4, 5·10^−5}. All numerical features are scaled to be within [0, 1]. Moreover, we apply an additional log-transformation.
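For concreteness, a minimal re-implementation of the ranking network (Eq. 1) and the training loss (Eqs. 2–3) might look as follows. This is an illustrative sketch rather than the authors' released code; the batch handling (averaging the pair-wise loss over a mini-batch) and the default λ are assumptions consistent with the parameters above.

```python
# Sketch of the re-ranking network and the pair-wise loss with L1 regularization.
import torch
import torch.nn as nn

class ReRanker(nn.Module):
    def __init__(self, d, m=512):
        super().__init__()
        self.hidden = nn.Linear(d, m)   # A, b_1 in Eq. (1)
        self.out = nn.Linear(m, 1)      # B, b_2 in Eq. (1)

    def forward(self, x):               # x: (batch, d) candidate feature vectors
        return self.out(torch.relu(self.hidden(x))).squeeze(-1)

def rank_loss(model, x_i, x_j, y_i, lam=5e-4):
    """Pair-wise ranking loss (Eq. 2) plus L1 penalty (Eq. 3), averaged over the batch."""
    diff = torch.sigmoid(model(x_i) - model(x_j))
    l_rank = ((y_i - diff) ** 2).mean()
    l_reg = sum(p.abs().sum() for p in model.parameters())
    return l_rank + lam * l_reg

# Toy usage: candidate i is ranked directly before candidate j; y_i = 1 if i is correct.
model = ReRanker(d=80)
x_i, x_j = torch.rand(32, 80), torch.rand(32, 80)
y_i = torch.randint(0, 2, (32,)).float()
loss = rank_loss(model, x_i, x_j, y_i)
loss.backward()
```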
CuratedTREC This dataset is a collection of question-answer pairs from four years of Text Retrieval Conference (TREC) QA challenges (Baudiˇs and ˇSediv´y, 2015). WebQuestions The answers to questions in this dataset are entities in the Freebase 3Downloaded from https://github.com/ facebookresearch/DrQA knowledge-base (Berant et al., 2013). We use the adapted version of Chen et al. (2017), who replaced the Freebase-IDs with textual answers. 3.2 Training Details Our sourcecode and pre-trained model are available at: https://github.com/ bernhard2202/rankqa. RankQA: The information retrieval module is based on the official implementation of Chen et al. (2017).4 The same holds true for the pre-trained DrQA-DS model, which we used without alterations. For BERT-QA, we use the uncased BERT base model and fine-tune it for three epochs on the SQuAD training split with the default parameters.5 Datasets: We use the training splits of SQuAD, CuratedTREC, WikiMovies, and WebQuestions for training and model selection. In order to balance differently-sized datasets, we use 10 % of the smallest training split for model selection and 90 % for training. For every other dataset, we take the same percentage of samples for model selection and all other samples for training. We monitor the loss on the model selection data and stop training if it did not decrease within the last 10 epochs or after a total of 100 epochs. Finally, we use the model with the lowest error on the model selection data for evaluation. Analogous to prior work, we use the test splits of CuratedTREC, WikiMovies, and WebQuestions, as well as the development split for SQuAD, though only for the final evaluation. In order to account for different characteristics in the datasets, we train a task-specific model individually for every dataset following the same procedure. Parameters: During training, we use Adam (Kingma and Ba, 2014) with a learning rate of 0.0005 and a batch size of 256. The hidden layer is set to m = 512 units. We set the number of top-n documents to n = 10 and the number of top-k candidate answers that are initially generated to k = 40. We optimize λ over λ ∈ {5 · 10−4, 5 · 10−5}. All numerical features are scaled to be within [0, 1]. Moreover, we apply an additional log-transformation. 4Available at https://github.com/ facebookresearch/DrQA 5Available at https://github.com/ google-research/bert 6081 SQuADOPEN CuratedTREC WebQuestions WikiMovies Baseline: DrQA (Chen et al., 2017) 29.8 25.4 20.7 36.5 DrQA extensions: Paragraph Ranker (Lee et al., 2018) 30.2 35.4 19.9 39.1 Adaptive Retrieval 29.6 29.3 19.6 38.4 (Kratzwald and Feuerriegel, 2018) Other architectures: R3 (Wang et al., 2018) 29.1 28.4 17.1 38.8 DS-QA (Lin et al., 2018) — 29.1 18.5 — Min. Context (Min et al., 2018) 34.6 — — — RankQA (general) 34.5 32.4 21.8 43.3 RankQA (task-specific) 35.3 34.7 22.3 43.1 Upper bound: perfect re-ranking for k = 40 54.2 65.9 53.8 65.0 Table 2: Exact matches of RankQA compared to DrQA as natural baseline without re-ranking and state-of-the-art systems for neural QA. We use a general model that is trained on all datasets, and a task-specific model that is trained individually for every dataset. The two best results for every dataset are marked in bold. 4 Results We conduct a series of experiments to evaluate our RankQA system. First, we evaluate the end-to-end performance over the four abovementioned benchmark datasets and compare our system to various other baselines. 
Second, we show the robustness of answer re-ranking by repeating these experiments with our second implementation, namely BERT-QA. Third, we replicate the experiments of Kratzwald and Feuerriegel (2018) to evaluate the robustness against varying corpus sizes. Fourth, we analyze errors and discuss feature importance in numerical experiments. During our experiments, we measure the end-to-end performance of the entire QA pipeline in terms of exact matches. That is, we count the fraction of questions for which the provided answer matches one of the ground truth answers exactly. Unless explicitly mentioned otherwise, we refer to the first implementation, namely re-ranking based on the DrQA architecture. 4.1 Performance Improvement from Answer Re-Ranking Tbl. 2 compares performance across different neural QA systems from the literature. The DrQA system (Chen et al., 2017) is our main baseline as it resembles RankQA without the answer re-ranking step. Furthermore, we compare ourselves against other extensions of the DrQA pipeline such as the Paragraph Ranker (Lee et al., 2018) or Adaptive Retrieval (Kratzwald and Feuerriegel, 2018). Finally, we compare against other state-of-the-art QA pipelines, namely, R3 (Wang et al., 2018), DS-QA (Lin et al., 2018), and the Min. Context system from Min et al. (2018). For RankQA, we use, on the one hand, a general model that is trained on all four datasets simultaneously. On the other hand, we account for the different characteristics of the datasets and thus employ task-specific models that are trained separately on every dataset. A direct comparison between DrQA and RankQA demonstrates a performance improvement of up to 7.0 percentage points when using RankQA, with an average gain of 4.9 percentage points over all datasets. Given the identical implementation of information retrieval and machine comprehension, this increase is solely attributable to our answer re-ranking. Our RankQA also outperforms all other state-of-the-art QA systems in 3 out of 4 datasets by a notable margin. This holds true for extensions of DrQA (Paragraph Ranker and Adaptive Retrieval) and other neural QA architectures (R3 and DS-QA). This behavior is also observed in the case of the task-specific re-ranking model, which is trained for every dataset individually. Here we achieve performance improvements of up to 9.3 percentage points, with an average performance gain of 5.8 percentage points. The results on the CuratedTREC task deserve further discussion. Evidently, the dataset is particular in the sense that it is very sensitive to specific features. This is confirmed later in our analysis of feature importance and explains why the task-specific RankQA is inferior to the general model by a large margin. Finally, in the last row of Tbl. 2, we provide the results of a perfect re-ranker that always chooses the correct answer if present. This system represents an upper bound of the degree to which re-ranking could improve results without changing the information retrieval or machine comprehension models.
Figure 2: Robustness of answer re-ranking against a variable corpus size. We measure the exact matches for the CuratedTREC dataset while varying the corpus size from one thousand to over five million documents.
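Before turning to the robustness checks, the following minimal PyTorch sketch is meant to make the re-ranking network of Section 2.3.3 and the training loss of Section 2.4 concrete. It is an illustration only, not the released implementation (see Section 3.2 for the official code): the feature dimension d = 30 is a hypothetical stand-in for the real feature set, random tensors replace the extracted features, and the adjacent-pair sub-sampling is reduced to a single pre-formed batch of pairs.

```python
# Sketch of the RankQA re-ranking network (Sec. 2.3.3) and the
# pair-wise ranking loss with L1 regularization (Sec. 2.4).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReRanker(nn.Module):
    def __init__(self, d, m=512):
        super().__init__()
        self.lin1 = nn.Linear(d, m)   # weight A, offset b1
        self.lin2 = nn.Linear(m, 1)   # weight B, offset b2

    def forward(self, x):
        # f(x) = ReLU(x A^T + b1) B^T + b2   (Eq. 1)
        return self.lin2(F.relu(self.lin1(x))).squeeze(-1)

def rank_loss(model, x_i, x_j, y_i, lam=5e-4):
    """Pair-wise loss for adjacent candidate pairs (c_i, c_j), j = i + 1.

    y_i is 1.0 if candidate i is correct, else 0.0 (Eq. 2); the L1 penalty
    of Eq. 3 over all parameters is added with weight lambda.
    """
    l_rank = (y_i - torch.sigmoid(model(x_i) - model(x_j))) ** 2
    l_reg = sum(p.abs().sum() for p in model.parameters())
    return l_rank.mean() + lam * l_reg

# Toy example: d = 30 is a hypothetical feature dimension; random tensors
# stand in for the information-retrieval, MC, and aggregation features.
model = ReRanker(d=30)
opt = torch.optim.Adam(model.parameters(), lr=0.0005)
x_i, x_j = torch.rand(256, 30), torch.rand(256, 30)   # one batch of sampled pairs
y_i = torch.randint(0, 2, (256,)).float()
opt.zero_grad()
loss = rank_loss(model, x_i, x_j, y_i)
loss.backward()
opt.step()
```

The hidden size m = 512, the Adam optimizer with learning rate 0.0005, the batch size of 256, and the choice of λ follow Section 3.2; everything else here is schematic.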
4.2 Robustness Check: BERT-QA In order to demonstrate the robustness of answer re-ranking across different implementations, we repeat experiments from above based on the BERT-QA system. The results are shown in Tbl. 3. The first row displays the results without answer re-ranking. The second row shows the results after integrating our re-ranking module in the QA pipeline. As one can see, answer re-ranking yields significant performance improvements over all four datasets, ranging between 12.5 and 5.5 percentage points. The last row again lists an upper bound as would have been obtained by a perfect re-ranking system with access to the groundtruth labels. The performance differences between DrQA and BERT can be attributed to the fact that we trained BERT only on the SQuAD dataset, while the pre-trained DrQA model was trained on all four datasets. 4.3 Performance Sensitivity to Corpus Size Corpora of variable size are known to pose difficulties for neural QA systems. Kratzwald and Feuerriegel (2018) ran a series of experiments in which they monitored the end-to-end performance of different top-n systems (i. e., extracting the answer from the top-10 documents compared to extracting the answer from the top-1 document only). During the experiments, they increased the size of the corpus from one thousand to over five million documents. They found that selecting n = 10 is more beneficial for a large corpus, while n = 1 is preferable for small ones. They referred to this phenomenon as a noise-information trade-off: a large n increases the probability that the correct answer is extracted, while a small n reduces the chance that noisy answers will be included in the candidate list. As a remedy, the authors proposed an approach for adaptive retrieval that chooses an independent top-n retrieval for every query. We replicated the experiments of Kratzwald and Feuerriegel (2018)6 and evaluated our RankQA system in the same setting, as shown in Fig. 2. We see that answer re-ranking represents an efficient remedy against the noise-information tradeoff. The performance of our system (solid red line) exceeds that of any other system configuration for any given corpus size. Furthermore, our approach behaves in a more stable fashion than adaptive retrieval. Adaptive retrieval, like many other recent advancements (e. g., Lee et al., 2018; Lin et al., 2018), limits the amount of information that flows between the information retrieval and machine comprehension modules in order to select better answers. However, RankQA does not limit the information, but directly re-ranks the answers to remove noisy candidates. Our experiments suggest that answer re-ranking is more efficient than limiting the information flow when dealing with variable-size corpora. 4.4 Error Analysis and Feature Importance We analyze whether our system is capable of keeping the set of correctly answered questions after applying the re-ranking step. Therefore, we measure the fraction of correctly answered questions out of those questions that had been answered correctly before re-ranking. Specifically, we found that the ratio of answers that remained correct varies between 94.6 % and 96.1 %. Hence, our model does not substantially change initially correct rankings. Feature importance: Tbl. 4 compares the relative importance of different features. 
This is measured by training the model with the same pa6Source code for adaptive retrieval available at: www.github.com/bernhard2202/ adaptive-ir-for-qa 6083 SQuADOPEN CuratedTrec WebQuestions WikiMovies Baseline: BERT-QA (no re-ranking) 23.3 19.7 8.2 10.9 RankQA (implementation 2) 35.8 32.0 13.7 20.6 Upper bound: perfect re-ranking for k = 40 61.2 66.6 39.6 49.8 Table 3: Exact matches of RankQA based on the BERT-QA pipeline. We show results of the the pipline without re-ranking, the results obtained by our re-ranking model, and an upper bound (i. e., perfect re-ranking). SQuADOPEN CuratedTrec WebQuestions WikiMovies Baseline: DrQA (Chen et al., 2017) 29.8 25.4 20.7 36.5 RankQA (general) 34.5 32.4 21.8 43.3 Information Retrieval Features RankQA w/o query-document similarity 33.0 29.8 20.6 42.0 RankQA w/o query-paragraph similarity 32.1 32.0 22.0 42.1 RankQA w/o length features 32.9 31.4 22.3 42.6 Machine Comprehension Features RankQA w/o linguistic features (POS&NER) 34.4 31.8 21.5 42.3 RankQA w/o ranking features 34.1 31.8 21.4 43.3 RankQA w/o span score 33.4 30.1 21.3 42.3 Feature Aggregation RankQA w/o aggregation features 33.6 26.9 18.5 41.5 Table 4: Feature importance (i. e., averaged performance of exact matches on a hold-out sample). We train the general model using the same data, but blind one group of features every time. We underline results that undershoot the baseline and mark results in bold that surpass the general model trained on all features. rameters and hyperparameters as before; however, we blind one (group of) feature(s) in every round. This was done as follows: when the information retrieval or machine comprehension features were blinded, we also removed the corresponding aggregated features. When omitting aggregation features, we keep the original un-aggregated feature. We show the performance of DrQA (i. e., system without answer re-ranking) and the full re-ranker for the sake of comparison. The original performance increase can only be achieved when all features are included. This has important implications for our approach to properly fusing information from information retrieval and machine comprehension. It suggests that aggregation features are especially informative and that it is not sufficient to use only a subset of those. We can see that individual datasets reveal a different sensitivity to all feature groups. The CuratedTREC or WebQuestions datasets, for instance, are highly sensitive to some information retrieval features. However, in all cases, the fused combination of features from both information retrieval and machine comprehension is crucial for obtaining a strong performance. 5 Related Work This work focus on question answering for unstructured textual content in English. Earlier systems of this type comprise various modules such as, for example, query reformulation (e. g., Brill et al., 2002), question classification (Li and Roth, 2006), passage retrieval (e. g., Harabagiu et al., 2000), or answer extraction (Shen and Klakow, 2006). However, the aforementioned modules have been reduced to two consecutive steps with the advent of neural QA. 5.1 Neural Question Answering Neural QA systems, such as DrQA (Chen et al., 2017) or R3 (Wang et al., 2018), are usually designed as pipelines of two consecutive stages, namely a module for information retrieval and a module for machine comprehension. 
The overall performance depends on how many top-n passages are fed into the module for machine comprehension, which then essentially generates multiple candidate answers out of which the one with the highest answer probability score is chosen. However, this gives rise to a noise-information trade-off (Kratzwald and Feuerriegel, 2018). That is, selecting a large n generates many candidate 6084 answers, but increases the probability of selecting the wrong final answer. Similarly, retrieving a small number of top-n passages reduces the chance that the candidate answers contain the correct answer at all. Resolving the noise-information trade-off in neural QA has been primarily addressed by improving the interplay of modules for information retrieval and machine comprehension. Min et al. (2018) employ sentence-level retrieval in order to remove noisy content. Similarly, Lin et al. (2018) utilize neural networks in order to filter noisy text passages, while Kratzwald and Feuerriegel (2018) forward a query-specific number of text passages. Lee et al. (2018) re-rank the paragraphs before forwarding them to machine comprehension. However, none of the listed works introduce answer reranking to neural QA. 5.2 Answer Re-Ranking Answer re-ranking has been widely studied for systems other than neural QA, such as factoid (Severyn and Moschitti, 2012), non-factoid (Moschitti and Quarteroni, 2011), and definitional question answering (Chen et al., 2006). These methods target traditional QA systems that construct answers in non-neural ways, e. g., based on n-gram tiling (Brill et al., 2002) or constituency trees (Shen and Klakow, 2006). However, neural QA extracts an answer directly from text using end-to-end trainable models, rather than constructing it. With respect to the conceptual idea, closest to our work is the approach of Wang et al. (2017), who use a single recurrent model to re-rank multiple candidate-answers given the paragraphs they have been extracted from. However, this work is different from our RankQA in two ways. First, the authors must read multiple paragraphs in parallel via recurrent neural networks, which limits scalability and the maximum length of paragraphs; see the discussion in Lee et al. (2018). In contrast, our approach is highly scalable and can even be used together with complete corpora and long documents. Second, the authors evaluated their reranking in isolation, whereas we integrate our reranking into the full QA pipeline where the complete system is subject to extensive experiments. There are strong theoretical arguments as to why a better fusion of information retrieval and machine comprehension should be beneficial. First, features from information retrieval can potentially be decisive during answer selection (for instance, similarity features or document/paragraph length). Second, answer selection in state-of-the-art systems ignores linguistic features that are computed during the machine comprehension phase (e. g., DrQA uses part-of-speech and named entity information). Third, although some works aggregate scores for similar answers (e. g., Lee et al., 2018; Wang et al., 2017), the complete body information is largely ignored during aggregation. This particularly pertains to, e. g., how often and with which original rank the topn answers were generated. 6 Conclusion Our experiments confirm the effectiveness of a three-stage architecture in neural QA. 
Here answer re-ranking is responsible for bolstering the overall performance considerably: our RankQA represents the state-of-the-art system for 3 out of 4 datasets. When comparing it to corresponding two-staged architecture, answer re-ranking can be credited with an average performance improvement of 4.9 percentage points. This performance was even rendered possible with a light-weight architecture that allows for the efficient fusion of information retrieval and machine comprehension features during training. Altogether, RankQA provides a new, strong baseline for future research on neural QA. Acknowledgments We thank the anonymous reviewers for their helpful comments. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPUs used for this research. References Petr Baudiˇs and Jan ˇSediv´y. 2015. Modeling of the Question Answering Task in the YodaQA System. In International Conference on Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 222–228. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In Empirical Methods in Natural Language Processing (EMNLP), pages 1533–1544. Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering sys6085 tem. In Empirical Methods in Natural Language Processing (EMNLP), pages 257–264. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005. Learning to rank using gradient descent. In International Conference on Machine learning (ICML), pages 89–96. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer OpenDomain Questions. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 1870–1879. Yi Chen, Ming Zhou, and Shilong Wang. 2006. Reranking answers for definitional QA using language modeling. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 1081–1088. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Sanda M. Harabagiu, Dan I. Moldovan, Marius Paca, Rada Mihalcea, Mihai Surdeanu, Rzvan Bunescu, Corina R. Gˆırju, Vasile Rus, and Paul Morrescu. 2000. FALCON: Boosting Knowledge for Answer Engines. In Text Retrieval Conference (TREC), pages 479–488. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (ICLR). Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive Document Retrieval for Deep Question Answering. In Empirical Methods in Natural Language Processing (EMNLP), pages 576–587. Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking Paragraphs for Improving Answer Recall in Open-Domain Question Answering. In Empirical Methods in Natural Language Processing (EMNLP), pages 565– 569. Xin Li and Dan Roth. 2006. Learning question classifiers: The role of semantic information. Natural Language Engineering, 12(03):229–249. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising Distantly Supervised OpenDomain Question Answering. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 1736–1745. Mitchell P. 
Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-Value Memory Networks for Directly Reading Documents. In Empirical Methods in Natural Language Processing (EMNLP), pages 1400– 1409. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and Robust Question Answering from Minimal Context over Documents. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 1725–1735. Alessandro Moschitti and Silvia Quarteroni. 2011. Linguistic kernels for answer re-ranking in question answering systems. Information Processing and Management, 47(6):825–842. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Empirical Methods in Natural Language Processing (EMNLP) Language Processing, pages 2383–2392. Aliaksei Severyn and Alessandro Moschitti. 2012. Structural relationships for large-scale learning of answer re-ranking. In ACM SIGIR Conference on Research and Development in Information Retrieval, pages 741–750. Dan Shen and Dietrich Klakow. 2006. Exploring correlation of dependency relation paths for answer extraction. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 889–896. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018. Rˆ3: Reinforced Reader-Ranker for Open-Domain Question Answering. In Association for the Advancement of Artificial Intelligence (AAAI). Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2017. Evidence Aggregation for Answer ReRanking in Open-Domain Question Answering. International Conference on Learning Representations (ICLR). Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019a. End-to-End Open-Domain Question Answering with BERTserini. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL, Demo). Wei Yang, Yuqing Xie, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019b. Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering. arXiv preprint arxiv: 1904.06652.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6086 Latent Retrieval for Weakly Supervised Open Domain Question Answering Kenton Lee Ming-Wei Chang Kristina Toutanova Google Research Seattle, WA {kentonl,mingweichang,kristout}@google.com Abstract Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match. 1 Introduction Due to recent advances in reading comprehension systems, there has been a revival of interest in open domain question answering (QA), where the evidence must be retrieved from an open corpus, rather than being given as input. This presents a more realistic scenario for practical applications. Current approaches require a blackbox information retrieval (IR) system to do much of the heavy lifting, even though it cannot be fine-tuned on the downstream task. In the strongly supervised setting popularized by DrQA (Chen et al., 2017), they also assume a reading comprehension model trained on question-answer-evidence triples, such as SQuAD (Rajpurkar et al., 2016). The IR system is used at test time to generate evidence candidates in place of the gold evidence. In the weakly supervised setting, proposed by TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), and Quasar (Dhingra et al., 2017), the dependency on strong supervision is removed by assuming that the IR system provides noisy gold evidence. These approaches rely on the IR system to massively reduce the search space and/or reduce spurious ambiguity. However, QA is fundamentally different from IR (Singh, 2012). Whereas IR is concerned with lexical and semantic matching, questions are by definition under-specified and require more language understanding, since users are explicitly looking for unknown information. Instead of being subject to the recall ceiling from blackbox IR systems, we should directly learn to retrieve using question-answering data. In this work, we introduce the first OpenRetrieval Question Answering system (ORQA). ORQA learns to retrieve evidence from an open corpus, and is supervised only by questionanswer string pairs. While recent work on improving evidence retrieval has made significant progress (Wang et al., 2018; Kratzwald and Feuerriegel, 2018; Lee et al., 2018; Das et al., 2019), they still only rerank a closed evidence set. The main challenge to fully end-to-end learning is that retrieval over the open corpus must be considered a latent variable that would be impractical to train from scratch. 
IR systems offer a reasonable but potentially suboptimal starting point. The key insight of this work is that end-toend learning is possible if we pre-train the retriever with an unsupervised Inverse Cloze Task (ICT). In ICT, a sentence is treated as a pseudoquestion, and its context is treated as pseudoevidence. Given a pseudo-question, ICT requires selecting the corresponding pseudo-evidence out of the candidates in a batch. ICT pre-training provides a sufficiently strong initialization such that ORQA, a joint retriever and reader model, can be fine-tuned end-to-end by simply optimiz6087 Task Training Evaluation Example Evidence Answer Evidence Answer Reading Comprehension given span given string SQuAD (Rajpurkar et al., 2016) Open-domain QA Unsupervised QA none none none string GPT-2 (Radford et al., 2019) Strongly Supervised QA given span heuristic string DrQA (Chen et al., 2017) Weakly Supervised QA Closed Retrieval QA heuristic string heuristic string TriviaQA (Joshi et al., 2017) Open Retrieval QA learned string learned string ORQA (this work) Table 1: Comparison of assumptions made by related tasks, along with references to examples. Heuristic evidence refers to the typical strategy of considering only a closed set of evidence documents from a traditional IR system, which sets a strict upper-bound on task performance. In this work (ORQA), only question-answer string pairs are observed during training, and evidence retrieval is learned in a completely end-to-end manner. ing the marginal log-likelihood of correct answers that were found. We evaluate ORQA on open versions of five existing QA datasets. On datasets where the question writers already know the answer—SQuAD (Rajpurkar et al., 2016) and TriviaQA (Joshi et al., 2017)—the retrieval problem resembles traditional IR, and BM25 (Robertson et al., 2009) provides state-of-the-art retrieval. On datasets where question writers do not know the answer— Natural Questions (Kwiatkowski et al., 2019), WebQuestions (Berant et al., 2013), and CuratedTrec (Baudis and Sediv´y, 2015)—we show that learned retrieval is crucial, providing improvements of 6 to 19 points in exact match over BM25. 2 Overview In this section, we introduce notation for open domain QA that is useful for comparing prior work, baselines, and our proposed model. 2.1 Task In open domain question answering, the input q is a question string, and the output a is an answer string. Unlike reading comprehension, the source of evidence is a modeling choice rather than a part of the task definition. We compare the assumptions made by variants of reading comprehension and question answering tasks in Table 1. Evaluation is exact match with any of the reference answer strings after minor normalization such as lowercasing, following evaluation scripts from DrQA (Chen et al., 2017). 2.2 Formal Definitions We introduce several general definitions of model components that subsume many retrieval-based open domain question answering systems. Models are defined with respect to an unstructured text corpus that is split into B blocks of evidence texts. An answer derivation is a pair (b, s), where 1 ≤b ≤B indicates the index of an evidence block and s denotes a span of text within block b. The start and end token indices of span s are denoted by START(s) and END(s) respectively. Models define a scoring function S(b, s, q) indicating the goodness of an answer derivation (b, s) given a question q. 
Typically, this scoring function is decomposed over a retrieval component Sretr(b, q) and a reader component Sread(b, s, q): S(b, s, q) = Sretr(b, q) + Sread(b, s, q) During inference, the model outputs the answer string of the highest scoring derivation: a∗= TEXT(argmax b,s S(b, s, q)) where TEXT(b, s) deterministically maps answer derivation (b, s) to an answer string. A major challenge of any open domain question answering system is handling the scale. In our experiments on the English Wikipedia corpus, we consider over 13 million evidence blocks b, each with over 2000 possible answer spans s. 2.3 Existing Pipelined Models In existing retrieval-based open domain question answering systems, a blackbox IR system first chooses a closed set of evidence candidates. For example, the score from the retriever component of DrQA (Chen et al., 2017) is defined as: Sretr(b, q) = ( 0 b ∈TOP(k, TF-IDF(q, b)) −∞ otherwise Most work following DrQA use the same candidates from TF-IDF and focus on reading comprehension or re-ranking. The reading component 6088 BERTQ(q) [CLS]What does the zip in zip code stand for?[SEP] BERTB(0) [CLS]...The term ‘ZIP’ is an acronym for Zone Improvement Plan...[SEP] BERTB(1) [CLS]...group of zebras are referred to as a herd or dazzle...[SEP] BERTB(2) [CLS]...ZIPs for other operating systems may be preceded by...[SEP] BERTB(...) ... Sretr(0, q) Sretr(1, q) Sretr(2, q) Sretr(..., q) BERTR(q, 0) [CLS] What does the zip in zip code stand for? [SEP]...The term ‘ZIP’ is an acronym for Zone Improvement Plan...[SEP] BERTR(q, 2) [CLS] What does the zip in zip code stand for? [SEP]...ZIPs for other operating systems may be preceded by...[SEP] Top K Top K Sread(0, “The term”, q) Sread(0, “Zone Improvement Plan”, q) Sread(0, ..., q) MLP MLP MLP Sread(2, “ZIPs”, q) Sread(2, “operating systems”, q) Sread(2, ..., q) MLP MLP MLP Figure 1: Overview of ORQA. A subset of all possible answer derivations given a question q is shown here. Retrieval scores Sretr(q, b) are computed via inner products between BERT-based encoders. Top-scoring evidence blocks are jointly encoded with the question, and span representations are scored with a multi-layer perceptron (MLP) to compute Sread(q, b, s). The final joint model score is Sretr(q, b) + Sread(q, b, s). Unlike previous work using IR systems for candidate proposal, we learn to retrieve from all of Wikipedia directly. Sread(b, s, q) is learned from gold answer derivations, typically from the SQuAD (Rajpurkar et al., 2016) dataset, where the evidence text is given. In work that is more closely related to our approach, the reader is learned entirely from weak supervision (Joshi et al., 2017; Dhingra et al., 2017; Dunn et al., 2017). Spurious ambiguities (see Table 2) are heuristically removed by the retrieval system, and the cleaned results are treated as gold derivations. 3 Open-Retrieval Question Answering (ORQA) We propose an end-to-end model where the retriever and reader components are jointly learned, which we refer to as the Open-Retrieval Question Answering (ORQA) model. An important aspect of ORQA is its expressivity—it is capable of retrieving any text in an open corpus, rather than being limited to the closed set returned by a blackbox IR system. An illustration of how ORQA scores answer derivations is presented in Figure 1. Following recent advances in transfer learning, all scoring components are derived from BERT (Devlin et al., 2018), a bidirectional transformer that has been pre-trained on unsupervised language-modeling data. 
We refer the reader to the original paper for details of the architecture. In this work, the relevant abstraction can be described by the following function: BERT(x1, [x2]) = {CLS : hCLS, 1 : h1, 2 : h2, ...} The BERT function takes one or two string inputs (x1 and optionally x2) as arguments. It returns vectors corresponding to representations of the CLS pooling token or the input tokens. Retriever component In order for the retriever to be learnable, we define the retrieval score as the inner product of dense vector representations of the question q and the evidence block b. hq = WqBERTQ(q)[CLS] hb = WbBERTB(b)[CLS] Sretr(b, q) = h⊤ q hb where Wq and Wb are matrices that project the BERT output into 128-dimensional vectors. Reader component The reader is a span-based variant of the reading comprehension model proposed in Devlin et al. (2018): hstart = BERTR(q, b)[START(s)] hend = BERTR(q, b)[END(s)] Sread(b, s, q) = MLP([hstart; hend]) Following Lee et al. (2016), a span is represented by the concatenation of its end points, which is scored by a multi-layer perceptron to enable start/end interaction. Inference & Learning Challenges The model described above is conceptually simple. However, inference and learning are challenging since (1) an 6089 Example Supportive Spurious Evidence Ambiguity Q: Who is credited with developing the XY coordinate plane? ...invention of Cartesian coordinates by Ren´e Descartes revolutionized... ...Ren´e Descartes was born in La Haye en Touraine, France... A: Ren´e Descartes Q: How many districts are in the state of Alabama? ...Alabama is currently divided into seven congressional districts, each represented by ... ...Alabama is one of seven states that levy a tax on food at the same rate as other goods... A: seven Table 2: Examples of spurious ambiguities arising from the use of weak supervision. Good evidence retrieval is needed to generate a meaningful learning signal. open evidence corpus presents an enormous search space (over 13 million evidence blocks), and (2) how to navigate this space is entirely latent, so standard teacher-forcing approaches do not apply. Latent-variable methods are also difficult to apply naively due to the large number of spuriously ambiguous derivations. For example, as shown in Table 2, many irrelevant passages in Wikipedia would contain the answer string “seven.” We address these challenges by carefully initializing the retriever with unsupervised pre-training (Section 4). The pre-trained retriever allows us to (1) pre-encode all evidence blocks from Wikipedia, enabling dynamic yet fast top-k retrieval during fine-tuning (Section 5), and (2) bias the retrieval away from spurious ambiguities and towards supportive evidence (Section 6). 4 Inverse Cloze Task The goal of our proposed pre-training procedure is for the retriever to solve an unsupervised task that closely resembles evidence retrieval for QA. Intuitively, useful evidence typically discusses entities, events, and relations from the question. It also contains extra information (the answer) that is not present in the question. An unsupervised analog of a question-evidence pair is a sentencecontext pair—the context of a sentence is semantically relevant and can be used to infer information missing from the sentence. Following this intuition, we propose to pre-train our retrieval module with an Inverse Cloze Task (ICT). In the standard Cloze task (Taylor, 1953), the goal is to predict masked-out text based on its context. 
ICT instead requires predicting the inverse—given a sentence, predict its context (see BERTQ(q) [CLS]They are generally slower than horses, but their great stamina helps them outrun predators.[SEP] BERTB(0) [CLS]...Zebras have four gaits: walk, trot, canter and gallop. When chased, a zebra will zig-zag from side to side... ...[SEP] BERTB(1) [CLS]...Gagarin was further selected for an elite training group known as the Sochi Six...[SEP] BERTB(...) ... Sretr(0, q) Sretr(1, q) Sretr(..., q) Figure 2: Example of the Inverse Cloze Task (ICT), used for retrieval pre-training. A random sentence (pseudo-query) and its context (pseudo evidence text) are derived from the text snippet: “...Zebras have four gaits: walk, trot, canter and gallop. They are generally slower than horses, but their great stamina helps them outrun predators. When chased, a zebra will zigzag from side to side...” The objective is to select the true context among candidates in the batch. Figure 2). We use a discriminative objective that is analogous to downstream retrieval: PICT(b|q) = exp(Sretr(b, q)) X b′∈BATCH exp(Sretr(b′, q)) where q is a random sentence that is treated as a pseudo-question, b is the text surrounding q, and BATCH is the set of evidence blocks in the batch that are used as sampled negatives. An important aspect of ICT is that it requires learning more than word matching features, since the pseudo-question is not present in the evidence. For example, the pseudo-question in Figure 2 never explicitly mentions “Zebras”, but the retriever must still be able to select the context that discusses Zebras. Being able to infer the semantics from under-specified language is what sets QA apart from traditional IR. However, we also do not want to dissuade the retriever from learning to perform word matching—lexical overlap is ultimately a very useful feature for retrieval. Therefore, we only remove the sentence from its context in 90% of the examples, encouraging the model to learn both abstract representations when needed and low-level word matching features when available. ICT pre-training accomplishes two main goals: 1. Despite the mismatch between sentences dur6090 ing pre-training and questions during finetuning, we expect zero-shot evidence retrieval performance to be sufficient for bootstrapping the latent-variable learning. 2. There is no such mismatch between pretrained evidence blocks and downstream evidence blocks. We can expect the block encoder BERTB(b) to work well without further training. Only the question encoder needs to be fine-tuned on downstream data. As we will see in the following section, these two properties are crucial for enabling computationally feasible inference and end-to-end learning. 5 Inference Since fixed block encoders already provide a useful representation for retrieval, we can precompute all block encodings in the evidence corpus. As a result, the enormous set of evidence blocks does not need to be re-encoded while finetuning, and it can be pre-compiled into an index for fast maximum inner product search using existing tools such as Locality Sensitive Hashing. With the pre-compiled index, inference follows a standard beam-search procedure. We retrieve the top-k evidence blocks and only compute the expensive reader scores for those k blocks. 
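As a simplified illustration of the retrieval side of this pipeline — the inner-product score Sretr from Section 3, the in-batch ICT objective from Section 4, and top-k lookup against pre-computed block encodings — consider the sketch below. Random vectors stand in for the BERT-based encoders, and a brute-force matrix product stands in for the approximate maximum inner product search index; it is not the ORQA implementation, and the toy corpus and batch sizes are hypothetical.

```python
# Simplified sketch of ORQA's retrieval components. Random projections stand
# in for BERT_Q / BERT_B; a dense matrix product stands in for the
# pre-compiled index described in Section 5.
import torch
import torch.nn.functional as F

dim = 128                        # retrieval dimension used in the paper
num_blocks, batch = 10_000, 32   # toy corpus size and ICT batch size (hypothetical)

# Section 3: S_retr(b, q) = h_q^T h_b, with 128-d projections of the [CLS] vectors.
h_q = torch.randn(batch, dim)    # stands in for W_q BERT_Q(q)[CLS]
h_b = torch.randn(batch, dim)    # stands in for W_b BERT_B(b)[CLS]

# Section 4: ICT as in-batch classification. Row i's true context is column i;
# all other blocks in the batch act as sampled negatives.
scores = h_q @ h_b.t()                        # (batch, batch) retrieval scores
targets = torch.arange(batch)                 # index of the true context per pseudo-query
ict_loss = F.cross_entropy(scores, targets)   # -log P_ICT(b | q)

# Section 5: encode all evidence blocks once, then retrieve the top-k per query.
block_index = torch.randn(num_blocks, dim)    # fixed, pre-computed block encodings
query = torch.randn(1, dim)                   # fine-tuned question encoding
retrieval_scores = (query @ block_index.t()).squeeze(0)
top_k = torch.topk(retrieval_scores, k=5)     # k top-scoring blocks go to the reader
print(ict_loss.item(), top_k.indices.tolist())
```

In practice, the dense matrix product would be replaced by an approximate search structure such as the Locality Sensitive Hashing index mentioned above.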
While we only considering the top-k evidence blocks during a single inference step, this set dynamically changes during training since the question encoder is fine-tuned according to the weakly supervised QA data, as discussed in the following section. 6 Learning Learning is relatively straightforward, since ICT should provide non-trivial zero-shot retrieval. We first define a distribution over answer derivations: P(b, s|q) = exp(S(b, s, q)) X b′∈TOP(k) X s′∈b′ exp(S(b′, s′, q)) where TOP(k) denotes the top k retrieved blocks based on Sretr. We use k = 5 in our experiments. Given a gold answer string a, we find all (possibly spuriously) correct derivations in the beam, and optimize their marginal log-likelihood: Lfull(q, a) = −log X b∈TOP(k) X s∈b, a=TEXT(s) P ′(b, s|q) where a = TEXT(s) indicates whether the answer string a matches exactly the span s. To encourage more aggressive learning, we also include an early update, where we consider a larger set of c evidence blocks but only update the retrieval score, which is cheap to compute: Pearly(b|q) = exp(Sretr(b, q)) X b′∈TOP(c) exp(Sretr(b′, q)) Learly(q, a) = −log X b∈TOP(c), a∈TEXT(b) Pearly(b|q) where a ∈ TEXT(b) indicates whether answer string a appears in evidence block b. We use c = 5000 in our experiments. The final loss includes both updates: L(q, a) = Learly(q, a) + Lfull(q, a) If no matching answers are found at all, then the example is discarded. While we would expect almost all examples to be discarded with random initialization, we discard less than 10% of examples in practice due to ICT pre-training. As previously mentioned, we fine-tune all parameters except those in the evidence block encoder. Since the query encoder is trainable, the model can potentially learn to retrieve any evidence block. This expressivity is a crucial difference from blackbox IR systems, where recall can only be improved by retrieving more evidence. 7 Experimental Setup 7.1 Open Domain QA Datasets We train and evaluate on data from 5 existing question answering or reading comprehension datasets. Not all of them are intended as open domain QA datasets in their original form, so we convert them to open formats, following DrQA (Chen et al., 2017). Each example in the open version of the datasets consists of a single question string and a set of reference answer strings. Natural Questions contains question from aggregated queries to Google Search (Kwiatkowski et al., 2019). To gather an open version of this dataset, we only keep questions with short answers and discard the given evidence document. Answers with many tokens often resemble extractive snippets rather than canonical answers, so we discard answers with more than 5 tokens. 6091 Dataset Train Dev Test Example Question Example Answer Natural Questions 79168 8757 3610 What does the zip in zip code stand for? Zone Improvement Plan WebQuestions 3417 361 2032 What airport is closer to downtown Houston? William P. Hobby Airport CuratedTrec 1353 133 694 What metal has the highest melting point? Tungsten TriviaQA 78785 8837 11313 What did L. Fran Baum, author of The Wonderful Wizard of Oz, call his home in Hollywood? Ozcot SQuAD 78713 8886 10570 Other than the Automobile Club of Southern California, what other AAA Auto Club chose to simplify the divide? California State Automobile Association Table 3: Statistics and examples for the datasets that we evaluate on. 
There are slightly differences from the original datasets as described in Section 7.1, since not all of them were intended to be used in the open setting. WebQuestions contains questions that were sampled from the Google Suggest API (Berant et al., 2013). The answers are annotated with respect to Freebase, but we only keep the string representation of the entities. CuratedTrec is a corpus of question-answer pairs derived from TREC QA data curated by Baudis and Sediv´y (2015). The questions come from various sources of real queries, such as MSNSearch or AskJeeves logs, where the question askers do not observe any evidence documents (Voorhees, 2001). TriviaQA is a collection of trivia questionanswer pairs that were scraped from the web (Joshi et al., 2017). We use their unfiltered set and discard their distantly supervised evidence. SQuAD was designed to be a reading comprehension dataset rather than an open domain QA dataset (Rajpurkar et al., 2016). Answer spans were selected from a Wikipedia paragraph, and the questions were written by annotators who were instructed to ask questions that are answered by a given answer in a given context. On datasets where a development set does not exist, we randomly hold out 10% of the training data for development. On datasets where the test set is hidden, we also randomly hold out 10% of the training data for development, and use the original development set for testing (following DrQA). A summary of dataset statistics and examples are shown in Table 3. 7.2 Dataset Biases Evaluating on this diverse set of question-answer pairs is crucial, because all existing datasets have inherent biases that are problematic for open domain QA systems with learned retrieval. These biases are summarized in Table 4. In the Natural Questions, WebQuestions, and CuratedTrec, the question askers do not already know the answer. This accurately reflects a distribution of genuine information-seeking questions. However, annotators must separately find correct answers, which requires assistance from automatic tools and can introduce a moderate bias towards results from the tool. In TriviaQA and SQuAD, automatic tools are not needed since the questions are written with known answers in mind. However, this introduces another set of biases that are arguably more problematic. Question writing is not motivated by an information need. This often results in many hints in the question that would not be present in naturally occurring questions, as shown in the examples in Table 3. This is particularly problematic for SQuAD, where the question askers are also prompted with a specific piece of evidence for the answer, leading to artificially large lexical overlap between the question and evidence. Note that these are simply properties of the datasets rather than actionable criticisms—such data collection methods are necessary to scale up, and it is unclear how one could collect a truly unbiased dataset without impractical costs. 7.3 Implementation Details We mainly evaluate in the setting where only question-answer string pairs are available for supervision. See Section 9 for head-to-head comparisons with the DrQA setting that uses the same evidence corpus and the same type of supervision. Evidence Corpus We use the English Wikipedia snapshot from December 20, 2018 as the evidence corpus.1 The corpus is greedily 1We deviate from DrQA’s 2016 Wikipedia evidence corpus because the original snapshot is no longer publicly available. 
The 12-20-2018 snapshot is available at https:// archive.org/download/enwiki-20181220. 6092 Dataset Question Question Toolwriter writer assisted knows knows answer answer evidence Natural Questions  WebQuestions  CuratedTrec  TriviaQA  SQuAD   Table 4: A breakdown of biases in existing QA datasets. These biases are associated with either the question or the answer. split into chunks of at most 288 wordpieces based on BERT’s tokenizer, while preserving sentence boundaries. This results in just over 13 million evidence blocks. The title of the document is included in the block encoder. Hyperparameters In all uses of BERT (both the retriever and reader), we initialize from the uncased base model, which consists of 12 transformer layers with a hidden size of 768. As mentioned in Section 3, the retrieval representations, hq and hb, have 128 dimensions. The small hidden size was chosen so that the final QA model can comfortably run on a single machine. We use the default optimizer from BERT. When pre-training the retriever with ICT, we use a learning rate of 10−4 and a batch size of 4096 on Google Cloud TPUs for 100k steps. When finetuning, we use a learning rate of 10−5 and a batch size of 1 on a single machine with a 12GB GPU. Answer spans are limited to 10 tokens. We perform 2 epochs of fine-tuning for the larger datasets (Natural Questions, TriviaQA, and SQuAD), and 20 epochs for the smaller datasets (WebQuestions and CuratedTrec). 8 Main Results 8.1 Baselines We compare against other retrieval methods by using alternate retrieval scores Sretr(b, q), but with the same reader. BM25 A de-facto state-of-the-art unsupervised retrieval method is BM25 (Robertson et al., 2009). It has been shown to be robust for both traditional information retrieval tasks, and evidence retrieval for question answering (Yang et al., 2017).2 Since 2We also include the title, which was slightly beneficial. Model BM25 NNLM ELMO ORQA +BERT +BERT +BERT Dev Natural Questions 24.8 3.2 3.6 31.3 WebQuestions 20.8 9.1 17.7 38.5 CuratedTrec 27.1 6.0 8.3 36.8 TriviaQA 47.2 7.3 6.0 45.1 SQuAD 28.1 2.8 1.9 26.5 Test Natural Questions 26.5 4.0 4.7 33.3 WebQuestions 17.7 7.3 15.6 36.4 CuratedTrec 21.3 4.5 6.8 30.1 TriviaQA 47.1 7.1 5.7 45.0 SQuAD 33.2 3.2 2.3 20.2 Table 5: Main results: End-to-end exact match for open-domain question answering from questionanswer pairs only. Datasets where question askers know the answer behave differently from datasets where they do not. BM25 is not trainable, the retrieved evidence considered during fine-tuning is static. Inspired by BERTserini (Yang et al., 2019), the final score is a learned weighted sum of the BM25 and reader score. Our implementation is based on Lucene.3 Language Models While unsupervised neural retrieval is notoriously difficult to improve over traditional IR (Lin, 2019), we include them as baselines for comparison. We experiment with unsupervised pooled representations from neural language models (LM), which has been shown to be state-of-the-art unsupervised representations (Perone et al., 2018). We compare with two widely-used 128-dimensional representations: (1) NNLM, context-independent embeddings from a feed-forward LMs (Bengio et al., 2003),4 and (2) ELMO (small), a context-dependent bidirectional LSTM (Peters et al., 2018).5 As with ICT, we use the alternate encoders to pre-compute the encoded evidence blocks hb and to initialize the question encoding hq, which is fine-tuned. 
Based on existing IR literature and the intuition that LMs do not explicitly optimize for retrieval, we do not expect these to be strong baselines, but they demonstrate the difficulty of encoding blocks of text into 128 dimensions. 8.2 Results The main results are show in Table 5. The first result to note is that BM25 is a powerful retrieval system. Word matching is important, and 3https://lucene.apache.org/ 4https://tfhub.dev/google/nnlm-en-dim128/1 5https://allennlp.org/elmo 6093 Model Evidence SQuAD Retrieved DRQA 5 documents 27.1 DRQA (DS) 5 documents 28.4 DRQA (DS + MTL) 5 documents 29.8 BERTSERINI 5 documents 19.1 BERTSERINI 29 paragraphs 36.6 BERTSERINI 100 paragraphs 38.6 BM25 + BERT 5 blocks 34.7 (gold deriv.) Table 6: Analysis: Results comparable to previous work in the strongly supervised setting, where models have access to gold derivations from SQuAD. Different systems segment Wikipedia differently. There are 5.1M documents, 29.5M paragraphs, and 12.1M blocks in the December 12, 2016 Wikipedia snapshot. dense vector representations derived from language models do not readily capture this. We also show that on questions that were derived from real users who are seeking information (Natural Questions, WebQuestions, and CuratedTrec), our ICT pre-trained retriever outperforms BM25 by a large marge—6 to 19 points in exact match depending on the dataset. However, in datasets where the question askers already know the answer, i.e. SQuAD and TriviaQA, the retrieval problem resembles traditional IR. In this setting, a highly compressed 128dimensional vector cannot match BM25’s ability to precisely represent every word in the evidence. The notable drop between development and test accuracy for SQuAD is a reflection of an artifact in the dataset—its 100k questions are derived from only 536 documents. Therefore, good retrieval targets are highly correlated between training examples, violating the IID assumption, and making it unsuitable for learned retrieval. We strongly suggest that those who are interested in end-to-end open-domain QA models no longer train and evaluate with SQuAD for this reason. 9 Analysis 9.1 Strongly supervised comparison To verify that our BM25 baseline is indeed state of the art, we also provide direct comparisons with DrQA’s setup, where systems have access to gold answer derivations from SQuAD (Rajpurkar et al., 2016). While many systems have been proposed following DrQA’s original setting, we compare only to the original system and the best system that 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 20 25 30 35 ICT masking rate Natural Questions Exact Match ORQA BM25 + BERT Figure 3: Analysis: Performance on our open version of the Natural Questions dev set with various masking rates for the ICT pre-training. Too much masking prevents the model from learning to exploit exact ngram overlap. Too little masking makes language understanding unnecessary. we are aware of—BERTserini (Yang et al., 2019). DrQA’s reader is DocReader (Chen et al., 2017), and they use TF-IDF to retrieve the top k documents. They also include distant supervision based on TF-IDF retrieval. BERTserini’s reader is derived from base BERT (much like our reader), and they use BM25 to retrieve the top k paragraphs (much like our BM25 baseline). A major difference is that BERTserini uses true paragraphs from Wikipedia rather than arbitrary blocks, resulting in more evidence blocks due to uneven lengths. 
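For reference, the term-matching retrieval underlying the BM25 + BERT baseline of Section 8.1 can be sketched in a few lines. The actual baseline is built on Lucene; the snippet below is only a self-contained illustration of Okapi BM25 scoring, with common default parameters (k1 = 1.5, b = 0.75) that are not taken from the paper.

```python
# Self-contained Okapi BM25 scorer, included only to illustrate the sparse
# term-matching retrieval used by the BM25 + BERT baseline (which uses Lucene).
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Return one BM25 score per tokenized document for a tokenized query."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter()                       # document frequency per term
    for d in docs_tokens:
        df.update(set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Toy example: score three evidence blocks against a question.
docs = ["the term zip is an acronym for zone improvement plan",
        "a group of zebras is referred to as a herd",
        "zips for other operating systems may be preceded by a flag"]
query = "what does the zip in zip code stand for".split()
print(bm25_scores(query, [d.split() for d in docs]))
```

Such sparse scoring rewards exact n-gram overlap, which is why it remains strong on SQuAD and TriviaQA, where questions share many tokens with the evidence.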
For fair comparison with these strongly supervised systems, we pre-train the reader on SQuAD data.6 In Table 6, our BM25 baseline, which retrieves 5 evidence blocks, greatly outperforms 5-document BERTserini and is close to 29paragraph BERTserini. 9.2 Masking Rate in the Inverse Cloze Task The pseudo-query is masked from the evidence block 90% of the time, motivated by intuition in Section 4. We empirically verify our intuitions in Figure 3 by varying the masking rate, and comparing results on our open version of the Natural Questions development set. If we always mask the pseudo-query, the retriever never learns that n-gram overlap is a powerful retrieval signal, losing almost 10 points in end-to-end performance. If we never mask the pseudo-query, the problem is reduced to memorization and does not generalize well to question answering. The latter loses 6 points in end-to-end performance, which—perhaps not surprisingly— produces near-identical results to BM25. 6We use DrQA’s December 12, 2016 snapshot of Wikipedia for an apples-to-apples comparison. 6094 Example ORQA BM25 + BERT Q: what is the new orleans saints symbol called ...The team’s primary colors are old gold and black; their logo is a simplified fleur-de-lis. They played their home games in Tulane Stadium through the 1974 NFL season.... ...the SkyDome was owned by Sportsco at the time... the sale of the New Orleans Saints with team owner Tom Benson... the Saints became a symbol for that community... A: fleur-de-lis Q: how many senators per state in the us ...powers of the Senate are established in Article One of the U.S. Constitution. Each U.S. state is represented by two senators... ...The Georgia Constitution mandates a maximum of 56 senators, elected from single-member districts... A: two Q: when was germany given a permanent seat on the council of the league of nations ...Under the Weimar Republic, Germany (in fact the “Deutsches Reich” or German Empire) was admitted to the League of Nations through a resolution passed on September 8 1926. An additional 15 countries joined later... ...the accession of the German Democratic Republic to the Federal Republic of Germany, it was effective on 3 October 1990...Germany has been elected as a non-permanent member of the United Nations Security Council... A: 1926 Q: when was diary of a wimpy kid double down published ...“Diary of a Wimpy Kid” first appeared on FunBrain in 2004, where it was read 20 million times. The abridged hardcover adaptation was released on April 1, 2007... Diary of a Wimpy Kid: Double Down is the eleventh book in the ”Diary of a Wimpy Kid” series by Jeff Kinney... The book was published on November 1, 2016... A: November 1, 2016 Table 7: Analysis: Example predictions on our open version of the Natural Questions dev set. We show the highest scoring derivation, consisting of the evidence block and the predicted answer in bold. ORQA is more robust at separating semantically distinct text that have high lexical overlap. However, the limitation of the 128-dimensional vectors is that extremely specific concepts are less precisely represented. 9.3 Example Predictions For a more intuitive understanding of the improvements from ORQA, we compare its predictions with baseline predictions in Table 7. We find that ORQA is more robust at separating semantically distinct text with high lexical overlap, as shown in the first three examples. However, it is expected that there are limits to how much information can be compressed into 128-dimensional vectors. 
The last example shows that ORQA has trouble precisely representing extremely specific concepts that sparse representations can cleanly separate. These errors indicate that a hybrid approach would be promising future work. 10 Related Work Recent progress has been made towards improving evidence retrieval (Wang et al., 2018; Kratzwald and Feuerriegel, 2018; Lee et al., 2018; Das et al., 2019) by learning to aggregate from multiple retrieval steps. They re-rank evidence candidates from a closed set, and we aim to integrate these complementary approaches in future work. Our approach is also reminiscent of weakly supervised semantic parsing (Clarke et al., 2010; Liang et al., 2013; Artzi and Zettlemoyer, 2013; Fader et al., 2014; Berant et al., 2013; Kwiatkowski et al., 2013), with which we share similar challenges—(1) inference and learning are tightly coupled, (2) latent derivations must be discovered, and (3) strong inductive biases are needed to find positive learning signal while avoiding spurious ambiguities. While we motivate ICT from first principles as an unsupervised proxy for evidence retrieval, it is closely related to existing representation learning literature. ICT can be considered a generalization of the skip-gram objective (Mikolov et al., 2013), with a coarser granularity, deep architecture, and in-batch negative sampling from Logeswaran and Lee (2018). Consulting external evidence sources with latent retrieval has also been explored in information extraction (Narasimhan et al., 2016). In comparison, we are able to learn a much more expressive retriever due to the strong inductive biases from ICT pre-training. 11 Conclusion We presented ORQA, the first open domain question answering system where the retriever and reader are jointly learned end-to-end using only question-answer pairs and without any IR system. This is made possible by pre-training the retriever using an Inverse Cloze Task (ICT). Experiments show that learning to retrieve is crucial when the questions reflect an information need, i.e. the question writers do not already know the answer. Acknowledgements We thank the Google AI Language Team for valuable suggestions and feedback. 6095 References Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1):49–62. Petr Baudis and Jan Sediv´y. 2015. Modeling of the question answering task in the yodaqa system. In CLEF. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1870–1879. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the fourteenth conference on computational natural language learning, pages 18–27. Association for Computational Linguistics. Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. 
Multi-step retrieverreader interaction for scalable open-domain question answering. In International Conference on Learning Representations. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question answering by search and reading. arXiv preprint arXiv:1707.03904. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1156– 1165. ACM. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1601–1611. Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive document retrieval for deep question answering. arXiv preprint arXiv:1808.06528. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1545–1556. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Rhinehart, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics. Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain question answering. arXiv preprint arXiv:1810.00494. Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436. Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446. Jimmy Lin. 2019. The neural hype and comparisons against weak baselines. In ACM SIGIR Forum. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. arXiv preprint arXiv:1803.02893. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquiring external evidence with reinforcement learning. arXiv preprint arXiv:1603.07954. Christian S Perone, Roberto Silveira, and Thomas S Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. 6096 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog. 
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389. Amit Singh. 2012. Entity based q&a retrieval. In Proceedings of the 2012 Joint conference on empirical methods in natural language processing and computational natural language learning, pages 1266– 1277. Association for Computational Linguistics. Wilson L Taylor. 1953. “Cloze procedure”: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. Ellen M Voorhees. 2001. Overview of the trec 2001 question answering track. In In Proceedings of the Tenth Text REtrieval Conference (TREC. Citeseer. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R 3: Reinforced ranker-reader for open-domain question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1253–1256. ACM. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718.
2019
612
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6097–6109 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6097 Multi-hop Reading Comprehension through Question Decomposition and Rescoring Sewon Min1, Victor Zhong1, Luke Zettlemoyer1, Hannaneh Hajishirzi1,2 1University of Washington 2Allen Institute for Artificial Intelligence {sewon,vzhong,lsz,hannaneh}@cs.washington.edu Abstract Multi-hop Reading Comprehension (RC) requires reasoning and aggregation across several paragraphs. We propose a system for multi-hop RC that decomposes a compositional question into simpler sub-questions that can be answered by off-the-shelf single-hop RC models. Since annotations for such decomposition are expensive, we recast subquestion generation as a span prediction problem and show that our method, trained using only 400 labeled examples, generates sub-questions that are as effective as humanauthored sub-questions. We also introduce a new global rescoring approach that considers each decomposition (i.e. the sub-questions and their answers) to select the best final answer, greatly improving overall performance. Our experiments on HOTPOTQA show that this approach achieves the state-of-the-art results, while providing explainable evidence for its decision making in the form of sub-questions. 1 Introduction Multi-hop reading comprehension (RC) is challenging because it requires the aggregation of evidence across several paragraphs to answer a question. Table 1 shows an example of multi-hop RC, where the question “Which team does the player named 2015 Diamond Head Classics MVP play for?” requires first finding the player who won MVP from one paragraph, and then finding the team that player plays for from another paragraph. In this paper, we propose DECOMPRC, a system for multi-hop RC, that learns to break compositional multi-hop questions into simpler, singlehop sub-questions using spans from the original question. For example, for the question in Table 1, we can create the sub-questions “Which player named 2015 Diamond Head Classics MVP?” and “Which team does ANS play for?”, Q Which team does the player named 2015 Diamond Head Classics MVP play for? P1 The 2015 Diamond Head Classic was ... Buddy Hield was named the tournament’s MVP. P2 Chavano Rainier Buddy Hield is a Bahamian professional basketball player for the Sacramento Kings ... Q1 Which player named 2015 Diamond Head Classics MVP? Q2 Which team does ANS play for? Table 1: An example of multi-hop question from HOTPOTQA. The first cell shows given question and two of given paragraphs (other eight paragraphs are not shown), where the red text is the groundtruth answer. Our system selects a span over the question and writes two sub-questions shown in the second cell. where the token ANS is replaced by the answer to the first sub-question. The final answer is then the answer to the second sub-question. Recent work on question decomposition relies on distant supervision data created on top of underlying relational logical forms (Talmor and Berant, 2018), making it difficult to generalize to diverse natural language questions such as those on HOTPOTQA (Yang et al., 2018). In contrast, our method presents a new approach which simplifies the process as a span prediction, thus requiring only 400 decomposition examples to train a competitive decomposition neural model. 
Furthermore, we propose a rescoring approach which obtains answers from different possible decompositions and rescores each decomposition with the answer to decide on the final answer, rather than deciding on the decomposition in the beginning. Our experiments show that DECOMPRC outperforms other published methods on HOTPOTQA (Yang et al., 2018), while providing explainable evidence in the form of sub-questions. In addition, we evaluate with alternative distrator paragraphs and questions and show that our decomposition-based approach is more ro6098 bust than an end-to-end BERT baseline (Devlin et al., 2019). Finally, our ablation studies show that our sub-questions, with 400 supervised examples of decompositions, are as effective as humanwritten sub-questions, and that our answer-aware rescoring method significantly improves the performance. Our code and interactive demo are publicly available at https://github.com/ shmsw25/DecompRC. 2 Related Work Reading Comprehension. In reading comprehension, a system reads a document and answers questions regarding the content of the document (Richardson et al., 2013). Recently, the availability of large-scale reading comprehensiondatasets (Hermann et al., 2015; Rajpurkar et al., 2016; Joshi et al., 2017) has led to the development of advanced RC models (Seo et al., 2017; Xiong et al., 2018; Yu et al., 2018; Devlin et al., 2019). Most of the questions on these datasets can be answered in a single sentence (Min et al., 2018), which is a key difference from multi-hop reading comprehension. Multi-hop Reading Comprehension. In multihop reading comprehension, the evidence for answering the question is scattered across multiple paragraphs. Some multi-hop datasets contain questions that are, or are based on relational queries (Welbl et al., 2017; Talmor and Berant, 2018). In contrast, HOTPOTQA (Yang et al., 2018), on which we evaluate our method, contains more natural, hand-written questions that are not based on relational queries. Prior methods on multi-hop reading comprehension focus on answering relational queries, and emphasize attention models that reason over coreference chains (Dhingra et al., 2018; Zhong et al., 2019; Cao et al., 2019). In contrast, our method focuses on answering natural language questions via question decomposition. By providing decomposed single-hop sub-questions, our method allows the model’s decisions to be explainable. Our work is most related to Talmor and Berant (2018), which answers questions over web snippets via decomposition. There are three key differences between our method and theirs. First, they decompose questions that are correspond to relational queries, whereas we focus on natural language questions. Next, they rely on an underlying relational query (SPARQL) to build distant supervision data for training their model, while our method requires only 400 decomposition examples. Finally, they decide on a decomposition operation exclusively based on the question. In contrast, we decompose the question in multiple ways, obtain answers, and determine the best decomposition based on all given context, which we show is crucial to improving performance. Semantic Parsing. Semantic parsing is a larger area of work that involves producing logical forms from natural language utterances, which are then usually executed over structured knowledge graphs (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011). 
Our work is inspired by the idea of compositionality from semantic parsing, however, we focus on answering natural language questions over unstructured text documents. 3 Model 3.1 Overview In multi-hop reading comprehension, a system answers a question over a collection of paragraphs by combining evidence from multiple paragraphs. In contrast to single-hop reading comprehension, in which a system can obtain good performance using a single sentence (Min et al., 2018), multi-hop reading comprehension typically requires more complex reasoning over how two pieces of evidence relate to each other. We propose DECOMPRC for multi-hop reading comprehension via question decomposition. DECOMPRC answers questions through a three step process: 1. First, DECOMPRC decomposes the original, multi-hop question into several single-hop sub-questions according to a few reasoning types in parallel, based on span predictions. Figure 1 illustrates an example in which a question is decomposed through four different reasoning types. Section 3.2 details our decomposition approach. 2. Then, for every reasoning types DECOMPRC leverages a single-hop reading comprehension model to answer each sub-question, and combines the answers according to the reasoning type. Figure 1 shows an example for which bridging produces ‘City of New 6099 Figure 1: The overall diagram of how our system works. Given the question, DECOMPRC decomposes the question via all possible reasoning types (Section 3.2). Then, each sub-question interacts with the off-the-shelf RC model and produces the answer (Section 3.3). Lastly, the decomposition scorer decides which answer will be the final answer (Section 3.4). Here, “City of New York”, obtained by bridging, is determined as a final answer. Type Bridging (47%) requires finding the first-hop evidence in order to find another, second-hop evidence. Q Which team does the player named 2015 Diamond Head Classics MVP play for? Q1 Which player named 2015 Diamond Head Classics MVP? Q2 Which team does ANS play for? Type Intersection (23%) requires finding an entity that satisfies two independent conditions. Q Stories USA starred ✓which actor and comedian ✓from ‘The Office’? Q1 Stories USA starred which actor and comedian? Q2 Which actor and comedian from ‘The Office’? Type Comparison (22%) requires comparing the property of two different entities. Q Who was born earlier, Emma Bull or Virginia Woolf? Q1 Emma Bull was born when? Q2 Virginia Woolf was born when? Q3 Which is smaller (Emma Bull, ANS) (Virgina Woolf, ANS) Table 2: The example multi-hop questions from each category of reasoning type on HOTPOTQA. Q indicates the original, multi-hop question, while Q1, Q2 and Q3 indicate sub-questions. DECOMPRC predicts span and ✓ through Pointerc, generates sub-questions, and answers them iteratively through single-hop RC model. York’ as an answer while intersection produces ‘Columbia University’ as an answer. Section 3.3 details the single-hop reading comprehension procedure. 3. Finally, DECOMPRC leverages a decomposition scorer to judge which decomposition is the most suitable, and outputs the answer from that decomposition as the final answer. In Figure 1, “City of New York”, obtained via bridging, is decided as the final answer. Section 3.4 details our rescoring step. We identify several reasoning types in multi-hop reading comprehension, which we use to decompose the original question and rescore the decompositions. These reasoning types are bridging, intersection and comparison. 
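A minimal sketch of this three-step control flow is given below; decompose, answer_subquestions, and score are hypothetical stand-ins for the components described in Sections 3.2, 3.3, and 3.4, so this is illustrative rather than the released implementation.

    # Hypothetical high-level sketch of the DECOMPRC pipeline described above.
    REASONING_TYPES = ["bridging", "intersection", "comparison", "original"]

    def decomp_rc(question, paragraphs, decompose, answer_subquestions, score):
        candidates = []
        for r_type in REASONING_TYPES:
            # Step 1: span-based decomposition for this reasoning type.
            sub_questions = decompose(question, r_type)
            # Step 2: answer sub-questions with the single-hop RC model and
            # combine the answers according to the reasoning type.
            answer, evidence = answer_subquestions(sub_questions, paragraphs, r_type)
            # Step 3: score the full decomposition (type, answer, evidence).
            candidates.append((score(question, r_type, answer, evidence), answer))
        # Return the answer of the top-scoring decomposition.
        return max(candidates)[1]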
Table 2 shows examples of each reasoning type. On a sample of 200 questions from the dev set of HOTPOTQA, we find that 92% of multi-hop questions belong to one of these types. Specifically, among 184 samples out of 200 which require multi-hop reasoning, 47% are bridging questions, 23% are intersection questions, 22% are comparison questions, and 8% do not belong to one of three types. In addition, these multi-hop reasoning types correspond to the types of compositional questions identified by Berant et al. (2013) and Talmor and Berant (2018). 3.2 Decomposition The goal of question decomposition is to convert a multi-hop question into simpler, single-hop subquestions. A key challenge of decomposition is that it is difficult to obtain annotations for how to decompose questions. Moreover, generating the question word-by-word is known to be a difficult task that requires substantial training data and is not straight-forward to evaluate (Gatt and Krahmer, 2018; Novikova et al., 2017). Instead, we propose a method to create subquestions using span prediction over the question. 6100 The key idea is that, in practice, each sub-question can be formed by copying and lightly editing a key span from the original question, with different span extraction and editing required for each reasoning type. For instance, the bridging question in Table 2 requires finding “the player named 2015 Diamond Head Classic MVP” which is easily extracted as a span. Similarly, the intersection question in Table 2 specifies the type of entity to find (“which actor and comedian”), with two conditions (“Stories USA starred” and “from “The Office””), all of which can be extracted. Comparison questions compare two entities using a discrete operation over some properties of the entities, e.g., “which is smaller”. When two entities are extracted as spans, the question can be converted into two sub-questions and one discrete operation over the answers of the sub-questions. Span Prediction for Sub-question Generation Our approach simplifies the sub-question generation problem into a span prediction problem that requires little supervision (400 annotations). The annotations are collected by mapping the question into several points that segment the question into spans (details in Section 4.2). We train a model Pointerc that learns to map a question into c points, which are subsequently used to compose sub-questions for each reasoning type through Algorithm 1. Pointerc is a function that points to c indices ind1, . . . , indc in an input sequence.1 Let S = [s1, . . . , sn] denote a sequence of n words in the input sequence. The model encodes S using BERT (Devlin et al., 2019): U = BERT(S) ∈Rn×h, (1) where h is the output dimension of the encoder. Let W ∈Rh×c denote a trainable parameter matrix. We compute a pointer score matrix Y = softmax(UW) ∈Rn×c, (2) where P(i = indj) = Yij denotes the probability that the ith word is the jth index produced by the pointer. The model extracts c indices that yield the highest joint probability at inference: ind1, . . . , indc = argmax i1≤···≤ic c Y j=1 P(ij = indj) 1c is a hyperparameter which differs in different reasoning types. 2Details for find op, form subq in Appendix B. 
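As an illustration of this inference step (a sketch written for exposition, not the released code), the ordered argmax over the c pointer positions can be computed exactly with a small dynamic program over the probability matrix Y from Equation (2):

    # Sketch of Pointer_c inference: pick c non-decreasing indices that
    # maximize the joint pointer probability, via a dynamic program over
    # log-probabilities.
    import numpy as np

    def pointer_inference(Y):
        """Y: (n, c) matrix, Y[i, j] = P(word i is the j-th index)."""
        n, c = Y.shape
        logY = np.log(Y + 1e-12)
        best = np.full((n, c), -np.inf)     # best[i, j]: best log-prob with pointer j at i
        back = np.zeros((n, c), dtype=int)  # back[i, j]: position of pointer j-1
        best[:, 0] = logY[:, 0]
        for j in range(1, c):
            prefix_best = np.maximum.accumulate(best[:, j - 1])   # best over positions <= i
            prefix_arg = np.zeros(n, dtype=int)
            for i in range(1, n):
                prefix_arg[i] = i if best[i, j - 1] > best[prefix_arg[i - 1], j - 1] \
                                  else prefix_arg[i - 1]
            best[:, j] = prefix_best + logY[:, j]
            back[:, j] = prefix_arg
        # Recover ind_1 <= ... <= ind_c by backtracking from the best last pointer.
        inds = [int(np.argmax(best[:, c - 1]))]
        for j in range(c - 1, 0, -1):
            inds.append(int(back[inds[-1], j]))
        return inds[::-1]

    # Toy usage: a 6-token question with c = 3 pointers.
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(6, 3))
    Y = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)  # softmax per pointer
    print(pointer_inference(Y))   # three non-decreasing token indices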
Algorithm 1 Sub-questions generation using Pointerc.2 procedure GENERATESUBQ(Q : question, Pointerc) /* Find qb 1 and qb 2 for Bridging */ ind1, ind2, ind3 ←Pointer3(Q) qb 1 ←Qind1:ind3 qb 2 ←Q:ind1 : ANS : Qind3: article in Qind2−5:ind2 ←‘which’ /* Find qi 1 and qi 2 for Intersecion */ ind1, ind2 ←Pointer2(Q) s1, s2, s3 ←Q:ind1, Qind1:ind2, Qind2: if s2 starts with wh-word then qi 1 ←s1 : s2, qi 2 ←s2 : s3 else qi 1 ←s1 : s2, qi 2 ←s1 : s3 /* Find qc 1, qc 2 and qc 3 for Comparison */ ind1, ind2, ind3, ind4 ←Pointer4(Q) ent1, ent2 ←Qind1:ind2, Qind3:ind4 op ←find op(Q, ent1, ent2) qc 1, qc 2 ←form subq(Q, ent1, ent2, op) qc 3 ←op (ent1, ANS) (ent2, ANS) 3.3 Single-hop Reading Comprehension Given a decomposition, we use a single-hop RC model to answer each sub-question. Specifically, the goal is to obtain the answer and the evidence, given the sub-question and N paragraphs. Here, the answer is a span from one of paragraphs, yes or no. The evidence is one of N paragraphs on which the answer is based. Any off-the-shelf RC model can be used. In this work, we use the BERT reading comprehension model (Devlin et al., 2019) combined with the paragraph selection approach from Clark and Gardner (2018) to handle multiple paragraphs. Given N paragraphs S1, . . . , SN, this approach independently computes answeri and ynone i from each paragraph Si, where answeri and ynone i denote the answer candidate from ith paragraph and the score indicating ith paragraph does not contain the answer. The final answer is selected from the paragraph with the lowest ynone i . Although this approach takes a set of multiple paragraphs as an input, it is not capable of jointly reasoning across different paragraphs. For each paragraph Si, let Ui ∈Rn×h be the BERT encoding of the sub-question concatenated with a paragraph Si, obtained by Equation 1. We compute four scores, yspan i yyes i , yno i and ynone i , indicating if the answer is a phrase in the paragraph, yes, no, or does not exist. [yspan i ; yyes i ; yno i ; ynone i ] = max(Ui)W1 ∈R4, where max denotes a max-pooling operation across the input sequence, and W1 ∈Rh×4 de6101 notes a parameter matrix. Additionally, the model computes spani, which is defined by its start and end points starti and endi. starti, endi = argmax j≤k Pi,start(j)Pi,end(k), where Pi,start(j) and Pi,end(k) indicate the probability that the jth word is the start and the kth word is the end of the answer span, respectively. Pi,start(j) and Pi,end(k) are obtained by the jth element of pstart i and the kth element of pend i from pstart i = softmax(UiWstart) ∈Rn (3) pend i = softmax(UiWend) ∈Rn (4) Here, Wstart, Wend ∈Rh are the parameter matrices. Finally, answeri is determined as one of spani, yes or no based on which of yspan i , yyes i and yno i is the highest. The model is trained using questions that only require single-hop reasoning, obtained from SQUAD (Rajpurkar et al., 2016) and easy examples of HOTPOTQA (Yang et al., 2018) (details in Section 4.2). Once trained, it is used as an offthe-shelf RC model and is never directly trained on multi-hop questions. 3.4 Decomposition Scorer Each decomposition consists of sub-questions, their answers, and evidence corresponding to a reasoning type. DECOMPRC scores decompositions and takes the answer of the top-scoring decomposition to be the final answer. The score indicates if a decomposition leads to a correct final answer to the multi-hop question. 
Let t be the reasoning type, and let answert and evidencet be the answer and the evidence from the reasoning type t. Let x denote a sequence of n words formed by the concatenation of the question, the reasoning type t, the answer answert, and the evidence evidencet. The decomposition scorer encodes this input x using BERT to obtain Ut ∈Rn×h similar to Equation (1). The score pt is computed as pt = sigmoid(W T 2 max(Ut)) ∈R, where W2 ∈Rh is a trainable matrix. During inference, the reasoning type is decided as argmaxt pt. The answer corresponding to this reasoning type is chosen as the final answer. Pipeline Approach. An alternative to the decomposition scorer is a pipeline approach, in which the reasoning type is determined in the beginning, before decomposing the question and obtaining the answers to sub-questions. Section 4.6 compares our scoring step with this approach to show the effectiveness of the decomposition scorer. Here, we briefly describe the model used for the pipeline approach. First, we form a sequence S of n words from the question and obtain ˜S ∈Rn×h from Equation 1. Then, we compute 4-dimensional vector pt by: pt = softmax(W3max( ˜S)) ∈R4 where W3 ∈Rh×4 is a parameter matrix. Each element of 4-dimensional vector pt indicates the reasoning type is bridging, intersection, comparison or original. 4 Experiments 4.1 HOTPOTQA We experiment on HOTPOTQA (Yang et al., 2018), a recently introduced multi-hop RC dataset over Wikipedia articles. There are two types of questions—bridge and comparison. Note that their categorization is based on the data collection and is different from our categorization (bridging, intersection and comparison) which is based on the required reasoning type. We evaluate our model on dev and test sets in two different settings, following prior work. Distractor setting contains the question and a collection of 10 paragraphs: 2 paragraphs are provided to crowd workers to write a multi-hop question, and 8 distractor paragraphs are collected separately via TF-IDF between the question and the paragraph. The train set contains easy, medium and hard examples, where easy examples are single-hop, and medium and hard examples are multi-hop. The dev and test sets are made up of only hard examples. Full wiki setting is an open-domain setting which contains the same questions as distractor setting but does not provide the collection of paragraphs. Following Chen et al. (2017), we retrieve 30 Wikipedia paragraphs based on TF-IDF similarity between the paragraph and the question (or subquestion). 6102 Distractor setting Full wiki setting All Bridge Comp Single Multi All Bridge Comp Single Multi DECOMPRC 70.57 72.53 62.78 84.31 58.74 43.26 40.30 55.04 52.11 35.64 1hop train 61.73 61.57 62.36 79.38 46.53 39.17 35.30 54.57 50.03 29.83 BERT 67.08 69.41 57.81 82.98 53.38 38.40 34.77 52.85 46.14 31.74 1hop train 56.27 62.77 30.40 87.21 29.64 29.97 32.15 21.29 47.14 15.18 BiDAF 58.28 59.09 55.05 34.36 30.42 50.70 Table 3: F1 scores on the dev set of HOTPOTQA in both distractor (left) and full wiki settings (right). We compare DECOMPRC (our model), BERT, and BiDAF, and variants of the models that are only trained on single-hop QA data (1hop train). Bridge and Comp indicate original splits in HOTPOTQA; Single and Multi refer to dev set splits that can be solved (or not) by all of three BERT models trained on single-hop QA data. 
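A schematic of this retrieval step is sketched below; the actual system uses DrQA's Document Retriever with hashed bigram TF-IDF, so the scikit-learn vectorizer here is only a stand-in, and the paragraph texts are assumed to be given as a list of strings.

    # Schematic TF-IDF paragraph retrieval for the full wiki setting.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def retrieve_paragraphs(question, paragraphs, k=30):
        vectorizer = TfidfVectorizer(ngram_range=(1, 2))          # unigram + bigram features
        matrix = vectorizer.fit_transform(paragraphs + [question])
        scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
        ranked = scores.argsort()[::-1][:k]                       # top-k most similar paragraphs
        return [paragraphs[i] for i in ranked]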
Model Dist F1 Open F1 DECOMPRC 69.63 40.65 Cognitive Graph 48.87 BERT Plus 69.76 MultiQA 40.23 DFGN+BERT 68.49 QFE 68.06 38.06 GRN 66.71 36.48 BiDAF 59.02 32.89 Table 4: F1 score on the test set of HOTPOTQA distractor and full wiki setting. All numbers from the official leaderboard. All models except BiDAF are concurrent work (not published). DECOMPRC achieves the best result out of models reported to both distractor and full wiki setting. 4.2 Implementations Details Training Pointer for Decomposition. We obtain a set of 200 annotations for bridging to train Pointer3, and another set of 200 annotations for intersection to train Pointer2, hence 400 in total. Each bridging question pairs with three points in the question, and each intersection question pairs with two points in the question. For comparison, we create training data in which each question pairs with four points (the start and end of the first entity and those of the second entity) to train Pointer4, requiring no extra annotation.3 Training Single-hop RC Model. We create single-hop QA data by combining HOTPOTQA easy examples and SQuAD (Rajpurkar et al., 2016) examples to form the training data for our single-hop RC model described in Section 3.3. To convert SQUAD to a multi-paragraph setting, we retrieve n other Wikipedia paragraphs based 3Details in Appendix B. on TF-IDF similarity between the question and the paragraph, using Document Retriever from DrQA (Chen et al., 2017). We train 3 instances with n = 0, 2, 4 for an ensemble, which we use as the single-hop model. To deal with ungrammatical questions generated through our decomposition procedure, we augment the training data with ungrammatical samples. Specifically, we add noise in the question by randomly dropping tokens with probability of 5%, and replace wh-word into ‘the’ with probability of 5%. Training Decomposition Scorer We create training data by making inferences for all reasoning types on HOTPOTQA medium and hard examples. We take the reasoning type that yields the correct answer as the gold reasoning type. Appendix C provides the full details. 4.3 Baseline Models We compare our system DECOMPRC with the state-of-the-art on the HOTPOTQA dataset as well as strong baselines. BiDAF is the state-of-the-art RC model on HOTPOTQA, originally from Seo et al. (2017) and implemented by Yang et al. (2018). BERT is a large, language-model-pretrained model, achieving the state-of-the-art results across many different NLP tasks (Devlin et al., 2019). This model is the same as our single-hop model described in Section 3.3, but trained on the entirety of HOTPOTQA. BERT–1hop train is the same model but trained on single-hop QA data without HOTPOTQA medium and hard examples. DECOMPRC–1hop train is a variant of DECOMPRC that does not use multi-hop QA data except 400 decomposition annotations. Since there 6103 Model F1 DECOMPRC 70.57 →59.07 DECOMPRC–1hop train 61.73 →58.30 BERT 67.08 →44.68 BERT–1hop train 56.27 →49.64 Model Orig F1 Inv F1 Joint F1 DECOMPRC 67.80 65.78 55.80 BERT 54.65 32.49 19.27 Table 5: Left: modifying distractor paragraphs. F1 score on the original dev set and the new dev set made up with a different set of distractor paragraphs. DECOMPRC is our model and DECOMPRC–1hop train is DECOMPRC trained on only single-hop QA data and 400 decomposition annotations. BERT and BERT–1hop train are the baseline models, trained on HOTPOTQA and single-hop data, respectively. Right: adversarial comparison questions. F1 score on a subset of binary comparison questions. 
Orig F1, Inv F1 and Joint F1 indicate F1 score on the original example, the inverted example and the joint of two (example-wise minimum of two), respectively. is no access to the groundtruth answers of multihop questions, a decomposition scorer cannot be trained. Therefore, a final answer is obtained based on the confidence score from the single-hop RC model, without a rescoring procedure. 4.4 Results Table 3 compares the results of DECOMPRC with other baselines on the HOTPOTQA development set. We observe that DECOMPRC outperforms all baselines in both distractor and full wiki settings, outperforming the previous published result by a large margin. An interesting observation is that DECOMPRC not trained on multi-hop QA pairs (DECOMPRC–1hop train) shows reasonable performance across all data splits. We also observe that BERT trained on singlehop RC achieves a high F1 score, even though it does not draw inferences across different paragraphs. For further analysis, we split the HOTPOTQA development set into single-hop solvable (Single) and single-hop non-solvable (Multi).4 We observe that DECOMPRC outperforms BERT by a large margin in single-hop non-solvable (Multi) examples. This supports our attempt toward more explainable methods for answering multihop questions. Finally, Table 4 shows the F1 score on the test set for distractor setting and full wiki setting on the leaderboard.5 These include unpublished models that are concurrent to our work. DECOMPRC achieves the best result out of models that report both distractor and full wiki setting. 4We consider an example to be solvable if all of three models of the BERT–1hop train ensemble obtains nonnegative F1. This leads to 3426 single-hop solvable and 3979 single-hop non-solvable examples out of 7405 development examples, respectively. 5Retrieved on March 4th 2019 from https://https: //hotpotqa.github.io 4.5 Evaluating Robustness In order to evaluate the robustness of different methods to changes in the data distribution, we set up two adversarial settings in which the trained model remains the same but the evaluation dataset is different. Modifying Distractor Paragraphs. We collect a new set of distractor paragraphs to evaluate if the models are robust to the change in distractors.6 In particular, we follow the same strategy as the original approach (Yang et al., 2018) using TF-IDF similarity between the question and the paragraph, but with no overlapping distractor paragraph with the original distractor paragraphs. Table 5 compares the F1 score of DECOMPRC and BERT in the original distractor setting and in the modified distractor setting. As expected, the performance of both methods degrade, but DECOMPRC is more robust to the change in distractors. Namely, DECOMPRC–1hop train degrades much less (only 3.41 F1) compared to other approaches because it is only trained on single-hop data and therefore does not exploit the data distribution. These results confirm our hypothesis that the end-to-end model is sensitive to the change of the data and our model is more robust. Adversarial Comparison Questions. We create an adversarial set of comparison questions by altering the original question so that the correct answer is inverted. For example, we change “Who was born earlier, Emma Bull or Virginia Woolf?” to “Who was born later, Emma Bull or Virginia Woolf?” We automatically invert 665 questions (details in Appendix D). We report the joint F1, taken as the minimum of the prediction F1 on the original and the inverted examples. 
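The evaluation just described can be summarized in a short sketch (ours; SQuAD-style answer normalization such as article and punctuation stripping is omitted for brevity):

    # Token-level answer F1, with the joint score for each question taken as
    # the example-wise minimum over the original and the inverted version.
    from collections import Counter

    def answer_f1(prediction, gold):
        pred_tokens, gold_tokens = prediction.lower().split(), gold.lower().split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    def joint_f1(pred_orig, gold_orig, pred_inv, gold_inv):
        return min(answer_f1(pred_orig, gold_orig), answer_f1(pred_inv, gold_inv))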
Table 5 shows 6We choose 8 distractor paragraphs that do not to change the groundtruth answer. 6104 Question Robert Smith founded the multinational company headquartered in what city? Span-based Q1: Robert Smith founded which multinational company? Q2: ANS headquartered in what city? Free-form Q1: Which multinational company was founded by Robert Smith? Q2: Which city contains a headquarter of ANS? Table 6: An example of the original question, span-based human-annotated sub-questions and free-form humanauthored sub-questions. Sub-questions F1 Span (Pointerc trained on 200) 65.44 Span (Pointerc trained on 400) 69.44 Span (human) 70.41 Free-form (human) 70.76 Decomposition decision method F1 Confidence-based 61.73 Pipeline 63.59 Decomposition scorer (DECOMPRC) 70.57 Oracle 76.75 Table 7: Left: ablations in sub-questions. F1 score on a sample of 50 bridging questions from the dev set of HOTPOTQA, Pointerc is our span-based model trained with 200 or 400 annotations. Right: ablations in decomposition decision method. F1 score on the dev set of HOTPOTQA with ablating decomposition decision method. Oracle indicates that the ground truth reasoning type is selected. the joint F1 score of DECOMPRC and BERT. We find that DECOMPRC is robust to inverted questions, and outperforms BERT by 36.53 F1. 4.6 Ablations Span-based vs. Free-form sub-questions. We evaluate the quality of generated sub-questions using span-based question decomposition. We replace the question decomposition component using Pointer3 with (i) sub-question decomposition through groundtruth spans, (ii) sub-question decomposition with free-form, hand-written subquestions (examples shown in Table 6). Table 7 (left) compares the question answering performance of DECOMPRC when replaced with alternative sub-questions on a sample of 50 bridging questions.7 There is little difference in model performance between span-based and sub-questions written by human. This indicates that our span-based sub-questions are as effective as free-form sub-questions. In addition, Pointer3 trained on 200 or 400 examples obtains close to human performance. We think that identifying spans often rely on syntactic information of the question, which BERT has likely learned from language modeling. We use the model trained on 200 examples for DECOMPRC to demonstrate sample-efficiency, and expect performance improvement with more annotations. Ablations in decomposition decision method. Table 7 (right) compares different ablations to evaluate the effect of the decomposition scorer. 7A full set of samples is shown in Appendix E. Breakdown of 15 failure cases Incorrect groundtruth 1 Partial match with the groundtruth 3 Mistake from human 3 Confusing question 1 Sub-question requires cross-paragraph reasoning 2 Decomposed sub-questions miss some information 2 Answer to the first sub-question can be multiple 3 Table 8: The error analyses of human experiment, where the upperbound F1 score of span-based subquestions with no decomposition scorer is measured. For comparison, we report the F1 score of the confidence-based method which chooses the decomposition with the maximum confidence score from the single-hop RC model, and the pipeline approach which independently selects the reasoning type as described in Section 3.4. In addition, we report an oracle which takes the maximum F1 score across different reasoning types to provide an upperbound. A pipeline method gets lower F1 score than the decomposition scorer. 
This suggests that using more context from decomposition (e.g., the answer and the evidence) helps avoid cascading errors from the pipeline. Moreover, a gap between DECOMPRC and oracle (6.2 F1) indicates that there is still room to improve. Upperbound of Span-based Sub-questions without a decomposition scorer. To measure an upperbound of span-based sub-questions without a decomposition scorer, where a human-level RC model is assumed, we conduct a human experiment on a sample of 50 bridging ques6105 Q What country is the Selun located in? P1 Selun lies between the valley of Toggenburg and Lake Walenstadt in the canton of St. Gallen. P2 The canton of St. Gallen is a canton of Switzerland. Q Which pizza chain has locations in more cities, Round Table Pizza or Marion’s Piazza? P1 Round Table Pizza is a large chain of pizza parlors in the western United States. P2 Marion’s Piazza ... the company currently operates 9 restaurants throughout the greater Dayton area. Q1 Round Table Pizza has locations in how many cities? Q2 Marion ’s Piazza has locations in how many cities? Q Which magazine had more previous names, Watercolor Artist or The General? P1 Watercolor Artist, formerly Watercolor Magic, is an American bi-monthly magazine that focuses on ... P2 The General (magazine): Over the years the magazine was variously called ‘The Avalon Hill General’, ‘Avalon Hill’s General’, ‘The General Magazine’, or simply ‘General’. Q1 Watercolor Artist had how many previous names? Q2 The General had how many previous names? Table 9: The failure cases of DECOMPRC, where Q, P1 and P2 indicate the given question and paragraphs, and Q1 and Q2 indicate sub-questions from DECOMPRC. (Top) The required multi-hop reasoning is implicit, and the question cannot be decomposed. (Middle) DECOMPRC decomposes the question well but fails to answer the first sub-question because there is no explicit answer. (Bottom) DECOMPRC is incapable of counting. tions.8 In this experiment, humans are given each sub-question from decomposition annotations and are asked to answer it without an access to the original, multi-hop question. They are asked to answer each sub-question with no cross-paragraph reasoning, and mark it as a failure case if it is impossible. The resulting F1 score, calculated by replacing RC model to humans, is 72.67 F1. Table 8 reports the breakdown of fifteen error cases. 53% of such cases are due to the incorrect groundtruth, partial match with the groundtruth or mistake from humans. 47% are genuine failures in the decomposition. For example, a multi-hop question “Which animal races annually for a national title as part of a post-season NCAA Division I Football Bowl Subdivision college football game?” corresponds to the last category in Table 8. The question can be decomposed into “Which post-season NCAA Division I Football Bowl Subdivision college football game?” and “Which animal races annually for a national title as part of ANS?”. However in the given set of paragraphs, there are multiple games that can be the answer to the first sub-question. Although only one of them is held with the animal racing, it is impossible to get the correct answer only given the first subquestion. We think that incorporating the original question along with the sub-questions can be one solution to address this problem, which is partially done by a decomposition scorer in DECOMPRC. Limitations. We show the overall limitations of DECOMPRC in Table 9. 
First, some questions are not compositional but require implicit multihop reasoning, hence cannot be decomposed. Sec8A full set of samples is shown in Appendix E. ond, there are questions that can be decomposed but the answer for each sub-question does not exist explicitly in the text, and must instead by inferred with commonsense reasoning. Lastly, the required reasoning is sometimes beyond our reasoning types (e.g. counting or calculation). Addressing these remaining problems is a promising area for future work. 5 Conclusion We proposed DECOMPRC, a system for multihop RC that decomposes a multi-hop question into simpler, single-hop sub-questions. We recasted sub-question generation as a span prediction problem, allowing the model to be trained on 400 labeled examples to generate high quality sub-questions. Moreover, DECOMPRC achieved further gains from the decomposition scoring step. DECOMPRC achieved the state-of-the-art on HOTPOTQA distractor setting and full wiki setting, while providing explainable evidence for its decision making in the form of sub-questions and being more robust to adversarial settings than strong baselines. Acknowledgments This research was supported by ONR (N0001418-1-2826, N00014-17-S-B001), NSF (IIS 1616112, IIS 1252835, IIS 1562364), ARO (W911NF-16-1-0121), an Allen Distinguished Investigator Award, Samsung GRO and gifts from Allen Institute for AI, Google, and Amazon. We thank the anonymous reviewers and UW NLP members for their thoughtful comments and discussions. 6106 References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2019. Question answering by reasoning across documents with graph convolutional networks. In NAACL. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In ACL. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In NAACL. Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Artificial Intelligence Research. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In ACL. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In ACL. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In ACL. Jekaterina Novikova, Ondej Duek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In EMNLP. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. 
Automatic differentiation in PyTorch. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In NAACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2017. Constructing Datasets for Multi-hop Reading Comprehension Across Documents. In TACL. Caiming Xiong, Victor Zhong, and Richard Socher. 2018. DCN+: Mixed objective and deep residual coattention for question answering. In ICLR. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP. Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and accurate reading comprehension by combining self-attention and convolution. In ICLR. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In AAAI/IAAI. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI. Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. In ICLR. 6107 A Span Annotation Figure 2: Annotation procedure. Top four figures show annotation for bridging question. Bottom three figures show annotation for intersection question. In this section, we describe span annotation collection procedure for bridging and intersection questions. The goal is to collect three points (bridging) or two points (intersection) given a multi-hop question. We design an interface to annotate span over the question by clicking the word in the question. First, given a question, the annotator is asked to identify which reasoning type out of bridging, intersection, one-hop and neither is the most proper.9 Since bridging type is the most common, bridging is checked by default. If the question type is bridging, the annotator is asked to make three clicks for the start of the span, the end of 9Note that we exclude comparison questions for annotations, since comparison questions are already labeled on HOTPOTQA. the span, and the head-word (top four examples in Figure 2). After three clicks are all made, the annotator can see the heuristically generated subquestions. If the question type is intersection, the annotator is asked to make two clicks for the start and the end of the second segment out of three segments (bottom three examples in Figure 2). Similarly, the annotator can see the heuristically generated sub-questions after two clicks. If the question type is one-hop or neither, the annotator does not have to make any click. If the question can be decomposed into more than one way, the annotator is asked to choose the more natural decomposition. 
If the question is ambiguous, the annotator is asked to pass the example, and only annotate for the clear cases. For the quality control, all annotators have enough in person, one-on-one tutorial sessions and are given 100 example annotations for the reference. B Decompotision for Comparison In this section, we describe the decomposition procedure for comparison, which does not require any extra annotation. Comparison requires to compare a property of two different entities, usually requiring discrete operations. We identify 10 discrete operations which sufficently cover comparison operations, shown in Table 10. Based on these pre-defined discrete operations, we decompose the question through the following three steps. First, we extract two entities under comparison. We use Pointer4 to obtain ind1, . . . , ind4, where ind1 and ind2 indicate the start and the end of the first entity, and ind3 and ind4 indicate those of the second entity. We create a training data which each example contains the question and four points as follows: we filter out bridge questions in HOTPOTQA to leave comparison questions, extract the entities using Spacy10 NER tagger in the question and in two supporting facts (annotated sentences in the dataset which serve as evidence to answer the question), and match them to find two entities which appear in one supporting sentence but not in the other supporting sentence. Then, we identity the suitable discrete operation, following Algorithm 2. Finally, we generate sub-questions according to the discrete operation. Two sub-questions are obtained for each entity. 10https://spacy.io/ 6108 Operation & Example Type: Numeric Is greater (ANS) (ANS) →yes or no Is smaller (ANS) (ANS) →yes or no Which is greater (ENT, ANS) (ENT, ANS) →ENT Which is smaller (ENT, ANS) (ENT, ANS) →ENT Did the Battle of Stones River occur before the Battle of Saipan? Q1: The Battle of Stones River occur when? →1862 Q2: The Battle of Saipan River occur when? →1944 Q3: Is smaller (the Battle of Stones River, 1862) (the Battle of Saipan, 1944) →yes Type: Logical And (ANS) (ANS) →yes or no Or (ANS) (ANS) →yes or no Which is true (ENT, ANS) (ENT, ANS) →ENT In between Atsushi Ogata and Ralpha Smart who graduated from Harvard College? Q1: Atsushi Ogata graduated from Harvard College? →yes Q2: Ralpha Smart graduated from Harvard College? →no Q3: Which is true (Atsushi Ogata, yes) (Ralpha Smart, no) →Atsushi Ogata Type: String Is equal (ANS) (ANS) →yes or no Not equal (ANS) (ANS) →yes or no Intersection (ANS) (ANS) →string Are Cardinal Health and Kansas City Southern located in the same state? Q1: Cardinal Health located in which state? →Ohio Q2: Cardinal Health located in which state? →Missouri Q3: Is equal (Ohio) (Missouri) →no Table 10: A set of discrete operations proposed for comparison questions, along with the example on each type. ANS is the answer of each query, and ENT is the entity corresponding to each query. The answer of each query is shown in the right side of →. If the question and two entities for comparison are given, queries and a discrete operation can be obtained by heuristics. C Implementation Details Implementation Details. We use PyTorch (Paszke et al., 2017) on top of Hugging Face’s BERT implementation.11 We tune our model from Google’s pretrained BERTBASE (lowercased)12, containing 12 layers of Transformers (Vaswani et al., 2017) and a hidden dimension of 768. We optimize the objective function using Adam (Kingma and Ba, 2015) with learning rate 5 × 10−5. 
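A simplified sketch of the operation-selection heuristic is given below; the keyword lists follow Algorithm 2, while the head-entity check is a placeholder and the logical and string branches are omitted:

    # Keyword-based selection of the discrete comparison operation.
    import re

    GREATER = {"more", "most", "later", "last", "latest", "longer", "larger",
               "younger", "newer", "taller", "higher"}
    SMALLER = {"less", "earlier", "earliest", "first", "shorter", "smaller",
               "older", "closer"}

    def find_operation(question, has_head_entity):
        tokens = set(re.findall(r"[a-z]+", question.lower()))
        if tokens & GREATER:
            return "Which is greater" if has_head_entity else "Is greater"
        if tokens & SMALLER:
            return "Which is smaller" if has_head_entity else "Is smaller"
        if has_head_entity:
            return "Which is true"
        return None   # logical / string comparisons, not covered in this sketch

    # e.g. find_operation("Who was born earlier, Emma Bull or Virginia Woolf?", True)
    # returns "Which is smaller", matching the comparison example in Table 2.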
We lowercase the input and set the maximum sequence length |S| to 300 for models which input is both the question and the paragraph, and 50 for the models which input is the question only. D Creating Inverted Binary Comparison Questions We identify the comparison question with 7 out of 10 discrete operations (Is greater, Is smaller, 11https://github.com/huggingface/ pytorch-pretrained-BERT 12https://github.com/google-research/ bert Which is greater, Which is smaller, Which is true, Is equal, Not equal) can automatically be inverted. It leads to 665 inverted questions. E A Set of Samples used for Ablations A set of samples used for ablations in Section 4.6 is shown in Table 11. 6109 Algorithm 2 Algorithm for Identifying Discrete Operation. First, given two entities for comparison, the coordination and the preconjunct or the predeterminer are identified. Then, the quantitative indicator and the head entity is identified if they exist, where a set of uantitative indicators is pre-defined. In case any quantitative indicator exists, the discrete operation is determined as one of numeric operations. If there is no quantitative indicator, the discrete operation is determined as one of logical operations or string operations. procedure FIND OPERATION(question, entity1, entity2) coordination, preconjunct ←f(question, entity1, entity2) Determine if the question is either question or both question from coordination and preconjunct head entity ←fhead(question, entity1, entity2) if more, most, later, last, latest, longer, larger, younger, newer, taller, higher in question then if head entity exists then discrete operation ←Which is greater else discrete operation ←Is greater else if less, earlier, earliest, first, shorter, smaller, older, closer in question then if head entity exists then discrete operation ←Which is smaller else discrete operation ←Is smaller else if head entity exists then discrete operation ←Which is true else if question is not yes/no question and asks for the property in common then discrete operation ←Intersection else if question is yes/no question then Determine if question asks for logical comparison or string comparison if question asks for logical comparison then if either question then discrete operation ←Or else if both question then discrete operation ←And else if question asks for string comparison then if asks for same? then discrete operation ←Is equal else if asks for difference? 
then discrete operation ←Not equal return discrete operation 5abce73055429959677d6b34,5a80071f5542992bc0c4a684,5a840a9e5542992ef85e2397,5a7e02cf5542997cc2c474f4,5ac1c9a15542994ab5c67e1c 5a81ea115542995ce29dcc78,5ae7308d5542991e8301cbb8,5ae527945542993aec5ec167,5ae748d1554299572ea547b0,5a71148b5542994082a3e567 5ae531695542990ba0bbb1fb,5a8f5273554299458435d5b1,5ac2db67554299657fa290a6,5ae0c7e755429945ae95944c,5a7150c75542994082a3e7be 5abffc0d5542990832d3a1e2,5a721bbc55429971e9dc9279,5ab57fc4554299488d4d99c0,5abbda84554299642a094b5b,5ae7936d5542997ec27276a7 5ab2d3df554299194fa9352c,5ac279345542990b17b153b0,5ab8179f5542990e739ec817,5ae20cd25542997283cd2376,5ae67def5542991bbc9760f3 5a901b985542995651fb50b0,5a808cbd5542996402f6a54b,5a84574455429933447460e6,5ab9b1fd5542996be202058e,5a7f1ad155429934daa2fce2 5ade03da5542997dc7907120,5a809fe75542996402f6a5ba,5ae28058554299495565da90,5abd09585542996e802b469b,5a7f9cbd5542994857a7677c 5a7b4073554299042af8f733,5ac119335542992a796dede4,5a7e1a2955429965cec5ea5d,5a8febb555429916514e73e4,5a87184a5542991e771816c5 5a86681c5542991e77181644,5abba584554299642a094afa,5add39e75542997545bbbcc4,5a7f354b5542992e7d278c8c,5a89810655429946c8d6e929 5a78c7db55429974737f7882,5a8d0c1b5542994ba4e3dbb3,5a87e5345542993e715abffb,5ae736cb5542991bbc9761c2,5ae057fd55429945ae959328 Table 11: Question IDs from a set of samples used for ablations in Section 4.6.
2019
613
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6110–6119 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6110 Combining Knowledge Hunting and Neural Language Models to Solve the Winograd Schema Challenge Ashok Prakash, Arpit Sharma, Arindam Mitra, Chitta Baral Arizona State University Tempe, USA {apraka23,asharm73,amitra7,chitta}@asu.edu Abstract Winograd Schema Challenge (WSC) is a pronoun resolution task which seems to require reasoning with commonsense knowledge. The needed knowledge is not present in the given text. Automatic extraction of the needed knowledge is a bottleneck in solving the challenge. The existing state-of-the-art approach uses the knowledge embedded in their pretrained language model. However, the language models only embed part of the knowledge, the ones related to frequently co-existing concepts. This limits the performance of such models on the WSC problems. In this work, we build-up on the language model based methods and augment them with a commonsense knowledge hunting (using automatic extraction from text) module and an explicit reasoning module. Our end-to-end system built in such a manner improves on the accuracy of two of the available language model based approaches by 5.53% and 7.7% respectively. Overall our system achieves the state-of-theart accuracy of 71.06% on the WSC dataset, an improvement of 7.36% over the previous best. 1 Introduction Reasoning with commonsense knowledge is an integral component of human behavior. It is due to this capability that people know that they should dodge a stone that is thrown towards them. It has been a long standing goal of the Artificial Intelligence community to simulate such commonsense reasoning abilities in machines. Over the years, many advances have been made and various challenges have been proposed to test their abilities (Clark et al., 2018; Mihaylov et al., 2018; Mishra et al., 2018). The Winograd Schema Challenge (WSC) (Levesque et al., 2011) is one such natural language understanding challenge. It is made up of pronoun resolution problems of a particular kind. The main part of each WSC problem is a set of sentences containing a pronoun. In addition, two definite noun phrases, called “answer choices” are also given. The answer choices are part of the input set of sentences. The goal is to determine which answer provides the most natural resolution for the pronoun. Below is an example problem from the WSC. Sentences (S1): The fish ate the worm. It was tasty. Pronoun to resolve: It Answer Choices: a) fish b) worm A WSC problem also specifies a “special word” that occurs in the sentences, and an “alternate word.” Replacing the former by the latter changes the resolution of the pronoun. In the example above, the special word is tasty and the alternate word is hungry. The resolution of the pronoun is difficult because the commonsense knowledge that is required to perform the resolution is not explicitly present in the input text. The above example requires the commonsense knowledge that ‘something that is eaten may be tasty’. There have been attempts (Sharma et al., 2015b; Emami et al., 2018a) to extract such knowledge from text repositories. Those approaches find the sentences which are similar to the sentences in a WSC problem but without the co-reference ambiguity. For example a sentence (which contains knowledge without ambiguity) corresponding to the above WSC problem is ‘John ate a tasty apple’. 
Such an approach to extract and use sentences which contain evidence for co-reference resolution is termed as Knowledge Hunting (Sharma et al., 2015b; Emami et al., 2018b). There are two main modules in the knowledge hunting approach, namely a knowledge extraction module and a reasoning module. To be able to use the extracted knowledge, the reasoning module puts several restrictions on the structure of the knowledge. If the knowledge extraction module could not find any knowledge pertain6111 ing to those restrictions, the extracted knowledge would probably be of no use. Sometimes the needed knowledge are embedded in the pre-trained language models. Let us consider the WSC example mentioned below. S2: The painting in Mark’s living room shows an oak tree. It is to the right of a house. Pronoun to resolve: It Answer Choices: a) painting b) tree Here, the knowledge that ‘a tree is to the right of a house’ is more likely than ‘a painting is to the right of a house’ is needed. With recent developments in neural network architectures for language modeling, it is evident that they are able to capture such knowledge by predicting that ‘a tree is to the right of a house’ is a more probable phrase than ‘a painting is to the right of a house’. This is because language models are trained on huge amounts of text and they are able to learn the frequently co-occurring concepts from that text. Although the knowledge from language models is helpful in many examples, it is not suitable for several others. For example, the language models in (Trinh and Le, 2018) predict that ‘fish is tasty’ is a more probable than ‘worm is tasty’. This is because the words ‘fish’ and ‘tasty’ occur in the same context more often than the words ‘worm’ and ‘tasty’. So, considering the benefits and limitations of the above mentioned approaches, in this work, we combine the knowledge hunting and neural language models to solve the Winograd Schema Challenge (WSC). The main contribution of this work is to tackle the WSC by: • developing and utilizing an automated knowledge hunting approach to extract the needed knowledge and reason with it without relying on a strict formal representation, • utilizing the knowledge that is embedded in the language models, and • combining the knowledge extracted from knowledge hunting and the knowledge in language models. As a result, our approach improves on the existing state-of-the-art accuracy by 7.36% and solves 71.06% of the WSC problems correctly. 2 Related Work The Winograd Schema Challenge is a co-reference resolution problem. The problem of co-reference resolution has received large amount of attention in the field of Natural Language Processing (Raghunathan et al., 2010; Carbonell and Brown, 1988; Ng, 2017). However the requirement to use commonsense knowledge makes the Winograd Schema Challenge hard and the other approaches that are trained on their respective corpora do not perform well in the Winograd Schema problems. The Winograd Schema Challenge was first proposed in 2011 and since then various works have been proposed to address it. These approaches can be broadly categorized into two types: 1. The approaches which use explicit commonsense knowledge and reasoning with the knowledge. Such approaches can further be divided into two types. 
(a) The approaches which provide a reasoning theory (Bailey et al., 2015; Sch¨uller, 2014; Sharma et al., 2015b) with respect to a few specific types of commonsense knowledge and takes question specific knowledge while solving a Winograd Schema problem. One of the major shortcomings of such approaches is that they work only for the specific knowledge types and hence their coverage is restricted. Another shortcoming of such approaches is that they rely on strict formal representations of natural language text. The automatic development of such representations boils down to the well known complex problem of translating a natural language text into its formal meaning representation. Among these works, only the work of (Sharma et al., 2015b) accepts natural language knowledge sentences which it automatically converts into their required graphical representation (Sharma et al., 2015a). The remaining two (Bailey et al., 2015; Sch¨uller, 2014) requires the knowledge to be provided in a logical form. (b) These approaches (Isaak and Michael, 2016) also answer a Winograd Schema problem with formal reasoning but use an existing knowledge base of facts and first-order rules to do that. 6112 2. These approaches (Liu et al., 2017; Trinh and Le, 2018) utilize the recent advancement in the field of neural networks, particularly the benefits of word embedding and neural language model. The work of (Liu et al., 2017) uses ConceptNet and raw texts to train word embeddings which they later use to solve a Winograd Schema problem by a simple inference algorithm. The work of (Trinh and Le, 2018) on the other hand uses majority voting from several language models to resolve the co-reference. In layman terms, the system in (Trinh and Le, 2018) replaces the pronoun with the two answer choices to obtain two different sentences and then use the language models to find out which of the two replacement is more probable. 3 Our Method In this section we first explain how our knowledge hunting approach and the neural language models are used to generate respective intermediate results. Then we explain the details of a Probabilistic Soft Logic (PSL) module which combines the intermediate results and predicts the confidence for each of the answer choices in a WSC example. 3.1 Knowledge Hunting Approach There are two main modules in the Knowledge Hunting approach. The first module extracts a set of sentences corresponding to a WSC problem such that the extracted sentences may contain the needed commonsense knowledge. We call such a set of sentences, a knowledge text. The second module uses a knowledge text and generates a correspondence between the answer choices and the pronoun in a WSC text, and the entities in a knowledge text. We call such a correspondence as entity alignment. Such an entity alignment is an intermediate result from the knowledge hunting module. In the following we provide the details of knowledge text extraction and entity alignment modules. 3.1.1 Knowledge Extraction The goal of the knowledge extraction module is to automatically extract a set of knowledge texts for a given WSC problem. Ideally, a knowledge text should be able to justify the answer of the associated WSC problem. In this vein, we aim to extract the texts that depict a scenario that is similar to that of the associated WSC problem. We roughly characterize a WSC scenario in terms of the events (verb phrases) and the properties of the entities that are associated with the scenario. 
The characterization of a scenario optionally includes the discourse connectives between the events and properties of the scenario. For example, in the WSC sentence “The city councilmen refused the demonstrators a permit because they feared violence .”, the scenario is mainly characterized by the verb phrases “refused” and “feared”, and the discourse connective “because”. In this work, we use this abstract notion of a scenario to extract knowledge texts which depict similar scenarios. The following are the steps in the extraction module. 1. First, the module identifies the verb phrases, properties and discourse connectives in a given WSC scenario. For example the oneword verb phrases “refused” and “feared”, and the discourse connective “because” in the example mentioned above. 2. Secondly, the module automatically generates a set of search queries by using the keywords extracted in the previous step. The first query in the set is an ordered combination (as per the WSC sentence) of the keywords extracted in the previous step. For example the query “* refused * because * feared * ” is the first query for the problem mentioned above. Afterwards the following set of modifications are performed with respect to the first query and the results are added to the set of queries. • The verb phrases are converted to their base form. For example, “ * refuse * because * fear * ”. • The discourse connectives are omitted. For example, “* refuse * fear * ”. • The verbs in verb phrases and the adjectives are replaced with their synonyms from the WordNet KB (Miller, 1995). The top five synonyms from the top synset of the same part of speech are considered. An example query generated after this step is “* decline * because * fear * ”. 3. Thirdly, the module uses the generated queries to search and extract text snippets, of length up to 30 words, from a search engine. The top 10 results (urls) from the search engine are retrieved for each query and text 6113 snippets from those results are scraped. Out of the extracted texts, the 10 text snippets which are most similar to the WSC text are filtered and passed to the alignment module. We used a natural language inference model (Parikh et al., 2016) to find the most similar sentences. Since we also do not want to extract the snippets which contain the corresponding WSC sentences (because of ambiguity), this module removes the results with WSC sentences in them. We filtered out the knowledge texts which contained 80% or more words from the sentences in any of the WSC problems. An example knowledge text extracted by using the query “ * refused * because * feared * ” via the steps mentioned above is, “He also refused to give his full name because he feared for his safety.” 3.1.2 Entity Alignment A total of up to 10 knowledge texts are extracted with respect to each WSC problem. Each of them is processed individually along with the WSC problem to produce a corresponding intermediate result from the knowledge hunting module. Let W = ⟨S, A1, A2, P, K⟩be a modified WSC problem such that S be a set of WSC sentences, A1 and A2 be the answer choices one and two respectively, P be the pronoun to be resolved, and K be a knowledge text. The existing solvers (Sharma et al., 2015b) that use explicit knowledge to solve a WSC problem of the form W first convert K and S into a logical form and then use a set of axioms to compute the answer. However, it is a daunting task to convert free form text into a logical representation. Thus these methods often produce low recall. 
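The query-generation steps above can be illustrated with a short sketch. The snippet below assumes spaCy for verb identification and lemmatization and NLTK's WordNet interface for synonym expansion; these are illustrative choices under stated assumptions, not necessarily the toolchain used by the actual system.

# Hypothetical sketch of the search-query generation in Section 3.1.1.
import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")
CONNECTIVES = {"because", "but", "so", "although", "since"}


def make_queries(wsc_sentence, max_synonyms=5):
    doc = nlp(wsc_sentence)
    verbs = [t for t in doc if t.pos_ == "VERB"]
    verb_ids = {t.i for t in verbs}
    keywords = [t for t in doc
                if t.i in verb_ids or t.text.lower() in CONNECTIVES]

    surface = [t.text.lower() for t in keywords]                       # original keyword order
    base = [t.lemma_.lower() if t.i in verb_ids else t.text.lower()
            for t in keywords]                                         # verbs in base form
    no_conn = [t.lemma_.lower() for t in keywords if t.i in verb_ids]  # connectives dropped
    queries = ["* " + " * ".join(words) + " *"
               for words in (surface, base, no_conn)]

    # Synonym variants: swap each verb for WordNet synonyms from its top verb synset.
    for v in verbs:
        for syn in wn.synsets(v.lemma_, pos=wn.VERB)[:1]:
            for name in syn.lemma_names()[:max_synonyms]:
                swapped = [name.replace("_", " ") if t.i == v.i
                           else (t.lemma_.lower() if t.i in verb_ids else t.text.lower())
                           for t in keywords]
                queries.append("* " + " * ".join(swapped) + " *")
    return list(dict.fromkeys(queries))    # de-duplicate while keeping order

# Example:
# make_queries("The city councilmen refused the demonstrators a permit "
#              "because they feared violence.")
# -> ["* refused * because * feared *", "* refuse * because * fear *", ...]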
In this work, we take a detour from this approach and aim to build an “alignment” function. Informally, the task of the alignment function is to align the answer choices (A1 and A2) and the pronoun to be resolved (P) in S with the corresponding entities (noun/pronoun phrases) in K. These alignments are the intermediate results of the knowledge hunting module. By the choice of knowledge extraction approach, the knowledge texts are similar to the WSC sentences in terms of events, i.e., they contain similar verb phrases, properties and discourse connectives. So, in an ideal situation we will have entities in K corresponding to each one of the concerned entities (A1, A2 and P) in W respectively. The goal of the alignment algorithm is to find that mapping. The mapping result is generated in the form of a aligned with predicate of arity three. The first argument represents an entity (an answer choice or the pronoun) from S, the second argument represents an entity from K and the third argument is an identifier of the knowledge text used. We define an entity (noun phrase) Ej from a knowledge text K to be aligned with to an entity Aj from a WSC text S if the following holds: 1. There exists a verb v in S and v′ in K such that either v = v′ or v is a synonym of v′. 2. The “semantic role” of Aj with respect to v is same as the “semantic role” of Ej with respect to v′. We use the semantic role labelling function, called QASRL (He et al., 2015) to compute the semantic roles of each entity. QASRL represents the semantic roles of an entity, in terms of question-answer pairs. Figure 1 shows the QASRL representation of the knowledge text “He also refused to give his full name because he feared for his safety.” It involves three verbs “refused”, “feared” and “give”. The questions represent the roles of the participating entities. An example alignment generated for the WSC sentence, S = “The city councilmen refused the demonstrators a permit because they feared violence.” and the knowledge text, K = “He also refused to give his full name because he feared for his safety.” is, aligned with(city councilmen,He,K) aligned with(they,he,K) There are three relevant entities in an input WSC problem, i.e., A1, A2 and P. Based on the existence of the entities corresponding to the entities in the WSC problem there are 28 possible cases. For example, the case {True True True}, abbreviated as {TTT}, represents that each of the entities A1, A2 and P are aligned with corresponding entities in a knowledge text. The intuition behind the alignment approach is to find a common entity in a knowledge text such that it aligns with one of the answer choices (say Ai) and also with the pronoun to be resolved (P). 6114 Figure 1: QASRL output for the sentence “He also refused to give his full name because he feared for his safety.” Case Details Example TTT Each entity (among A1, A2 and P) in the WSC sentences W have corresponding entities in the corresponding knowledge text K WSC Sentence: Jim comforted Kevin because he was so upset . Knowledge Text (K): She says I comforted her, because she was so upset Alignments: aligns with(Jim,I,K), aligns with(Kevin,her,K), aligns with(he,she,K) TFT Only the entity representing the answer choice one (A1) and the pronoun to be resolved (P) have corresponding entities in the knowledge text K WSC Sentence: The trophy does not fit into the brown suitcase because it is too large . 
Knowledge Text (K): installed CPU and fan would not fit in because the fan was too large Alignments: aligns with(trophy,fan,K), aligns with(it,fan,K) FTT Only the entity representing the answer choice 2 (A2) and the pronoun to be resolved (P) have corresponding entities in the knowledge text K WSC Sentence: James asked Robert for a favor but he refused . Knowledge Text (K): He asked the LORD what he should do, but the LORD refused to answer him, either by dreams or by sacred lots or by the prophets. Alignments: aligns with(Robert,LORD,K) and aligns with(he,LORD,K) Table 1: Alignment Cases in the Knowledge Hunting Approach. A1 and A2 are answer choices one and two, P is pronoun to resolve, Ek1, Ek2 and Ek3 are entities in a knowledge text (K) Then we can say that both Ai and P refer to same entity and hence they refer to each other. An important aspect of such a scenario is the existence of the entities in a knowledge text which align with at least one of the answer choices and the pronoun to be resolved. In other words the cases {TTT}, {TFT} and {FTT}. So we consider the alignments generated only with respect to these three cases as an output of the alignment module. The three cases and their details are shown in the Table 1 along with examples from the dataset. 3.2 Using the Knowledge from Language Models In recent years, deep neural networks have achieved great success in the field of natural language processing (Liu et al., 2019; Chen et al., 2018). With the recent advancements in the neural network architectures and availability of powerful machine it is possible to train unsupervised language models and use them in various tasks (Devlin et al., 2018; Trinh and Le, 2018). Such language models are able to capture the knowledge which is helpful in solving many WSC problems. Let us consider the WSC problem shown below. S3: I put the heavy book on the table and it broke. Pronoun to resolve: it Answer Choices: a) table b) book A knowledge that, “table broke is more likely than book broke” is sufficient to solve the above WSC problem. Such a knowledge is easily learned by the language models because they are trained on huge amounts of text snippets which are transcribed by people. Furthermore, these models are good at learning the frequently occurring patterns 6115 from data. In this work, we aim to utilize such knowledge that is embedded in the neural language models. We replace the pronoun to be resolved in the WSC text with the two answer choices, one at a time, generating two possible texts. For example the two texts generated in the above WSC example are, S3(a) = I put the heavy book on the table and table broke., S3(b) = I put the heavy book on the table and book broke. Then a pre-trained language model is used to predict the probability of each of the generated texts. Let Pa be the probability of S3(a) and Pb be the probability of S3(b). To be able to use the result of language models in Probabilistic Soft Logic (PSL) (Kimmig et al., 2012), the output of this step contains coref(P,A1):PROB1 and coref(P,A2):PROB2, where P is the pronoun to be resolved, A1 and A2 are answer choices one and two respectively, and PROB1 and PROB2 are the probabilities of the texts generated by replacing P with A1 and A2 in the WSC text respectively, i.e., Pa and Pb in the example above. 
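As an illustration of this scoring step, the following sketch substitutes the two answer choices for the pronoun and normalizes the two language-model scores into coref probabilities. GPT-2 through the Hugging Face transformers library is used here purely as an assumed stand-in; the actual system uses the ensemble of Trinh and Le (2018) or BERT.

# Hypothetical sketch of Section 3.2: score the two pronoun-substituted
# sentences with a pre-trained LM and normalize them into coref(P, A1)
# and coref(P, A2). The specific model (GPT-2) is an assumption.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()


def sentence_log_prob(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token.
    return -out.loss.item() * (ids.size(1) - 1)


def coref_probabilities(wsc_text, pronoun, choice1, choice2):
    # Replace the first occurrence of the pronoun with each answer choice.
    s1 = wsc_text.replace(pronoun, choice1, 1)
    s2 = wsc_text.replace(pronoun, choice2, 1)
    lp1, lp2 = sentence_log_prob(s1), sentence_log_prob(s2)
    m = max(lp1, lp2)                       # numerically stable softmax over the two scores
    e1, e2 = math.exp(lp1 - m), math.exp(lp2 - m)
    return e1 / (e1 + e2), e2 / (e1 + e2)   # coref(P, A1), coref(P, A2)

# Example:
# pa, pb = coref_probabilities(
#     "I put the heavy book on the table and it broke.", "it", "book", "table")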
3.3 Combining Knowledge Hunting and Language Models

In this step, the alignment results generated from the knowledge hunting module and the coreference probabilities generated from the language models are combined in a Probabilistic Soft Logic (PSL) (Kimmig et al., 2012) framework to infer the confidence for each of the answer choices in a WSC problem. PSL is a probabilistic logic framework designed to have efficient inference. A key distinguishing feature of PSL is that ground atoms have soft, continuous truth values in the interval [0, 1] rather than the binary truth values used in Markov Logic Networks and most other kinds of probabilistic logic. Given a set of weighted logical formulas, PSL builds a graphical model defining a probability distribution over the continuous space of values of the random variables in the model. A PSL model is defined using a set of weighted if-then rules in first-order logic, as in the following example:

0.7 : ∀x, y, z. spouse(x, y) ∧ isChildOf(z, x) → isChildOf(z, y)    (1)

Here, x, y and z represent variables. The above rule states that a person's child is also a child of his/her spouse. The weight (0.7) associated with the rule encodes the strength of the rule. Each grounded atom in a rule of a PSL model has a soft truth value in the interval [0, 1], which is denoted by I(a). The following formulas are used to compute soft truth values for conjunctions (∧), disjunctions (∨) and negations (¬) in the logical formulas:

I(l1 ∧ l2) = max{0, I(l1) + I(l2) − 1}
I(l1 ∨ l2) = min{I(l1) + I(l2), 1}
I(¬l1) = 1 − I(l1)    (2)

Then, a given rule r ≡ r_body → r_head is said to be satisfied (i.e. I(r) = 1) iff I(r_body) ≤ I(r_head). Otherwise, PSL defines a distance to satisfaction d(r) which captures how far a rule r is from being satisfied: d(r) = max{0, I(r_body) − I(r_head)}. For example, assume we have the set of evidence I(spouse(B, A)) = 1, I(isChildOf(P, B)) = 0.9, I(isChildOf(P, A)) = 0.7, and that r is the resulting ground instance of rule (1). Then I(spouse(B, A) ∧ isChildOf(P, B)) = max{0, 1 + 0.9 − 1} = 0.9, and d(r) = max{0, 0.9 − 0.7} = 0.2.

PSL is primarily designed to support Most Probable Explanation (MPE) inference. MPE inference is the task of finding the overall interpretation (combination of grounded atoms) with the maximum probability given a set of evidence. Intuitively, the interpretation with the highest probability is the interpretation with the lowest distance to satisfaction. In other words, it is the interpretation that tries to satisfy all rules as much as possible.

We used the PSL framework to combine the results from the other modules in our approach and generate the confidence scores for each of the answer choices. The confidence scores are generated for the predicate coref(p, ai), where p is the variable representing a pronoun to be resolved in a WSC problem and ai is a variable representing an answer choice in the WSC problem. To be able to use the alignment information from the knowledge hunting approach, the following PSL rule was written. It is used to generate the coref predicate and its truth value for the answer choices.

w : { ∀a, e1, e2, k, p. aligned with(a, e1, k) ∧ aligned with(p, e2, k) ∧ similar(e1, e2) → coref(p, a) }    (3)

Here w is the weight of the rule; a, p, e1, e2 and k are variables such that a is an answer choice in a WSC problem, p is the pronoun to be resolved in a WSC problem, and e1 and e2 are entities in a knowledge text k.
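To make the soft-logic calculus of Equation (2) and the distance to satisfaction concrete, here is a small self-contained sketch that reproduces the spouse/isChildOf example above; it is illustrative only and is not the PSL inference engine.

# Minimal sketch of the soft truth-value calculus in Eq. (2) and the
# distance to satisfaction d(r); illustrative only, not the PSL engine.

def t_and(a, b):          # Lukasiewicz-style conjunction
    return max(0.0, a + b - 1.0)

def t_or(a, b):           # disjunction
    return min(a + b, 1.0)

def t_not(a):             # negation
    return 1.0 - a

def distance_to_satisfaction(body, head):
    """d(r) = max{0, I(r_body) - I(r_head)} for a ground rule body -> head."""
    return max(0.0, body - head)

# Worked example from the text:
# I(spouse(B, A)) = 1.0, I(isChildOf(P, B)) = 0.9, I(isChildOf(P, A)) = 0.7
body = t_and(1.0, 0.9)                       # = 0.9
d_r = distance_to_satisfaction(body, 0.7)    # = 0.2
print(body, d_r)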
The groundings of the aligned with predicate are generated from the knowledge hunting module and the groundings of the similar predicate encode the similar entities in k. The truth value of a grounding of similar predicate is used to represent how similar the two entities, i.e., e1 and e2, are to each other. Although any kind of semantic similarity calculation algorithm may be used for producing the similar predicate, we used BERT (Devlin et al., 2018) to calculate the similarity between two entities. In case the values of e1 and e2 are same (say E) the truth value of the grounded atom similar(E, E) becomes 1. Intuitively, the above rule means that if an answer choice and the pronoun to be resolved in a WSC problem align with similar entities in a knowledge text corresponding to the WSC problem then the pronoun refers to the answer choice. The above rule applies to all the three cases mentioned in the Table 1. The neural language models approach produces two groundings of the atom defined by the binary predicate coref as its result (see section 3.2). The two groundings refer to the co-reference between the pronoun to be resolved and the two answer choices respectively. The groundings are accompanied with their probabilities which we used as their truth values. These grounded coref atoms are directly entered as input to the PSL framework along with the output from knowledge hunting approach to infer the truth values for the coref atom with respect to each of the answer choices. Finally, the answer choice with higher truth value is considered as the correct co-referent of the pronoun to be resolved and hence the final answer. 4 Experiments 4.1 Dataset The Winograd Schema Challenge corpus1 consists of pronoun resolution problems where a set of sentences is given along with a pronoun in the sentences and two possible answer choices such that only one choice is correct. There are 285 problems in the WSC dataset. From this point onward, we will call this dataset as WSC285. The generation of the original WSC dataset itself is an ongoing work. Hence the dataset keeps getting updated. This is why the works earlier than ours, used a smaller dataset containing 273 problems. All the problems in it are also present in WSC285. From this point onward, we will call this subset of WSC285 as WSC273. For a fair comparison between our work and others’, we performed our experiments with respect to both WSC285 and WSC273. The core to reproduce the results of this paper is available at https: //github.com/Ashprakash/CKLM. 4.2 Experimental Setup and Results First, we compared the results of our system with the previous works in terms of the number of correct predictions. The language models based component of our approach relies on pre-trained language models. Here we compared two different language models. First we used the ensemble of 14 pre-trained language models which are used in (Trinh and Le, 2018). Secondly, we used BERT (Devlin et al., 2018) pre-trained model. Based on the language model used, in the following experiments we use OUR METHODT2018 to represent our approach which uses models from (Trinh and Le, 2018) and OUR METHODBERT to represent our approach which uses the BERT language model. We compared our method with five other methods (two language models based and three others). The comparison results are as shown in the Table 2. The first two, (Sharma et al., 2015b) and (Liu et al., 2017) hereafter called S2015 and L2017 respectively, address a subset of WSC problems (71 problems). 
Both of them are able to exploit only causal knowledge. This explains their low coverage over the entire corpus. We overcome this issue by using any form of knowledge text making predictions for each of 1Available at https://cs.nyu.edu/ faculty/davise/papers/WinogradSchemas/ WSCollection.xml 6117 the problems in the dataset. More recently, two approaches on solving the WSC273 dataset have been proposed. The first work (Emami et al., 2018a) (hereafter called E2018) extract knowledge in form of sentences to find evidences to support each of the possible answer choices. A comparison between their results and our is present in the Table 2. Another work (Trinh and Le, 2018) (hereafter called T2018) uses a neural network architecture to learn language models from huge data sources to predict the probability of choosing one answer over the other is also compared as shown in the Table 2. We performed a second set of experiments to further investigate the robustness of our method as compared to the state-of-the-art system (T2018). Each problem in the WSC has a sister problem in the WSC such that the texts in the two problems differ only by a word or two but the same pronoun refers to different entities. The two answer choices for both the problems in the pair are also same. For example, consider the following pair of problems. S4: The firemen arrived after the police because they were coming from so far away. Pronoun to resolve: they Answer Choices: a) firemen b) police S5: The firemen arrived before the police because they were coming from so far away . Pronoun to resolve: they Answer Choices: a) firemen b) police In the above problems, only changing one word (before/after) in the sentence changes the answer to the problem. Due to this property of the dataset, a system can achieve an accuracy of 50% by just answering choice 1 as the correct answer for every problem. To make sure that this is not the case in our system, we performed the following two experiments. 1. Experiment to Evaluate Pairwise Accuracy: In this experiment we evaluate our method and the other methods to find out how many of the problem pairs were correctly solved. The table 3 shows the results of the experiment. It can be seen from the results that our best performing method(OUR METHODBERT on WSC273) solves 57 pairs correctly, which is significantly more than its baseline ‘BERT Only’ method. Similar pattern for the other methods can be seen in the Table 3. 2. Experiment to Evaluate System Bias: In this experiment we evaluate our method and the others to find out if the methods are biased to chose the answer choice which is closer to the pronoun in a WSC sentence. We found that usually the answer choice 2 in the problem is closer to the pronoun to be resolved. Hence the experiments were performed to figure out how many times a method answers choice 2 as the final answer. The results of the experiments are as shown in the Table 3. As seen from the results, both, the language model based methods and our methods are not particularly biased towards one of the answer choices. 4.3 Remarks Our best performing setting (OUR METHODBERT on WSC273) correctly answers 26 problems which are incorrectly answered by the baseline language model (BERT Only on WSC273). We found that the main reason for such a behavior is the addition of the suitable knowledge from the knowledge hunting module. It helps in generating the support for the correct answer to the extent that it overturns the decision of the language model. 
For example, we observed that for the WSC sentence ‘The woman held the girl against her will’ the BERT language model predicted that ‘her’ refers to ‘The woman’ with a probability score of 0.513, which is incorrect, and to ‘the girl’ with a probability score of 0.486. But the knowledge hunting approach alone within the PSL framework predicted the answer to be ‘the girl’ with a probability score of 0.966, which is correct, and ‘the woman’ with a probability score of 0.034. Overall, the PSL inference engine combined the scores from both approaches and corrected the decision made by the language model, predicting ‘the girl’ as the correct answer with a probability score of 0.967. On the other hand, five problems were found to be incorrectly answered by our approach which were correctly answered by the language model. In all such cases the probabilities corresponding to the answer choices were found to be very close to each other and inclined towards the incorrect answer. The difference between the language model probabilities generally being very small, the combined approach answered incorrectly in such cases. The main reason for such behavior is the availability of unsuitable knowledge text. For example, the knowledge text retrieved for the WSC sentence ‘The man lifted the boy onto his shoulders.’ was ‘‘If she scores I’ll feel really bad!’ New documentary lifts the lid on life for female stars who are partners but line up for rival clubs’. A similar pattern was found in the other settings as well.

Method                      #correct   % Correct
S2015                       49         18.0
L2017                       43         15.0
E2018                       119        44.0
T2018 (WSC273)              174        63.70
T2018 (WSC285)              180        63.15
BERT Only (WSC273)          173        63.36
BERT Only (WSC285)          179        62.80
OUR METHODT2018 (WSC273)    189        69.23
OUR METHODT2018 (WSC285)    195        68.42
OUR METHODBERT (WSC273)     194        71.06
OUR METHODBERT (WSC285)     200        70.17
Table 2: Evaluation Results

Method                      Correct Pairs   Incorrect Pairs   #Times Choice 2 is Chosen
T2018 (WSC273)              42              89                142
T2018 (WSC285)              44              97                146
BERT Only (WSC273)          36              94                129
BERT Only (WSC285)          37              101               131
OUR METHODT2018 (WSC273)    60              71                143
OUR METHODT2018 (WSC285)    61              80                148
OUR METHODBERT (WSC273)     57              74                130
OUR METHODBERT (WSC285)     58              83                134
Table 3: Additional Experiments

5 Conclusion

Automatic extraction of the needed commonsense knowledge is a major obstacle in solving the Winograd Schema Challenge. We observed that sometimes the needed knowledge can be retrieved from pre-trained neural language models. At other times a more involved knowledge about actions and properties is needed. So, in this work we utilized the knowledge embedded in the pre-trained language models and developed a technique to automatically extract the more involved commonsense knowledge from text repositories. Then we defined an approach to combine the two kinds of knowledge in a probabilistic soft logic based framework to solve the Winograd Schema Challenge (WSC). The experimental results show that the combined approach possesses the benefits of both approaches and achieves state-of-the-art accuracy on the WSC. This work presents an approach that combines the ideas of knowledge hunting and language modeling to perform commonsense reasoning. It is a general approach that may be applied to other commonsense reasoning tasks which require both the knowledge embedded in pre-trained language models and more involved knowledge about actions and properties.

Acknowledgement

Support from DARPA and NSF grant 1816039 is acknowledged.
6119 References Dan Bailey, Amelia Harrison, Yuliya Lierler, Vladimir Lifschitz, and Julian Michael. 2015. The winograd schema challenge and reasoning about correlation. In In Working Notes of the Symposium on Logical Formalizations of Commonsense Reasoning. Jaime G Carbonell and Ralf D Brown. 1988. Anaphora resolution: a multi-strategy approach. In Proceedings of the 12th conference on Computational linguistics-Volume 1, pages 96–101. Association for Computational Linguistics. Yongrui Chen, Huiying Li, and Zejian Xu. 2018. Convolutional neural network-based question answering over knowledge base with type constraint. In China Conference on Knowledge Graph and Semantic Computing, pages 28–39. Springer. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018a. A knowledge hunting framework for common sense reasoning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1949–1958. Ali Emami, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018b. A generalized knowledge hunting framework for the winograd schema challenge. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 25–31. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 643– 653. Nicos Isaak and Loizos Michael. 2016. Tackling the winograd schema challenge through machine logical inferences. In STAIRS, volume 284, pages 75–86. Angelika Kimmig, Stephen H Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short introduction to probabilistic soft logic. In NIPS Workshop on probabilistic programming: Foundations and applications, volume 1, page 3. Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, page 47. Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. 2017. Cause-effect knowledge acquisition and neural association model for solving a set of winograd schema problems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pages 2344–2350. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Bhavana Dalvi Mishra, Lifu Huang, Niket Tandon, Wen-tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: A challenge dataset and models for process paragraph comprehension. arXiv preprint arXiv:1805.06975. 
Vincent Ng. 2017. Machine learning for entity coreference resolution: A retrospective look at two decades of research. In AAAI, pages 4877–4884. Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933. Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multipass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 492–501. Association for Computational Linguistics. Peter Sch¨uller. 2014. Tackling winograd schemas by formalizing relevance theory in knowledge graphs. In Fourteenth International Conference on the Principles of Knowledge Representation and Reasoning. Arpit Sharma, Nguyen Vo, Somak Aditya, and Chitta Baral. 2015a. Identifying various kinds of event mentions in k-parser output. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 82–88. Arpit Sharma, Nguyen Ha Vo, Somak Aditya, and Chitta Baral. 2015b. Towards addressing the winograd schema challenge-building and using a semantic parser and a knowledge hunting module. In IJCAI, pages 1319–1325. Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
2019
614
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6120–6129 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6120 Careful Selection of Knowledge to solve Open Book Question Answering Pratyay Banerjee∗and Kuntal Kumar Pal∗and Arindam Mitra∗and Chitta Baral Department of Computer Science, Arizona State University pbanerj6,kkpal,amitra7,[email protected] Abstract Open book question answering is a type of natural language based QA (NLQA) where questions are expected to be answered with respect to a given set of open book facts, and common knowledge about a topic. Recently a challenge involving such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks that focus on linguistic understanding, OpenBookQA requires deeper reasoning involving linguistic understanding as well as reasoning with common knowledge. In this paper we address QA with respect to the OpenBookQA dataset and combine state of the art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection and weighted scoring to achieve 72.0% accuracy, an 11.6% improvement over the current state of the art. 1 Introduction Natural language based question answering (NLQA) not only involves linguistic understanding, but often involves reasoning with various kinds of knowledge. In recent years, many NLQA datasets and challenges have been proposed, for example, SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017) and MultiRC (Khashabi et al., 2018), and each of them have their own focus, sometimes by design and other times by virtue of their development methodology. Many of these datasets and challenges try to mimic human question answering settings. One such setting is open book question answering where humans are asked to answer questions in a setup where they can refer to books and other materials related to their questions. In such a setting, the focus is not on memorization but, as mentioned in Mihaylov et al. (2018), on “deeper understanding of the materials and its application ∗These authors contributed equally to this work. to new situations (Jenkins, 1995; Landsberger, 1996).” In Mihaylov et al. (2018), they propose the OpenBookQA dataset mimicking this setting. Question: A tool used to identify the percent chance of a trait being passed down has how many squares ? (A) Two squares (B) Four squares (C) Six squares (D) Eight squares Extracted from OpenBook: a punnett square is used to identify the percent chance of a trait being passed down from a parent to its offspring. Retrieved Missing Knowledge: Two squares is four. The Punnett square is made up of 4 squares and 2 of them are blue and 2 of them are brown, this means you have a 50% chance of having blue or brown eyes. Table 1: An example of distracting retrieved knowledge The OpenBookQA dataset has a collection of questions and four answer choices for each question. The dataset comes with 1326 facts representing an open book. It is expected that answering each question requires at least one of these facts. In addition it requires common knowledge. To obtain relevant common knowledge we use an IR system (Clark et al., 2016) front end to a set of knowledge rich sentences. 
Compared to reading comprehension based QA (RCQA) setup where the answers to a question is usually found in the given small paragraph, in the OpenBookQA setup the open book part is much larger (than a small paragraph) and is not complete as additional common knowledge may be required. This leads to multiple challenges. First, finding the relevant facts in an open book (which is much bigger than the small paragraphs in the RCQA setting) is a 6121 challenge. Then, finding the relevant common knowledge using the IR front end is an even bigger challenge, especially since standard IR approaches can be misled by distractions. For example, Table 1 shows a sample question from the OpenBookQA dataset. We can see the retrieved missing knowledge contains words which overlap with both answer options A and B. Introduction of such knowledge sentences increases confusion for the question answering model. Finally, reasoning involving both facts from open book, and common knowledge leads to multi-hop reasoning with respect to natural language text, which is also a challenge. We address the first two challenges and make the following contributions in this paper: (a) We improve on knowledge extraction from the OpenBook present in the dataset. We use semantic textual similarity models that are trained with different datasets for this task; (b) We propose natural language abduction to generate queries for retrieving missing knowledge; (c) We show how to use Information Gain based Re-ranking to reduce distractions and remove redundant information; (d) We provide an analysis of the dataset and the limitations of BERT Large model for such a question answering task. The current best model on the leaderboard of OpenBookQA is the BERT Large model (Devlin et al., 2018). It has an accuracy of 60.4% and does not use external knowledge. Our knowledge selection and retrieval techniques achieves an accuracy of 72%, with a margin of 11.6% on the current state of the art. We study how the accuracy of the BERT Large model varies with varying number of knowledge facts extracted from the OpenBook and through IR. 2 Related Work In recent years, several datasets have been proposed for natural language question answering (Rajpurkar et al., 2016; Joshi et al., 2017; Khashabi et al., 2018; Richardson et al., 2013; Lai et al., 2017; Reddy et al., 2018; Choi et al., 2018; Tafjord et al., 2018; Mitra et al., 2019) and many attempts have been made to solve these challenges (Devlin et al., 2018; Vaswani et al., 2017; Seo et al., 2016). Among these, the closest to our work is the work in (Devlin et al., 2018) which perform QA using fine tuned language model and the works of (Sun et al., 2018; Zhang et al., 2018) which performs QA using external knowledge. Related to our work for extracting missing knowledge are the works of (Ni et al., 2018; Musa et al., 2018; Khashabi et al., 2017) which respectively generate a query either by extracting key terms from a question and an answer option or by classifying key terms or by Seq2Seq models to generate key terms. In comparison, we generate queries using the question, an answer option and an extracted fact using natural language abduction. The task of natural language abduction for natural language understanding has been studied for a long time (Norvig, 1983, 1987; Hobbs, 2004; Hobbs et al., 1993; Wilensky, 1983; Wilensky et al., 2000; Charniak and Goldman, 1988, 1989). 
However, such works transform the natural language text to a logical form and then use formal reasoning to perform the abduction. On the contrary, our system performs abduction over natural language text without translating the texts to a logical form. 3 Approach Our approach involves six main modules: Hypothesis Generation, OpenBook Knowledge Extraction, Abductive Information Retrieval, Information Gain based Re-ranking, Passage Selection and Question Answering. A key aspect of our approach is to accurately hunt the needed knowledge facts from the OpenBook knowledge corpus and hunt missing common knowledge using IR. We explain our approach in the example given in Table 2. Question: A red-tailed hawk is searching for prey. It is most likely to swoop down on what? (A) a gecko Generated Hypothesis : H : A red-tailed hawk is searching for prey. It is most likely to swoop down on a gecko. Retrieved Fact from OpenBook: F : hawks eat lizards Abduced Query to find missing knowledge: K : gecko is lizard Retrieved Missing Knowledge using IR: K : Every gecko is a lizard. Table 2: Our approach with an example for the correct option In Hypothesis Generation, our system generates 6122 Figure 1: Our approach a hypothesis Hij for the ith question and jth answer option, where j ∈{1, 2, 3, 4}. In OpenBook Knowledge Extraction, our system retrieves appropriate knowledge Fij for a given hypothesis Hij using semantic textual similarity, from the OpenBook knowledge corpus F. In Abductive Information Retrieval, our system abduces missing knowledge from Hij and Fij. The system formulates queries to perform IR to retrieve missing knowledge Kij. With the retrieved Kij, Fij, Information Gain based Re-ranking and Passage Selection our system creates a knowledge passage Pij. In Question Answering, our system uses Pij to answer the questions using a BERT Large based MCQ model, similar to its use in solving SWAG (Zellers et al., 2018). 3.1 Hypothesis Generation Our system creates a hypothesis for each of the questions and candidate answer options as part of the data preparation phase as shown in the example in Table 2. The questions in the OpenBookQA dataset are either with wh word or are incomplete statements. To create hypothesis statements for questions with wh words, we use the rule-based model of Demszky et al. (2018). For the rest of the questions, we concatenate the questions with each of the answers to produce the four hypotheses. This has been done for all the training, test and validation sets. 3.2 OpenBook Knowledge Extraction To retrieve a small set of relevant knowledge facts from the knowledge corpus F, a textual similarity model is trained in a supervised fashion on two different datasets and the results are compared. We use the large-cased BERT (Devlin et al., 2018) (BERT Large) as the textual similarity model. 3.2.1 BERT Model Trained on STS-B We train it on the semantic textual similarity (STSB) data from the GLUE dataset (Wang et al., 2018). The trained model is then used to retrieve the top ten knowledge facts from corpus F based on the STS-B scores. The STS-B scores range from 0 to 5.0, with 0 being least similar. 3.2.2 BERT Model Trained on OpenBookQA We generate the dataset using the gold OpenBookQA facts from F for the train and validation set provided. To prepare the train set, we first find the similarity of the OpenBook F facts with respect to each other using the BERT model trained on STS-B dataset. We assign a score 5.0 for the gold ˆFi fact for a hypothesis. 
We then sample different facts from the OpenBook and assign the STS-B similarity scores between the sampled fact and the gold fact ˆFi as the target score for that fact Fij and Hij. For example: Hypothesis : Frilled sharks and angler fish live far beneath the surface of the ocean, which is why they are known as Deep sea animals. Gold Fact : deep sea animals live deep in the ocean : Score : 5.0 Sampled Facts : coral lives in the ocean : Score : 3.4 a fish lives in water : Score : 2.8 We do this to ensure a balanced target score is present for each hypothesis and fact. We use this trained model to retrieve top ten relevant facts for each Hij from the knowledge corpus F. 3.3 Natural Language Abduction and IR To search for the missing knowledge, we need to know what we are missing. We use “abduction” to figure that out. Abduction is a long studied task in AI, where normally, both the observation (hypothesis) and the domain knowledge (known fact) is represented in a formal language from which a logical solver abduces possible explanations (missing knowledge). However, in our case, both the observation and the domain knowledge are given as natural language sentences from which we want to find out a possible missing knowledge, which we will then hunt using IR. For example, one of the hypothesis Hij is “A redtailed hawk is searching for prey. It is most likely to swoop down on a gecko.”, and for which the known fact Fij is “hawks eats lizards”. From this we expect the output of the natural language abduction system to be Kij or “gecko is a lizard”. We will refer to this as “natural language abduction”. 6123 For natural language abduction, we propose three models, compare them against a baseline model and evaluate each on a downstream question answering task. All the models ignore stop words except the Seq2Seq model. We describe the three models and a baseline model in the subsequent subsections. 3.3.1 Word Symmetric Difference Model We design a simple heuristic based model defined as below: Kij = (Hij ∪Fij)\(Hij ∩Fij) ∀j ∈{1, 2, 3, 4} where i is the ith question, j is the jth option, Hij, Fij, Kij represents set of unique words of each instance of hypothesis, facts retrieved from knowledge corpus F and abduced missing knowledge of validation and test data respectively. 3.3.2 Supervised Bag of Words Model In the Supervised Bag of Words model, we select words which satisfy the following condition: P(wn ∈Kij) > θ where wn ∈{Hij ∪Fij}. To elaborate, we learn the probability of a given word wn from the set of words in Hij ∪Fij belonging to the abduced missing knowledge Kij. We select those words which are above the threshold θ. To learn this probability, we create a training and validation dataset where the words similar (cosine similarity using spaCy) (Honnibal and Montani, 2017) to the words in the gold missing knowledge ˆKi (provided in the dataset) are labelled as positive class and all the other words not present in ˆKi but in Hij ∪Fij are labelled as negative class. Both classes are ensured to be balanced. Finally, we train a binary classifier using BERT Large with one additional feed forward network for classification. We define value for the threshold θ using the accuracy of the classifier on validation set. 0.4 was selected as the threshold. 3.3.3 Copynet Seq2Seq Model In the final approach, we used the copynet sequence to sequence model (Gu et al., 2016) to generate, instead of predict, the missing knowledge given, the hypothesis H and knowledge fact from the corpus F. 
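As a reference point, the Word Symmetric Difference model of Section 3.3.1 above reduces to a few lines of code. The sketch below uses a toy stop-word list and regular-expression tokenization as stand-ins for whatever preprocessing the actual system performs.

# Hypothetical sketch of the Word Symmetric Difference model (Section 3.3.1):
# K_ij = (H_ij ∪ F_ij) \ (H_ij ∩ F_ij), over unique non-stop-word tokens.
import re

STOP_WORDS = {"a", "an", "the", "is", "are", "was", "were", "to", "of",
              "on", "in", "for", "and", "it", "its", "most", "likely"}


def content_words(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return {t for t in tokens if t not in STOP_WORDS}


def symmetric_difference_query(hypothesis, fact):
    h, f = content_words(hypothesis), content_words(fact)
    return (h | f) - (h & f)      # words that occur in exactly one of the two texts

# Example from Table 2:
# symmetric_difference_query(
#     "A red-tailed hawk is searching for prey. It is most likely to swoop "
#     "down on a gecko.",
#     "hawks eat lizards")
# -> keywords such as "gecko", "lizards", "prey", which seed the IR query
#    for the missing knowledge.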
The intuition behind using copynet model is to make use of the copy mechanism to generate essential yet precise (minimizing distractors) information which can help in answering the question. We generate the training and validation dataset using the gold ˆKi as the target sentence, but we replace out-of-vocabulary words from the target with words similar (cosine similarity using spaCy) (Honnibal and Montani, 2017) to the words present in Hij ∪Fij. Here, however, we did not remove the stopwords. We choose one, out of multiple generated knowledge based on our model which provided maximum overlap score, given by overlap score = P i count(( ˆHi ∪Fi) ∩Ki) P i count( ˆ Ki) where i is the ith question, ˆHi being the set of unique words of correct hypothesis, Fi being the set of unique words from retrieved facts from knowledge corpus F, Ki being the set of unique words of predicted missing knowledge and ˆ Ki being the set of unique words of the gold missing knowledge . 3.3.4 Word Union Model To see if abduction helps, we compare the above models with a Word Union Model. To extract the candidate words for missing knowledge, we used the set of unique words from both the hypothesis and OpenBook knowledge as candidate keywords. The model can be formally represented with the following: Kij = (Hij ∪Fij) ∀j ∈{1, 2, 3, 4} 3.4 Information Gain based Re-ranking In our experiments we observe that, BERT QA model gives a higher score if similar sentences are repeated, leading to wrong classification. Thus, we introduce Information Gain based Re-ranking to remove redundant information. We use the same BERT Knowledge Extraction model Trained on OpenBookQA data (section 3.2.2), which is used for extraction of knowledge facts from corpus F to do an initial ranking of the retrieved missing knowledge K. The scores of this knowledge extraction model is used as relevancy score, rel. To extract the top ten missing knowledge K, we define a redundancy score, redij , as the maximum cosine similarity, sim, between the previously selected missing knowledge, in the previous iterations till i, and the candidate missing knowledge Kj. If the last selected missing knowledge is Ki, then redij(Kj) = max(redi−1,j(Kj), sim(Ki, Kj)) 6124 rank score = (1 −redi,j(Kj)) ∗rel(Kj) For missing knowledge selection, we first take the missing knowledge with the highest rel score. From the subsequent iteration, we compute the redundancy score with the last selected missing knowledge for each of the candidates and then rank them using the updated rank score. We select the top ten missing knowledge for each Hij. 3.5 Question Answering Once the OpenBook knowledge facts F and missing knowledge K have been extracted, we move onto the task of answering the questions. 3.5.1 Question-Answering Model We use BERT Large model for the question answering task. For each question, we create a passage using the extracted facts and missing knowledge and fine-tune the BERT Large model for the QA task with one additional feed-forward layer for classification. The passages for the train dataset were prepared using the knowledge corpus facts, F. We create a passage using the top N facts, similar to the actual gold fact ˆFi, for the train set. The similarities were scored using the STS-B trained model (section 3.2.1). The passages for the training dataset do not use the gold missing knowledge ˆKi provided in the dataset. For each of our experiments, we use the same trained model, with passages from different IR models. 
The BERT Large model limits passage length to be lesser than equal to 512. This restricts the size of the passage. To be within the restrictions we create a passage for each of the answer options, and score for all answer options against each passage. We refer to this scoring as sum score, defined as follows: For each answer options, Aj, we create a passage Pj and score against each of the answer options Ai. To compute the final score for the answer, we sum up each individual scores. If Q is the question, the score for the answer is defined as Pr(Q, Ai) = 4 X j=1 score(Pj, Q, Ai) where score is the classification score given by the BERT Large model. The final answer is chosen based on, A = arg max A Pr(Q, Ai) 3.5.2 Passage Selection and Weighted Scoring In the first round, we score each of the answer options using a passage created from the selected knowledge facts from corpus F. For each question, we ignore the passages of the answer options which are in the bottom two. We refer to this as Passage Selection. In the second round, we score for only those passages which are selected after adding the missing knowledge K. We assume that the correct answer has the highest score in each round. Therefore we multiply the scores obtained after both rounds. We refer to this as Weighted Scoring. We define the combined passage selected scores and weighted scores as follows : Pr(F, Q, Ai) = 4 X j=1 score(Pj, Q, Ai) where Pj is the passage created from extracted OpenBook knowledge, F. The top two passages were selected based on the scores of Pr(F, Q, Ai). Pr(F ∪K, Q, Ai) = 4 X k=1 δ ∗score(Pk, Q, Ai) where δ = 1 for the top two scores and δ = 0 for the rest. Pk is the passage created using both the facts and missing knowledge. The final weighted score is : wPr(Q, Ai) = Pr(F, Q, Ai)∗Pr(F∪K, Q, Ai) The answer is chosen based on the top weighted scores as below: A = arg max A wPr(Q, Ai) 4 Experiments 4.1 Dataset and Experimental Setup The dataset of OpenBookQA contains 4957 questions in the train set and 500 multiple choice questions in validation and test respectively. We train a BERT Large based QA model using the top ten knowledge facts from the corpus F, as a passage for both training and validation set. We select the model which gives the best score for the validation set. The same model is used to score the validation and test set with different passages derived from different methods of Abductive IR. The best Abductive IR model, the number of facts from F and K are selected from the best validation scores for the QA task. 6125 F Any Passage Correct Passage Accuracy(%) N TF-IDF Trained STS-B TF-IDF Trained STS-B TF-IDF Trained STS-B 1 228 258 288 196 229 234 52.6 63.6 59.2 2 294 324 347 264 293 304 57.4 66.2 60.6 3 324 358 368 290 328 337 59.2 65.0 60.2 5 350 391 398 319 370 366 61.6 65.4 62.8 7 356 411 411 328 390 384 59.4 65.2 61.8 10 373 423 420 354 405 396 60.4 65.2 59.4 Table 3: Compares (a) The number of correct facts that appears across any four passages (b) The number of correct facts that appears in the passage of the correct hypothesis (c) The accuracy for TF-IDF, BERT model trained on STS-B dataset and BERT model trained on OpenBook dataset. N is the number of facts considered. 4.2 OpenBook Knowledge Extraction Question: .. they decide the best way to save money is ? 
(A) to quit eating lunch out (B) to make more phone calls (C) to buy less with monopoly money (D) to have lunch with friends Knowledge extraction trained with STS-B: using less resources usually causes money to be saved a disperser disperses each season occurs once per year Knowledge extraction trained with OpenBookQA: using less resources usually causes money to be saved decreasing something negative has a positive impact on a thing conserving resources has a positive impact on the environment Table 3 shows a comparative study of our three approaches for OpenBook knowledge extraction. We show, the number of correct OpenBook knowledge extracted for all of the four answer options using the three approaches TF-IDF, BERT model trained on STS-B data and BERT model Trained on OpenBook data. Apart from that, we also show the count of the number of facts present precisely across the correct answer options. It can be seen that the Precision@N for the BERT model trained on OpenBook data is better than the other models as N increases. The above example presents the facts retrieved from BERT model trained on OpenBook which are more relevant than the facts retrieved from BERT model trained on STS-B. Both the models were able to find the most relevant fact, but the other facts for STS-B model introduce more distractors and have lesser relevance. The impact of this is visible from the accuracy scores for the QA task in Table 3 . The best performance of the BERT QA model can be seen to be 66.2% using only OpenBook facts. 4.3 Abductive Information Retrieval We evaluate the abductive IR techniques at different values for number of facts from F and number of missing knowledge K extracted using IR. Figure 2 shows the accuracy against different combinations of F and K , for all four techniques of IR prior to Information gain based Re-ranking. In general, we noticed that the trained models performed poorly compared to the baselines. The Word Symmetric Difference model performs better, indicating abductive IR helps. The poor performance of the trained models can be attributed to the challenge of learning abductive inference. For the above example it can be seen, the pre-reranking facts are relevant to the question but contribute very less considering the knowledge facts retrieved from the corpus F and the correct answer. Figure 3 shows the impact of Information gain based Re-ranking. Removal of redundant data allows the scope of more relevant information being present in the Top N retrieved missing knowledge K. Question: A red-tailed hawk is searching for prey. It is most likely to swoop down on what? (A) an eagle (B) a cow (C) a gecko (D) a deer Fact from F : hawks eats lizards Pre-Reranking K : red-tail hawk in their search for prey Red-tailed hawks soar over the prairie and woodlands in search of prey. Post-Reranking K: Geckos - only vocal lizards. Every gecko is a lizard. 6126 Figure 2: Accuracy v/s Number of facts from F - number of facts from K, without Information Gain based Re-ranking for 3 abductive IR models and Word Union model. 1 Figure 3: Accuracy v/s Number of facts from F - number of facts from K, with Information Gain based Reranking for 3 abductive IR models and Word Union model. 1 4.4 Question Answering Table 4 shows the incremental improvement on the baselines after inclusion of carefully selected knowledge. Passage Selection and Weighted Scoring are used to overcome the challenge of boosted prediction scores due to cascading effect of errors in each stage. Question: What eat plants? 
(A) leopards (B) eagles (C) owls (D) robin Appropriate extracted Fact from F : some birds eat plants Wrong Extracted Fact from F : a salamander eats insects Wrong Retrieved Missing Knowledge: Leopard geckos eat mostly insects For the example shown above, the wrong answer leopards had very low score with only the Solver Accuracy (%) Leaderboard Guess All (“random”) 25.0 Plausible Answer Detector 49.6 Odd-one-out Solver 50.2 Question Match 50.2 Reading Strategies 55.8 Model - BERT-Large (SOTA) Only Question (No KB) 60.4 Model - BERT-Large (Our) F - TF-IDF 61.6 F - Trained KE 66.2 F ∪K 70.0 F ∪K with Weighted Scoring 70.4 F ∪K with Passage Selection 70.8 F ∪K with Both 72.0 Oracle - BERT-Large F gold 74.4 F ∪K gold 92.0 Table 4: Test Set Comparison of Different Components. Current state of the art (SOTA) is the Only Question model. K is retrieved from Symmetric Difference Model. KE refers to Knowledge Extraction. facts extracted from knowledge corpus F. But introduction of missing knowledge from the wrong fact from F boosts its scores, leading to wrong prediction. Passage selection helps in removal of such options and Weighted Scoring gives preference to those answer options whose scores are relatively high before and after inclusion of missing knowledge. 5 Analysis & Discussion 5.1 Model Analysis BERT Question Answering model: BERT performs well on this task, but is prone to distractions. Repetition of information leads to boosted prediction scores. BERT performs well for lookup based QA, as in RCQA tasks like SQuAD. But this poses a challenge for Open Domain QA, as the extracted knowledge enables lookup for all answer options, leading to an adversarial setting for lookup based QA. This model is able to find the correct answer, even under the adversarial setting, which is shown by the performance of the sum score to select the answer after passage selection. Symmetric Difference Model This model improves on the baseline Word Union model by 11No Passage Selection and Weighted Scoring. 6127 2%. The improvement is dwarfed because of inappropriate domain knowledge from F being used for abduction. The intersection between the inappropriate domain knowledge and the answer hypothesis is ∅, which leads to queries which are exactly same as the Word Union model. Supervised learned models The supervised learned models for abduction under-perform. The Bag of Words and the Seq2Seq models fail to extract keywords for many F −H pairs, sometimes missing the keywords from the answers. The Seq2Seq model sometimes extracts the exact missing knowledge, for example it generates “some birds is robin” or “lizard is gecko”. This shows there is promise in this approach and the poor performance can be attributed to insufficient train data size, which was 4957 only. A fact verification model might improve the accuracy of the supervised learned models. But, for many questions, it fails to extract proper keywords, copying just a part of the question or the knowledge fact. 5.2 Error Analysis Other than errors due to distractions and failed IR, which were around 85% of the total errors, the errors seen are of four broad categories. Temporal Reasoning: In the example 2 shown below, even though both the options can be considered as night, the fact that 2:00 AM is more suitable for the bats than 6:00 PM makes it difficult to reason. Such issues accounted for 5% of the errors. Question: Owls are likely to hunt at? 
(A) 3:00 PM (B) 2:00 AM (C) 6:00 PM (D) 7:00 AM Negation: In the example shown below, a model is needed which handles negations specifically to reject incorrect options. Such issues accounted for 1% of the errors. Question: Which of the following is not an input in photosynthesis? (A) sunlight (B) oxygen (C) water (D) carbon dioxide Conjunctive Reasoning: In the example as shown below, each answer options are partially correct as the word “ bear” is present. Thus a model has to learn whether all parts of the answer are true or not, i.e Conjunctive Reasoning. Logically, all answers are correct, as we can see 2Predictions are in italics, Correct answers are in Bold. an “or”, but option (A) makes more sense. Such issues accounted for 1% of the errors. Question: Some berries may be eaten by (A) a bear or person (B) a bear or shark (C) a bear or lion (D) a bear or wolf Qualitative Reasoning: In the example shown below, each answer options would stop a car but option (D) is more suitable since it will stop the car quicker. A deeper qualitative reasoning is needed to reject incorrect options. Such issues accounted for 8% of the errors. Question: Which of these would stop a car quicker? (A) a wheel with wet brake pads (B) a wheel without brake pads (C) a wheel with worn brake pads (D) a wheel with dry brake pads 6 Conclusion In this work, we have pushed the current state of the art for the OpenBookQA task using simple techniques and careful selection of knowledge. We have provided two new ways of performing knowledge extraction over a knowledge base for QA and evaluated three ways to perform abductive inference over natural language. All techniques are shown to improve on the performance of the final task of QA, but there is still a long way to reach human performance. We analyzed the performance of various components of our QA system. For the natural language abduction, the heuristic technique performs better than the supervised techniques. Our analysis also shows the limitations of BERT based MCQ models, the challenge of learning natural language abductive inference and the multiple types of reasoning required for an OpenBookQA task. Nevertheless, our overall system improves on the state of the art by 11.6%. 7 Acknowledgement We thank NSF for the grant 1816039 and DARPA for partially supporting this research. References Eugene Charniak and Robert Goldman. 1988. A logic for semantic interpretation. In Proceedings of the 26th annual meeting on Association for Compu6128 tational Linguistics, pages 87–94. Association for Computational Linguistics. Eugene Charniak and Robert P Goldman. 1989. A semantics for probabilistic quantifier-free first-order languages, with particular application to story understanding. In IJCAI, volume 89, pages 1074– 1079. Citeseer. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036. Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. In Thirtieth AAAI Conference on Artificial Intelligence. Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640. Association for Computational Linguistics. Jerry R Hobbs. 2004. Abduction in natural language understanding. Handbook of pragmatics, pages 724–741. Jerry R Hobbs, Mark E Stickel, Douglas E Appelt, and Paul Martin. 1993. Interpretation as abduction. Artificial intelligence, 63(1-2):69–142. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Tony Jenkins. 1995. Open Book Assessment in Computing Degree Programmes. Citeseer. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Association for Computational Linguistics. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 252–262. Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2017. Learning what is essential in questions. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 80–89. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785– 794. Association for Computational Linguistics. J Landsberger. 1996. Study guides and strategies. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In EMNLP. Arindam Mitra, Peter Clark, Oyvind Tafjord, and Chitta Baral. 2019. Declarative question answering over knowledge bases containing natural language text with answer set programming. Ryan Musa, Xiaoyan Wang, Achille Fokoue, Nicholas Mattei, Maria Chang, Pavan Kapanipathi, Bassem Makni, Kartik Talamadupula, and Michael Witbrock. 2018. Answering science exam questions using query rewriting with background knowledge. arXiv preprint arXiv:1809.05726. Jianmo Ni, Chenguang Zhu, Weizhu Chen, and Julian McAuley. 2018. Learning to attend on essential terms: An enhanced retriever-reader model for scientific question answering. arXiv preprint arXiv:1808.09492. Peter Norvig. 1983. Frame activated inferences in a story understanding program. In IJCAI, pages 624– 626. Peter Norvig. 1987. Inference in text understanding. In AAAI, pages 561–565. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. 
Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. 6129 In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2018. Improving machine reading comprehension with general reading strategies. CoRR, abs/1810.13441. Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2018. Quarel: A dataset and models for answering questions about qualitative relationships. arXiv preprint arXiv:1811.08048. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Robert Wilensky. 1983. Planning and understanding: A computational approach to human reasoning. Robert Wilensky, David N Chin, Marc Luria, James Martin, James Mayfield, and Dekai Wu. 2000. The berkeley unix consultant project. In Intelligent Help Systems for UNIX, pages 49–94. Springer. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. Yuyu Zhang, Hanjun Dai, Kamil Toraman, and Le Song. 2018. Kgˆ 2: Learning to reason science exam questions with contextual knowledge graph embeddings. arXiv preprint arXiv:1805.12393.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6130–6139 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6130 Learning Representation Mapping for Relation Detection in Knowledge Base Question Answering Peng Wu1,2, Shujian Huang1,2 , Rongxiang Weng1,2, Zaixiang Zheng1,2, Jianbing Zhang1,2, Xiaohui Yan3, Jiajun Chen1,2 1National Key Laboratory for Novel Software Technology, Nanjing, China 2Nanjing University, Nanjing, China 3Poisson Lab, Huawei Technologies, Beijing, China {wup, wengrx, zhengzx}@nlp.nju.edu.cn {huangsj, zjb, chenjj}@nju.edu.cn [email protected] Abstract Relation detection is a core step in many natural language process applications including knowledge base question answering. Previous efforts show that single-fact questions could be answered with high accuracy. However, one critical problem is that current approaches only get high accuracy for questions whose relations have been seen in the training data. But for unseen relations, the performance will drop rapidly. The main reason for this problem is that the representations for unseen relations are missing. In this paper, we propose a simple mapping method, named representation adapter, to learn the representation mapping for both seen and unseen relations based on previously learned relation embedding. We employ the adversarial objective and the reconstruction objective to improve the mapping performance. We re-organize the popular SimpleQuestion dataset to reveal and evaluate the problem of detecting unseen relations. Experiments show that our method can greatly improve the performance of unseen relations while the performance for those seen part is kept comparable to the state-of-the-art.1 1 Introduction The task of Knowledge Base Question Answering (KBQA) has been well developed in recent years (Berant et al., 2013; Bordes et al., 2014; Yao and Van Durme, 2014). It answers questions using an open-domain knowledge base, such as Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015) or NELL (Carlson et al., 2010). The knowledge base usually contains a large set of triples. 1Our code and data are available at https://github. com/wudapeng268/KBQA-Adapter. Each triple is in the form of ⟨subject, relation, object⟩, indicating the relation between the subject entity and the object entity. Typical KBQA systems (Yao and Van Durme, 2014; Yin et al., 2016; Dai et al., 2016; Yu et al., 2017; Hao et al., 2018) can be divided into two steps: the entity linking step first identifies the target entity of the question, which corresponds to the subject of the triple; the relation detection step then determines the relation that the question asks from a set of candidate relations. After the two steps, the answer could be obtained by extracting the corresponding triple from the knowledge base (as shown in Figure 1). Our main focus in this paper is the relation detection step, which is more challenging because it needs to consider the meaning of the whole question sentence (e.g., the pattern “where was ... born”), as well as the meaning of the candidate relation (e.g., “place of birth”). For comparison, the entity linking step benefits more from the matching of surface forms between the words in the question and subject entity (e.g., “Mark Mifsud”). 
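As a concrete picture of this two-step pipeline, the sketch below answers a single-fact question against a toy in-memory triple store, given an entity linker and a relation detector as black boxes. The store mirrors the running example in Figure 1; the second and third objects are left as placeholders rather than asserted facts, and all names are illustrative.

```python
# Toy knowledge base: a list of (subject, relation, object) triples.
KB = [
    ("Mark Mifsud", "people.person.place_of_birth", "Malta"),
    ("Mark Mifsud", "people.person.nationality", "<object not asserted here>"),
    ("Mark Mifsud", "people.person.profession", "<object not asserted here>"),
]

def answer(question, entity_linker, relation_detector):
    subject = entity_linker(question)                       # step 1: entity linking
    candidates = {r for s, r, _ in KB if s == subject}      # relations attached to the subject
    relation = relation_detector(question, candidates)      # step 2: relation detection
    return next(o for s, r, o in KB if s == subject and r == relation)
```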
In recent deep learning based relation detection approaches, each word or relation is represented by a dense vector representation, called embedding, which is usually learned automatically while optimizing the relation detection objective. Then, the inference processes of these approaches are executed by neural network computations. Such approaches enjoy great success in common KBQA datasets, such as SimpleQuestion (Bordes et al., 2015), achieving over 90% accuracy in relation detection. In the words of Petrochuk and Zettlemoyer (2018), “SimpleQuestion is nearly solved.” However, we notice that in the common split of 6131 Question: Candidate Relations: Triple: where was Mark Mifsud born? people.person.nationality <Mark Mifsud, people.person.place_of_birth, Malta> people.person.place_of_birth people.person.profession … Figure 1: A KBQA example. The bold words in the question are the target entity, identified in the entity linking step. The relation detection step selects the correct relation (marked with bold font) from a set of candidate relations. The answer of this question is the object entity of the triple extracted from the knowledge base. the SimpleQuestion dataset, 99% of the relations in the test set also exist in the training data, which means their embeddings could be learned well during training. On the contrary, for those relations which are never seen in the training data (called unseen relations), their embeddings have never been trained since initialization. As a result, the corresponding detection performance could be arbitrary, which is a problem that has not been carefully studied. We emphasize that the detection for these unseen relations is critical because it is infeasible to build training data for all the relations in a large-scale knowledge base. For example, SimpleQuestion is a large-scale human annotated dataset, which contains 108,442 natural language questions for 1,837 relations sampled from FB2M (Bordes et al., 2015). FB2M is a subset of FreeBase (Bollacker et al., 2008) which have 2 million entities, 6,700 relations. A large portion of these relations can not be covered by the humanannotated dataset such as SimpleQuestion. Therefore, for building up a practical KBQA system that could answer questions based on FB2M or other large-scale knowledge bases, dealing with the unseen relations is very important and challenging. This problem could be considered as a zero-shot learning problem (Palatucci et al., 2009) where the labels for test instances are unseen in the training dataset. In this paper, we present a detailed study on this zero-shot relation detection problem. Our contributions could be summarized as follows: 1. Instead of learning the relation representation barely from the training data, we employ methods to learn the representations from the whole knowledge graph which has much wider coverage. 2. We propose a mapping mechanism, called representation adapter, or simply adapter, to incorporate the learned representations into the relation detection model. We start with the simple mean square error loss for the non-trivial training of the adapter and propose to incorporate adversarial and reconstruction objectives to improve the training process. 3. We re-organize the SimpleQuestion dataset as SimpleQuestion-Balance to evaluate the performance for seen and unseen relations, separately. 4. 
We present experiments showing that our proposed method brings a great improvement to the detection of unseen relations, while still keep comparable to the state-of-the-art method for the seen relations. 2 Representation Adapter 2.1 Motivation Representation learning of human annotated data is limited by the size and coverage of the training data. In our case, because the unseen relations and their corresponding questions do not occur in the training data, their representations cannot be properly trained, leading to poor detection performance. A possible solution for this problem is to employ a large number of unannotated data, which may be much easier to obtain, to provide better coverage. Usually, pre-trained representations are not directly applicable to specific tasks. One popular way to utilize these representations is using them as initialization. These initialized representations are then fine-tuned on the labeled training data, with a task specific objective. However, with the above mentioned coverage issues, the representations of unseen relations will not be updated properly during fine-tuning, leading to poor test performance. To solve this problem, we keep the representation unchanged during training, and propose a representation adapter to bridge the gap between general purposed representations and task specific ones. We will then present the basic adapter framework, introduce the adversarial adapter and the reconstruction objective as enhancements. Throughout this paper, we use the following notations: let r denote a single relation; S and U denote the set of seen and unseen relations, respectively; e(r) or e denote the embedding of r; specifically, we use eg to denote the general pre-trained 6132 Input eg Output eo ̂e Target G Input eg Output eo ̂e Target G Output Input ̂e Target Recon. e′!o eo eg G′! G Basic Adapter Adver. Adapter Adapter with recon. loss lossD lossmse lossmse/lossD lossR Figure 2: The structures of representation adapter. On the left is the basic adapter; on the middle is the adversarial adapter; on the right is the adapter with the reconstruction loss. Adver. and recon. are the abbreviation of adversarial and reconstruction, respectively. embedding. 2.2 Basic Adapter Pseudo Target Representations The basic idea is to use a neural network representation adapter to perform the mapping from the general purposed representation to the task specific one. The input of the adapter is the embedding learned from the knowledge base. However, the output of the adapter is undecided, because there is no oracle representation for the relation detection task. Therefore, we first train a traditional relation detection model similar to Yu et al. (2017). During training, the representations for relations in the training set (seen relations) will be updated for the relation detection task. We use these representations as pseudo target representations, denoted as ˆe, for training the adapter. Linear Mapping Inspired by Mikolov et al. (2013), which shows the representation space of similar languages can be transferred by a linear mapping, we also employ a linear mapping function G(·) to map the general embedding eg to the task specific (pseudo target) representation ˆe (Figure 2, left). The major difference between our adapter and an extra layer of neural network is that specific losses are designed to train the adapter, instead of implicitly learning the adapter as a part of the whole network. 
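In code, the basic adapter is just a learned linear map fitted to the pseudo target representations of the seen relations. The PyTorch-style sketch below is illustrative only: the bias-free parameterization, dimensions, and training-loop details are assumptions rather than a description of the authors' implementation.

```python
import torch
import torch.nn as nn

class BasicAdapter(nn.Module):
    """Linear map G(.) from general-purpose relation embeddings to task-specific ones."""
    def __init__(self, dim=300):
        super().__init__()
        self.G = nn.Linear(dim, dim, bias=False)   # Mikolov-style linear mapping

    def forward(self, e_general):
        return self.G(e_general)

def fit_basic_adapter(e_general_seen, e_pseudo_target_seen, steps=1000, lr=1e-4):
    """Fit G on seen relations only, regressing the frozen pseudo targets with MSE."""
    adapter = BasicAdapter(dim=e_general_seen.size(1))
    optimizer = torch.optim.RMSprop(adapter.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = mse(adapter(e_general_seen), e_pseudo_target_seen)
        loss.backward()
        optimizer.step()
    return adapter
```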
We train the adapter to optimize the following objective function on the seen relations: Ladapter = X r∈S loss(ˆe, G(eg)). (1) Here the loss function could be any metric that evaluates the difference between the two representations. The most common and simple one is the mean square error loss (Equation (2)), which we employ in our basic adapter. We will discuss other possibilities in the following sub-sections. lossMSE(ˆe, G(eg)) = ||ˆe −G(eg)||2 2 (2) 2.3 Adversarial Adapter The mean square error loss only measures the absolute distance between two embedding vectors. Inspired by the popular generative adversarial networks (GAN) (Goodfellow et al., 2014; Arjovsky et al., 2017) and some previous works in unsupervised machine translation (Conneau et al., 2018; Zhang et al., 2017a,b), we use a discriminator to provide an adversarial loss to guide the training (Figure 2, middle). It is a different way to minimize the difference between G(e) and ˆe. In detail, we train a discriminator, D(·) , to discriminate the “real” representation, i.e., the finetuned relation embedding ˆe, from the “fake” representation, which is the output of the adapter. The adapter G(·) is acting as the generator in GAN, which tries to generate a representation that is similar to the “real” representation. We use WassersteinGAN (Arjovsky et al., 2017) to train our adapter. For any relations sampled from the training set, the objective function for the discriminator lossD and generator lossG are: lossD = Er∈S[D(G(eg))] −Er∈S[D(ˆe)] (3) lossG = −Er∈S[D(G(eg))] (4) Here for D(·), we use a feed forward neural network without the sigmoid function of the last layer (Arjovsky et al., 2017). 2.4 Reconstruction Loss The adapter could only learn the mapping by using the representations of seen relations, which neglects the potential large set of unseen relations. Here we propose to use an additional reconstruction loss to augment the adapter (Figure 2, right). More specifically, we employ a reversed adapter G′(·), mapping the representation G(e) back to e. The advantage of introducing the reversed training is two-fold. On the one hand, the reversed adapter could be trained with the representation 6133 for all the relations, both seen and unseen ones. On the other hand, the reversed mapping could also serve as an extra constraint for regularizing the forward mapping. For the reversed adapter G′(·), We simply use a similar linear mapping function as for G(·), and train it with the mean square error loss: lossR = X r∈S∪U ||G′(G(eg)) −eg||2 2 (5) Please note that, different from previous loss functions, this reconstruction loss is defined for both seen and unseen relations. 3 Relation Detection with the Adapter We integrate our adapter into the state-of-the-art relation detection framework (Yu et al., 2017, Hierarchical Residual BiLSTM (HR-BiLSTM)). Framework The framework uses a question network to encode the question sentence as a vector qf and a relation network to encode the relation as a vector rf. Both of the two networks are based on the Bi-LSTM with max-pooling operation. Then, the cosine similarity is introduced to compute the distance between the qf and rf, which determines the detection result. Our adapter is an additional module which is used in the relation network to enhance this framework (Figure 3). Adapting the Relation Representation The relation network proposed in Yu et al. (2017) has two parts for relation representations: one is at wordlevel and the other is at relation-level. 
The two parts are fed into the relation network to generate the final relation representation. Different from previous approaches, we employ the proposed adapter G(·) on the relation-level representations to solve unseen relation detection problem. There are several approaches to obtain the relation representations from the knowledge base into a universal space (Bordes et al., 2013; Wang et al., 2014; Han et al., 2018). In practice, we use the JointNRE embedding (Han et al., 2018), because its word and relation representations are in the same space. Training Following Yu et al. (2017), the relation detection model is trained by the hinge loss (Bengio et al., 2003) which tries to separate the score of each negative relation from the positive relation G(eg) Question Relation Adapter eg q1 q3 q2 q4 w1 w2 w3 Max Pooling qf rf Max Pooling Lower Hidden States Cosine Similarity Upper Hidden States Figure 3: KBQA baseline with the adapter. Shared Bi-LSTM is marked with the same color. The adapter maps task independent representations for each relation to the task specific ones, which are fed into the relation network. by a margin: Lrd = X max(0, γ−s(qf, r+ f )+s(qf, r− f )), (6) where γ is the margin; r+ f is the positive relation from the annotated training data; r− f is the relation negative sampled from the rest relations; s(·, ·) is the cosine distance between qf and rf. The basic relation detection model is pretrained to get the pseudo target representations. Then, the adapter is incorporated into the training process, and jointly optimized with the relation detection model. For the adversarial adapter, the generator and the discriminator are trained alternatively following the common practice. 4 SimpleQuestion-Balance (SQB) As mentioned before, SimpleQuestion (SQ) is a large-scale KBQA dataset. Each sample in SQ includes a human annotated question and the corresponding knowledge triple. However, the distribution of the relations in the test set is unbalanced. Most of the relations in the test set have been seen in the training data. To better evaluate the performance of unseen relation detection, we re-organize the SQ dataset to balance the number of seen and unseen relations in development and test sets, and the new dataset is denoted as SimpleQuestion-Balance (SQB). The re-organization is performed by randomly shuffle and split into 5 sets, i.e. Train, Dev-seen, Den-unseen, Test-seen and Test-unseen, while 6134 Datasets SQ SQB Train 75,910 75,819 Dev-seen 10,774 5,383 Dev-unseen 71 5,758 Test-seen 21,526 10,766 Test-unseen 161 10,717 Table 1: The number of instances in each subset from SimpleQuestion (SQ) and SimpleQuestionBalance (SQB) datasets. Dev-seen and Dev-unseen are seen and unseen part of development set; Test-seen and Test-unseen are seen and unseen part of test set, respectively. checking the overlapping of relations and the percentage of seen/unseen samples in each set. We require the sizes of the training, development and test sets are similar to SQ. The details of the resulting SQB and SQ are shown in Table 1. The SQ dataset only have 0.65% (71 / 10845) and 0.74% (161 / 21687) of the unseen samples in the dev set (Dev-unseen) and test set (Test-unseen), respectively. 5 Experiment 5.1 Settings Implementation Details We use RMProp (Tieleman and Hinton, 2012) as the optimization strategy to train the proposed adapter. The learning rate is set as 10−4. We set the batch size as 256. Following Arjovsky et al. 
(2017), we clip the parameters of discriminator into [−c, c], where c is 0.1. Dropout rate is set as 0.2 to regularize the adapter. The baseline relation detection model is almost same as Yu et al. (2017), except that the word embedding and relation embedding of our model are pre-trained by JointNRE (Han et al., 2018) on FB2M and Wikipedia , with the default settings reported in the Han et al. (2018). The embeddings are fine-tuned with the model. More specifically, the dimension of relation representation is 300. The dimension for the hidden state of Bi-LSTM is set to 256. Parameters in the neural models are initialized using a uniform sampling. The number of negative sampled relations is 256. The γ in hinge loss (Equation (6)) is set to 0.1. Evaluation To evaluate the performance of relation detection, we assume that the results of entity linking are correct. Two metrics are employed. Micro average accuracy (Tsoumakas et al., 2010) is the average accuracy of all samples, which is the metric used in previous work. Macro average accuracy (Sebastiani, 2002; Manning et al., 2008; Tsoumakas et al., 2010) is the average accuracy of the relations. Please note that because different relations may correspond to the different number of samples in the test set, the micro average accuracy may be affected by the distribution of unseen relations in the test set. In this case, the macro average accuracy will serve as an alternative indicator. We report the average and standard deviation (std) of 10-folds cross validation to avoid contingency. 5.2 Main Results Main results for baseline and the proposed model with the different settings are listed in Table 2. The detailed comparison is as follows: Baseline The baseline HR-BiLSTM (line 1) shows the best performance on Test-seen, but the performance is much worse on Test-unseen. For comparison, training the model without finetuning (line 2) achieves much better results on Test-unseen, demonstrating our motivation that the embeddings are the reason for the weak performance on unseen relations, and fine-tuning makes them worse. Using Adapters Line 3 shows the results of adding an extra mapping layer of neural networks between the pretrained embedding and the relation detection networks, without any loss. Although ideally, it is possible to learn the mapping implicitly with the training, in practice, this does not lead to a better result (line 3 v.s. line 2). While keeping similar performance on the Testseen with the HR-BiLSTM, all the models using the representation adapter achieve great improvement on the Test-unseen set. With the simplest form of adapter (line 4), the accuracy on Testunseen improves to 76.0% / 69.5%. It shows that our model can predict unseen relation with better accuracy. Using adversarial adapter (line 6) can further improve the performance on the Test-unseen in both micro and macro average scores. Using Reconstruction Loss Adding reconstruction loss to basic adapter can also improve the performance (line 5 v.s. line 4) slightly. 
The similar improvement is obtained for the adversarial 6135 # Model Micro / Macro Average Accuracy on SQB (%) Test-seen Test-unseen All 1 HR-BiLSTM 93.5±0.6 / 84.7±1.4 33.0±5.7 / 49.3±1.7 63.3±3.6 / 71.2±1.3 2 + no fine-tune 93.4±0.7 / 83.8±0.7 57.8±9.8 / 60.8±2.0 75.6±5.0 / 75.0±0.6 3 + no fine-tune + mapping 93.3±0.7 / 84.0±1.6 52.0±7.2 / 60.6±2.1 72.7±3.8 / 75.1±1.3 4 + Basic-Adapter 92.8±0.7 / 84.1±1.2 76.0±7.5† / 69.5±2.0† 84.5±3.5 / 78.5±1.3 5 + reconstruction 93.0±0.5 / 84.4±0.8 76.1±7.0† / 70.7±1.8† 84.6±3.3 / 79.2±0.8 6 + Adversarial-Adapter 92.6±0.9 / 86.4±1.4 77.1±7.1† / 73.2±2.1† 84.9±3.2 / 81.4±1.4 7 + reconstruction [Final] 92.4±0.8 / 86.1±0.7 77.3±7.6† / 73.0±1.7† 84.9±3.5 / 81.1±0.8 Table 2: The micro average accuracy and macro average accuracy of relation detection on the SQB dataset. “†” indicates statistically significant difference (p < 0.01) from the HR-BiLSTM. adapter in micro average accuracy (line 7 v.s. line 6). Finally, using all the techniques together (line 7) gets the score of 77.3% / 73.0% on Test-unseen, and 84.9% / 81.1% on the union of Test-seen and Test-unseen in micro/macro average accuracy, respectively. We mainly use this model as our final model for further comparison and analysis. We notice that the results of our model on Testseen are slightly lower than that of HR-BiLSTM. It is because we use the mapped representations for the seen relations instead of the directly finetuned representations. This dropping is negligible compared with the improvement in the unseen relations. Integration to the KBQA To confirm the influence of unseen relation detection for the entire KBQA, we integrate our relation detection model into a prototype KBQA framework. During the entity linking step, we use FocusPrune (Dai et al., 2016) to get the mention of questions. Then, the candidate mentions are linked to the entities in the knowledge base. Because the FreeBase API was deprecated 2, we restrict the entity linking to an exact match for simplicity. The candidate relations are the set of relations linked with candidate subjects. We evaluate the KBQA results using the micro average accuracy introduced in Bordes et al. (2015), which considers the prediction as correct if both the subject and relation are correct. As shown in Table 3, the proposed adapter method can improve KBQA from 48.5% to 63.7%. Comparing with the result of relation detection, we find that the boost of relation detection could indeed lead to the improvement of a KBQA system. 2https://developers.google.com/ freebase/ Model Accuracy (%) HR-BiLSTM 48.5±3.3 + no fine-tune 56.4±3.4 Final 63.7±3.2 Table 3: The micro average accuracy of the whole KBQA system with different relation detection models. Model Seen Rate ↓(%) HR-BiLSTM 47.2±2.0 + no fine-tune 34.8±2.3 Final 21.2±1.7 Table 4: Seen relation prediction rate in the Testunseen set. We calculate the macro average of this rate. 5.3 Analysis Seen Relation Bias We use macro-average to calculate the percentage of instances whose relations are wrongly predicted to be a seen relation on Test-unseen. We call this indicator the seen rate, the lower the better. Because the seen relations are better learned after fine-tuning, while the representations for unseen relations are not updated well. So the relation detection model may have a strong trend to select those seen relations as the answer. The result in Table 4 shows that our adapter makes the trend of choosing seen relation weaker, which helps to promote a fair choice between seen and unseen relations. 
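Since micro and macro accuracy differ only in how per-question correctness is aggregated, and the seen rate is a macro-averaged error pattern on Test-unseen, all three can be computed in a few lines. The following is an illustrative sketch, not the authors' evaluation script.

```python
from collections import defaultdict

def micro_macro_accuracy(gold, pred):
    """gold, pred: parallel lists of relation ids, one entry per test question."""
    micro = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    correct, total = defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        total[g] += 1
        correct[g] += int(g == p)
    # Macro: average per-relation accuracies, so rare relations weigh as much as frequent ones.
    macro = sum(correct[r] / total[r] for r in total) / len(total)
    return micro, macro

def seen_rate(gold_unseen, pred, seen_relations):
    """Macro-averaged fraction of Test-unseen questions predicted as some seen relation."""
    drawn_to_seen = defaultdict(list)
    for g, p in zip(gold_unseen, pred):
        drawn_to_seen[g].append(p in seen_relations)
    return sum(sum(flags) / len(flags) for flags in drawn_to_seen.values()) / len(drawn_to_seen)
```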
Influence of Number of Relations for Training We discuss the influence of the number of relations in the training set for our adapter. Our adapter are trained mainly by the seen relations, because we can get pseudo target representation for these relations. In this experiment, we random sample 60,000 samples from the training set to perform the training, and plot the accuracy against the different number of relations for training. We report the macro average accuracy on Test-unseen. 6136 Figure 4: Macro average accuracy for different relation size in the training set. (a) JointNRE (b) HR-BiLSTM (c) Final (d) JointNRE* (e) HR-BiLSTM* (f) Final* Figure 5: Relation Representation Visualization of different models. The yellow (light) point represent the seen relation, and the blue (dark) point represent the unseen relation. As shown in Figure 4, with different number of relations, our model still perform better than HR-BiLSTM. Note that, our adapter can beat HRBiLSTM with even a smaller number of seen relations. When there are more relations for training, the performance will be improved as expected. Relation Representation Analysis We visualize the relation representation in JointNRE, HRBiLSTM and the output representation of our final adapter by principal component analysis (PCA) with the help of TensorBoard. We use the yellow (light) point represents the seen relation, and the blue (dark) point represents the unseen relation. As shown in Figure 5a), the JointNRE representation is pre-trained by the interaction between knowledge graph and text. Because without knowing the relation detection tasks, seen and unseen relations are randomly distributed. 3 3We also notice that there is a big cluster of relations on the left hand side. This is presumably the set of less updated Model Accuracy Final 77.3±7.6 / 73.0±1.7 Final* 77.5±6.0 / 72.4±1.8 Table 5: Results on Test-unseen with and without the adapter in training JointNRE. After training with HR-BiLSTM (Figure 5b), the seen and unseen relations are easily separated, because the training objective is to discriminate the seen relations from the other relations for the corresponding question. Although the embeddings of unseen relations are also updated due to negative sampling, they are never updated towards their correct position in the embedding space. As a result, the relation detection accuracy for the unseen relations is poor. The training of our final model uses the adapter to fit the training data, instead of directly updating the embeddings. Despite the comparable performance on seen relations, the distribution of seen and unseen relations (Figure 5c) is much similar to the original JointNRE, which is the core reason for its ability to obtain better results on unseen relations. Adapting JointNRE Interestingly, we notice that JointNRE is to train the embedding of relations with a corpus of text that may not cover all the relations, which is also a process that needs the adapter. As a simple solution, we use a similar adapter to adapt the representation from TransE 4 (Lin et al., 2015) to the training of JointNRE. With the resulting relation embedding, denoted as JointNRE*, we train the baseline and final relation detection models, denoted as HRBiLSTM* and Final*, respectively. We visualize the relation representation in these models again. Clearly, the distribution of seen and unseen relations in JointNRE* (Figure 5d) looks more reasonable than before. 
This distribution is interrupted by fine-tuning process of HRBiLSTM* (Figure 5e), while is retained by our adapter model (Figure 5f). Furthermore, as shown in Table 5, using JointNRE* can further improve the unseen relation detection performance (77.5% v.s. 77.3%). This provides further evidence of the importance of reprerelations in the training of JointNRE, due to lack of correspondence with the text data. This cluster does not affect our main observation with adapter training. 4https://github.com/thunlp/Fast-TransX 6137 Question 1 who produced recording Twenty One Candidate Relations music.recording.producer music.recording.artist HR-BiLSTM music.recording.artist Final music.recording.producer Question 2 what is Tetsuo Ichikawa’s profession Candidate Relations people.person.gender people.person.profession HR-BiLSTM people.person.profession Final people.person.profession Question 3 which village is in Arenac county ? Candidate Relations location.us county.hud county place location.location.contains HR-BiLSTM location.us county.hud county place Final location.us county.hud county place Table 6: Case studies for relation detection using different models. For each question, the gold relation is marked with bold font; the gold target entity of the question is marked with italic font. The models and notations are the same as in Table 2. sentations for unseen relations. Case Study In the first case of Table 6, Twenty One is the subject of question. “music.recording.producer” is the gold relation, but it is an unseen relation. The baseline model predicts “music.recording.artist” because this relation is seen and perhaps relevant in the training set. A dig into the set of relations shown that there is a seen relation, “music.recording.engineer”, which happens to be the closest relation in the mapped representation to the gold relation. It is possible that the knowledge graph embedding is able to capture the relatedness between the two relations. In the second case, although the gold relation “people.person.profession” is unseen, both baseline and our model predict the correct answer because of strong lexical evidences: “profession”. In the last case, both the gold relation and predict error relation are unseen relation. “Hud county place” refers to the name of a town in a county, but “location.location.contains” has a broader meaning. When asked about “village”, “location.location.contains” is more appropriate. This case shows that our model still can not process the minor semantic difference between word. We will leave it for future work. 6 Related Work Relation Detection in KBQA Yu et al. (2017) first noticed the zero-shot problem in KBQA relation detection. They split relation into word sequences and use it as a part of the relation representation. In this paper, we push this line further and present the first in-depth discussion about this zero-shot problem. We propose the first relationlevel solution and present a re-organized dataset for evaluation as well. Embedding Mapping Our main idea of embedding mapping is inspired by previous work about learning the mapping of bilingual word embedding. Mikolov et al. (2013) observed the linear relation of bilingual word embedding, and used a small starting dictionary to learn this mapping. Zhang et al. (2017a) use Generative Adversarial Nets (Goodfellow et al., 2014) to learn the mapping of bilingual word embedding in an unsupervised manner. 
Different from this work which maps words in different languages, we perform mappings between representations generated from heterogeneous data, i.e., knowledge base and question-triple pairs. Zero-Shot Learning Zero-shot learning has been studied in the area of natural language process. Hamaguchi et al. (2017) use a neighborhood knowledge graph as a bridge between out of knowledge base entities to train the knowledge graph. Levy et al. (2017) connect nature language question with relation query to tackle zero shot relation extraction problem. Elsahar et al. (2018) extend the copy actions (Luong et al., 2015) to solve the rare words problem in text generation. Some attempts have been made to build machine translation systems for language pairs without direct parallel data, where they relying on one or more other languages as the pivot (Firat et al., 2016; Ha et al., 2016; Chen et al., 2017). In this paper, we use knowledge graph embedding as a bridge between seen and unseen relations, which shares the same spirit with previous work. However, less study has been done in relation detection. 7 Conclusion In this paper, we discuss unseen relation detection in KBQA, where the main problem lies in the learning of representations. We re-organize the SimpleQuestion dataset as SimpleQuestionBalance to reveal and evaluate the problem, and propose an adapter which significantly improves the results. We emphasize that for any other tasks which contain a large number of unseen samples, train6138 ing, fine-tuning the model according to the performance on the seen samples alone is not fair. Similar problems may exist in other NLP tasks, which will be interesting to investigate in the future. Acknowledgement We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. This work is supported by the National Science Foundation of China (No. 61772261), the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074). Part of this research is supported by the Huawei Innovation Research Program (HO2018085291). References Martin Arjovsky, Soumith Chintala, and L´eon Bottou. 2017. Wasserstein gan. In NIPS. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. JMLR. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD Conference. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In EMNLP. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. CoRR. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In NIPS. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for neverending language learning. In AAAI. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zeroresource neural machine translation. In ACL. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In ICLR. Zihang Dai, Lei Li, and Wei Xu. 2016. 
Cfo: Conditional focused neural question answering with largescale knowledge bases. In ACL. Hady Elsahar, Christophe Gravier, and Frederique Laforest. 2018. Zero-shot question generation from knowledge graphs for unseen predicates and entity types. In NAACL. Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T Yarman Vural, and Kyunghyun Cho. 2016. Zero-resource translation with multi-lingual neural machine translation. In EMNLP. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. CoRR. Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach. In IJCAI. Xu Han, Zhiyuan Liu, and Maosong Sun. 2018. Neural knowledge acquisition via mutual attention between knowledge graph and text. In AAAI. Yanchao Hao, Hao Liu, Shizhu He, Kang Liu, and Jun Zhao. 2018. Pattern-revising enhanced simple question answering over knowledge bases. In COLING. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, S¨oren Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In CoNLL. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In AAAI. Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In ACL. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR. Mark Palatucci, Dean Pomerleau, Geoffrey E Hinton, and Tom M Mitchell. 2009. Zero-shot learning with semantic output codes. In NIPS. 6139 Michael Petrochuk and Luke Zettlemoyer. 2018. Simplequestions nearly solved: A new upperbound and baseline approach. In EMNLP. Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Comput. Surv. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning. Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2010. Mining multi-label data. In Data Mining and Knowledge Discovery Handbook, 2nd ed. Springer. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In AAAI. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In ACL. Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich Sch¨utze. 2016. Simple question answering by attentive convolutional neural network. In COLING. Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In ACL. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. 
Adversarial training for unsupervised bilingual lexicon induction. In ACL. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In EMNLP.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6140–6150 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6140 Dynamically Fused Graph Network for Multi-hop Reasoning Lin Qiu1† Yunxuan Xiao1† Yanru Qu1† Hao Zhou2 Lei Li2 Weinan Zhang1 Yong Yu1 1 Shanghai Jiao Tong University 2 ByteDance AI Lab, China {lqiu, kevinqu, yyu}@apex.sjtu.edu.cn {xiaoyunxuan, wnzhang}@sjtu.edu.cn {zhouhao.nlp, lilei.02}@bytedance.com Abstract Text-based question answering (TBQA) has been studied extensively in recent years. Most existing approaches focus on finding the answer to a question within a single paragraph. However, many difficult questions require multiple supporting evidence from scattered text across two or more documents. In this paper, we propose the Dynamically Fused Graph Network (DFGN), a novel method to answer those questions requiring multiple scattered evidence and reasoning over them. Inspired by human’s step-by-step reasoning behavior, DFGN includes a dynamic fusion layer that starts from the entities mentioned in the given query, explores along the entity graph dynamically built from the text, and gradually finds relevant supporting entities from the given documents. We evaluate DFGN on HotpotQA, a public TBQA dataset requiring multi-hop reasoning. DFGN achieves competitive results on the public board. Furthermore, our analysis shows DFGN could produce interpretable reasoning chains. 1 Introduction Question answering (QA) has been a popular topic in natural language processing. QA provides a quantifiable way to evaluate an NLP system’s capability on language understanding and reasoning (Hermann et al., 2015; Rajpurkar et al., 2016, 2018). Most previous work focus on finding evidence and answers from a single paragraph (Seo et al., 2016; Liu et al., 2017; Wang et al., 2017). It rarely tests deep reasoning capabilities of the underlying model. In fact, Min et al. (2018) observe that most questions in existing QA benchmarks can be answered by retrieving †These authors contributed equally. The order of authorship is decided through dice rolling. Work done while Lin Qiu was a research intern in ByteDance AI Lab. The Sum of All Fears is a best-selling thriller novel by Tom Clancy ... It was the fourth of Clancy's Jack Ryan books to be turned into a film ... Dr. John Patrick Jack Ryan Sr., KCVO (Hon.), Ph.D. is a fictional character created by Tom Clancy who appears in many of his novels and their respective film adaptations ... Net Force Explorers is a series of young adult novels created by Tom Clancy and Steve Pieczenik as a spin-off of the military fiction series ... Question: What fiction character created by Tom Clancy was turned into a film in 2002? Answer: Jack Ryan Input Paragraphs: Original Entity Graph Second Mask Applied First Mask Applied Figure 1: Example of multi-hop text-based QA. One question and three document paragraphs are given. Our proposed DFGN conducts multi-step reasoning over the facts by constructing an entity graph from multiple paragraphs, predicting a dynamic mask to select a subgraph, propagating information along the graph, and finally transfer the information from the graph back to the text in order to localize the answer. Nodes are entity occurrences, with the color denoting the underlying entity. Edges are constructed from co-occurrences. The gray circles are selected by DFGN in each step. a small set of sentences without reasoning. 
To address this issue, there are several recently proposed QA datasets particularly designed to evaluate a system’s multi-hop reasoning capabilities, including WikiHop (Welbl et al., 2018), ComplexWebQuestions (Talmor and Berant, 2018), and HotpotQA (Yang et al., 2018). In this paper, we study the problem of multi-hop text-based QA, which requires multi-hop reasoning among evidence scattered around multiple raw documents. In particular, a query utterance and a set of accompanying documents are given, but not 6141 all of them are relevant. The answer can only be obtained by selecting two or more evidence from the documents and inferring among them (see Figure 1 for an example). This setup is versatile and does not rely on any additional predefined knowledge base. Therefore the models are expected to generalize well and to answer questions in open domains. There are two main challenges to answer questions of this kind. Firstly, since not every document contain relevant information, multi-hop textbased QA requires filtering out noises from multiple paragraphs and extracting useful information. To address this, recent studies propose to build entity graphs from input paragraphs and apply graph neural networks (GNNs) to aggregate the information through entity graphs (Dhingra et al., 2018; De Cao et al., 2018; Song et al., 2018a). However, all of the existing work apply GNNs based on a static global entity graph of each QA pair, which can be considered as performing implicit reasoning. Instead of them, we argue that the queryguided multi-hop reasoning should be explicitly performed on a dynamic local entity graph tailored according to the query. Secondly, previous work on multi-hop QA (e.g. WikiHop) usually aggregates document information to an entity graph, and answers are then directly selected on entities of the entity graph. However, in a more realistic setting, the answers may even not reside in entities of the extracted entity graph. Thus, existing approaches can hardly be directly applied to open-domain multi-hop QA tasks like HotpotQA. In this paper, we propose Dynamically Fused Graph Network (DFGN), a novel method to address the aforementioned concerns for multi-hop text-based QA. For the first challenge, DFGN constructs a dynamic entity graph based on entity mentions in the query and documents. This process iterates in multiple rounds to achieve multihop reasoning. In each round, DFGN generates and reasons on a dynamic graph, where irrelevant entities are masked out while only reasoning sources are preserved, via a mask prediction module. Figure 1 shows how DFGN works on a multi-hop text-based QA example in HotpotQA. The mask prediction module is learned in an endto-end fashion, alleviating the error propagation problem. To solve the second challenge, we propose a fusion process in DFGN to solve the unrestricted QA challenge. We not only aggregate information from documents to the entity graph (doc2graph), but also propagate the information of the entity graph back to document representations (graph2doc). The fusion process is iteratively performed at each hop through the document tokens and entities, and the final resulting answer is then obtained from document tokens. The fusion process of doc2graph and graph2doc along with the dynamic entity graph jointly improve the interaction between the information of documents and the entity graph, leading to a less noisy entity graph and thus more accurate answers. 
As one merit, DFGN’s predicted masks implicitly induce reasoning chains, which can explain the reasoning results. Since the ground truth reasoning chain is very hard to define and label for open-domain corpus, we propose a feasible way to weakly supervise the mask learning. We propose a new metric to evaluate the quality of predicted reasoning chains and constructed entity graphs. Our contributions are summarized as follows: • We propose DFGN, a novel method for the multi-hop text-based QA problem. • We provide a way to explain and evaluate the reasoning chains via interpreting the entity graph masks predicted by DFGN. The mask prediction module is additionally weakly trained. • We provide an experimental study on a public dataset (HotpotQA) to demonstrate that our proposed DFGN is competitive against stateof-the-art unpublished work. 2 Related work Text-based Question Answering Depending on whether the supporting information is structured or not, QA tasks can be categorized into knowledge-based (KBQA), text-based (TBQA), mixed, and others. In KBQA, the supporting information is from structured knowledge bases (KBs), while the queries can be either structure or natural language utterances. For example, SimpleQuestions is one large scale dataset of this kind (Bordes et al., 2015). In contrast, TBQA’s supporting information is raw text, and hence the query is also text. SQuAD (Rajpurkar et al., 2016) and HotpotQA (Yang et al., 2018) are two such datasets. There are also mixed QA tasks which combine both text and KBs, e.g. WikiHop (Welbl 6142 Paragraph 1: Australia at the 2012 Winter Youth Olympics Australia competed at the 2012 Winter Youth Olympics in Innsbruck. The chef de mission of the team will be former Olympic champion Alisa Camplin, the first time a woman is the chef de mission of any Australian Olympic team. The Australian team will consist of 13 athletes in 8 sports. Paragraph 2: Alisa Camplin Alisa Peta Camplin OAM (born 10 November 1974) is an Australian aerial skier who won gold at the 2002 Winter Olympics, the second ever winter Olympic gold medal for Australia. At the 2006 Winter Olympics, Camplin finished third to receive a bronze medal. She is the first Australian skier to win medals at consecutive Winter Olympics, making her one of Australia's best skiers. Distractor Paragraphs 3 - 10 ... Q: The first woman to be the chef de mission of an Australian Olympic team won gold medal in which winter Olympics ? A: 2002 Winter Olympics The Hanging Gardens, in Mumbai, also known as Pherozeshah Mehta Gardens, are terraced gardens ? They provide sunset views over the Arabian Sea. Mumbai (also known as Bombay, the official name until 1995) is the capital city of the Indian state of Maharashtra. It is the most populous city in India ? The Arabian Sea is a region of the northern Indian Ocean bounded on the north by Pakistan and Iran, on the west by northeastern Somalia and the Arabian Peninsula, and on the east by India ? Q: (Hanging gardens of Mumbai, country, ?) Options: {Iran, India, Pakistan, Somalia, ? } A: India HotpotQA WikiHop Figure 2: Comparison between HotpotQA (left) and WikiHop (right). In HotpotQA, the questions are proposed by crowd workers and the blue words in paragraphs are labeled supporting facts corresponding to the question. In WikiHop, the questions and answers are formed with relations and entities in the underlying KB respectively, thus the questions are inherently restricted by the KB schema. The colored words and phrases are entities in the KB. 
et al., 2018) and ComplexWebQuestions (Talmor and Berant, 2018). In this paper, we focus on TBQA, since TBQA tests a system’s end-to-end capability of extracting relevant facts from raw language and reasoning about them. Depending on the complexity in underlying reasoning, QA problems can be categorized into single-hop and multi-hop ones. Single-hop QA only requires one fact extracted from the underlying information, no matter structured or unstructured, e.g. “which city is the capital of California”. The SQuAD dataset belongs to this type (Rajpurkar et al., 2016). On the contrary, multi-hop QA requires identifying multiple related facts and reasoning about them, e.g. “what is the capital city of the largest state in the U.S.”. Example tasks and benchmarks of this kind include WikiHop, ComplexWebQuestions, and HotpotQA. Many IR techniques can be applied to answer single-hop questions (Rajpurkar et al., 2016). However, these IR techniques are hardly introduced in multi-hop QA, since a single fact can only partially match a question. Note that existing multi-hop QA datasets WikiHop and ComplexWebQuestions, are constructed using existing KBs and constrained by the schema of the KBs they use. For example, the answers are limited in entities in WikiHop rather than formed by free texts in HotpotQA (see Figure 2 for an example). In this work, we focus on multi-hop textbased QA, so we only evaluate on HotpotQA. Multi-hop Reasoning for QA Popular GNN frameworks, e.g. graph convolution network (Kipf and Welling, 2017), graph attention network (Veliˇckovi´c et al., 2018), and graph recurrent network (Song et al., 2018b), have been previously studied and show promising results in QA tasks requiring reasoning (Dhingra et al., 2018; De Cao et al., 2018; Song et al., 2018a). Coref-GRN extracts and aggregates entity information in different references from scattered paragraphs (Dhingra et al., 2018). Coref-GRN utilizes co-reference resolution to detect different mentions of the same entity. These mentions are combined with a graph recurrent neural network (GRN) (Song et al., 2018b) to produce aggregated entity representations. MHQA-GRN (Song et al., 2018a) follows Coref-GRN and refines the graph construction procedure with more connections: sliding-window, same entity, and co-reference, which shows further improvements. Entity-GCN (De Cao et al., 2018) proposes to distinguish different relations in the graphs through a relational graph convolutional neural network (GCN) (Kipf and Welling, 2017). Coref-GRN, MHQA-GRN and Entity-GCN explore the graph construction problem in answering real-world questions. However, it is yet to investigate how to effectively reason about the constructed graphs, which is the main problem studied in this work. Another group of sequential models deals with multi-hop reasoning following Memory Networks (Sukhbaatar et al., 2015). Such models construct representations for queries and memory cells for contexts, then make interactions between them in a multi-hop manner. Munkhdalai and Yu (2017) 6143 and Onishi et al. (2016) incorporate a hypothesis testing loop to update the query representation at each reasoning step and select the best answer among the candidate entities at the last step. IRNet (Zhou et al., 2018) generates a subject state and a relation state at each step, computing the similarity score between all the entities and relations given by the dataset KB. The ones with the highest score at each time step are linked together to form an interpretable reasoning chain. 
However, these models perform reasoning on simple synthetic datasets with a limited number of entities and relations, which are quite different with largescale QA dataset with complex questions. Also, the supervision of entity-level reasoning chains in synthetic datasets can be easily given following some patterns while they are not available in HotpotQA. 3 Dynamically Fused Graph Network We describe dynamically fused graph network (DFGN) in this section. Our intuition is drawn from the human reasoning process for QA. One starts from an entity of interest in the query, focuses on the words surrounding the start entities, connects to some related entity either found in the neighborhood or linked by the same surface mention, repeats the step to form a reasoning chain, and lands on some entity or snippets likely to be the answer. To mimic human reasoning behavior, we develop five components in our proposed QA system (Fig. 3): a paragraph selection subnetwork, a module for entity graph construction, an encoding layer, a fusion block for multi-hop reasoning, and a final prediction layer. 3.1 Paragraph Selection For each question, we assume that Np paragraphs are given (e.g. Np = 10 in HotpotQA). Since not every piece of text is relevant to the question, we train a sub-network to select relevant paragraphs. The sub-network is based on a pre-trained BERT model (Devlin et al., 2018) followed by a sentence classification layer with sigmoid prediction. The selector network takes a query Q and a paragraph as input and outputs a relevance score between 0 and 1. Training labels are constructed by assigning 1’s to the paragraphs with at least one supporting sentence for each Q&A pair. During inference, paragraphs with predicted scores greater than η (= 0.1 in experiments) are selected and concateEncoder Input Documents Input Query Context Entity Graph Fusion Block LSTM Prediction Layer Paragraph Selector Graph Constructor BERT Bi-attention Supporting Sentences Answer Span Answer Type multi-hop Figure 3: Overview of DFGN. nated together as the context C. η is properly chosen to ensure the selector reaches a significantly high recall of relevant paragraphs. Q and C are further processed by upper layers. 3.2 Constructing Entity Graph We do not assume a global knowledge base. Instead, we use the Stanford corenlp toolkit (Manning et al., 2014) to recognize named entities from the context C. The number of extracted entities is denoted as N. The entity graph is constructed with the entities as nodes and edges built as follows. The edges are added 1. for every pair of entities appear in the same sentence in C (sentencelevel links); 2. for every pair of entities with the same mention text in C (context-level links); and 3. between a central entity node and other entities within the same paragraph (paragraph-level links). The central entities are extracted from the title sentence for each paragraph. Notice the context-level links ensures that entities across multiple documents are connected in a certain way. We do not apply co-reference resolution for pronouns because it introduces both additional useful and erroneous links. 3.3 Encoding Query and Context We concatenate the query Q with the context C and pass the resulting sequence to a pre-trained BERT model to obtain representations Q = [q1, . . . , qL] ∈RL×d1 and C⊤= [c1, . . . , cM] ∈ RM×d1, where L,M are lengths of query and context, and d1 is the size of BERT hidden states. 
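Since the entity-graph construction of Section 3.2 is purely rule-based, it can be illustrated without any model code before turning to the encoder details. The sketch below is a simplified reading of the three edge types, not the released implementation: entity mentions are assumed to be given as (text, paragraph id, sentence id) tuples (e.g. taken from the CoreNLP NER output), and the central entity of a paragraph is approximated as the first mention found in its title sentence (sentence 0).

```python
from collections import namedtuple
from itertools import combinations

# Hypothetical lightweight view of a recognized entity mention.
Entity = namedtuple("Entity", ["text", "para_id", "sent_id"])

def build_entity_graph(entities):
    """Build the three DFGN edge types: sentence-level co-occurrence,
    context-level (same mention text), and paragraph-level links to a
    central (title) entity."""
    edges = set()
    for i, j in combinations(range(len(entities)), 2):
        ei, ej = entities[i], entities[j]
        same_sentence = ei.para_id == ej.para_id and ei.sent_id == ej.sent_id
        same_mention = ei.text.lower() == ej.text.lower()
        if same_sentence or same_mention:
            edges.add((i, j))
    # Central entities: approximated here as the first mention in sentence 0.
    central = {}
    for idx, e in enumerate(entities):
        if e.sent_id == 0 and e.para_id not in central:
            central[e.para_id] = idx
    for idx, e in enumerate(entities):
        c = central.get(e.para_id)
        if c is not None and c != idx:
            edges.add(tuple(sorted((c, idx))))
    return edges

if __name__ == "__main__":
    mentions = [Entity("Tom Clancy", 0, 0), Entity("Jack Ryan", 0, 1),
                Entity("Tom Clancy", 1, 0), Entity("Jack Ryan", 1, 2)]
    print(sorted(build_entity_graph(mentions)))
```

Note that it is the context-level (same-mention) edges in this construction that connect entities across paragraphs and thereby make cross-document hops possible.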
In experiments, we find concatenating queries and 6144 contexts performs better than passing them separately to BERT. The representations are further passed through a bi-attention layer (Seo et al., 2016) to enhance cross interactions between the query and the context. In practice, we find adding the bi-attention layer achieves better performance than the BERT encoding only. The output representation are Q0 ∈RL×d2 and C0 ∈RM×d2, where d2 is the output embedding size. 3.4 Reasoning with the Fusion Block With the embeddings calculated for the query Q and context C, the remaining challenge is how to identify supporting entities and the text span of potential answers. We propose a fusion block to mimic human’s one-step reasoning behavior – starting from Q0 and C0 and finding one-step supporting entities. A fusion block achieves the following: 1. passing information from tokens to entities by computing entity embeddings from tokens (Doc2Graph flow); 2. propagating information on entity graph; and 3. passing information from entity graph to document tokens since the final prediction is on tokens (Graph2Doc flow). Fig. 4 depicts the inside structure of the fusion block in DFGN. Document to Graph Flow. Since each entity is recognized via the NER tool, the text spans associated with the entities are utilized to compute entity embeddings (Doc2Graph). To this end, we construct a binary matrix M, where Mi,j is 1 if i-th token in the context is within the span of the j-th entity. M is used to select the text span associated with an entity. The token embeddings calculated from the above section (which is a matrix containing only selected columns of Ct−1) is passed into a mean-max pooling to calculate entity embeddings Et−1 = [et−1,1, . . . , et−1,N]. Et−1 will be of size 2d2×N, where N is the number of entities, and each of the 2d2 dimensions will produce both mean-pooling and max-pooling results. This module is denoted as Tok2Ent. Dynamic Graph Attention. After obtaining entity embeddings from the input context Ct−1, we apply a graph neural network to propagate node information to their neighbors. We propose a dynamic graph attention mechanism to mimic human’s step-by-step exploring and reasoning behavior. In each reasoning step, we assume every node has some information to disseminate to M MeanPool Soft Mask Context Ct-1 Query Qt-1 Dynamic Graph Attention M Entity Et Entity Graph G Doc2Graph Graph2Doc Bi-Attention Entity Et-1 Context Ct Query Qt ... ... ... ... ... ... ... Query Update Figure 4: Reasoning with the fusion block in DFGN neighbors. The more relevant to the query, the neighbor nodes receive more information from nearby. We first identify nodes relevant to the query by creating a soft mask on entities. It serves as an information gatekeeper, i.e. only those entity nodes pertaining to the query are allowed to disseminate information. We use an attention network between the query embeddings and the entity embeddings to predict a soft mask mt, which aims to signify the start entities in the t-th reasoning step: ˜q(t−1) = MeanPooling(Q(t−1)) (1) γ(t) i = ˜q(t−1)V(t)e(t−1) i / p d2 (2) m(t) = σ([γ(t) 1 , · · · , γ(t) N ]) (3) ˜E(t−1) = [m(t) 1 e(t−1) 1 , . . . , m(t) N e(t−1) N ] (4) where Vt is a linear projection matrix, and σ is the sigmoid function. By multiplying the soft mask and the initial entity embeddings, the desired start entities will be encouraged and others will be penalized. 
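A minimal PyTorch sketch of the Tok2Ent pooling and the query-guided soft mask of Eqs. (1)-(4) is given below. Tensor shapes follow the notation above, but the variable names and the orientation of the projection matrix V (assumed here to map the d2-dimensional query onto the 2*d2-dimensional entity space) are our own, not taken from the released code.

```python
import torch

def tok2ent(context, span_mask):
    """Mean-max pooling over the token spans of each entity (Tok2Ent).

    context:   [M, d2]  token embeddings C_{t-1}
    span_mask: [M, N]   binary matrix; span_mask[i, j] = 1 if token i lies in entity j
    returns:   [N, 2*d2] entity embeddings E_{t-1}
    """
    expanded = context.unsqueeze(1) * span_mask.unsqueeze(-1)      # [M, N, d2]
    counts = span_mask.sum(dim=0).clamp(min=1).unsqueeze(-1)       # [N, 1]
    mean_pool = expanded.sum(dim=0) / counts                       # [N, d2]
    neg_inf = torch.finfo(context.dtype).min
    masked = expanded.masked_fill(span_mask.unsqueeze(-1) == 0, neg_inf)
    max_pool = masked.max(dim=0).values                            # [N, d2]
    return torch.cat([mean_pool, max_pool], dim=-1)                # [N, 2*d2]

def soft_entity_mask(query, entities, V):
    """Eqs. (1)-(4): soft mask from attention between query and entities.

    query:    [L, d2]      query embeddings Q^{(t-1)}
    entities: [N, 2*d2]    entity embeddings E^{(t-1)}
    V:        [d2, 2*d2]   projection matrix (stored transposed w.r.t. the paper)
    """
    d2 = query.shape[-1]
    q_tilde = query.mean(dim=0)                                    # Eq. (1)
    gamma = (q_tilde @ V @ entities.t()) / d2 ** 0.5               # Eq. (2)
    mask = torch.sigmoid(gamma)                                    # Eq. (3)
    return mask.unsqueeze(-1) * entities, mask                     # Eq. (4)
```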
As a result, this step of information propagation is restricted to a dynamic sub-part of the entity graph. The next step is to disseminate information across the dynamic sub-graph. Inspired by GAT (Veliˇckovi´c et al., 2018), we compute attention score α between two entities by: h(t) i = Ut˜e(t−1) i + bt (5) β(t) i,j = LeakyReLU(W⊤ t [h(t) i , h(t) j ]) (6) α(t) i,j = exp(β(t) i,j ) P k exp(β(t) i,k) (7) where Ut ∈Rd2×2d2, Wt ∈R2d2 are linear projection parameters. Here the i-th row of α rep6145 resents the proportion of information that will be assigned to the neighbors of entity i. Note that the information flow in our model is different from most previous GATs. In dynamic graph attention, each node sums over its column, which forms a new entity state containing the total information it received from the neighbors: e(t) i = ReLU( X j∈Bi α(t) j,ih(t) j ) (8) where Bi is the set of neighbors of entity i. Then we obtain the updated entity embeddings E(t) = [e(t) 1 , . . . , e(t) N ]. Updating Query. A reasoning chain contains multiple steps, and the newly visited entities by one step will be the start entities of the next step. In order to predict the expected start entities for the next step, we introduce a query update mechanism, where the query embeddings are updated by the entity embeddings of the current step. In our implementation, we utilize a bi-attention network (Seo et al., 2016) to update the query embeddings: Q(t) = Bi-Attention(Q(t−1), E(t)) (9) Graph to Document Flow. Using Tok2Ent and dynamic graph attention, we realize a reasoning step at the entity level. However, the unrestricted answer still cannot be backtraced. To address this, we develop a Graph2Doc module to keep information flowing from entity back to tokens in the context. Therefore the text span pertaining to the answers can be localized in the context. Using the same binary matrix M as described above, the previous token embeddings in Ct−1 are concatenated with the associated entity embedding corresponding to the token. Each row in M corresponds to one token, therefore we use it to select one entity’s embedding from Et if the token participates in the entity’s mention. This information is further processed with a LSTM layer (Hochreiter and Schmidhuber, 1997) to produce the nextlevel context representation: C(t) = LSTM([C(t−1), ME(t)⊤]) (10) where ; refers to concatenation and C(t) ∈RM×d2 serves as the input of the next fusion block. At this time, the reasoning information of current subgraph has been propagated onto the whole context. 3.5 Prediction We follow the same structure of prediction layers as (Yang et al., 2018). The framework has four output dimensions, including 1. supporting sentences, 2. the start position of the answer, 3. the end position of the answer, and 4. the answer type. We use a cascade structure to solve the output dependency, where four isomorphic LSTMs Fi are stacked layer by layer. The context representation of the last fusion block is sent to the first LSTM F0. Each Fi outputs a logit O ∈RM×d2 and computes a cross entropy loss over these logits. Osup = F0(C(t)) (11) Ostart = F1([C(t), Osup]) (12) Oend = F2([C(t), Osup, Ostart]) (13) Otype = F3([C(t), Osup, Oend]) (14) We jointly optimize these four cross entropy losses. Each loss term is weighted by a coefficient. L = Lstart + Lend + λsLsup + λtLtype (15) Weak Supervision. In addition, we introduce a weakly supervised signal to induce the soft masks at each fusion block to match the heuristic masks. 
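Before turning to the heuristic masks used for weak supervision, a compact sketch of one propagation step of the dynamic graph attention defined in Eqs. (5)-(8) above may be helpful. Parameter shapes are stated in the docstring, the projection U is stored transposed relative to the paper's U_t, and the handling of isolated nodes via nan_to_num is our addition rather than something specified in the paper.

```python
import torch
import torch.nn.functional as F

def dynamic_graph_attention(ent, adj, U, b, w):
    """One propagation step over the dynamic sub-graph, Eqs. (5)-(8).

    ent: [N, 2*d2]  masked entity embeddings E~^{(t-1)}
    adj: [N, N]     binary adjacency matrix of the entity graph
    U:   [2*d2, d2], b: [d2]   projection of Eq. (5)
    w:   [2*d2]     attention vector W_t of Eq. (6)
    """
    h = ent @ U + b                                              # Eq. (5), [N, d2]
    N = h.shape[0]
    pair = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                      h.unsqueeze(0).expand(N, N, -1)], dim=-1)  # [N, N, 2*d2] = [h_i, h_j]
    beta = F.leaky_relu(pair @ w)                                # Eq. (6), [N, N]
    beta = beta.masked_fill(adj == 0, float("-inf"))
    alpha = F.softmax(beta, dim=1)                               # Eq. (7): row i = out-weights of entity i
    alpha = torch.nan_to_num(alpha)                              # isolated nodes -> all-zero rows (our addition)
    # Eq. (8): each node i sums over *column* i, i.e. the information sent to it.
    return F.relu(alpha.t() @ h)                                 # [N, d2]
```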
For each training case, the heuristic masks contain a start mask detected from the query, and additional BFS masks obtained by applying breadthfirst search (BFS) on the adjacent matrices give the start mask. A binary cross entropy loss between the predicted soft masks and the heuristics is then added to the objective. We skip those cases whose start masks cannot be detected from the queries. 4 Experiments We evaluate our Dynamically Fused Graph Network (DFGN) on HotpotQA (Yang et al., 2018) in the distractor setting. For the full wiki setting where the entire Wikipedia articles are given as input, we consider the bottleneck is about information retrieval, thus we do not include the full wiki setting in our experiments. 4.1 Implementation Details In paragraph selection stage, we use the uncased version of BERT Tokenizer (Devlin et al., 2018) to tokenize all passages and questions. The encoding vectors of sentence pairs are generated from a pre-trained BERT model (Devlin et al., 2018). We set a relatively low threshold during selection to 6146 Model Answer Sup Fact Joint EM F1 EM F1 EM F1 Baseline Model 45.60 59.02 20.32 64.49 10.83 40.16 GRN∗ 52.92 66.71 52.37 84.11 31.77 58.47 DFGN(Ours) 55.17 68.49 49.85 81.06 31.87 58.23 QFE∗ 53.86 68.06 57.75 84.49 34.63 59.61 DFGN(Ours)† 56.31 69.69 51.50 81.62 33.62 59.82 Table 1: Performance comparison on the private test set of HotpotQA in the distractor setting. Our DFGN is the second best result on the leaderboard before submission (on March 1st). The baseline model is from Yang et al. (2018) and the results with ∗is unpublished. DFGN(Ours)† refers to the same model with a revised entity graph, whose entities are recognized by a BERT NER model. Note that the result of DFGN(Ours)† is submitted to the leaderboard during the review process of our paper. Setting EM F1 DFGN (2-layer) 55.42 69.23 - BFS Supervision 54.48 68.15 - Entity Mask 54.64 68.25 - Query Update 54.44 67.98 - E2T Process 53.91 67.45 - 1 Fusion Block 54.14 67.70 - 2 Fusion Blocks 53.44 67.11 - 2 Fusion Blocks & Bi-attn 50.03 62.83 gold paragraphs only 55.67 69.15 supporting facts only 57.57 71.67 Table 2: Ablation study of question answering performances in the development set of HotpotQA in the distractor setting. We use a DFGN with 2-layer fusion blocks as the origin model. The upper part is the model ablation results and the lower part is the dataset ablation results. keep a high recall (97%) and a reasonable precision (69%) on supporting facts. In graph construction stage, we use a pretrained NER model from Stanford CoreNLP Toolkits1 (Manning et al., 2014) to extract named entities. The maximum number of entities in a graph is set to be 40. Each entity node in the entity graphs has an average degree of 3.52. In the encoding stage, we also use a pre-trained BERT model as the encoder, thus d1 is 768. All the hidden state dimensions d2 are set to 300. We set the dropout rate for all hidden units of LSTM and dynamic graph attention to 0.3 and 0.5 respectively. For optimization, we use Adam Optimizer (Kingma and Ba, 2015) with an initial learning rate of 1e−4. 1https://nlp.stanford.edu/software/ CRF-NER.shtml 4.2 Main Results We first present a comparison between baseline models and our DFGN2. Table 1 shows the performance of different models in the private test set of HotpotQA. From the table we can see that our model achieves the second best result on the leaderboard now3 (on March 1st). 
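Stepping back to the weak-supervision signal described above: the heuristic masks are only loosely specified in the text, so the following is one plausible reading, with a start mask given by the query entities and one breadth-first expansion of the adjacency matrix per additional fusion block. A binary cross-entropy term between each predicted soft mask and the corresponding heuristic mask is then added to the objective.

```python
import numpy as np

def bfs_masks(adj, start_mask, num_hops):
    """Heuristic supervision masks: the start mask plus one breadth-first
    expansion of the entity graph per additional reasoning hop.

    adj:        [N, N] binary adjacency matrix of the entity graph
    start_mask: [N]    1 for entities detected in the query
    returns a list of num_hops masks, one per fusion block.
    """
    frontier = start_mask.astype(bool)
    visited = frontier.copy()
    masks = [frontier]
    for _ in range(num_hops - 1):
        # entities reachable in one more hop from the current frontier
        frontier = adj[frontier].any(axis=0) & ~visited
        visited |= frontier
        masks.append(frontier)
    return [m.astype(np.float32) for m in masks]

if __name__ == "__main__":
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # chain 0 - 1 - 2
    print(bfs_masks(adj, np.array([1, 0, 0]), num_hops=2))
```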
Besides, the answer performance and the joint performance of our model are competitive against state-of-the-art unpublished models. We also include the result of our model with a revised entity graph whose entities are recognized by a BERT NER model (Devlin et al., 2018). We fine-tune the pre-trained BERT model on the dataset of the CoNLL’03 NER shared task (Sang and De Meulder, 2003) and use it to extract named entities from the input paragraphs. The results show that our model achieves a 1.5% gain in the joint F1-score with the entity graph built from a better entity recognizer. To evaluate the performance of different components in our DFGN, we perform ablation study on both model components and dataset segments. Here we follow the experiment setting in Yang et al. (2018) to perform the dataset ablation study, where we only use golden paragraphs or supporting facts as the input context. The ablation results of QA performances in the development set of HotpotQA are shown in Table 2. From the table we can see that each of our model components can provide from 1% to 2% relative gain over the QA performance. Particularly, using a 1-layer fusion block leads to an obvious performance loss, which implies the significance of performing multi-hop reasoning in HotpotQA. Besides, the dataset abla2Our code is available in https://github.com/ woshiyyya/DFGN-pytorch. 3The leaderboard can be found on https: //hotpotqa.github.io 6147 tion results show that our model is not very sensitive to the noisy paragraphs comparing with the baseline model which can achieve a more than 5% performance gain in the “gold paragraphs only” and “supporting facts only” settings. (Yang et al., 2018). 4.3 Evaluation on Graph Construction and Reasoning Chains The chain of reasoning is a directed path on the entity graph, so high-quality entity graphs are the basis of good reasoning. Since the limited accuracy of NER model and the incompleteness of our graph construction, 31.3% of the cases in the development set are unable to perform a complete reasoning process, where at least one supporting sentence is not reachable through the entity graph, i.e. no entity is recognized by NER model in this sentence. We name such cases as “missing supporting entity”, and the ratio of such cases can evaluate the quality of graph construction. We focus on the rest 68.7% good cases in the following analysis. In the following, we first give several definitions before presenting ESP (Entity-level Support) scores. Path A path is a sequence of entities visited by the fusion blocks, denoting as P = [ep1, . . . , ept+1] (suppose t-layer fusion blocks). Path Score The score of a path is acquired by multiplying corresponding soft masks and attention scores along the path, i.e. score(P) = Qt i=1 m(i) pi α(i) pi,pi+1 (Eq. (3), (7)). Hit Given a path and a supporting sentence, if at least one entity of the supporting sentence is visited by the path, we call this supporting sentence is hit4. Given a case with m supporting sentences, we select the top-k paths with the highest scores as the predicted reasoning chains. For each supporting sentence, we use the k paths to calculate how many supporting sentences are hit. In the following, we introduce two metrics to evaluate the quality of multi-hop reasoning through entity-level supporting (ESP) scores. 4A supporting sentence may contain irrelevant information, thus we do not have to visit all entities in a supporting sentence. 
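The Path, Path Score, and Hit definitions above translate directly into code; the last helper below computes the two entity-level support (ESP) metrics reported next (exact match over all supporting sentences, and the average hit recall). The per-case data layout (masks, attentions, and supporting-sentence entity sets) is our assumption, and paths are enumerated by brute force over all index tuples for clarity, whereas in practice one would restrict successive entities to graph neighbours.

```python
import itertools
import numpy as np

def path_score(path, masks, alphas):
    """score(P) = prod_i m^{(i)}_{p_i} * alpha^{(i)}_{p_i, p_{i+1}}.

    masks:  list of t arrays [N]     (soft masks per fusion block)
    alphas: list of t arrays [N, N]  (attention matrices per block)
    path:   sequence of t+1 entity indices.
    """
    return float(np.prod([masks[i][path[i]] * alphas[i][path[i], path[i + 1]]
                          for i in range(len(masks))]))

def top_k_paths(masks, alphas, k):
    """Brute-force enumeration of length-t paths; cheap for small N and t."""
    n = masks[0].shape[0]
    all_paths = itertools.product(range(n), repeat=len(masks) + 1)
    return sorted(all_paths, key=lambda p: -path_score(p, masks, alphas))[:k]

def esp_scores(cases, k):
    """ESP EM / ESP Recall over cases; each case is a triple
    (masks, alphas, supporting), where `supporting` is a list of sets of
    entity ids, one set per supporting sentence."""
    em, recall = 0.0, 0.0
    for masks, alphas, supporting in cases:
        visited = {e for p in top_k_paths(masks, alphas, k) for e in p}
        hits = sum(1 for sent_entities in supporting if sent_entities & visited)
        recall += hits / len(supporting)
        em += float(hits == len(supporting))
    return em / len(cases), recall / len(cases)
```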
Besides, due to the fusion mechanism of DFGN, the entity information will be propagated to the whole sentence. Therefore, we define a “hit” occurs when at least one entity of the supporting sentence is visited. k 1 2 5 10 ESP EM(≤40) 7.4% 15.5% 29.8% 41.0% ESP EM(≤80) 7.1% 14.7% 29.9% 44.8% ESP Recall(≤40) 37.3% 46.1% 58.4% 66.4% ESP Recall(≤80) 34.9% 44.6% 59.1% 70.0% Table 3: Evaluation of reasoning chains by ESP scores on two versions of the entity graphs in the development set. ≤40 and ≤80 indicate to the maximum number of nodes in entity graphs. Note that ≤40 refers to the entity graph whose entities are extracted by Stanford CoreNLP, while ≤80 refers to the entity graph whose entities are extracted by the aforementioned BERT NER model. ESP EM (Exact Match) For a case with m supporting sentences, if all the m sentences are hit, we call this case exact match. The ESP EM score is the ratio of exactly matched cases. ESP Recall For a case with m supporting sentences and h of them are hit, this case has a recall score of h/m. The averaged recall of the whole dataset is the ESP Recall. We train a DFGN with 2 fusion blocks to select paths with top-k scores. In the development set, the average number of paths of length 2 is 174.7. We choose k as 1, 2, 5, 10 to compute ESP EM and ESP Recall scores. As we can see in Table 3, regarding the supporting sentences as the ground truth of reasoning chains, our framework can predict reliable information flow. The most informative flow can cover the supporting facts and help produce reliable reasoning results. Here we present the results from two versions of the entity graphs. The results with a maximum number of nodes ≤40 are from the entity graph whose entities are extracted by Stanford CoreNLP. The results with a maximum number of nodes ≤80 are from the entity graph whose entities are extracted by the aforementioned BERT NER model. Since the BERT NER model performs better, we use a larger maximum number of nodes. In addition, as the size of an entity graph gets larger, the expansion of reasoning chain space makes a Hit even more difficult. However, the BERT NER model still keeps comparative and even better performance on metrics of EM and Recall. Thus the entity graph built from the BERT NER model is better than the previous version. 6148 Supporting Fact 1: "Farrukhzad Khosrau V was briefly king of the Sasanian Empire from March 631 to ..." Supporting Fact 2: "The Sasanian Empire, which succeeded the Parthian Empire, was recognised as ... the Roman-Byzantine Empire, for a period of more than 400 years." Q2: From March 631 to April 631, Farrukhzad Khosrau V was the king of an empire that succeeded which empire? Answer: the Parthian Empire Prediction: Parthian Empire Top 1 Reasoning Chain: n/a Supporting Fact 1: "Barrack buster is the colloquial name given to several improvised mortars, developed in the 1990s by the engineering group of the Provisional Irish Republican Army (IRA)." Supporting Fact 2: " On 20 March 1994, a British Army Lynx helicopter was shot down by the Provisional Irish Republican Army (IRA) in Northern Ireland." Q1: Who used a Barrack buster to shoot down a British Army Lynx helicopter? Answer: IRA Prediction: IRA Top 1 Reasoning Chain: British Army Lynx, Provisional Irish Republican Army, IRA Mask1 Mask2 End Supporting Fact 1: "George Archainbaud (May 7, 1890 ? February 20, 1959) was a French-born American film and television director." Supporting Fact 2: "Ralph Murphy (May 1, 1895 ? 
February 10, 1967) was an American film director." Q3: Who died first, George Archainbaud or Ralph Murphy? Answer: George Archainbaud Prediction: Ralph Murphy Top 1 Reasoning Chain: Ralph Murphy, May 1, 1895, Ralph Murphy Figure 5: Case study of three samples in the development set. We train a DFGN with 2-layer fusion blocks to produce the results. The numbers on the left side indicate the importance scores of the predicted masks. The text on the right side include the queries, answers, predictions, predicted top-1 reasoning chains and the supporting facts of three samples with the recognized entities highlighted by different colors. 4.4 Case Study We present a case study in Figure 5. The first case illustrates the reasoning process in a DFGN with 2-layer fusion blocks. At the first step, by comparing the query with entities, our model generates Mask1 as the start entity mask of reasoning, where “Barrack” and “British Army Lynx” are detected as the start entities of two reasoning chains. Information of two start entities is then passed to their neighbors on the entity graph. At the second step, mentions of the same entity “IRA” are detected by Mask2, serving as a bridge for propagating information across two paragraphs. Finally, two reasoning chains are linked together by the bridge entity “IRA”, which is exactly the answer. The second case in Figure 5 is a bad case. Due to the malfunction of the NER module, the only start entity, “Farrukhzad Khosrau V”, was not successfully detected. Without the start entities, the reasoning chains cannot be established, and the further information flow in the entity graph is blocked at the first step. The third case in Figure 5 is also a bad case, which includes a query of the Comparison query type. Due to the lack of numerical computation ability of our model, it fails to give a correct answer, although the query is just a simple comparison between two days “February 20, 1959” and “February 10, 1967”. It is an essential problem to incorporate numerical operations for further improving the performance in cases of the comparison query type. 5 Conclusion We introduce Dynamically Fused Graph Network (DFGN) to address multi-hop reasoning. Specifically, we propose a dynamic fusion reasoning block based on graph neural networks. Different from previous approaches in QA, DFGN is capable of predicting the sub-graphs dynamically at each reasoning step, and the entity-level reasoning is fused with token-level contexts. We evaluate DFGN on HotpotQA and achieve leading results. Besides, our analysis shows DFGN can produce reliable and explainable reasoning chains. In the future, we may incorporate new advances in building entity graphs from texts, and solve more difficult reasoning problems, e.g. the cases of comparison query type in HotpotQA. References Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple ques6149 tion answering with memory networks. CoRR, abs/1506.02075. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 42–48. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations. Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2017. Stochastic answer networks for machine reading comprehension. arXiv preprint arXiv:1712.03556. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1725–1735. Tsendsuren Munkhdalai and Hong Yu. 2017. Reasoning with memory augmented neural networks for language comprehension. In Proceedings of the International Conference on Learning Representations. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2230– 2235. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you dont know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 784–789. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. In Proceedings of the International Conference on Learning Representations. Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018a. Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. A graph-to-sequence model for amrto-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1616–1626. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. 
End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 641–651. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of the International Conference on Learning Representations. 6150 Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association of Computational Linguistics, 6:287–302. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380. Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multirelation question answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2010–2022.
2019
617
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6151–6161 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6151 NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language Leon Weber Humboldt-Universität zu Berlin [email protected] Pasquale Minervini University College London [email protected] Jannes Münchmeyer GFZ German Research Center for Geoscience Potsdam [email protected] Ulf Leser Humboldt-Universität zu Berlin [email protected] Tim Rocktäschel University College London [email protected] Abstract Rule-based models are attractive for various tasks because they inherently lead to interpretable and explainable decisions and can easily incorporate prior knowledge. However, such systems are difficult to apply to problems involving natural language, due to its linguistic variability. In contrast, neural models can cope very well with ambiguity by learning distributed representations of words and their composition from data, but lead to models that are difficult to interpret. In this paper, we describe a model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language. Specifically, we propose to use a Prolog prover which we extend to utilize a similarity function over pretrained sentence encoders. We fine-tune the representations for the similarity function via backpropagation. This leads to a system that can apply rulebased reasoning to natural language, and induce domain-specific rules from training data. We evaluate the proposed system on two different question answering tasks, showing that it outperforms two baselines – BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al., 2017b) on a subset of the WIKIHOP corpus and achieves competitive results on the MEDHOP data set (Welbl et al., 2017). 1 Introduction We consider the problem of multi-hop reasoning on natural language data. For instance, consider the statements “Socrates was born in Athens” and “Athens belongs to Greece”, and the question “Where was Socrates born?”. There are two possible answers following from the given statements, namely “Athens” and “Greece”. While the answer “Athens” follows directly from “Socrates was born in Athens”, the answer “Greece” requires the reader to combine both statements, using the knowledge that a person born in a city X, located in a country Y , is also born in Y . This step of combining multiple pieces of information is referred to as multi-hop reasoning (Welbl et al., 2017). In the literature, such multi-hop reading comprehension tasks are frequently solved via end-to-end differentiable (deep learning) models (Sukhbaatar et al., 2015; Peng et al., 2015; Seo et al., 2016b; Raison et al., 2018; Henaff et al., 2016; Kumar et al., 2016; Graves et al., 2016; Dhingra et al., 2018). Such models are capable of dealing with the linguistic variability and ambiguity of natural language by learning word and sentence-level representations from data. However, in such models, explaining the reasoning steps leading to an answer and interpreting the model parameters to extrapolate new knowledge is a very challenging task (DoshiVelez and Kim, 2017; Lipton, 2018; Guidotti et al., 2019). 
Moreover, such models tend to require large amounts of training data to generalise correctly, and incorporating background knowledge is still an open problem (Rocktäschel et al., 2015; Weissenborn et al., 2017a; Rocktäschel and Riedel, 2017; Evans and Grefenstette, 2017). In contrast, rule-based models are easily interpretable, naturally produce explanations for their decisions, and can generalise from smaller quantities of data. However, these methods are not robust to noise and can hardly be applied to domains where data is ambiguous, such as vision and language (Moldovan et al., 2003; Rocktäschel and Riedel, 2017; Evans and Grefenstette, 2017). In this paper, we introduce NLPROLOG, a system combining a symbolic reasoner and a rulelearning method with distributed sentence and entity representations to perform rule-based multihop reasoning on natural language input.1 NLPROLOG generates partially interpretable and explain1NLPROLOG and our evaluation code is available at https://github.com/leonweber/nlprolog. 6152 able models, and allows for easy incorporation of prior knowledge. It can be applied to natural language without the need of converting it to an intermediate logic form. At the core of NLPROLOG is a backward-chaining theorem prover, analogous to the backward-chaining algorithm used by Prolog reasoners (Russell and Norvig, 2010b), where comparisons between symbols are replaced by differentiable similarity function between their distributed representations (Sessa, 2002). To this end, we use end-to-end differentiable sentence encoders, which are initialized with pretrained sentence embeddings (Pagliardini et al., 2017) and then finetuned on a downstream task. The differentiable fine-tuning objective enables us learning domainspecific logic rules – such as transitivity of the relation is in – from natural language data. We evaluate our approach on two challenging multi-hop Question Answering data sets, namely MEDHOP and WIKIHOP (Welbl et al., 2017). Our main contributions are the following: i) We show how backward-chaining reasoning can be applied to natural language data by using a combination of pretrained sentence embeddings, a logic prover, and fine-tuning via backpropagation, ii) We describe how a Prolog reasoner can be enhanced with a differentiable unification function based on distributed representations (embeddings), iii) We evaluate the proposed system on two different Question Answering (QA) datasets, and demonstrate that it achieves competitive results in comparison with strong neural QA models while providing interpretable proofs using learned rules. 2 Related Work Our work touches in general on weak-unification based fuzzy logic (Sessa, 2002) and focuses on multi-hop reasoning for QA, the combination of logic and distributed representations, and theorem proving for question answering. Multi-hop Reasoning for QA. One prominent approach for enabling multi-hop reasoning in neural QA models is to iteratively update a query embedding by integrating information from embeddings of context sentences, usually using an attention mechanism and some form of recurrency (Sukhbaatar et al., 2015; Peng et al., 2015; Seo et al., 2016b; Raison et al., 2018). These models have achieved state-of-the-art results in a number of reasoning-focused QA tasks. Henaff et al. (2016) employ a differentiable memory structure that is updated each time a new piece of information is processed. 
The memory slots can be used to track the state of various entities, which can be considered as a form of temporal reasoning. Similarly, the Neural Turing Machine (Graves et al., 2016) and the Dynamic Memory Network (Kumar et al., 2016), which are built on differentiable memory structures, have been used to solve synthetic QA problems requiring multi-hop reasoning. Dhingra et al. (2018) modify an existing neural QA model to additionally incorporate coreference information provided by a coreference resolution model. De Cao et al. (2018) build a graph connecting entities and apply Graph Convolutional Networks (Kipf and Welling, 2016) to perform multi-hop reasoning, which leads to strong results on WIKIHOP. Zhong et al. (2019) propose a new neural QA architecture that combines a combination of coarse-grained and fine-grained reasoning to achieve very strong results on WIKIHOP. All of the methods above perform reasoning implicitly as a sequence of opaque differentiable operations, making the interpretation of the intermediate reasoning steps very challenging. Furthermore, it is not obvious how to leverage user-defined inference rules during the reasoning procedure. Combining Rule-based and Neural Models. In Artificial Intelligence literature, integrating symbolic and sub-symbolic representations is a longstanding problem (Besold et al., 2017). Our work is very related to the integration of Markov Logic Networks (Richardson and Domingos, 2006) and Probabilistic Soft Logic (Bach et al., 2017) with word embeddings, which was applied to Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS) tasks (Garrette et al., 2011, 2014; Beltagy et al., 2013, 2014), improving over purely rule-based and neural baselines. An area in which neural multi-hop reasoning models have been investigated is Knowledge Base Completion (KBC) (Das et al., 2016; Cohen, 2016; Neelakantan et al., 2015; Rocktäschel and Riedel, 2017; Das et al., 2017; Evans and Grefenstette, 2018). While QA could be in principle modeled as a KBC task, the construction of a Knowledge Base (KB) from text is a brittle and error prone process, due to the inherent ambiguity of natural language. Very related to our approach are Neural Theorem Provers (NTPs) (Rocktäschel and Riedel, 2017): given a goal, its truth score is computed via a continuous relaxation of the backward-chaining rea6153 soning algorithm, using a differentiable unification operator. Since the number of candidate proofs grows exponentially with the length of proofs, NTPs cannot scale even to moderately sized knowledge bases, and are thus not applicable to natural language problems in its current form. We solve this issue by using an external prover and pretrained sentence representations to efficiently discard all proof trees producing proof scores lower than a given threshold, significantly reducing the number of candidate proofs. Theorem Proving for Question Answering. Our work is not the first to apply theorem proving to QA problems. Angeli et al. (2016) employ a system based on Natural Logic to search a large KB for a single statement that entails the candidate answer. This is different from our approach, as we aim to learn a set of rules that combine multiple statements to answer a question. Systems like Watson (Ferrucci et al., 2010) and COGEX (Moldovan et al., 2003) utilize an integrated theorem prover, but require a transformation of the natural language sentences to logical atoms. 
In the case of COGEX, this improves the accuracy of the underlying system by 30%, and increases its interpretability. While this work is similar in spirit, we greatly simplify the preprocessing step by replacing the transformation of natural language to logic with the simpler approach of transforming text to triples by using co-occurences of named entities. Fader et al. (2014) propose OPENQA, a system that utilizes a mixture of handwritten and automatically obtained operators that are able to parse, paraphrase and rewrite queries, which allows them to perform large-scale QA on KBs that include Open IE triples. While this work shares the same goal – answering questions using facts represented by natural language triples – we choose to address the problem of linguistic variability by integrating neural components, and focus on the combination of multiple facts by learning logical rules. 3 Background In the following, we briefly introduce the backward chaining algorithm and unification procedure (Russell and Norvig, 2016) used by Prolog reasoners, which lies at the core of NLPROLOG. We consider Prolog programs that consists of a set of rules in the form of Horn clauses: h(fh 1 , . . . , fh n) ⇐ p1(f1 1 , . . . , f1 m) ∧. . . ∧pB(fB 1 , . . . , fB l ), where h, pi are predicate symbols, and fi j are either function (denoted in lower case) or variable (upper case) symbols. The domain of function symbols is denoted by F, and the domain of predicate symbols by P. h(fh 1 , . . . , fh n) is called the head and p1(f1 1 , . . . , f1 m) ∧. . . ∧pB(fB 1 , . . . , fB l ) the body of the rule. We call B the body size of the rule and rules with a body size of zero are named atoms (short for atomic formula). If an atom does not contain any variable symbols it is termed fact. For simplicity, we only consider function-free Prolog in our experiments, i.e. Datalog (Gallaire and Minker, 1978) programs where all function symbols have arity zero and are called entities and, similarly to related work (Sessa, 2002; JuliánIranzo et al., 2009), we disregard negation and disjunction. However, in principle NLPROLOG also supports functions with higher arity. A central component in a Prolog reasoner is the unification operator: given two atoms, it tries to find variable substitutions that make both atoms syntactically equal. For example, the atoms country(Greece, Socrates) and country(X, Y) result in the following variable substitutions after unification: {X/Greece, Y/Socrates}. Prolog uses backward chaining for proving assertions. Given a goal atom g, this procedure first checks whether g is explicitly stated in the KB – in this case, it can be proven. If it is not, the algorithm attempts to prove it by applying suitable rules, thereby generating subgoals that are proved next. To find applicable rules, it attempts to unify g with the heads of all available rules. If this unification succeeds, the resulting variable substitutions are applied to the atoms in the rule body: each of those atoms becomes a subgoal, and each subgoal is recursively proven using the same strategy. For instance, the application of the rule country(X, Y ) ⇐ born_in(Y, X) to the goal country(Greece, Socrates) would yield the subgoal born_in(Socrates, Greece). Then the process is repeated for all subgoals until no subgoal is left to be proven. The result of this procedure is a set of rule applications and variable substitutions referred to as proof. 
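The backward-chaining procedure just described is compact enough to sketch in a few lines. The following is a toy, function-free Prolog-style prover for illustration only: atoms are tuples, variables are marked with a '?' prefix (our convention), and there is no variable renaming or occurs check, which a real engine would need.

```python
def is_var(term):
    # Convention for this sketch: variables are prefixed with '?'.
    return isinstance(term, str) and term.startswith("?")

def unify(a, b, subst):
    """Syntactic unification of two function-free atoms given as tuples,
    e.g. ('country', 'greece', 'socrates') and ('country', '?X', '?Y').
    Returns an extended substitution dict, or None on failure."""
    if len(a) != len(b) or a[0] != b[0]:        # predicate symbols must match exactly
        return None
    subst = dict(subst)
    for s, t in zip(a[1:], b[1:]):
        s, t = subst.get(s, s), subst.get(t, t)  # apply current bindings
        if s == t:
            continue
        if is_var(s):
            subst[s] = t
        elif is_var(t):
            subst[t] = s
        else:                                    # two different constants
            return None
    return subst

def substitute(atom, subst):
    return (atom[0],) + tuple(subst.get(x, x) for x in atom[1:])

def backward_chain(goal, facts, rules, subst=None, depth=2):
    """Yield substitutions proving `goal`; rules are (head, body) pairs."""
    subst = {} if subst is None else subst
    if depth < 0:
        return
    goal = substitute(goal, subst)
    for fact in facts:                           # case 1: goal is explicitly stated
        s = unify(goal, fact, subst)
        if s is not None:
            yield s
    for head, body in rules:                     # case 2: apply a rule, prove its body
        s = unify(goal, head, subst)
        if s is None:
            continue
        def prove_all(atoms, s_acc, d):
            if not atoms:
                yield s_acc
                return
            for s_next in backward_chain(atoms[0], facts, rules, s_acc, d - 1):
                yield from prove_all(atoms[1:], s_next, d)
        yield from prove_all(body, s, depth)

def resolve(term, subst):
    while is_var(term) and term in subst:
        term = subst[term]
    return term

if __name__ == "__main__":
    facts = [("born_in", "socrates", "athens")]
    rules = [(("country", "?X", "?Y"), [("born_in", "?Y", "?X")])]
    for s in backward_chain(("country", "?C", "socrates"), facts, rules):
        print(resolve("?C", s))                  # -> athens
```

In NLPROLOG, it is precisely the exact symbol comparison inside `unify` that is replaced by a similarity-based weak unification over embeddings, as described next.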
Note that the number of possible proofs grows exponentially with its depth, as every rule might be used in the proof of each subgoal. 6154 Pseudo code for weak unification can be found in Appendix A – we refer the reader to (Russell and Norvig, 2010a) for an in-depth treatment of the unification procedure. 4 NLProlog Applying a logic reasoner to QA requires transforming the natural language paragraphs to logical representations, which is a brittle and error-prone process. Our aim is reasoning with natural language representations in the form of triples, where entities and relations may appear under different surface forms. For instance, the textual mentions is located in and lies in express the same concept. We propose replacing the exact matching between symbols in the Prolog unification operator with a weak unification operator (Sessa, 2002), which allows to unify two different symbols s1, s2, by comparing their representations using a differentiable similarity function s1 ∼θ s2 ∈[0, 1] with parameters θ. With the weak unification operator, the comparison between two logical atoms results in an unification score resulting from the aggregation of each similarity score. Inspired by fuzzy logic tnorms (Gupta and Qi, 1991), aggregation operators are e.g. the minimum or the product of all scores. The result of backward-chaining with weak unification is a set of proofs, each associated with a proof score measuring the truth degree of the goal with respect to a given proof. Similarly to backward chaining, where only successful proofs are considered, in NLPROLOG the final proof success score is obtained by taking the maximum over the success scores of all found proofs. NLPROLOG combines inference based on the weak unification operator and distributed representations, to allow reasoning over sub-symbolic representations – such as embeddings – obtained from natural language statements. Each natural language statement is first translated into a triple, where the first and third element denote the entities involved in the sentence, and the second element denotes the textual surface pattern connecting the entities. All elements in each triple – both the entities and the textual surface pattern – are then embedded into a vector space. These vector representations are used by the similarity function ∼θ for computing similarities between two entities or two textual surface patterns and, in turn, by the backward chaining algorithm with the weak unification operator for deriving a proof score for a given assertion. Note that the resulting proof score is fully end-to-end differentiable with respect to the model parameters θ: we can train NLPROLOG using gradient-based optimisation by back-propagating the prediction error to θ. Fig. 1 shows an outline of the model, its components and their interactions. 4.1 Triple Extraction To transform the support documents to natural language triples, we first detect entities by performing entity recognition with SPACY (Honnibal and Montani, 2017). From these, we generate triples by extracting all entity pairs that co-occur in the same sentence and use the sentence as the predicate blinding the entities. For instance, the sentence “Socrates was born in Athens and his father was Sophronicus” is converted in the following triples: i) (Socrates, ENT1 was born in ENT2 and his father was Sophronicus, Athens), ii) (Socrates, ENT1 was born in Athens and his father was ENT2, Sophronicus), and iii) (Athens, Socrates was born in ENT1 and his father was ENT2, Sophronicus). 
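The triple extraction above amounts to simple string manipulation. The sketch below assumes entity spans with character offsets are already available (e.g. from the spaCy NER pass); the offsets in the example are written by hand.

```python
from itertools import combinations

def sentence_to_triples(sentence, entities):
    """Turn one sentence into natural-language triples as in Section 4.1:
    for every pair of recognized entities, the predicate is the sentence with
    the two entities blinded as ENT1 and ENT2.

    entities: list of (surface_form, start_char, end_char) spans.
    """
    triples = []
    for pair in combinations(entities, 2):
        # blind the earlier span as ENT1 and the later one as ENT2
        (a, sa, ta), (b, sb, tb) = sorted(pair, key=lambda x: x[1])
        pattern = sentence[:sa] + "ENT1" + sentence[ta:sb] + "ENT2" + sentence[tb:]
        triples.append((a, pattern, b))
    return triples

if __name__ == "__main__":
    sent = "Socrates was born in Athens and his father was Sophronicus"
    ents = [("Socrates", 0, 8), ("Athens", 21, 27), ("Sophronicus", 47, 58)]
    for t in sentence_to_triples(sent, ents):
        print(t)
```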
We also experimented with various Open Information Extraction frameworks (Niklaus et al., 2018): in our experiments, such methods had very low recall, which led to significantly lower accuracy values. 4.2 Similarity Computation Embedding representations of the symbols in a triple are computed using an encoder eθ : F ∪P 7→ Rd parameterized by θ – where F, P denote the sets of entity and predicate symbols, and d denotes the embedding size. The resulting embeddings are used to induce the similarity function ∼θ: (F ∪P)2 7→[0, 1], given by their cosine similarity scaled to [0, 1]: s1 ∼θ s2 = 1 2  1 + eθ(s1)⊤eθ(s2) ||eθ(s1)|| · ||eθ(s2)||  (1) In our experiments, for using textual surface patterns, we use a sentence encoder composed of a static pre-trained component – namely, SENT2VEC (Pagliardini et al., 2017) – and a MultiLayer Perceptron (MLP) with one hidden layer and Rectified Linear Unit (ReLU) activations (Jarrett et al., 2009). For encoding predicate symbols and entities, we use a randomly initialised embedding matrix. During training, both the MLP and the embedding matrix are learned via backpropagation, while the sentence encoder is kept fixed. 6155 Additionally, we introduce a third lookup table and MLP for the predicate symbols of rules and goals. The main reason of this choice is that semantics of goal and rule predicates may differ from the semantics of fact predicates, even if they share the same surface form. For instance, the query (X, parent, Y) can be interpreted either as (X, is the parent of, Y) or as (X, has parent, Y), which are semantically dissimilar. 4.3 Training the Encoders We train the encoder parameters θ on a downstream task via gradient-based optimization. Specifically, we train NLPROLOG with backpropagation using a learning from entailment setting (Muggleton and Raedt, 1994), in which the model is trained to decide whether a Prolog program R entails the truth of a candidate triple c ∈C, where C is the set of candidate triples. The objective is a model that assigns high probabilities p(c|R; θ) to true candidate triples, and low probabilities to false triples. During training, we minimize the following loss: L(θ) = −log p(a|R; θ) −log  1 − max c∈C\{a} p(c|R; θ)  , (2) where a ∈C is the correct answer. For simplicity, we assume that there is only one correct answer per example, but an adaptation to multiple correct answers would be straight-forward, e.g. by taking the minimum of all answer scores. To estimate p(c|R; θ), we enumerate all proofs for the triple c up to a given depth D, where D is a user-defined hyperparameter. This search yields a number of proofs, each with a success score Si. We set p(c|R; θ) to be the maximum of such proof scores: p(c|R; θ) = Smax = max i Si ∈[0, 1]. Note that the final proof score p(c|R; θ) only depends on the proof with maximum success score Smax. Thus, we propose to first conduct the proof search by using a prover utilizing the similarity function induced by the current parameters ∼θt, which allows us to compute the maximum proof score Smax. The score for each proof is given by the aggregation – either using the minimum or the product functions – of the weak unification scores, which in turn are computed via the differentiable similarity function ∼θ. It follows that p(c|R; θ) is end-to-end differentiable, and can be used for updating the model parameters θ via Stochastic Gradient Descent. 
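The following PyTorch sketch illustrates the scaled cosine similarity of Eq. (1) and the learning-from-entailment loss of Eq. (2). The encoder here is a plain embedding lookup standing in for the Sent2Vec-plus-MLP encoders described in Section 4.2, and the small epsilon term is added only for numerical stability; neither detail should be read as the exact implementation.

```python
import torch
import torch.nn.functional as F

class SymbolEncoder(torch.nn.Module):
    def __init__(self, num_symbols, dim=128):
        super().__init__()
        self.emb = torch.nn.Embedding(num_symbols, dim)

    def similarity(self, s1, s2):
        # s1 ~ s2 = 0.5 * (1 + cos(e(s1), e(s2))), so scores lie in [0, 1]
        return 0.5 * (1.0 + F.cosine_similarity(self.emb(s1), self.emb(s2), dim=-1))

def entailment_loss(answer_score, candidate_scores):
    """answer_score: proof score p(a|R) of the correct candidate (scalar tensor);
    candidate_scores: proof scores of the remaining candidates (1-d tensor)."""
    eps = 1e-8
    return -torch.log(answer_score + eps) \
           - torch.log(1.0 - candidate_scores.max() + eps)
```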
4.4 Runtime Complexity of Proof Search The worst case complexity vanilla logic programming is exponential in the depth of the proof (Russell and Norvig, 2010a). However, in our case, this is a particular problem because weak unification requires the prover to attempt unification between all entity and predicate symbols. To keep things tractable, NLPROLOG only attempts to unify symbols with a similarity greater than some user-defined threshold λ. Furthermore, in the search step for one statement q, for the rest of the search, λ is set to max(λ, S) whenever a proof for q with success score S is found. Due to the monotonicity of the employed aggregation functions, this allows to prune the search tree without losing the guarantee to find the proof yielding the maximum success score Smax, provided that Smax ≥λ. We found this optimization to be crucial to make the proof search scale on the considered data sets. 4.5 Rule Learning In NLPROLOG, the reasoning process depends on rules that describe the relations between predicates. While it is possible to write down rules involving natural language patterns, this approach does not scale. Thus, we follow Rocktäschel and Riedel (2017) and use rule templates to perform Inductive Logic Programming (ILP) (Muggleton, 1991), which allows NLPROLOG to learn rules from training data. In this setting, a user has to define a set of rules with a given structure as input. Then, NLPROLOG can learn the rule predicate embeddings from data by minimizing the loss function in Eq. (2) using gradient-based optimization methods. For instance, to induce a rule that can model transitivity, we can use a rule template of the form p1(X, Z) ⇐p2(X, Y ) ∧p3(Y, Z), and NLPROLOG will instantiate multiple rules with randomly initialized embeddings for p1, p2, and p3, and finetune them on a downstream task. The exact number and structure of the rule templates is treated as a hyperparameter. Unless explicitly stated otherwise, all experiments were performed with the same set of rule templates containing two rules for each of the forms q(X, Y ) ⇐p2(X, Y ), p1(X, Y ) ⇐ p2(Y, X) and p1(X, Z) ⇐p2(X, Y ) ∧p3(Y, Z), 6156 Figure 1: Overview of NLPROLOG – all components are depicted as ellipses, while inputs and outputs are drawn as squares. Phrases with red background are entities and blue ones are predicates. where q is the query predicate. The number and structure of these rule templates can be easily modified, allowing the user to incorporate additional domain-specific background knowledge, such as born_in(X, Z) ⇐born_in(X, Y ) ∧ located_in(Y, Z) 5 Evaluation We evaluate our method on two QA datasets, namely MEDHOP, and several subsets of WIKIHOP (Welbl et al., 2017). These data sets are constructed in such a way that it is often necessary to combine information from multiple documents to derive the correct answer. In both data sets, each data point consists of a query p(e, X), where e is an entity, X is a variable – representing the entity that needs to be predicted, C is a list of candidates entities, a ∈C is an answer entity and p is the query predicate. Furthermore, every query is accompanied by a set of support documents which can be used to decide which of the candidate entities is the correct answer. 5.1 MedHop MEDHOP is a challenging multi-hop QA data set, and contains only a single query predicate. The goal in MEDHOP is to predict whether two drugs interact with each other, by considering the interactions between proteins that are mentioned in the support documents. 
Entities in the support documents are mapped to data base identifiers. To compute better entity representations, we reverse this mapping and replace all mentions with the drug and proteins names gathered from DRUGBANK (Wishart et al., 2006) and UNIPROT (Apweiler et al., 2004). 5.2 Subsets of WikiHop To further validate the effectiveness of our method, we evaluate on different subsets of WIKIHOP (Welbl et al., 2017), each containing a single query predicate. We consider the predicates publisher, developer, country, and record_label, because their semantics ensure that the annotated answer is unique and they contain a relatively large amount of questions that are annotated as requiring multi-hop reasoning. For the predicate publisher, this yields 509 training and 54 validation questions, for developer 267 and 29, for country 742 and 194, and for record_label 2305 and 283. As the test set of WIKIHOP is not publicly available, we report scores for the validation set. 5.3 Baselines Following Welbl et al. (2017), we use two neural QA models, namely BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al., 2017b), as baselines for the considered WIKIHOP predicates. We use the implementation provided by the JACK 2 QA framework (Weissenborn et al., 2018) with the same hyperparameters as used by Welbl et al. (2017), and train a separate model for each predicate.3 To ensure that the performance of the 2https://github.com/uclmr/jack 3We also experimented with the AllenNLP implementation of BIDAF, available at https://github.com/ allenai/allennlp/blob/master/allennlp/ 6157 baseline is not adversely affected by the relatively small number of training examples, we also evaluate the BIDAF model trained on the whole WIKIHOP corpus. In order to compensate for the fact that both models are extractive QA models which cannot make use of the candidate entities, we additionally evaluate modified versions which transform both the predicted answer and all candidates to vectors using the wiki-unigrams model of SENT2VEC (Pagliardini et al., 2017). Consequently, we return the candidate entity which has the highest cosine similarity to the predicted entity. We use the normalized version of MEDHOP for training and evaluating the baselines, since we observed that denormalizing it (as for NLPROLOG) severely harmed performance. Furthermore on MEDHOP, we equip the models with word embeddings that were pretrained on a large biomedical corpus (Pyysalo et al., 2013). 5.4 Hyperparameter Configuration On MEDHOP we optimize the embeddings of predicate symbols of rules and query triples, as well as of entities. WIKIHOP has a large number of unique entity symbols and thus, learning their embeddings is prohibitive. Thus, we only train the predicate symbols of rules and query triples on this data set. For MEDHOP we use bigram SENT2VEC embeddings trained on a large biomedical corpus 4, and for WIKIHOP the wikiunigrams model5 of SENT2VEC. All experiments were performed with the same set of rule templates containing two rules for each of the forms p(X, Y ) ⇐q(X, Y ), p(X, Y ) ⇐q(Y, X) and p(X, Z) ⇐q(X, Y ) ∧r(Y, Z) and set the similarity threshold λ to 0.5 and maximum proof depth to 3. We use Adam (Kingma and Ba, 2014) with default parameters. 5.5 Results The results for the development portions of WIKIHOP and MEDHOP are shown in Table 1. For all predicates but developer, NLPROLOG strongly outperforms all tested neural QA models, while achieving the same accuracy as the best performing QA model on developer. 
We evaluated NLPROLOG on the hidden test set of MedHop and obtained models/reading_comprehension/bidaf.py, obtaining comparable results. 4https://github.com/ncbi-nlp/ BioSentVec 5https://drive.google.com/open?id= 0B6VhzidiLvjSa19uYWlLUEkzX3c an accuracy of 29.3%, which is 6.1 pp better than FastQA and 18.5 pp worse than BiDAF.6. As the test set is hidden, we cannot diagnose the exact reason for the inconsistency with the results on the development set, but observe that FastQA suffers from a similar drop in performance. 5.6 Importance of Rules Exemplary proofs generated by NLPROLOG for the predicates record_label and country can be found in Fig. 2. To study the impact of the rule-based reasoning on the predictive performance, we perform an ablation experiment in which we train NLPROLOG without any rule templates. The results can be found in the bottom half of Table 1. On three of the five evaluated data sets, performance decreases markedly when no rules can be used and does not change on the remaining two data sets. This indicates that reasoning with logic rules is beneficial in some cases and does not hurt performance in the remaining ones. 5.7 Impact of Entity Embeddings In a qualitative analysis, we observed that in many cases multi-hop reasoning was performed via aligning entities and not by applying a multi-hop rule. For instance, the proof of the statement country(Oktabrskiy Big Concert Hall, Russia) visualized in Figure 2, is performed by making the embeddings of the entities Oktabrskiy Big Concert Hall and Saint Petersburg sufficiently similar. To gauge the extent of this effect, we evaluate an ablation in which we remove the MLP on top of the entity embeddings. The results, which can be found in Table 1, show that fine-tuning entity embeddings plays an integral role, as the performance degrades drastically. Interestingly, the observed performance degradation is much worse than when training without rules, suggesting that much of the reasoning is actually performed by finding a suitable transformation of the entity embeddings. 5.8 Error Analysis We performed an error analysis for each of the WIKIHOP predicates. To this end, we examined all instances in which one of the neural QA models (with SENT2VEC) produced a correct prediction 6Note, that these numbers are taken from Welbl et al. (2017) and were obtained with different implementations of BIDAF and FASTQA 6158 Model MedHop publisher developer country recordlabel BiDAF 42.98 66.67 65.52 53.09 68.90 + Sent2Vec — 75.93 68.97 61.86 75.62 + Sent2Vec + wikihop — 74.07 62.07 66.49 78.09 FastQA 52.63 62.96 62.07 57.21 70.32 + Sent2Vec — 75.93 58.62 64.95 78.09 NLProlog 65.78 83.33 68.97 77.84 79.51 - rules 64.33 83.33 68.97 74.23 74.91 - entity MLP 37.13 68.52 41.38 72.16 64.66 Table 1: Accuracy scores in percent for different predicates on the development set of the respective predicates. +/denote independent modifications to the base algorithm. Figure 2: Example proof trees generated by NLPROLOG, showing a combination of multiple rules. Entities are shown in red and predicates in blue. Note, that entities do not need to match exactly. The first and third proofs were obtained without the entity MLP (as described in Section 5.7), while the second one was obtained in the full configuration of NLPROLOG. and NLPROLOG did not, and labeled them with predefined error categories. 
Of the 55 instances, 49% of the errors were due to NLPROLOG unifying the wrong entities, mainly because of an over-reliance on heuristics, such as predicting a record label if it is from the same country as the artist. In 25% of the cases, NLPROLOG produced a correct prediction, but another candidate was defined as the answer. In 22% the prediction was due to an error in predicate unification, i.e. NLPROLOG identified the correct entities, but the sentence did not express the target relation. Furthermore, we performed an evaluation on all problems of the studied WIKIHOP predicates that were unanimously labeled as containing the correct answer in the support texts by Welbl et al. (2017). On this subset, the microaveraged accuracy of NLPROLOG shows an absolute increase of 3.08 pp, while the accuracy of BIDAF (FASTQA) augmented with SENT2VEC decreases by 3.26 (3.63) pp. We conjecture that this might be due to NLPROLOG’s reliance on explicit reasoning, which could make it less susceptible to spurious correlations between the query and supporting text. 6 Discussion and Future Work We proposed NLPROLOG, a system that is able to perform rule-based reasoning on natural language, and can learn domain-specific rules from data. To this end, we proposed to combine a symbolic prover with pretrained sentence embeddings, and to train the resulting system using backpropagation. We evaluated NLPROLOG on two different QA tasks, showing that it can learn domainspecific rules and produce predictions which outperform those of the two strong baselines BIDAF and FASTQA in most cases. While we focused on a subset of First Order Logic in this work, the expressiveness of NLPROLOG could be extended by incorporating a different symbolic prover. For instance, a prover for temporal logic (Orgun and Ma, 1994) would allow to model temporal dynamics in natural language. We are also interested in incorporating future improvements of symbolic provers, triple extraction systems and pretrained sentence representations to further enhance the performance of NLPROLOG. Additionally, it would be interesting to study the behavior of NLPROLOG in the presence of multiple WIKIHOP query predicates. 6159 Acknowledgments Leon Weber and Jannes Münchmeyer acknowledge the support of the Helmholtz Einstein International Berlin Research School in Data Science (HEIBRiDS). We would like to thank the anonymous reviewers for the constructive feedback. We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan X Pascal GPU used for this research. References Gabor Angeli, Neha Nayak, and Christopher D Manning. 2016. Combining natural logic and shallow reasoning for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 442–452. Rolf Apweiler, Amos Bairoch, Cathy H Wu, Winona C Barker, Brigitte Boeckmann, Serenella Ferro, Elisabeth Gasteiger, Hongzhan Huang, Rodrigo Lopez, Michele Magrane, et al. 2004. Uniprot: the universal protein knowledgebase. Nucleic acids research, 32(suppl_1):D115–D119. Stephen H. Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2017. Hinge-loss markov random fields and probabilistic soft logic. Journal of Machine Learning Research, 18:109:1–109:67. Islam Beltagy, Cuong Chau, Gemma Boleda, Dan Garrette, Katrin Erk, and Raymond Mooney. 2013. Montague meets markov: Deep semantics with probabilistic logical form. 
In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, volume 1, pages 11–21. Islam Beltagy, Katrin Erk, and Raymond Mooney. 2014. Probabilistic soft logic for semantic textual similarity. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1210– 1219. Tarek R Besold, Artur d’Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kühnberger, Luis C Lamb, Daniel Lowd, Priscila Machado Vieira Lima, et al. 2017. Neuralsymbolic learning and reasoning: A survey and interpretation. arXiv preprint arXiv:1711.03902. William W Cohen. 2016. Tensorlog: A differentiable deductive database. arXiv preprint arXiv:1605.06523. Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. 2017. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. arXiv preprint arXiv:1711.05851. Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. 2016. Chains of reasoning over entities, relations, and text using recurrent neural networks. arXiv preprint arXiv:1607.01426. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920. Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. arXiv preprint arXiv:1804.05922. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv. Richard Evans and Edward Grefenstette. 2017. Learning explanatory rules from noisy data. CoRR, abs/1711.04574. Richard Evans and Edward Grefenstette. 2018. Learning explanatory rules from noisy data. J. Artif. Intell. Res., 61:1–64. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 1156–1165, New York, NY, USA. ACM. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, and Others. 2010. Building watson: An overview of the DeepQA project. AI magazine, 31(3):59–79. Hervé Gallaire and Jack Minker, editors. 1978. Logic and Data Bases, Symposium on Logic and Data Bases, Centre d’études et de recherches de Toulouse, 1977, Advances in Data Base Theory. Plemum Press, New York. Dan Garrette, Katrin Erk, and Raymond Mooney. 2011. Integrating logical representations with probabilistic information using markov logic. In Proceedings of the Ninth International Conference on Computational Semantics, pages 105–114. Association for Computational Linguistics. Dan Garrette, Katrin Erk, and Raymond Mooney. 2014. A formal approach to linking logical form and vector-space lexical semantics. In Computing meaning, pages 27–48. Springer. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 6160 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471. 
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2019. A survey of methods for explaining black box models. ACM Comput. Surv., 51(5):93:1–93:42. M. M. Gupta and J. Qi. 1991. Theory of T-norms and Fuzzy Inference Methods. Fuzzy Sets and Systems, 40(3):431–450. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Kevin Jarrett, Koray Kavukcuoglu, Marc’Aurelio Ranzato, and Yann LeCun. 2009. What is the best multistage architecture for object recognition? In ICCV, pages 2146–2153. IEEE Computer Society. Pascual Julián-Iranzo, Clemente Rubio-Manzano, and Juan Gallardo-Casero. 2009. Bousi prolog: a prolog extension language for flexible query answering. Electron. Notes Theor. Comput. Sci., 248(Supplement C):131–147. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378–1387. Zachary C. Lipton. 2018. The mythos of model interpretability. Commun. ACM, 61(10):36–43. Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. COGEX: A logic prover for question answering. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 87–93, Stroudsburg, PA, USA. Association for Computational Linguistics. Stephen Muggleton. 1991. Inductive logic programming. New generation computing, 8(4):295–318. Stephen Muggleton and Luc De Raedt. 1994. Inductive logic programming: Theory and methods. J. Log. Program., 19/20:629–679. Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. arXiv preprint arXiv:1504.06662. Christina Niklaus, Matthias Cetto, André Freitas, and Siegfried Handschuh. 2018. A survey on open information extraction. In COLING, pages 3866–3878. Association for Computational Linguistics. Mehmet A Orgun and Wanli Ma. 1994. An overview of temporal and modal logic programming. In Temporal logic, pages 445–479. Springer. Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2017. Unsupervised learning of sentence embeddings using compositional n-gram features. arXiv preprint arXiv:1703.02507. Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. 2015. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508. Sampo Pyysalo, Filip Ginter, Hans Moen, Tapio Salakoski, and Sophia Ananiadou. 2013. Distributional semantics resources for biomedical text processing. Martin Raison, Pierre-Emmanuel Mazaré, Rajarshi Das, and Antoine Bordes. 2018. Weaver: Deep coencoding of questions and documents for machine reading. arXiv preprint arXiv:1804.10490. Matthew Richardson and Pedro M. Domingos. 2006. Markov logic networks. Machine Learning, 62(12):107–136. Tim Rocktäschel and Sebastian Riedel. 2017. 
End-toend differentiable proving. In Advances in Neural Information Processing Systems, pages 3788–3800. Tim Rocktäschel, Sameer Singh, and Sebastian Riedel. 2015. Injecting logical background knowledge into embeddings for relation extraction. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1119– 1129. Stuart J. Russell and Peter Norvig. 2010a. Artificial Intelligence - A Modern Approach (3. internat. ed.). Pearson Education. Stuart J Russell and Peter Norvig. 2010b. Artificial Intelligence: A Modern Approach. Stuart J Russell and Peter Norvig. 2016. Artificial intelligence: a modern approach. Malaysia; Pearson Education Limited,. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016a. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. 6161 Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2016b. Query-reduction networks for question answering. arXiv preprint arXiv:1606.04582. Maria I Sessa. 2002. Approximate reasoning by similarity-based sld resolution. Theoretical computer science, 275(1-2):389–426. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Dirk Weissenborn, Tomas Kocisky, and Chris Dyer. 2017a. Dynamic integration of background knowledge in neural nlu systems. CoRR, abs/1706.02596. Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bosnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, and Sebastian Riedel. 2018. Jack the reader - A machine reading framework. In Proceedings of ACL 2018, Melbourne, Australia, July 15-20, 2018, System Demonstrations, pages 25–30. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017b. Fastqa: A simple and efficient neural architecture for question answering. arxiv preprint. arXiv preprint arXiv:1703.04816. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. arXiv preprint arXiv:1710.06481. David S Wishart, Craig Knox, An Chi Guo, Savita Shrivastava, Murtaza Hassanali, Paul Stothard, Zhan Chang, and Jennifer Woolsey. 2006. Drugbank: a comprehensive resource for in silico drug discovery and exploration. Nucleic acids research, 34(suppl_1):D668–D672. Victor Zhong, Caiming Xiong, Nitish Shirish Keskar, and Richard Socher. 2019. Coarse-grain fine-grain coattention network for multi-evidence question answering. arXiv preprint arXiv:1901.00603. Appendices A Algorithms fun unify(x, y, θ, S) Input: x: function f(. . .) | atom p(. . .) | variable | list x1 :: x2 :: . . . :: xn y: function f′(. . .) | atom p′(. . .) | variable | list y1 :: y2 :: . . . :: ym θ: current substitutions, default = {} S: current success score, default = 1.0 Output: (Unifying substitution θ′ or failure, Updated success score S′) if θ = failure then return (failure, 0) else if S < λ then return (failure, 0) else if x = y then return (θ, S) else if x is Var then return unify_var(x, y, θ, S) else if y is Var then return unify_var(y, x, θ, S) else if x is f(x1, . . . , xn), y is f′(y1, . . . , yn), and f ∼f′ ≥λ then S′ := S ∧f ∼f′ return unify(x1 :: . . . :: xn, y1 :: . . . :: yn, θ, S′) end else if x is p(x1, . . . , xn), y is p′(y1, . . . , yn), and p ∼p′ ≥λ then S′ := S ∧f ∼f′ return unify(x1 :: . . . 
:: xn, y1 :: . . . :: yn, θ, S′)
end
else if x is x1 :: . . . :: xn and y is y1 :: . . . :: yn then
    (θ′, S′) := unify(x1, y1, θ, S)
    return unify(x2 :: . . . :: xn, y2 :: . . . :: yn, θ′, S′)
end
else if x is the empty list and y is the empty list then return (θ, S)
else return (failure, 0)

fun unify_var(v, o, θ, S)
    if {v/val} ∈ θ then return unify(val, o, θ, S)
    else if {o/val} ∈ θ then return unify(v, val, θ, S)
    else return ({v/o} + θ, S)

Algorithm 1: The weak unification algorithm in NLPROLOG without occurs check
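For readers who prefer executable code, the following Python rendering mirrors Algorithm 1. The similarity function, the aggregation operator, and the convention that variables start with an upper-case letter are placeholders chosen for illustration; lists and argument sequences are folded into a single tuple case for brevity, so this is a sketch rather than the exact implementation.

```python
# Weak unification without occurs check. `sim` stands in for the learned
# similarity function and `agg` for the t-norm style aggregation (minimum
# or product); `lam` is the similarity threshold.
FAIL = (None, 0.0)

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def weak_unify(x, y, theta, score, sim, agg, lam):
    if theta is None or score < lam:
        return FAIL
    if x == y:
        return theta, score
    if is_var(x):
        return unify_var(x, y, theta, score, sim, agg, lam)
    if is_var(y):
        return unify_var(y, x, theta, score, sim, agg, lam)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        # x = f(x1, ..., xn), y = f'(y1, ..., yn): weakly unify f with f',
        # then the arguments one by one, aggregating the scores.
        if sim(x[0], y[0]) < lam:
            return FAIL
        score = agg(score, sim(x[0], y[0]))
        for xi, yi in zip(x[1:], y[1:]):
            theta, score = weak_unify(xi, yi, theta, score, sim, agg, lam)
            if theta is None:
                return FAIL
        return theta, score
    return FAIL

def unify_var(v, o, theta, score, sim, agg, lam):
    if v in theta:
        return weak_unify(theta[v], o, theta, score, sim, agg, lam)
    if o in theta:
        return weak_unify(v, theta[o], theta, score, sim, agg, lam)
    return {**theta, v: o}, score

# Example: two textual surface patterns unify if they are similar enough.
sim = lambda a, b: 1.0 if a == b else 0.8   # stand-in for the learned similarity
agg = lambda s1, s2: s1 * s2                # product t-norm
print(weak_unify(("is located in", "athens", "X"),
                 ("lies in", "athens", "greece"), {}, 1.0, sim, agg, 0.5))
# -> ({'X': 'greece'}, 0.8)
```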
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6162–6167 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6162 Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions Jierui Li1, Lei Wang1∗, Jipeng Zhang1, Yan Wang2, Bing Tian Dai3, Dongxiang Zhang45 1Center for Future Media and School of Computer Science & Engineering, UESTC, 2Tencent AI Lab 3School of Information Systems, Singapore Management University, 4Afanti Research, 5Zhejiang University {lijierui, zhangjipeng20}@std.uestc.edu.cn, [email protected] [email protected], [email protected], [email protected] Abstract Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed for MWPs. To utilize the merits of deep learning models with simultaneous consideration of MWPs’ specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantitypair features and question-related features in MWPs respectively. The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.9% to 69.5% on Math23K with training-test split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS. 1 Introduction Computer systems, dating back to 1960s, have been developing to automatically solve math word problems (MWPs) (Feigenbaum and Feldman, 1963; Bobrow, 1964). As illustrated in Table 1, when solving this problem, machines are asked to infer “how many shelves would Tom fill up ” based on the textual problem description. It requires systems having the ability to map the natural language text into the machine-understandable form, reason in terms of sets of numbers or unknown variables, and then derive the numeric answer. In recent years, a growing number of deep learning models for MWPs (Wang et al., 2017; Ling et al., 2017; Wang et al., 2018b,a; Huang et al., 2018a,b; Wang et al., 2019) have drawn inspiration from advances in machine translation. ∗corresponding author Problem: For a birthday party Tom bought 4 regular sodas and 52 diet sodas. If his fridge would only hold 7 on each shelf, how many shelves would he fill up? Equation: x = (4.0 + 52.0)/7.0 Solution: 8 Table 1: A math word problem. The core idea is to leverage the immense capacity of neural networks to strengthen the process of equation generating. Compared to statistical machine learning-based methods (Kushman et al., 2014; Mitra and Baral, 2016; Roy and Roth, 2018; Zhou et al., 2015; Huang et al., 2016) and semantic parsing-based methods (Shi et al., 2015; Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Huang et al., 2017), these methods do not need hand-crafted features and achieve high performance on large datasets. However, they lack in capturing the specific MWPs features, which are an evidently vital component in solving MWP. More related work and feature-related information can be found in Zhang et al. (2018). Inspired by recent work on modeling locality using multi-head attention (Li et al., 2018; Yang et al., 2018, 2019), we introduce a group attention that contains different attention mechanisms to extract various types of MWPs features. 
More explicitly, there are four kinds of attention mechanisms: 1) Global attention to grab global information; 2) Quantity-related attention to model the relations between the current quantity and its neighbor-words; 3) Quantity-pair attention to acquire the relations between quantities; 4) Question-related attention to capture the connections between the question and quantities. The experimental results show that the proposed model establishes the state-of-the-art performance 6163 on both Math23K and MAWPS datasets. In addtion, we release the source code of our model in Github1. 2 Background: Self-Attention Network Self-attention networks have shown impressive results in various natural language processing tasks, such as machine translation (Vaswani et al., 2017; Shaw et al., 2018) and natural language inference (Shen et al., 2018) due to their flexibility in parallel computation and power of modeling long dependencies. It can model pairwise relevance by calculating attention weights between pairs of elements of an input sequence. In Vaswani et al. (2017), they propose a self-attention computation module, known as “Scaled Dot-Product Attention”(SDPA). It is used as the basic unit of multihead attention. This module’s input contains query matrix Q ∈Rm×dk, key matrix K ∈Rm×dk and value matrix V ∈Rm×dv, where m is the number of input tokens, dk is the dimension of query or key vector, dv is the dimension of value vector. Output can be computed by: head = softmax(QKT √dk )V, (1) As Vaswani et al. (2017) found, performing attention by projecting the queries, keys, and values into subspace with different learnable projection functions instead of a single attention can enhance the capacity to capture various context information. More specifically, this attention model first transforms Q, K, and V into {Qh, Kh, Vh} with weights {W h Q, W h K, W h V }, and then obtains the output features {head1, head2, · · · , headk} by SDPA, where k is the number of SDPA modules. Finally, these output features are concatenated and projected to produce the final output state O ′. 3 Approach In this section, we introduce how the proposed framework works and the four different types of attention we designed. 3.1 Overview We propose a sequence-to-sequence (SEQ2SEQ) model with group attention to capture different types of features in MWPs. The SEQ2SEQ model 1 https://github.com/lijierui/ group-attention Group Attention Block has 57 Janet apples … … … … … … Bi-LSTM LSTM Attention <S> <E> + !" !# $ & ' ( ) * K Add&Norm Add&Norm Feed Forward Group Attention Figure 1: Framework of our approach. takes the text of the whole problem as the input and corresponding equation as the output. Specifically, the group attention consists of four different types of multi-head attention modules. As illustrated in Figure 1, the pre-processed input X = {x1, · · · , xm} is transformed into He = {he 1, · · · , he m} through Bi-LSTM. We set Q = K = V = He. The output of the group attention O ′ is produced by: O ′ = GroupAtt(Q, K, V), (2) Following the same paradigm in (Vaswani et al., 2017), we add a fully-connected feed forward layer to the multi-head attention mechanism layer (i.e., group attention), and each layer is followed by a residual connection and layer normalization. Consequently, the output of group attention block O is obtained. During decoding, we employ the pipeline in (Wang et al., 2018a). 
The output Y is obtained through yt = Softmax(Attention(hd t , oj)), (3) where hd t is the hidden state at the t-th step, oj is the j-th state vector from the output O of the group attention block. 3.2 Pre-Processing of MWPs Given a MWP P and its corresponding groudtruth equation, we project words of the MWP {wP i }m i=1 into word embedding vectors {eP i }m i=1 through a word embedding matrix E, i.e., eP i = EwP i . Considering the diversity of quantities in natural language, we follow the work of Wang et al. (2017) which proposed to map quantities 6164 Figure 2: Example for how to separate MWPs. into special tokens in the problem text by the following two rules: 1) All the quantities that appear in the MWP are determined if they are significant quantities that will be used in the equation using Significant Number Identify (SNI); 2) All recognized significant quantities in the MWP P are mapped to a list of mapped quantity tokens {n1, ..., nl} in terms of their appearance order in the problem text, where l is the number of quantities. Through the above rules, the mapped MWP text X = {x1, · · · , xm} that will be used as the input of the SEQ2SEQ model can be acquired. In addition, the quantity tokens in the equation are also substituted according to the corresponding mapping in problem text. For example, the mapped quantity tokens and the mapped equation of the problem in Table 1 are {n1 = 4, n2 = 52, n3 = 7} and (n1 + n2) ÷ n3 respectively. To address the issue that a MWP may have more than one correct solution equations (e.g., 3×2 and 2×3 are both correct equations to solve the problem ”How many apples will Tom eat after 3 days if he eats 2 apples per day?”), we normalize the equations to postfix expressions following the rules in Wang et al. (2018a), ensuring that every problem is corresponding to a unique equation. Thus, we can obtain the mapped equation Eq that will be regarded as the target sequence. 3.3 Group Attention With the aim of implementing group attention, as illustrated in Figure 2, we separate the problem text X = {x1, · · · , xm} into quantity spans Xquant = {Xquant,1, · · · , Xquant,l} and the question span Xquest. The quantity span includes one or more quantity and their neighborhood words, and the question span consists of words of the question. For simplicity, the spans are separated by commas and periods, which naturally separate the sentence semantically and each span often contains one quantity, and spans with quantity (but not last) are considered as quantity spans while the last span is considered as question span since it always contains the question. By doing this, spans do not Figure 3: Group attention: (a) Global attention; (b) Quantity-related attention; (c) Quantity-pair attention; (d) Question-related attention. overlap with each other. As illustrated in Figure 3, following how the problem text is divided, {Q, K, V } are masked into the input of group attention, {Qg, Kg, Vg}, {Qc, Kc, Vc}, {Qp, Kp, Vp} and {Qq, Kq, Vq}, where g, c, p, and q are the notations of global, quantity-related, quantitypair and question-related attention. After that, {Og, Oc, Op, Oq} are computed by different groups of SDPA modules. The output of group attention O is produced by concatenating and projecting again: O ′ = Concat(Og, Oc, Op, Oq), (4) We will describe four types of group attention in detail in the following passage. Global Attention: Document-level features play an important role in distinguishing the category of MWPs and quantities order in equations. 
To capture these features from a global perspective, we introduce a type of attention named as global attention, which computes the attention vector based on the whole input sequence. For Qg, Kg, and Vg, we set them to He. The output Og can be obtained by SDPA modules belonging to global attention. For example, the word “apple” illustrated in Figure 2 will attend to the words in the whole problem text from “Janet” to “?”. Quantity-Related Attention: The words around quantity usually provide beneficial clues for MWPs solving. Hence, we introduce quantityrelated attention, which focuses on the question span or quantities span where the current quantity resides. 6165 For i-th span, its Qc, Kc, and Vc are all derived from Xquant,i within its own part. For example, as illustrated in Figure 2, the word “she” only attends to the words in the 2-nd quantity span “She finds another 95,”. Quantity-Pair Attention: The relationship between two quantities is of great importance in determining their associated operator. We design an attention module called quantity-pair attention, which is used to model this relationship between quantities. The question span can be viewed as the quantity span containing an unknown quantity. Thus, the computation process consists of two parts: 1) Attention between quantities: the query Qp is derived from Xquant,i, and corresponding Kp and Vp are stemmed from Xquant,j(j ̸= i). For example, as illustrated in Figure 2, the word “has” in the 1-st quantity span can only attend to words from the 2nd quantity span; 2) Attention between quantities and question: the query Qp is originated Xquest within the question span, and corresponding Kp and Vp are derived from Xquant. For example, as illustrated in Figure 2, the word “How” attends to the words in the quantity spans from “Janet” to “95,”. Question-Related Attention: The question can also derive distinguishing information such as whether the answer value is positive. Thus, we propose question-related attention, which is utilized to model the connections between question and problem description stem. There are also two parts when modeling this type of relation: 1) Attention for quantity span: the query Qq is derived from Xquant,i, the corresponding Kq and Vq are stemmed from Xquest. For example, as illustrated in Figure 2, the word “apples” in quantity span only attends to the words from the question span; 2) Attention for question span: for the query Qq corresponding to Xquest, the corresponding Kq and Vq are extracted according to Xquant. For example, as illustrated in Figure 2, the word “does” in question span attends to the words in all the quantity spans. 4 Experiment 4.1 Experimental Setup We evaluate the proposed model on these datasets, Math23K (Wang et al., 2017) and MAWPS (Koncel-Kedziorski et al., 2016). Datasets: Math23K is collected from multiple online educational websites. This dataset contains 23,162 Chinese elementary school level MWPs. MAWPS is another large scale dataset which owns 2,373 arithmetic word problems after harvesting ones with a single unknown variable. Evaluation Metrics: We use answer accuracy to evaluate our model. The accuracy calculation follows a simple formula. If a generated equation produces an answer equal to the corresponding ground truth answer, we consider it to be right. Implementation details: For Math23K, we follow the training and test set released by (Wang et al., 2017), and we also evaluate our proposed method with 5-fold cross-validation in main results table. 
We adopt the pre-trained word embeddings with dimension set to 128 and use a twolayer Bi-LSTM with 256 hidden units and a group attention with four different functional 2-head attention as the encoder, and a two-layer LSTM with 512 hidden units as the decoder. Dropout probabilities for word embeddings, LSTM and group attention are all set to 0.3. The number of epochs and mini-batch size are set to 300 and 128 respectively. As to the optimizer, we use the Adam optimizer with β1 = 0.9, β2 = 0.98 and e = 10−9. Refer to (Vaswani et al., 2017), we use the same policy to vary the learning rate with warmup steps=2000. For MAWPS, we use 5fold cross-validation, and the parameter setting is similar to those on Math23K. Baselines: We compare our approach with retrieval models, deep learning based solvers. The retrieval models Jaccard and Cosine in (Robaidek et al., 2018) find the most similar math word problem in training set under a distance metric and use its equation template to compute the result. DNS (Wang et al., 2017) first applies a vanilla SEQ2SEQ model with GRU as encoder and LSTM as the decoder to solve MWPs. In (Wang et al., 2018a), the authors apply BiLSTM with equation normalization to reinforce the vanilla SEQ2SEQ model. T-RNN (Wang et al., 2019) launches a two-stage system named as T-RNN that first predicts a tree-structure template to be filled, and then accomplishes the template with operators predicted by the recursive neural network. In S-Aligned (Chiang and Chen, 2019), the encoder is designed to understand the semantics of problems, and the decoder focuses on deciding which symbol to generate next over semantic meanings of the generated symbols. 6166 4.2 Main Results MAWPS Math23K Math23K* Jaccard 45.6 47.2 Cosine 38.2 23.8 DNS 59.5 58.1 Bi-LSTM 69.2 66.7 T-RNN 66.8 66.9 S-Aligned 65.8 GROUP-ATT 76.1 69.5 66.9 Table 2: Model comparison. Notice that Math23K means the open training-test split and Math23K* means 5-fold cross-validation. As illustrated in Table 2, we can see that retrieval approaches work poorly on both two datasets. Our method named as GROUP-ATT performs substantially better than existing deep learning based methods, increasing the accuracy from 66.9% to 69.5% on Math23K based on trainingtest split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS. In addition, DNS and T-RNN also boost the performance by integrating with retrieval methods, while (Wang et al., 2018a) improves the performance by combining different SEQ2SEQ models. However, we only focus on improving the performance of single model. It is worth noting that GROUP-ATT also achieves higher accuracy than the state-of-the-art ensemble models (Wang et al., 2019) (68.7% on Math23K based on training-test split, 67.0% on MAWPS). Math23K Bi-LSTM 66.7 w/ Global Attention 68.2 w/ Quantity-Related Attention 68.2 w/ Quantity-Pair Attention 67.7 w/ Question-Related Attention 68.1 Table 3: The ablation study to quantify the role of each type of attention in group attention. In addition, we perform an ablation study to empirically examine the ability of designed group attentions. We adopt the same parameter settings as GROUP-ATT while applying a single kind of attention with 8 heads. Table 3 shows the results of ablation study on Math23K. Although each specified attention tries to catch related information alone, it still outperforms Bi-LSTM by a margin from 1.0% to 1.5%, showing its effectiveness. In a parking lot, there are !" 
cars and motorcycles in total, each car has !# wheels, and each motorcycle has n& wheels. These cars have !' wheels in total, so how many motorcycles are there in the parking lot? equa,-.!: 0 = (!"!# −!')/(!# −!&) Attention for which word Quantity-pair attention Quantity-related attention Question-related attention Figure 4: An example of attention visualization 4.3 Visualization Analysis of Attention To better understand how the group attention mechanism works, we implement an attention visualization on a typical example from Math23K. As shown in Figure 4, n3 describes how many wheels a motorcycle has. Through quantity-pair and quantity-related attention heads, n3 pays attention to all quantities that describe the number of wheels. Question-related attention helps n3 attend to “motorcycle” in question. In addition, surprisingly, in the quantity-pair heads, the attention of n3 becomes more focused on the words “These”, “in total” from “These vehicles have n4 wheels in total”. This indicates part-whole relation(i.e., one quantity is part of a larger quantity), mentioned in (Mitra and Baral, 2016; Roy and Roth, 2018), which is of great importance in MWPs solving. Our analysis illustrates that the hand-crafted grouping can force the model to utilize distinct information and relations conducive to solving MWPs. 5 Conclusion In this paper, we introduce a group attention method which can reinforce the capacity of model to grab various types of MWPs specific features. We conduct experiments on two benchmarks and show significant improvements over a collection of competitive baselines, verifying the value of our model. Plus, our ablation study demonstrates the effectiveness of each group attention mechanism. References D. Bobrow. 1964. Natural language input for a computer problem solving system. In Semantic information processing, pages 146–226. MIT Press. Ting-Rui Chiang and Yun-Nung Chen. 2019. Semantically-aligned equation generation for 6167 solving and reasoning math word problems. In NAACL-HLT. Edward A. Feigenbaum and Julian Feldman. 1963. Computers and Thought. McGraw-Hill, Inc., New York, NY, USA. Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin. 2018a. Neural math word problem solver with reinforcement learning. In COLING, pages 213–223. Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian Yin. 2017. Learning fine-grained expressions to solve math word problems. In EMNLP, pages 805– 814. Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset construction and evaluation. Danqing Huang, Jin-Ge Yao, Chin-Yew Lin, Qingyu Zhou, and Jian Yin. 2018b. Using intermediate representations to solve math word problems. In ACL, pages 419–428. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. TACL, 3:585–597. Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. MAWPS: A math word problem repository. In NAACL, pages 1152–1157. Nate Kushman, Luke Zettlemoyer, Regina Barzilay, and Yoav Artzi. 2014. Learning to automatically solve algebra word problems. In ACL, pages 271– 281. Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, and Tong Zhang. 2018. Multi-head attention with disagreement regularization. In EMNLP, pages 2897–2903. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. 
Program induction by rationale generation: Learning to solve and explain algebraic word problems. In ACL, pages 158–167. Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In ACL. Benjamin Robaidek, Rik Koncel-Kedziorski, and Hannaneh Hajishirzi. 2018. Data-driven methods for solving algebra word problems. CoRR. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In EMNLP, pages 1743– 1752. Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. TACL, 6:159–172. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In NAACL-HLT. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In AAAI. Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In EMNLP, pages 1132–1142. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998–6008. Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang, and Xiaojiang Liu. 2018a. Translating a math word problem to an expression tree. In EMNLP. Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In AAAI. Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu, Lianli Gao, Bing Tian Dai, and Heng Tao Shen. 2019. Template-based math word problem solvers with recursive neural networks. In AAAI. Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In EMNLP, pages 845–854. Baosong Yang, Jian Li, Derek F. Wong, Lidia S. Chao, Xing Wang, and Zhaopeng Tu. 2019. Context-aware self-attention networks. CoRR, abs/1902.05766. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In EMNLP, pages 4449–4458. Dongxiang Zhang, Lei Wang, Nuo Xu, Bing Tian Dai, and Heng Tao Shen. 2018. The gap of semantic parsing: A survey on automatic math word problem solvers. arXiv preprint arXiv:1808.07290. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In EMNLP, pages 817– 822.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 646–653 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 646 Evaluating Discourse in Structured Text Representations Elisa Ferracane1, Greg Durrett2, Junyi Jessy Li1 and Katrin Erk1 1Department of Linguistics 2Department of Computer Science The University of Texas at Austin [email protected], [email protected] [email protected], [email protected] Abstract Discourse structure is integral to understanding a text and is helpful in many NLP tasks. Learning latent representations of discourse is an attractive alternative to acquiring expensive labeled discourse data. Liu and Lapata (2018) propose a structured attention mechanism for text classification that derives a tree over a text, akin to an RST discourse tree. We examine this model in detail, and evaluate on additional discourse-relevant tasks and datasets, in order to assess whether the structured attention improves performance on the end task and whether it captures a text’s discourse structure. We find the learned latent trees have little to no structure and instead focus on lexical cues; even after obtaining more structured trees with proposed model modifications, the trees are still far from capturing discourse structure when compared to discourse dependency trees from an existing discourse parser. Finally, ablation studies show the structured attention provides little benefit, sometimes even hurting performance.1 1 Introduction Discourse describes how a document is organized, and how discourse units are rhetorically connected to each other. Taking into account this structure has shown to help many NLP end tasks, including summarization (Hirao et al., 2013; Durrett et al., 2016), machine translation (Joty et al., 2017), and sentiment analysis (Ji and Smith, 2017). However, annotating discourse requires considerable effort by trained experts and may not always yield a structure appropriate for the end task. As a result, having a model induce the discourse structure of a text is an attractive option. Our goal in this paper is to evaluate such an induced structure. 1Code and data available at https://github.com/ elisaF/structured Inducing structure has been a recent popular approach in syntax (Yogatama et al., 2017; Choi et al., 2018; Bisk and Tran, 2018). Evaluations of these latent trees have shown they are inconsistent, shallower than their explicitly parsed counterparts (Penn Treebank parses) and do not resemble any linguistic syntax theory (Williams et al., 2018). For discourse, Liu and Lapata (2018) (L&L) induce a document-level structure while performing text classification with a structured attention that is constrained to resolve to a non-projective dependency tree. We evaluate the document-level structure induced by this model. In order to compare the induced structure to existing linguisticallymotivated structures, we choose Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), a widely-used framework for discourse structure, because it also produces tree-shaped structures.2 We evaluate on some of the same tasks as L&L, but add two more tasks we theorize to be more discourse-sensitive: text classification of writing quality, and sentence order discrimination (as proposed by Barzilay and Lapata (2008)). Our research uncovers multiple negative results. 
We find that, contrary to L&L, the structured attention does not help performance in most cases; further, the model is not learning discourse. Instead, the model learns trees with little to no structure heavily influenced by lexical cues to the task. In an effort to induce better trees, we propose several principled modifications to the model, some of which yield more structured trees. However, even the more structured trees bear little resemblance to ground truth RST trees. We conclude the model holds promise, but re2The Penn Discourse Treebank (PDTB; Prasad et al., 2008) captures lexically-grounded discourse for individual connectives and adjacent sentences, and does not span an entire document; Segmented Discourse Representation Theory (Lascarides and Asher, 2008) is a graph. 647 inter-sentence biLSTM structured attention s1 s2 st … d1 semantic discourse e1 d2 e2 dt et y max-pooling document-level words sentences class structured attention +compose structured attention +compose document sentence Compose eʹt eʹ2 eʹ1 … … Figure 1: Model of Liu and Lapata (2018) with the document-level portion (right) that composes sentences into a document representation. quires moving beyond text classification, and injecting supervision (as in Strubell et al. (2018)). Our contributions are (1) comprehensive performance results on existing and additional tasks and datasets showing document-level structured attention is largely unhelpful, (2) in-depth analyses of induced trees showing they do not represent discourse, and (3) several principled model changes to produce better structures but that still do not resemble the structure of discourse. 2 Rhetorical Structure Theory (RST) In RST, coherent texts consist of minimal units, which are linked to each other, recursively, through rhetorical relations (Mann and Thompson, 1988). Thus, the goal of RST is to describe the rhetorical organization of a text by using a hierarchical tree structure that captures the communicative intent of the writer. An RST discourse tree can further be represented as a discourse dependency tree. We follow the algorithm of Hirao et al. (2013) to create an unlabelled dependency tree based on the nuclearity of the tree. 3 Models We present two models: one for text classification, and one for sentence ordering. Both are based on the L&L model, with a design change to cause stronger percolation of information up the tree (we also experiment without this change). Text classification The left-hand side of Figure 1 presents an overview of the model: the model operates first at the sentence-level to create sentence representations, and then at the document-level to create a document representation from the previously created sentence representations. In more detail, the model composes GloVe embeddings (Pennington et al., 2014) into a sentence representation using structured attention (from which a tree can be derived), then sentence representations into a single document representation for class prediction. At both sentence and document level, each object (word or sentence, respectively) attends to other objects that could be its parent in the tree. Since the sentence and document-level parts of the model are identical, we focus on the document level (Figure 1, right), which is of interest to us for evaluating discourse effects. Sentence representations s1, . . . , st are fed to a bidirectional LSTM, and the hidden representations [h1, . . . , ht] consist of a semantic part (et) and a structure part (dt): [et, dt] = ht. 
Unnormalized scores fij representing potentials between parent i and child j are calculated using a bilinear function over the structure vector: tp = tanh(Wpdi); tc = tanh(Wcdj) (1) fij = tT p Watc (2) The matrix-tree theorem allows us to compute marginal probabilities aij of dependency arcs under the distribution over non-projective dependency trees induced by fij (details in Koo et al. (2007)). This computation is fully differentiable, allowing it to be treated as another neural network layer in the model. We importantly note the model only uses the marginals. We can post-hoc use the Chu-Liu-Edmonds algorithm to retrieve the highest-scoring tree under f, which we call fbest (Chu and Liu, 1965; Edmonds, 1967). The semantic vectors of sentences e are then updated using this attention. Here we diverge from the L&L model: in their implementation,3 each node is updated based on a weighted sum over its parents in the tree (their paper states both parents and children).4 We instead inform each node by a sum over its children, more in line with past work where information more intuitively percolates from children to parents and not the other way (Ji and Smith, 2017) (we also run experiments without this design change). We calculate the context for all possible children of that sentence as: ci = n X k=1 aikek (3) where aik is the probability that k is the child of i, and ek is the semantic vector of the child. The children vectors are then passed through a non-linear function, resulting in the updated semantic vector e′ i for parent node i. e′ i = tanh(Wr[ei, ci]) (4) 3https://github.com/nlpyang/structured 4We found similar results for using both parents and children as well as using parents only. 648 Yelp Debates WQ WQTC WSJSO L&L(orig) 68.51 | 68.27 (0.19) 81.82 | 79.48 (2.90) 84.14 | 82.69 (1.36) 80.73 | 79.63 (1.03) 96.17 | 95.29 (0.84) L&L(ours) 68.51 | 68.23 (0.23) 78.88 | 77.81 (1.80) 84.14 | 82.70 (1.36) 82.49 | 81.11 (0.95) 95.57 | 94.76 (1.11) −doc attn 68.34 | 68.13 (0.17) 82.89 | 81.42 (1.08) 83.75 | 82.80 (0.94) 80.60 | 79.25 (0.94) 95.57 | 95.11 (0.42) −both attn 68.19 | 68.05 (0.13) 79.95 | 77.34 (1.79) 84.27 | 83.16 (1.25) 77.58 | 76.16 (1.25) 95.23 | 94.68 (0.37) L&L(reported) 68.6 76.5 Table 1: Max | mean (standard deviation) accuracy on the test set averaged across four training runs with different initialization weights. Bolded numbers are within 1 standard deviation of the best performing model. L&L(orig) uses the original L&L code; L&L(ours) includes the design change and bug fix. L&L(reported) lists results reported by L&L on a single training run. Finally, a max pooling layer over e′ i followed by a linear layer produces the predicted document class y. The model is trained with cross entropy loss. Additionally, the released L&L implementation has a bug where attention scores and marginals are not masked correctly in the matrix-tree computation, which we correct. Sentence order discrimination This model is identical, except for task-specific changes. The goal of this synthetic task, proposed by Barzilay and Lapata (2008), is to capture discourse coherence. A negative class is created by generating random permutations of a text’s original sentence ordering (the positive class). A coherence score is produced for each positive and negative example, with the intuition that the originally ordered text will be more coherent than the jumbled version. 
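Both tasks share the document-level structured attention just described. The sketch below shows how Eqs. 1-4 could be implemented in PyTorch; for brevity, the exact matrix-tree marginals of Koo et al. (2007) are replaced by a per-child softmax over candidate parents, so this is an illustrative simplification rather than the released L&L implementation, and all module and dimension names are our own assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DocStructuredAttention(nn.Module):
    """Document-level structured attention and compose step (Eqs. 1-4).
    The exact matrix-tree marginals are replaced here by a per-child
    softmax over candidate parents, purely for illustration."""
    def __init__(self, dim):
        super().__init__()
        self.W_p = nn.Linear(dim, dim, bias=False)      # parent transform (Eq. 1)
        self.W_c = nn.Linear(dim, dim, bias=False)      # child transform (Eq. 1)
        self.W_a = nn.Parameter(torch.empty(dim, dim))  # bilinear scorer (Eq. 2)
        nn.init.xavier_uniform_(self.W_a)
        self.W_r = nn.Linear(2 * dim, dim)              # compose (Eq. 4)

    def forward(self, e, d):
        # e, d: (batch, t, dim) semantic and structure vectors from the biLSTM.
        t_p = torch.tanh(self.W_p(d))
        t_c = torch.tanh(self.W_c(d))
        f = t_p @ self.W_a @ t_c.transpose(1, 2)        # f[b, i, j]: parent i, child j
        a = F.softmax(f, dim=1)                         # stand-in for the marginals a_ij
        c = a @ e                                       # c_i = sum_k a_ik * e_k  (Eq. 3)
        e_new = torch.tanh(self.W_r(torch.cat([e, c], dim=-1)))   # Eq. 4
        return e_new, a

Like the true arc marginals, the per-child softmax gives each sentence a distribution over candidate parents, which is the only property the compose step relies on.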
Because we compare two examples at a time (original and permuted order), we modify the model to handle paired inputs and replace cross-entropy loss with a max-margin ranking loss. 4 Experiments We evaluate the model on four text classification tasks and one sentence order discrimination task. 4.1 Datasets Details and statistics are included in Appendix A.5 Yelp (in L&L, 5-way classification) comprises customer reviews from the Yelp Dataset Challenge (collected by Tang et al. (2015)). Each review is labeled with a 1 to 5 rating (least to most positive). Debates (in L&L, binary classification) are transcribed debates on Congressional bills from the U.S. House of Representatives (compiled by Thomas et al. (2006), preprocessed by Yogatama 5Of the document-level datasets used in L&L (SNLI was sentence-level), we omit IMDB and Czech Movies because on IMDB their model did not outperform prior work, and Czech (a language with freer word order than English) highlighted the non-projectivity of their sentence-level trees, which is not the focus of our work. and Smith (2014)). Each speech is labeled with 1 or 0 indicating whether the speaker voted in favor of or against the bill. Writing quality (WQ) (not in L&L, binary classification) contains science articles from the New York Times (extracted from Louis and Nenkova (2013)). Each article is labeled as either ‘very good’ or ‘typical’ to describe its writing quality. While both classes contain well-written text, Louis and Nenkova (2013) find features associated with discourse including sentiment, readability, along with PDTB-style discourse relations are helpful in distinguishing between the two classes. Writing quality with topic control (WQTC) (not in L&L, binary classification) is similar to WQ, but controlled for topic using a topic similarity list included with the WQ source corpus.6 Wall Street Journal Sentence Order (WSJSO) (not in L&L, sentence order discrimination) is the WSJ portion of PTB (Marcus et al., 1993). 4.2 Settings For each experiment, we train the model four times varying only the random seed for weight initializations. The model is trained for a fixed amount of time, and the model from the timestep with highest development performance is chosen. We report accuracies on the test set, and tree analyses on the development set. Our implementation is built on the L&L released implementation, with changes as noted in Section 3. Preprocessing and training details are in Appendix A. 4.3 Results We report accuracy (as in prior work) in Table 1, and perform two ablations: removing the structured attention at the document level, and removing it at both document and sentence levels. Additionally, we run experiments on the original code 6An analysis in section 4.3 shows the WQ-trained model focuses on lexical items strongly related to the article topic. 649 Yelp Debates WQ WQTC WSJSO tree height 2.049 2.751 2.909 4.035 2.288 prop. of leaf nodes 0.825 0.849 0.958 0.931 0.892 norm. arc length 0.433 0.397 0.420 0.396 0.426 % vacuous trees 73% 38% 42% 14% 100% Table 2: Statistics for learned trees averaged across four runs (similar results without the design change or bug fix are in the Appendix Table 6). See Table 4 for gold statistics on WQTC. without the design change or bug fix to confirm our findings are similar (see L&L(orig) in Table 1). Document-level structured attention does not help. Structured attention at the sentence level helps performance for all except WQ, where no form of attention helps. 
However, structured attention at the document level yields mostly negative results, in contrast to the improvements reported in L&L. In Yelp, WSJSO, and WQ, there is no difference. In Debates, the attention hurts performance. Only in WQTC does the structured attention provide a benefit. While a single training run could produce the improvements seen in L&L, the results across four runs depict a more accurate picture. When inducing structures, it is particularly important to repeat experiments as the structures can be highly inconsistent due to the noise caused by random initialization (Williams et al., 2018). 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ROOT (1)madam speaker, i rise in opposition to h.r. 3283 on both process and policy grounds....(17)look beyond the majority’s smoke and mirrors, and vote against this ill-timed and ill-conceived legislation. Figure 2: Learned dependency tree from Debates. Trees do not learn discourse. Although document level structured attention provides little benefit in performance, we probe whether the model could still be learning some discourse. We visually inspect the learned fbest trees and in Table 2 we report statistics on them (see Appendix Table 6 for similar results with the original code). The visual inspection (Figure 2) reveals shallow trees (also reported in L&L), but furthermore the trees have little to no structure.7 We observe an interesting pattern where the model picks one of the first two or last two sentences as the root, and 7While shallow trees are expected in PDTB-style discourse, even these trees would exhibit meaningful structure between adjacent sentences, which is entirely absent here. Yelp uuu, sterne, star, rating, deduct, 0, edit, underwhelmed, update, allgemein Debates oppose, republican, majority, thank, gentleman, leadership, california, measure, president, vote WQ valley, mp3, firm, capital, universal, venture, silicon, analyst, capitalist, street Table 3: Top 10 words most associated with the root sentence (measured with PPMI). all other sentences are children of that node. We label these trees as ‘vacuous’ and the strength of this pattern is reflected in the tree statistics (Table 2). The height of trees is small, showing the trees are shallow. The proportion of leaf nodes is high, that is, most nodes have no children. Finally, the normalized arc length is high, where nodes that are halfway into the document still connect to the root. We further probe the root sentence, as the model places so much attention on it. We hypothesize the root sentence has strong lexical cues for the task, suggesting the model is instead attending to particular words. In Yelp, reviewers often start or end with a sentiment-laden sentence summarizing their rating. In Debates, speakers begin or end their speech by stating their stance on the bill. In WQ and WQTC, the interpretation of the root is less clear. In WSJSO, we find the root is always the first sentence of the correctly ordered document, which is reasonable and commonly attested in a discourse tree, but the remainder of the vacuous tree is entirely implausible. To confirm our suspicion that the root sentence is lexically marked, we measure the association between words appearing in the root sentence and those elsewhere by calculating their positive pointwise mutual information scores (Table 3). 
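A minimal sketch of this PPMI computation (our own illustrative implementation; the paper does not specify smoothing or frequency thresholds, so none are applied here):

import math
from collections import Counter

def ppmi_root_words(docs):
    """docs: list of (root_sentence_tokens, other_tokens) pairs.
    Returns PPMI(word, ROOT): the association between a word type and
    the event of appearing in the root sentence."""
    root_counts, all_counts = Counter(), Counter()
    for root_toks, other_toks in docs:
        root_counts.update(root_toks)
        all_counts.update(root_toks)
        all_counts.update(other_toks)
    n_all = sum(all_counts.values())
    p_root = sum(root_counts.values()) / n_all      # P(ROOT)
    scores = {}
    for w, c in root_counts.items():
        p_w = all_counts[w] / n_all                 # P(w)
        p_w_root = c / n_all                        # P(w, ROOT)
        scores[w] = max(0.0, math.log(p_w_root / (p_w * p_root)))  # positive PMI
    return scores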
In Yelp, we find root words often express sentiment and explicitly mention the number of stars given (‘sterne’ in German, or ‘uuu’ as coined by a particularly prolific Yelper), which are clear indicators of the rating label. For Debates, words express speaker opinion, politeness and stance which are strong markers for the binary voting label. The list for WQ revolves around tech, suggesting the model is learning topics instead of writing quality. Thus, in WQTC we control for topics. 5 Learning better structure We next probe whether the structure in L&L can be improved to be more linguistically appropriate, while still performing well on the end task. Given that structured attention helps only on WQTC and 650 Acc height leaf arc vacuous Full 81.11 4.035 0.931 0.396 14% -biLSTM 77.80 11.51 0.769 0.353 4% -biLSTM, +w 75.57 7.364 0.856 0.359 3% -biLSTM, +p 77.11 10.430 0.790 0.349 3% -biLSTM, +4p 81.71 9.588 0.811 0.353 3% parsed RST 25.084 0.567 0.063 0% Table 4: Mean test accuracy and tree statistics on the WQTC dev set (averaged across four runs). -biLSTM removes the document-level biLSTM, +w uses the weighted sum, +p performs 1 extra percolation, and +4p does 4 levels of percolation. The last row are (‘gold’) parsed RST discourse dependency trees. learns vacuous trees less frequently, we focus on this task. We experiment with three modifications. First, we remove the document-level biLSTM since it performs a level of composition that might prevent the attention from learning the true structure. Second, we note equation 3 captures possible children only at one level of the tree, but not possible subtrees. We thus perform an additional level of percolation over the marginals to incorporate the children’s children of the tree. That is, after equation 4, we calculate: c′ i = n X k=1 aike′ i; e′′ i = tanh(Wr[e′ i, c′ i]) (5) Third, the max-pooling layer gives the model a way of aggregating over sentences while ignoring the learned structure. Instead, we propose a sum that is weighted by the probability of a given sentence being the root, i.e., using the learned root attention score ar i : yi = Pn i=1 ar i e′′ i . We include ablations of these modifications and additionally derive RST discourse dependency trees,8 collapsing intrasentential nodes, as an approximation to the ground truth. The results (Table 4) show that simply removing the biLSTM produces trees with more structure (deeper trees, fewer leaf nodes, shorter arc lengths, and less vacuous trees), confirming our intuition that it was doing the work for the structured attention. However, it also results in lower performance. Changing the pooling layer from max to weighted sum both hurts performance and results in shallower trees (though still deeper than Full), which we attribute to this layer still being a pooling function. Introducing an extra level of tree percolation yields better trees but also a drop in performance. Finally, using 4 levels of percola8We use the RST parser in Feng and Hirst (2014) and follow Hirao et al. (2013) to derive discourse dependency trees. tion both reaches the accuracy of Full and retains the more structured trees.9 We hypothesize accuracy doesn’t surpass Full because this change also introduces extra parameters for the model to learn. While our results are a step in the right direction, the structures are decidedly not discourse when compared to the parsed RST dependency trees, which are far deeper with far fewer leaf nodes, shorter arcs and no vacuous trees. 
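For concreteness, the extra level of percolation (Eq. 5) and the root-weighted pooling could be sketched as follows, reusing the attention matrix a and updated vectors e' from the earlier sketch; the root-score tensor a_root and all other names are our own assumptions, and W_r stands for the same compose layer as in Eq. 4.

import torch

def percolate(a, e_prime, W_r):
    # One extra level of percolation over the marginals (Eq. 5): the
    # children's children now contribute to each parent's representation.
    c_prime = a @ e_prime                        # c'_i = sum_k a_ik e'_k
    return torch.tanh(W_r(torch.cat([e_prime, c_prime], dim=-1)))

def root_weighted_pool(a_root, e_final):
    # Replace max-pooling with a sum weighted by each sentence's
    # probability of being the root: y = sum_i a^r_i e''_i.
    # a_root: (batch, t), e_final: (batch, t, dim) -> (batch, dim)
    return (a_root.unsqueeze(-1) * e_final).sum(dim=1)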
Importantly, the tree statistics show the structures do not follow the typical right-branching structure in news: the trees are shallow, nodes often connect to the root instead of a more immediate parent, and the vast majority of nodes have no children. In work concurrent to ours, Liu et al. (2019) proposes a new iterative algorithm for the structured attention (in the same spirit as our extra percolations) and applies it to a transformer-based summarization model. However, even these induced trees are not comparable to RST discourse trees. The induced trees are multi-rooted by design (each root is a summary sentence) which is unusual for RST;10 their reported tree height and edge agreement with RST trees are low. 6 Conclusion In this paper, we evaluate structured attention in document representations as a proxy for discourse structure. We first find structured attention at the document level is largely unhelpful, and second it instead captures lexical cues resulting in vacuous trees with little structure. We propose several principled changes to induce better structures with comparable performance. Nevertheless, calculating statistics on these trees and comparing them to parsed RST trees shows they still contain no meaningful discourse structure. We theorize some amount of supervision, such as using ground-truth discourse trees, is needed for guiding and constraining the tree induction. Acknowledgments We thank the reviewers for insightful feedback. We acknowledge the Texas Advanced Computing Center for grid resources. The first author was supported by the NSF Graduate Research Fellowship Program under Grant No. 2017247409. 9More than 4 levels caused training to become unstable. 10Less than 25% of trees in the RST Discourse Treebank (Carlson et al., 2001) have more than 1 root; less than 8% have more than 2 roots. 651 References Regina Barzilay and Mirella Lapata. 2008. Modeling Local Coherence - An Entity-Based Approach. Computational Linguistics, 34(1):1–34. Yonatan Bisk and Ke Tran. 2018. Inducing grammars with and for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 25–35. Association for Computational Linguistics. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurovsky. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue, pages 1–10. Association for Computational Linguistics. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of the 2018 Association for the Advancement of Artificial Intelligence (AAAI). Y.J. Chu and T.H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396– 1400. Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1998–2008. Association for Computational Linguistics. Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71:233–240. Vanessa Wei Feng and Graeme Hirst. 2014. A lineartime bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511–521. Association for Computational Linguistics. 
Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single-document summarization as a tree knapsack problem. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1515–1520. Yangfeng Ji and Noah A. Smith. 2017. Neural Discourse Structure for Text Categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 996–1005. Association for Computational Linguistics. Shafiq Joty, Francisco Guzm´an, Llu´ıs M`arquez, and Preslav Nakov. 2017. Discourse structure in machine translation evaluation. Computational Linguistics, 43(4):683–722. Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Alex Lascarides and Nicholas Asher. 2008. Segmented discourse representation theory: Dynamic semantics with discourse structure. In Computing meaning, pages 87–124. Springer. Yang Liu and Mirella Lapata. 2018. ”Learning Structured Text Representations”. Transactions of the Association for Computational Linguistics, 6:63–75. Yang Liu, Ivan Titov, and Mirella Lapata. 2019. Single document summarization as tree induction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1745–1755, Minneapolis, Minnesota. Association for Computational Linguistics. Annie Louis and Ani Nenkova. 2013. What Makes Writing Great? First Experiments on Article Quality Prediction in the Science Journalism Domain. TACL. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics, 19(2):313–330. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn Discourse TreeBank 2.0. In LREC. Evan Sandhaus. 2008. The New York Times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038. Association for Computational Linguistics. 652 Duyu Tang, Bing Qin, and Ting Liu. 2015. ”Learning Semantic Representations of Users and Products for Document Level Sentiment Classification”. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1014– 1023. Association for Computational Linguistics. Matt Thomas, Bo Pang, and Lillian Lee. 2006. 
”Get out the vote: Determining support or opposition from Congressional floor-debate transcripts”. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327–335. Association for Computational Linguistics. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics, 6:253–267. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. ICLR. Dani Yogatama and Noah A. Smith. 2014. ”Linguistic Structured Sparsity in Text Categorization”. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 786–796. Association for Computational Linguistics. A Appendices Datasets Statistics for the datasets are listed in Table 5. For WQ, the very good class was created by Louis and Nenkova (2013) using as a seed the 63 articles in the New York Times corpus (Sandhaus, 2008) deemed to be high-quality writing by a team of expert journalists. The class was then expanded by adding all other science articles in the NYT corpus that were written by the seed authors (4,253 articles). For the typical class, science articles by all other authors were included (19,520). Because the data is very imbalanced, we undersample the typical class to be the same size as the very good. We split this data into 80/10/10 for training, development and test, with both classes equally represented in each partition. For WQTC, the original dataset authors provide a list of the 10 most topically similar articles for each article.11 We make use of this list to explicitly sample topically similar documents. 11http://www.cis.upenn.edu/˜nlp/ corpora/scinewscorpus.html Preprocessing For Debates and Yelp, we follow the same preprocessing steps as in L&L, but do not set a minimum frequency threshold when creating the word embeddings. For our three datasets, sentences are split and tokenized using Stanford Core NLP. Training For all models, we use the Adagrad optimizer with a learning rate of 0.05. For WQ, WQTC, and WSJSO, gradient clipping is performed using the global norm with a ratio of 1.0. The batch size is 32 for all models except WSJSO uses 16. All models are trained for a maximum of 8 hours on a GeForce GTX 1080 Ti card. Results Because our results hinge on multiple runs of experiments each initialized with different random weights, we include here more detailed versions of our results to more accurately illustrate their variability. Table 6 supplements Table 2 with tree statistics from L&L(orig), the model without the design change or bug fix, to illustrate the derived trees on this model are similar. Finally, Table 7 is a more detailed version of Table 4, which additionally includes maximum accuracy, standard deviation for accuracy, as well as the average parent entropy calculated over the latent trees. 653 Number of documents Dataset Classes Total Train Dev Test Vocab. Yelp 5 333K 266,522 33,333 33,317 53K Debates 2 1.5K 1,050 102 374 21K WQ 2 7.7K 6,195 775 763 150K WQTC 2 7.8K 6,241 777 794 131K WSJSO 2.4K 1,950 (35,165) 247 (4,392) 241 (4,383) 49K Table 5: Statistics for the datasets used in the text classification and discrimination tasks (calculated after preprocessing). For WSJSO, the number of generated pairs are in parentheses. 
                     Yelp            Debates         WQ              WQTC            WSJSO
tree height          2.049 (2.248)   2.751 (2.444)   2.909 (2.300)   4.035 (2.468)   2.288 (2.368)
prop. of leaf nodes  0.825 (0.801)   0.849 (0.869)   0.958 (0.971)   0.931 (0.966)   0.892 (0.888)
norm. arc length     0.433 (0.468)   0.397 (0.377)   0.420 (0.377)   0.396 (0.391)   0.426 (0.374)
% vacuous trees      73% (68%)       38% (40%)       42% (28%)       14% (21%)       100% (56%)
Table 6: Statistics for the learned trees averaged across four runs on the L&L(ours) model with comparisons (in parentheses) to results using the original L&L code without the design change or bug fix.

                  Accuracy              tree height  prop. of leaf  parent entr.  norm. arc length  % vacuous trees
Full              82.49 | 81.11 (0.95)  4.035        0.931          0.774         0.396             14%
-biLSTM           80.35 | 77.80 (1.72)  11.51        0.769          1.876         0.353             4%
-biLSTM, +p       78.72 | 77.11 (2.18)  10.430       0.790          0.349         0.349             3%
-biLSTM, +4p      82.75 | 81.71 (0.70)  9.588        0.811          1.60          0.353             3%
-biLSTM, +w       78.46 | 75.57 (2.52)  7.364        0.856          1.307         0.359             3%
-biLSTM, +w, +p   77.08 | 74.78 (2.58)  8.747        0.826          1.519         0.349             4%
parsed RST                               25.084       0.567          2.711         0.063             0%
Table 7: Max | mean (standard deviation) test accuracy and tree statistics of the WQTC dev set (averaged across four training runs with different initialization weights). Bolded numbers are within 1 standard deviation of the best performing model. +w uses the weighted sum, +p adds 1 extra level of percolation, +4p adds 4 levels of percolation. The last row are the (‘gold’) parsed RST discourse dependency trees.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168–6173 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6168 Synthetic QA Corpora Generation with Roundtrip Consistency Chris Alberti Daniel Andor Emily Pitler Jacob Devlin Michael Collins Google Research {chrisalberti, andor, epitler, jacobdevlin, mjcollins}@google.com Abstract We introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency. By pretraining on the resulting corpora we obtain significant improvements on SQuAD2 (Rajpurkar et al., 2018) and NQ (Kwiatkowski et al., 2019), establishing a new state-of-the-art on the latter. Our synthetic data generation models, for both question generation and answer extraction, can be fully reproduced by finetuning a publicly available BERT model (Devlin et al., 2018) on the extractive subsets of SQuAD2 and NQ. We also describe a more powerful variant that does full sequence-to-sequence pretraining for question generation, obtaining exact match and F1 at less than 0.1% and 0.4% from human performance on SQuAD2. 1 Introduction Significant advances in Question Answering (QA) have recently been achieved by pretraining deep transformer language models on large amounts of unlabeled text data, and finetuning the pretrained models on hand labeled QA datasets, e.g. with BERT (Devlin et al., 2018). Language modeling is however just one example of how an auxiliary prediction task can be constructed from widely available natural text, namely by masking some words from each passage and training the model to predict them. It seems plausible that other auxiliary tasks might exist that are better suited for QA, but can still be constructed from widely available natural text. It also seems intuitive that such auxiliary tasks will be more helpful the closer they are to the particular QA task we are attempting to solve. Based on this intuition we construct auxiliary tasks for QA, generating millions of synInput (C) ... in 1903, boston participated in the first modern world series, going up against the pittsburgh pirates ... (1) C →A 1903 (2) C, A →Q when did the red sox first go to the world series (3) C, Q →A′ 1903 (4) A ?= A′ Yes Table 1: Example of how synthetic question-answer pairs are generated. The model’s predicted answer (A′) matches the original answer the question was generated from, so the example is kept. thetic question-answer-context triples from unlabeled passages of text, pretraining a model on these examples, and finally finetuning on a particular labeled dataset. Our auxiliary tasks are illustrated in Table 1. For a given passage C, we sample an extractive short answer A (Step (1) in Table 1). In Step (2), we generate a question Q conditioned on A and C, then (Step (3)) predict the extractive answer A′ conditioned on Q and C. If A and A′ match we finally emit (C, Q, A) as a new synthetic training example (Step (4)). We train a separate model on labeled QA data for each of the first three steps, and then apply the models in sequence on a large number of unlabeled text passages. We show that pretraining on synthetic data generated through this procedure provides us with significant improvements on two challenging datasets, SQuAD2 (Rajpurkar et al., 2018) and NQ (Kwiatkowski et al., 2019), achieving a new state of the art on the latter. 
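A schematic sketch of the generation-and-filtering loop in Table 1. The three callables answer_extractor, question_generator, and qa_model stand in for the fine-tuned BERT models described in Section 3; they are hypothetical interfaces, not part of any released code.

def generate_synthetic_qa(passages, answer_extractor, question_generator, qa_model):
    """Return (context, question, answer) triples that pass the
    roundtrip consistency check of Table 1."""
    synthetic = []
    for c in passages:
        a = answer_extractor(c)            # Step (1): C -> A
        q = question_generator(c, a)       # Step (2): C, A -> Q
        a_prime = qa_model(c, q)           # Step (3): C, Q -> A'
        if a_prime == a:                   # Step (4): keep only consistent triples
            synthetic.append((c, q, a))
    return synthetic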
2 Related Work Question generation is a well-studied task in its own right (Heilman and Smith, 2010; Du et al., 2017; Du and Cardie, 2018). Yang et al. (2017) and Dhingra et al. (2018) both use generated 6169 question-answer pairs to improve a QA system, showing large improvements in low-resource settings with few gold labeled examples. Validating and improving the accuracy of these generated QA pairs, however, is relatively unexplored. In machine translation, modeling consistency with dual learning (He et al., 2016) or backtranslation (Sennrich et al., 2016) across both translation directions improves the quality of translation models. Back-translation, which adds synthetically generated parallel data as training examples, was an inspiration for this work, and has led to state-of-the-art results in both the supervised (Edunov et al., 2018) and the unsupervised settings (Lample et al., 2018). Lewis and Fan (2019) model the joint distribution of questions and answers given a context and use this model directly, whereas our work uses generative models to generate synthetic data to be used for pretraining. Combining these two approaches could be an area of fruitful future work. 3 Model Given a dataset of contexts, questions, and answers: {(c(i), q(i), a(i)) : i = 1, . . . , N}, we train three models: (1) answer extraction: p(a|c; θA), (2) question generation: p(q|c, a; θQ), and (3) question answering: p(a|c, q; θA′). We use BERT (Devlin et al., 2018)∗to model each of these distributions. Inputs to each of these models are fixed length sequences of wordpieces, listing the tokenized question (if one was available) followed by the context c. The answer extraction model is detailed in §3.1 and two variants of question generation models in §3.2 and §3.3. The question answering model follows Alberti et al. (2019). 3.1 Question (Un)Conditional Extractive QA We define a question-unconditional extractive answer model p(a|c; θA) and a question-conditional extractive answer model p(a|q, c; θA′) as follows: p(a|c; θA) = efJ(a,c;θA) P a′′ efJ(a′′,c;θA) p(a|c, q; θA′) = efI(a,c,q;θA′) P a′′ efI(a′′,c,q;θA′) ∗Some experiments use a variant of BERT that masks out whole words at training time, similar to Sun et al. (2019). See https://github.com/ google-research/bert for both the original and whole word masked versions of BERT. where a, a′′ are defined to be token spans over c. For p(a|c; θA), a and a′′ are constrained to be of length up to LA, set to 32 word piece tokens. The key difference between the two expressions is that fI scores the start and the end of each span independently, while fJ scores them jointly. Specifically we define fJ : Rh →R and fI : Rh →R to be transformations of the final token representations computed by a BERT model: fJ(a, c; θA) = MLPJ(CONCAT(BERT(c)[s], BERT(c)[e])) fI(a, q, c; θA′)) = AFFI(BERT(q, c)[s]) + AFFI(BERT(q, c)[e]). Here h is the hidden representation dimension, (s, e) = a is the answer span, BERT(t)[i] is the BERT representation of the i’th token in token sequence t. MLPJ is a multi-layer perceptron with a single hidden layer, and AFFI is an affine transformation. We found it was critical to model span start and end points jointly in p(a|c; θA) because, when the question is not given, there are usually multiple acceptable answers for a given context, so that the start point of an answer span cannot be determined separately from the end point. 
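As a rough sketch, the two scoring functions could be implemented as below, assuming precomputed BERT token representations of shape (sequence_length, h). The answer-length cap of 32 word pieces follows the text; layer names, shapes, and the naive loop over spans are our own assumptions made for clarity.

import torch
import torch.nn as nn

class JointSpanScorer(nn.Module):
    """f_J: scores answer spans (s, e) jointly from the concatenated
    start/end token representations (question-unconditional model)."""
    def __init__(self, h, max_len=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * h, h), nn.ReLU(), nn.Linear(h, 1))
        self.max_len = max_len

    def forward(self, bert_c):                 # bert_c: (seq, h)
        seq = bert_c.size(0)
        scores = torch.full((seq, seq), float('-inf'))
        for s in range(seq):                   # naive loop over spans, for clarity
            for e in range(s, min(seq, s + self.max_len)):
                pair = torch.cat([bert_c[s], bert_c[e]], dim=-1)
                scores[s, e] = self.mlp(pair).squeeze()
        return scores                          # softmax over all spans gives p(a|c)

class IndependentSpanScorer(nn.Module):
    """f_I: scores start and end independently and sums them
    (question-conditional model)."""
    def __init__(self, h):
        super().__init__()
        self.start = nn.Linear(h, 1)
        self.end = nn.Linear(h, 1)

    def forward(self, bert_qc):                # bert_qc: (seq, h)
        s = self.start(bert_qc).squeeze(-1)    # (seq,)
        e = self.end(bert_qc).squeeze(-1)
        return s.unsqueeze(1) + e.unsqueeze(0) # scores[s_idx, e_idx] = s_i + e_j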
3.2 Question Generation: Fine-tuning Only Text generation allows for a variety of choices in model architecture and training data. In this section we opt for a simple adaptation of the public BERT model for text generation. This adaptation does not require any additional pretraining and no extra parameters need to be trained from scratch at finetuning time. This question generation system can be reproduced by simply finetuning a publicly available pretrained BERT model on the extractive subsets of datasets like SQuAD2 and NQ. Fine-tuning We define the p(q|c, a; θQ) model as a left-to-right language model p(q|a, c; θQ) = LQ Y i=1 p(qi|q1, . . . , qi−1, a, c; θQ) = LQ Y i=1 efQ(q1,...,qi,a,c;θQ) P q′ i efQ(q1,...,q′ i,a,c;θQ) , where q = (q1, . . . , qLQ) is the sequence of question tokens and LQ is a predetermined maximum question length, but, unlike the more usual 6170 encoder-decoder approach, we compute fQ using the single encoder stack from the BERT model: fQ(q1, . . . , qi, a, c; θQ) = BERT(q1, . . . , qi−1, a, c)[i −1] · W ⊺ BERT, where WBERT is the word piece embedding matrix in BERT. All parameters of BERT including WBERT are finetuned. In the context of question generation, the input answer is encoded by introducing a new token type id for the tokens in the extractive answer span, e.g. the question tokens being generated have type 0 and the context tokens have type 1, except for the ones in the answer span that have type 2. We always pad or truncate the question being input to BERT to a constant length LQ to avoid giving the model information about the length of the question we want it to generate. This model can be trained efficiently by using an attention mask that forces to zero all the attention weights from c to q and from qi to qi+1 . . . qLQ for all i. Question Generation At inference time we generate questions through iterative greedy decoding, by computing argmaxqi fQ(q1, . . . , qi, a, c) for i = 1, . . . , LQ. Question-answer pairs are kept only if they satisfy roundtrip consistency. 3.3 Question Generation: Full Pretraining The prior section addressed a restricted setting in which a BERT model was fine-tuned, without any further changes. In this section, we describe an alternative approach for question generation that fully pretrains and fine-tunes a sequence-tosequence generation model. Pretraining Section 3.2 used only an encoder for question generation. In this section, we use a full sequence-to-sequence Transformer (both encoder and decoder). The encoder is trained identically (BERT pretraining, Wikipedia data), while the decoder is trained to output the next sentence. Fine-tuning Fine-tuning is done identically as in Section 3.2, where the input is (C, A) and the output is Q from tuples from a supervised question-answering dataset (e.g., SQuAD). Question Generation To get examples of synthetic (C, Q, A) triples, we sample from the decoder with both beam search and Monte Carlo search. As before, we use roundtrip consistency to keep only the high precision triples. 3.4 Why Does Roundtrip Consistency Work? A key question for future work is to develop a more formal understanding of why the roundtrip method improves accuracy on question answering tasks (similar questions arise for the backtranslation methods of Edunov et al. (2018) and Sennrich et al. (2016); a similar theory may apply to these methods). In the supplementary material we sketch a possible approach, inspired by the method of Balcan and Blum (2005) for learning with labeled and unlabeled data. 
This section is intentionally rather speculative but is intended to develop intuition about the methods, and to propose possible directions for future work on developing a formal grounding. In brief, the approach discussed in the supplementary material suggests optimizing the loglikelihood of the labeled training examples, under a constraint that some measure of roundtrip consistency β(θA′) on unlabeled data is greater than some value γ. The value for γ can be estimated using performance on development data. The auxiliary function β(θA′) is chosen such that: (1) the constraint β(θA′) ≥γ eliminates a substantial part of the parameter space, and hence reduces sample complexity; (2) the constraint β(θA′) ≥γ nevertheless includes ‘good’ parameter values that fit the training data well. The final step in the argument is to make the case that the algorithms described in the current paper may effectively be optimizing a criterion of this kind. Specifically, the auxiliary function β(θA′) is defined as the log-likelihood of noisy (c, q, a) triples generated from unlabeled data using the C →A and C, A →Q models; constraining the parameters θA′ to achieve a relatively high value on β(θA′) is achieved by pre-training the model on these examples. Future work should consider this connection in more detail. 4 Experiments 4.1 Experimental Setup We considered two datasets in this work: SQuAD2 (Rajpurkar et al., 2018) and the Natural Questions (NQ) (Kwiatkowski et al., 2019). SQuAD2 is a dataset of QA examples of questions with answers formulated and answered by human annotators about Wikipedia passages. NQ is a dataset of Google queries with answers from Wikipedia pages provided by human annotators. We used the full text from the training set of NQ (1B words) as 6171 Dev Test EM F1 EM F1 Fine-tuning Only BERT-Large (Original) 78.7 81.9 80.0 83.1 + 3M synth SQuAD2 80.1 82.8 + 4M synth NQ 81.2 84.0 82.0 84.8 Full Pretraining BERT (Whole Word Masking)† 82.6 85.2 + 50M synth SQuAD2 85.1 87.9 85.2 87.7 + ensemble 86.0 88.6 86.7 89.1 Human 86.8 89.5 Table 2: Our results on SQuAD2. For our fine-tuning only setting, we compare a BERT baseline (BERT single model - Google AI Language on the SQuAD2 leaderboard) to similar models pretrained on our synthetic SQuAD2-style corpus and on a corpus containing both SQuAD2- and NQ-style data. For the full pretraining setting, we report our best single model and ensemble results. a source of unlabeled data. In our fine-tuning only experiments (Section 3.2) we trained two triples of models (θA, θQ, θA′) on the extractive subsets of SQuAD2 and NQ. We extracted 8M unlabeled windows of 512 tokens from the NQ training set. For each unlabeled window we generated one example from the SQuAD2-trained models and one example from the NQ-trained models. For A we picked an answer uniformly from the top 10 extractive answers according to p(a|c; θA). For A′ we picked the best extractive answer according to p(a|c, q; θA′). Filtering for roundtrip consistency gave us 2.4M and 3.2M synthetic positive instances from SQuAD2and NQ-trained models respectively. We then added synthetic unanswerable instances by taking the question generated from a window and associating it with a non-overlapping window from the same Wikipedia page. We then sampled negatives to obtain a total of 3M and 4M synthetic training instances for SQuAD2 and NQ respectively. We trained models analogous to Alberti et al. 
(2019) initializing from the public BERT model, with a batch size of 128 examples for one epoch on each of the two sets of synthetic examples and on the union of the two, with a learning rate of 2 · 10−5 and no learning rate decay. We then fine-tuned the the resulting models on SQuAD2 and NQ. In our full pretraining experiments (Section 3.3) we only trained (θA, θQ, θA′) on SQuAD2. How†https://github.com/google-research/ bert 78 79 80 81 0 1 2 3 4 5 6 7 8 Best exact match on SQuAD2.0 dev set Number of synthetic examples (M) NQ+SQuAD Synth NQ+SQuAD Synth no-RT SQuAD Synth SQuAD Synth no-RT Figure 1: Learning curves for pretraining using synthetic question-answering data (fine-tuning only setting). “no-RT” refers to omitting the roundtrip consistency check. Best exact match is reported after finetuning on SQuAD2. Performance improves with the amount of synthetic data. For a fixed amount of synthetic data, having a more diverse source (NQ+SQuAD vs. just SQuAD) yields higher accuracies. Roundtrip filtering gives further improvements. ever, we pretrained our question generation model on all of the BERT pretraining data, generating the next sentence left-to-right. We created a synthetic, roundtrip filtered corpus with 50M examples. We then fine-tuned the model on SQuAD2 as previously described. We experimented with both the single model setting and an ensemble of 6 models. 4.2 Results The final results are shown in Tables 2 and 3. We found that pretraining on SQuAD2 and NQ synthetic data increases the performance of the finetuned model by a significant margin. On the NQ short answer task, the relative reduction in headroom is 50% to the single human performance and 10% to human ensemble performance. We additionally found that pretraining on the union of synthetic SQuAD2 and NQ data is very beneficial on the SQuAD2 task, but does not improve NQ results. The full pretraining approach with ensembling obtains the highest EM and F1 listed in Table 2. This result is only 0.1 −0.4% from human performance and is the third best model on the SQuAD2 leaderboard as of this writing (5/31/19). Roundtrip Filtering Roundtrip filtering appears to be consistently beneficial. As shown in Figure 1, models pretrained on roundtrip consistent data outperform their counterparts pretrained without filtering. From manual inspection, of 46 (C, Q, A) triples that were roundtrip consistent 6172 Long Answer Dev Long Answer Test Short Answer Dev Short Answer Test P R F1 P R F1 P R F1 P R F1 BERTjoint 61.3 68.4 64.7 64.1 68.3 66.2 59.5 47.3 52.7 63.8 44.0 52.1 + 4M synth NQ 62.3 70.0 65.9 65.2 68.4 66.8 60.7 50.4 55.1 62.1 47.7 53.9 Single Human 80.4 67.6 73.4 63.4 52.6 57.5 Super-annotator 90.0 84.6 87.2 79.1 72.6 75.7 Table 3: Our results on NQ, compared to the previous best system and to the performance of a human annotator and of an ensemble of human annotators. BERTjoint is the model described in Alberti et al. (2019). Question Answer NQ what was the population of chicago in 1857? over 90,000 SQuAD2 what was the weight of the brigg’s hotel? 22,000 tons NQ where is the death of the virgin located? louvre SQuAD2 what person replaced the painting? carlo saraceni NQ when did rick and morty get released? 2012 SQuAD2 what executive suggested that rick be a grandfather? nick weidenfeld Table 4: Comparison of question-answer pairs generated by NQ and SQuAD2 models for the same passage of text. 39% were correct, while of 44 triples that were discarded only 16% were correct. 
Data Source Generated question-answer pairs are illustrative of the differences in the style of questions between SQuAD2 and NQ. We show a few examples in Table 4, where the same passage is used to create a SQuAD2-style and an NQ-style question-answer pair. The SQuAD2 models seem better at creating questions that directly query a specific property of an entity expressed in the text. The NQ models seem instead to attempt to create questions around popular themes, like famous works of art or TV shows, and then extract the answer by combining information from the entire passage. 5 Conclusion We presented a novel method to generate synthetic QA instances and demonstrated improvements from this data on SQuAD2 and on NQ. We additionally proposed a possible direction for formal grounding of this method, which we hope to develop more thoroughly in future work. References Chris Alberti, Kenton Lee, and Michael Collins. 2019. A bert baseline for the natural questions. arXiv preprint arXiv:1901.08634. Maria-Florina Balcan and Avrim Blum. 2005. A pacstyle model for learning from labeled and unlabeled data. In Proceedings of the 18th Annual Conference on Learning Theory, COLT’05, pages 111– 126, Berlin, Heidelberg. Springer-Verlag. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Danish Danish, and Dheeraj Rajagopal. 2018. Simple and effective semi-supervised question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 582–587. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from wikipedia. arXiv preprint arXiv:1805.05942. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342– 1352. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on 6173 Empirical Methods in Natural Language Processing, pages 489–500. Association for Computational Linguistics. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. Michael Heilman and Noah A Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617. Association for Computational Linguistics. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Rhinehart, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. Phrase-based & neural unsupervised machine translation. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Mike Lewis and Angela Fan. 2019. Generative question answering: Learning to answer the whole question. International Conference on Learning Representations (ICLR). Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you dont know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 784–789. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86–96. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. CoRR, abs/1904.09223. Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-supervised qa with generative domain-adaptive nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1040–1050.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6174–6184 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6174 Are Red Roses Red? Evaluating Consistency of Question-Answering Models Marco Tulio Ribeiro Microsoft Research [email protected] Carlos Guestrin University of Washington [email protected] Sameer Singh University of California, Irvine [email protected] Abstract Although current evaluation of questionanswering systems treats predictions in isolation, we need to consider the relationship between predictions to measure true understanding. A model should be penalized for answering “no” to “Is the rose red?” if it answers “red” to “What color is the rose?”. We propose a method to automatically extract such implications for instances from two QA datasets, VQA and SQuAD, which we then use to evaluate the consistency of models. Human evaluation shows these generated implications are well formed and valid. Consistency evaluation provides crucial insights into gaps in existing models, and retraining with implicationaugmented data improves consistency on both synthetic and human-generated implications. 1 Introduction Question-answering (QA) systems have become popular benchmarks for AI systems, as they require the ability to comprehend and employ complex reasoning about the question and the associated context. In order to really excel in machine comprehension (Rajpurkar et al., 2016), for example, models need to understand the entities, coreferences, and relations in the paragraph, and align them to the information need encoded in the question. Similarly, Visual Question Answering (Antol et al., 2015) requires not only perception abilities (fine-grained recognition, object detection), but also “higher level reasoning” about how the question is related to the visual information, commonsense reasoning, knowledge based reasoning, and the understanding of location/color/size attributes. However, recent work has shown that popular benchmarks have crucial limitations in their ability to test reasoning and comprehension. For example, Weissenborn et al. (2017) show that models can do well in the SQuAD dataset by using heuristic (a) Input image from the VQA dataset. How many birds? A: 1 Is there 1 bird? A: no Are there 2 birds? A: yes Are there any birds? A: no (b) Model (Zhang et al., 2018) provides inconsistent answers. Kublai originally named his eldest son, Zhenjin, as the Crown Prince, but he died before Kublai in 1285. (c) Excerpt from an input paragraph, SQuAD dataset. Q: When did Zhenjin die? A: 1285 Q: Who died in 1285? A: Kublai (d) Model (Peters et al., 2018) provides inconsistent answers. Figure 1: Inconsistent QA Predictions: Models that are accurate for questions from these datasets (first row in (b) and (d)) are not able to correctly answer followup questions whose answers are implied by the original question/answer. We generate such questions automatically, and evaluate existing models on their consistency. lexical and type overlap between the context and the question. Biases have also been observed in the popular VQA dataset, e.g. answering questions starting with “Do you see a ...” with “yes” results in 87% accuracy, and “tennis” is the correct answer for 41% of questions starting with “What sport is ...” (Goyal et al., 2017). 
While there are laudable efforts to try to diminish such biases (Rajpurkar et al., 2018; Goyal et al., 2017), they do not address a fundamental evaluation question: it is not only individual predictions that matter, but also whether multiple answers reflect a consistent and coherent model. For example, in Figure 1, models answer original questions correctly but answer follow-up questions in an inconsistent manner, which indicates they do not really understand the context or the questions (e.g. simultaneously predicting 0, 1, and 2 birds in Figure 1b). 6175 In this paper, we propose evaluation for QA systems that measures the extent to which model predictions are consistent. We first automatically generate new question-answer pairs that are implied by existing instances from the dataset (such as the ones in Figure 1). We use this generated dataset to evaluate models by penalizing them when their predictions are not consistent with these implications. Human evaluation verifies that the generated implications are valid and well formed when compared to original instances, and thus can be used to evaluate and gain insights into models for VQA and SQuAD. Finally, we propose a simple data augmentation procedure that results in models nearly as accurate as the original models on the original data, while being more consistent when measured by our implications and by human generated implications (and thus expected to generalize better in the real world). 2 Related Work Since QA models often exploit shortcuts to be accurate without really understanding questions and contexts, alternative evaluations have been proposed, consisting of solutions that mitigate known biases or propose separate diagnostic datasets. Examples of the former include adding multiple images for which the answer to the same question is different (Goyal et al., 2017; Zhang et al., 2016), or questions for which an answer is not present (Rajpurkar et al., 2018). While useful, these do not take the relationship between predictions into account, and thus do not capture problems like the ones in Figure 1. Exceptions exist when trying to gauge robustness: Ribeiro et al. (2018) consider the robustness of QA models to automatically generated input rephrasings, while Shah et al. (2019) evaluate VQA models on crowdsourced rephrasings for robustness. While important for evaluation, these efforts are orthogonal to our focus on consistency. Various automatically generated diagnostic datasets have been proposed (Weston et al., 2015; Johnson et al., 2017). While these recognize the need to evaluate multiple capabilities, evaluation is still restricted to individual units and thus cannot capture inconsistencies between predictions, like predicting that an object is at the same time to the left and to the right of another object. Furthermore, questions/contexts can be sufficiently artificial for models to reverse-engineer how the dataset was created. An exception contemporaneous with our (a) Example input image. Q: What room is this? A: bathroom (b) Example (q, a) pair. Type Cov Example Logeq 56.8% Is this a bathroom? Yes Nec 50.2% Is there a bathroom in the picture? Yes Mutex 34.6% Is this a kitchen? No (c) Implication types, with coverage and examples. Figure 2: VQA Implications and examples. Implications can be generated for 67.3% of the original data. work is GQA (Hudson and Manning, 2019), where real images are used, and metrics such as consistency (similar to our own) are used for a fraction of inputs. 
Since questions are still synthetic, and “not as natural as other VQA datasets” (Hudson and Manning, 2019), it remains to be seen whether models will overfit to the generation procedure or to the implications encoded (e.g. many are simple spatial rules such as “X to the left of Y implies Y to the right of X”). Their approach is complementary to ours – they provide implications for ∼54% of their synthetic dataset, while we generate different implications for ∼67% of human generated questions in VQA, and ∼73% of SQuAD questions. 3 Generating Implications Let an instance from a QA datset be represented by (c, q, a) denoting respectively the context (image or paragraph), question, and answer (c may be omitted for clarity). We define logical implications as (c, q, a) →(c, q′, a′) , i.e. an answer a to q implies that a′ is the answer for question q′ for the same context. We now present a rule-based system that takes (q, a) and generates (q, a) →(q′, a′). Visual QA (q, a) pairs in VQA often have both positive and negative implications that we encode into three types of yes/no implications, illustrated in Figure 2: logical equivalence (Logeq), necessary condition (Nec) and mutual exclusion (Mutex) (more examples in appendices). To generate such instances, we use a dependency parser (Dozat et al., 2017) to recognize root/subject/object and build the implication appropriately, and to detect auxiliary/copula that may need to be moved. Logical equivalence implications are generated by trans6176 forming the original (q, a) into a proposition, and then asking the “yes-no” equivalent by moving auxiliary/copula, adding “do” auxiliaries, etc (e.g. “Who painted the wall? man” →“Did the man paint the wall? yes”). Necessary conditions are created via heuristics such as taking numerical answers to “How many X” questions and asking if there are any X present (e.g. “How many birds? 1” →“Are there any birds? yes”), or asking if answer nouns are in the picture (e.g. bathroom in Figure 2c). We used WordNet (Miller, 1995) to find antonyms and other plausible answers (hyponyms of the original answer’s hypernym) when generating mutual exclusion implications, as illustrated in changing “bathroom” to “kitchen” in Figure 2c. We also used a 4-gram language model (Heafield et al., 2013) to smooth implication questions (e.g. adding “the”, “a”, etc before inserting the original answers into implication questions). SQuAD Since the answers need to be spans in the paragraph, we cannot generate the same kinds of implications (e.g. yes/no questions are not suitable). Instead, we use the QA2D system of Demszky et al. (2018) to transform a (q, a) into declarative form d, and then use the dependency parse of d to extract questions about the subject (Subj), direct object (Dobj), adjectival modifiers (Amod), or prepositional phrases (Prep) (Table 1). To decide which WH-word to introduce, we use a NER tagger (Honnibal and Montani, 2017) coupled with heuristics, e.g. if the answer is “in DATE” or “in LOC”, the WH-words are “when” and “where”, respectively. Evaluating consistency We want the generated implications to meet the following criteria: (1) the questions are well formed, (2) the answers are correct, and (3) the implication is valid, i.e. if we generate an implication (q, a) →(q′, a′), an answer a to q really implies that a′ is the answer to q′. 
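As a concrete illustration of the Mutex step described above, the sketch below retrieves candidate alternative answers from WordNet (antonyms, plus sister terms under the answer's hypernyms). It is a simplified stand-in for the full rule-based system and omits the dependency-based question rewriting and the 4-gram language-model smoothing.

from nltk.corpus import wordnet as wn

def mutex_candidates(answer, max_candidates=5):
    """Return words that could replace `answer` in a mutual-exclusion
    implication, e.g. 'bathroom' -> ['kitchen', ...]."""
    candidates = []
    for syn in wn.synsets(answer):
        # Antonyms of any lemma of the answer.
        for lemma in syn.lemmas():
            candidates += [a.name().replace('_', ' ') for a in lemma.antonyms()]
        # Sister terms: other hyponyms of the answer's hypernyms.
        for hyper in syn.hypernyms():
            for hypo in hyper.hyponyms():
                name = hypo.lemmas()[0].name().replace('_', ' ')
                if name != answer:
                    candidates.append(name)
    seen, out = set(), []
    for c in candidates:                      # deduplicate, preserving order
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out[:max_candidates]

# e.g. ("What room is this?", "bathroom") could yield the Mutex implication
# ("Is this a kitchen?", "no") for a candidate like "kitchen".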
If these are met (Section 4), we can evaluate the consistency of a large fraction of predictions in these datasets (67.3% of VQA and 73.2% of SQuAD) by taking (q, a) instances predicted correctly by the model, generating implications (q, a) →(q′, a′), and measuring the frequency at which the model predicts the generated questions correctly. 4 Experiments In this section, we assess the quality of the generated (q′, a′) pairs, measure consistency of models for VQA and SQuAD, and evaluate whether data Type Cov Example Subj 29.3% When did Zhenjin die? 1285 →Who died in 1285? Zhenjin Dobj 10.0% When did Denmark join the EU? 1972 →What did Denmark join in 1972? the EU Amod 29.7% When did the Chinese famine begin? 1331 →Which famine began in 1331? Chinese Prep 46.1% Who received a bid in 1915? Edison →When did Edison receive a bid? 1915 Table 1: SQuAD Implication types and examples. Implications cover 73.2% of the original data. (a) VQA (b) SQuAD Figure 3: Quality of implications (q′, a′) and original (q, a) as judged by workers: grammaticality and naturalness of questions, and correctness of answers. augmentation with implications can improve the consistency of existing models. 4.1 Quality of Implications We randomly select 100 generated implications and original instances for each dataset, and ask 5 different crowd workers on Amazon Mechanical Turk to rate each question for grammaticality and naturalness on a scale of 1 to 5 (following Demszky et al. (2018)). We also ask workers to evaluate the correctness of the answer given the question and context (image or paragraph). The results presented in Figures 3a and 3b show that the average scores on all criteria are nearly indistinguishable between original instances and the generated implications, which indicates that implication questions are well formed and answers are correct. 4.2 Validity of Implications In order to check if (q, a) really implies (q′, a′) (i.e. check if the implication is valid), we show workers the (q, a) without the context and ask them to answer the implication question q′ assuming the original answer a is correct. If (q, a) →(q′, a′), workers should be able to answer q′ correctly even in the absence of the image or paragraph. As an example, the answer to the implication question in Figure 4a should be “yes” for any image, if the original 6177 Original Q: How many zebras are there? A: 4 Implication Q: Are there any zebras? Control Q: Is this scene taken in the wild? (a) Example from the VQA dataset. Original Q:Which IPCC author criticized the TAR? A: Richard Lindzen Implication Q: What did Richard Lindzen criticize? Control Q: Who responded to Lindzen’s criticisms? (b) Example from the SQuAD dataset. Figure 4: Testing the validity of implications: given an original (q, a) pair, humans should be able to deduce the answer for the implication question without context, but not necessarily for the control question. VQA SQuAD Impl Control Impl Control #Answered 99% 13% 95% 4% #Correct|Answered 97% 77% 97% 50% Table 2: Validating Implications: Crowd evaluation of the validity of implications, where the first row indicates how often workers provide an answer, while the second row indicates the precision of their answers. (q, a) holds. 
For control purposes, we also include question-answer pairs asked of the same context from the dataset, expecting that workers would not be able to answer these without the original context most of the time (Figure 4a provides an example where a reasonable guess can be made, which is not true in Figure 4b). We take the same 100 implications from the previous experiment and add 100 control questions, each evaluated by 5 workers. Workers are instructed to abstain from answering if the original (q, a) does not give them enough information to answer q′ or the control question. For each question, we evaluate the worker majority answer w.r.t. the implication or control answer. The results in Table 2 are quite positive: workers almost always provide the correct answer a′ to our implication question q′ when given only the original (q, a) pair and no additional context, which indicates the implication is valid. On the other hand, workers under-predict and are inaccurate for the control questions, which is expected since there is no necessary logical connection between (q, a) and the control question. 4.3 Evaluating Consistency of QA Models Having concluded that our generated implications are high quality and typically valid, we proceed to use them to evaluate the logical consistency of models. For VQA, we evaluate the SAAA baseline (Kazemi and Elqursh, 2017), a recent model with a counting module (Count; Zhang et al., 2018), and bilinear attention networks (BAN; Kim et al., 2018). For SQuAD, we evaluate bidaf (Seo et al., 2017), bidaf with ELMO embeddings (bidaf+e; Peters et al., 2018), rnet (Wang et al., 2017), and Mnemonic Reader (mnem; Hu et al., 2018). All models are trained with available open source code with default parameters. The results for VQA are presented in Table 3. Note that more accurate models are not necessarily more consistent, and that all models are particularly inconsistent in the Mutex category. One specific category of Mutex that affects all models was asking the equivalent n+1 questions when the answer is a number n, e.g. “How many birds? 1” implies “Are there 2 birds? no”. SAAA, Count, and BAN had, respectively, 35.3%, 22.4% and 32.2% consistency in this category even though Count has a module specific for counting (implications are binary yes/no questions, and thus random guessing would give 50% consistency). This is probably because the original dataset contains numbers in 12.3% of answers, but only in 0.3% of questions, thus models learn how to answer numbers, but not how to reason about numbers that appear in the question. Evaluating consistency in this case is useful for finding gaps in models’ understanding, and similar insights can be reached by considering other violated implications. For SQuAD (Table 4), we consider a prediction as consistent if it had any overlap with the implied answer. Again, models with different accuracies do not vary as much in consistency. All models are less consistent on direct object implications. Interestingly, ∼12% of questions in the training data have the WH-word in the direct object subtree (e.g. “Who did Hayk defeat?”), while 53% are in the subject subtree (e.g. “Who is Moses?”), which may warrant further investigation. All models had average consistency lower or equal to 75%, which indicates they do not possess real comprehension of the concepts behind many of their correct predictions. 
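Concretely, the consistency figures reported in Tables 3 and 4 follow the procedure of Section 3. Below is a minimal sketch, assuming predictions and generated implications are available as simple records; the function and field names are illustrative, not from the authors' released code.

```python
def consistency(examples, predict, is_correct):
    """Fraction of generated implications answered correctly, restricted to
    original (q, a) pairs that the model already gets right (Section 3).

    examples: iterable of dicts with keys 'context', 'q', 'a', 'impl_q', 'impl_a'
    predict(context, question) -> model answer string
    is_correct(pred, gold)     -> bool (exact match for VQA; for SQuAD the paper
                                  counts any overlap with the implied answer)
    """
    hits, total = 0, 0
    for ex in examples:
        # Only score implications whose original question was answered correctly.
        if not is_correct(predict(ex["context"], ex["q"]), ex["a"]):
            continue
        total += 1
        if is_correct(predict(ex["context"], ex["impl_q"]), ex["impl_a"]):
            hits += 1
    return hits / total if total else 0.0
```

Averaged per implication type, this quantity yields the LogEq/Mutex/Nec and Subj/Dobj/Amod/Prep columns of Tables 3 and 4.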
Besides surfacing this, consistency evaluation provides clues as to potential sources of such problems, such as the lack of 6178 Model Acc LogEq Mutex Nec Avg SAAA 61.5 76.6 42.3 90.2 72.7 Count 65.2 81.2 42.8 92.0 75.0 BAN 64.5 73.1 50.4 87.3 72.5 Table 3: Consistency of VQA Models. Model F1 Subj Dobj Amod Prep Avg bidaf 77.9 70.6 65.9 75.1 72.4 72.1 bidaf+e 81.3 71.2 69.3 75.8 72.8 72.9 rnet 79.5 68.5 67.0 74.7 70.7 70.9 mnem 81.5 70.3 68.0 75.8 71.9 72.2 Table 4: Consistency of SQuAD Models. questions with numbers in VQA. 4.4 Data Augmentation with Implications We propose a simple data augmentation technique: for each (q, a) in the training set, add a generated implication (q′, a′) if one exists. We evaluate the consistency of models trained with augmentation on held-out implications, to check whether they generalize to unseen generated implications. Further, to verify if augmentation improves consistency “in the wild”, we collect new implications from Mechanical Turk by showing workers (q, a) pairs without context (image or paragraph), and asking them to produce new (q′, a′) that are implied by (q, a) for any context. For VQA, we restrict a′ to be yes / no, while for SQuAD we filter out all a′ that are not present in the original paragraph, resulting in a total of 3, 277 unique implication annotations for VQA and 1, 027 for SQuAD. While workers sometimes create implications similar to ours, they also include new patterns; implications that contain negations (all models are very inconsistent on these), word forms for numbers (e.g. “one”), comparatives (“more”, “less”), and implications that require common sense, such as (“What type of buses are these? double decker”→“Do the buses have 2 levels? yes”). The results are presented in Table 5. Accuracy on the validation set remains comparable after augmentation, while consistency on both generated and worker-provided implications improves across models and tasks. We also evaluate SAAA on the GQA dataset (Hudson and Manning, 2019) (Count and BAN use features that are not allowed in GQA): while accuracy is comparable (41.4% before augmentation, 40.4% after), consistency goes up significantly (59.3% before, 64.7% after). These results indicate that data augmentation is useful for increasing consistency with Model Validation Consistency Consistency Accuracy (rule-based) (crowdsourced) VQA SAAA 61.5 60.8 72.7 94.4 73.0 75.6 Count 65.2 64.8 75.0 94.1 73.8 77.3 BAN 64.5 64.6 72.4 95.0 72.3 77.9 SQuAD bidaf 77.9 76.4 72.1 79.1 68.2 70.9 bidaf+e 81.3 80.7 72.9 81.2 70.7 70.6 rnet 79.5 79.5 70.9 79.8 66.5 68.1 mnem 81.5 81.3 72.2 81.5 68.7 73.9 Table 5: Data Augmentation: Accuracy (F1 for SQuAD) and consistency results before and after data augmentation . Consistency (rule-based) is computed on our generated implications, while (crowdsourced) is computed on crowdsourced implications. a small trade off in accuracy. We leave more sophisticated methods of enforcing consistency (e.g. in models themselves) for future work. 5 Discussion We argued that evaluation of QA systems should take into account the relationship between predictions rather than each prediction in isolation, and proposed a rule-based implication generator which we validated in crowdsourcing experiments. The results of this approach are promising: consistency evaluation reveals gaps in models, and augmenting training data produces models that are more consistent even in human generated implications. 
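For reference, the augmentation recipe of Section 4.4 amounts to the following sketch, under the assumption that training instances are (context, question, answer) triples and that generate_implications stands in for the rule-based generator of Section 3.

```python
def augment_with_implications(train_set, generate_implications):
    """Data augmentation from Section 4.4: for every original training
    instance, append one generated implication if the rules produce any.
    `generate_implications(q, a)` is assumed to return a (possibly empty)
    list of (q', a') pairs, e.g. the rule-based generator sketched earlier.
    """
    augmented = list(train_set)
    for context, q, a in train_set:
        implications = generate_implications(q, a)
        if implications:
            q_impl, a_impl = implications[0]   # add a single implication
            augmented.append((context, q_impl, a_impl))
    return augmented
```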
However, data augmentation has its limitations: it may add new biases to data, and it cannot cover all the different implications or ways of writing questions. Ideally, we want models to be able to reason that “What color is the rose? Red” implies “Is the rose red? Yes” without needing to add every possible implication or rephrasing of every (q, a) to the training data. We hope that our work persuades others to consider the importance of consistency, and initiates a body of work in QA models that achieve real understanding by design. To support such endeavours, generated implications for VQA and SQuAD, along with the code to generate them and for evaluating consistency of models, is available at https://github.com/marcotcr/qa consistency. Acknowledgments We would like to thank Sara Ribeiro, Julian Michael, Tongshuang Wu, Tobias Schnabel, and Eric Horvitz for helpful discussions and feedback. This work was funded in part by the NSF award #IIS-1756023. 6179 References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In International Conference on Computer Vision (ICCV). Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922. Timothy Dozat, Peng Qi, and Christopher D Manning. 2017. Stanford’s graph-based neural dependency parser at the conll 2017 shared task. Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, volume 1, page 3. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H Clark, and Philipp Koehn. 2013. Scalable modified kneserney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 690–696. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4099– 4106. AAAI Press. Drew A Hudson and Christopher D Manning. 2019. Gqa: a new dataset for compositional question answering over real-world images. arXiv preprint arXiv:1902.09506. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 1988– 1997. IEEE. Vahid Kazemi and Ali Elqursh. 2017. Show, ask, attend, and answer: A strong baseline for visual question answering. arXiv preprint arXiv:1704.03162. Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In Advances in Neural Information Processing Systems, pages 1571–1581. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. 
of NAACL. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856–865. Association for Computational Linguistics. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations (ICLR). Meet Shah, Xinlei Chen, Marcus Rohrbach, and Devi Parikh. 2019. Cycle-consistency for robust visual question answering. arXiv preprint arXiv:1902.05660. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural qa as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 271–280. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and Yang: Balancing and answering binary visual questions. In Conference on Computer Vision and Pattern Recognition (CVPR). Yan Zhang, Jonathon Hare, and Adam Prgel-Bennett. 2018. Learning to count objects in natural images for visual question answering. In International Conference on Learning Representations. 6180 A Implications selected at random: VQA dataset Context Question/Answers Original: Where is the fork? A: left of plate Logeq: Is the fork on the left of the plate? A: yes Original: What are the men sitting on? A: bench Logeq: Are the men sitting on the bench? A: yes Nec: Is there a bench in the picture? A: yes Original: What is the number on the bus A: 38 Logeq: Is the number on the bus 38? A: yes Original: What kind of headwater is the man on the left wearing? A: headband Logeq: Is the man on the left wearing a headband? A: yes Nec: Is there a headband in the picture? A: yes 6181 Context Question/Answers Original: What is on the top of the clock tower A: cross Logeq: Is the cross on the top of the clock tower? A: yes Mutex: Is the area on the top of the clock tower? A: no Nec: Is there a cross in the picture? A: yes Original: Is this a Christian home? A: yes Mutex: Is this an unchristian home? A: no Nec: Is this a home? A: yes Original: What separates the meadow from the mountains in the background? A: water Logeq: Does water separate the meadow from the mountains in the background? A: yes Original: What color is the couch? A: blue Logeq: Is the couch blue? A: yes Mutex: Is the couch orange? A: no Nec: Is there anything blue in the picture? 
A: yes Original: How many toppings are on this pizza? A: 2 Logeq: Are 2 toppings on this pizza? A: yes Mutex: Are 3 toppings on this pizza? A: no Nec: Are any toppings on this pizza? A: yes Original: What material is the building in the back, made of? A: brick Logeq: Is the building in the back, made of brick? A: yes Mutex: Is the building in the back, made of stone? A: no Nec: Is there a brick in the picture? A: yes 6182 B Implications selected at random: SQuAD dataset Context: The first commercially viable process for producing liquid oxygen was independently developed in 1895 by German engineer Carl von Linde and British engineer William Hampson. Original: When was liquid oxygen developed for commercial use? A: 1895 Subj: What was developed for commercial use in 1895? A: liquid oxygen Amod: Liquid oxygen was developed for which use in 1895? A: commercial Context: In the 1960s, a series of discoveries, the most important of which was seafloor spreading, showed that the Earth’s lithosphere, which includes the crust and rigid uppermost portion of the upper mantle Original: Which parts of the Earth are included in the lithosphere? A: the crust and rigid uppermost portion of the Amod: Which portion of the upper mantle are included in the lithosphere? A: crust and rigid uppermost Amod: The crust and rigid uppermost portion of which mantle are included in the lithosphere? A: upper Prep: The crust and rigid uppermost portion of what are included in the lithosphere? A: upper mantle Prep: Where are the crust and rigid uppermost portion of the upper mantle included? A: lithosphere Context: Around 1800 Richard Trevithick and, separately, Oliver Evans in 1801 introduced engines using high-pressure steam; Trevithick obtained his high-pressure engine patent in 1802. Original: In what year did Richard Trevithick patent his device? A: 1802 Subj: Who patented his device in 1802? A: Richard Trevithick 6183 Context: The average Mongol garrison family of the Yuan dynasty seems to have lived a life of decaying rural leisure, with income from the harvests of their Chinese tenants eaten up by costs of equipping and dispatching men for their tours of duty. Original: How were the Mongol garrison families earning money? A: harvests of their Chinese tenants Amod: The Mongol garrison families were earning money by the harvests of their which tenants? A: Chinese Prep: The Mongol garrison families were earning money by the harvests of what? A: their Chinese tenants Context: Of particular concern with Internet pharmacies is the ease with which people, youth in particular, can obtain controlled substances (e.g., Vicodin, generically known as hydrocodone) via the Internet.. Original: What is an example of a controlled substance? A: Vicodin Amod: An example of which kind of substance is Vicodin? A: controlled Prep: An example of what is Vicodin? A: controlled substance Context: ...the exterior mosaic panels in the parapet were designed by Reuben Townroe who also designed the plaster work in the library Original: Who designed the plaster work in the Art Library? A: Reuben Townroe Dobj: What did Reuben Townroe design in the Art Library? A: plaster work Prep: Where did Reuben Townroe design the plaster work? A: Art Library Context: Combustion hazards also apply to compounds of oxygen with a high oxidative potential, such as peroxides, chlorates, nitrates, perchlorates, and dichromates because they can donate oxygen to a fire. Original: What other sources of high oxidative potential can add to a fire? 
A: compounds of oxygen Prep: Compounds of what can add to a fire? A: oxygen Prep: What can compounds of oxygen add to? A: fire 6184 Context: In 1881, Tesla moved to Budapest to work under Ferenc Pusks at a telegraph company, the Budapest Telephone Exchange. Original: Which company did Tesla work for in 1881? A: the Budapest Telephone Exchange Subj: Who worked for the Budapest Telephone Exchange in 1881? A: Tesla Prep: When did Tesla work for the Budapest Telephone Exchange? A: 1881 Context: ...membrane is used to run proton pumps and carry out oxidative phosphorylation across to generate ATP energy. Original: What does oxidative phosphorylation do? A: generate ATP energy Subj: What generates ATP energy? A: oxidative phosphorylation Dobj: What does oxidative phosphorylation generate? A: ATP energy Context: formerly model C schools tend to set much higher school fees than other public schools. Original: How do the fees at former Model C schools compare to those at other schools? A: much higher Amod: The fees at former Model C schools compare to those at which schools by much higher ? A: other
2019
621
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6185–6190 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6185 MC2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension Xuanyu Zhang College of Information Science and Technology Beijing Normal University, Beijing, 100875, China [email protected] Abstract Conversational machine reading comprehension (CMRC) extends traditional single-turn machine reading comprehension (MRC) by multi-turn interactions, which requires machines to consider the history of conversation. Most of models simply combine previous questions for conversation understanding and only employ recurrent neural networks (RNN) for reasoning. To comprehend context profoundly and efficiently from different perspectives, we propose a novel neural network model, Multi-perspective Convolutional Cube (MC2). We regard each conversation as a cube. 1D and 2D convolutions are integrated with RNN in our model. To avoid models previewing the next turn of conversation, we also extend causal convolution partially to 2D. Experiments on the Conversational Question Answering (CoQA) dataset show that our model achieves state-of-the-art results. 1 Introduction Conversation is one of the most important approaches for humans to acquire information. Different from traditional machine reading comprehension (MRC), conversational machine reading comprehension (CMRC) requires machines to answer multiple follow-up questions according to a passage and dialogue history. However, these questions usually have complicated linguistic phenomena, such as co-reference, ellipsis and so on. Only considering conversation context profoundly can we answer the current question correctly. Recently, many CMRC datasets, such as CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018), are proposed to enable models to understand passages and answer questions in dialogue. Here is an example from the CoQA dataset in Figure 1. We can observe that the second and third questions omit key information. It is impossible for both huBilly went to the farm to buy some beef for his brother's birthday. When he arrived there, he saw that all six of the cows were sad and had brown spots. The cows were all eating their breakfast in a big grassy meadow … … History: Who went to the farm? Billy Why? To buy some beef Q1 Q2 A1 A2 For what? His brother's birthday Q3 A3 Passage: Figure 1: An example in the CoQA dataset. mans and machines to understand such questions without dialogue history. Most of existing methods consider conversation history by prepending previous questions and answers to the current question, such as BiDAF++ (Yatskar, 2019), DrQA+PGNet (Reddy et al., 2019), SDNet (Zhu et al., 2018) and so on. However, the latent semantic information of dialogue history is neglected. And the model may confuse some unrelated questions and answers in a sentence. Although FlowQA (Huang et al., 2019) utilizes intermediate representations of previous conversation, the flow mechanism can not synthesize the information of different words in different turns of conversation simultaneously. Moreover, previous models only use recurrent neural network (RNN) as their main skeleton, which is not parallel due to recurrent nature. And RNN can only grasp information from two directions, either forward or backward. But for conversation, humans usually consider history from different perspectives and answer questions comprehensively. 
To address these issues, we propose a novel model, i.e. Multi-perspective Convolutional Cube (MC2). Every conversation is represented as a 6186 Interaction Reasoning Layer Answer Prediction Layer Passage Question(t-th) Embedding Embedding RNN Contextual Encoding Layer Perspective Ⅰ Perspective Ⅱ Perspective Ⅰ Perspective Ⅲ-1 Perspective Ⅰ Perspective Ⅲ-1 Attention Attention Attention start end ① ② ③ ④ ⑤ ⑥ RNN Perspective Ⅰ Attention Figure 2: MC2 structure overview. cube, three dimensions of which are question answering (QA) turns, passage words and hidden states of words, separately. For one thing, convolutional neural networks (CNN) can extract local information effectively across dimensions in parallel. Introducing CNN to RNN allows the model to take into account local and global features efficiently. For another thing, machines can comprehend conversation history more deeply from different perspectives by fusing 1D and 2D convolutions in our model. In addition, to avoid information leakage of the next turn of dialogue, we extend causal convolution to 2D. Experiments on the Conversational Question Answering (CoQA) dataset show that our model improves the result of the published state-of-the-art model by 3.2%. 2 Approaches In this section, we propose our novel model, MC2, for the task of conversational machine reading comprehension, which can be formulated as follows. For one conversation, given a passage with n tokens P = {pi}n i=1 and multiple questions with c turns Q = {Qt}c t=1, machines need to give the corresponding answers A = {At}c t=1. The t-th question with m tokens is Qt = {qt j}m j=1. The neural network is required to model the probability distribution p(At|Q≤t, P) for the t-th QA turn in the conversation. As shown in Figure 2, there are three main layers in our model, i.e., contextual encoding layer, interaction reasoning layer and answer prediction layer. Our proposed cube is used in the middle layer. For convenience, we will illustrate our model from bottom to top. 2.1 Contextual Encoding Layer The purpose of this layer is to extract useful information for upper layers. We embed questions and passages into a sequence of vectors with the latest contextualized model, BERT (Devlin et al., 2019), separately. Instead of fine-tuning BERT with extra scoring layers, we fix the weights of BERT like SDNet (Zhu et al., 2018) and aggregate L hidden layers generated by BERT as contextualized embedding for all BPE (Sennrich et al., 2016) tokens. To introduce other linguistic features token by words and facilitate answer selection, we choose the first token of a word in BPE to represent the word. Generally, the first token is often the root of the word and can represent main meaning of the whole word. And it also contains information of rest tokens in the word with the bidirectional structure of BERT. Besides, we split the long sentence by shorter windows and combine them again when the sentence exceeds the maximum length of pre-trained BERT. In detail, suppose hl i ∈Rd is the l-th hidden layer of the first BPE token in the i-th word. We collapse all hidden layers generated by BERT into a single vector for each word following ELMo (Peters et al., 2018). The contextualized embedding for the i-th word is ei = γ PL l=0 αlhl i, where γ is designed to scale the vector and αl is softmax-normalized weight for the l-th layer. These weights are all trainable. 
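A minimal PyTorch sketch of this ELMo-style aggregation follows; the module and tensor layout are illustrative assumptions, and as in the paper the BERT weights would stay frozen while only the mixing weights and γ are trained.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """ELMo-style aggregation of frozen BERT layers (Section 2.1):
    e_i = gamma * sum_l softmax(w)_l * h_i^l, with w and gamma trainable.
    Layer tensors are assumed to be stacked as (L+1, seq_len, hidden_dim)."""

    def __init__(self, num_layers):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))  # one logit per layer
        self.gamma = nn.Parameter(torch.ones(1))               # global scale

    def forward(self, layer_states):                # (L+1, seq_len, hidden_dim)
        alpha = torch.softmax(self.weights, dim=0)              # softmax-normalised
        mixed = (alpha[:, None, None] * layer_states).sum(0)    # weighted sum over layers
        return self.gamma * mixed                               # (seq_len, hidden_dim)
```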
To be consistent with the number of turns of question EQ = {eQ t,j}m j=1 c t=1 ∈Rc×m×d, the passage eP i is expanded c times to EP = {eP t,i}n i=1 c t=1 ∈Rc×n×d. To incorporate other linguistic information, three additional features are utilized for each word pi in the passage following Chen et al. (2017), i.e. part-of-speech (POS) tags, named entity recognition (NER) tags and aligned question embeddings. The embeddings of POS epos i and NER ener i are learned for different tags, separately. And aligned question embeddings can be obtained in Eq. 1. Following Huang et al. (2018), we use f(x, y) = ReLU(Ux)TDReLU(Uy) as the attention score function between x, y, where D is a diagonal matrix and D, U are trainable. 6187 Passage Words QA Turns Perspective Ⅰ Perspective Ⅱ Perspective Ⅲ-1 Perspective Ⅲ-2 QA Turns QA Turns Figure 3: Different perspectives of the cube. si j = f(eP t,i, eQ t,j) ai j = exp(si j)/ Xm k=1 exp(si k) eattn t,i = Xm j=1 ai jeQ t,j (1) We then concatenate these features and embeddings to rP t,i for passages and employ bidirectional RNN to refine the question to rQ t,j. rP t,i = [eP t,i; epos t,i ; ener t,i ; eattn t,i ] rQ t,j = BiRNN(rQ t,j−1, eQ t,j) (2) 2.2 Interaction Reasoning Layer This layer plays an important role in our model, which aims to incorporate question information into passage representation further and reason from different perspectives by our proposed convolutional cube. The cube represents the hidden states of passages in a conversation. We will describe these perspectives in Figure 3 in the order of x to } in Figure 2. To consider global context of each turn besides local information across different dimensions, Perspective I equipped with RNN is inserted before other CNN perspectives. We first observe the cube from Perspective I and feed the hidden states of the cube rP t,i to bidirectional RNN for each turn of conversation cP t,i = BiRNN(cP t,i−1, rP t,i). Then the cube is viewed from Perspective II along QA turns for different words, separately. Since the (t+1)-th turn of information can not be used when processing the t-th turn, we employ 1D causal convolution (Oord et al., 2016) to the cube by moving the padding at the end to the beginning. And the representation of the cube can be updated from cP t,i into ¯cP t,i. After viewed from these two perspectives (x y in Figure 2), the hidden states of every word in passages grasp information from two dimensions of the cube. Next, we observe the cube from Perspective I again to fuse previous hidden states and generate global context ˆcP t,i for each turn of conversation. To reason from more dimensions simultaneously, 2D CNN is utilized to generate hidden states of the cube hP t,i along the dimension of both QA turns and passage words from Perspective III-1. Different from other models, three kinds of information can be considered comprehensively by this process: the same word in different QA turns, different words in the same QA turn and different words in different QA turns. Similar to 1D CNN above, the 2D CNN also requires to be unidirectional on the dimension of QA turns to avoid information leakage. But it is more reasonable to capture bidirectional information on the dimension of passage words. We thus extend traditional causal convolution partially to 2D CNN by moving padding only on one dimension. These two perspectives (z { in Figure 2) strengthen the representation of our cube further. 
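The same idea in a minimal PyTorch sketch: all turn-axis padding is placed before the first turn (the 1D causal case is identical with a single axis), while the word axis keeps symmetric padding. The tensor layout and channel sizes are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TurnCausalConv2d(nn.Module):
    """2D convolution over (QA turns, passage words) that is causal only
    along the turn axis (Perspective III-1): padding is placed entirely
    before the first turn, but symmetrically around passage words, so turn t
    never sees turn t+1 while words still get bidirectional context."""

    def __init__(self, channels, kernel=3):
        super().__init__()
        self.kernel = kernel
        self.conv = nn.Conv2d(channels, channels, kernel_size=kernel, padding=0)

    def forward(self, x):                       # (batch, channels, turns, words)
        half = (self.kernel - 1) // 2
        # F.pad order for 4D input: (word_left, word_right, turn_top, turn_bottom)
        x = F.pad(x, (half, half, self.kernel - 1, 0))  # all turn padding at the front
        return self.conv(x)
```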
For questions in this layer, we pass them as the input to another RNN for reasoning hQ t,j = BiRNN(hQ t,j−1, rQ t,j). Then we employ the attention score function mentioned above to integrate new information of questions to passages. si j = f([eP t,i; cP t,i; ˆcP t,i], [eQ t,j; rQ t,j; hQ t,j]) ai j = exp(si j)/ Xm k=1 exp(si k) hattn t,i = Xm j=1 ai jhQ t,j (3) As shown in Figure 2, we repeat the process of z { in | } for deeper understanding and reasoning. RNN takes [hP t,i; hattn t,i ] and generates ¯hP t,i from Perspective I. Then 2D CNN generates ˜hP t,i from Perspective III-1. We use self-attention to enhance the current passage representation as follows: si j = f([cP t,i; ˆcP t,i; ¯hP t,i], [cP t,j; ˆcP t,j; ¯hP t,j]) ai j = exp(si j)/ Xn k=1 exp(si k) hself t,i = Xn j=1 ai j¯hP t,j (4) 6188 Model In-domain Out-of-domain Overall Child. Liter Mid-High. News Wiki Reddit Science PGNet 49.0 43.3 47.5 47.5 45.1 38.6 38.1 44.1 DrQA 46.7 53.9 54.1 57.8 59.4 45.0 51.0 52.6 DrQA+PGNet 64.2 63.7 67.1 68.3 71.4 57.8 63.1 65.1 Augmt. DrQA 66.0 63.3 66.2 71.0 71.3 57.7 63.0 65.4 BiDAF++ 66.5 65.7 70.2 71.6 72.6 60.8 67.1 67.8 FlowQA 73.7 71.6 76.8 79.0 80.2 67.8 76.1 75.0 SDNet 75.4 73.9 77.1 80.3 83.1 69.8 76.8 76.6 MC2 78.4 76.7 81.1 83.0 84.8 73.8 80.6 79.8 Human 90.2 88.4 89.8 88.6 89.9 86.7 88.1 88.8 Table 1: Model and human performance (% in F1 score) on the CoQA test set. 0 10 20 30 40 50 60 70 80 90 DrQA BiDAF++ FlowQA SDNet MC² (ours) Out 47.9 63.8 71.8 73.1 77.1 △F1 6.6 5.6 4.5 4.9 3.8 In-domain F1 score 54.5 69.4 76.3 78.0 80.9 Figure 4: F1 score of models on in-domain and out-of-domain parts of the CoQA test set. At last, we view the cube from Perspective I again to synthesize the global information ˆhP t,i = BiRNN(ˆhP t,i−1, [¯hP t,i; ˜hP t,i; hself t,i ]). 2.3 Answer Prediction Layer This layer is the top one of our model. We use similar methods (Chen et al., 2017; Huang et al., 2019; Zhu et al., 2018) to predict the position of the answer in the passage. We project the question representation into one vector for each turn of dialogue ˆhQ t = Pm j=1 at,jhQ t,j, where at,j = exp(WhQ t,j)/ Pm k=1 exp(WhQ t,k) and W is trainable. Then two different bilinear attention functions are used to estimate the probability of the start and end according to ˆhP t,i and ˆhQ t . We choose the position of the maximum product of these two probabilities as the best span. For other answer types, such as yes, no and unknown, we condense the passage representation ˆhP t,i to ˆhP t like questions and classify the answer according to [ˆhP t ; ˆhQ t ]. To train the cube, we minimize the sum of the negative log probabilities of the ground truth start position, end position and answer type by the predicted distributions. 3 Experiments 3.1 Data and Metric We conduct our experiments on the CoQA (Reddy et al., 2019), a large-scale CMRC dataset annotated by human. It consists of 127k questions with answers collected from 8k conversations over text passages. As shown in Table 1, it covers seven diverse domains (five of them are in-domain and two are out-of-domain). The out-of-domain passages only appear in the test set. Aligned with the official evaluation, F1 score is used as the metric, which measures the overlap between the prediction and the ground truth at word level. 3.2 Implementation Details We use pre-trained BERTLARGE model for contextualized embeddings, the dimension of which is 1024. And spaCy is applied for tokenization, part-of-speech and named entity recognition. 
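As a small illustration of this tagging step, spaCy exposes the needed annotations directly; the specific spaCy model name below is an assumption, since the paper does not state which one is used.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # model name is an assumption

def linguistic_features(passage):
    """POS and NER tags per token, used as the extra passage features
    e^pos and e^ner of Section 2.1 (each tag is later mapped to a
    trainable embedding)."""
    doc = nlp(passage)
    return [(tok.text, tok.pos_, tok.ent_type_ or "O") for tok in doc]

print(linguistic_features("Billy went to the farm to buy some beef."))
```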
The last turn of the answer is added to the next turn as guidance in the dataset. Each batch contains one cube for one conversation. We employ LSTM as the structure of RNN, the hidden size of which is 250 throughout our model. The kernel size is set to 5 and 3 for 1D and 2D CNN, respectively. And the dropout rate is set to 0.4. The Adamax (Kingma and Ba, 2015) is used as our optimizer with 0.1 learning rate. 6189 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 F1 Score Epoch MC² (ours) SDNet SDNet ⃰ FlowQA BiDAF++ DrQA+PGNet Figure 5: F1 score on the CoQA dev set under different training epochs. 1 Configuration F1 ∆F1 MC2 81.266 w/o y { } 77.363 -3.903 w/o y 80.718 -0.548 w/o { 80.867 -0.399 w/o } 80.849 -0.417 replace y with { 80.932 -0.334 replace { with y 80.473 -0.793 replace } with III-2 81.087 -0.179 exchange y with { 81.102 -0.164 Table 2: Ablation study on the CoQA dev set. (y { } come from Fig. 2. III-2 comes from Fig. 3.) 3.3 Result We compare our MC2 with other baseline models 2 in Table 1: PGNet (See et al., 2017), DrQA (Chen et al., 2017), DrQA+PGNet (Reddy et al., 2019), Augmented DrQA (Reddy et al., 2019), BiDAF++ (Yatskar, 2019), FlowQA (Huang et al., 2019) and SDNet (Zhu et al., 2018). Our model achieves significant improvement over these published models. Comparing with the previous state-of-the-art model, SDNet, our model outperforms it by 3.2% on F1 score. And SDNet also takes pre-trained BERT as embedding without fine-tuning. Especially, our single model surpasses the ensemble model of both FlowQA and SDNet. Figure 4 shows the gap between in-domain and out-of-domain on the test set. Although all mod1SDNet comes from experiments of the original author. SDNet∗refers to the proportion of Fig. 2 in the original paper. 2We only consider published models on the CoQA. Although some models perform better on the leaderboard recently, they usually focus on fine-tuning BERT model. els perform worse on out-of-domain datasets compared to in-domain datasets, our model only drops 3.8% on F1 score. It is the smallest drop between in-domain and out-of-domain among all models, which proves that our model has very good generalization ability. Besides, our model achieves the best performance on both in-domain and out-ofdomain datasets. The learning curve is shown in Figure 5. It reflects the performance of models under different training epochs on the development set. We can observe that our model completely surpasses SDNet at every epoch. And it outperforms all baseline models only after 5 epochs and achieves the best performance after 18 epochs. Especially, our model achieves 72.472% on F1 score only after the first epoch, which is about 10% to 20% higher than SDNet. Thus with fewer training epochs, our model still can perform well. 3.4 Ablation Studies To study how each perspective of our proposed cube contributes to the performance, we conduct an ablation analysis on the development set in Table 2. The results show that removing all CNN perspectives of the cube, i.e. y { } in Figure 2, will cause a substantial performance drop (3.90% on F1 score). And removing any of them also results in marginal decrease in performance. It is clear that the improvement of reading from different perspectives simultaneously is larger than that of the sum of reading from single perspective separately. Besides, replacing 2D CNN (Perspective III-1) with 1D CNN (Perspective II) also causes a significant decline of performance (0.79% on F1 score). 
We also explore 3D CNN (Perspective III2), but it brings no improvement as expected. 4 Conclusion In this paper, we introduce Multi-perspective Convolutional Cube (MC2), a novel model for conversational machine reading comprehension. The cube is viewed from different perspectives to fully understand the history of conversation. By integrating CNN with RNN, fusing 1D and 2D convolutions, extending causal convolution to 2D, our model achieves the best results among published models on the CoQA dataset without fine-tuning BERT. We will study further the capability of our approaches on other datasets and tasks in the future work. 6190 References Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2019. FlowQA: Grasping flow in history for conversational machine comprehension. In Proceedings of the 7th International Conference on Learning Representations. Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2018. Fusionnet: Fusing via fullyaware attention with application to machine comprehension. In International Conference on Learning Representations. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations. Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. SSW, 125. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Mark Yatskar. 2019. A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2018. 
SDNet: Contextualized attention-based deep network for conversational question answering. arXiv preprint arXiv:1812.03593.
2019
622
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6191–6196 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6191 Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach Zonghan Yang♠ Yong Cheng♣ Yang Liu♠♦∗ Maosong Sun♠ ♠Institute for Artificial Intelligence State Key Laboratory of Intelligent Technology and Systems Department of Computer Science and Technology, Tsinghua University, Beijing, China Beijing National Research Center for Information Science and Technology ♣Google AI ♦Beijing Advanced Innovation Center for Language Resources Abstract While neural machine translation (NMT) has achieved remarkable success, NMT systems are prone to make word omission errors. In this work, we propose a contrastive learning approach to reducing word omission errors in NMT. The basic idea is to enable the NMT model to assign a higher probability to a ground-truth translation and a lower probability to an erroneous translation, which is automatically constructed from the ground-truth translation by omitting words. We design different types of negative examples depending on the number of omitted words, word frequency, and part of speech. Experiments on Chinese-to-English, German-to-English, and Russian-to-English translation tasks show that our approach is effective in reducing word omission errors and achieves better translation performance than three baseline methods. 1 Introduction While neural machine translation (NMT) has achieved remarkable success (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), there still remains a severe challenge: NMT systems are prone to omit essential words on the source side, which severely deteriorate the adequacy of machine translation. Due to the lack of interpretability of neural networks, it is hard to explain how these omission errors occur and design methods to eliminate them. Existing methods for reducing word omission errors in NMT have focused on modeling coverage (Tu et al., 2016; Mi et al., 2016; Wu et al., 2016; Wang et al., 2016; Tu et al., 2017). The central idea is to model the fertility (i.e., the number of corresponding target words) of a source word based on attention weights to avoid word omission. Although these methods prove to be effective in modeling coverage for NMT, they heavily rely on the attention weights provided by the ∗Corresponding author: Yang Liu RNNsearch model (Bahdanau et al., 2015). Since the attention weights between input and output are not readily available in the state-of-the-art Transformer model (Vaswani et al., 2017), it is hard for existing methods to be directly applicable. As a result, it is important to develop model-agnostic methods for addressing the word omission problem in NMT. In this paper, we propose a simple and effective contrastive learning approach to reducing word omission errors in NMT. The basic idea is to maximize the margin between the probability of a ground-truth translation and that of an erroneous translation for a given source sentence. The erroneous translations are automatically constructed via omitting words among the ground-truth translations. We design several types of erroneous translations in respect of omission counts, word frequency, and part of speech. Our approach has the following advantages: • Model agnostic. Our approach is applicable to all existing NMT models. Only the training objective and training data need to be changed. 
• Language independent. Our approach is independent of languages and can be applied to arbitrary languages. • Fast to train. Contrastive learning starts with a pre-trained NMT model and usually converges in only hundreds of steps. We evaluate our approach on German-toEnglish, Chinese-to-English, and Russian-toEnglish translation tasks. Experiments show that contrastive learning can not only effectively reduce word omission errors but also achieve better translation performance than existing methods in both automatic and human evaluations. 6192 2 A Contrastive Learning Approach Let x be a source sentence and y be a target sentence. We use P(y|x; θ) to denote an NMT model parameterized by θ. Given trained parameters ˆθ, the translation of a source sentence is given by ˆy = argmax y n P(y|x; ˆθ) o (1) During decoding process, the NMT model chooses the candidate sentence with the highest probability as the output translation. When a word omission error occurs, erroneous translations, which are mistakenly assigned with higher probabilities, are more likely to be chosen than ground-truth translations. Therefore, to reduce word omission errors, the probability that the NMT model assigns to an erroneous translation must be lower than that of a ground-truth translation. Our proposed contrastive learning method is shown in Algorithm 1 , which consists of three steps. In the first step, the model is trained using maximum likelihood estimation (MLE) on a parallel corpus (lines 1-2). In the second step, negative examples are automatically constructed by omitting words in ground-truth translations (line 3). In the third step, the model is finetuned using contrastive learning with the estimates of MLE as a starting point. More formally, given a parallel training set D = {⟨x(s), y(s)⟩}S s=1, the first step is to find a set of model parameters that maximizes the loglikelihood of the training set: ˆθMLE = argmax θ n L(θ) o , (2) where the log-likelihood is defined as L(θ) = S X s=1 log P(y(s)|x(s); θ) (3) The second step is to construct negative examples based on the ground-truth parallel corpus. Given a ground-truth sentence pair ⟨x, y⟩from the parallel training set D, an erroneous sentence pair ⟨x, ˜y⟩can be automatically constructed by omitting words from the translation y in the groundtruth sentence pair. In this work, we distinguish between three methods for omitting words: • Random omission. One or more source words are omitted according to a random uniform distribution. Algorithm 1 Contrastive Learning for NMT Input: D = {⟨x(s), y(s)⟩}S s=1 Output: ˆθCL 1: Obtain ˆθMLE using maximum likelihood estimation on D with random initialization; 2: Construct ˜D = {⟨x(s), ˜y(s)⟩}S s=1 based on D automatically; 3: Obtain ˆθCL using contrastive learning on ˜D with ˆθMLE as a starting point. • Omission by word frequency. One or more source words are omitted according to word frequencies. • Omission by part of speech. One or more source words are omitted according to parts of speech. Contrastive learning starts with the model parameters trained by MLE. Our contrastive learning approach is equipped with a max-margin loss. The max-margin loss ensures that the margins of the log-likelihood between the ground-truth pairs and the contrastive examples are higher than the setting η: ˆθCL = argmin θ n J(θ) o , (4) where the max-margin loss is defined as J(θ)= S X s=1 max ( N X n=1 log P(˜y(s) n |x(s); θ)+η −N log P(y(s)|x(s); θ), 0 ) . 
(5) For each ground-truth sentence pair ⟨x(s), y(s)⟩, it is possible to sample N negative examples ⟨x(s), ˜y(s) 1 ⟩, . . . , ⟨x(s), ˜y(s) N ⟩. For simplicity, we set N = 1 and use ˜D = {⟨x(s), ˜y(s)⟩}S s=1 in our experiments. 3 Experiments We evaluated the proposed method on Chineseto-English, German-to-English, and Russian-toEnglish translation tasks. 3.1 Setup For the Chinese-to-English translation task, we use the WMT 2017 dataset as the training set, 6193 which is composed of the News Commentary v12, UN Parallel Corpus v1.0, and CWMT corpora. The training set contains 25M sentence pairs. The newsdev2017 and newstest2017 datasets are used as the development set and test set, respectively. For the German-to-English translation task, we use the WMT 2017 dataset as the training set, which consists of 6M preprocessed sentence pairs. The newstest2014 and newstest2017 datasets are used as the development set and test set, respectively. For the Russian-to-English translation task, we use the WMT 2017 preprocessed dataset as the training set, which consists of 25M preprocessed sentence pairs. The newstest2015 and newstest2016 datasets are used as the development set and test set, respectively. Following Sennrich et al. (2016b), we split words into sub-word units. The numbers of merge operations in byte pair encoding (BPE) for both language pairs are set to 32K. After performing BPE, the training set of the Chinese-to-English task contains 550M Chinese sub-word units and 615M English sub-word units, the training set of the German-to-English task consists of 157M German sub-word units and 153M English subword units, and the training set of the Russian-toEnglish task consists of 653M Russian sub-word units and 629M English sub-word units. We used three baselines in our experiments: • MLE: Maximum likelihood estimation. The setting of hyper-parameters is the same with (Vaswani et al., 2017); • MLE + CP: Imposing the coverage penalty (Wu et al., 2016) constraint on the decoding process of MLE. We treat the softmax weight matrix in the uppermost “encoder-decoder attention” layer of Transformer as the attention weight matrix to calculate coverage penalty; • WordDropout: Implementing the word dropout technique proposed by Sennrich et al. (2016a) during MLE training. For our contrastive learning method, we compare different settings of erroneous training set ˜D: • CLone/two/three: ˜D is constructed via omitting one/two/three words randomly from the ground-truth translations in D; • CLlow/high: ˜D is constructed via omitting the word with the lowest/highest frequency from each ground-truth translation in D; Figure 1: Visualization of margin differences between CLone and MLE on 500 sampled sentence pairs. We use red to highlight sentence pairs on which CLone achieves a larger margin than MLE. Blue points denote MLE achieves a higher margin. • CLV/IN: ˜D is constructed via omitting one verb or preposition randomly from the ground-truth translation in D. The part-ofspeech information is given by the Stanford Parser (Manning et al., 2014). 3.2 Comparison of Margins To find out whether CL increases the margin compared with MLE, we calculate the following margin difference for a ground-truth sentence pair ⟨x, y⟩and an erroneous sentence pair ⟨x, ˜y⟩: ∆M =log P(y|x; ˆθCL)−log P(˜y|x; ˆθCL) − log P(y|x; ˆθMLE)+log P(˜y|x; ˆθMLE) (6) Figure 1 shows the margin difference between CLone and MLE on 500 sampled sentence pairs from the training set for the Chinese-to-English task. 
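For reference, the quantities behind Figure 1 and Eq. (5) reduce to a few lines. The sketch below assumes access to a sentence-level scorer log_p(y, x) returning log P(y | x); both the scorer name and the default margin η are illustrative, not values from the paper.

```python
def max_margin_loss(log_p, x, y, y_neg, eta=1.0):
    """Eq. (5) with N = 1: the ground-truth translation y must out-score the
    word-omission example y_neg by at least eta in log-probability.
    `log_p(y, x)` is assumed to return the model's sentence-level
    log-probability log P(y | x)."""
    margin = log_p(y, x) - log_p(y_neg, x)
    return max(eta - margin, 0.0)

def margin_difference(log_p_cl, log_p_mle, x, y, y_neg):
    """Eq. (6): how much larger the margin is after contrastive fine-tuning
    than under the MLE baseline (the quantity plotted in Figure 1)."""
    margin_cl = log_p_cl(y, x) - log_p_cl(y_neg, x)
    margin_mle = log_p_mle(y, x) - log_p_mle(y_neg, x)
    return margin_cl - margin_mle
```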
“Sentence length” denotes the sum of the lengths of the source and target sentences (i.e., |x| + |y|). Red points denote sentence pairs on which CLone has a larger margin than MLE (i.e., ∆M > 0), while the blue ones denote the ∆M < 0 case. We find that CLone has a larger margin than MLE on 95% of the 500 sampled sentence pairs, with an average margin difference of 1.4. 3.3 Automatic Evaluation Results Table 1 shows the results of automatic evaluation on Chinese-to-English, German-to-English, and Russian-to-English translation tasks. The evaluation metric is case-insensitive BLEU score (Papineni et al., 2002). Contrastive learning starts with the model parameters trained by MLE and converges in only 150 steps. For fair comparison, all 6194 Method Zh-En De-En Ru-En MLE 23.90 34.88 31.24 MLE + CP 24.04 34.93 31.36 WordDropout 23.73 34.63 31.05 CLone 24.92 ++∗∗†† 35.74 ++∗∗†† 32.04 ++∗∗†† CLtwo 24.76 ++∗∗†† 35.54 ++∗∗†† 31.94 ++∗†† CLthree 24.52 +∗†† 35.44 ++∗†† 32.20 ++∗∗†† CLlow 24.13 † 34.96 † 31.47 ++† CLhigh 24.77 ++∗∗†† 35.24 ++†† 31.70 ++†† CLV 24.12 † 35.02 †† 31.73 ++∗†† CLIN 24.71 ++∗∗†† 35.26 +∗†† 31.76 ++∗†† Table 1: Automatic evaluation results on Chinese-to-English, German-to-English, and Russian-to-English translation tasks. Contrastive learning starts with the model parameters trained by MLE and converges in only 150 steps. For fair comparison, all the models of MLE, MLE + CP, and MLE + data are trained for another 150 steps as well, but yielding no further improvement. “+”: significantly better than MLE (p < 0.05). “++”: significantly better than MLE (p < 0.01). “∗”: significantly better than MLE + CP (p < 0.05). “∗∗”: significantly better than MLE + CP (p < 0.01).“†”: significantly better than WordDropout (p < 0.05). “††”: significantly better than WordDropout (p < 0.01). Method Flu. Ade. Evaluator 1 MLE 4.31 4.25 MLE + CP 4.31 4.31 WordDropout 4.29 4.25 CLone 4.32 4.58 Evaluator 2 MLE 4.27 4.22 MLE + CP 4.26 4.25 WordDropout 4.25 4.23 CLone 4.27 4.53 Table 2: Human evaluation results on the Chinese-toEnglish task. “Flu.” denotes fluency and “Ade.” denotes adequacy. Two human evaluators who can read both Chinese and English were asked to assess the fluency and adequacy of the translations. The scores of fluency and adequacy range from 1 to 5. the models of MLE, MLE+CP, and MLE+data are trained for another 150 steps as well, but yielding no further improvement. We observe that with negative examples synthesized properly, our contrastive learning method significantly outperforms MLE, MLE + CP, and WordDropout on all three language pairs. An interesting finding is that omitting highfrequency source words (i.e., CLhigh) achieves significantly better results than omitting lowfrequency source words (i.e., CLlow) for all three language pairs, which suggests that standard NMT models tend to omit high-frequency source words rather than low-frequency words. Method Zh-En De-En Ru-En MLE 362 221 471 MLE + CP 265 200 383 WordDropout 245 168 351 CLone 122 138 250 Table 3: Comparison of error counts on the test sets. CL denotes the contrastive learning method with the highest BLEU score, which is CLone for the Chineseto-English and German-to-English tasks and CLthree for the Russian-to-English task. The experiment on omission by part of speech further confirms this finding as omitting highfrequency prepositions (i.e., CLIN) leads to better results than omitting low-frequency verbs (i.e., CLV). 
3.4 Human Evaluation Results Table 2 shows the results of human evaluation on the Chinese-to-English task. We asked two human evaluators who can read both Chinese and English to evaluate the fluency and adequacy of the translations generated by MLE, MLE + CP, MLE + data, and CLone. The scores of fluency and adequacy range from 1 to 5. The translations were shuffled randomly, and the name of each method was anonymous to human evaluators. We find that CLone significantly improves the adequacy over all baselines. This is because omitting important information in source sentences de6195 creases the adequacy of translation. CLone is capable of alleviating this problem by assigning lower probabilities to translations with word omission errors. To further quantify to what extent our approach reduces word omission errors, we asked human evaluators to manually count word omission errors on the test sets of all the translation tasks. Table 3 shows the error counts. We find that CLone achieves significant error reduction as compared with MLE, MLE + CP, and WordDropout for all the three language pairs. 4 Related Work Our work is related to two lines of research: modeling coverage for NMT and contrastive learning in NLP. 4.1 Modeling Coverage for NMT The notion of coverage dates back to conventional phrase-based statistical machine translation (Koehn et al., 2003). A coverage vector, which is used to indicate whether a source phrase is translated or not during the decoding process, ensures that each source phrase is translated exactly once. As there are no latent variables defined on language structures in neural networks, it is hard to directly introduce coverage into NMT. As a result, there are two strategies. The first strategy is to modify the model architectures to incorporate coverage (Tu et al., 2016; Mi et al., 2016), which requires considerable expertise. The second strategy is to impose constraints on the decoding process (Wu et al., 2016). Our work differs from prior studies in that contrastive learning is model agnostic. All previous coverage-based methods heavily rely on attention weights between source and target words to derive coverage for source words. Such attention weights are not readily available for all NMT models. In contrast, our method can be used to fine-tune arbitrary NMT models to reduce word omission errors in only hundreds of steps. 4.2 Contrastive Learning in NLP Contrastive learning has been widely used in natural language processing. For instance, word embeddings are usually learned by the noise contrastive estimation method (Gutmann and Hyv¨arinen, 2012): a negative example is synthesized by randomly selecting a word from the vocabulary to replace a word in a ground-truth example (Vaswani et al., 2013; Mnih and Kavukcuoglu, 2013; Bose et al., 2018). Contrastive learning has also been investigated in neural language modelling (Huang et al., 2018), unsupervised word alignment (Liu and Sun, 2015), order embeddings (Vendrov et al., 2016; Bose et al., 2018), knowledge graph embeddings (Yang et al., 2015; Lin et al., 2015; Bose et al., 2018) and caption generation (Mao et al., 2016; Vedantam et al., 2017). The closest work to ours is (Wiseman and Rush, 2016), which leverages contrastive learning during beam search with the golden reference sentences as positive examples and the current output sentences as contrastive examples. 
While they focus on improving the capability of Seq2Seq model to capture global dependencies, we focus on reducing word omission errors of Transformer model effectively. 5 Conclusion We have presented contrastive learning for reducing word omission errors in neural machine translation. Contrastive examples are automatically constructed by omitting words from the groundtruth translations. Our approach is model-agnostic and can be applied to arbitrary NMT models. Experiments show that our approach significantly reduces omission errors and improves translation performance on three language pairs. 6 Acknowledgments We thank all anonymous reviewers for their valuable comments. This work is supported by the National Key R&D Program of China (No. 2017YFB0202204), National Natural Science Foundation of China (No. 61761166008, No. 61432013), Beijing Advanced Innovation Center for Language Resources (No. TYR17002), and the NExT++ project supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC@Singapore Funding Initiative. References Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Avishek Joey Bose, Huan Ling, and Yanshuai Cao. 6196 2018. Adversarial contrastive estimation. In Proceedings of ACL. Michael U Gutmann and Aapo Hyv¨arinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(Feb):307–361. Jiaji Huang, Yi Li, Wei Ping, and Liang Huang. 2018. Large margin neural language model. In Proceedings of EMNLP. Phillip Koehn, Franz Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of AAAI. Yang Liu and Maosong Sun. 2015. Contrastive unsupervised word alignment with non-local features. In Proceedings of AAAI. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of CVPR. Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Proceedings of EMNLP. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Proceedings of NIPS. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of ACL. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Proceedings of AAAI. 
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with large-scale neural language models improves translation. In Proceedings of EMNLP. Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In Proceedings of CVPR. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In Proceedings of ICLR. Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. 2016. A novel approach to dropped pronoun translation. In Proceedings of NAACL. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296–1306. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. In Proceedings of NIPS. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of ICLR.
2019
623
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6197–6203 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6197 Exploiting Sentential Context for Neural Machine Translation Xing Wang Tencent AI Lab [email protected] Zhaopeng Tu Tencent AI Lab [email protected] Longyue Wang Tencent AI Lab [email protected] Shuming Shi Tencent AI Lab [email protected] Abstract In this work, we present novel approaches to exploit sentential context for neural machine translation (NMT). Specifically, we first show that a shallow sentential context extracted from the top encoder layer only, can improve translation performance via contextualizing the encoding representations of individual words. Next, we introduce a deep sentential context, which aggregates the sentential context representations from all the internal layers of the encoder to form a more comprehensive context representation. Experimental results on the WMT14 English⇒German and English⇒French benchmarks show that our model consistently improves performance over the strong TRANSFORMER model (Vaswani et al., 2017), demonstrating the necessity and effectiveness of exploiting sentential context for NMT. 1 Introduction Sentential context, which involves deep syntactic and semantic structure of the source and target languages (Nida, 1969), is crucial for machine translation. In statistical machine translation (SMT), the sentential context has proven beneficial for predicting local translations (Meng et al., 2015; Zhang et al., 2015). The exploitation of sentential context in neural machine translation (NMT, Bahdanau et al., 2015), however, is not well studied. Recently, Lin et al. (2018) showed that the translation at each time step should be conditioned on the whole target-side context. They introduced a deconvolution-based decoder to provide the global information from the target-side context for guidance of decoding. In this work, we propose simple yet effective approaches to exploiting source-side global sentence-level context for NMT models. We use encoder representations to represent the sourceside context, which are summarized into a sentential context vector. The source-side context vector is fed to the decoder, so that translation at each step is conditioned on the whole source-side context. Specifically, we propose two types of sentential context: 1) the shallow one that only exploits the top encoder layer, and 2) the deep one that aggregates the sentence representations of all the encoder layers. The deep sentential context can be viewed as a more comprehensive global sentence representation, since different types of syntax and semantic information are encoded in different encoder layers (Shi et al., 2016; Peters et al., 2018; Raganato and Tiedemann, 2018). We validate our approaches on top of the stateof-the-art TRANSFORMER model (Vaswani et al., 2017). Experimental results on the benchmarks WMT14 English⇒German and English⇒French translation tasks show that exploiting sentential context consistently improves translation performance across language pairs. Among the model variations, the deep strategies consistently outperform their shallow counterparts, which confirms our claim. Linguistic analyses (Conneau et al., 2018) on the learned representations reveal that the proposed approach indeed provides richer linguistic information. 
The contributions of this paper are: • Our study demonstrates the necessity and effectiveness of exploiting source-side sentential context for NMT, which benefits from fusing useful contextual information across encoder layers. • We propose several strategies to better capture useful sentential context for neural machine translation. Experimental results empirically show that the proposed approaches achieve improvement over the strong baseline model TRANSFORMER. 6198 layer 2 layer 1 layer 3 layer 2 layer 1 layer 3 layer 2 layer 1 layer 3 (a) Vanilla layer 2 layer 1 layer 3 layer 2 layer 1 layer 3 layer 2 layer 1 layer 3 (b) Shallow Sentential Context layer 2 layer 1 layer 2 layer 1 layer 2 layer 1 layer 3 (c) Deep Sentential Context Figure 1: Illustration of the proposed approache. As on a 3-layer encoder: (a) vanilla model without sentential context, (b) shallow sentential context representation (i.e. blue square) by exploiting the top encoder layer only; and (c) deep sentential context representation (i.e. brown square) by exploiting all encoder layers. The circles denote hidden states of individual tokens in the input sentence, and the squares denote the sentential context representations. The red up arrows denote that the representations are fed to the subsequent decoder. This figure is best viewed in color. 2 Approach Like a human translator, the encoding process is analogous to reading a sentence in the source language and summarizing its meaning (i.e. sentential context) for generating the equivalents in the target language. When humans translate a source sentence, they generally scan the sentence to create a whole understanding, with which in mind they incrementally generate the target sentence by selecting parts of the source sentence to translate at each decoding step. In current NMT models, the attention model plays the role of selecting parts of the source sentence, but lacking a mechanism to guarantee that the decoder is aware of the whole meaning of the sentence. In response to this problem, we propose to augment NMT models with sentential context, which represents the whole meaning of the source sentence. 2.1 Framework Figure 1 illustrates the framework of the proposed approach. Let g = g(X) be the sentential context vector, and g(·) denotes the function to summarize the source sentence X, which we will discuss in the next sections. There are many possible ways to integrate the sentential context into the decoder. The target of this paper is not to explore this whole space but simply to show that one fairly straightforward implementation works well and that sentential context helps. In this work, we incorporate the sentential context into decoder as dl i = f(LAYERdec( bDl−1), cl i), (1) bDl−1 = FFNl(Dl−1, g), (2) where dl i is the l-th layer decoder state at decoding step i, cl i is a dynamic vector that selects certain parts of the encoder output, FFNl(·) is a distinct feed-forward network associated with the l-th layer of the decoder, which reads the l −1-th layer output Dl−1 and the sentential context g. In this way, at each decoding step i, the decoder is aware of the sentential context g embedded in bDl−1. In the following sections, we discuss the choice of g(·), namely shallow sentential context (Figure 1b) and deep sentential context (Figure 1c), which differ at the encoder layers to be exploited. It should be pointed out that the new parameters introduced in the proposed approach are jointly updated with NMT model parameters in an endto-end manner. 
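As one concrete reading of Equations (1)–(2), the sketch below fuses the sentential context vector g into the output of decoder layer l−1 before it enters layer l. The concatenate-and-project form of FFN^l is an assumption, since the paper only specifies that FFN^l reads D^{l−1} and g, and the PyTorch module name is ours.

```python
import torch
import torch.nn as nn

class ContextFusion(nn.Module):
    """One per-decoder-layer FFN^l that reads D^{l-1} and the sentential
    context g and produces D-hat^{l-1} (Eq. 2)."""
    def __init__(self, d_model):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, dec_states, g):
        # dec_states: [batch, tgt_len, d_model] -- previous decoder layer output D^{l-1}
        # g:          [batch, d_model]          -- sentential context vector
        g_tiled = g.unsqueeze(1).expand_as(dec_states)
        return self.ffn(torch.cat([dec_states, g_tiled], dim=-1))

fusion = ContextFusion(d_model=512)
d_hat = fusion(torch.randn(2, 7, 512), torch.randn(2, 512))
print(d_hat.shape)  # torch.Size([2, 7, 512]); this feeds the l-th decoder layer (Eq. 1)
```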
2.2 Shallow Sentential Context Shallow sentential context is a function of the top encoder layer output HL: g = g(HL) = GLOBAL(HL), (3) where GLOBAL(·) is the composition function. Choices of GLOBAL(·) Two intuitive choices are mean pooling (Iyyer et al., 2015) and max pooling (Kalchbrenner et al., 2014): GLOBALMEAN = MEAN(HL), (4) GLOBALMAX = MAX(HL). (5) Recently, Lin et al. (2017) proposed a selfattention mechanism to form sentence representation, which is appealing for its flexibility on extracting implicit global features. Inspired by this, 6199 g3 g2 g1 r2 r1 r3 r0 g3 g2 g1 gi βi,1 βi,2 βi,3 di-1 g3 g2 g1 gi (a) RNN g3 g2 g1 r2 r1 r3 r0 g3 g2 g1 gi βi,1 βi,2 βi,3 di-1 g3 g2 g1 gi (b) TAM Figure 2: Illustration of the deep functions. “TAM” model dynamically aggregates sentence representations at each decoding step with state di−1. we propose an attentive mechanism to learn sentence representation: GLOBALATT = ATT(g0, HL), (6) g0 = MAX(H0), (7) where H0 is the word embedding layer, and its max pooling vector g0 serves as the query to extract features to form the final sentential context representation. 2.3 Deep Sentential Context Deep sentential context is a function of all encoder layers outputs {H1, . . . , HL}: g = g(H1, . . . , HL) = DEEP(g1, . . . , gL), (8) where gl is the sentence representation of the l-th layer Hl, which is calculated by Equation 3. The motivation for this mechanism is that recent studies reveal that different encoder layers capture linguistic properties of the input sentence at different levels (Peters et al., 2018), and aggregating layers to better fuse semantic information has proven to be of profound value (Shen et al., 2018; Dou et al., 2018; Wang et al., 2018; Dou et al., 2019). In this work, we propose to fuse the global information across layers. Choices of DEEP(·) In this work, we investigate two representative functions to aggregate information across layers, which differ at whether the decoding information is taken into account. RNN Intuitively, we can treat G = {g1, . . . , gL} as a sequence of representations, and recurring all the representations with an RNN: DEEPRNN = RNN(G). (9) We use the last RNN state as the sentence representation: g = rL. As seen, the RNN-based aggregation repeatedly revises the sentence representations of the sequence with each recurrent step. As a side effect coming together with the proposed approach, the added recurrent inductive bias of RNNs has proven beneficial for many sequence-to-sequence learning tasks such as machine translation (Dehghani et al., 2018). TAM Recently, Bapna et al. (2018) proposed a novel transparent attention model (TAM) to train very deep NMT models. In this work, we apply TAM to aggregate sentence representations: DEEPTAM = L X l=1 βi,lgl, (10) βi = ATTg(dl i−1, G), (11) where ATTg(·) is an attention model with its own parameters, that specifics which context representations is relevant for each decoding output. Again, dl i−1 is the decoder state in the l-th layer. Comparing with its RNN counterpart, the TAM mechanism has three appealing strengths. First, TAM dynamically generates the weights βi based on the decoding information at every decoding step dl i−1, while RNN is unaware of the decoder states and the associated parameters are fixed after training. Second, TAM allows the model to adjust the gradient flow to different layers in the encoder depending on its training phase. 
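The composition functions above translate into compact tensor operations. The sketch below gives one possible PyTorch rendering of the shallow GLOBAL variants (Eqs. 4–7) and the deep RNN and TAM aggregators (Eqs. 8–11); the single-head scaled dot-product attention and the GRU cell are simplifying assumptions, as the paper does not pin down these internals.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def global_mean(H):                      # Eq. 4: mean pooling over source positions
    return H.mean(dim=1)

def global_max(H):                       # Eq. 5: max pooling over source positions
    return H.max(dim=1).values

class GlobalAttention(nn.Module):
    """Eqs. 6-7: the max-pooled embedding layer H^0 queries the top layer H^L."""
    def __init__(self, d):
        super().__init__()
        self.q_proj = nn.Linear(d, d)

    def forward(self, H0, HL):
        q = self.q_proj(global_max(H0)).unsqueeze(1)           # [batch, 1, d]
        scores = torch.bmm(q, HL.transpose(1, 2)) / HL.size(-1) ** 0.5
        return torch.bmm(F.softmax(scores, dim=-1), HL).squeeze(1)

class DeepRNN(nn.Module):
    """Eq. 9: recur over g_1..g_L and keep the last hidden state."""
    def __init__(self, d):
        super().__init__()
        self.rnn = nn.GRU(d, d, batch_first=True)

    def forward(self, G):                 # G: [batch, L, d] stacked layer summaries
        _, last = self.rnn(G)
        return last.squeeze(0)

class DeepTAM(nn.Module):
    """Eqs. 10-11: layer weights beta_i recomputed from the previous decoder state."""
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)

    def forward(self, d_prev, G):         # d_prev: [batch, d], G: [batch, L, d]
        scores = torch.bmm(G, self.proj(d_prev).unsqueeze(-1)).squeeze(-1)
        beta = F.softmax(scores, dim=-1)                        # [batch, L]
        return (beta.unsqueeze(-1) * G).sum(dim=1)

H0, HL = torch.randn(2, 9, 512), torch.randn(2, 9, 512)
print(GlobalAttention(512)(H0, HL).shape)                       # torch.Size([2, 512])
```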
3 Experiment We conducted experiments on WMT14 En⇒De and En⇒Fr benchmarks, which contain 4.5M and 35.5M sentence pairs respectively. We reported experimental results with case-sensitive 4gram BLEU score. We used byte-pair encoding (BPE) (Sennrich et al., 2016) with 32K merge operations to alleviate the out-of-vocabulary problem. We implemented the proposed approaches on top of TRANSFORMER model (Vaswani et al., 2017). We followed Vaswani et al. (2017) to set the model configurations, and reproduced their reported results. We tested both Base and Big models, which differ at the layer size (512 vs. 1024) and the number of attention heads (8 vs. 16). 3.1 Ablation Study We first investigated the effect of components in the proposed approaches, as listed in Table 1. Shallow Sentential Context (Rows 3-5) All the shallow strategies achieve improvement over the 6200 # Model GLOBAL(·) DEEP(·) # Para. Train Decode BLEU 1 BASE n/a n/a 88.0M 1.39 3.85 27.31 2 MEDIUM n/a n/a +25.2M 1.08 3.09 27.81 3 SHALLOW Mean Pooling n/a +18.9M 1.35 3.45 27.58 4 Max Pooling +18.9M 1.34 3.43 27.81↑ 5 Attention +19.9M 1.22 3.23 28.04⇑ 6 DEEP Attention RNN +26.8M 1.03 3.14 28.38⇑ 7 TAM +26.4M 1.07 3.03 28.33⇑ Table 1: Impact of components on WMT14 En⇒De translation task. BLEU scores in the table are case sensitive. “Train” denotes the training speed (steps/second), and “Decode” denotes the decoding speed (sentences/second) on a Tesla P40. “TAM” denotes the transparent attention model to implement the function DEEP(·). “↑/ ⇑”: significant over TRANSFORMER counterpart (p < 0.05/0.01), tested by bootstrap resampling (Koehn, 2004). baseline Base model, validating the importance of sentential context in NMT. Among them, attentive mechanism (Row 5) obtains the best performance in terms of BLEU score, while maintains the training and decoding speeds. Therefore, we used the attentive mechanism to implement the function GLOBAL(·) as the default setting in the following experiments. Deep Sentential Context (Rows 6-7) As seen, both RNN and TAM consistently outperform their shallow counterparts, proving the effectiveness of deep sentential context. Introducing deep context significantly improves translation performance by over 1.0 BLEU point, while only marginally decreases the training and decoding speeds. Compared to Strong Base Model (Row 2) As our model has more parameters than the Base model, we build a new baseline model (MEDIUM in Table 1) which has a similar model size as the proposed deep sentential context model. We change the filter size from 1024 to 3072 in the decoder’s feed-forward network (Eq.2). As seen, the proposed deep sentential context models also outperform the MEDIUM model over 0.5 BLEU point. 3.2 Main Result Experimental results on both WMT14 En⇒De and En⇒Fr translation tasks are listed in Table 2. As seen, exploiting deep sentential context representation consistently improves translation performance across language pairs and model architectures, demonstrating the necessity and effectiveness of modeling sentential context for NMT. Among them, TRANSFORMER-BASE with deep sentential context achieves comparable performance with the vanilla TRANSFORMER-BIG, with only less than half of the parameters (114.4M Model En⇒De En⇒Fr TRANSFORMER-BASE 27.31 39.32 + DEEP (RNN) 28.38⇑ 40.15⇑ + DEEP (TAM) 28.33⇑ 40.27⇑ TRANSFORMER-BIG 28.58 41.41 + DEEP (RNN) 29.04↑ 41.87 + DEEP (TAM) 29.19⇑ 42.04⇑ Table 2: Case-sensitive BLEU scores on WMT14 En⇒De and En⇒Fr test sets. 
“↑/ ⇑”: significant over TRANSFORMER counterpart (p < 0.05/0.01), tested by bootstrap resampling. vs. 264.1M, not shown in the table). Furthermore, DEEP (TAM) consistently outperforms DEEP (RNN) in the TRANSFORMER-BIG configuration. One possible reason is that the big models benefit more from the improved gradient flow with the transparent attention (Bapna et al., 2018). 3.3 Linguistic Analysis To gain linguistic insights into the global and deep sentence representation, we conducted probing tasks1 (Conneau et al., 2018) to evaluate linguistics knowledge embedded in the encoder output and the sentence representation in the variations of the Base model that are trained on En⇒De translation task. The probing tasks are classification problems that focus on simple linguistic properties of sentences. The 10 probing tasks are categories into three groups: (1) Surface information. (2) Syntactic information. (3) Semantic information. For each task, we trained the classifier on the train set, and validated the classifier on the validation set. We followed Hao et al. (2019) and Li 1https://github.com/facebookresearch/ SentEval/tree/master/data/probing 6201 Model Surface Syntactic Semantic SeLen WC Avg TrDep ToCo BShif Avg Tense SubN ObjN SoMo CoIn Avg L4 IN BASE 94.18 66.24 80.21 43.91 77.36 69.25 63.51 88.03 83.77 83.68 52.22 60.57 73.65 L5 IN BASE 93.40 63.95 78.68 44.36 78.26 71.36 64.66 88.84 84.05 84.56 52.58 61.56 74.32 L6 IN BASE 92.20 63.00 77.60 44.74 79.02 71.24 65.00 89.24 84.69 84.53 52.13 62.47 74.61 + SSR 92.09 62.54 77.32 44.94 78.39 71.31 64.88 89.17 85.79 85.21 53.14 63.32 75.33 + DSR 91.86 65.61 78.74 45.52 78.77 71.62 65.30 89.08 85.89 84.91 53.40 63.33 75.32 Table 3: Performance on the linguistic probing tasks of evaluating linguistics embedded in the encoder outputs. “BASE” denotes the representations from TRANFORMER-BASED encoder. “SSR” denotes shallow sentence representation. “DSR” denotes deep sentence representation. “AVG” denotes the average accuracy of each category. et al. (2019) to set the model configurations. We also listed the results of lower layer representations (L = 4, 5) in TRANSFORMER-BASE to conduct better comparison. The accuracy results on the different test sets are shown in Table 3. From the tale, we can see that • For different encoder layers in the baseline model (see “L4 in BASE”, “L5 in BASE” and “L6 in BASE”), lower layers embed more about surface information while higher layers encode more semantics, which are consistent with previous findings in (Raganato and Tiedemann, 2018). • Integrating the shallow sentence representation (“+ SSR”) obtains improvement over the baseline on semantic tasks (75.33 vs. 74.61), while fails to improve on the surface (77.32 vs. 77.60) and syntactic tasks (64.88 vs. 65.00). This may indicate that the shallow representations that exploits only the top encoder layer (“L6 in BASE”) encodes more semantic information. • Introducing deep sentence representation (“+ DSR”) brings more improvements. The reason is that our deep sentence representation is induced from the sentence representations of all the encoder layers, and lower layers that contain abound surface and syntactic information are exploited. Along with the above translation experiments, we believe that the sentential context is necessary for NMT by enriching the source sentence representation. The deep sentential context which is induced from all encoder layers can improve translation performance by offering different types of syntax and semantic information. 
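The probing evaluation follows a simple recipe: freeze a sentence representation, train a light classifier per task, and report test accuracy. The sketch below uses a logistic regression probe over precomputed vectors; the SentEval-style toolkit behind Table 3 may use a different probe, so treat the classifier choice, array names, and toy data as assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def run_probing_task(train_vecs, train_labels, test_vecs, test_labels):
    """Fit a probe on frozen sentence vectors (e.g., mean-pooled encoder states
    or the shallow/deep sentential context) and return test accuracy for one
    linguistic property such as tense (Tense) or tree depth (TrDep)."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(train_vecs, train_labels)
    return accuracy_score(test_labels, probe.predict(test_vecs))

# Toy usage with random vectors standing in for real encoder outputs.
rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(200, 512)), rng.normal(size=(50, 512))
y_tr, y_te = rng.integers(0, 2, 200), rng.integers(0, 2, 50)
print(f"probing accuracy: {run_probing_task(X_tr, y_tr, X_te, y_te):.3f}")
```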
4 Related Work Sentential context has been successfully applied in SMT (Meng et al., 2015; Zhang et al., 2015). In these works, sentential context representation which is generated by the CNNs is exploited to guided the target sentence generation. In broad terms, sentential context can be viewed as a sentence abstraction from a specific aspect. From this point of view, domain information (Foster and Kuhn, 2007; Hasler et al., 2014; Wang et al., 2017b) and topic information (Xiao et al., 2012; Xiong et al., 2015; Zhang et al., 2016) can also be treated as the sentential context, the exploitation of which we leave for future work. In the context of NMT, several researchers leverage document-level context for NMT (Wang et al., 2017a; Choi et al., 2017; Tu et al., 2018), while we opt for sentential context. In addition, contextual information are used to improve the encoder representations (Yang et al., 2018, 2019; Lin et al., 2018). Our approach is complementary to theirs by better exploiting the encoder representations for the subsequent decoder. Concerning guiding the NMT generation with source-side context, Zheng et al. (2018) split the source content into translated and untranslated parts, while we focus on exploiting global sentence-level context. 5 Conclusion In this work, we propose to exploit sentential context for neural machine translation. Specifically, the shallow and the deep strategies exploit the top encoder layer and all the encoder layers, respectively. Experimental results on WMT14 benchmarks show that exploiting sentential context improves performances over the state-of-theart TRANSFORMER model. Linguistic analyses reveal that the proposed approach indeed captures more linguistic information as expected. 6202 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR. Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. In EMNLP. Heeyoul Choi, Kyunghyun Cho, and Yoshua Bengio. 2017. Context-dependent word representation for neural machine translation. Computer Speech & Language, 45:149–160. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In ACL. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2018. Universal transformers. arXiv preprint arXiv:1807.03819. Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, and Tong Zhang. 2018. Exploiting deep representations for neural machine translation. In EMNLP. Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Longyue Wang, Shuming Shi, and Tong Zhang. 2019. Dynamic layer aggregation for neural machine translation with routing-by-agreement. In AAAI. George Foster and Roland Kuhn. 2007. Mixture-model adaptation for smt. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 128–135. Association for Computational Linguistics. Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. 2019. Modeling recurrence for transformer. In NAACL. Eva Hasler, Barry Haddow, and Philipp Koehn. 2014. Combining domain and topic adaptation for smt. In Proceedings of AMTA, volume 1, pages 139–151. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In ACL. 
Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In EMNLP. Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, and Zhaopeng Tu. 2019. Information aggregation for multi-head attention with routing-by-agreement. In NAACL. Junyang Lin, Xu Sun, Xuancheng Ren, Shuming Ma, Jinsong Su, and Qi Su. 2018. Deconvolution-based global decoding for neural machine translation. In COLING. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In ICLR. Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. 2015. Encoding source language with convolutional neural network for machine translation. In ACL. Eugene A Nida. 1969. Science of translation. Language, pages 483–498. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In NAACL. Alessandro Raganato and J¨org Tiedemann. 2018. An analysis of encoder representations in transformerbased machine translation. In EMNLP 2018 workshop BlackboxNLP. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL. Yanyao Shen, Xu Tan, Di He, Tao Qin, and Tie-Yan Liu. 2018. Dense information flow for neural machine translation. In NAACL. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In EMNLP. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. TACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In NIPS. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017a. Exploiting cross-sentence context for neural machine translation. In EMNLP. Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. 2018. Multi-layer representation fusion for neural machine translation. In COLING. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017b. Sentence embedding for neural machine translation domain adaptation. In ACL. Xinyan Xiao, Deyi Xiong, Min Zhang, Qun Liu, and Shouxun Lin. 2012. A topic similarity model for hierarchical phrase-based translation. 6203 Deyi Xiong, Min Zhang, and Xing Wang. 2015. Topic-based coherence modeling for statistical machine translation. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 23(3):483–493. Baosong Yang, Jian Li, Derek F. Wong, Lidia S. Chao, Xing Wang, and Zhaopeng Tu. 2019. Context-aware self-attention networks. In AAAI. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In EMNLP. Jiajun Zhang, Dakun Zhang, and Jie Hao. 2015. Local translation prediction with global sentence representation. In IJCAI. Jian Zhang, Liangyou Li, Andy Way, and Qun Liu. 2016. Topic-informed neural machine translation. In COLING. Zaixiang Zheng, Hao Zhou, Shujian Huang, Lili Mou, Xinyu Dai, Jiajun Chen, and Zhaopeng Tu. 2018. Modeling past and future for neural machine translation. TACL.
2019
624
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6204–6214 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6204 W´etin dey with these comments? Modeling Sociolinguistic Factors Affecting Code-switching Behavior in Nigerian Online Discussions Innocent Ndubuisi-Obi∗ School of Information University of Michigan [email protected] Sayan Ghosh∗ Department of EECS University of Michigan [email protected] David Jurgens School of Information University of Michigan [email protected] Abstract Multilingual individuals code switch between languages as a part of a complex communication process. However, most computational studies have examined only one or a handful of contextual factors predictive of switching. Here, we examine Naij´a-English code switching in a rich contextual environment to understand the social and topical factors eliciting a switch. We introduce a new corpus of 330K articles and accompanying 389K comments labeled for code switching behavior. In modeling whether a comment will switch, we show that topic-driven variation, tribal affiliation, emotional valence, and audience design all play complementary roles in behavior. 1 Introduction Multilingual individuals frequently switch between different languages throughout a discourse, a process known as code switching (Heller, 2010; Gamb¨ack and Das, 2016). This switching process is thought to be driven from a variety of factors, including grammatical constraints (Pfaff, 1979; Poplack, 1980), audience design (Gumperz, 1977; Bell, 1984), or even to evoke a specific perception of the speaker’s identity (Niedzielski, 1999; Schmid, 2001). In common social situations, many of these factors are in play, yet we often do not have an idea of how they interact. Here, we present a large scale study of code switching in Nigeria between English and Naij´a, the widelyspoken Nigerian creole, to quantify which factors predict switching. Computational studies of code switching have largely focused on linguistic aspects of switching (Solorio and Liu, 2008; Adel et al., 2013; Vyas et al., 2014; Hartmann et al., 2018). However, several recent works have begun to examine the contextual factors that influence switching behavior, ∗Authors contributed equally. finding that the topic driving a discussion spurs on language variation (Shoemark et al., 2017; Stewart et al., 2018) and that individuals are sensitive to the scope of their audience when choosing a language (Papalexakis et al., 2014; Pavalanathan and Eisenstein, 2015). Given that the social context is known to be strongly influential on code switching (Gumperz, 1977; Thomason and Kaufman, 2001; Gardner-Chloros and Edwards, 2004), our work builds on these recent advancements to quantify the impact of social and contextual factors influencing code switching. Here, we examine the social and contextual factors predictive of English-Naij´a code switching in online discussions across five major Nigerian newspapers. Our work makes three contributions towards computational sociolinguistics. First, we introduce a massive new corpus of Naij´a and English text that presents code switching behavior in context, using 330K articles and 389K comments from nine years of longitudinal data. Second, we develop a new classifier for distinguishing Naij´a and English, identifying over 24K cases of code switching. 
Third, we show that although topicdriven variation drives much of code switching behavior, tribal affiliation, emotional valence, and audience design play important roles in which language is used. 2 Identifying Naij´a and English Naij´a is an English creole spoken by approximately 80 million people throughout Nigeria, with 3 to 5 million speaking it as a first language (Uchechukwu Ihemere, 2006), leading to many popular services generating content in Naij´a, e.g., BBC Pidgin. While official business is frequently conducted in English, Naij´a is considered the main language of social interaction in Nigeria (Ifeanyi Onyeche, 2004). Although spo6205 Source Articles Tokens Comments The Nation 150,724 80,596,156 6,232 The Guardian 73,894 39,411,837 59,232 The Punch 39,576 19,453,935 152,928 Vanguard 30,279 29,315,637 178,734 Daily Trust 29,019 14,481,549 723 BBC (Naij´a) 6,999 1,114,844 n/a Table 1: Corpus of Nigerian news in English and Naij´a ken widely, no language detection systems support recognizing the creole, in part due to the lack of existing corpora with examples.1 Therefore, to support our ultimate goal of modeling the social factors influencing code switching, we first introduce a new corpus of Naij´a and English texts and then develop a classifier to distinguish them. Data A longitudinal sample of Nigerian news was collected from six major news sources; five of these are in Nigerian Standard English, while one is in Naij´a. Table 1 summarizes the datasets. Articles span from 2010 to present day and all but the BBC Pidgin site allow users to comment on the article, with activity rates ranging significantly. Notably, all sites share a common commenting framework through Disqus, which allows consistent extraction and identification of individuals and observing commenter’s global statistics. As news media, all six datasets use a formal register in their style, which does not necessarily match that of the comments. Therefore, to supplement the news data, two annotators labeled a sample of 2,500 comments across all sites. As Naij´a is less frequent, the sample was bootstrapped to potentially contain more Naij´a by first training our classifier (described next) from the news data and then sampling comments uniformly across its posterior distribution. A held out set of 682 randomly sampled comments (not bootstrapped) was additionally doubly annotated (Krippendorff α=0.511) as a test set, 9.5% of which were Naij´a; note that due to class imbalance, α represents a highlyconservative estimate of agreement. Method and Experimental Setup Our goal is to create a classifier that identifies whether a sentence contains Naij´a. English is significantly more frequent in our news dataset and therefore we downsample English to a 9:1 ratio following the 1Nigerian Standard English is different from Naij´a, with each having its own syntax and separate lexicon—to the point that individuals code switch between them (Akande, 2010). Conf. Example 0.99 See dem people as dem dey steal our money. 0.89 Your brain don sour...Tufiakwa! 0.84 If you no like Kemi go bring Iweala. Table 2: High confidence Naij´a classification examples observed frequency in test data, using 461K English and 51K Naij´a sentences from our news corpora, in addition to 1,887 English and 613 Naij´a manually-annotated comment sentences. As a primarily spoken language, Naij´a has significant orthographic variation in its spelling (Deuber and Hinrichs, 2007). 
Therefore, we follow insights from language detection approaches (Lui and Baldwin, 2012; Jauhiainen et al., 2018; Zhang et al., 2018) and adopt character-based features, which are more robust to such variation. Here, character sequences of length 3 to 7 are used as features with a logistic regression with L2 loss. The resulting model is evaluated using AUC in two ways: using 5-fold cross validation within the training data and the held-out comment test set. Results The classifier was highly accurate at learning to distinguish Naij´a and English in the mostly-news training data, achieving a crossvalidation AUC of 0.996, compared with the random baseline of 0.5. The model performed less accurately on the comments, which have a more informal register, achieving an AUC of 0.724. 3 Social Factors Influencing Switching People code switch in part to signal a part of their identity (Nguyen, 2014) and online discussion provides an intersectional context that combines social and topic features that could each elicit the use of Naij´a (Myers-Scotton, 1995). Here, we outline the social and contextual factors that could affect whether Naij´a is used and identify outline specific research hypotheses to test. Article Topic The content of a discussion has the potential to elicit a response in a particular language, especially if content, language, and identity interrelate. For example, in online discussions of independence referendums, Shoemark et al. (2017) and Stewart et al. (2018) show evidence of topic-based language variation, with additional modulation based on expected audience. These results point to hypothesis H1 that we should observe topic-induced variation in which Naij´a would be more frequent for certain topics. 6206 Social Setting The audience imagined by an author leads to differing code switching behavior, where computational studies have found that messages intended for broader audiences typically use the major language (Papalexakis et al., 2014; Shoemark et al., 2017). Similarly, Nguyen et al. (2015) notes that individuals switch to a minority language during a conversation with other individuals. We operationalize audience design in three ways: (1) the number of prior comments to an article, which signals general its potential audience size, (2) the depth of the comment in the discussion thread, which is often a signal of more interpersonal discussion (Arag´on et al., 2017), and (3) the time of day the comment is made, as an expectation of future audience size. These three factors lead to hypothesis H2a that initial comments will be less likely to be in Naij´a as they would have a wider audience and H2b comments made to a smaller audience are more likely to be made in Naij´a. Tribal affiliation Nigeria is home to individuals identifying with over a hundred different tribal identities which are concentrated in different regions. These tribal affiliations are the strongest aspect of self identity in present day Nigeria (Mustapha, 2006) and have also historically served as sources of conflict due to social stratification along tribal and geographic lines (Akiwowo, 1964; Himmelstrand, 1969). Tribal identity and salience is closely linked with language in Nigeria (Bamiro, 2006), with individuals alternating between English, Naij´a, and local languages to emphasize identity. Language choice is driven in part by these cultural identities (Gudykunst and Schmidt, 1987; Myers-Scotton, 1991; Moreno et al., 1998). We test hypothesis H3 that tribal affiliation will be predictive of codeswitching. 
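Before turning to the remaining social factors, the detector of Section 2 is worth making concrete: the sentence-level classifier reduces to a character n-gram pipeline. The sketch below uses scikit-learn with 3–7 character grams and L2-regularized logistic regression; the tf-idf weighting, regularization strength, and the English placeholder sentence are assumptions, and the commented line indicates the 5-fold cross-validated AUC evaluation used in the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
# from sklearn.model_selection import cross_val_score   # for the AUC evaluation below

sentences = ["See dem people as dem dey steal our money.",   # Naija (from Table 2)
             "The committee will reconvene next week."]      # English placeholder
labels = [1, 0]                                               # 1 = contains Naija

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 7)),
    LogisticRegression(penalty="l2", max_iter=1000),
)
clf.fit(sentences, labels)
print(clf.predict_proba(["Your brain don sour"])[:, 1])       # estimated P(Naija)

# With the full news + comment data:
# cross_val_score(clf, all_sentences, all_labels, cv=5, scoring="roc_auc")
```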
As our dataset does not initially come with tribal affiliation, we follow previous work (Rao et al., 2011; Fink et al., 2012) and train a classifier (described in Appendix A) to automatically label all article authors as Igbo, Hausa-Falani, Yoruba, or other. These three tribes constitute over 71% of the population. Similar to prior work, our method attains an 81.0 F1 on author names, with slightly lower performance (67.7 F1) on the noisier commenter names. Social Status Code switching behavior is connected to perceived notions of status, especially along the perceived status of each language in context (Genesee, 1982). Kim et al. (2014) notes that higher status individuals tend to speak in the majority language. Here, we operationalize status through users’ meta-data from Disqus that provides their number of followers, which acts as a proxy for their reputation on the platforms. In hypothesis H4, individuals with higher status are more likely to use the majority language, English. Emotion The language spoken by a bilingual individual is intimately connected to emotion (Rajagopalan, 2004). Indeed, individuals are more likely to swear in their native language (Dewaele, 2004; Rudra et al., 2016) or code switch when being impolite (Hartmann et al., 2018), underscoring a unconscious connection during emotional moments. Odebunmi (2012) notes that Naij´a is used in the more formal setting of doctor-patient interactions to express emotions. These results suggest hypothesis H5 that in high-emotion settings, individuals are more likely to code-switch into Naij´a. 4 When is Naij´a Used? What sociocultural factors influence a person’s choice of communicating in Naij´a or English? Here, we analyze the comments from data in Table 1 to test the hypotheses from Section 3. Experimental Setup The Naij´a-English classifier was run on all comments made to the 330K articles in the dataset, classifying each sentence within the comment separately. If any one sentence is classified as Naij´a, we consider the comment to have code-switched, noting that we are not making a distinction about what level the switch is occurring, e.g., word, phrase, or sentence (Gamb¨ack and Das, 2016). Ultimately, this process resulted in 365,420 English and 24,232 Naij´acontaining comments. User-based statistics were extracted for each commenter from their Disqus profile. As only 15K individuals use Disqus accounts (4%), we include an additional binary indicator variable for whether the individual has an account. To test for the effect of content, a 20-topic LDA model (Blei et al., 2003) was run on the article text and included as variables (due to collinearity, topic 20 is excluded). We model tribal affiliation in four ways: (i) the commenter, (ii) the article author, 6207 and, where possible, (iii) the affiliation of the parent being replied to, and (iv) whether the parent explicitly mentions a tribe. For the first, three the “Other” category is the reference coding. Emotion is measured using VADER (Hutto and Gilbert, 2014), a lexicon designed for sentiment analysis in social media on a scale of [-1,1]. We incorporate sentiment in four ways: (1-2) the sentiment scores of the post and its parent, using 0 for the parent’s sentiment if the current comment has no parent and (3-4) the absolute value of the sentiment and parent’s sentiment. The latter two variables enable us to separately test whether any emotionality (positive of negative) influence using Naij´a, rather than the particular direction. 
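Two of the regression inputs just described, article topics and comment sentiment, can be derived with off-the-shelf tools. The sketch below runs a 20-topic LDA over article text and VADER over comments; the toy texts, vectorizer settings, and variable names are placeholders, and in the full model these columns sit alongside the audience, tribe, status, and platform predictors described in this section.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

articles = ["The senate debated the new education budget today.",
            "Oil prices fell sharply amid fears over supply."]
comments = ["Na wa o, this one no go better at all!",
            "A measured and sensible decision by the committee."]

# Article topics: 20-topic LDA fit on article text; each article receives a
# vector of topic proportions (one topic is later dropped for collinearity).
doc_term = CountVectorizer(stop_words="english").fit_transform(articles)
lda = LatentDirichletAllocation(n_components=20, random_state=0)
article_topics = lda.fit_transform(doc_term)                  # shape [n_articles, 20]

# Comment sentiment: VADER compound score in [-1, 1], kept both signed and as
# an absolute value so direction and sheer emotionality enter separately.
analyzer = SentimentIntensityAnalyzer()
compound = [analyzer.polarity_scores(c)["compound"] for c in comments]
sentiment_feats = [(s, abs(s)) for s in compound]

print(article_topics.shape, sentiment_feats)
```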
Each platform is included as a fixed effect to control for differences in baseline rates of Naij´a. After testing for collinearity, all features had VIF<3.1 indicating the model’s features are largely independent. As each hypothesis uses different regression variables, this low VIF also indicates that any results are likely not confounded by correlations within the data. Results A logistic regression model is fit using all the features, and the resulting coefficients, shown in Figure 1, provide support for all five hypotheses. However, the effect sizes of each hypotheses variables differed substantially, pointing to the complexity of code switching behavior. The strongest effects of Naij´a usage in the comment section came from the topic of the article, supporting H1. Topics related to business, social issues, and tribal and electoral politics were more likely to see code switching into Naij´a. However, topics related to more general, legislative politics and individual sectors of the economy do not promote Naij´a usage. Further, this trend is seen in the newspapers’ relative rates: being more oriented towards business topics and targeting an educated audience, The Guardian features less code-switching in its comment sections compared to The Punch, a tabloid with a wider audience (Marcus, 1999). In particular, the code switching effect is strongest for topics that relate to societal tensions (e.g., political, socioeconomic, and tribal). While prior work on topic-induced variation (Shoemark et al., 2017; Stewart et al., 2018) identified behaviors for political identitybased content (national referendums on independence), in contrast, here, we also observe that individuals are sensitive to audience for more doTopic: World Politics Topic: National Elections Topic: Election Parties Topic: Education Topic: Health Care Topic: Oil Topic: Agriculture Topic: Banking Topic: Courts and Law Topic: Presidential Topic: General Politics Topic: Senate Topic: Economic Develop. Topic: Tribal Politics 2 Topic: Business Topic: Transportation Topic: Police Topic: Tribal Politics Topic: IP Rights: The Guardian The Nation The Punch Vanguard Is Weekend? Time: Evening Time: Morning Article Author: Hausa Article Author: Igbo Article Author: Yoruba Commenter: Hausa Commenter: Igbo Commenter: Yoruba Parent Commenter: Hausa Parent Commenter: Igbo Parent Commenter: Yoruba Parent Commenter: None Parent mentions tribe? Depth Sequence Num Has Disqus Account? Log(# of Followers) Sentiment Parent’s Sentiment abs(Sentiment) abs(Parent’s Sentiment) Figure 1: Regression results for whether a comment will have Naij´a in it. Error bars show standard error, with *** denoting p<0.001, ** p<0.01, and * p<0.05. Shaded regions group similar variables. Full results are detailed in Appendix Table 7. mestic topics like education and health care. The use of Naij´a did vary by audience, with strongest support for H2b. Comments deeper in a reply thread are more likely to be Naij´a as well as those made in the evening when much of the discussion has taken place and when replies are more likely to be conversational with a particular person, rather than commentary on the article. The total effect is seen by considering both the depth and when “Parent Commenter: None” (i.e., the comment is at the top level). Such initial comments are much more likely to be in English, after which as the discuss turns more conversational, more Naij´a is used. Our results agree with those of Nguyen et al. 
(2015) who found more minority language using in interpersonal communication. 6208 The initial comments to an article (low sequence number) were less likely to be in Naij´a (H2a; p<0.05), though the effect is relatively weaker. Tribal affiliation only had limited association with use of Naij´a (H3), where Igbo commenters are more likely and Yoruba commenters are less likely to use Naij´a. A subsequent model tested for interaction effects between author and parent tribe, which revealed only one significant trend that individuals from all tribes are more likely to reply to Yoruba commenters in Naij´a. As Naij´a is widely spoken throughout the country, compared with Standard English, which is spoken more frequently at higher socioeconomic levels (Faraclas, 2002), our results suggest its use is not to emphasize tribal affiliation. The expectation of H4 was observed: higher status (as measured by number of followers) was as predictive of use of the higher prestige language (English), though the effect is relatively small and the effect is estimated only from those users with Disqus accounts. As a complementary analysis, we performed a second test where we replace the number of followers with the number of total upvotes as a proxy of status, with the rationale that users who generate content that is well-received by the community might aquire a positive reputation. The regression results using total upvotes also found a similar weak effect of higher status users writing more in English (and highly similar coefficients for all other features). However, we note that this second analysis has a potential confound, as an English comment could be read by a wider audience and therefore receive more upvotes simply due to audience size rather than status. As all newspapers in our study are primarily read by a Nigerian national audience who is likely bilingual in English and Naij´a, this potential effect is expected to be small. Nevertheless, given the limitations of both operationalizations of status, we view their similar results as tentative evidence of the effects of status on Naij´a code switching in social discussions (H4). The effects associated with H5 were strongly shown: when expressing any kind of sentiment, authors were much more likely to do it in Naij´a, with a positive effect for using Naij´a in positive sentiment comments. Surprisingly, a parent’s use of sentiment was negatively associated with Naij´a indicating a reaction to emotional language does not elicit a code switch. Given that our model controls for topics that may be more likely to elicit certain emotions, this result suggests that emotion is a driving factor code switching behavior. 5 Conclusion This work provides the first computational examination of code switching behavior in Naij´a through introducing a large corpora of articles in Naij´a and Nigerian Standard English, along with comments to these articles. We develop new methods for distinguishing these two languages and identify over 24K instances of code switching in the comments. Through examining code switching in an intersectional social context, our analysis provides evidence of complementary social factors influencing switching. Notably, we find that topical modulation has the largest effect on switching to Naij´a, with use of emotion surpassing the effect for a few topics. 
However, as no one factor was sufficient for predicting code switching, our results point to the need for holistically modeling the social context when examining factors influence code-switching behavior. All data and code are made available at https://blablablab.si. umich.edu/projects/naija/. Acknowledgments We thank the three reviewers for their helpful comments. References Heike Adel, Ngoc Thang Vu, and Tanja Schultz. 2013. Combination of recurrent neural networks and factored language models for code-switching language modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 206–211. Akinmade T Akande. 2010. Is nigerian pidgin english english? Dialectologia et Geolinguistica, 18(1):3– 22. Akinsola A Akiwowo. 1964. The sociology of nigerian tribalism? Phylon (1960-), 25(2):155–163. Pablo Arag´on, Vicenc¸ G´omez, and Andreaks Kaltenbrunner. 2017. To thread or not to thread: The impact of conversation threading on online discussion. In Eleventh International AAAI Conference on Web and Social Media. Edmund O Bamiro. 2006. The politics of codeswitching: English vs. nigerian languages. World Englishes, 25(1):23–35. 6209 Allan Bell. 1984. Language style as audience design. Language in society, 13(2):145–204. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022. Dagmar Deuber and Lars Hinrichs. 2007. Dynamics of orthographic standardization in jamaican creole and nigerian pidgin. World Englishes, 26(1):22–47. Jean-Marc Dewaele. 2004. Blistering barnacles! what language do multilinguals swear in? Estudios de sociolingstica: Linguas, sociedades e culturas (Issue dedicated to: Bilingualism and emotions), 5(1). Nick Faraclas. 2002. Nigerian pidgin. Routledge. Clayton Fink, Jonathon Kopecky, Nathan Bos, and Max Thomas. 2012. Mapping the twitterverse in the developing world: An analysis of social media use in nigeria. In International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction, pages 164–171. Springer. Bj¨orn Gamb¨ack and Amitava Das. 2016. Comparing the level of code-switching in corpora. In LREC. Penelope Gardner-Chloros and Malcolm Edwards. 2004. Assumptions behind grammatical approaches to code-switching: when the blueprint is a red herring. Transactions of the Philological Society, 102(1):103–129. Fred Genesee. 1982. The social psychological significance of code switching in cross-cultural communication. Journal of language and social psychology, 1(1):1–27. William B Gudykunst and Karen L Schmidt. 1987. Language and ethnic identity: An overview and prologue. Journal of Language and Social Psychology, 6(3-4):157–170. John J Gumperz. 1977. The sociolinguistic significance of conversational code-switching. RELC journal, 8(2):1–34. Silvana Hartmann, Monojit Choudhury, and Kalika Bali. 2018. An integrated representation of linguistic and social functions of code-switching. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC2018). Monica Heller. 2010. Codeswitching: Anthropological and sociolinguistic perspectives, volume 48. Walter de Gruyter. Ulf Himmelstrand. 1969. Tribalism, nationalism, rank-equilibration, and social structure: A theoretical interpretation of some socio-political processes in southern nigeria. Journal of Peace Research, 6(2):81–102. Clayton J Hutto and Eric Gilbert. 2014. 
Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth international AAAI conference on weblogs and social media. Joseph Ifeanyi Onyeche. 2004. As naija pipo dey tok: a preliminary analysis of the role of nigerian pidgin in the nigerian community in sweden. Africa & Asia, 4:48–56. Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lind´en. 2018. Automatic language identification in texts: A survey. arXiv preprint arXiv:1804.08186. Suin Kim, Ingmar Weber, Li Wei, and Alice Oh. 2014. Sociolinguistic analysis of twitter in multilingual societies. In Proceedings of the 25th ACM conference on Hypertext and social media, pages 243–248. ACM. Marco Lui and Timothy Baldwin. 2012. langid. py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 system demonstrations, pages 25–30. Association for Computational Linguistics. Judith Marcus. 1999. Surviving the Twentieth Century: Social Philosophy From the Frankfurt School to the Columbia Faculty Seminars. Transaction Publishers. Luis Moreno, Ana Arriba, and Araceli Serrano. 1998. Multiple identities in decentralized spain: The case of catalonia. Regional & Federal Studies, 8(3):65– 88. Abdul Raufu Mustapha. 2006. Ethnic structure, inequality and governance of the public sector in Nigeria. United Nations Research Institute for Social Development Geneva, Switzerland. Carol Myers-Scotton. 1991. Making ethnicity salient in codeswitching. Language and ethnicity, 2:95– 109. Carol Myers-Scotton. 1995. Social motivations for codeswitching: Evidence from Africa. Oxford University Press. Dong-Phuong Nguyen, Rudolf Berend Trieschnigg, and Leonie Cornips. 2015. Audience and the use of minority languages on twitter. In Proceedings of the Ninth International AAAI Conference on Web and Social Media, ICWSM 2015, pages 666–669. AAAI Press. Eemcs-eprint-26675. Thuy Nguyen. 2014. Code Switching: A sociolinguistic perspective. Anchor Academic Publishing (aap verlag). Nancy Niedzielski. 1999. The effect of social information on the perception of sociolinguistic variables. Journal of language and social psychology, 18(1):62–85. 6210 Akin Odebunmi. 2012. the baby dey chuk chuk: Language and emotions in doctor–client interaction. Pragmatics and Society, 3(1):120–148. Evangelos Papalexakis, Dong Nguyen, and A Seza Do˘gru¨oz. 2014. Predicting code-switching in multilingual communication for immigrant communities. In Proceedings of the first workshop on computational approaches to code switching, pages 42–50. Umashanthi Pavalanathan and Jacob Eisenstein. 2015. Audience-Modulated Variation in Online Social Media. American Speech, 90(2):187–213. Carol W Pfaff. 1979. Constraints on language mixing: intrasentential code-switching and borrowing in spanish/english. Language, pages 291–318. Shana Poplack. 1980. Sometimes ill start a sentence in spanish y termino en espanol: toward a typology of code-switching1. Linguistics, 18(7-8):581–618. Kanavillil Rajagopalan. 2004. Emotion and language politics: The brazilian case. Journal of multilingual and multicultural development, 25(2-3):105–123. Delip Rao, Michael Paul, Clay Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hierarchical bayesian models for latent attribute detection in social media. In Fifth International AAAI Conference on Weblogs and Social Media. Koustav Rudra, Shruti Rijhwani, Rafiya Begum, Monojit Choudhury, Kalika Bali, and Niloy Ganguly. 2016. 
Understanding language preference for expression of opinion and sentiment: What do hindienglish speakers do on twitter? In Proceedings of EMNLP 2016. Association for Computational Linguistics. Carol L Schmid. 2001. The politics of language: Conflict, identity and cultural pluralism in comparative perspective. Oxford University Press. Philippa Shoemark, Debnil Sur, Luke Shrimpton, Iain Murray, and Sharon Goldwater. 2017. Aye or naw, whit dae ye hink? scottish independence and linguistic identity on social media. In Proceedings of EACL, volume 1, pages 1239–1248. Thamar Solorio and Yang Liu. 2008. Learning to predict code-switching points. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 973–981. Association for Computational Linguistics. Ian Stewart, Yuval Pinter, and Jacob Eisenstein. 2018. S´ıo no, qu´e penses? catalonian independence and linguistic identity on social media. In Proceedings of NAACL. Sarah Grey Thomason and Terrence Kaufman. 2001. Language contact. Edinburgh University Press. Kelechukwu Uchechukwu Ihemere. 2006. A basic description and analytic treatment of noun clauses in nigerian pidgin. Nordic Journal of African Studies, 15(3):296–313. Yogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, and Monojit Choudhury. 2014. Pos tagging of english-hindi code-mixed social media content. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 974–979. Yuan Zhang, Jason Riesa, Daniel Gillick, Anton Bakalov, Jason Baldridge, and David Weiss. 2018. A fast, compact, accurate model for language identification of codemixed text. In Proceedings of EMNLP. A A Classifier for Tribal Affiliation As our dataset does not come with tribal affiliations to start with, we first create a classifier to identify affiliations on the basis of name. Due to cultural norms in Nigeria, individual’s names often reveal their tribal affiliation (Rao et al., 2011; Fink et al., 2012), which lends itself to developing computational methods for distinguishing between the affiliations. Here, we develop a classifier for distinguishing between the three largest tribal affiliations: Hausa-Falani (29%), Yoruba (21%), and Igbo (21%), which together account for over 71% of the population thereby providing solid coverage of online users. Data for the tribal affiliation classifier was compiled using online databases and annotated names extracted from a held-out set of article authors and commenter names from the dataset of articles. The final training dataset included 493 Hausa-Falani names, 500 Yoruba names, 351 Igbo names, and 511 “other” names, which encompassed Nigerian names not fulling under the aforementioned three categories as well as non-Nigerian names (e.g., “The Editorial Board” or “flexingbenny”). Table 4 shows examples of names used in training. We note that some tribes’ names have similar cultural origins and therefore our data could result in systematic misclassifications for some tribes; for example, both the Hausa and the Kanuri (an ethnic group comprising roughly 3-4% of the Nigerian population) share names that are Arabic in origin. Our model would likely label all such names as Hausa, though due to population size differences, the impact of such errors are likely to be small. A logistic regression classifier was trained using L2 regularization with character n-grams ranging from 2 to 5 in length. 
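The classifier itself is described only in prose, so the following scikit-learn sketch shows one way such a character n-gram model could be set up. The toy name list and the exact vectorizer settings are illustrative assumptions; only the n-gram range (2 to 5) and the L2-regularized logistic regression come from the description above, and this is not the authors' code.

# Hypothetical sketch of the character n-gram affiliation classifier (not the authors' code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy training examples in the spirit of Table 4
names  = ["Murtala Mohammed", "Olajide Olatundun", "Kelechi Akunna", "John Marks"]
labels = ["hausa", "yoruba", "igbo", "other"]

model = make_pipeline(
    # character n-grams of length 2-5, as stated above; the "char" analyzer is an assumption
    CountVectorizer(analyzer="char", ngram_range=(2, 5), lowercase=True),
    # scikit-learn's logistic regression applies an L2 penalty by default
    LogisticRegression(penalty="l2", max_iter=1000),
)
model.fit(names, labels)
print(model.predict(["Yetunde Arebi"]))

With the real training set of roughly 1,850 annotated names, the same pipeline supports the held-out evaluation reported in Table 3.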
To evaluate perfor6211 Model Article Author Commenter Our method 0.81 0.68 majority class 0.12 0.17 random 0.24 0.21 Table 3: Tribal affiliation classifier Macro F1 Figure 2: Normalized confusion matrix of tribal affiliation classifier mance, two trained annotators labeled 200 held out names of article authors and 200 commenter names; Krippendorff α agreement was 0.516, with disagreements resolved through adjudication. Performance of our model is shown in Table 3. While absolute performance on article authors is on par with similar approaches to classifying tribal affiliation (Rao et al., 2011; Fink et al., 2012), which applied their classifiers to clean name data. Performance on commenter names is slightly lower due noise from lexical variation, misspellings, and web extraction. Table 5 shows examples of names with tribal affiliation in the test data. The confusion matrix of the tribal affiliations, shown in Figure 2, reveals no systematic misclassification bias, suggesting that any errors will only increase variance in the downstream results without biasing findings towards one particular affiliation. Category. Example Hausa Murtala Mohammed, Saheed Ahmad Rufai, abubakar umar, ismail mudashir, mamman usman Yoruba Olajide Olatundun, Yetunde Arebi, Ayo Olododo, Ahmad Olawale, Aderonke Adeyeri Igbo Kelechi Akunna, Davies Iheamnachor, Uche Okeke, Chukwudi Enekwechi, Bartholomew Madukwe Other John Marks, Aaron Frost, Charles Frederick, Bush Jenkins, Victor Jonah Table 4: Tribal affiliation training data examples Category. Example Hausa Muhammad Hassanto, AK Mohammed, Suleiman Alatise, Alalere Tajudeen, Zahraddeen Yakub Yoruba Olatunji Omirin, Adenike Grace, Anthony Akinola, Tayo Aiyetoro, Vincent Ikuoola Igbo Ochuko Akuoph, Nwanchor Friday, John Megbechi, Adache Ene, Cynthia Onana Other Leon Willems, Michael Johnbull, Pamela John, Roses Moses, Tim Daiss Table 5: Tribal affiliation test data examples B Additional Naij´a Classification Examples Table 8 shows a sample of instances classified by the final trained language-distinguishing model. Instances are sampled uniformly across the posterior to show the variety of confidence scores. C Additional Regression Details Table 7 shows the full regression coefficients for the model depicted in Figure 1. We additionally show the most probable words for each topic in Table 6. Note that the final topic (“Security”) was intentionally omitted from the regression to remove the effects of collinearity between topic probabilities. 6212 Category. 
Example World Politics africa president african countries world trump united south country international National Elections election inec elections electoral commission anambra party governor political national Election Parties party pdp apc governor national election political chairman congress candidate Education university school education students schools nigeria teachers universities lagos prof Health Care god health children women church life family medical hospital child Oil oil power gas petroleum nigeria company electricity nnpc government crude Agriculture nigeria food farmers products production agriculture rice government country agricultural Banking cent bank billion market cbn nigeria exchange million banks capital Courts and Law court justice efcc law accused judge appeal federal trial judgment Presidential president nigeria buhari country nigerians jonathan government national political nation General Politics people time nigeria don political country money nigerians power government Senate national senate president assembly government house committee budget federal public Economic Politics nigeria government development country economic sector economy people national support Tribal Politics 2 governor delta government rivers people edo niger bayelsa local chief Business usiness bank customers nigeria company services mobile technology service brand Transportation road lagos government roads federal airport project air aviation safety Police police arrested incident command told suspects security officer lagos killed Tribal Politics governor lagos ekiti government people osun fayose ondo chief ogun IP Rights punch government workers rights email written protected website published broadcast Security security government boko haram military people army kaduna nigeria nigerian Table 6: Key words corresponding to topic 6213 coef std err z P>|z| [0.025 0.975] Intercept -3.5180 0.208 -16.922 0.000 -3.925 -3.111 The Guardian 0.1704 0.198 0.862 0.389 -0.217 0.558 The Nation 0.2553 0.205 1.243 0.214 -0.147 0.658 The Punch 0.5449 0.198 2.754 0.006 0.157 0.933 Vanguard 0.4649 0.197 2.359 0.018 0.079 0.851 Is Weekend? -0.0398 0.021 -1.926 0.054 -0.080 0.001 Time: Evening 0.1422 0.015 9.558 0.000 0.113 0.171 Time: Morning -0.0193 0.019 -1.031 0.302 -0.056 0.017 Article Author: Hausa -0.0077 0.027 -0.285 0.776 -0.060 0.045 Article Author: Igbo -0.0452 0.022 -2.080 0.038 -0.088 -0.003 Article Author: Yoruba -0.0098 0.028 -0.347 0.728 -0.065 0.045 Commenter: Hausa -0.0136 0.020 -0.682 0.496 -0.053 0.026 Commenter: Igbo 0.1229 0.021 5.969 0.000 0.083 0.163 Commenter: Yoruba -0.0547 0.019 -2.826 0.005 -0.093 -0.017 Parent Commenter: Hausa -0.0167 0.028 -0.602 0.547 -0.071 0.038 Parent Commenter: Igbo 0.0382 0.030 1.291 0.197 -0.020 0.096 Parent Commenter: Yoruba 0.0147 0.026 0.557 0.577 -0.037 0.066 No parent (top-level comment) -0.1432 0.023 -6.097 0.000 -0.189 -0.097 Parent mentions tribe? 0.0200 0.039 0.511 0.609 -0.057 0.097 Comment Depth 0.0290 0.004 6.781 0.000 0.021 0.037 Sequence Number -0.0013 0.001 -2.257 0.024 -0.002 -0.000 Has Disqus Account? 
-0.1858 0.039 -4.761 0.000 -0.262 -0.109 log(Number of Followers) 0.0217 0.005 4.171 0.000 0.011 0.032 Sentiment 0.0400 0.012 3.434 0.001 0.017 0.063 Parent’s Sentiment -0.0224 0.016 -1.372 0.170 -0.054 0.010 abs(Sentiment) 0.2474 0.021 11.713 0.000 0.206 0.289 abs(Parent’s sentiment) -0.1112 0.029 -3.807 0.000 -0.168 -0.054 Topic: World Politics 0.6148 0.085 7.240 0.000 0.448 0.781 Topic: National Elections 0.1893 0.084 2.252 0.024 0.025 0.354 Topic: Election Parties 0.2953 0.068 4.344 0.000 0.162 0.428 Topic: Education 0.4549 0.129 3.530 0.000 0.202 0.708 Topic: Health Care 0.3294 0.086 3.830 0.000 0.161 0.498 Topic: Oil 0.1399 0.093 1.511 0.131 -0.042 0.321 Topic: Agriculture 0.2604 0.130 2.001 0.045 0.005 0.515 Topic: Banking 0.0618 0.089 0.695 0.487 -0.112 0.236 Topic: Courts and Law 0.3587 0.085 4.216 0.000 0.192 0.525 Topic: Presidential 0.2792 0.073 3.823 0.000 0.136 0.422 Topic: General Politics 0.0785 0.081 0.973 0.331 -0.080 0.237 Topic: Senate 0.1208 0.074 1.641 0.101 -0.023 0.265 Topic: Economic Develop. 0.0230 0.099 0.233 0.816 -0.170 0.216 Topic: Tribal Politics 2 0.5192 0.102 5.113 0.000 0.320 0.718 Topic: Business 0.7199 0.143 5.044 0.000 0.440 1.000 Topic: Transportation 0.1304 0.106 1.235 0.217 -0.077 0.337 Topic: Police 0.4538 0.071 6.362 0.000 0.314 0.594 Topic: Tribal Politics 0.2230 0.099 2.242 0.025 0.028 0.418 Topic: IP Rights -0.0828 0.112 -0.738 0.461 -0.303 0.137 Table 7: Logistic regression results for predicting the use of Naij´a in a comment (cf. Figure 1 in Main Paper) 6214 p(Naij´a) Sentence 0.966469 Me, I don taya for awa piple oo! 0.962135 You fit correct o because na only Igbos be the major tribe for Nigeria wey no get tribal fellow as citizens of neighboring West African countries. 0.927030 I don’t blame you. 0.906231 APC na Edo, Edo na APC. 0.863062 Abeg make I go collect small brandy from terrydgreat. 0.824909 Watch for August 14 0.812487 Guess your bet don cast by now. 0.798906 Abeg make we hear word. 0.798014 I tire for you! 0.793962 No spillage go affect my life. 0.792577 Make I come, joor! 0.783273 If e break or e crack, all na spoil. 0.782503 London. 0.752294 But you be ”entourage” abi ”High commissioner” dat one na another chapter. 0.727051 Abeg, Make we hia word. 0.696996 #NO2Buhari 0.691851 aspirant for mouth. 0.690936 im done. 0.670130 The guy no get money, make him no get something to press after the whole stress again? 0.659617 So please don’t refer me to it. 0.649868 Uba no case. 0.637665 Is it by land mass...abegii na population. 0.620910 I am done with you for ever! 0.613897 I weep for my country 0.606911 When am supposed to be charged 100Naira for bus fare, am charged 150Naira because of some party men. 0.530184 Happy New Year !! 0.530156 Like father like son. 0.497918 How come Saraki suddenly forgot Ekwe?? 0.489173 Thanks dear. 0.421127 A year from now? 0.404989 I got N4.6b from Dasuki for spiritual purposes - Bafarawa 6. 0.373616 DG, Immigration ...... Northern Muslim Hausa-Fulani 18. 0.356982 Solomon Grundy, Born on a Monday, Christened on Tuesday, Married on Wednesday, Took ill on Thursday, Grew worse on Friday, Died on Saturday, Buried on Sunday. 0.316233 India to come and help run government refinery?. 0.302539 Good morning in this hot afternoon Dr.Buhari, you just behave like say you don’t understand what you are doing? 0.293588 WHY CAN’T ONE NIGERIA DIVIDE - OSINBAJO ? 0.256922 He is crawling inside a 50 bedroom mansion on top a hill at minna. 0.232036 Lolz. 0.154062 Shehu Sani, may God bless you. 
0.143232 Well stated .I don’t even know as much , as this of him. 0.132101 Vanguard please can you do a research on how much each zone or state contribute yearly to federal government coffer and how much each zone or state get from federal government coffer yearly? 0.103502 But madam your contradiction defeats your standpoint. 0.064607 Some Igbo then came out to claim NRI Kingdom. 0.049916 Now my Thursday is wasted. 0.016798 ”... when have we started practicing state...government.” 0.013228 They had been issued with bullets but I was unarmed. 0.005393 Yom Kippur war even is mild, the US and Taliban war in Afghanistan is better suited. 0.000042 A lot of the numerous Federal Ministries and agencies should be scrapped, and the funds given to the states to fund what is important to them. Table 8: A random sample of comment sentences and their classification probabilities
2019
625
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6215–6224 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6215 Accelerating Sparse Matrix Operations in Neural Networks on Graphics Processing Units Arturo Argueta and David Chiang Department of Computer Science and Engineering University of Notre Dame {aargueta,dchiang}@nd.edu Abstract Graphics Processing Units (GPUs) are commonly used to train and evaluate neural networks efficiently. While previous work in deep learning has focused on accelerating operations on dense matrices/tensors on GPUs, efforts have concentrated on operations involving sparse data structures. Operations using sparse structures are common in natural language models at the input and output layers, because these models operate on sequences over discrete alphabets. We present two new GPU algorithms: one at the input layer, for multiplying a matrix by a few-hot vector (generalizing the more common operation of multiplication by a one-hot vector) and one at the output layer, for a fused softmax and top-N selection (commonly used in beam search). Our methods achieve speedups over state-of-theart parallel GPU baselines of up to 7× and 50×, respectively. We also illustrate how our methods scale on different GPU architectures. 1 Introduction The speedups introduced by parallel architectures inspired the development of accelerators tailored towards specialized functions. Graphics Processing Units (GPUs) are now a standard platform for deep learning. GPUs provide faster model training and inference times compared to serial processors, because they can parallelize the linear algebra operations used so heavily in neural networks (Raina et al., 2009). Currently, major open source toolkits (Abadi et al., 2016) provide additional layers of abstraction to support one or more parallel GPU architectures. The seamless compatibility with multiple GPUs allows researchers to train a single model on multiple hardware platforms with no significant changes to their code base and no specialized knowledge about the targeted architectures. The disadvantage of hardware agnostic APIs is the lack of optimizations for a set of task-specific functions. Adapting parallel neural operations to a specific hardware platform is required to obtain optimal speed. Since matrix operations are used heavily in deep learning, much research has been done on optimizing them on GPUs (Chetlur et al., 2014; Gupta et al., 2015). Recently, some efforts have been made to other kinds of operations: serial operations running on the GPU (Povey et al., 2016), operations not involving matrix multiplications (Bogoychev et al., 2018), and models using sparse structures (Zhang et al., 2016). In this paper, we focus on sparse operations running exclusively on the GPU architecture. Much recent work in High Performance Computing (HPC) and Natural Language Processing (NLP) focuses on an expensive step of a model or models and optimizes it for a specific architecture. The lookup operation used in the input layer and the softmax function used in the output are two examples seen in machine translation, language modeling, and other tasks. Previous work has accelerated the softmax step by skipping it entirely (Devlin et al., 2014), or approximating it (Shim et al., 2017; Grave et al., 2017). Another strategy is to fuse multiple tasks into a single step. This approach increases the room for parallelism. 
Recent efforts have fused the softmax and top-N operations to accelerate beam search on the GPU using similar approaches (Hoang et al., 2018; Milakov and Gimelshein, 2018). Our approach differs from former methods in the following aspects: We deliver a novel method tailored towards scenarios seen in Neural Machine Translation (NMT), we introduce a new GPU-specific method to obtain the top-N elements from a list of hypotheses using a different sorting mechanism, and we introduce a sparse lookup method 6216 for GPUs. NMT uses beam search during inference to limit the full set of potential output translations explored during decoding (Cho et al., 2014; Graves, 2012). This algorithm is widely used to obtain state-of-the-art results during test time. At each decoding time-step t, the top-N hypotheses are chosen for further expansion and the rest are discarded. The top-N selection part of the search has been accelerated using hashing methods to avoid a full sort (Shi et al., 2018; Pagh and Rodler, 2004). The aim of this paper is to both combine softmax and top-N operations seen in the last layer of a neural network and optimize the top-N selection operation used by several NMT models. Our work uses ideas from previous work to accelerate two different operations. We focus on operations that manipulate sparse structures (Saad, 1990). By sparse, we mean operations that only require a small fraction of the elements in a tensor to output the correct result. We propose two different optimizations for sparse scenarios in deep learning: The first operation involves the first layer of a neural network. We accelerate the first matrix multiplication using batched sparse vectors as input. The second operation is the computation of the softmax used for beam search. We combine the softmax and the top-N selection into one operation obtaining a speedup over a parallel stateof-the-art baseline. We show that our fused topN selection and sparse lookups achieve speedups of 7× and 50× relative to other parallel NVIDIA baselines. 2 Graphics Processing Units GPUs are widely used to accelerate a variety of non-neural tasks such as search (Garcia et al., 2008), parsing (Hall et al., 2014), and sorting (Sintorn and Assarsson, 2008). Applications adapted to the GPU spot different architectural properties of the graphics card to obtain the best performance. This section provides a short overview of the architectural features targeted for this work. 2.1 CUDA execution model CPUs call special functions, also called kernels, to execute a set of instructions in parallel using multiple threads on the GPU. Kernels can be configured to create and execute an arbitrary number of threads. The threads in a kernel are grouped into different thread blocks (also called cooperative thread arrays). Threads in the same block can collaborate by sharing the same memory cache or similar operations. The maximum number of threads per block and number of blocks varies across GPU architectures. All threads running in the same block are assigned to a single Streaming Multiprocessor (SM) on the GPU. A SM contains the CUDA cores that execute the instructions for each thread in a single block. The number of CUDA cores per SM varies depending on the architecture. For example, Volta V100 contain 64 cores per SM, while GeForce GTX 1080s contain 128 cores per SM. Multiple thread blocks can be assigned to a SM if the number of blocks in the grid is larger than the number of physical SMs. 
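To make the grid, block, and thread terminology concrete, the sketch below launches a trivial one-dimensional kernel through Numba's CUDA bindings. It is purely illustrative: it requires a CUDA-capable GPU and Numba, and it is not the implementation evaluated in this paper.

# Minimal 1-D kernel launch (illustrative only; needs a CUDA GPU and Numba).
import numpy as np
from numba import cuda

@cuda.jit
def scale(x, alpha):
    i = cuda.grid(1)                 # global index = blockIdx.x * blockDim.x + threadIdx.x
    if i < x.shape[0]:               # guard threads that fall past the end of the array
        x[i] *= alpha

n = 10240
x = cuda.to_device(np.ones(n, dtype=np.float32))

threads_per_block = 256                                    # a multiple of the 32-thread warp
blocks = (n + threads_per_block - 1) // threads_per_block  # enough blocks to cover all n elements
scale[blocks, threads_per_block](x, np.float32(2.0))       # blocks are distributed over the SMs

print(x.copy_to_host()[:3])

Each of the 40 blocks here contains 256 threads, that is eight warps, which the warp schedulers of the assigned SM execute 32 threads at a time.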
Execution time will increase when more than one block is assigned to all SMs on the device (assuming all blocks run the same instruction). Regardless of the number of threads per block, all SMs can only run a total of 32 threads, called a warp, asynchronously at a time. Warp schedulers select in a round-robin fashion a warp from an assigned block to execute in parallel. The SMs finish execution when all blocks assigned to them complete their tasks. Each thread running on the SM can access multiple levels of memory on the graphics card, and an efficient use of all levels significantly improves the overall execution time on the device. 2.2 Memory GPUs contain different levels of memory designed to read and write data stored on the device. There are advantages and disadvantages associated with each memory type. The fastest memory on the device is the register memory. The amount of registers available per SM is limited and the access scope is limited to a single thread during execution. This memory is useful to hold a small amount of variables used at the thread-level. The next type of memory is shared memory. Shared memory is accessible by all threads running on the same block. While slower than registers, shared memory provides fast read and write access times. Shared memory also allows fast operations at the block level such as reductions, usermanaged caches, etc. The amount of shared memory per SM can range from 49KB (K40) up to 96KB (V100). The last (and slowest) type of memory is the global memory. Global memory latency is 100x slower than shared memory. The 6217 main use of this memory is to store all the data copied from and to the host CPU. The amount of global memory varies depending on the GPU model (e.g. 12GB on the K40 and 16GB on the V100). An efficient use of the memory hierarchy provides the best performance. A parallel application must be designed to minimize the total amount of calls to global memory while maximizing the use of registers and shared memory. An exclusive use of main memory will produce the worst execution times. Our methods focus on the efficient use of shared and register memory for scenarios where the data is small enough to fit. 2.3 GPU Sorting Currently, state-of-the-art methods use a treebased reduction operation (Harris, 2005) to sort the list on the GPU and obtain the top elements. Reductions are most efficient when the input needs to be completely sorted, yet faster algorithms can be used if only a portion of the sorted output is needed. The top-N operation can be accelerated with an improved sorting algorithm for the beam search task on the GPU. Beam search only requires the top-N entries for each mini-batch, and the entries do not need to be sorted in a specific order (ascending or descending). Storing the irrelevant elements for beam search back into global memory is not required for this task and should be avoided. A clear optimization is to obtain the top elements in each minibatch using a faster sorting algorithm. Distinct sorting algorithms can be used to obtain the top elements from a set of candidates. Previous work introduced custom sorting algorithms for specific tasks using multi-core CPU (Tridgell, 1999) and GPU setups (Satish et al., 2009; Govindaraju et al., 2006). 3 Background In this section, we describe two sparse operations commonly used in deep learning, especially for NLP: at the input layer, multiplication by a sparse matrix, and at the output layer, softmax and selection of the top-N elements. 
3.1 N-hot lookup In models whose inputs are words, the input layer typically looks up a learned word embedding for each word. Equivalently, it represents each word as a one-hot vector (whose dimensionality is equal to the vocabulary size, K) and multiplies it (as a row vector) by a K × M matrix B whose rows are word embeddings. Then, a minibatch of L words can be represented as an L × K matrix A whose rows are one-hot vectors, so that the product C = AB is a matrix whose rows are the embeddings of the words in the minibatch. Deep learning toolkits (Neubig et al., 2017; Jia et al., 2014) do not perform a full matrix multiplication; typically, they implement a specialized operation to do this. A problem arises, however, when the input vector is not a one-hot vector, but an "N-hot" vector. For example, we might use additional dimensions of the vector to represent subword or part-of-speech tag information (Niehues et al., 2011; Collobert et al., 2011; Chiu and Nichols, 2016). In this case, it would be appropriate to use a sparse matrix library like cuSPARSE, but we show below that we can do better. 3.2 Softmax The softmax function (Equation 1) is widely used in deep learning to output a categorical probability distribution:

\mathrm{softmax}(z)_j = \frac{\exp(z_j)}{\sum_{j'} \exp(z_{j'})}   (1)

For better numerical stability, all deep learning toolkits actually compute the softmax as follows:

\mathrm{softmax}(z)_j = \frac{\exp(z_j - \max(z))}{\sum_{j'} \exp(z_{j'} - \max(z))}   (2)

This alternative requires different optimizations on the GPU given the max operation. Recent work (Milakov and Gimelshein, 2018) explores different techniques to calculate this safe softmax version efficiently. 3.3 Beam search and top-N selection Some applications in deep learning require additional computations after the softmax function. During NMT decoding, the top-N probabilities from softmax(z) are chosen at every time-step t and used as an input to the next search step t+1. It is common practice to obtain the top-N elements after the softmax operation. Naively, we can do this by sorting the probabilities and then taking the first N elements, as shown in Algorithm 1. This operation is sparse in nature, given that several hypotheses are discarded during search. The retrieval of non-zero elements in a sparse input parallels the top-N scenario. (Beam search also requires that we keep track of the original column indices (i.e., the word IDs) of the selected columns; this is not shown in Algorithm 1 for simplicity.) In NMT, the top-N operation consumes a significant fraction of time during decoding. Hoang et al. (2018) find that the softmax operation takes 5% of total decoding time, whereas finding the top-N elements can take up to 36.8%. So there is a large potential benefit from speeding up this step.

Algorithm 1 Serial minibatched softmax and top-N algorithm.
Input: C ∈ R^{L×K}   Output: D ∈ R^{L×N}
 1: for ℓ ← 1, ..., L do
 2:     d_ℓ ← 0
 3: for k ← 1, ..., K do
 4:     for ℓ ← 1, ..., L do
 5:         d_ℓ += exp(C[ℓ][k])
 6: for k ← 1, ..., K do            ▷ softmax
 7:     for ℓ ← 1, ..., L do
 8:         C[ℓ][k] ← exp(C[ℓ][k]) / d_ℓ
 9: for ℓ ← 1, ..., L do            ▷ top-N
10:     c ← sort(C[ℓ])
11:     D[ℓ] ← c[1 : N]
12: return D

4 Method In this section, we present our algorithms for N-hot lookup (§4.1) and fused softmax and top-N selection (§4.2). 4.1 Sparse input lookups Our sparse N-hot lookup method, shown in Algorithm 2, multiplies a sparse matrix A in Compressed Sparse Row (CSR) format by a row-major matrix B to yield a dense matrix C. CSR is widely used to store and process sparse matrices.
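Before walking through the format, it may help to see it concretely: scipy's CSR implementation exposes the same three arrays, with indptr, indices, and data playing the roles of the Ar, Ac, and Av vectors described below. The snippet is a plain illustration and not the paper's GPU code.

# The sparse matrix of Figure 1 in CSR form (illustration only).
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[1, 0, 2, 0, 0],
              [0, 3, 0, 4, 0],
              [0, 5, 6, 0, 0],
              [0, 0, 0, 7, 8]])
A_csr = csr_matrix(A)

print(A_csr.indptr)    # row pointers    (Ar): [0 2 4 6 8]
print(A_csr.indices)   # column indices  (Ac): [0 2 1 3 1 2 3 4]
print(A_csr.data)      # non-zero values (Av): [1 2 3 4 5 6 7 8]

# non-zeros in the row with index 1 ([0 3 0 4 0]): difference of adjacent row pointers
print(A_csr.indptr[2] - A_csr.indptr[1])   # -> 2

# the operation Algorithm 2 parallelizes on the GPU: CSR matrix times dense matrix
B = np.random.rand(5, 3).astype(np.float32)
C = A_csr @ B          # dense result of shape (4, 3)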
This format stores all non-zero elements of a sparse matrix A contiguously into a new structure Av. Two additional vectors Ar and Ac are required to access the values in Av. An example of the CSR format is illustrated in Figure 1. Ar is first used to access the columns storing the non-zero elements in row ℓ. The number of non-zero elements for a row ℓcan be computed by accessing Ar[ℓ] and calculating its offset with the next element 1 0 2 0 0 0 3 0 4 0 0 5 6 0 0 0 0 0 7 8 (a) Ar 0 2 4 6 8 Ac 0 2 1 3 1 2 3 4 AV 1 2 3 4 5 6 7 8 (b) Figure 1: Example CSR representation for a sparse matrix (a). The CSR representation (b) relies on three lists R, C, and V to store a sparse matrix. R represents the rows, C the columns, and V stores the non-zero values. Ar[ℓ+ 1]. Ar[ℓ] is also used to index the lists containing the columns (Ac) and corresponding nonzero values (Av) in row A[ℓ]. For example, to calculate the number of non-zero values in the second row of Figure 1, The offset Ar[3] −Ar[2] = 2 is calculated. Finally, Ar[2] points to positions 4 and 5 on Ac and Av storing the columns and non-zero values for that specific row. Our method computes the matrix multiplication by processing the elements of the output matrix C in parallel. For our experiments, we process 32 (warp size) rows and columns in parallel for the input matrices. We cannot use a stride size larger than 32, since certain GPU architectures do not allow a 2 dimensional block larger than 32 × 32 (or a block containing more than 1024 threads total). Although this method is fairly straightforward, we will see below that it outperforms other methods when N is small, as we expect it to be. 4.2 Fused softmax and top-N The beam size, or top-N, used in NMT is usually small, with the most commonly used values ranging from 1 to 75 (Sutskever et al., 2014; Koehn and Knowles, 2017). Because of this, we base our implementation on insertion sort, which is O(K2), where K is the number of elements to be sorted, but is reasonably efficient for small arrays. It can be easily modified into a top-N selection algorithm that runs in O(KN) time (Algorithm 3). Unlike in6219 Algorithm 2 Sparse matrix multiplication using the CSR format. Input Ar ∈RL, Ac ∈RLN, Av ∈RLN, B ∈RK×M Output C ∈RL×M 1: parfor m ←1, . . . , M do ▷Block level 2: parfor ℓ←1, . . . , L do ▷Block level 3: x ←0 4: kstart ←Ar[m] 5: kend ←Ar[m + 1] 6: for k ←kstart, . . . , kend −1 do 7: z ←Ac[k] 8: y ←Av[k] 9: x += y × B[z][ℓ] 10: C[ℓ][m] ←x 11: return C Algorithm 3 Top-N selection based on insertion sort. Input array C ∈RK Output array D ∈RN 1: for n ←1, . . . , N do 2: D[n] ←−∞ 3: for k ←1, . . . K do 4: for n ←1, . . . , N do 5: if C[k] > D[n] then 6: swap D[n] and C[k] sertion sort, it maintains separate buffers for the sorted portion (D) and the unsorted portion (C); it also performs an insertion by repeating swapping instead of shifting. The key to our method is that we can parallelize the loop over k (line 3) while maintaining correctness, as long as the comparison and swap can be done atomically. To see this, note that no swap can ever decrease the value of one of the D[n]. Furthermore, because for each k, we compare C[k] with every element of D, it must be the case that after looping over all n (line 4), we have C[k] ≤D[n] for all n. Therefore, when the algorithm finishes, D contains the top-N values. Fusing this algorithm with the softmax algorithm, we obtain Algorithm 4. 
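The selection step is small enough to prototype serially; the following Python sketch mirrors the swap-based logic of Algorithm 3 (on the GPU the outer loop is parallelized and the comparison-and-swap is made atomic, which this sketch does not attempt).

# Serial sketch of the swap-based top-N selection of Algorithm 3 (not the GPU kernel).
def top_n_by_swaps(scores, n):
    c = list(scores)              # unsorted candidate buffer, consumed by swapping
    d = [float("-inf")] * n       # buffer holding the current top-n values
    for k in range(len(c)):
        for j in range(n):
            if c[k] > d[j]:
                d[j], c[k] = c[k], d[j]   # swap instead of shift: the displaced value keeps competing
    return d

print(top_n_by_swaps([0.1, 0.9, 0.3, 0.7, 0.5], 2))   # -> [0.9, 0.7]

Algorithm 4 then fuses this selection with the softmax computation.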
It takes an input array C containing a minibatch of logits and returns an array D with the top-N probabilities and an array E with their original indices. The comparisons in our method are carried out by the CUDA atomicMax operation (line 12). This function reads a value D′[ℓ][n] and computes the maxAlgorithm 4 Parallel fused batched softmax, and top-N algorithm. The comment “kernel-level” means a loop over blocks, and the comment “block-level” means a loop over threads in a block. Input C ∈RL×K Output D ∈RL×N, E ∈{1, . . . , K}L×N 1: parfor ℓ←1, . . . , L do ▷kernel-level 2: dℓ←0 3: eℓ←−∞ 4: for n ←1, . . . N do 5: D′[ℓ][n] ←pack(−∞, 0) 6: parfor ℓ←1, . . . , L do ▷kernel-level 7: parfor k ←1, . . . , K do ▷block-level 8: x ←C[ℓ][k] 9: y ←pack(x, k) 10: eℓ←atomicMax(C[ℓ][k], eℓ) 11: for n ←1, . . . , N do 12: c′ ←atomicMax(D′[ℓ][n], y) 13: if c′ < y then 14: y ←c′ 15: syncthreads() 16: dℓ+= exp(C[ℓ][k] −eℓ) 17: syncthreads() 18: for n ←1, . . . , N do 19: x, i ←unpack(D′[ℓ][n]) 20: D[ℓ][n] ←exp(x)/dℓ 21: E[ℓ][n] ←i 22: return D imum between it and a second value y. The larger is stored back into D′[ℓ][n], and the original value of D′[ℓ][n] is returned as c′. This operation is performed as one atomic transaction. The following two lines (13-14) set y to the smaller of the two values. Our algorithm recovers the original column indices (m) with a simple extension following Argueta and Chiang (2017). We pack each probability as well as its original column index into a single 64-bit integer before the sorting step (line 5), with the probability in the upper 32 bits and the column index in the lower 32 bits. This representation preserves the ordering of probabilities, so a single atomicMax operation on the packed representation will atomically update both the probability and the index. The final aspect to consider is the configuration of the kernel calls from the host CPU. The grid layout must be configured correctly to use this method. The top-N routine relies on specific ker6220 (a) Tesla V100 Method Number of dense values (N) 1 2 3 4 5 10 50 100 ours 0.02 0.02 0.02 0.02 0.02 0.03 0.06 0.11 cuBLAS 0.16 0.16 0.16 0.16 0.16 0.16 0.15 0.15 cuSPARSE 0.15 0.16 0.16 0.16 0.16 0.17 0.16 0.19 (b) TITAN X Method Number of dense values (N) 1 2 3 4 5 10 50 100 ours 0.03 0.04 0.05 0.07 0.08 0.14 0.49 0.90 cuBLAS 1.79 1.63 1.68 1.68 1.70 1.57 1.27 0.86 cuSPARSE 0.12 0.12 0.12 0.12 0.13 0.13 0.16 0.21 Table 1: Performance comparison for the N-hot lookups against the NVIDIA baseline using dimensions L = 100, K = 10240, N = 512. Each time (in ms) is an average over ten runs. Fastest times are in bold. nel and memory configurations to obtain the best performance. The number of kernel blocks must be equal to the number of elements in the minibatch. This means that batch sizes smaller than or equal to the number of SMs on the GPU will run more efficiently given only one block, or less, will run on all SMs in parallel. The overall performance will be affected if multiple blocks are assigned to all SMs. The number of SMs on the GPU varies depending on the architecture. For example, the Tesla V100 GPU contains 80 SMs, while the Pascal TITAN X contains 30 SMs. This means that our method will perform better on newer GPU architectures with a large amount of SMs. The number of threads in the block is an additional aspect to consider for our method. The block size used for our experiments is fixed to 256 for all the experiments. 
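A CPU reference for the packing trick can be written in a few lines of NumPy: the exponentiated (and therefore non-negative) logit goes into the upper 32 bits of a 64-bit key and its column index into the lower 32 bits, so a single max-style comparison orders both together. This is only a functional sketch that relies on a full sort rather than the atomicMax-based selection, and it is not the CUDA kernel described above.

# NumPy reference for fused softmax + top-N with 64-bit packing (functional sketch only).
import numpy as np

def fused_softmax_topn(C, n):
    """C: (L, K) float32 logits; returns top-n probabilities and their column indices."""
    C = np.asarray(C, dtype=np.float32)
    m = C.max(axis=1, keepdims=True)        # safe-softmax shift by the row maximum
    e = np.exp(C - m)                       # positive values, so float bit order matches value order
    denom = e.sum(axis=1, keepdims=True)

    # pack: upper 32 bits = bits of exp(logit - max), lower 32 bits = column index
    keys = (e.view(np.uint32).astype(np.uint64) << np.uint64(32)) \
           | np.arange(C.shape[1], dtype=np.uint64)

    top = np.sort(keys, axis=1)[:, -n:][:, ::-1]          # n largest keys per row
    cols  = (top & np.uint64(0xFFFFFFFF)).astype(np.int64)
    probs = (top >> np.uint64(32)).astype(np.uint32).view(np.float32) / denom
    return probs, cols

logits = np.random.randn(4, 10).astype(np.float32)        # L = 4, K = 10
probs, ids = fused_softmax_topn(logits, n=3)

On the GPU, the same keys are compared with atomicMax inside the kernel instead of being fully sorted, and the launch additionally has to respect the 256-thread block size mentioned above.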
This number can be adapted if the expected number of hypotheses to sort is smaller than 256 (the number of threads must be divisible by 32). The amount of shared memory allocated per block depends on the size of N. The auxiliary memory used to store the topN elements must fit in shared memory to obtain the best performance. A large N will use a combination of shared and global memory affecting the overall execution of our method. 5 Experiments We run experiments on two different GPU configurations. The first setup is a 16 core Intel(R) Xeon(R) Silver 4110 CPU connected to a Tesla V100 CPU, and the second set is a 16-core Intel(R) Xeon(R) CPU E5-2630 connected to a GeForce GTX TITAN X. The dense matrices we use are randomly generated with different floating point values. We assume the dense representations contain no values equal to zero. The sparse minibatches used for the top-N experiments are randomly generated to contain a specific amount of non-zero values per element. The indices for all non-zero values are selected at random. 5.1 Sparse N-hot lookups For the N-hot lookup task, we compared against the cuBLAS1 and cuSPARSE2 parallel APIs from NVIDIA. Both interfaces provide methods to compute mathematical operations in parallel on the GPU. Table 1 shows the performance of our method against the two NVIDIA APIs for sparse and dense matrix multiplication using different architectures and levels of sparsity. All speedups decrease as the input becomes less sparse. The cuSPARSE baseline performs on par with the dense cuBLAS version on the V100 architecture when the number of non-zero elements per batch is larger than 1. The cuSPARSE baseline performs better than its dense counterpart on the TITAN X architecture and worse on the V100. An explanation behind this is the type of sparsity patterns cuSPARSE handles and the different amount of SMs and memory types on both architectures. 1https://developer.nvidia.com/cublas 2https://developer.nvidia.com/cusparse 6221 a) Tesla V100 Method L Number of top-N elements 10 20 30 40 50 100 200 300 400 Ours 1 0.07 0.11 0.15 0.19 0.21 0.57 1.54 2.85 4.49 Milakov et al. 1 3.56 3.43 3.44 3.46 3.44 3.44 3.44 3.44 3.44 Speedup 50.85 32.41 23.47 18.01 14.21 6.03 2.23 1.20 0.76 Ours 512 0.14 0.22 0.30 0.39 0.49 1.15 3.03 5.70 9.05 Milakov et al. 512 7.99 8.45 7.98 8.00 8.01 8.01 8.01 8.02 8.02 Speedup 54.79 37.22 25.84 20.03 16.13 6.95 2.64 1.40 0.88 Ours 1024 0.25 0.38 0.54 0.72 0.93 2.37 6.57 12.09 19.65 Milakov et al. 1024 12.54 12.70 12.58 12.58 13.02 12.59 12.62 12.59 12.58 Speedup 50.08 32.78 23.11 17.34 13.88 5.30 1.91 1.04 0.64 b) TITAN X Method L Number of top-N elements 10 20 30 40 50 100 200 300 400 Ours 1 0.09 0.14 0.19 0.25 0.32 0.75 2.09 3.97 6.37 Milakov et al. 1 7.65 7.60 7.61 7.64 7.63 7.64 7.58 7.61 7.59 Speedup 84.10 54.19 39.12 29.92 23.76 10.18 3.62 1.91 1.19 Ours 512 0.59 1.03 1.53 2.17 2.96 6.55 18.94 36.20 56.70 Milakov et al. 512 19.23 19.21 19.21 19.22 19.26 19.23 19.02 18.45 18.31 Speedup 32.72 18.64 12.51 8.83 6.48 2.93 1.00 0.50 0.32 Ours 1024 1.07 1.90 2.90 4.13 5.59 12.32 35.22 63.85 101.10 Milakov et al. 1024 31.89 31.91 31.88 31.73 31.91 31.60 30.94 29.55 28.49 Speedup 29.55 16.78 10.97 7.67 5.70 2.56 0.87 0.46 0.28 Table 2: Fused softmax and top-N performance comparison against the method of Milakov and Gimelshein (2018) using different values of N and different batch sizes. For all experiments, we set the vocabulary size to K = 10240. Each time (in ms) is an average over ten runs. Fastest times are shown in bold. 
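The effect quantified in Table 1 can be previewed without a GPU: when only a handful of entries per row are non-zero, a sparse multiply touches far less data than the equivalent dense product. The sketch below is merely a CPU analogue with the Table 1 dimensions (L = 100, K = 10240, M = 512) using NumPy and scipy; it is not the paper's benchmark and says nothing about the cuBLAS or cuSPARSE kernels themselves.

# CPU analogue of the dense-vs-sparse lookup comparison (not the GPU benchmark).
import time
import numpy as np
from scipy.sparse import csr_matrix

L, K, M, nnz_per_row = 100, 10240, 512, 5          # Table 1 dimensions, N = 5 non-zeros per row

A = np.zeros((L, K), dtype=np.float32)
cols = np.random.randint(0, K, size=(L, nnz_per_row))
A[np.arange(L)[:, None], cols] = 1.0               # a minibatch of "5-hot" rows
A_sp = csr_matrix(A)
B = np.random.rand(K, M).astype(np.float32)

def avg_ms(fn, reps=10):
    start = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - start) * 1e3 / reps

print("dense  matmul: %.2f ms" % avg_ms(lambda: A @ B))
print("sparse matmul: %.2f ms" % avg_ms(lambda: A_sp @ B))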
cuSPARSE is designed to handle sparsity patterns that translate well on several tasks with different sparsity patterns. The multiplication time remains constant on the V100 when a standard dense matrix multiplication is used while cuSPARSE keeps performing worse once the sparse input becomes dense. The highest speedups are obtained when the amount of non-zero elements is low, and the lowest speedups are seen when the amount of nonzero elements increase. On the V100, our method starts performing worse than the cuBLAS baseline when the amount of non-zero elements per batch element is larger than 100. On the other side, the performance of our method is worse than cuSPARSE when the sparsity is larger than 10 on the TITAN X architecture. Our method performs well on newer GPU models with a larger amount of SMs. We also compare the performance of our method against a one-hot lookup (i.e., N = 1) implementation used in DyNet (Neubig et al., 2017). DyNet is a C++ toolkit (with CUDA support) designed for NLP models. We compare the time it takes to execute the lookup function on the same dimensions used for our N-hot lookup experiments on both architectures. On average, DyNet takes 0.06ms to execute the lookup on the TITAN X architecture and 0.08ms on the V100 architecture. This operation is faster than both cuBLAS and cuSPARSE yet slower than our sparse implementation; however, this comparison is not entirely fair, because the DyNet times include the overhead of constructing a computation graph, 6222 whereas the other times only include the matrix operation itself. 5.2 Softmax and top-N We compared our fused softmax operation against the current state-of-the art method from NVIDIA (Milakov and Gimelshein, 2018). Table 2 demonstrates the comparison of our method against the NVIDIA baseline using two different architectures. Our method outperforms the baseline on top-N sizes smaller than or equal to 300. Our method scales differently on both GPU architectures given the constrained amount of shared memory on the graphics cards and the amount of SMs available. The performance of our suggested implementation will slightly degrade on both architectures when the amount of memory used to perform the selection overtakes the amount of shared memory available. The speedups against the baseline decrease as N grows. Our execution time still outperforms the baseline on most sizes of N used in NMT scenarios. This makes our method suitable for tasks requiring a small amount of elements from an output list. If the size of N exceeds 300, different methods should be used to obtain the most optimal performance. The baseline scales better than our implementation when N increases. Table 2 shows the execution time for the baseline is not affected significantly when N grows. The baseline does see performance degradation when the amount of elements in the mini-batch increases. This is due to the same reduction operation used for all sizes of N. This factor allows our method to perform better in several scenarios where N is smaller than or equal to 300. The baseline performs best on scenarios where the batch size is small and the size of the batch elements is large (about 4000). They claim their method does not perform well on batches with a high dimensionality if N is very large due to the cost of computing the full reduction to sort the input weights and their ids. The batch size affects the performance in a different manner on both architectures. The performance scales in a different manner when the batch size changes. 
On our largest experiments, the performance for N = 400 does not degrade significantly on the V100 architecture, while the speedups on the TITAN X change significantly from 1.19 to 0.32. This shows that our method runs best on the TITAN X architecture when the batch size is small, and the amount of top-N elements required does not exceed 400. For larger batches, the V100 architecture performs best for all values of N. The TITAN X provides better speedups against the baseline when the number of elements in the mini-batch is small, and both our method and baseline run on the same GPU device. 6 Conclusion In this work, we introduce two parallel methods for sparse computations found in NMT. The first operation is the sparse multiplication found in the input layer, and the second one is a fused softmax and top-N. Both implementations outperform different parallel baselines. We obtained speedups of up to 7× for the sparse affine transformation, and 50× for the fused softmax and top-N task.3 Future work includes the fusion of additional operations in neural models. Matrix operations form the largest bottleneck in deep learning. The last affine transformation in deep neural models can be fused with our softmax and top-N methods. The fusion of these three operations requires a different implementation of the matrix multiplication, and shared memory usage. References Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: a system for large-scale machine learning. In OSDI, volume 16, pages 265– 283. Arturo Argueta and David Chiang. 2017. Decoding with finite-state transducers on GPUs. In Proc. EACL, volume 1, pages 1044–1052. Nikolay Bogoychev, Kenneth Heafield, Alham Fikri Aji, and Marcin Junczys-Dowmunt. 2018. Accelerating asynchronous stochastic gradient descent for neural machine translation. In Proc. EMNLP. Sharan Chetlur, CliffWoolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. 2014. cuDNN: Efficient primitives for deep learning. arXiv:1410.0759. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans. ACL, 4:357–370. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties 3bitbucket.org/aargueta2/sparse operations 6223 of neural machine translation: Encoder-decoder approaches. In Proc. Workshop on Syntax, Semantics, and Structure in Statistical Translation. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proc. ACL, volume 1, pages 1370–1380. Vincent Garcia, Eric Debreuve, and Michel Barlaud. 2008. Fast k nearest neighbor search using GPU. In CVPR Workshop on Computer Vision on GPU, pages 1–6. IEEE. Naga Govindaraju, Jim Gray, Ritesh Kumar, and Dinesh Manocha. 2006. GPUTeraSort: High performance graphics co-processor sorting for large database management. In Proc. ACM SIGMOD International Conference on Management of Data, pages 325–336. Edouard Grave, Armand Joulin, Moustapha Ciss´e, David Grangier, and Herv´e J´egou. 2017. Efficient softmax approximation for GPUs. In Proc. ICML. Alex Graves. 2012. 
Sequence transduction with recurrent neural networks. In ICML Workshop on Representation Learning. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. 2015. Deep learning with limited numerical precision. In Proc. ICML, pages 1737–1746. David Hall, Taylor Berg-Kirkpatrick, and Dan Klein. 2014. Sparser, better, faster GPU parsing. In Proc. ACL, pages 208–217. Mark Harris. 2005. Mapping computational concepts to GPUs. In ACM SIGGRAPH 2005 Courses, page 50. Hieu Hoang, Tomasz Dwojak, Rihards Krislauks, Daniel Torregrosa, and Kenneth Heafield. 2018. Fast neural machine translation implementation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 116–121. Association for Computational Linguistics. Yangqing Jia, Evan Shelhamer, JeffDonahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convolutional architecture for fast feature embedding. In Proc. ACM International Conference on Multimedia, pages 675–678. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proc. Workshop on Neural Machine Translation, pages 28–39. Maxim Milakov and Natalia Gimelshein. 2018. Online normalizer calculation for softmax. arXiv:1805.02867. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. DyNet: The dynamic neural network toolkit. arXiv:1701.03980. Jan Niehues, Teresa Herrmann, Stephan Vogel, and Alex Waibel. 2011. Wider context by using bilingual language models in machine translation. In Proc. Workshop on Statistical Machine Translation, pages 198–206. Rasmus Pagh and Flemming Friche Rodler. 2004. Cuckoo hashing. Journal of Algorithms, 51(2):122– 144. Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Proc. Interspeech, pages 2751– 2755. Rajat Raina, Anand Madhavan, and Andrew Y Ng. 2009. Large-scale deep unsupervised learning using graphics processors. In Proc. ICML, pages 873– 880. Youcef Saad. 1990. Sparskit: A basic tool kit for sparse matrix computations. RIACS Technical Report. Nadathur Satish, Mark Harris, and Michael Garland. 2009. Designing efficient sorting algorithms for manycore GPUs. In IEEE Intl. Symposium on Parallel & Distributed Processing, pages 1–10. Xing Shi, Shizhen Xu, and Kevin Knight. 2018. Fast locality sensitive hashing for beam search on GPU. arXiv:1806.00588. Kyuhong Shim, Minjae Lee, Iksoo Choi, Yoonho Boo, and Wonyong Sung. 2017. Svd-softmax: Fast softmax approximation on large vocabulary neural networks. In Advances in Neural Information Processing Systems, pages 5463–5473. Erik Sintorn and Ulf Assarsson. 2008. Fast parallel GPU-sorting using a hybrid algorithm. Journal of Parallel and Distributed Computing, 10(68):1381– 1388. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Andrew Tridgell. 1999. Efficient algorithms for sorting and synchronization. Ph.D. thesis, Australian National University Canberra. 6224 Shijin Zhang, Zidong Du, Lei Zhang, Huiying Lan, Shaoli Liu, Ling Li, Qi Guo, Tianshi Chen, and Yunji Chen. 2016. Cambricon-X: An accelerator for sparse neural networks. 
In IEEE/ACM International Symposium on Microarchitecture.
2019
626
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6225–6235 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6225 An automated framework for fast cognate detection and Bayesian phylogenetic inference in computational historical linguistics Taraka Rama Department of Linguistics University of North Texas [email protected] Johann-Mattis List Dep. of Ling. and Cult. Evolution (DLCE) MPI-SHH (Jena) [email protected] Abstract We present a fully automated workflow for phylogenetic reconstruction on large datasets, consisting of two novel methods, one for fast detection of cognates and one for fast Bayesian phylogenetic inference. Our results show that the methods take less than a few minutes to process language families that have so far required large amounts of time and computational power. Moreover, the cognates and the trees inferred from the method are quite close, both to gold standard cognate judgments and to expert language family trees. Given its speed and ease of application, our framework is specifically useful for the exploration of very large datasets in historical linguistics. 1 Introduction Computational historical linguistics is a relatively young discipline which aims to provide automated solutions for those problems which have been traditionally dealt with in an exclusively manual fashion in historical linguistics. Computational historical linguists thus try to develop automated approaches to detect historically related words (called “cognates”; J¨ager et al. 2017; List et al. 2017; Rama et al. 2017; Rama 2018a), to infer language phylogenies (“language trees”; Rama et al. 2018; Greenhill and Gray 2009), to estimate the time depths of language families (Rama, 2018b; Chang et al., 2015; Gray and Atkinson, 2003), to determine the homelands of their speakers (Bouckaert et al., 2012; Wichmann et al., 2010), to determine diachronic word stability (Pagel and Meade, 2006; Rama and Wichmann, 2018), or to estimate evolutionary rates for linguistic features (Greenhill et al., 2010). Despite the general goal of automating traditional workflows, the majority of studies concerned with phylogenetic reconstruction (including studies on dating and homeland inference) still make use of expert judgments to determine cognate words in linguistic datasets, because detecting cognates is usually regarded as hard to automate. The problem of manual annotation is that the process is very time consuming and may show a lack of objectivity, as inter-annotator agreement is rarely tested when creating new datasets. The last twenty years have seen a surge of work in the development of methods for automatic cognate identification. Current methods reach high accuracy scores compared to human experts (List et al., 2017) and even fully automated workflows in which phylogenies are built from automatically inferred cognates do not differ a lot from phylogenies derived from expert’s cognate judgments (Rama et al., 2018). Despite the growing amount of research devoted to automated word comparison and fully automated phylogenetic reconstruction workflows, scholars have so far ignored the computational effort required to apply the methods to large amounts of data. While the speed of the current workflows can be ignored for small datasets, it becomes a challenge with increasing amounts of data, and some of the currently available methods for automatic cognate detection can only be applied to datasets with maximally 100 languages. 
Although methods for phylogenetic inference can handle far more languages, they require enormous computational efforts, even for small language families of less than 20 varieties (Kolipakam et al., 2018), which make it impossible for scholars perform exploratory studies in Bayesian frameworks. In this paper, we propose an automated framework for fast cognate detection and fast Bayesian phylogenetic inference. Our cognate detection algorithm uses an alignment-free technique based on character skip-grams (J¨arvelin et al., 2007), which has the advantage of neither requiring handcrafted nor statistically trained matrices of proba6226 ble sound correspondences to be supplied.1 Our fast approach to Bayesian inference uses a simulated annealing variant (Andrieu et al., 2003) of the original MCMC algorithm to compute a maximum-a-posteriori (MAP) tree in a very short amount of time. Testing both our fast cognate detection and our fast phylogenetic reconstruction approach on publicly available datasets, we find that the results presented in the paper are comparable to the alternative, much more time-consuming algorithms currently in use. Our automatic cognate detection algorithm shows results comparable to those achieved by the SCA approach (List, 2014), which is one of the best currently available algorithms that work without inferring regular sound correspondences prior to computation (List et al., 2017). Our automatically inferred MAP trees come close to the expert phylogenies reported in Glottolog (Hammarstr¨om et al., 2017), and are at least as good as the phylogenies inferred with MrBayes (Ronquist et al., 2012), one of the most popular programs for phylogenetic inference. In combination, our new approaches offer a fully automated workflow for phylogenetic reconstruction in computational historical linguistics, which is so fast that it can be easily run on single core machines, yielding results of considerable quality in less than 15 minutes for datasets of more than 50 languages. In the following, we describe the fast cognate detection program in Section 2. We describe both the regular variant of the phylogenetic inference program and our simulated annealing variant in Section 3. We present the results of our automated cognate detection and phylogenetic inference experiments and discuss the results in Section 4. We conclude the paper and present pointers to future work in Section 5. 2 Fast Cognate Detection Numerous methods for automatic cognate detection in historical linguistics have been proposed in the past (J¨ager et al., 2017; List, 2014; Rama et al., 2017; Turchin et al., 2010; Arnaud et al., 2017). Most of them are based on the same general workflow, by which – in a first stage – all possible pairs of words within the same meaning slot 1Although Rama (2015) uses skip-grams, the approach in the paper requires hand-annotated data which we intend to overcome in this paper. of a wordlist are compared with each other in order to compute a matrix of pairwise distances or similarities. In a second stage, a flat cluster algorithm or a network partitioning algorithm is used to partition all words into cognate sets, taking the information in the matrix of word pairs as basis (List et al., 2018b). Differences between the algorithms can be found in the way in which the pairwise word comparisons are carried out, to which degree some kind of pre-processing of the data is involved, or which algorithm for flat clustering is being used. 
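As a concrete (and deliberately naive) illustration of this two-stage workflow, the sketch below scores all word pairs in one concept slot with a normalized edit distance and then applies flat clustering; the words, the distance measure, and the threshold are toy choices and not those of any of the systems cited above.

# Naive two-stage cognate clustering for one concept slot (illustration, not a cited system).
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def edit_distance(a, b):
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

words = ["hand", "hant", "xeri", "ruka", "renka"]     # one concept slot ("hand"), toy transcriptions
n = len(words)
dist = np.zeros((n, n))
for i, j in combinations(range(n), 2):                # (n^2 - n) / 2 pairwise comparisons
    dist[i, j] = dist[j, i] = edit_distance(words[i], words[j]) / max(len(words[i]), len(words[j]))

labels = fcluster(linkage(squareform(dist), method="average"), t=0.55, criterion="distance")
print(dict(zip(words, labels)))                        # words sharing a label form a putative cognate set

The quadratic number of pairwise comparisons in the first stage is exactly what makes this family of methods expensive on large datasets.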
Since any automated word comparison that starts from the comparison of word pairs needs to calculate similarities or distances for all n2−n 2 possible word pairs in a given concept slot, the computation cost for all algorithms which employ this strategy exponentially increases with the number of words being compared. If methods additionally require to pre-process the data, for example to search across all language-pairs for languagespecific similarities, such as regularly corresponding sounds (List et al., 2017; J¨ager et al., 2017), the computation becomes impractical for datasets of more than 100 languages. A linear time solution was first proposed by Dolgopolsky (1964). Its core idea is to represent all sound sequences in a given dataset by their consonant classes. A consonant class is hereby understood as a rough partitioning of speech sounds into groups that are conveniently used by historical linguistics when comparing languages (such as velars, [k, g, x], dentals [t, d, T], or liquids [r, l, K], etc.). The major idea of this approach is to judge all words as cognate whose initial two consonant classes match. Given that the method requires only that all words be converted to their first consonant classes, this approach, which is now usually called consonant-class matching approach (CCM, Turchin et al. 2010), is very fast, since its computation costs are linear with respect to the number of words being compared. The task of assigning a given word to a given cognate set is already fulfilled by assigning a word a given string of consonant classes. The drawback of the CCM approach is a certain lack of accuracy. While being quite conservative when applied to words showing the same meaning, the method likewise misses many valid matches and thus generally shows a low recall. This is most likely due to the fact that the method does not not 6227 contain any alignment component. Words are converted to sound-class strings and only complete matches are allowed, while good partial matches can often be observed in linguistic data, as can be seen from the comparison of English daughter, represented as TVTVR in sound classes compared to German Tochter TVKTVR. In order to develop an algorithm for automatic cognate detection which is both fast and shows a rather high degree of accuracy, we need to (1) learn from the strategy employed by the CCM method in avoiding any pairwise word comparison, while – at the same time – (2) avoiding the problems of the CCM method by allowing for a detailed sequence comparison based on some kind alignment techniques. Since the CCM method only compares the first two consonants per word, it cannot identify words like English daughter and German Tochter as cognate, although the overall similarity is obvious when comparing the whole strings. A straightforward way to account for our two requirements is using skip-grams of sound-class representations and to represent words and soundclass skip-grams in a given dataset in form of a bipartite network, in which words are assigned to one type of node, and skip-grams to another one. In such a network, we could compute multiple representations of TVTVR and TVKTVR directly and later see, in which of them the two sequences match. 
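The consonant-class matching idea can be summarized in a few lines: each word is reduced to the classes of its first two consonants, and all words sharing that key are grouped together. The class table below is a tiny illustrative fragment (an assumption made for the example), not the full Dolgopolsky or SCA inventory; the example also reproduces the recall problem just described, since daughter and Tochter receive different keys.

```python
from collections import defaultdict

# Tiny illustrative fragment of a consonant-class table (not a full sound-class model).
CLASS = {"t": "T", "d": "T", "k": "K", "g": "K", "x": "K",
         "r": "R", "l": "R", "n": "N", "m": "N", "h": "H", "s": "S"}

def ccm_key(segments, n=2):
    """Return the first n consonant classes of a segmented word."""
    classes = [CLASS[s] for s in segments if s in CLASS]
    return tuple(classes[:n])

def ccm_clusters(words):
    """Group words whose first two consonant classes match (linear in the number of words)."""
    clusters = defaultdict(list)
    for word, segments in words.items():
        clusters[ccm_key(segments)].append(word)
    return dict(clusters)

words = {"daughter": ["d", "o", "t", "e", "r"],        # sound classes TVTVR -> key (T, T)
         "Tochter":  ["t", "o", "x", "t", "e", "r"]}   # sound classes TVKTVR -> key (T, K)
print(ccm_clusters(words))  # the two cognates end up in different clusters -> low recall
```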
If, for example, we computed all n-grams of length 5 allowing to skip one, we would receive TVTVR for English (only possible solution) and VKTVR, TKTVR, TVTVR, TVKVR, TVKTR, and TVKTV for German, with TVTVR matching the English word, and thus being connected to both words by an edge in our bipartite network (see Figure 1). Similarly, when computing a modified variant of skip-grams based on n-grams of size 3, where only consonants are taken into account, and in which we allow to replace up to one segment systematically by a gap-symbol (“-”), we can see from Table 1 that the structure of matching ngrams directly reflects the cognate relations, with Greek çEri “hand” opposed to German Hand and English hand (both cognate), as well as Russian [ruka], Polish r˜ENka (both cognate). Note that the use of skip-grams here mimics the alignment component of those automatic cognate detection methods in which alignments are used. The difference is that we do not compute the alignments between a sequence pair only, but project each word to a potential (and likewise also restricted) alignment representation. Note also that – even if skip-grams may take some time to compute – our approach presented here is essentially linear in computation time requirements, since the skip-gram calculation represents a constant factor. When searching for potential cognates in our bipartite network, we can say that (A) all connected components correspond to cognate sets, or (B) use some additional algorithm to partition the bipartite network into our putative cognate sets. While computation time will be higher in the latter case, both cases will be drastically faster than existing popular methods for automatic cognate detection, since our bipartite-graph-based approach essentially avoids pairwise word comparisons. Following these basic ideas, we have developed a new method for fast cognate detection using bipartite networks of sound-class-based skipgrams (BipSkip), implemented as a Python library (see SI 1). The basic working procedure is extremely straightforward and consists of three stages. In a first stage, a bipartite network of words and their corresponding skip-grams is constructed, with edges drawn between all words and their corresponding skip-grams. In a second, optional stage, the bipartite graph is refined by deleting all skip-gram nodes which are linked to fewer word nodes than a user-defined threshold. In a third stage, the bipartite graph is projected to a monopartite graph and partitioned into cognate sets, either by its connected components, or with help of graph partitioning algorithms such as, e.g., Infomap (Rosvall and Bergstrom, 2008). Since it is difficult to assess which kinds of skip-grams and which kinds of sound-class systems would yield the most promising results, we conducted an exhaustive parameter training using the data of List (2014, see details reported in SI 2). This resulted in the following parameters used as default for our approach: (1) compute skip grams exclusively from consonant classes, (2) compute skip-grams of length 4, (3) include a gapped version of each word form (allowing for matches with a replacement), (4) use the SCA sound class model (List, 2014), and (5) prune the graph by deleting all skip-gram nodes which link to less than 20% of the median degree of all skip-gram nodes in the data. 
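The following sketch illustrates the three BipSkip stages on toy sound-class strings. It is a simplified illustration rather than the released implementation: the pruning threshold is an absolute count instead of a fraction of the median degree, the final projection loops over word pairs for brevity (a real implementation would iterate over skip-gram nodes to stay near-linear), and only the connected-components variant is shown.

```python
from itertools import combinations
from collections import defaultdict

def skipgrams(seq, n=5):
    """All length-n substrings obtainable by skipping at most one symbol (cf. Figure 1)."""
    out = set()
    for drop in range(-1, len(seq)):            # -1 means "drop nothing"
        s = seq if drop < 0 else seq[:drop] + seq[drop + 1:]
        out.update(s[i:i + n] for i in range(len(s) - n + 1))
    return out

def bipskip(words, n=5, min_degree=2):
    """words: mapping word -> sound-class string. Returns putative cognate sets."""
    # Stage 1: bipartite network linking words to their skip-grams
    edges = {w: skipgrams(s, n) for w, s in words.items()}
    # Stage 2 (optional): prune skip-gram nodes linked to few words
    degree = defaultdict(int)
    for grams in edges.values():
        for g in grams:
            degree[g] += 1
    edges = {w: {g for g in grams if degree[g] >= min_degree} for w, grams in edges.items()}
    # Stage 3: project to a word graph and take connected components (union-find)
    parent = {w: w for w in words}
    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w
    for a, b in combinations(words, 2):
        if edges[a] & edges[b]:
            parent[find(a)] = find(b)
    clusters = defaultdict(set)
    for w in words:
        clusters[find(w)].add(w)
    return list(clusters.values())

print(bipskip({"daughter": "TVTVR", "Tochter": "TVKTVR", "ruka": "RVKV"}))
# [{'daughter', 'Tochter'}, {'ruka'}]
```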
This default setting yielded F-scores of 0.854 (connected components partitioning) and 0.852 (Infomap partitioning) on the training data (using B-Cubes as measure, cf. Amigó et al. 2009 and section 4.2), suggesting that our BipSkip method performs in a manner comparable to the SCA method for automatic cognate detection (List, 2014), which is based on pairwise sequence comparison methods using improved sound class models and alignment techniques. This also means that it clearly outperforms the CCM approach on the training data (scoring 0.8) as well as the computationally rather demanding edit distance approach (scoring 0.814, see List et al. 2017).

Figure 1: Bipartite graph of English daughter, German Tochter, and their corresponding sound-class-based skip-grams of size 5.

               Greek   German   English   Russian   Polish
IPA            çeri    hant     hænd      ruka      rɛ̃ŋka
Cognacy        1       2        2         3         3
Sound Classes  CERI    HANT     HENT      RYKA      RENKA
H-T                    +        +
HN-                    +        +
HNT                    +        +
R-K                                       +         +

Table 1: Shared skip-grams in words meaning "hand" in Greek, German, English, Russian, and Polish reflect the known cognate relations of the word.

3 Fast Phylogenetic Inference

Methods for Bayesian phylogenetic inference in evolutionary biology and historical linguistics (Yang and Rannala, 1997) are all based on the following Bayes rule:

    f(\Psi|X) = \frac{f(X|\Psi)\,f(\Psi)}{f(X)},    (1)

where each state Ψ is composed of the tree topology τ, the branch length vector T of the tree, and the substitution model parameters θ, and where X is a binary cognate data matrix in which each column codes a cognate set as a binary vector. The posterior distribution f(Ψ|X) is difficult to calculate analytically, since one has to sum over all possible rooted topologies, whose number, \frac{(2L-3)!}{2^{L-2}(L-2)!}, increases factorially with the number of languages L in the sample. Therefore, Markov Chain Monte Carlo (MCMC) methods are used to estimate the posterior probability of Ψ. The Metropolis-Hastings algorithm (an MCMC algorithm) is used to sample the parameters from the posterior distribution. This algorithm constructs a Markov chain by proposing a new state Ψ* and then accepting the proposed state Ψ* with the probability given in Equation (2), where q(·) is the proposal distribution:

    r = \frac{f(X|\Psi^*)\,f(\Psi^*)}{f(X|\Psi)\,f(\Psi)} \cdot \frac{q(\Psi|\Psi^*)}{q(\Psi^*|\Psi)}    (2)

The likelihood of the data given the new parameters is computed using the pruning algorithm (Felsenstein, 2004, 251-255), which is a special case of the variable elimination algorithm (Jordan et al., 2004). We assume that the parameters τ, T, θ are independent of each other. In the above procedure, a Markov chain is run for millions of steps and sampled at regular intervals (called thinning) to reduce autocorrelation between the sampled states. A problem with the above procedure is that the chain can get stuck in a local maximum when the posterior has multiple peaks. A different approach, known as Metropolis-coupled Markov Chain Monte Carlo (MC3), has been applied to phylogenetics to explore the tree space efficiently (Altekar et al., 2004).

3.1 MC3

In the MC3 approach, n chains are run in parallel, where n − 1 chains are heated by raising the posterior probability to the power 1/Ti, where Ti, the temperature of the i-th chain, is defined as 1 + δ(i − 1) with δ > 0. A heated chain (i > 1) can explore peaks more efficiently than the cold chain since the posterior density is flattened. The MC3 approach swaps the states between a cold chain and a hot chain at regular intervals using a modified Metropolis-Hastings ratio.
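For readers unfamiliar with the sampler underlying this and the following sections, a single Metropolis-Hastings update can be sketched as below. The sketch is generic: in the actual model the state bundles τ, T and θ, the log-posterior combines the pruning-algorithm likelihood with the priors, and the proposals are the tree and parameter moves described later; here a toy one-dimensional target and proposal stand in for all of these, purely for illustration.

```python
import math, random

def metropolis_hastings_step(state, log_posterior, propose):
    """One generic Metropolis-Hastings update (cf. the acceptance ratio in Eq. 2).

    `propose` returns (new_state, log_hastings), where log_hastings is
    log q(state | new_state) - log q(new_state | state).
    """
    new_state, log_hastings = propose(state)
    log_r = log_posterior(new_state) - log_posterior(state) + log_hastings
    if math.log(random.random()) < min(0.0, log_r):
        return new_state, True
    return state, False

# Toy usage: sample from a 1-d standard normal "posterior" with a symmetric proposal.
log_post = lambda x: -0.5 * x * x
propose = lambda x: (x + random.gauss(0.0, 1.0), 0.0)
x, samples = 0.0, []
for step in range(10000):
    x, _ = metropolis_hastings_step(x, log_post, propose)
    if step % 10 == 0:          # thinning to reduce autocorrelation
        samples.append(x)
print(sum(samples) / len(samples))  # close to 0
```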
This swapping procedure allows the cold chain to explore multiple peaks in the tree space successfully. The MC3 6229 procedure is computationally expensive since it requires multiple CPU cores to run the Markov chains in parallel. As a matter of fact, Rama et al. (2018) employ the MC3 procedure (as implemented in MrBayes; Ronquist et al., 2012) to infer family phylogenetic trees from automatically inferred cognate judgments. 3.2 Simulated Annealing In this paper, we employ a computationally less intensive and a fast procedure inspired from simulated annealing (Andrieu et al., 2003) to infer the maximum-a-posteriori (MAP) tree. We refer the simulated annealing MCMC as MAPLE (MAP estimation for Language Evolution) in the rest of the paper. In this procedure, the Metropolis-Hastings ratio is computed according to the equation 3. In this equation, the initial temperature T0 is set to a high value and then decreased according to a cooling schedule until Ti →0 . The final state of the chain is treated as the maximum-aposteriori (MAP) estimate of the inference procedure. We implement our own tree inference software in Cython which is made available along with the paper. r = f(X|Ψ∗)f(Ψ∗) f(X|Ψ)f(Ψ) 1/Ti q(Ψ|Ψ∗) q(Ψ∗|Ψ) (3) All our Bayesian analyses use binary datasets with states 0 and 1. We employ the Generalized Time Reversible Model (Yang, 2014, Ch.1) for computing the transition probabilities between individual states (0, 1). The rate variation across cognate sets is modeled using a four category discrete Γ distribution (Yang, 1994) which is sampled from a Γ distribution with shape parameter α. MCMC moves We employ multiple moves to sample the parameters. For continuous parameters such as branch lengths and shape parameter we use a multiplier move with exponential distribution (µ = 1) as the proposal distribution. In the case of the stationary frequencies, we employ a uniform slider move that randomly selects two states and proposes a new frequency such that the sum of the frequencies of the states does not change. We use two tree moves: Nearest neighbor interchange (NNI) and a specialized Subpruning and Regrafting move that operates on leaf nodes to propose new trees (Lakner et al., 2008). Cooling Schedule The cooling schedule is very important for the best performance of a simulated annealing algorithm (Andrieu et al., 2003). We experimented with a linear cooling schedule that starts with a high initial temperature T0 and reduces the temperature at iteration i through Ti = λTi−1 where 0.85 <= λ <= 0.96 (Du and Swamy, 2016). We decrease the value of Ti until Ti = 10−5. In this paper, we experiment with reducing the temperature over step size s starting from an initial temperature T0. 4 Evaluation 4.1 Materials All the data for training and testing was taken from publicly available sources and has further been submitted along with the supplementary material accompanying this paper. For training of the parameters of our BipSkip approach for fast cognate detection, the data by List (2014) was used in the form provided by List et al. (2017). This dataset consists of six subsets each covering a subgroup of a language family of moderate size and time depth (see SI 2). To test the BipSkip method, we used both the test set of List et al. (2017), consisting of six distinct datasets of moderate size, as well as five large datasets from five different language families (Austronesian, Austro-Asiatic, Indo-European, Pama-Nyungan, and Sino-Tibetan) used for the study by Rama et al. 
(2018) on the potential of automatic cognate detection methods for the purpose of phylogenetic reconstruction. The latter dataset was also used to test the MAPLE approach for phylogenetic inference. The other two datasets could not be used for the phylogenetic inference task, since these datasets contain a large number of largely unresolved dialect varieties for which no expert classifications are available at the moment. More information on all datasets is given in Table 2. 4.2 Evaluation Methods We evaluate the results of the automatic cognate detection task through B-Cubed scores (Amig´o et al., 2009), a measure now widely used for the task of assessing how well a given cognate detection method performs on a given test dataset (Hauer and Kondrak, 2011; List et al., 2016; J¨ager et al., 2017; List et al., 2017). B-Cubed scores are reported in form of precision, recall, and F-scores, with high precision indicating a high amount of 6230 Dataset Concepts Languages Cognates Austronesian 210 20 2864 Bai 110 9 285 Chinese 140 15 1189 Indo-European 207 20 1777 Japanese 200 10 460 Ob-Ugrian 110 21 242 (a) BipSkip training data. Dataset Concepts Languages Cognates Bahnaric 200 24 1055 Chinese 180 18 1231 Huon 139 14 855 Romance 110 43 465 Tujia 109 5 179 Uralic 173 7 870 (b) BipSkip test data. Dataset Concepts Languages Cognates Austronesian 210 45 3804 Austro-Asiatic 200 58 1872 Indo-European 208 42 2157 Pama-Nyungan 183 67 6634 Sino-Tibetan 110 64 1402 (c) BipSkip and MAPLE test data. Table 2: Datasets (name, concepts, and languages), used for training (a) and testing of BipSkip (b, c) and MAPLE (c). Data in (a) is from List (2014), data in (b) is from List et al. (2017), and data in (c) comes from Rama et al. (2018). true positives, and high recall indicating a high amount of true negatives. Details along with an example on how B-Cubed scores can be inferred are given in List et al. (2017). An implementation of the B-Cubed measure is available from the LingPy Python library for quantitative tasks in historical linguistics (List et al., 2018a). We evaluate the performance of the phylogenetic reconstruction methods by comparing them to expert phylogenies through the Generalized Quartet Distance (GQD), which is a variant of the quartet distance originally developed in bioinformatics (Christiansen et al., 2006) and adapted for linguistic trees by Pompei et al. (2011). A quartet consists of four languages and can either be a star or a butterfly. The quartet distance is defined as the total number of different quartets divided by the total number of possible quartets ( n 4 ) in the tree. This definition of quartet distance penalizes the tree when the gold standard tree has non-binary nodes which is quite common in linguistic phylogenies. The GQD version disregards star quartets and computes the distance between the inferred tree and the gold standard tree as the ratio between the number of different butterflies and the total number of butterflies in the gold standard tree. 4.3 Implementation Both methods are implemented in form of Python packages available – along with detailed installation instructions – from the supplemental material accompanying the paper (SI 1 and SI 4). While the BipSkip method for fast cognate detection is implemented in form of a plug-in for the LingPy library and thus accepts the standard wordlist formats used in LingPy as input format, MAPLE reads the data from files encoded in the Nexus format (Maddison et al., 1997). 
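To clarify how B-Cubed scores are computed over cognate partitions, the following sketch gives one common formulation: precision and recall are averaged over items, and the F-score is their harmonic mean. The published results use the LingPy implementation; this snippet is an independent simplification and may differ in details such as how multiple concepts are aggregated.

```python
def bcubed(gold, pred):
    """B-Cubed precision, recall, and F-score for two partitions of the same items.

    gold, pred: dicts mapping item -> cluster label.
    """
    items = list(gold)

    def averaged(reference, response):
        total = 0.0
        for i in items:
            same_response = [j for j in items if response[j] == response[i]]
            correct = sum(1 for j in same_response if reference[j] == reference[i])
            total += correct / len(same_response)
        return total / len(items)

    precision = averaged(gold, pred)   # judged against the predicted clusters
    recall = averaged(pred, gold)      # symmetric: swap the roles of the partitions
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

gold = {"hand": 1, "Hand": 1, "ruka": 2, "renka": 2}
pred = {"hand": "a", "Hand": "a", "ruka": "b", "renka": "c"}
print(bcubed(gold, pred))  # perfect precision, imperfect recall
```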
4.4 Results Fast Cognate Detection We tested the two variants, of the new BipSkip approach for automatic cognate detection, connected components and Infomap (Rosvall and Bergstrom, 2008), on the two test sets (see Table 2) and calculated the B-Cubed precision, recall, and F-scores. To allow for a closer comparison with cognate detection algorithms of similar strength, we also calculated the results for the SCA method for cognate detection described in List et al. (2017), and the CCM approach described in Section 2. The SCA method uses the Sound-Class-Based Alignment algorithm (List, 2014) to derive distance scores for all word pairs in a given meaning slot and uses a flat version of the UPGMA method (Sokal and Michener, 1958) to cluster words into cognate sets. Table 3 lists the detailed results for all four approaches and all 11 subsets of the two datasets, including the computation time. As can be seen from the results in Table 3, the BipSkip algorithm clearly outperforms the CCM method in terms of overall accuracy on both datasets. It also comes very close in performance to the SCA method, while at the same time only requiring a small amount of the time required to run the SCA analysis. An obvious weakness of our current BipSkip implementation is the performance on South-East Asian language data. Here, we can see that the exclusion of tones and vowels, dictated by our training procedure, leads to a higher amount of false positives. Unfortunately, this cannot be overcome by simply includ6231 Dataset CCM BipSkip-CC BipSkip-IM SCA P R FS P R FS P R FS P R FS Bahnaric 0.92 0.63 0.75 0.82 0.87 0.84 0.85 0.85 0.85 0.88 0.84 0.86 Chinese 0.81 0.74 0.78 0.66 0.95 0.77 0.68 0.93 0.78 0.80 0.79 0.79 Huon 0.89 0.84 0.87 0.73 0.95 0.80 0.73 0.93 0.81 0.79 0.93 0.86 Romance 0.94 0.61 0.74 0.91 0.89 0.90 0.92 0.86 0.89 0.93 0.81 0.87 Tujia 0.97 0.74 0.84 0.89 0.95 0.90 0.89 0.90 0.90 0.97 0.83 0.89 Uralic 0.96 0.86 0.91 0.84 0.93 0.88 0.84 0.93 0.88 0.91 0.91 0.91 TOTAL 0.92 0.74 0.81 0.81 0.91 0.85 0.82 0.90 0.85 0.88 0.85 0.86 TIME 0m1.400s 0m2.960s 0m5.909s 0m25.768s (a) Test Data from List et al. 2017 Dataset CCM BipSkip-CC BipSkip-IM SCA P R FS P R FS P R FS P R FS Austro-Asiatic 0.79 0.64 0.71 0.61 0.81 0.70 0.67 0.77 0.72 0.73 0.80 0.76 Austronesian 0.88 0.58 0.70 0.72 0.72 0.72 0.77 0.68 0.72 0.82 0.74 0.77 Indo-European 0.89 0.64 0.75 0.82 0.73 0.77 0.86 0.69 0.77 0.89 0.74 0.81 Pama-Nyungan 0.64 0.82 0.72 0.71 0.79 0.75 0.75 0.77 0.76 0.59 0.85 0.69 Sino-Tibetan 0.78 0.35 0.48 0.59 0.62 0.60 0.61 0.59 0.60 0.73 0.46 0.56 TOTAL 0.80 0.61 0.67 0.69 0.73 0.71 0.73 0.70 0.71 0.75 0.72 0.72 TIME 0m2.938s 0m9.642s 0m17.642s 2m40.472s (b) Test Data from Rama et al. 2018 Table 3: Results of the cognate detection experiments. Table (a) presents the results for the performance of the four methods tested on the dataset by List et al. (2017): the CCM method, our new BipSkip methods in two variants (with connected components clusters, labelled CC, and the Infomap clusters, labelled IM), and the SCA method. Table (b) presents the results on the large testset by Rama et al. (2018). The column TIME indicates the time the code needed to run on a Linux machine (Thinkpad X280, i5, 8GB, ArchLinux OS), using the Unix “time” command (reporting the real time value). 
ing tones in the skip-grams, since not all languages in the South-East Asian datasets (Sino-Tibetan and Austro-Asiatic) are tonal, and tone matchings would thus lead to an unwanted clustering of tonal and non-tonal languages in the data, which would contradict certain subgroups in which tone developed only in a few language varieties, such as Tibetan. The most promising approach to deal consistently with language families such as Sino-Tibetan would therefore be to extend the current approach to identify partial instead of complete cognates (List et al., 2016), given the prominence of processes such as compounding or derivation in the history of Sino-Tibetan and its descendants. Partial cognates, however, do not offer a direct solution to the problem, since we currently lack phylogenetic algorithms that could handle partial cognates (List, 2016), while approaches to convert partial into full cognates usually require to take semantic information into account (Sagart et al., 2019, 10321). In addition to any attempt to improve on BipSkip by enhancing the training of features used for South-East Asian languages, consistent approaches for the transformation of partial into complete cognate sets will have to be developed in the future. Neither of the two BipSkip approaches can compete with the LexStat-Infomap approach, which yields F-scores of 0.89 on the first test set (see List et al. 2017) and 0.77 on the second test set (see Rama et al. 2018), but this is not surprising, given that neither of the four approaches compared here computes regular sound correspondence information. The obvious drawback of LexStat is its computation time, with more than 30 minutes for the first, and more than two hours for the second test set. While the superior results surely justify its use, the advantage of methods like BipSkip is that they can be used for the purpose of exploratory data analysis or web-based applications. 6232 Fast Phylogenetic Inference We present the results of the phylogenetic experiments in Table 4. Each sub-table shows the setting for s, T0 that yielded the lowest GQD for each cognate detection method. We experimented over a wide range of settings for s ∈{1, 5, 10, 20, 40, 80, 100} and T0 ∈{10, 20, . . . , 90, 100}. We provide the time and the number of generations taken to infer the MAP tree for each cognate inference program and language family. We note that the longest run takes less than fifteen minutes across all the families. In comparison, the results reported by Rama et al. (2018) using MrBayes takes at least four hours on six cores for each of the language family using the SCA method. We examined which settings of s/T0 give the lowest results and found that low step sizes such as 1 give the lowest results for a wide range of T0. We examined the results across the settings and found that the best results can be achieved with a step size above 20 with initial temperature set to 50. The lowest GQD distances were obtained with the SCA cognates. The BipSkipIM method emerged as the winner in the case of the Pama-Nyungan language family. The best result for Pama-Nyungan is better than the average GQD obtained through expert cognate judgments reported in Rama et al. (2018). The weakness of the BipSkip methods with respect to the SinoTibetan language family is also visible in terms of the GQD distance. Comparing the results obtained for the SCA cognates obtained with MAPLE against the ones inferred with MrBayes as reported in Rama et al. 
(2018), it becomes also clear that our method is at least as good as MrBayes, showing better results in Austro-Asiatic, Austronesian, and PamaNyungan. MAPLE with gold standard cognates We further tested if gold standard cognates make a difference in the inferred tree quality. We find that the tree quality improves if we employ gold standard cognates to infer the trees. This result supports the research track of developing high quality automated cognate detection systems which can be employed to analyze hitherto less studied language families of the world. Convergence We investigated if the MAPLE algorithm infers trees whose quality improves across the generations by plotting the GQD of the samFamily s/T0 GQD NGens Time (s) Austro-Asiatic 80/10 0.0155 18080 282.548 Austronesian 20/80 0.0446 5320 46.698 Indo-European 20/40 0.0138 5060 46.014 Pama-Nyungan 40/60 0.1476 10440 224.036 Sino-Tibetan 80/60 0.0958 20880 295.157 (a) Results for CCM cognates. Family s/T0 GQD NGens Time (s) Austro-Asiatic 100/90 0.0135 26900 439.005 Austronesian 100/80 0.0148 26600 285.659 Indo-European 20/80 0.0211 5320 41.544 Pama-Nyungan 80/100 0.1318 21680 435.8 Sino-Tibetan 100/10 0.0722 22600 235.774 (b) Results for SCA cognates. Family s/T0 GQD NGens Time (s) Austro-Asiatic 40/60 0.0415 10440 151.561 Austronesian 20/20 0.1022 4780 42.097 Indo-European 80/10 0.0322 18080 190.48 Pama-Nyungan 100/40 0.1647 25300 759.023 Sino-Tibetan 80/20 0.5218 19120 233.173 (c) Results for BipSkip-CC cognates. Family s/T0 GQD NGens Time (s) Austro-Asiatic 80/80 0.0245 21280 310.403 Austronesian 40/10 0.0927 9040 82.443 Indo-European 10/100 0.046 2710 28.691 Pama-Nyungan 80/70 0.0777 21120 662.447 Sino-Tibetan 40/80 0.3049 10640 129.903 (d) Results for BipSkip-IM cognates. Table 4: Results for the MAPLE approach to fast phylogenetic inference for each method. The best step size and initial temperature setting is shown as s/T0. NGens is the number of generations, Time is the time taken to run the inference in number of seconds on a single core Linux machine. Family s/T0 GQD NGens Time (s) Austro-Asiatic 100/90 0.0058 26900 476.113 Austronesian 80/80 0.0389 21280 123.167 Indo-European 10/10 0.0135 2260 16.713 Pama-Nyungan 100/10 0.061 22600 605.319 Sino-Tibetan 100/50 0.0475 25700 206.952 Table 5: Results for gold standard cognates. pled trees against the temperature for all the five best settings of s/T0 (in bold in Table 4) in Figure 2. The figure clearly shows that at high temperature settings, the quality of the trees is low whereas as temperature approaches zero, the tree quality also gets better for all the language fami6233 0 25 50 75 Temperature 0 0.2 0.4 0.6 GQD Family Austro-Asiatic Austronesian Indo-European Pama-Nyungan Sino-Tibetan Figure 2: Lineplot of GQD against temperature for all the five different language families. The trendlines are drawn using LOESS smoothing. lies. Moreover, the curves are monotonically decreasing once the temperature is below 12. 5 Conclusion In this paper we proposed an automated framework for very fast and still highly reliable phylogenetic reconstruction in historical linguistics. Our framework introduces two new methods. The BipSkip approach uses bipartite networks of soundclass-based skip-grams for the task of automatic cognate detection. The MAPLE approach makes use of simulated annealing technique to infer a MAP tree for linguistic evolution. 
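As a compact recapitulation of the MAPLE idea, the sketch below runs an annealed acceptance rule (the posterior ratio raised to 1/Ti) with the cooling schedule Ti = λTi−1 applied every s steps. The log-posterior and proposal are placeholders, and the sketch additionally tracks the best state visited, whereas the procedure described above treats the final state as the MAP estimate; it is not the released Cython implementation.

```python
import math, random

def anneal_map(state, log_posterior, propose, t0=50.0, lam=0.9, step_size=20):
    """Simulated-annealing search for a MAP estimate (cf. Eq. 3 and the cooling schedule)."""
    temperature = t0
    current, current_lp = state, log_posterior(state)
    best, best_lp = current, current_lp
    while temperature > 1e-5:
        for _ in range(step_size):
            proposal, log_hastings = propose(current)
            lp = log_posterior(proposal)
            log_r = (lp - current_lp) / temperature + log_hastings
            if math.log(random.random()) < min(0.0, log_r):
                current, current_lp = proposal, lp
                if current_lp > best_lp:
                    best, best_lp = current, current_lp
        temperature *= lam          # cooling schedule T_i = lambda * T_{i-1}
    return best, best_lp

# Toy usage: recover the mode of a one-dimensional quadratic log-posterior.
print(anneal_map(5.0, lambda x: -(x - 2.0) ** 2,
                 lambda x: (x + random.gauss(0.0, 0.5), 0.0)))
```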
Both methods are not only very fast, but – as our tests show – also quite accurate in their performance, when compared to similar, much slower, algorithms proposed in the past. In combination, the methods can be used to assess preliminary phylogenies from linguistic datasets of more than 100 languages in less than half an hour on an ordinary single core machine. We are well aware that our framework is by no means perfect, and that it should be used with a certain amount of care. Our methods are best used for the purpose of exploratory analysis on larger datasets which have so far not yet been thoroughly studied. Here, we believe that the new framework can provide considerable help to future research, specifically also, because it does not not require the technical support of high-end clusters. Both methods can be further improved in multiple ways. Our cognate detection method’s weak performance on South-East Asian languages could be addressed by enabling it to detect partial cognates instead of complete cognates. At the same time, new models, allowing for a consistent handling of multi-state characters and a direct handling of partial cognates, could be added to our fast Bayesian phylogenetic inference approach. Acknowledgments We thank the three reviewers for the comments which helped improve the paper. TR took part in the BigMed project (https://bigmed.no/) at University of Oslo when the work was performed. JML’s work was supported by the ERC Starting Grant 715618 “Computer-Assisted Language Comparison” (http://calc.digling.org). 6234 References Gautam Altekar, Sandhya Dwarkadas, John P Huelsenbeck, and Fredrik Ronquist. 2004. Parallel metropolis coupled Markov chain Monte Carlo for Bayesian phylogenetic inference. Bioinformatics, 20(3):407– 415. Enrique Amig´o, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal constraints. Information Retrieval, 12(4):461–486. Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. 2003. An introduction to MCMC for machine learning. Machine learning, 50(1-2):5–43. Adam S. Arnaud, David Beck, and Grzegorz Kondrak. 2017. Identifying cognate sets across dictionaries of related languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2509–2518. Association for Computational Linguistics. Remco Bouckaert, Philippe Lemey, Michael Dunn, Simon J. Greenhill, Alexander V. Alekseyenko, Alexei J. Drummond, Russell D. Gray, Marc A. Suchard, and Quentin D. Atkinson. 2012. Mapping the Origins and Expansion of the Indo-European Language Family. Science, 337(6097):957–960. Will Chang, Chundra Cathcart, David Hall, and Andrew Garrett. 2015. Ancestry-constrained phylogenetic analysis supports the Indo-European steppe hypothesis. Language, 91(1):194–244. Chris Christiansen, Thomas Mailund, Christian NS Pedersen, Martin Randers, and Martin Stig Stissing. 2006. Fast calculation of the quartet distance between trees of arbitrary degrees. Algorithms for Molecular Biology, 1(1). Aron B. Dolgopolsky. 1964. Gipoteza drevnejˇsego rodstva jazykovych semej severnoj evrazii s verojatnostej toˇcky zrenija. Voprosy Jazykoznanija, 2:53– 63. Ke-Lin Du and MNS Swamy. 2016. Simulated annealing. In Search and Optimization by Metaheuristics, pages 29–36. Springer. Joseph Felsenstein. 2004. Inferring phylogenies. Sinauer Associates, Sunderland, Massachusetts. Russell D. Gray and Quentin D. Atkinson. 2003. 
Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature, 426(6965):435–439. Simon J. Greenhill, Quentin D. Atkinson, Andrew Meade, and Russell D. Gray. 2010. The shape and tempo of language evolution. Proceedings of the Royal Society B: Biological Sciences, 277(1693):2443–2450. Simon J. Greenhill and Russell D. Gray. 2009. Austronesian language phylogenies: Myths and misconceptions about Bayesian computational methods. Austronesian Historical Linguistics and Culture History: A Festschrift for Robert Blust, pages 375–397. Harald Hammarstr¨om, Robert Forkel, and Martin Haspelmath. 2017. Glottolog. Max Planck Institute for Evolutionary Anthropology, Leipzig. Bradley Hauer and Grzegorz Kondrak. 2011. Clustering semantically equivalent words into cognate sets in multilingual lists. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 865–873. AFNLP. Gerhard J¨ager, Johann-Mattis List, and Pavel Sofroniev. 2017. Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Long Papers, pages 1204–1215, Valencia. Association for Computational Linguistics. Anni J¨arvelin, Antti J¨arvelin, and Kalervo J¨arvelin. 2007. s-grams: Defining generalized n-grams for information retrieval. Information Processing & Management, 43(4):1005–1019. Michael I Jordan et al. 2004. Graphical models. Statistical Science, 19(1):140–155. Vishnupriya Kolipakam, Fiona M. Jordan, Michael Dunn, Simon J. Greenhill, Remco Bouckaert, Russell D. Gray, and Annemarie Verkerk. 2018. A Bayesian phylogenetic study of the Dravidian language family. Royal Society Open Science, 5:171504. Clemens Lakner, Paul Van Der Mark, John P Huelsenbeck, Bret Larget, and Fredrik Ronquist. 2008. Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics. Systematic biology, 57(1):86–103. Johann-Mattis List. 2014. Sequence comparison in historical linguistics. D¨usseldorf University Press, D¨usseldorf. Johann-Mattis List. 2016. Beyond cognacy: Historical relations between words and their implication for phylogenetic reconstruction. Journal of Language Evolution, 1(2):119–136. Johann-Mattis List, Simon Greenhill, Tiago Tresoldi, and Robert Forkel. 2018a. LingPy. A Python library for quantitative tasks in historical linguistics. Max Planck Institute for the Science of Human History, Jena. Johann-Mattis List, Simon J. Greenhill, and Russell D. Gray. 2017. The potential of automatic word comparison for historical linguistics. PLOS ONE, 12(1):1–18. 6235 Johann-Mattis List, Philippe Lopez, and Eric Bapteste. 2016. Using sequence similarity networks to identify partial cognates in multilingual wordlists. In Proceedings of the Association of Computational Linguistics 2016 (Volume 2: Short Papers), pages 599–605, Berlin. Association of Computational Linguistics. Johann-Mattis List, Mary Walworth, Simon J. Greenhill, Tiago Tresoldi, and Robert Forkel. 2018b. Sequence comparison in computational historical linguistics. Journal of Language Evolution, 3(2):130–144. David R Maddison, David L Swofford, and Wayne P Maddison. 1997. NEXUS: an extensible file format for systematic information. Syst. Biol., 46(4):590– 621. Mark Pagel and Andrew Meade. 2006. Estimating rates of lexical replacement on phylogenetic trees of languages. 
In Peter Forster and Colin Renfrew, editors, Phylogenetic Methods and the Prehistory of Languages, pages 173–182. McDonald Institute Monographs, Cambridge. Simone Pompei, Vittorio Loreto, and Francesca Tria. 2011. On the accuracy of language trees. PloS one, 6(6):e20109. Taraka Rama. 2015. Automatic cognate identification with gap-weighted string subsequences. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies., pages 1227–1231. Taraka Rama. 2018a. Similarity dependent chinese restaurant process for cognate identification in multilingual wordlists. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 271–281. Taraka Rama. 2018b. Three tree priors and five datasets. Language Dynamics and Change, 8(2):182 – 218. Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard J¨ager. 2018. Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics? In Proceedings of the North American Chapter of the Association of Computational Linguistics, pages 393–400. Taraka Rama, Johannes Wahle, Pavel Sofroniev, and Gerhard J¨ager. 2017. Fast and unsupervised methods for multilingual cognate clustering. arXiv preprint arXiv:1702.04938. Taraka Rama and Søren Wichmann. 2018. Towards identifying the optimal datasize for lexically-based bayesian inference of linguistic phylogenies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1578–1590. Fredrik Ronquist, Maxim Teslenko, Paul van der Mark, Daniel L Ayres, Aaron Darling, Sebastian H¨ohna, Bret Larget, Liang Liu, Marc A Suchard, and John P Huelsenbeck. 2012. MrBayes 3.2: efficient Bayesian phylogenetic inference and model choice across a large model space. Systematic biology, 61(3):539–542. Martin Rosvall and Carl T Bergstrom. 2008. Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences, 105(4):1118–1123. Laurent Sagart, Guillaume Jacques, Yunfan Lai, Robin Ryder, Valentin Thouzeau, Simon J. Greenhill, and Johann-Mattis List. 2019. Dated language phylogenies shed light on the ancestry of sino-tibetan. Proceedings of the National Academy of Science of the United States of America, 116:1–6. Robert. R. Sokal and Charles. D. Michener. 1958. A statistical method for evaluating systematic relationships. University of Kansas Scientific Bulletin, 28:1409–1438. Peter Turchin, Ilja Peiros, and Murray Gell-Mann. 2010. Analyzing genetic connections between languages by matching consonant classes. Journal of Language Relationship, 3:117–126. Søren Wichmann, Andr´e M¨uller, and Viveka Velupillai. 2010. Homelands of the world’s language families: A quantitative approach. Diachronica, 27(2):247– 276. Ziheng Yang. 1994. Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: Approximate methods. Journal of Molecular evolution, 39(3):306–314. Ziheng Yang. 2014. Molecular evolution: A statistical approach. Oxford University Press, Oxford. Ziheng Yang and Bruce Rannala. 1997. Bayesian phylogenetic inference using DNA sequences: a Markov Chain Monte Carlo method. Molecular biology and evolution, 14(7):717–724. A Supplemental Material The supplemental material was submitted along with this paper and also uploaded to Zenodo (https://doi.org/10.5281/zenodo. 3237508). 
The packages provide all data needed to replicate the analyses, as well as detailed instructions on how to apply the methods. In the paper, we point to the relevant sections of the supplemental material.
2019
627
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6236–6247 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6236 Sentence Centrality Revisited for Unsupervised Summarization Hao Zheng and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected] [email protected] Abstract Single document summarization has enjoyed renewed interest in recent years thanks to the popularity of neural network models and the availability of large-scale datasets. In this paper we develop an unsupervised approach arguing that it is unrealistic to expect large-scale and high-quality training data to be available or created for different types of summaries, domains, or languages. We revisit a popular graph-based ranking algorithm and modify how node (aka sentence) centrality is computed in two ways: (a) we employ BERT, a state-of-the-art neural representation learning model to better capture sentential meaning and (b) we build graphs with directed edges arguing that the contribution of any two nodes to their respective centrality is influenced by their relative position in a document. Experimental results on three news summarization datasets representative of different languages and writing styles show that our approach outperforms strong baselines by a wide margin.1 1 Introduction Single-document summarization is the task of generating a shorter version of a document while retaining its most important content (Nenkova et al., 2011). Modern neural network-based approaches (Nallapati et al., 2016; Paulus et al., 2018; Nallapati et al., 2017; Cheng and Lapata, 2016; See et al., 2017; Narayan et al., 2018b; Gehrmann et al., 2018) have achieved promising results thanks to the availability of largescale datasets containing hundreds of thousands of document-summary pairs (Sandhaus, 2008; Hermann et al., 2015b; Grusky et al., 2018). Nevertheless, it is unrealistic to expect that large-scale and high-quality training data will be available or cre1Our code is available at https://github.com/ mswellhao/PacSum. ated for different summarization styles (e.g., highlights vs. single-sentence summaries), domains (e.g., user- vs. professionally-written articles), and languages. It therefore comes as no surprise that unsupervised approaches have been the subject of much previous research (Marcu, 1997; Radev et al., 2000; Lin and Hovy, 2002; Mihalcea and Tarau, 2004; Erkan and Radev, 2004; Wan, 2008; Wan and Yang, 2008; Hirao et al., 2013; Parveen et al., 2015; Yin and Pei, 2015; Li et al., 2017). A very popular algorithm for extractive single-document summarization is TextRank (Mihalcea and Tarau, 2004); it represents document sentences as nodes in a graph with undirected edges whose weights are computed based on sentence similarity. In order to decide which sentence to include in the summary, a node’s centrality is often measured using graph-based ranking algorithms such as PageRank (Brin and Page, 1998). In this paper, we argue that the centrality measure can be improved in two important respects. Firstly, to better capture sentential meaning and compute sentence similarity, we employ BERT (Devlin et al., 2018), a neural representation learning model which has obtained state-of-the-art results on various natural language processing tasks including textual inference, question answering, and sentiment analysis. 
Secondly, we advocate that edges should be directed, since the contribution induced by two nodes’ connection to their respective centrality can be in many cases unequal. For example, the two sentences below are semantically related: (1) Half of hospitals are letting patients jump NHS queues for cataract surgery if they pay for it themselves, an investigation has revealed. (2) Clara Eaglen, from the royal national in6237 stitute of blind people, said: “It’s shameful that people are being asked to consider funding their own treatment when they are entitled to it for free, and in a timely manner on the NHS.” Sentence (1) describes a news event while sentence (2) comments on it. Sentence (2) would not make much sense on its own, without the support of the preceding sentence, whose content is more central. Similarity as an undirected measure, cannot distinguish this fundamental intuition which is also grounded in theories of discourse structure (Mann and Thompson, 1988) postulating that discourse units are characterized in terms of their text importance: nuclei denote central segments, whereas satellites denote peripheral ones. We propose a simple, yet effective approach for measuring directed centrality for single-document summarization, based on the assumption that the contribution of any two nodes’ connection to their respective centrality is influenced by their relative position. Position information has been frequently used in summarization, especially in the news domain, either as a baseline that creates a summary by selecting the first n sentences of the document (Nenkova, 2005) or as a feature in learning-based systems (Lin and Hovy, 1997; Schilder and Kondadadi, 2008; Ouyang et al., 2010). We transform undirected edges between sentences into directed ones by differentially weighting them according to their orientation. Given a pair of sentences in the same document, one is looking forward (to the sentences following it), and the other is looking backward (to the sentences preceding it). For some types of documents (e.g., news articles) one might further expect sentences occurring early on to be more central and therefore backward-looking edges to have larger weights. We evaluate the proposed approach on three single-document news summarization datasets representative of different languages, writing conventions (e.g., important information is concentrated in the beginning of the document or distributed more evenly throughout) and summary styles (e.g., verbose or more telegraphic). We experimentally show that position-augmented centrality significantly outperforms strong baselines (including TextRank; Mihalcea and Tarau 2004) across the board. In addition, our best system achieves performance comparable to supervised systems trained on hundreds of thousands of examples (Narayan et al., 2018b; See et al., 2017). We present an alternative to more data-hungry models, which we argue should be used as a standard comparison when assessing the merits of more sophisticated supervised approaches over and above the baseline of extracting the leading sentences (which our model outperforms). Taken together, our results indicate that directed centrality improves the selection of salient content substantially. Interestingly, its significance for unsupervised summarization has gone largely unnoticed in the research community. 
For example, gensim (Barrios et al., 2016), a widely used open-source implementation of TextRank only supports building undirected graphs, even though follow-on work (Mihalcea, 2004) experiments with position-based directed graphs similar to ours. Moreover, our approach highlights the effectiveness of pretrained embeddings for the summarization task, and their promise for the development of unsupervised methods in the future. We are not aware of any previous neural-based approaches to unsupervised single-document summarization, although some effort has gone into developing unsupervised models for multi-document summarization using reconstruction objectives (Li et al., 2017; Ma et al., 2016; Chu and Liu, 2018). 2 Centrality-based Summarization 2.1 Undirected Text Graph A prominent class of approaches in unsupervised summarization uses graph-based ranking algorithms to determine a sentence’s salience for inclusion in the summary (Mihalcea and Tarau, 2004; Erkan and Radev, 2004). A document (or a cluster of documents) is represented as a graph, in which nodes correspond to sentences and edges between sentences are weighted by their similarity. A node’s centrality can be measured by simply computing its degree or running a ranking algorithm such as PageRank (Brin and Page, 1998). For single-document summarization, let D denote a document consisting of a sequence of sentences {s1, s2, ..., sn}, and eij the similarity score for each pair (si, sj). The degree centrality for sentence si can be defined as: centrality(si) = X j∈{1,..,i−1,i+1,..,n} eij (1) After obtaining the centrality score for each sentence, sentences are sorted in reverse order and the 6238 top ranked ones are included in the summary. TextRank (Mihalcea and Tarau, 2004) adopts PageRank (Brin and Page, 1998) to compute node centrality recursively based on a Markov chain model. Whereas degree centrality only takes local connectivity into account, PageRank assigns relative scores to all nodes in the graph based on the recursive principle that connections to nodes having a high score contribute more to the score of the node in question. Compared to degree centrality, PageRank can in theory be better since the global graph structure is considered. However, we only observed marginal differences in our experiments (see Sections 4 and 5 for details). 2.2 Directed Text Graph The idea that textual units vary in terms of their importance or salience, has found support in various theories of discourse structure including Rhetorical Structure Theory (RST; Mann and Thompson 1988). RST is a compositional model of discourse structure, in which elementary discourse units are combined into progressively larger discourse units, ultimately covering the entire document. Discourse units are linked to each other by rhetorical relations (e.g., Contrast, Elaboration) and are further characterized in terms of their text importance: nuclei denote central segments, whereas satellites denote peripheral ones. The notion of nuclearity has been leveraged extensively in document summarization (Marcu, 1997, 1998; Hirao et al., 2013) and in our case provides motivation for taking directionality into account when measuring centrality. We could determine nuclearity with the help of a discourse parser (Li et al. 2016; Feng and Hirst 2014; Joty et al. 
2013; Liu and Lapata 2017, inter alia) but problematically such parsers rely on the availability of annotated corpora as well as a wider range of standard NLP tools which might not exist for different domains, languages, or text genres. We instead approximate nuclearity by relative position in the hope that sentences occurring earlier in a document should be more central. Given any two sentences si, sj (i < j) taken from the same document D, we formalize this simple intuition by transforming the undirected edge weighted by the similarity score eij between si and sj into two directed ones differentially weighted by λ1eij and λ2eij. Then, we can refine the centrality score of si based on the directed graph as follows: centrality(si) = λ1 X j<i eij + λ2 X j>i eij (2) where λ1, λ2 are different weights for forwardand backward-looking directed edges. Note that when λ1 and λ1 are equal to 1, Equation (2) becomes degree centrality. The weights can be tuned experimentally on a validation set consisting of a small number of documents and corresponding summaries, or set manually to reflect prior knowledge about how information flows in a document. During tuning experiments, we set λ1 + λ2 = 1 to control the number of free hyper-parameters. Interestingly, we find that the optimal λ1 tends to be negative, implying that similarity with previous content actually hurts centrality. This observation contrasts with existing graph-based summarization approaches (Mihalcea and Tarau, 2004; Mihalcea, 2004) where nodes typically have either no edge or edges with positive weights. Although it is possible to use some extensions of PageRank (Kerchove and Dooren, 2008) to take negative edges into account, we leave this to future work and only consider the definition of centrality from Equation (6) in this paper. 3 Sentence Similarity Computation The key question now is how to compute the similarity between two sentences. There are many variations of the similarity function of TextRank (Barrios et al., 2016) based on symbolic sentence representations such as tf-idf. We instead employ a state-of-the-art neural representation learning model. We use BERT (Devlin et al., 2018) as our sentence encoder and fine-tune it based on a type of sentence-level distributional hypothesis (Harris, 1954; Polajnar et al., 2015) which we explain below. Fine-tuned BERT representations are subsequently used to compute the similarity between sentences in a document. 3.1 BERT as Sentence Encoder We use BERT (Bidirectional Encoder Representations from Transformers; Devlin et al. 2018) to map sentences into deep continuous representations. BERT adopts a multi-layer bidirectional Transformer encoder (Vaswani et al., 2017) and uses two unsupervised prediction tasks, i.e., masked language modeling and next sentence prediction, to pre-train the encoder. 6239 The language modeling task aims to predict masked tokens by jointly conditioning on both left and right context, which allows pre-trained representations to fuse both contexts in contrast to conventional uni-directional language models. Sentence prediction aims to model the relationship between two sentences. It is a binary classification task, essentially predicting whether the second sentence in a sentence pair is indeed the next sentence. Pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference. 
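As an illustration of how document sentences can be mapped to continuous vectors with a pre-trained encoder, the sketch below uses the Hugging Face transformers package (assumed here as a stand-in for the original BERT release) and mean-pools the final hidden states. Both choices are assumptions made for the example; the fine-tuning objective introduced in Section 3.2 is omitted.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def encode_sentences(sentences):
    """Map sentences to fixed-size vectors by mean-pooling the final hidden states."""
    vectors = []
    with torch.no_grad():
        for sent in sentences:
            inputs = tokenizer(sent, return_tensors="pt", truncation=True, max_length=128)
            hidden = model(**inputs).last_hidden_state        # shape (1, seq_len, 768)
            vectors.append(hidden.mean(dim=1).squeeze(0))     # shape (768,)
    return torch.stack(vectors)                               # shape (n_sentences, 768)

vecs = encode_sentences(["Half of hospitals are letting patients jump NHS queues.",
                         "The practice was criticised by a blindness charity."])
print(vecs.shape)  # torch.Size([2, 768])
```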
We use BERT to encode sentences for unsupervised summarization.

3.2 Sentence-level Distributional Hypothesis

To fine-tune the BERT encoder, we exploit a type of sentence-level distributional hypothesis (Harris, 1954; Polajnar et al., 2015) as a means to define a training objective. In contrast to skip-thought vectors (Kiros et al., 2015), which are learned by reconstructing the surrounding sentences of an encoded sentence, we borrow the idea of negative sampling from word representation learning (Mikolov et al., 2013). Specifically, for a sentence si in document D, we take its previous sentence si−1 and its following sentence si+1 to be positive examples, and consider any other sentence in the corpus to be a negative example. The training objective for si is defined as:

    \log \sigma(v_{s_{i-1}}'^{\top} v_{s_i}) + \log \sigma(v_{s_{i+1}}'^{\top} v_{s_i}) + \mathbb{E}_{s \sim P(s)}\big[\log \sigma(-v_{s}'^{\top} v_{s_i})\big]    (3)

where v_s and v'_s are two different representations of sentence s via two differently parameterized BERT encoders; σ is the sigmoid function; and P(s) is a uniform distribution defined over the sentence space. The objective in Equation (3) aims to distinguish context sentences from other sentences in the corpus, and the encoder is pushed to capture the meaning of the intended sentence in order to achieve that. We sample five negative samples for each positive example to approximate the expectation. Note that this approach is much more computationally efficient compared to reconstructing surrounding sentences (Kiros et al., 2015).

Dataset    # docs   avg. document (words / sen.)   avg. summary (words / sen.)
CNN+DM     11,490   641.9 / 28.0                   54.6 / 3.9
NYT         4,375   1,290.5 / 50.7                 79.8 / 3.5
TTNews      2,000   1,037.1 / 21.8                 44.8 / 1.1

Table 1: Statistics on NYT, CNN/Daily Mail, and TTNews datasets (test set). We compute the average document and summary length in terms of number of words and sentences, respectively.

3.3 Similarity Matrix

Once we obtain representations {v1, v2, ..., vn} for sentences {s1, s2, ..., sn} in document D, we employ the pair-wise dot product to compute an unnormalized similarity matrix \bar{E}:

    \bar{E}_{ij} = v_i^{\top} v_j    (4)

We could also use cosine similarity, but we empirically found that the dot product performs better. The final normalized similarity matrix E is defined based on \bar{E}:

    \tilde{E}_{ij} = \bar{E}_{ij} - \big[\min \bar{E} + \beta(\max \bar{E} - \min \bar{E})\big]    (5)

    E_{ij} = \begin{cases} \tilde{E}_{ij} & \text{if } \tilde{E}_{ij} > 0 \\ 0 & \text{otherwise} \end{cases}    (6)

Equation (5) aims to remove the effect of absolute values by emphasizing the relative contribution of different similarity scores. This is particularly important for the adopted sentence representations, which in some cases might assign very high values to all possible sentence pairs. Hyper-parameter β (β ∈ [0, 1]) controls the threshold below which the similarity score is set to 0.

4 Experimental Setup

In this section we present our experimental setup for evaluating our unsupervised summarization approach, which we call PACSUM as a shorthand for Position-Augmented Centrality based Summarization.

4.1 Datasets

We performed experiments on three recently released single-document summarization datasets representing different languages, document information distribution, and summary styles. Table 1 presents statistics on these datasets (test set); example summaries are shown in Table 5. The CNN/DailyMail dataset (Hermann et al., 2015a) contains news articles and associated highlights, i.e., a few bullet points giving a brief overview of the article.
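Pulling together the similarity matrix of Section 3.3 and the directed centrality of Section 2.2, the selection step can be sketched as follows. The snippet assumes sentence vectors have already been computed (random vectors stand in for BERT representations), treats the diagonal of the similarity matrix as zero, and uses the NYT hyper-parameter values reported later for illustration; it is a sketch of the method as described, not the released PacSum code.

```python
import numpy as np

def pacsum_select(vectors, beta=0.6, lambda1=-2.0, lambda2=1.0, k=3):
    """Position-augmented centrality over sentence vectors (cf. Eqs. 2 and 4-6)."""
    sim = vectors @ vectors.T                      # Eq. (4): pairwise dot products
    np.fill_diagonal(sim, 0.0)                     # simplification: ignore self-similarity
    low, high = sim.min(), sim.max()
    sim = sim - (low + beta * (high - low))        # Eq. (5): emphasize relative similarity
    sim[sim < 0] = 0.0                             # Eq. (6): threshold at zero
    n = len(vectors)
    centrality = np.empty(n)
    for i in range(n):
        # Eq. (2): backward-looking edges weighted by lambda1, forward-looking by lambda2
        centrality[i] = lambda1 * sim[i, :i].sum() + lambda2 * sim[i, i + 1:].sum()
    return sorted(np.argsort(-centrality)[:k])     # top-k sentence indices in document order

rng = np.random.default_rng(0)
doc = rng.normal(size=(10, 768))                   # stand-in for BERT sentence vectors
print(pacsum_select(doc))
```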
We followed the standard splits for training, validation, and testing used by supervised systems (90,266/1,220/1,093 CNN documents and 196,961/12,148/10,397 DailyMail documents). We did not anonymize entities. The LEAD-3 baseline (selecting the first three sentences in each document as the summary) is extremely difficult to beat on CNN/DailyMail (Narayan et al., 2018b,a), which implies that salient information is mostly concentrated in the beginning of a document. NYT writers follow less prescriptive guidelines2, and as a result salient information is distributed more evenly in the course of an article (Durrett et al., 2016). We therefore view the NYT annotated corpus (Sandhaus, 2008) as complementary to CNN/DailyMail in terms of evaluating the model’s ability of finding salient information. We adopted the training, validation and test splits (589,284/32,736/32,739) widely used for evaluating abstractive summarization systems. However, as noted in Durrett et al. (2016), some summaries are extremely short and formulaic (especially those for obituaries and editorials), and thus not suitable for evaluating extractive summarization systems. Following Durrett et al. (2016), we eliminate documents with summaries shorter than 50 words. As a result, the NYT test set contains longer and more elaborate summary sentences than CNN/Daily Mail (see Table 1). Finally, to showcase the applicability of our approach across languages, we also evaluated our model on TTNews (Hua et al., 2017), a Chinese news summarization corpus, created for the shared summarization task at NLPCC 2017. The corpus contains a large set of news articles and corresponding human-written summaries which were displayed on the Toutiao app (a mobile news app). Because of the limited display space on the mobile phone screen, the summaries are very concise and typically contain just one sentence. There are 50,000 news articles with summaries and 50,000 news articles without summaries in the training set, and 2,000 news articles in test set. 4.2 Implementation Details For each dataset, we used the documents in the training set to fine-tune the BERT model; hyperparameters (λ1, λ2, β) were tuned on a validation set consisting of 1,000 examples with gold sum2https://archive.nytimes.com/www. nytimes.com/learning/issues_in_depth/ 10WritingSkillsIdeas.html maries, and model performance was evaluated on the test set. We used the publicly released BERT model3 (Devlin et al., 2018) to initialize our sentence encoder. English and Chinese versions of BERT were respectively used for the English and Chinese corpora. As mentioned in Section 3.2, we finetune BERT using negative sampling; we randomly sample five negative examples for every positive one to create a training instance. Each mini-batch included 20 such instances, namely 120 examples. We used Adam (Kingma and Ba, 2014) as our optimizer with initial learning rate set to 4e-6. 5 Results 5.1 Automatic Evaluation We evaluated summarization quality automatically using ROUGE F1 (Lin and Hovy, 2003). We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. NYT and CNN/Daily Mail Table 2 summarizes our results on the NYT and CNN/Daily Mail corpora (examples of system output can be found in the Appendix). We forced all extractive approaches to select three summary sentences for fair comparison. The first block in the table includes two state-of-the-art supervised models. 
5 Results

5.1 Automatic Evaluation

We evaluated summarization quality automatically using ROUGE F1 (Lin and Hovy, 2003). We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency.

NYT and CNN/Daily Mail   Table 2 summarizes our results on the NYT and CNN/Daily Mail corpora (examples of system output can be found in the Appendix). We forced all extractive approaches to select three summary sentences for fair comparison. The first block in the table includes two state-of-the-art supervised models. REFRESH (Narayan et al., 2018b) is an extractive summarization system trained by globally optimizing the ROUGE metric with reinforcement learning. POINTER-GENERATOR (See et al., 2017) is an abstractive summarization system which can copy words from the source text while retaining the ability to produce novel words. As an upper bound, we also present results with an extractive oracle system. We used a greedy algorithm similar to Nallapati et al. (2017) to generate an oracle summary for each document. The algorithm explores different combinations of sentences and generates an oracle consisting of multiple sentences which maximize the ROUGE score against the gold summary.

The second block in Table 2 presents the results of the LEAD-3 baseline (which simply creates a summary by selecting the first three sentences in a document) as well as various instantiations of TEXTRANK (Mihalcea and Tarau, 2004). Specifically, we experimented with three sentence representations to compute sentence similarity. The first one is based on tf-idf, where the value of the corresponding dimension in the vector representation is the number of occurrences of the word in the sentence times the idf (inverse document frequency) of the word. Following gensim, we preprocessed sentences by removing function words and stemming words. The second one is based on the skip-thought model (Kiros et al., 2015) which exploits a type of sentence-level distributional hypothesis to train an encoder-decoder model trying to reconstruct the surrounding sentences of an encoded sentence. We used the publicly released skip-thought model (https://github.com/ryankiros/skip-thoughts) to obtain vector representations for our task. The third one is based on BERT (Devlin et al., 2018) fine-tuned with the method proposed in this paper. Finally, to determine whether the performance of PageRank and degree centrality varies in practice, we also include a graph-based summarizer with DEGREE centrality and tf-idf representations.

Method                                    NYT                   CNN+DM
                                     R-1   R-2   R-L       R-1   R-2   R-L
ORACLE                               61.9  41.7  58.3      54.7  30.4  50.8
REFRESH (Narayan et al., 2018b)      41.3  22.0  37.8      41.3  18.4  37.5
POINTER-GENERATOR (See et al., 2017) 42.7  22.1  38.0      39.5  17.3  36.4
LEAD-3                               35.5  17.2  32.0      40.5  17.7  36.7
DEGREE (tf-idf)                      33.2  13.1  29.0      33.0  11.7  29.5
TEXTRANK (tf-idf)                    33.2  13.1  29.0      33.2  11.8  29.6
TEXTRANK (skip-thought vectors)      30.1   9.6  26.1      31.4  10.2  28.2
TEXTRANK (BERT)                      29.7   9.0  25.3      30.8   9.6  27.4
PACSUM (tf-idf)                      40.4  20.6  36.4      39.2  16.3  35.3
PACSUM (skip-thought vectors)        38.3  18.8  34.5      38.6  16.1  34.9
PACSUM (BERT)                        41.4  21.7  37.5      40.7  17.8  36.9

Table 2: Test set results on the NYT and CNN/Daily Mail datasets using ROUGE F1 (R-1 and R-2 are shorthands for unigram and bigram overlap, R-L is the longest common subsequence). The ROUGE scores reported here for REFRESH on CNN/Daily Mail are higher than those in the original paper because we extract 3 sentences in Daily Mail rather than 4.

The third block in Table 2 reports results with three variants of our model, PACSUM. These include sentence representations based on tf-idf, skip-thought vectors, and BERT. Recall that PACSUM uses directed degree centrality to decide which sentence to include in the summary.

Figure 1: PACSUM's performance against different values of λ1 on the NYT validation set, with λ2 = 1 and β ∈ {0, 0.3, 0.6}. Optimal hyper-parameters (λ1, λ2, β) are (−2, 1, 0.6).
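For illustration, the sentence-selection step can be sketched as follows. The exact centrality formula is defined earlier in the paper and is not reproduced here; the code below is simply one way to instantiate the description above, where similarities to preceding and following sentences are weighted by λ1 and λ2 respectively and the top-scoring sentences are returned in document order. The function and variable names are ours.

```python
import numpy as np

def pacsum_select(E, lambda1=-2.0, lambda2=1.0, k=3):
    """Directed degree centrality over the normalized similarity matrix E (Eqs. 4-6)."""
    n = E.shape[0]
    centrality = np.empty(n)
    for i in range(n):
        backward = E[:i, i].sum()        # connections to preceding sentences
        forward = E[i, i + 1:].sum()     # connections to following sentences
        centrality[i] = lambda1 * backward + lambda2 * forward
    top = np.argsort(-centrality)[:k]    # k highest-centrality sentences
    return sorted(top.tolist())          # keep document order in the summary
```

With λ1 negative, as in the optimal setting reported in Figure 1, similarity to earlier content lowers a sentence's centrality.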
On both NYT and CNN/Daily Mail datasets, PACSUM (with BERT representations) achieves the highest ROUGE F1 score, compared to other unsupervised approaches. This gain is more pronounced on NYT where the gap between our best system and LEAD-3 is approximately 6 absolute ROUGE-1 F1 points. Interestingly, despite limited access to only 1,000 examples for hyper-parameter tuning, our best system is comparable to supervised systems trained on hundreds of thousands of examples (see rows REFRESH and POINTER-GENERATOR in the table).

As can be seen in Table 2, DEGREE (tf-idf) is very close to TEXTRANK (tf-idf). Due to space limitations, we only show comparisons between DEGREE and TEXTRANK with tf-idf; however, we observed similar trends across sentence representations. These results indicate that considering global structure does not make a difference when selecting salient sentences for NYT and CNN/Daily Mail, possibly due to the fact that news articles in these datasets are relatively short (see Table 1).

The results in Table 2 further show that PACSUM substantially outperforms TEXTRANK across sentence representations, directly confirming our assumption that position information is beneficial for determining sentence centrality in news single-document summarization. In Figure 1 we further show how PACSUM's performance (ROUGE-1 F1) on the NYT validation set varies as λ1 ranges from -2 to 1 (λ2 = 1 and β = 0, 0.3, 0.6). The plot highlights that differentially weighting a connection's contribution (via relative position) has a huge impact on performance (ROUGE ranges from 0.30 to 0.40). In addition, the optimal λ1 is negative, suggesting that similarity with the previous content actually hurts centrality in this case.

We also observed that PACSUM improves further when equipped with the BERT encoder. This validates the superiority of BERT-based sentence representations (over tf-idf and skip-thought vectors) in capturing sentence similarity for unsupervised summarization. Interestingly, TEXTRANK performs worse with BERT. We believe that this is caused by the problematic centrality definition, which fails to fully exploit the potential of continuous representations.

Overall, PACSUM obtains improvements over baselines on both datasets, highlighting the effectiveness of our approach across writing styles (highlights vs. summaries) and narrative conventions. For instance, CNN/Daily Mail articles often follow the inverted pyramid format starting with the most important information, while NYT articles are less prescriptive, attempting to pull the reader in with an engaging introduction and developing from there to explain a topic.
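For reference, the extractive ORACLE upper bound reported in our tables follows a greedy search of the kind described above (similar to Nallapati et al., 2017). A minimal sketch is given below; `rouge_f1` stands for any ROUGE-F1 scorer and, like the overall function signature, is our own assumption rather than the authors' code.

```python
def greedy_oracle(sentences, gold_summary, rouge_f1, max_sents=3):
    """Greedily add the sentence that most improves ROUGE against the gold summary."""
    selected, best_score = [], 0.0
    while len(selected) < max_sents:
        candidates = [i for i in range(len(sentences)) if i not in selected]
        if not candidates:
            break
        scored = [(rouge_f1(" ".join(sentences[j] for j in sorted(selected + [i])),
                            gold_summary), i) for i in candidates]
        score, i = max(scored)
        if score <= best_score:        # stop once no remaining sentence helps
            break
        best_score, selected = score, selected + [i]
    return [sentences[i] for i in sorted(selected)]
```

Setting max_sents=1 corresponds to the single-sentence oracle used for TTNews below.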
TTNews Dataset   Table 3 presents our results on the TTNews corpus using ROUGE F1 as our evaluation metric.

Method               TTNews
                  R-1   R-2   R-L
ORACLE            45.6  31.4  41.7
POINTER-GENERATOR 42.7  27.5  36.2
LEAD              30.8  18.4  24.9
TEXTRANK (tf-idf) 25.6  13.1  19.7
PACSUM (BERT)     32.8  18.9  26.1

Table 3: Results on the Chinese TTNews corpus using ROUGE F1 (R-1 and R-2 are shorthands for unigram and bigram overlap, R-L is the longest common subsequence).

We report results with variants of TEXTRANK (tf-idf) and PACSUM (BERT) which performed best on NYT and CNN/Daily Mail. Since summaries in the TTNews corpus are typically one sentence long (see Table 1), we also limit our extractive systems to selecting a single sentence from the document. The LEAD baseline also extracts the first document sentence, while the ORACLE selects the sentence with maximum ROUGE score against the gold summary in each document. We use the popular POINTER-GENERATOR system of See et al. (2017) as a comparison against supervised methods.

The results in Table 3 show that POINTER-GENERATOR is superior to unsupervised methods, and even comes close to the extractive oracle, which indicates that TTNews summaries are more abstractive compared to the English corpora. Nevertheless, even in this setting which disadvantages extractive methods, PACSUM outperforms LEAD and TEXTRANK, showing that our approach is generally portable across different languages and summary styles. Finally, we show some examples of system output for the three datasets in the Appendix.

5.2 Human Evaluation

In addition to automatic evaluation using ROUGE, we also evaluated system output by eliciting human judgments. Specifically, we assessed the degree to which our model retains key information from the document following a question-answering (QA) paradigm which has been previously used to evaluate summary quality and document compression (Clarke and Lapata, 2010; Narayan et al., 2018b). We created a set of questions based on the gold summary under the assumption that it highlights the most important document content. We then examined whether participants were able to answer these questions by reading system summaries alone, without access to the article. The more questions a system can answer, the better it is at summarizing the document.

Method    NYT    CNN+DM   TTNews
ORACLE    49.0*   53.9*    60.0*
REFRESH   42.5    34.2      —
LEAD      34.7*   26.0*    50.0*
PACSUM    44.4    31.1     56.0

Table 4: Results of QA-based evaluation on NYT, CNN/Daily Mail, and TTNews. We compute a system's final score as the average of all question scores. Systems statistically significant from PACSUM are denoted with an asterisk * (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01).

NYT Gold Summary: Marine Corps says that V-22 Osprey, hybrid aircraft with troubled past, will be sent to Iraq in September, where it will see combat for first time. The Pentagon has placed so many restrictions on how it can be used in combat that plane – which is able to drop troops into battle like helicopter and then speed away like airplane – could have difficulty fulfilling marines longstanding mission for it. Limitations on v-22, which cost $80 million apiece, mean it can not evade enemy fire with same maneuvers and sharp turns used by helicopter pilots.
Questions:
• Which aircraft will be sent to Iraq? V-22 Osprey
• What are the distinctive features of this type of aircraft? able to drop troops into battle like helicopter and then speed away like airplane
• How much does each v-22 cost? $80 million apiece

CNN+DM Gold Summary: “We’re all equal, and we all deserve the same fair trial,” says one juror. The months-long murder trial of Aaron Hernandez brought jurors together. Foreperson: “It’s been an incredibly emotional toll on all of us.”
Questions:
• Who was on trial? Aaron Hernandez
• Who said: “It’s been an incredibly emotional toll on all of us”? Foreperson

TTNews Gold Summary: 皇马今夏清洗名单曝光,三小将租借外出,科恩特朗、伊利亚拉门迪将被永久送出伯纳乌球场. (Real Madrid’s cleaning list was exposed this summer, and the three players will be rented out. Coentrao and Illarramendi will permanently leave the Bernabeu Stadium.)
Question: 皇马今夏清洗名单中几人将被外租?三 (How many people will be rented out by Real Madrid this summer? three)

Table 5: NYT, CNN/Daily Mail and TTNews gold summaries with corresponding questions; the answer to each question is given after it.
For CNN/Daily Mail, we worked on the same 20 documents and associated 71 questions used in Narayan et al. (2018b). For NYT, we randomly selected 18 documents from the test set and created 59 questions in total. For TTNews, we randomly selected 50 documents from the test set and created 50 questions in total. Example questions (and answers) are shown in Table 5. We compared our best system PACSUM (BERT) against REFRESH, LEAD-3, and ORACLE on CNN/Daily Mail and NYT, and against LEAD-3 and ORACLE on TTNews. Note that we did not include TEXTRANK in this evaluation as it performed worse than LEAD-3 in previous experiments (see Tables 2 and 3). Five participants answered questions for each summary. We used the same scoring mechanism from Narayan et al. (2018b), i.e., a correct answer was marked with a score of one, partially correct answers with a score of 0.5, and zero otherwise. The final score for a system is the average of all its question scores. Answers for English examples were elicited using Amazon’s Mechanical Turk crowdsourcing platform while answers for Chinese summaries were assessed by in-house native speakers of Chinese. We uploaded the data in batches (one system at a time) on AMT to ensure that the same participant does not evaluate summaries from different systems on the same set of questions. The results of our QA evaluation are shown in Table 4. ORACLE’s performance is below 100, indicating that extracting sentences by maximizing ROUGE fails in many cases to select salient content, capturing surface similarity instead. PACSUM significantly outperforms LEAD but is worse than ORACLE which suggests there is room for further improvement. Interestingly, PACSUM performs on par with REFRESH (the two systems are not significantly different). 6 Conclusions In this paper, we developed an unsupervised summarization system which has very modest data requirements and is portable across different types of summaries, domains, or languages. We revisited a popular graph-based ranking algorithm and refined how node (aka sentence) centrality is computed. We employed BERT to better capture sentence similarity and built graphs with directed edges arguing that the contribution of any two nodes to their respective centrality is influenced by their relative position in a document. Experimental results on three news summarization datasets demonstrated the superiority of our approach against strong baselines. In the future, we would like to investigate whether some of the ideas introduced in this paper can improve the performance of supervised systems as well as sentence selection in multi-document summarization. 6244 Acknowledgments The authors gratefully acknowledge the financial support of the European Research Council (Lapata; award number 681760). This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9118. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation therein. References Federico Barrios, Federico L´opez, Luis Argerich, and Rosa Wachenchauzer. 2016. Variations of the similarity function of TextRank for automated summarization. 
arXiv preprint arXiv:1602.03606. Sergey Brin and Michael Page. 1998. Anatomy of a large-scale hypertextual Web search engine. In Proceedings of the 7th Conference on World Wide Web, pages 107–117, Brisbane, Australia. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Eric Chu and Peter J. Liu. 2018. Unsupervised neural multi-document abstractive summarization. CoRR, abs/1810.05739. James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1998–2008, Berlin, Germany. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457–479. Vanessa Wei Feng and Graeme Hirst. 2014. A lineartime bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511–521, Baltimore, Maryland. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. NEWSROOM: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 708–719, New Orleans, USA. Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146–162. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015a. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015b. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693– 1701. Morgan, Kaufmann. Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single-document summarization as a tree knapsack problem. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1515–1520, Seattle, Washington, USA. Lifeng Hua, Xiaojun Wan, and Lei Li. 2017. Overview of the nlpcc 2017 shared task: Single document summarization. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 942–947. Springer. Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra- and multisentential rhetorical parsing for document-level discourse analysis. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 486–496, Sofia, Bulgaria. Cristobald de Kerchove and Paul Van Dooren. 2008. The pagetrust algorithm: How to rank web pages when negative links are allowed? In Proceedings of the 2008 SIAM International Conference on Data Mining, pages 346–352. SIAM. 6245 Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3294–3302. Curran Associates, Inc. Piji Li, Zihao Wang, Wai Lam, Zhaochun Ren, and Lidong Bing. 2017. Salience estimation via variational auto-encoders for multi-document summarization. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 3497–3503, San Francisco, California. Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 362–371, Austin, Texas. Chin-Yew Lin and Eduard Hovy. 1997. Identifying topics by position. In Proceedings of the 5th Conference on Applied Natural Language Processing, pages 283–290, Washington, DC, USA. Chin-Yew Lin and Eduard Hovy. 2002. From single to multi-document summarization: A prototype system and its evaluation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 457–464, Pennsylvania, Philadelphia. Chin Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 71–78, Edmonton, Canada. Yang Liu and Mirella Lapata. 2017. Learning contextually informed representations for linear-time discourse parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1300–1309, Copenhagen, Denmark. Shulei Ma, Zhi-Hong Deng, and Yunlun Yang. 2016. An unsupervised multi-document summarization framework based on neural document model. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1514–1523, Osaka, Japan. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Daniel Marcu. 1997. From discourse structures to text summaries. In Proceedings of the ACL Workshop on Intelligent Scalable Text Summarization, pages 82– 88, Madrid, Spain. Daniel Marcu. 1998. Improving summarization through rhetorical parsing tuning. In Proceedings of the 6th Workshop on Very Large Corpora, pages 206–215, Montr´eal, Canada. Rada Mihalcea. 2004. Graph-based ranking algorithms for sentence extraction, applied to text summarization. In The Companion Volume to the Proceedings of 42st Annual Meeting of the Association for Computational Linguistics, pages 170–173, Barcelona, Spain. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. In Proceedings of EMNLP 2004, pages 404–411, Barcelona, Spain. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 
2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 3075–3081, San Francisco, California. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana. Ani Nenkova. 2005. Automatic text summarization of newswire: Lessons learned from the document understanding conference. In Proceedings of the 20th National Conference on Artificial Intelligence, pages 1436–1441, Pittsburgh, Pennsylvania. 6246 Ani Nenkova, Kathleen McKeown, et al. 2011. Automatic summarization. Foundations and Trends R⃝in Information Retrieval, 5(2–3):103–233. You Ouyang, Wenjie Li, Qin Lu, and Renxian Zhang. 2010. A study on position information in document summarization. In Coling 2010: Posters, pages 919–927, Beijing, China. Daraksha Parveen, Hans-Martin Ramsl, and Michael Strube. 2015. Topical coherence for graph-based extractive summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1949–1954, Lisbon, Portugal. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada. Tamara Polajnar, Laura Rimell, and Stephen Clark. 2015. An exploration of discourse-based sentence spaces for compositional distributional semantics. In Proceedings of the First Workshop on Linking Computational Models of Lexical, Sentential and Discourse-level Semantics, pages 1–11, Lisbon, Portugal. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based summarization of multiple documents: sentence extraction, utilitybased evaluation, and user studies. In Proceedings of the NAACL-ANLP 2000 Workshop: Automatic Summarization, pages 21–30, Seattle, Washington. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12). Frank Schilder and Ravikumar Kondadadi. 2008. Fastsum: Fast and accurate query-based multi-document summarization. In Proceedings of ACL-08: HLT, Short Papers, pages 205–208, Columbus, Ohio. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Xiaojun Wan. 2008. An exploration of document impact on graph-based multi-document summarization. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 755–762, Honolulu, Hawaii. Xiaojun Wan and Jianwu Yang. 2008. Multi-document summarization using cluster-based link analysis. In Proceedings of the 31st Annual International ACL SIGIR Conference on Research and Development in Information Retrieval, pages 299–306, Singapore. Wenpeng Yin and Yulong Pei. 2015. Optimizing sentence modeling and selection for document summarization. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 1383–1389, Buenos Aires, Argentina. A Appendix A.1 Examples of System Output Table 6 shows examples of system output. Specifically, we show summaries produced from GOLD, LEAD, TEXTRANK and PACSUM for test documents in NYT, CNN/Daily Mail and TTNews. GOLD is the gold summary associated with each document; LEAD extracts the first document sentences; TextRank (Mihalcea and Tarau, 2004) adopts PageRank (Brin and Page, 1998) to compute node centrality recursively based on a Markov chain model; PACSUM is position augmented centrality based summarization approach introduced in this paper. 6247 NYT CNN+DM TTNews GOLD Marine Corps says that V-22 Osprey, hybrid aircraft with troubled past, will be sent to Iraq in September, where it will see combat for first time. The Pentagon has placed so many restrictions on how it can be used in combat that plane – which is able to drop troops into battle like helicopter and then speed away like airplane – could have difficulty fulfilling marines longstanding mission for it. Limitations on v-22, which cost $80 million apiece, mean it can not evade enemy fire with same maneuvers and sharp turns used by helicopter pilots. ”We’re all equal, and we all deserve the same fair trial.” says one juror. The months-long murder trial of Aaron Hernandez brought jurors together. Foreperson: ”It’s been an incredibly emotional toll on all of us.” 皇马今夏清洗名单曝光,三小 将租借外出,科恩特朗、伊利 亚拉门迪将被永久送出伯纳 乌球场. (Real Madrid’s cleaning list was exposed this summer, and the three players will be rented out. Coentrao and Illarramendi will permanently leave the Bernabeu Stadium. ) TEXTRANK The Pentagon has placed so many restrictions on how it can be used in combat that the plane – which is able to drop troops into battle like a helicopter and then speed away from danger like an airplane – could have difficulty fulfilling the marines ’ longstanding mission for it. Because of these problems, Mr. Coyle, the former pentagon weapons tester, predicted the marines will use the v-22 to ferry troops from one relatively safe spot to another, like a flying truck. In December 2000, four more marines, including the program’s most experienced pilot, were killed in a crash caused by a burst hydraulic line and software problems. A day earlier, Strachan, the jury foreperson, announced the firstdegree murder conviction in the 2013 shooting death of Hernandez’s onetime friend Odin Lloyd. 
Before the trial, at least one juror – Rosalie Oliver – had n’t heard of the 25-year-old defendant who has now gone from a $ 40 million pro-football contract to a term of life without parole in a maximumsecurity prison. Rosalie Oliver – the juror who had n’t heard of Hernandez before the trial – said that, for her, the first shot was enough. 2个赛季前,皇马花费3500万 欧元引进了伊利亚拉门迪, 巴斯克人在安切洛蒂手下就知 道,他在皇马得不到好机会, 现在主教练换成了贝尼特斯, 情况也没有变化。(Two seasons ago, Real Madrid spent 35 million euros to introduce Illarramendi. The Basques knew under Ancelotti that he could not get a good chance in Real Madrid. Now the head coach has changed to Benitez. The situation has not changed.) LEAD the Marine Corps said yesterday that the V22 Osprey, a hybrid aircraft with a troubled past, will be sent to Iraq this September, where it will see combat for the first time. But because of a checkered safety record in test flights, the v-22 will be kept on a short leash. The Pentagon has placed so many restrictions on how it can be used in combat that the plane – which is able to drop troops into battle like a helicopter and then speed away from danger like an airplane – could have difficulty fulfilling the marines ’ longstanding mission for it. (CNN) After deliberating for more than 35 hours over parts of seven days, listening intently to the testimony of more than 130 witnesses and reviewing more than 400 pieces of evidence, the teary-eyed men and women of the jury exchanged embraces. Since late January, their work in the Massachusetts murder trial of former NFL star Aaron Hernandez had consumed their lives. It was nothing like “Law & Order.” 新浪体育显示图片厄德高新 赛季可能会被皇马外租,皇 马主席弗罗伦蒂诺已经获 得了贝尼特斯制定的“清洗 黑名单”。(Sina Sports shows that ¨Odegaard this season may be rented by Real Madrid, Real Madrid President Florentino has obtained the ”cleansing blacklist” developed by Benitez.) PACSUM The Marine Corps said yesterday that the V-22 Osprey, a hybrid aircraft with a troubled past, will be sent to Iraq this September, where it will see combat for the first time. The Pentagon has placed so many restrictions on how it can be used in combat that the plane — which is able to drop troops into battle like a helicopter and then speed away from danger like an airplane — could have difficulty fulfilling the Marines’ longstanding mission for it. The limitations on the V-22, which cost $80 million apiece, mean it cannot evade enemy fire with the same maneuvers and sharp turns used by helicopter pilots. (CNN) After deliberating for more than 35 hours over parts of seven days, listening intently to the testimony of more than 130 witnesses and reviewing more than 400 pieces of evidence, the teary-eyed men and women of the jury exchanged embraces. Since late January, their work in the Massachusetts murder trial of former NFL star Aaron Hernandez had consumed their lives. ”It ’s been an incredibly emotional toll on all of us.” Lesa Strachan told CNN ’s Anderson Cooper Thursday in the first nationally televised interview with members of the jury. 厄德高、卢卡斯-席尔瓦和 阿森西奥将被租借外出,而 科恩特朗和伊利亚拉门迪, 则将被永久送出伯纳乌球 场。( ¨Odegaard, Lucas Silva and Asencio will be rented out, while Coentrao and Illarramendi will permanently leave the Bernabeu Stadium.) Table 6: Example gold summaries and system output for NYT, CNN/Daily Mail and TTNews documents.
Discourse Representation Parsing for Sentences and Documents

Jiangming Liu, Shay B. Cohen, Mirella Lapata
Institute for Language, Cognition and Computation
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh EH8 9AB
[email protected], {scohen,mlap}@inf.ed.ac.uk

Abstract

We introduce a novel semantic parsing task based on Discourse Representation Theory (DRT; Kamp and Reyle 1993). Our model operates over Discourse Representation Tree Structures which we formally define for sentences and documents. We present a general framework for parsing discourse structures of arbitrary length and granularity. We achieve this with a neural model equipped with a supervised hierarchical attention mechanism and a linguistically-motivated copy strategy. Experimental results on sentence- and document-level benchmarks show that our model outperforms competitive baselines by a wide margin.

1 Introduction

Semantic parsing is the task of mapping natural language to machine interpretable meaning representations. Various models have been proposed over the years to learn semantic parsers from linguistic expressions paired with logical forms, SQL queries, or source code (Kate et al., 2005; Liang et al., 2011; Zettlemoyer and Collins, 2005; Banarescu et al., 2013; Wong and Mooney, 2007; Kwiatkowski et al., 2011; Zhao and Huang, 2015). The successful application of encoder-decoder models (Sutskever et al., 2014; Bahdanau et al., 2015) to a variety of NLP tasks has prompted the reformulation of semantic parsing as a sequence-to-sequence learning problem (Dong and Lapata, 2016; Jia and Liang, 2016; Kočiský et al., 2016), although most recent efforts focus on architectures which make use of the syntax of meaning representations, e.g., by developing tree or graph-structured decoders (Dong and Lapata, 2016; Cheng et al., 2017; Yin and Neubig, 2017; Alvarez-Melis and Jaakkola, 2017; Rabinovich et al., 2017; Buys and Blunsom, 2017). In this work we focus on parsing formal meaning representations in the style of Discourse Representation Theory (DRT; Kamp and Reyle 1993).

Figure 1: Meaning representation for the discourse “Max fell. John might push him.” in box-like format (top) and as a tree (bottom). Red lines indicate terminals corresponding to words and green lines indicate non-terminals corresponding to sentences. ⋄ and POS are modality operators for possibility.

DRT is a popular theory of meaning representation (Kamp, 1981; Kamp and Reyle, 1993; Asher, 1993; Asher and Lascarides, 2003) designed to account for a variety of linguistic phenomena, including the interpretation of pronouns and temporal expressions within and across sentences. The basic meaning-carrying units in DRT are Discourse Representation Structures (DRSs) which consist of discourse referents (e.g., x1, x2) representing entities in the discourse and discourse conditions (e.g., max(x1), male(x1)) representing information about discourse referents.
An example of a two-sentence discourse in box-like format is shown in Figure 1a. DRT parsing resembles the task of mapping sentences to Abstract Meaning Representations (AMRs; Banarescu et al. 2013) in that logical forms are broad-coverage, they represent compositional utterances with varied vocabulary and syntax, and are ungrounded, i.e., they are not tied to a specific database from which answers to queries might be retrieved (Zelle and Mooney, 1996; Cheng et al., 2017; Dahl et al., 1994).

Our work departs from previous general-purpose semantic parsers (Flanigan et al., 2016; Foland and Martin, 2017; Lyu and Titov, 2018; Liu et al., 2018; van Noord et al., 2018b) in that we focus on building representations for entire documents rather than isolated utterances, and introduce a novel semantic parsing task based on DRT. Specifically, our model operates over Discourse Representation Tree Structures (DRTSs) which are DRSs rendered in a tree-style format (Liu et al. 2018; see Figure 1b). Discourse representation parsing has been gaining more attention lately (see, e.g., the shared task on Discourse Representation Structure parsing at IWCS 2019: https://sites.google.com/view/iwcs2019/home). The semantic analysis of text beyond isolated sentences can enhance various NLP applications such as information retrieval (Zou et al., 2014), summarization (Goyal and Eisenstein, 2016), conversational agents (Vinyals and Le, 2015), machine translation (Sim Smith, 2017; Bawden et al., 2018), and question answering (Rajpurkar et al., 2018).

Our contributions in this work can be summarized as follows: 1) We formally define Discourse Representation Tree structures for sentences and documents; 2) We present a general framework for parsing discourse structures of arbitrary length and granularity; our framework is based on a neural model which decomposes the generation of meaning representations into three stages following a coarse-to-fine approach (Liu et al., 2018; Dong and Lapata, 2018); 3) We further demonstrate that three modeling innovations are key to tree structure prediction: a supervised hierarchical attention mechanism, a linguistically-motivated copy strategy, and constraint-based inference to ensure well-formed DRTS output; 4) Experimental results on sentence- and document-level benchmarks show that our model outperforms competitive baselines by a wide margin. We release our code and DRTS benchmarks (https://github.com/LeonCrashCode/TreeDRSparsing) in the hope of driving research in semantic parsing further.

2 Discourse Representation Trees

In this section, we define Discourse Representation Tree Structures (DRTSs). We adopt the box-to-tree conversion algorithm of Liu et al. (2018) to obtain trees which we generalize to multi-sentence discourse. As shown in Figure 1, the conversion preserves most of the content of DRS boxes, such as referents, conditions, and their dependencies. Furthermore, we add alignments between sentences and DRTS nodes.

A DRTS is represented by a labeled tree over a domain D = [R, V, C, N] where R denotes relation symbols, V denotes variable symbols, C denotes constants and N denotes scoping symbols. Variables V are indexed and can refer to entities x, events e, states s, time t, propositions p, and segments k (segment variables originate from Segmented Discourse Representation Theory (SDRT; Asher and Lascarides 2003) and denote units connected by discourse relations). R is the disjoint union of a set of elementary relations Re and segment relations Rs.
The set N is defined as the union of binary scoping symbols Nb and unary scoping symbols Nu, where Nb = {IMP, OR, DUP}, denoting conditions involving implication, disjunction, and duplex (duplex represents wh-questions, e.g., who, what, how), and Nu = {POS, NEC, NOT}, denoting modality operators expressing possibility, necessity, and negation.

There are six types of nodes in a DRTS: simple scoped nodes, proposition scoped nodes, segment scoped nodes, elementary DRS nodes, segmented DRS nodes, and atomic nodes. Atomic nodes are leaf nodes such that their label is an instantiated relation r ∈ R with argument variables from V or constants from C (in our formulation, the only constants used are for denoting numbers; proper names are denoted by relations, such as John(x2)). Relations can either be unary or binary. For example, in Figure 1, male(x1) denotes an atomic node with a unary relation, while Patient(e2, x1) denotes a binary relation node.

A simple scoped node can take one of the labels in N. A node that takes a label from Nu has only one child which is either an elementary or a segmented DRS node. A binary scope label node can take two children nodes which are an elementary or a segmented DRS. A proposition scoped node can take as label one of the proposition variables p. Its children are elementary or segmented DRS nodes. A segment scoped node can take as label one of the segment variables k and its children are elementary or segmented DRS nodes. An elementary DRS node is labeled with “DRS” and has children (one or more) which are atomic nodes (taking relations from Re), simple scoped nodes, or proposition scoped nodes. Atomic nodes may use any of the variables except for segment variables k. Finally, a segmented DRS node (labeled with “SDRS”) takes at least two children nodes which are segment scoped nodes and at least one atomic node (where the variables allowed are the segment variables that were chosen for the other children nodes and the relations are taken from Rs). For example, the root node in Figure 1 is an SDRS node with two segment variables k1 and k2 and the instantiated relation is because(k1, k2). The children of the nodes labeled with the segment variables are elementary or segmented DRS nodes. A full DRTS is a tree with an elementary or segmented DRS node as root.
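To make the node inventory concrete, the type definitions above can be captured with a few lightweight classes. This is purely illustrative (the class and field names are ours, not part of the released code), but it reflects the constraints on children described in this section.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class AtomicNode:
    """Leaf node: an instantiated relation, e.g. male(x1) or Patient(e2, x1)."""
    relation: str                 # drawn from Re (elementary) or Rs (segment relations)
    args: List[str]               # one or two variables/constants, e.g. ["x1"] or ["e2", "x1"]

@dataclass
class ScopedNode:
    """Simple (POS, NEC, NOT, IMP, OR, DUP), proposition (p) or segment (k) scope."""
    label: str                    # e.g. "POS", "p1", "k1"
    children: List["DRSNode"] = field(default_factory=list)   # elementary or segmented DRSs

@dataclass
class DRSNode:
    """Elementary ("DRS") or segmented ("SDRS") discourse representation structure."""
    kind: str                     # "DRS" or "SDRS"
    children: List[Union[AtomicNode, ScopedNode]] = field(default_factory=list)
```

A full DRTS is then simply a DRSNode used as the root of the tree.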
3 Modeling Framework

We propose a unified framework for sentence- and document-level semantic parsing based on the encoder-decoder architecture shown in Figure 2. The encoder is used to obtain word and sentence representations while the decoder generates trees in three stages. Initially, elementary DRS nodes, segmented DRS nodes, and scoped nodes are generated. Next, the relations of atomic nodes are predicted, followed by their variables. In order to make the framework compatible for discourse structures of arbitrary length and granularity and capable of adopting document-level information, we equip the decoder with multi-attention, a supervised attention mechanism for aligning DRTS nodes to sentences, and a linguistically-motivated copy strategy.

Figure 2: The DRTS parsing framework; words and sentences are encoded with bi-LSTMs; documents are decoded in three stages, starting with tree non-terminals, then relations, and finally variables. Decoding makes use of multi-attention and copying.

3.1 Encoder

Documents (or sentences) are represented as a sequence of words ⟨d⟩, w_{00}, ..., ⟨sep_i⟩, ..., w_{ij}, ..., ⟨/d⟩, where ⟨d⟩ and ⟨/d⟩ denote the start and end of the document, respectively, and ⟨sep_i⟩ denotes the right boundary of the ith sentence (the left boundary of sentence i is the right boundary of sentence i−1, the left boundary of the first sentence is ⟨d⟩, and the right boundary of the last sentence is ⟨/d⟩). The jth token in the ith sentence of a document is represented by vector x_{ij} = f([e_{w_{ij}}; \bar{e}_{w_{ij}}; e_{\ell_{ij}}]), which is the concatenation (;) of randomly initialized embeddings e_{w_{ij}}, pre-trained word embeddings \bar{e}_{w_{ij}}, and lemma embeddings e_{\ell_{ij}} (where f(·) is a non-linear function). Embeddings e_{w_{ij}} and e_{\ell_{ij}} are randomly initialized and tuned during training, while \bar{e}_{w_{ij}} are fixed.

The encoder represents words and sentences in a unified framework compatible with sentence- and document-level DRTS parsing. Our experiments employed recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997); however, there is nothing inherent in our framework that is LSTM specific. For instance, representations based on convolutional (Kim, 2014) or recursive neural networks (Socher et al., 2012) are also possible.

Word Representation   We encode the input text with a bidirectional LSTM (biLSTM):

[\overleftrightarrow{h}_{x_{00}} : \overleftrightarrow{h}_{x_{mn}}] = \mathrm{biLSTM}(x_{00} : x_{mn}),

where \overleftrightarrow{h}_{x_{ij}} denotes the hidden representation of the encoder for x_{ij}, which denotes the input representation of token j in sentence i.

Shallow Sentence Representation   Each sentence can be represented via the concatenation of the forward hidden state of its right boundary and the backward hidden state of its left boundary, i.e., h_{x_i} = [\overrightarrow{h}_{x_{\langle sep_i \rangle}}; \overleftarrow{h}_{x_{\langle sep_{i-1} \rangle}}].

Deep Sentence Representation   An alternative to the shallow sentence representation just described is a biLSTM encoder:

[\overleftrightarrow{h}_{x_0} : \overleftrightarrow{h}_{x_m}] = \mathrm{biLSTM}(h_{x_0} : h_{x_m}),

which takes h_{x_i}, the shallow sentence representation, as input.

3.2 Decoder

We generate DRTSs following a three-stage decoding process (Liu et al., 2018), where each stage can be regarded as a sequential prediction on its own. Based on this, we propose the multi-attention mechanism to make it possible to deal with multiple sentences. The backbone of our tree-generation procedure is an LSTM decoder which takes encoder representations H_x as input and constructs bracketed trees (i.e., strings) in a top-down manner, while being equipped with multi-attention. We first describe this attention mechanism as it underlies all generation stages and then move on to present each stage in detail.

3.2.1 Multi-Attention

Multi-attention aims to extract features from different encoder representations and is illustrated in Figure 3. The hidden representations h_{y_k} of the decoder are fed to various linear functions to obtain vector space representations: h^v_{y_k} = g_v(h_{y_k}), where g_v(·) is a linear function with the name v (in this paper, v can be "word", "sent", "copy", "st2nd" (from first to second stage) and "nd2rd" (from second to third stage), which are used to distinguish linear functions in different roles, as explained later).

Figure 3: Multi-attention component; linear functions g_v(·) transform decoder hidden representations into different vector spaces, where v shows which linear function is applied, e.g., h^{word}_{y_k} = g_{word}(h_{y_k}).

Given encoder representations H_x = h_{x_0}, h_{x_1}, ..., h_{x_m}, we extract features by applying a standard attention mechanism (Bahdanau et al., 2015) on h^v_{y_k}:

\mathrm{Attn}^v(h_{y_k}, H_x) = \mathrm{Attn}(h^v_{y_k}, H_x) = \sum_{i=1}^{m} \beta^v_{ki} h_{x_i},

where weight \beta^v_{ki} is computed by:

\beta^v_{ki} = \frac{\exp(h^{v\top}_{y_k} h_{x_i})}{\sum_o \exp(h^{v\top}_{y_k} h_{x_o})}.

Multi-attention scores can also be obtained from the attention weights:

\mathrm{Score}^v(h_{y_k}, H_x) = [\beta^v_{k0} : \beta^v_{km}].
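A minimal sketch of the multi-attention component follows (PyTorch-style; the module, method, and argument names are our own): one linear map per view v projects the decoder state, attention weights are computed against the encoder states, and both the context vector (Attn^v) and the weight vector (Score^v) are returned.

```python
import torch
import torch.nn as nn

class MultiAttention(nn.Module):
    """One projection g_v per view (e.g. "word", "sent", "copy"), shared decoder state."""
    def __init__(self, dec_dim, enc_dim, views=("word", "sent", "copy")):
        super().__init__()
        self.g = nn.ModuleDict({v: nn.Linear(dec_dim, enc_dim) for v in views})

    def forward(self, view, h_yk, H_x):
        # h_yk: decoder hidden state (dec_dim,); H_x: encoder states (m, enc_dim)
        h_v = self.g[view](h_yk)                 # h^v_yk = g_v(h_yk)
        beta = torch.softmax(H_x @ h_v, dim=0)   # attention weights beta^v_k
        context = beta @ H_x                     # Attn^v(h_yk, H_x)
        return context, beta                     # beta also serves as Score^v
```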
3.2.2 Tree Generation

Stage 1   Our decoder first generates tree non-terminals y^{st}_0, ..., y^{st}_k (see Figure 2); superscripts "st", "nd", and "rd" denote Stage 1, 2, and 3, respectively. The probabilistic distribution of the kth prediction is:

P(y^{st}_k | y^{st}_{<k}, H_x) = \mathrm{SoftMax}(s^{st}_k),

where H_x refers to the encoder representations and score s^{st}_k is computed as:

s^{st}_k = f([h_{y^{st}_k}; \mathrm{Attn}^{word}(h_{y^{st}_k}, [h_{x_{00}} : h_{x_{mn}}]); \mathrm{Attn}^{sent}(h_{y^{st}_k}, [h_{x_0} : h_{x_m}])]),   (1)

where h_{y^{st}_k} is the hidden representation of the decoder in Stage 1, i.e., h_{y^{st}_k} = \mathrm{LSTM}(e_{y^{st}_{k-1}}), and y^{st}_{-1} is the special token SOS denoting the start of the sequence.

Stage 2   Given elementary or segmented DRS nodes generated in Stage 1, atomic nodes y^{nd}_0, ..., y^{nd}_k are predicted (see Figure 2), with the aid of copy strategies which we discuss shortly. The probabilistic distribution of the kth prediction is:

P(y^{nd}_k | y^{nd}_{<k}, H_x, H_{y^{st}}) = \mathrm{SoftMax}([s^{nd}_k; s^{copy}_k]),

where s^{nd}_k and s^{copy}_k are generation and copy scores, respectively, over the kth prediction:

s^{nd}_k = f([h_{y^{nd}_k}; \mathrm{Attn}^{word}(h_{y^{nd}_k}, [h_{x_{00}} : h_{x_{mn}}]); \mathrm{Attn}^{sent}(h_{y^{nd}_k}, [h_{x_0} : h_{x_m}])])   (2)

s^{copy}_k = \mathrm{Score}^{copy}(h_{y^{nd}_k}, [h^{copy}_{\ell'_0} : h^{copy}_{\ell'_z}])   (3)

where [h^{copy}_{\ell'_0} : h^{copy}_{\ell'_z}] are copy representations used for copy scoring, and h_{y^{nd}_k} is the hidden representation of the decoder in Stage 2, which is obtained based on how the previous token was constructed:

h^{nd}_{y_k} = \begin{cases} \mathrm{LSTM}(g_{copy}(h^{copy}_{y^{nd}_{k-1}})) & \text{if } y^{nd}_{k-1} \text{ is copied} \\ \mathrm{LSTM}(e_{y^{nd}_{k-1}}) & \text{if } y^{nd}_{k-1} \text{ is generated} \\ \mathrm{LSTM}(g_{st2nd}(h_{drs})) & \text{if } k = 0 \end{cases}

The generation of atomic nodes in the second stage is conditioned on h_{drs}, the decoder hidden representation of elementary or segmented DRS nodes from Stage 1, via the linear function g_{st2nd}.

For the generation of atomic nodes, we copy lemmas from the input text. However, copying is limited to unary nodes which mostly represent entities and predicates (e.g., john(x1), eat(e1)), and correspond almost verbatim to input tokens. Binary atomic nodes denote semantic relations between two variables and do not directly correspond to the surface text. For example, given the DRTS for the utterance "the oil company is deprived of ...", nodes oil(x1) and company(x2) will be copied from oil and company, while node of(x2, x1) will not be copied from deprived of.

Copy representations M_d = [h^{copy}_{\ell'_0} : h^{copy}_{\ell'_z}] are constructed for each document d from its encoder hidden representations [h_{x_{00}} : h_{x_{mn}}], by averaging the encoder word representations which have the same lemma, where \ell' \in L' and L' is the set of distinct lemmas in document d:

h_{\ell'_z} = \frac{1}{N} \sum_{(ij): \ell_{ij} = \ell'_z} h_{x_{ij}},

where N is the number of tokens with lemma \ell'_z.

Stage 3   Finally, we generate terminals, i.e., atomic node variables y^{rd}_0, ..., y^{rd}_k (see Figure 2). The probabilistic distribution of the kth prediction is:

P(y^{rd}_k | y^{rd}_{<k}, H_x, H_{y^{nd}}) = \mathrm{SoftMax}(s^{rd}_k),

s^{rd}_k = f([h_{y^{rd}_k}; \mathrm{Attn}^{word}(h_{y^{rd}_k}, [h_{x_{00}} : h_{x_{mn}}]); \mathrm{Attn}^{sent}(h_{y^{rd}_k}, [h_{x_0} : h_{x_m}])])   (4)

where h_{y^{rd}_k} is the decoder hidden representation in the third stage:

h_{y^{rd}_k} = \begin{cases} \mathrm{LSTM}(e_{y^{rd}_{k-1}}) & \text{if } k \neq 0 \\ \mathrm{LSTM}(g_{nd2rd}(h_{atm})) & \text{if } k = 0 \end{cases}

Here, the generation of variables is conditioned upon h_{atm}, the decoder hidden representation of atomic nodes from the second stage, via the linear function g_{nd2rd}.
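The lemma-averaged copy memory used in Stage 2 is easy to construct from the encoder output. The sketch below (function and variable names ours) groups encoder states by lemma and averages them, yielding one copy representation per distinct lemma.

```python
from collections import defaultdict
import torch

def copy_representations(encoder_states, lemmas):
    """Average the encoder states of all tokens sharing a lemma (Stage 2 copy memory)."""
    buckets = defaultdict(list)
    for state, lemma in zip(encoder_states, lemmas):   # one state per token, in document order
        buckets[lemma].append(state)
    lemma_list = sorted(buckets)
    memory = torch.stack([torch.stack(buckets[l]).mean(dim=0) for l in lemma_list])
    return lemma_list, memory   # memory[z] is the copy representation of lemma_list[z]
```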
3.3 Training

The model is trained to minimize an average cross-entropy loss objective:

L(\theta) = -\frac{1}{N} \sum_j \log p_j,   (5)

where p_j is the distribution of output tokens and \theta are the parameters of the model. We use stochastic gradient descent and adjust the learning rate with Adam (Kingma and Ba, 2014).

4 Extensions

In this section we present two important extensions to the basic modeling framework outlined above. These include a supervised attention mechanism dedicated to aligning sentences to tree nodes. This type of alignment is important when parsing documents (rather than individual sentences) and may also enhance the quality of the copy mechanism. Our second extension concerns the generation of well-formed and meaningful logical forms, which is generally challenging for semantic parsers based on sequence-to-sequence architectures, even more so when dealing with long and complex sequences pertaining to documents.

4.1 Supervised Attention

The attention mechanism from Section 3.2.1 can automatically learn alignments between encoder and decoder hidden representations. However, as shown in Figure 1, DRTSs are constructed recursively and alignment information between DRTS nodes and sentences is available. For this reason, we propose a method to explicitly learn this alignment by exploiting the feature representations afforded by multi-attention. Specifically, we obtain alignment weights via multi-attention:

\mathrm{Score}^{align}(h_{y_k}, [h_{x_0} : h_{x_m}]) = [\beta^{align}_{k0} : \beta^{align}_{km}]

where \beta^{align}_{km} = P(a_k = m | h_{y_k}, [h_{x_0} : h_{x_m}]), i.e., the probabilistic distribution over alignments from sentences to the kth prediction in the decoder, where a_k = m denotes the kth prediction aligned to the mth sentence. We add an alignment loss to the objective in Equation (5):

L(\theta) = -\frac{1}{N} \sum_j \log p_j - \frac{1}{N_{align}} \sum_k \log p^{align}_k,

where p^{align}_k is the probability distribution of alignments. We then use these alignments in two ways.

Alignments as Features   Alignments are incorporated as additional features in the decoder by concatenating the aligned sentence representations with the scoring layers. Equations (1), (2), and (4) are thus rewritten as:

s^{stg}_k = f([h_{y^{stg}_k}; \mathrm{Attn}^{word}(h_{y^{stg}_k}, [h_{x_{00}} : h_{x_{mn}}]); h_{x_{a_k}}; \mathrm{Attn}^{sent}(h_{y^{stg}_k}, [h_{x_0} : h_{x_m}])]),

where stg \in \{st, nd, rd\}, and h_{x_{a_k}} is the a_k-th sentence representation. At test time, the scoring layer requires the alignment information, so we first select the sentence with the highest probability, i.e., a^*_k = \arg\max_{a_k} P(a_k | h_{y_k}, [h_{x_0} : h_{x_m}]), and then add its representation h_{x_{a^*_k}} to the scoring layer.

Copying from Alignments   We use alignment as a means to modulate which information is copied. Specifically, we allow copying to take place only over sentences aligned to elementary DRS nodes. We construct copy representations for each sentence in a document, i.e., M_0, ..., M_i, ..., M_m where M_i = [h^{copy}_{\ell'_{i0}} : h^{copy}_{\ell'_{iz}}], \ell'_{iz} \in L'_i, and L'_i is the set of distinct lemmas in the ith sentence:

h^{copy}_{\ell'_{iz}} = \frac{1}{N} \sum_{(ij): \ell_{ij} = \ell'_{iz}} h_{x_{ij}}.

Given the alignment between elementary DRS nodes and sentences, we calculate the copying score by rewriting Equation (3) as:

s^{copy}_k = \mathrm{Score}^{copy}(h_{y^{nd}_k}, M_a)

where a is the index of the sentence that is aligned to the elementary DRS node. At test time, when an elementary DRS is generated during the first stage, we further predict which sentence the node should be aligned to. The information is then passed onto the second stage, and elements from the aligned sentence can be copied.
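In training code, the combined objective of Sections 3.3 and 4.1 amounts to summing two cross-entropy terms, one over output tokens and one over predicted alignments. A minimal sketch follows (names are ours; it assumes logits and gold indices have already been gathered for a batch):

```python
import torch.nn.functional as F

def joint_loss(token_logits, token_targets, align_logits, align_targets):
    """Average token cross-entropy (Eq. 5) plus the supervised alignment loss."""
    generation = F.cross_entropy(token_logits, token_targets)   # averaged over N predictions
    alignment = F.cross_entropy(align_logits, align_targets)    # averaged over N_align predictions
    return generation + alignment
```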
4.2 Constraint-based Inference

Recall that our decoder consists of three stages, each of which is a sequence-to-sequence model. As a result, there is no guarantee that tree output will be well-formed. To ensure the generation of syntactically valid trees, at each step, we generate the set of valid candidates Y^{valid}_k which do not violate the DRTS definitions in Section 2, and then select the highest scoring tree as our prediction:

y^*_k = \arg\max_{y_k \in Y^{valid}_k} P(y_k | y_{<k}, \theta),

where \theta are the parameters of the model, and Y^{valid}_k the set of valid candidates at step k.

step   stack                     valid candidates   prediction
1      []                        SDRS(, DRS(        SDRS(
2      [SDRS(_0]                 k1(                k1(
3      [SDRS(_0, k1(_0]          SDRS(, DRS(        DRS(
4      [SDRS(_0, k1(_0, DRS(]    simpSNs, )         )
5      [SDRS(_0, k1(_1]          )                  )
6      [SDRS(_1]                 k2(                k2(
...    ...                       ...                ...

Figure 4: Constraint-based inference in decoding stage 1; simpSNs are simple scoped nodes; subscripts denote the number of children already constructed.

In Stage 1, partial DRTSs are stored in a stack and for each prediction the model checks the stack to obtain a set of valid candidates. In the example in Figure 4, segment scoped node k1 has a child already at step 5, so predicting a right bracket would not violate the definition of DRTS (similar constraints apply to unary simple scoped nodes). In Stage 2, when generating relations for elementary DRS nodes, the candidates come from Re and lemmas that are used for copying; when generating relations for segmented DRS nodes, the candidates only come from Rs. Finally, in Stage 3 we generate only two variables for binary relations and one variable for unary relations. A formal description is given in the Appendix.
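At decoding time this amounts to masking out invalid candidates before taking the arg max. The sketch below shows only the masking step; computing the valid-candidate set from the stack of partially built nodes follows the rules just listed (the formal description lives in the paper's appendix and is not reproduced here), and all names are ours.

```python
import torch

def constrained_argmax(logits, valid_ids):
    """Pick the highest-scoring candidate among those keeping the DRTS well-formed."""
    mask = torch.full_like(logits, float("-inf"))
    mask[valid_ids] = 0.0                      # leave valid candidates untouched
    return int(torch.argmax(logits + mask))    # invalid candidates can never win

# valid_ids would be derived from the decoding stack, e.g. a closing bracket is only
# allowed once the node on top of the stack has received enough children.
```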
5 Experimental Setup

Benchmarks   Our experiments were carried out on the Groningen Meaning Bank (GMB; Bos et al. 2017) which provides a large collection of English texts annotated with Discourse Representation Structures. We preprocessed the GMB into the tree-based format defined in Section 2 and created two benchmarks, one which preserves document-level boundaries, and a second one which treats sentences as isolated instances. Various statistics on these are shown in Table 1, for the respective training, development, and testing partitions. We followed the same data splits as Liu et al. (2018).

          Sentences           Documents
          #sent    avgw       #doc    #sent    avgs   avgw
train     41,563   21.1       7,843   48,599   6.2    135.3
dev        5,173   21.0         991    6,111   6.2    134.0
test       5,451   21.2       1,035    6,469   6.3    137.2

Table 1: Statistics on the GMB sentence- and document-level benchmarks (avgw denotes the average number of words per sentence (or document), avgs denotes the average number of sentences per document).

Settings   We carried out experiments on the sentence- and document-level GMB benchmarks in order to evaluate our framework. We used the same empirical hyper-parameters for sentence- and document-level parsing. The dimensions of word and lemma embeddings were 300 and 100, respectively. The encoder and decoder had two layers with 300 and 600 hidden dimensions, respectively. The dropout rate was 0.1. Pre-trained word embeddings (100 dimensions) were generated with Word2Vec trained on the AFP portion of the English Gigaword corpus. Models were trained on a single GPU without batches.

Model Comparison   For the sentence-level experiments, we compared our DRTS parser against Liu et al. (2018) who also perform tree parsing and have a decoder which first predicts the structure of the DRS, then its conditions, and finally its referents. Our parser without the document-level component is similar to Liu et al. (2018); a key difference is that our model is equipped with linguistically-motivated copy strategies. In addition, we employed a baseline sequence-to-sequence model (Dong and Lapata, 2016) which treats DRTSs as linearized trees.

For the document-level experiments, we built two baseline models. The first one treats documents as one long string (by concatenating all document sentences) and performs sentence-level parsing (DocSent). The second one parses each sentence in a document with a parser trained on the sentence-level version of the GMB and constructs a (flat) document tree by gathering all sentential DRTSs as children of a segmented DRS node (DocTree). We used the sentence-level DRTS parser for both baselines. We also compared four variants of our document-level model: one with multi-attention and shallow sentence representations (Shallow); one with multi-attention and deep sentence representations (Deep); a Deep model with supervised attention and alignments as features (DeepFeat); and finally, a Deep model with copying modulated by supervised attention (DeepCopy). All variants of our DRTS parser and comparison models adopt constraint-based inference.

Evaluation   We evaluated the output of our semantic parser using COUNTER (van Noord et al., 2018a), a recently proposed metric suited to matching scoped meaning representations. COUNTER converts DRSs to sets of clauses and computes precision and recall on matching clauses. We transformed DRTSs to clauses as shown in Figure 5. b variables refer to DRS nodes, and children of DRS nodes correspond to clauses. We used a hill-climbing algorithm to match variables between predicted clauses and gold standard clauses. We report F1 using exact match and partial match. For example, given predicted clauses "b0 fall e1, b0 Agent e2 x1, b0 push e2" and gold standard clauses "b0 fall e1, b0 Agent e1 x1", exact F1 is 0.4 (1/3 precision and 1/2 recall) while partial F1 is 0.67 (4/7 precision and 4/5 recall).

Figure 5: Clausal form for the DRTS corresponding to the document "Max fell. John might push him.": b0 DRS b1; b0 DRS b2; b0 because b1 b2; b1 max x1; b1 fall e1; b1 Agent e1 x1; b1 now t1; b1 temp_before e1 t1; b2 POS b3; b3 john x2; b3 push e2; b3 Patient e2 x1; b3 male x1; b3 temp_before e2 e1.
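The exact/partial arithmetic in the example above can be reproduced with the simplified sketch below. This is not the official COUNTER implementation (which also searches over variable mappings with hill-climbing); it assumes that partial credit counts matching relation and argument slots after a one-to-one clause alignment, with the box variable excluded.

```python
def clause_f1(predicted, gold, partial=False):
    if not partial:                                      # exact match over whole clauses
        matched = len(set(predicted) & set(gold))
        prec, rec = matched / len(predicted), matched / len(gold)
    else:                                                # partial credit per relation/argument slot
        pred = [c.split()[1:] for c in predicted]        # drop the box variable (b0, b1, ...)
        gol = [c.split()[1:] for c in gold]
        matched, used = 0, set()
        for p in pred:                                   # greedy one-to-one clause alignment
            best, best_j = 0, None
            for j, g in enumerate(gol):
                if j in used or p[0] != g[0]:            # relations must agree for any credit
                    continue
                overlap = sum(a == b for a, b in zip(p, g))
                if overlap > best:
                    best, best_j = overlap, j
            if best_j is not None:
                matched, used = matched + best, used | {best_j}
        prec = matched / sum(len(p) for p in pred)
        rec = matched / sum(len(g) for g in gol)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# clause_f1(["b0 fall e1", "b0 Agent e2 x1", "b0 push e2"],
#           ["b0 fall e1", "b0 Agent e1 x1"])                 -> 0.40 (exact)
# the same call with partial=True                             -> 0.67 (partial)
```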
Models par-F1 exa-F1 DocSent 57.10 53.27 DocTree 62.83 58.22 DeepCopy 70.83 66.56 Table 4: Results (test set) on document-level GMB benchmark. DeepCopy atomic scoped DRS All sentences 0.22 0.26 1.78 2.09 documents 3.57 4.54 25.02 30.75 Table 5: Percentage of ill-formed outputs without constraints during inference (test set); atomic refers to atomic nodes, scoped refers to scoped nodes and DRS referes to DRS nodes (from Section 2) violated. improvements over shallow representations (+3.68 exact-F1). Using alignments as features and as a way of highlighting where to copy from yields further performance gains both in terms of exact and partial F1. The best performing variant is DeepCopy which combines supervised attention with copying. Table 4 shows our results on the test set (see the Appendix for an example of model output); we compare the best performing DRTS parser (DeepCopy) against two baselines which rely on our sentence-level parser (DocSent and DocTree). The DRTS parser, which has a global view of the document, outperforms variants which construct document representations by aggregating individually parsed sentences. Influence of Constraints In Table 5, we examine whether constraint-based inference is helpful. In particular we show the percentage of ill-formed DRTSs when constraints are not enforced. We present results for the sentence- and documentlevel parsers overall and broken down according to the type of DRTS nodes being violated. 30.75% of document level DRTSs are ill-formed when constraints are not imposed during inference. This is in stark contrast with sentence-level outputs which are mostly well-formed (only 2.09% display violations of any kind). We observe that most violations concern elementary and segmented DRS nodes. Influence of Document size Figure 6 shows how our parser (DeepCopy variant) and comparison systems perform on documents of varying length. Unsurprisingly, we observe that F1 decreases with document length and that all systems have trouble modeling documents with 10 sen5 6 7 8 9 10 11 40 50 60 70 document length F1 (%) DocSent DocTree DeepCopy Figure 6: Model performance (exact F1%) as a function of document length (i.e., number of sentences). tences and beyond. In general, DeepCopy has an advantage over comparison systems due to the more sophisticated alignment information and the fact that it aims to generate global document-level structures. Our results also indicate that modeling longer documents which are relatively few in the training set is challenging mainly because the parser cannot learn reliable representations for them. Moreover, as the size of documents increases, ambiguity for the resolution of coreferring expressions increases, suggesting that explicit modeling of anaphoric links might be necessary. 7 Related Work Le and Zuidema (2012) were the first to train a data-driven DRT parser using a graph-based representation. Recently, Liu et al. (2018) conceptualized DRT parsing as a tree structure prediction problem which they modeled with a series of encoder-decoder architectures. van Noord et al. (2018b) adapt models from neural machine translation (Klein et al., 2017) to DRT parsing, also following a graph-based representation. Previous work has focused exclusively on sentences, whereas we design a general framework for parsing sentences and documents and provide a model which can be used interchangeably for both. 
Various mechanisms have been proposed to improve sequence-to-sequence models including copying (Gu et al., 2016) and attention (Mikolov 6256 et al., 2013). Our copying mechanism is more specialized and linguistically-motivated: it considers the semantics of the input text for deciding which tokens to copy. While our multi-attention mechanism is fairly general, it extracts features from different encoder representations (word- or sentencelevel) and flexibly integrates supervised and unsupervised attention in a unified framework. A few recent approaches focus on the alignment between semantic representations and input text, either as a preprocessing step (Foland and Martin, 2017; Damonte et al., 2017) or as a latent variable (Lyu and Titov, 2018). Instead, our parser implicitly models word-level alignments with multi-attention and explicitly obtains sentence-level alignments with supervised attention, aiming to jointly train a semantic parser. 8 Conclusions In this work we proposed a novel semantic parsing task to obtain Discourse Representation Tree Structures and introduced a general framework for parsing texts of arbitrary length and granularity. Experimental results on two benchmarks show that our parser is able to obtain reasonably accurate sentence- and document-level discourse representation structures (77.85 and 66.56 exact-F1, respectively). In the future, we would like to more faithfully capture the semantics of documents by explicitly modeling entities and their linking. Acknowledgments We thank the anonymous reviewers for their feedback and Johan Bos for answering several questions relating to the GMB. We gratefully acknowledge the support of the European Research Council (Lapata, Liu; award number 681760), the EU H2020 project SUMMA (Cohen, Liu; grant agreement 688139) and Bloomberg (Cohen, Liu). References David Alvarez-Melis and Tommi S. Jaakkola. 2017. Tree-structured decoding with doubly-recurrent neural networks. In Proceedings of the 5th International Conference on Learning Representation (ICLR), Toulon, France. Nicholas Asher. 1993. Reference to abstract objects in English: a philosophical semantics for natural language metaphysics. Studies in Linguistics and Philosophy. Kluwer, Dordrecht. Nicholas Asher and Alex Lascarides. 2003. Logics of conversation. Cambridge University Press. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 4th International Conference on Learning Representations (ICLR), San Diego, California. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1304–1313, New Orleans, Louisiana. Johan Bos, Valerio Basile, Kilian Evang, Noortje Venhuizen, and Johannes Bjerva. 2017. The groningen meaning bank. In Nancy Ide and James Pustejovsky, editors, Handbook of Linguistic Annotation, volume 2, pages 463–496. Springer. Jan Buys and Phil Blunsom. 2017. 
Robust incremental neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1215–1226, Vancouver, Canada. Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning structured natural language representations for semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 44–55, Vancouver, Canada. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, Christine Pao David Pallett, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: the atis-3 corpus. In Proceedings of the workshop on ARPA Human Language Technology, pages 43–48, Plainsboro, New Jersey. Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536–546, Valencia, Spain. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association 6257 for Computational Linguistics (Volume 1: Long Papers), pages 731–742, Melbourne, Australia. Jeffrey Flanigan, Chris Dyer, Noah A Smith, and Jaime Carbonell. 2016. Cmu at semeval-2016 task 8: Graph-based amr parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202– 1206, San Diego, California. William Foland and James H Martin. 2017. Abstract meaning representation parsing using lstm recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 463–472, Vancouver, Canada. Naman Goyal and Jacob Eisenstein. 2016. A joint model of rhetorical discourse structure and summarization. In Proceedings of the Workshop on Structured Prediction for NLP, pages 25–34, Austin, Texas. Association for Computational Linguistics. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22, Berlin, Germany. Hans Kamp. 1981. A theory of truth and semantic representation. In J. A. G. Groenendijk, T. M. V. Janssen, and M. B. J. Stokhof, editors, Formal Methods in the Study of Language, volume 1, pages 277– 322. Mathematisch Centrum, Amsterdam. Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic; An Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Kluwer, Dordrecht. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. 
In Proceedings of the 20th National Conference on Artificial Intelligence, pages 1062– 1068, Pittsburgh, Pennsylvania. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751, Doha, Qatar. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), Banff, Canada. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72, Vancouver, Canada. Tom´aˇs Koˇcisk`y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1078– 1087, Austin, Texas. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1512–1523, Edinburgh, Scotland, UK. Phong Le and Willem Zuidema. 2012. Learning compositional semantics for open domain semantic parsing. In Proceedings of the 24th International Conference on Computational Linguistics (COLING), pages 1535–1552, Mumbai, India. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 590–599, Portland, Oregon. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2018. Discourse representation structure parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 429–439, Melbourne, Australia. Chunchuan Lyu and Ivan Titov. 2018. Amr parsing as graph prediction with latent alignment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 397–407, Melbourne, Australia. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Rik van Noord, Lasha Abzianidze, Hessel Haagsma, and Johan Bos. 2018a. Evaluating scoped meaning representations. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Rik van Noord, Lasha Abzianidze, Antonio Toral, and Johan Bos. 2018b. Exploring neural methods for parsing discourse representation structures. Transactions of the Association for Computational Linguistics, 6:619–633. 6258 Ella Rabinovich, Noam Ordan, and Shuly Wintner. 2017. Found in translation: Reconstructing phylogenetic language trees from translations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 530–540, Vancouver, Canada. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Karin Sim Smith. 2017. On integrating discourse in machine translation. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 110–121, Copenhagen, Denmark. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211, Jeju Island, Korea. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of the Deep Learning Workshop in the 31st International Conference on Machine Learning, volume 37. Yuk Wah Wong and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 960–967, Prague, Czech Republic. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the 13th National Conference on Artificial Intelligence, pages 1050– 1055, Portland, Oregon. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In PProceedings of the 21st Conference in Uncertainty in Artificial Intelligence, pages 658– 666, Edinburgh, Scotland, UK. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1416–1421, Denver, Colorado. Bowei Zou, Guodong Zhou, and Qiaoming Zhu. 2014. Negation focus identification with contextual discourse information. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 522– 530, Baltimore, Maryland. Association for Computational Linguistics. A Constraint-based Inference In this section we provide more formal detail on how our model applies constraint-based inference. In order to guide sequential predictions, we define a State Tracker (ST) equipped with four functions: INITIALIZATION initializes the ST, UPDATE updates the ST according to token y, ISTERMINATED determines whether the ST should terminate, and VALID returns the set of valid candidates in the current state. The state tracker provides an efficient interface for applying constraints during decoding. Sequential inference with the ST is shown in Algorithm 1; θ are model parameters and Y valid k all possible valid predictions at step k. 
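Concretely, the inference loop of Algorithm 1 can be sketched in Python as follows. This is a minimal illustration under assumptions, not the released implementation: model_scores stands in for the decoder distribution P(y_k | y_<k, θ), and state_tracker for any object exposing the four ST functions defined above; both names are placeholders.

def constrained_inference(state_tracker, model_scores):
    # Greedy decoding restricted to the candidates the state tracker allows.
    state_tracker.initialization()
    output = []
    while True:
        valid = state_tracker.valid()              # Y^valid_k for the current state
        scores = model_scores(prefix=output)       # dict: candidate -> P(y_k | y_<k, theta)
        y_k = max(valid, key=lambda cand: scores[cand])
        state_tracker.update(y_k)
        output.append(y_k)
        if state_tracker.is_terminated():          # checked after UPDATE, as in Algorithm 1
            return output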
A.1 Stage 1 Algorithm 2 implements the ST functions for Stage 1; DRS denotes an elementary DRS node, SDRS denotes a segmented DRS node, propSN is short for proposition scoped node, segmSN is short for segment scoped node, simpSN is short for simple scoped node. Function INITIALIZATION (lines 1–4) initializes the ST as an empty stack with a counter. Lines 5–15 implement the function UPDATE, where y is placed on top of the stack if it is not a CompletedSymbol (lines 6–7) and the counter is incremented if y is an elementary DRS node (lines 8–9). The top of the stack is popped if y is a CompletedSymbol (line 12), i.e., the children of the node on top of the stack have been generated, and the stack is updated (line 13). Lines 16–22 implement the function ISTERMINATED. If the stack is empty, decoding in Stage 1 is completed. Function ISTERMINATED is called after function UPDATE has been called at least once (see lines 7–8 in Algorithm 1). Lines 23–63 implement the function VALID, which returns the set of valid candidates Y valid 6259 Algorithm 1 Inference with ST 1: procedure INFERENCE(ST, θ) 2: INITIALIZATION(ST) 3: k = 0 4: repeat 5: Y valid k = VALID(ST) 6: y∗ k = arg maxyk∈Y valid k P(yk|y<k, θ) 7: UPDATE(ST, y∗ k) 8: k = k + 1 9: until ISTERMINATED(ST) 10: return [y∗ 0, ..., y∗ k−1] 11: end procedure in the current state. If the stack is empty, which means that a root of a DRTS should be constructed, Y valid only includes elementary and segmented DRS nodes (lines 24–25). We use top to denote the top node of the stack (line 27). If top is a proposition scoped node or segment scoped node, Y valid includes an elementary and segmented DRS node only if top has no children (lines 29–30), otherwise Y valid includes CompletedSymbol (lines 31–32), showing that the scoped node should be completed with only one elementary or segmented DRS node as a child. The same constraints are applied to unary simple scoped nodes (lines 34–39). Similarly, binary simple scoped nodes should only have two elementary or segmented DRS nodes as children (lines 40–45). If top is an elementary DRS node, Y valid is initialized with the set {CompletedSymbol} (line 47), because it can be completed without any child in Stage 1.12 Furthermore, if the number of elementary DRS nodes is within the threshold MAX DRS, top can have more children, i.e., Y valid includes scoped nodes, except segmented scoped nodes (lines 48–49). If top is a segmented DRS node and has less than two children, Y valid only includes segment scoped nodes (lines 52–53). Furthermore, if the number of elementary DRS nodes is within the threshold MAX DRS, top can have more children, i.e., Y valid includes segmented scoped nodes (lines 55–57). A.2 Stage 2 The ST functions for Stage 2 are shown in Algorithm 3. Lines 1–5 implement the function INITIALIZATION, which initializes ST as a relation counter, a type flag, and a completed flag. The relation counter records the number of relations that have been already constructed. The type flag 12Atomic nodes are constructed in Stage 2. shows the type of nodes, i.e., e for elementary DRS nodes or s for segmented DRS nodes, based on which the relations are constructed. The completed flag checks if the construction is completed. Lines 6–11 implement the function UPDATE. If CompletedSymbol is predicted, the completed flag is set to true, and the completed flag is checked (lines 12–14, function ISTERMINATED). Lines 15–24 implement the function VALID. 
If the number of constructed relations is zero, Y valid only includes R (lines 16–17). If the number of constructed relations is within the threshold MAX RELST.type, it is possible to construct more relations (lines 18–19). If the number of children exceeds the threshold, Y valid only includes CompletedSymbol to complete the construction of relations (lines 20–21). A.3 Stage 3 Algorithm 4 implements the ST functions for Stage 3, where Ve includes entity variables, event variables, state variables, time variables, proposition variables, and constants, and Vs includes segment variables. Lines 1–5 (INITIALIZATION) initialize the ST with a variable counter, a type flag, and a completed flag. The variable counter records the number of variables that have already been constructed. The type flag shows the type of nodes (e for elementary DRS nodes or s for segmented DRS nodes), based on which the variables are constructed. The completed flag checks if the construction is completed. Lines 6–11 implement the function UPDATE. If CompletedSymbol is predicted, the completed flag is set to true and checked (lines 12–14, function ISTERMINATED). Lines 15–28 implement the function VALID. If no variables are constructed, Y valid only includes VST.type (lines 16–17). If only one variable is constructed and ST.type is a segmented DRS, Y valid only includes Vs to construct one more variable because relations in segmented DRS nodes are binary (lines 21–22). If two variables are constructed, Y valid only includes CompletedSymbol (line 25). Note that indices of variables are in increased order. B Example Output We provide example output of our model (DRTS parser, DeepCopy variant) for the GMB document below in Figure 7. European Union energy officials will 6260 hold an emergency meeting next week amid concerns that the RussianUkrainian dispute over natural gas prices could affect EU gas supplies. An EU statement released Friday says the meeting is aimed at finding a common approach. It also expresses the European Commission’s concern about the situation, but says the EU top executive body remains confident an agreement will be reached. A Russian cut-off of supplies to Ukraine will reduce the amount of natural gas flowing through the main pipeline toward Europe. But the commission says there is no risk of a gas shortage in the short term. German officials say they are hoping for a quick resolution to the dispute. Government spokesman, Ulrich Wilhelm says officials have been in contact with both sides at a working level, but will not mediate. 
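Before the full pseudocode listings, the following Python sketch condenses the counter-based pattern shared by the Stage 2 and Stage 3 trackers (Algorithms 3 and 4 below); the Stage 1 tracker (Algorithm 2 below) is stack-based instead. The sketch is an illustration under assumptions: the class name, and the candidate set and threshold passed to the constructor, are placeholders, and Stage 3 additionally forces a second variable for relations of segmented DRS nodes, which this simplified version does not encode.

class CounterStateTracker:
    # Shared pattern of the Stage 2/3 trackers: count the items emitted for one
    # relation/variable sequence and stop once CompletedSymbol is produced.
    COMPLETED = "<completed>"

    def __init__(self, candidates, max_count):
        self.candidates = set(candidates)  # e.g. the relation set R_type or variable set V_type
        self.max_count = max_count         # e.g. MAX_REL_type in Stage 2
        self.count = 0
        self.completed = False

    def update(self, y):
        self.count += 1
        if y == self.COMPLETED:
            self.completed = True

    def is_terminated(self):
        return self.completed

    def valid(self):
        if self.count == 0:
            return self.candidates                     # at least one item must be produced
        if self.count < self.max_count:
            return self.candidates | {self.COMPLETED}  # may continue or stop
        return {self.COMPLETED}                        # must stop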
Algorithm 2 State Tracker for Stage 1
 1: procedure INITIALIZATION(ST)
 2:   ST.stack = []
 3:   ST.count = 0
 4: end procedure
 5: procedure UPDATE(ST, y)
 6:   if y is not CompletedSymbol then
 7:     ST.stack.push(y)
 8:     if y is DRS then
 9:       ST.count += 1
10:     end if
11:   else
12:     ST.stack.pop()
13:     ST.stack.top.childnum += 1
14:   end if
15: end procedure
16: procedure ISTERMINATED(ST)
17:   if ST.stack.empty() then
18:     return True
19:   else
20:     return False
21:   end if
22: end procedure
23: procedure VALID(ST)
24:   if ST.stack.empty() then
25:     Y_valid = {DRS, SDRS}
26:   else
27:     top = ST.stack.top
28:     if top is propSN or segmSN then
29:       if top.childnum = 0 then
30:         Y_valid = {DRS, SDRS}
31:       else
32:         Y_valid = {CompletedSymbol}
33:       end if
34:     else if top is unary simpSN then
35:       if top.childnum = 0 then
36:         Y_valid = {DRS, SDRS}
37:       else
38:         Y_valid = {CompletedSymbol}
39:       end if
40:     else if top is binary simpSN then
41:       if top.childnum ≤ 1 then
42:         Y_valid = {DRS, SDRS}
43:       else
44:         Y_valid = {CompletedSymbol}
45:       end if
46:     else if top is DRS then
47:       Y_valid = {CompletedSymbol}
48:       if ST.count < MAX_DRS then
49:         Y_valid = Y_valid ∪ {propSN, simpSN}
50:       end if
51:     else if top is SDRS then
52:       if top.childnum < 2 then
53:         Y_valid = {segmSN}
54:       else
55:         Y_valid = {CompletedSymbol}
56:         if ST.count < MAX_DRS then
57:           Y_valid = Y_valid ∪ {segmSN}
58:         end if
59:       end if
60:     end if
61:   end if
62:   return Y_valid
63: end procedure
x20) common(s3) approach(x20) find(e6) Agent(e6, x21) Theme(e6, x20) at(e5, e6) now(t1) temp_included(e5, t5) equ(t5, t1) k3 DRS thing(x2) Topic(s4, x22) european(s4) commission(x22) of(x23, x22) concern(x23) situation(x24) about(x23, x24) express(e7) Agent(e7, x2) Theme(e7, x23) now(t1) temp_included(e7, t6) equ(t6, t1) also(e7) DRS k4 thing(x2) say(e8) Cause(e8, x2) Topic(e8, p3) now(t1) temp_included(e8, t7) equ(t7, t1) p3 DRS eu(x25) in(x26, x25) Topic(s5, x26) top(s5) Topic(s6, x26 ) executive(s6) body(x26) remain(e9) Agent(e9, x26) Topic(e9, p4) now(t1) temp_included(e9, t8) equ(t8, t1) p4 DRS confident(e10) Agent(e10, x26) Topic(e10, p5) NEC DRS agreement(x27) reach(e11) Theme(e11, x27) now(t1) Temp_included(e11, t9) Temp_before(t1, t9) k5 DRS NEC DRS russia(x28) of(x29, x28) cut-off(x29) supplies(x30) ukraine(x31) to(x30, x31) of(x29, x30) amount(x32) Topic(s7, x33) natural(s7) gas(x33) of(x32, x33) equ(x32, x34) flow(e12) Theme(e12, x34) Topic(s8, x35) main(s8) pipeline(x35) europe(x36) toward(x35, x36) through(e12, x35) reduce(e13) Cause(e13, x29) Patient(e13, x32) now(t1) temp_included(e13, t10) temp_before(t1, t10) k6 DRS commission(x37) say(e14) Cause(e14, x37) Topic(e14, p5) now(t1) temp_included(e14, t11) equ(t11, t1) but(e14) p5 DRS risk(x38) gas(x39 ) of(x40, x39) shortage(x40 ) Patient(s9, x41) short(s9) term(x41) in(x40, x41) of(x38, x40) be(e15) Agent(e15, x42) Theme(e15, x38) now(t1) temp_included(e15, t12) equ(t12, t1) NOT DRS k7 DRS germany(x43) of(x44, x43) official(x44) say(e16) Cause(e16, x44) Topic(e16, p6) now(t1) temp_included(e16, t13) equ(t13, t1) p6 DRS thing(x2) hope(e17) Theme(e17, x2) Topic(s10, x45) quick(s10) resolution(x45) dispute(x46) to(x45, x46) for(e17, x45) now(t1) equ(x47, t1) temp_includes(t13, x47) temp_overlap(e17, t13) k8 DRS government(x48) for(x49, x48) spokesman(x49) ulrich(x50) equ(x51, x50) wilhelm(x51) rel(x49, x51) say(e18) Cause(e18, x49) Topic(e18, p7) now(t1) temp_included(e18, t14) equ(t14, t1) p7 DRS official(x52) be(e19) Agent(e19, x52) contact(x53) side(x54) with(x53, x54) in(e19, x53) work(e20) Patient(e20, x55) level(x55) at(e19, x55) now(t1) equ(x56, t1) temp_includes(e21, x56) temp_abut(e19, e21) k9 DRS government(x57) for(x58, x57) spokesman(x58) ulrich(x59) equ(x60, x59) wilhelm(x60) rel(x58, x60) NOT DRS NEC DRS mediate(e22) Agent(e22, x58) now(t1) temp_included(e22, t15) temp_before(t1, t15) continuation(k1, k2) continuation(k2, k3) continuation(k3, k4) contrast(k3, k4) continuation(k4, k5) continuation(k5, k6) continuation(k6, k7) continuation(k7, k8) continuation(k8, k9) contrast(k8, k9) Figure 7: Output of DRTS parser (DeepCopy variant) for the document in Section 2.
2019
629
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 654–659 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 654 Know What You Don’t Know: Modeling a Pragmatic Speaker that Refers to Objects of Unknown Categories Sina Zarrieß Faculty of Linguistics and Literary Studies Bielefeld University, Germany [email protected] David Schlangen Linguistics Department University of Potsdam, Germany [email protected] Abstract Zero-shot learning in Language & Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&V aims at pragmatically informative rather than “correct” object descriptions, e.g. in reference games. We combine these lines of research and model zero-shot reference games, where a speaker needs to successfully refer to a novel object in an image. Inspired by models of “rational speech acts”, we extend a neural generator to become a pragmatic speaker reasoning about uncertain object categories. As a result of this reasoning, the generator produces fewer nouns and names of distractor categories as compared to a literal speaker. We show that this conversational strategy for dealing with novel objects often improves communicative success, in terms of resolution accuracy of an automatic listener. 1 Introduction It is commonly agreed that even massive resources for language & vision (Deng et al., 2009; Chen et al., 2015; Krishna et al., 2017) will never fully cover the huge range of objects to be found “in the wild”. This motivates research in zero-shot learning (Lampert et al., 2009; Socher et al., 2013; Hendricks et al., 2016), which aims at predicting correct labels or names for objects of novel categories, typically via external lexical knowledge such as, e.g., word embeddings. More generally, however, uncertain knowledge of the world that surrounds us, including novel objects, is not only a machine learning challenge: it is simply a very common aspect of human communication, as speakers rarely have perfect representations of their environment. Precisely the richness of verbal interaction allows us to communicate these uncertainties and to collaborate towards communicative success (Clark and WilkesGibbs, 1986). Figure 1 illustrates this general refexp: right thingy refexp: left blue Figure 1: RefCOCO expressions referring to difficult/unknown objects point with two examples from the RefCOCO corpus (Yu et al., 2016), providing descriptions of visual objects from an interactive reference game. Here, the use of the unspecific thingy and the omission of a noun in left blue can be seen as pragmatically plausible strategies that avoid confusing the listener with potentially inaccurate names for difficult-to-name objects. While there has been a lot of recent and traditional research on pragmatically informative object descriptions in reference games (Mao et al., 2016; Yu et al., 2017; CohnGordon et al., 2018; Dale and Reiter, 1995; Frank and Goodman, 2012), conversational strategies for dealing with uncertainties like novel categories are largely understudied in computational pragmatics, though see, e.g., work by Fang et al. (2014). In this paper, we frame zero-shot learning as a challenge for pragmatic modeling and explore zero-shot reference games, where a speaker needs to describe a novel-category object in an image to an addressee who may or may not know the category. 
In contrast to standard reference games, this game explicitly targets a situation where relatively common words like object names are likely to be more inaccurate than other words like e.g. attributes. We hypothesize that Bayesian reasoning in the style of Rational Speech Acts, RSA (Frank and Goodman, 2012), can extend a neural generation model trained to refer to objects of known categories towards zero-shot learning. We implement a Bayesian decoder reasoning about categorical uncertainty and show that, solely as a result of pragmatic decoding, our model produces fewer misleading object names when it is uncertain about the category (just as the speakers did in Figure 1). Furthermore, we show that this strategy often improves reference resolution accuracies of an automatic listener.

2 Background

We investigate referring expression generation (REG henceforth), where the goal is to compute an utterance u that identifies a target referent r among other referents R in a visual scene. Research on REG has a long tradition in natural language generation (Krahmer and Van Deemter, 2012), and has recently been re-discovered in the area of Language & Vision (Mao et al., 2016; Yu et al., 2016; Zarrieß and Schlangen, 2018). These latter models for REG essentially implement variants of a standard neural image captioning architecture (Vinyals et al., 2015), combining a CNN and an LSTM to generate an utterance directly from objects marked via bounding boxes in real-world images. Our approach combines such a neural REG model with a reasoning component that is inspired by theory-driven Bayesian pragmatics and RSA (Frank and Goodman, 2012). We will briefly sketch this approach here. The starting point in RSA is a model of a "literal speaker", S0(u|r), which generates utterances u for the target r. The "pragmatic listener" L0 then assigns probabilities to all referents R based on the model S0:

L_0(r|u) \propto \frac{S_0(u|r) \cdot P(r)}{\sum_{r_i \in R} S_0(u|r_i) \cdot P(r_i)}    (1)

In turn, the "pragmatic speaker" S1 reasons about which utterance is more discriminative and will be resolved to the target by the pragmatic listener:

S_1(u|r) \propto \frac{L_0(r|u) \cdot P(u)}{\sum_{u_i \in U} L_0(r|u_i) \cdot P(u_i)}    (2)

(S0 and L0 are components of the recursive reasoning of S1 and not in fact separate agents.) There has been some previous work on leveraging RSA-like reasoning for neural language generation. For instance, Cohn-Gordon et al. (2018) implement the literal speaker as a neural captioning model trained on non-discriminative image descriptions. On top of this neural semantics, they build a pragmatic speaker that produces more discriminative captions, applying Equation 2 at each step of the inference process. They evaluate their model in a reference game where an automatic listener (trained on a different portion of the image data) is used to test whether the generated caption singles out the target image among a range of distractor images. A range of related articles have extended neural captioning models with decoding procedures geared towards vocabulary expansion (Anderson et al., 2017; Agrawal et al., 2018) or contextually discriminative scene descriptions (Andreas and Klein, 2016; Vedantam et al., 2017). Previous work on REG commonly looks at visual scenes with multiple referents of identical or similar categories. Here, speakers typically produce expressions composed of a head noun, which names the category of the target, and a set of attributes, which distinguish the target from distractor referents of the same category (Krahmer and Van Deemter, 2012).
Our work adds an additional dimension of uncertainty to this picture, namely a setting where the category of the target itself might not be known to the model and, hence, cannot be named with reasonable accuracy. In this setting, we expect that a literal speaker (e.g. a neural REG model trained on a restricted set of object categories) generates misleading references, e.g. containing incorrect head nouns, as it has no means of "knowing" which words risk being inaccurate for referring to novel objects. The following Section 3 describes how we modify the RSA approach for reasoning in such a zero-shot reference game.

3 Model

Inspired by the approach in Section 2, we model our pragmatic zero-shot speaker as a neural generator (the literal speaker) that is decoded via a pragmatic listener. In contrast to the listener in Equation (1), however, our listener possesses an additional latent variable C, which reflects its beliefs about the target's category. This hidden belief distribution will, in turn, allow the pragmatic speaker to reason about how accurate the words produced by the literal speaker might be. Our Bayesian listener will assign a probability P(r|u) to a referent r conditioned on the utterance u by the (literal) speaker. To do that, it needs to calculate P(u|r), as in Equation 1. While previous work on RSA typically equates P(u|r) with S0(u|r), we are going to modify the way this probability is calculated. Thus, we assume that our listener has hidden beliefs about the category of the referent, which we can marginalize over as follows:

P(u|r) = \sum_{c_i \in C} P(u, c_i | r)
       = \sum_{c_i \in C} \frac{P(u, c_i, r)}{P(r)}
       = \sum_{c_i \in C} \frac{P(r) \cdot P(c_i|r) \cdot P(u|c_i, r)}{P(r)}
       \propto \sum_{c_i \in C} P(c_i|r) \cdot P(u|c_i)    (3)

As a simplification, we condition u only on c_i, i.e. we use P(u|c_i) instead of P(u|c_i, r). This allows us to estimate P(u|c_i) directly via maximum likelihood on the training data, i.e. in terms of word probabilities conditioned on categories (observed in training). The pragmatic listener is defined as follows:

L_0(r|u) = \frac{P(u|r) \cdot P(r)}{P(u)} \propto \sum_{c_i \in C} P(c_i|r) \cdot P(u|c_i)    (4)

For instance, consider a game with 3 categories and two words: the less specific left with P(u|c_i) = 1/2 for all c_i \in C, and the more specific bus with P(u|c_1) = 9/10, P(u|c_2) = 1/10, P(u|c_3) = 1/10. When the listener is uncertain and predicts P(c_i|r) = 1/3 for all c_i \in C, this yields L0(r|left) = 0.5 and L0(r|bus) = 0.36, meaning that the less specific left will be more likely resolved to the target r. Vice versa, when the listener is more certain, e.g. P(c_1|r) = 9/10, P(c_2|r) = 1/10, P(c_3|r) = 1/10, more specific words will be preferred: L0(r|bus) = 0.83 and L0(r|left) = 0.55. The definition of the pragmatic speaker is straightforward:

S_1(u|r) = S_0(u|r) \cdot L_0(r|u)^{\alpha}    (5)

Intuitively, S1 guides its potentially overoptimistic language model (S0) to be more cautious in producing category-specific words, e.g. nouns. The idea is that the degree to which a word is category-specific and, hence, risky in a zero-shot reference game can be determined from descriptions of objects of known categories, and is expressed in P(u|c). For unknown categories, the pragmatic speaker can deliberately avoid these category-specific words and resort to describing other visual properties like colour or location.1 Similar to Cohn-Gordon et al. (2018), we use incremental, word-level inference to decode the pragmatic speaker model in a greedy fashion:

S^t_1(w|r, u_{t-1}) = S^t_0(w|r, u_{t-1}) \cdot L_0(r|w)^{\alpha + \beta}    (6)

At each time step, we generate the most likely word determined via S0 and L0.
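To make Equations (4)-(6) concrete, the following Python sketch reproduces the toy example above and shows how an uncertain category belief can shift the word-level decision. It is illustrative only: the conditional word probabilities are the toy values from the text, while the S0 step probabilities (and all variable names) are hypothetical stand-ins, not learnt estimates.

def listener_score(word, category_beliefs, word_given_category):
    # L_0(r|w) up to normalization, Equation (4): sum_c P(c|r) * P(w|c)
    return sum(p_c * word_given_category[word][c]
               for c, p_c in category_beliefs.items())

def pragmatic_word_score(word, s0_prob, category_beliefs, word_given_category,
                         alpha=2.0, beta=0.0):
    # S_1^t(w | r, u_{t-1}) up to normalization, Equation (6)
    return s0_prob * listener_score(word, category_beliefs, word_given_category) ** (alpha + beta)

# Toy values from the text: "left" is unspecific, "bus" is specific to c1.
word_given_category = {
    "left": {"c1": 0.5, "c2": 0.5, "c3": 0.5},
    "bus":  {"c1": 0.9, "c2": 0.1, "c3": 0.1},
}
uncertain = {"c1": 1 / 3, "c2": 1 / 3, "c3": 1 / 3}  # maximally uncertain P(c|r)
certain   = {"c1": 0.9, "c2": 0.1, "c3": 0.1}        # confident P(c|r), as in the text

listener_score("left", uncertain)   # 0.50
listener_score("bus", uncertain)    # 1.1/3, i.e. about 0.37 (quoted as 0.36 in the text)
listener_score("left", certain)     # 0.55
listener_score("bus", certain)      # 0.83

# Hypothetical S_0 step probabilities: the literal speaker slightly prefers the noun.
s0 = {"left": 0.45, "bus": 0.55}
{w: pragmatic_word_score(w, p, uncertain, word_given_category) for w, p in s0.items()}
# -> left ~0.113, bus ~0.074: under uncertainty the unspecific word wins.
{w: pragmatic_word_score(w, p, certain, word_given_category) for w, p in s0.items()}
# -> left ~0.136, bus ~0.379: with a confident belief the specific noun wins.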
The parameters α and β will determine the balance between the literal speaker and the listener. While α is simply a constant (set to 2, in our case), β is zero as long as w does not occur in ut−1 and increases when it does occur in ut−1 (it is then set to 2). This ensures that there is a dynamic tradeoff between the speaker and the listener, i.e. for words that occur in previously generated utterance prefix, the language model probabilities (S0) will have comparitively more weight than for new words. 4 Exp. 1: Referring without naming? Section 3 has introduced a model for referring expression generation (REG) in a zero-shot reference game. This model, and its pragmatic decoding component in particular, is designed to avoid words that are specific to categories when there is uncertainty about the category of a target object, in favour of words that are not specific to categories like, e.g., colour or location attributes. In the following evaluation, we will test how this reasoning component actually affects the referring behavior of the pragmatic speaker as compared to the literal speaker, which we implement as neural supervised REG model along the lines of previous work (Mao et al., 2016; Yu et al., 2016). As object names typically express category-specific information in referring expressions, we focus the comparison on the nouns generated in the systems’ output. 4.1 Training Data We conduct experiments on RefCOCO (Yu et al., 2016) referring expressions to objects in MSCOCO (Lin et al., 2014) images. As is commonly done in zero-shot learning, we manually select a range of different categories as targets for our zero-shot game, cf. (Hendricks et al., 2016). Out of the 90 categories 1We leave it for future work to combine this approach with a listener reasoning about distractor objects in the scene (as in Equation 1). 657 in MSCOCO, we select 6 medium-frequent categories (cat,horse,cup,bottle,bus,train), that are similar to those in (Hendricks et al., 2016). For each category, we divide the training set of RefCOCO into a new train-test split such that all images with an instance of the target zero-shot category are moved to the test set. Generation Model (S0) We implement a standard CNN-LSTM model for REG, trained on pairs of image regions and referring expressions. The architecture follows the baseline version of (Yu et al., 2016). We crop images to the target region, and obtain the fc features from VGG (Simonyan and Zisserman, 2014). We set the word embedding layer size to 512, and the hidden state to 1024. We optimized with ADAM, set the batch size to 32 and the learning rate to 0.0004. The number of training epochs is 5 (verified on the RefCOCO validation set). Uncertainty Estimation Similar to previous work in zero-shot learning, we factor out the problem of automatically determining the model’s certainty with respect to an object’s category, cf. (Lampert et al., 2009; Socher et al., 2013): for computing L0(r|u), we set P(ci|r) to be a uniform distribution over categories, meaning that the model is maximally uncertain about the referent’s category. We leave exploration of a more realistic uncertainty or novelty prediction to future work. 4.2 Evaluation Measures We test to what extent our models produces incorrect names for novel objects. First, for each zero-shot category, we define a set of distractor nouns (distr-noun), which correspond to the names of the remaining categories in MSCOCO. 
Any choice of noun from that set would be wrong, as the categories are pairwise disjunct; the exploration of other nouns (e.g. thingy, animal) is left for future work. In Table 1, “% distr-noun” refers to how many expressions generated for an instance of a zero-shot category contain such an incorrect distractor noun. Second, we count how many generated expressions do not contain any noun (no-noun) at all, according to the NLTK POS tagger. Results Table 1 shows that the proportion of output expressions containing a distractor noun decreases markedly from S0 to S1, whereas the proportion of expression without any name increases Model % distr-noun % no-noun cat S0 0.606 0.107 S1 0.484 0.193 horse S0 0.683 0.085 S1 0.572 0.30 cup S0 0.627 0.079 S1 0.332 0.172 bottle S0 0.398 0.275 S1 0.166 0.562 bus S0 0.743 0.066 S1 0.612 0.247 train S0 0.759 0.166 S1 0.558 0.37 Table 1: Names and nouns contained in generation output for two speakers (S0, S1) Target (unknown cat): left horse S0: left person  S1: left black  Figure 2: Qualitative Example markedly from S0 to S1. First of all, this suggests that our baseline model S0 does, in many cases, not know what it does not know, i.e. it is not aware that it encounters a novel category and frequently generates names of known categories encountered during training. However, even in this simple model, we find a certain portion of output expressions that do not contain any name (e.g. 27% for bottle, but only 6% for bus). The results also confirm our hypothesis that the pragmatic speaker S1 avoids to produce “risky” or specific words that are likely to be confused for uncertain or unknown categories. It is worth stressing here that this behaviour results entirely from the Bayesian reasoning that S1 uses in decoding; the model does not have explicit knowledge of linguistic categories like nouns, names or other taxonomic knowledge. 5 Exp. 2: Communicative success The Experiment in Section 4 found that the pragmatic speaker uses less category-specific vocabulary when referring to objects of novel categories as compared to a literal speaker. Now, we need to establish whether the resulting utterances still achieve communicative success in the zero-shot reference game, despite using less specific vocab658 Zero-shot category Similar category cat dog, cow horse dog, cow cup bowl, bottle, wine glass bottle vase, wine glass bus car, train, truck train car, bus, truck Table 2: Target and distractor categories used for testing in Exp. 2 ulary (as shown above). We test this automatically using a model of a “competent” listener, that knows the respective object categories. This is supposed to approximate a conversation between a system and a human that has more elaborate knowledge of the world than the system. The evaluation listener One pitfall of using a trained listener model (instead of a human) for task-oriented evaluation is that this model might simply make the same mistakes as the speaker model as it is trained on similar data. To avoid this circularity, Cohn-Gordon et al. (2018) train their listener on a different subset of the image data. Rather than training on different data, we opt for training the listener on better data, as we want it to be as strict and human-like as possible. For instance, we do not want our listener model to resolve an expression like the brown cat to a dog. We train Seval as a neural speaker on the entire training set and give Leval access to ground-truth object categories. 
The ground-truth category cr of a referent r is used to calculate P(nu|cr) where nu is the object name contained in the utterance u. P(nu|cr) is estimated on the entire training set. Leval(r|u, cr) = Seval(u|r) ∗P(nu|cr) (7) P(nu|cr) will be close to zero if the utterance contains a rare or wrong name for the category cr, and Leval will then assign a very low probability to this referent. We apply this listener to all referents in the scene and take the argmax. Test set The set TS-image pairs each target with other (annotated!) objects in the same image, a typical set-up for reference resolution.As many images in RefCOCO only have distractors of the same category as the target (which is not ideal for our purposes), we randomly sample an additional test set called TS-distractors, pairing zeroModel TS-image TS-distractors cat S0 0.516 0.343 S1 0.603 0.386 horse S0 0.644 0.096 S1 0.589 0.150 cup S0 0.721 0.483 S1 0.674 0.540 bottle S0 0.502 0.275 S1 0.517 0.306 bus S0 0.789 0.405 S1 0.759 0.361 train S0 0.658 0.202 S1 0.667 0.305 Table 3: Reference resolution accuracies obtained from listener Leval on expressions by S0, S1 shot targets with 4 distractors of a similar category, which we defined manually, shown in Table 2. This is slightly artificial as objects are taken out of the coherent spatial context, but it helps us determining whether our model can successfully refer in a context with similar, but not identical, categories. Results As shown in Table 3, the S1 model improves the resolution accuracy for all categories on TS-distractors, except for bus. On TS-image, resolution accuracies are generally much higher and the comparison between S0 and S1 gives mixed results. We take this as positive evidence that S1 improves communicative success in a relevant number of cases, but it also indicates that combining this model with the more standard RSA approach could be promising. Figure 2 shows a qualitative example for S1 being more successful than S0. 6 Conclusion We have presented a pragmatic approach to modeling zero-shot reference games, showing that Bayesian reasoning inspired by RSA can help decoding a neural generator that refers to novel objects. The decoder is based on a pragmatic listener that has hidden beliefs about a referent’s category, which leads the pragmatic speaker to use fewer nouns when being uncertain about this category. While some aspects of the experimental setting are, admittedly, simplified (e.g. compilation of an artificial test set, uncertainty estimation), we believe that this is an encouraging result for scaling models in computational pragmatics to realworld conversation and its complexities. 659 References Harsh Agrawal, Karan Desai, Xinlei Chen, Rishabh Jain, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. 2018. nocaps: novel object captioning at scale. arXiv preprint arXiv:1812.08658. Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 936–945, Copenhagen, Denmark. Association for Computational Linguistics. Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1173–1182, Austin, Texas. Association for Computational Linguistics. 
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22:1– 39. Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically informative image captioning with character-level inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 439–443. Association for Computational Linguistics. Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233–263. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. FeiFei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09. Rui Fang, Malcolm Doering, and Joyce Y Chai. 2014. Collaborative models for referring expression generation in situated dialogue. In Twenty-Eighth AAAI Conference on Artificial Intelligence. Michael C Frank and Noah D Goodman. 2012. Predicting pragmatic reasoning in language games. Science, 336(6084):998–998. Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, and Trevor Darrell. 2016. Deep compositional captioning: Describing novel object categories without paired training data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–10. Emiel Krahmer and Kees Van Deemter. 2012. Computational generation of referring expressions: A survey. Computational Linguistics, 38(1):173–218. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. 2009. Learning to detect unseen object classes by between-class attribute transfer. In IEEE Computer Vision and Pattern Recognition, pages 951–958. IEEE. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C.Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision – ECCV 2014, volume 8693, pages 740–755. Springer International Publishing. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In CVPR 2016. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in neural information processing systems, pages 935–943. Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In Computer Vision and Pattern Recognition (CVPR), volume 3. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition. Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. 2016. 
Modeling Context in Referring Expressions, pages 69–85. Springer International Publishing, Cham. Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2017. A joint speakerlistener-reinforcer model for referring expressions. In Computer Vision and Pattern Recognition (CVPR), volume 2. Sina Zarrieß and David Schlangen. 2018. Decoding strategies for neural referring expression generation. Proceedings of INLG 2018.
2019
63
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6263–6273 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6263 Inducing Document Structure for Aspect-based Summarization Lea Frermann Amazon Research [email protected] Alexandre Klementiev Amazon Research [email protected] Abstract Automatic summarization is typically treated as a 1-to-1 mapping from document to summary. Documents such as news articles, however, are structured and often cover multiple topics or aspects; and readers may be interested in only some of them. We tackle the task of aspect-based summarization, where, given a document and a target aspect, our models generate a summary centered around the aspect. We induce latent document structure jointly with an abstractive summarization objective, and train our models in a scalable synthetic setup. In addition to improvements in summarization over topic-agnostic baselines, we demonstrate the benefit of the learnt document structure: we show that our models (a) learn to accurately segment documents by aspect; (b) can leverage the structure to produce both abstractive and extractive aspectbased summaries; and (c) that structure is particularly advantageous for summarizing long documents. All results transfer from synthetic training documents to natural news articles from CNN/Daily Mail and RCV1. 1 Introduction Abstractive summarization systems typically treat documents as unstructured, and generate a single generic summary per document (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017). In this work we argue that incorporating document structure into abstractive summarization systems is beneficial for at least three reasons. First, the induced structure increases model interpretability, and can be leveraged for other purposes such as document segmentation. Second, structure-aware models help alleviate performance bottlenecks associated with summarization of long documents by learning to focus only on the segments relevant to the topic of interest. Third, they can adapt more flexibly to demands of a user who, faced with a long document or a document collection, might be interested only in some of its topics. For example given a set of reviews of a smartphone, one user might be interested in a summary of opinions on battery life while another may care more about its camera quality; or, given a news article about a body builder running for governor, a reader might care about the effect on his sports career, or on the political consequences (cf., Figure 1 (bottom) for another example). Throughout this paper, we will refer to such topics or perspectives collectively as aspects. We develop models for aspect-based summarization: given a document and a target aspect, our systems generate a summary specific to the aspect. We extend recent neural models (See et al., 2017) for abstractive summarization making the following contributions: • We propose and compare models for aspectbased summarization incorporating different aspect-driven attention mechanisms in both the encoder and the decoder. • We propose a scalable synthetic training setup and show that our models generalize from synthetic to natural documents, sidestepping the data sparsity problem and outperforming recent aspect-agnostic summarization models in both cases. 
• We show that our models induce meaningful latent structure, which allows them to generate abstractive and extractive aspect-driven summaries, segment documents by aspect, and generalize to long documents.1 We argue that associating model attention with aspects also improves model interpretability. 1A well-known weakness of encoder-decoder summarization models (Vaswani et al., 2017; Cohan et al., 2018) 6264 Our models are trained on documents paired with aspect-specific summaries. A sizable data set does not exist, and we adopt a scalable, synthetic training setup (Choi, 2000; Krishna and Srinivasan, 2018). We leverage aspect labels (such as news or health) associated with each article in the CNN/Daily Mail dataset (Hermann et al., 2015), and construct synthetic multi-aspect documents by interleaving paragraphs of articles pertaining to different aspects, and pairing them with the original summary of one of the included articles. Although assuming one aspect per source article may seem crude, we demonstrate that our model trained on this data picks up subtle aspect changes within natural news articles. Importantly, our setup requires no supervision such as pre-trained topics (Krishna and Srinivasan, 2018) or aspect-segmentation of documents. A script to reproduce the synthetic data set presented in this paper can be found at https://github.com/ColiLea/ aspect_based_summarization. Our evaluation shows that the generated summaries are more aspect-relevant and meaningful compared to aspect agnostic baselines, as well as a variety of advantages of the inferred latent aspect representations such as accurate document segmentation, that our models produce both extractive and abstractive summaries of high quality, and that they do so for long documents. We also show that our models, trained on synthetic documents, generalize to natural documents from the Reuters and the CNN/Daily Mail corpus, through both automatic and human evaluation. 2 Related Work Aspect-based summarization has previously been considered in the customer feedback domain (Hu and Liu, 2004; Zhuang et al., 2006; Titov and McDonald, 2008; Lu et al., 2009; Zhu et al., 2009), where a typical system discovers a set of relevant aspects (product properties), and extracts sentiment and information along those aspects. In contrast, we induce latent aspect representations under an abstractive summarization objective. Gerani et al. (2016) consider discourse and topical structure to abstractively summarize product reviews using a micro planning pipeline for text generation rather than building on recent advances in end-to-end modeling. Yang et al. (2018) propose an aspect- and sentiment-aware neural summarization model in a multi-task learning setup. Their model is geared towards the product domain and requires document-level category labels, and sentiment- and aspect lexica. In query-based summarization sets of documents are summarized with respect to a natural language input query (Dang, 2005; Daum´e III and Marcu, 2006; Mohamed and Rajasekaran, 2006; Liu et al., 2012; Wang et al., 2014; Baumel et al., 2018). Our systems generate summaries with respect to abstract input aspects (akin to topics in a topic model), whose representations are learnt jointly with the summarization task. We build on neural encoder-decoder architectures with attention (Nallapati et al., 2016; Cheng and Lapata, 2016; Chopra et al., 2016; See et al., 2017; Narayan et al., 2017), and extend the pointer-generator architecture of See et al. 
(2017) to our task of aspect-specific summarization. Narayan et al. (2018) use topic information from a pre-trained LDA topic model to generate ultra-short (single-topic) summaries, by scoring words in their relevance to the overall document. We learn topics jointly within the summarization system, and use them to directly drive summary content selection. Our work is most related to Krishna and Srinivasan (2018) (KS), who concurrently developed models for topic-oriented summarization in the context of artificial documents from the CNN/Daily Mail data. Our work differs from theirs in several important ways. KS use pointergenerator networks directly, whereas we develop novel architectures involving aspect-driven attention mechanisms (Section 3). As such, we can analyze the representations learnt by different attention mechanisms, whereas KS re-purpose attention which was designed with a different objective (coverage). KS use pre-trained topics to pre-select articles from CNN/Daily Mail whose summaries are highly separable in topic space, whereas we do not require such resources nor do we pre-select our data, resulting in a simpler and more realistic setup (Section 4). In addition, our synthetic data set is more complex (ours: 1-4 aspects per document, selected from a set of 6 global aspects; KS: 2 aspects per document, unknown total number of aspects). We extensively evaluate the benefit of latent document structure (Sections 5.1–5.3), and apply our method to human-labeled multi-aspect news documents from the Reuters corpus (Sec6265 tion 5.4). 3 Aspect-specific Summarization In this section we formalize the task of aspectspecific document summarization, and present our models. Given an input document x and a target aspect a, our model produces a summary of x with respect to a such that the summary (i) contains only information relevant to a; and (ii) states this information in a concise way (cf., examples in Figure 1). Our model builds on the pointer-generator networks (PG-net; See et al. (2017)), an encoderdecoder architecture for abstractive summarization. Unlike traditional document summarization, a model for aspect-based summarization needs to include aspects in its input document representation in order to select and compress relevant information. We propose three extensions to PG-net which allow the resulting model to learn to detect aspects. We begin by describing PG-net before we describe our extensions. Our models are trained on documents paired with aspect-specific summaries (cf., Section 4). Importantly, all proposed extensions treat aspect segmentation as latent, and as such learn to segment documents by aspects without exposure to word- or sentence-level aspect labels at train time. Figure 2 visualizes our models. PG-net. PG-net (See et al., 2017) is an encoderdecoder abstractive summarization model, consisting of two recurrent neural networks. The encoder network is a bi-directional LSTM which reads in the article x = {wi}N 1 , token by token, and produces a sequence of hidden states h = {hi}N 1 . This sequence is accessed by the decoder network, also an LSTM, which incrementally produces a summary, by sequentially emitting words. At each step t the decoder produces word yt conditioned on the previously produced word yt−1, its own latent LSTM state st and a time-specific representation of the encoder states h∗ t . 
This time-specific representation is computed through Bahdanau attention (Bahdanau et al., 2015) over the encoder states,

$e^t_i = v^\top \tanh(W_h h_i + W_s s_t + b)$ (1)

$a^t = \mathrm{softmax}(e^t)$ (2)

$h^*_t = \sum_i a^t_i h_i$, (3)

where $v$, $W_h$, $W_s$ and $b$ are model parameters. Given this information, the decoder learns to either generate a word from a fixed vocabulary or copy a word from the input. This procedure is repeated until either the maximum output sequence length is reached, or a special <STOP> symbol is produced.[2]

[2] A coverage mechanism was proposed with PG-net to avoid repetition in the summary. However, in order to minimize interaction with the aspect-attention mechanisms we propose, we do not include it in our models.

Loss. The loss of PG-net, and all proposed extensions, is the average negative log-likelihood of all words in the summary

$L = \frac{1}{T} \sum_{t=1}^{T} -\log P(w_t)$ (4)

3.1 Aspect-aware summarization models Our proposed models embed all words $\{w\} \in x$ into a latent space, shared between the encoder and the decoder. We also embed the input aspect $a$ (a 1-hot indicator) into the same latent space, treating aspects as additional items of the vocabulary. The embedding space is randomly initialized and updated during training.

Decoder aspect attention. As a first extension, we modify the decoder attention mechanism to depend on the target summary aspect $a$ (Figure 2, left). To this end, we learn separate attention weights and biases for each possible input aspect, and use the parameters specific to target-aspect $a$ during decoding, replacing equation (1) with

$e^t_i = v^\top \tanh(W^a_h h_i + W^a_s s_t + b^a)$. (5)

Intuitively, the model can now focus on parts of the input not only conditioned on its current decoder state, but also depending on the aspect the summary should reflect.

Encoder attention. Intuitively, all information about aspects is present in the input, independently of the summarization mechanism, and as such should be accurately reflected in the latent document representation. We formalize this intuition by adding an attention mechanism to the encoder (Figure 2, center). After LSTM encoding, we attend over the LSTM states $h = \{h_i\}_1^N$ conditioned on the target aspect as follows

$\tilde{a}_i = \tanh(W_{\tilde{a}} h_i + b_{\tilde{a}})$ (6)

$a_i = \mathrm{sigmoid}(e_a^\top \tilde{a}_i)$ (7)

$h'_i = a_i h_i$, (8)

where $W_{\tilde{a}}$ and $b_{\tilde{a}}$ are parameters, and $e_a$ is the embedded target aspect. The decoder will now attend over $h'$ instead of $h$ in equations (1)-(3). Intuitively, we calculate a weight for each token-specific latent representation, and scale each latent representation independently by passing the weight through a sigmoid function. Words irrelevant to aspect $a$ should be scaled down by the sigmoid transformation.

Figure 1: Two news articles with color-coded encoder attention-based document segmentations, and selected words for illustration (left), the abridged news article (top right) and associated aspect-specific model summaries (bottom right). Top: Article from our synthetic corpus with aspects sport, tvshowbiz and health. The true boundaries are known, and indicated by black lines in the plot and ∥ in the article. Bottom: Article from the RCV1 corpus with document-level human-labeled aspects sports, news and tvshowbiz (gold segmentation unknown).

Source-factors. Our final extension uses the original PG-net, and modifies its input by treating the target aspect as additional information (factor), which gets appended to our input document (Figure 2, right).[3] We concatenate the aspect embedding $e_a$ to the embedding of each word $w_i \in x$. The target summary aspect, not the word's true aspect (which is latent and unknown), is utilized.

[3] This model most closely resembles the model presented in (Krishna and Srinivasan, 2018), who append 1-hot topic indicators to each word in the input.
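To make the three aspect-conditioning mechanisms above concrete, the following is a minimal NumPy sketch of a single attention/conditioning step. It is an illustration only, not the authors' implementation: the shapes, parameter names, and random toy values are assumptions, and in the real model these parameters are learnt inside the encoder-decoder.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
N, H, E, A = 6, 8, 8, 4              # tokens, hidden size, embedding size, number of aspects

h = rng.standard_normal((N, H))      # encoder states h_1..h_N
s_t = rng.standard_normal(H)         # current decoder state s_t
E_asp = rng.standard_normal((A, E))  # aspect embeddings e_a
a_id = 2                             # index of the target aspect

# (1) Decoder aspect attention: separate attention parameters per aspect (eq. 5).
W_h = rng.standard_normal((A, H, H))
W_s = rng.standard_normal((A, H, H))
b = rng.standard_normal((A, H))
v = rng.standard_normal(H)
e_scores = np.array([v @ np.tanh(W_h[a_id] @ h_i + W_s[a_id] @ s_t + b[a_id]) for h_i in h])
attn = softmax(e_scores)             # attention over input tokens
h_star = attn @ h                    # context vector h*_t

# (2) Encoder attention: sigmoid gate per token, conditioned on the aspect (eqs. 6-8).
W_ga = rng.standard_normal((E, H))
b_ga = rng.standard_normal(E)
tilde_a = np.tanh(h @ W_ga.T + b_ga)                      # N x E
gate = 1.0 / (1.0 + np.exp(-(tilde_a @ E_asp[a_id])))     # one scalar weight per token
h_gated = gate[:, None] * h                               # h'_i, attended over instead of h

# (3) Source factors: append the target-aspect embedding to every word embedding.
word_emb = rng.standard_normal((N, E))
factored_input = np.concatenate([word_emb, np.tile(E_asp[a_id], (N, 1))], axis=1)

print(h_star.shape, h_gated.shape, factored_input.shape)
```

Note that, as in the text, only the source-factor variant changes the input representation itself; the other two leave the input untouched and condition the attention computation on the aspect instead.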
Through the lexical signal from the target summary, we expect the model to learn to up- or downscale the latent token representations, depending on whether they are relevant to target aspect a. Note that this model does not provide us with aspect-driven attention, and as such cannot be used for document segmentation. 4 A Multi-Aspect News Dataset To train and evaluate our models, we require a data set of documents paired with aspect-specific summaries. Several summarization datasets conindicators to each word in the input. 6267 Figure 2: Visualization of our three aspect-aware summarization models, showing the embedded input aspect (red), word embeddings (green), latent encoder and decoder states (blue) and attention mechanisms (dotted arrows). Left: the decoder aspect attention model; Center: the encoder attention model; Right: the source-factors model. sisting of long and multifaceted documents have been proposed recently (Cohan et al., 2018; Liu et al., 2018). These datasets do not include aspectspecific summaries, however, and as such are not applicable to our problem setting. We synthesize a dataset fulfilling our requirements from the CNN/Daily Mail (CNN/DM) dataset (Hermann et al., 2015). Our dataset, MANews, is a set D of data points d = (x, y, a), where x is a multi-aspect document, a is an aspect in d, and y is a summary of x wrt. aspect a. We assemble synthetic multi-aspect documents, leveraging the article-summary pairs from the CNN/DM corpus, as well as the URL associated with each article, which indicates its topic category. We select six categories as our target aspects, optimizing for diversity and sufficient coverage in the CNN/DM corpus: A = { tvshowbiz, travel, health, sciencetech, sports, news}. We then create multi-aspect documents by interleaving paragraphs of documents belonging to different aspects. For each document d, we first sample its number of aspects nd ∼U(1, 4). Then, we sample nd aspects from A without replacement, and randomly draw a document for each aspect from the CNN/DM corpus.4 We randomly interleave paragraphs of the documents, maintaining each input document’s chronological order. Since paragraphs are not marked in the input data, we draw paragraph length between 1 and 5 sentences. The six aspects are roughly uniformly distributed in the resulting dataset, and the distribution of number of aspects per document is slightly skewed towards more aspects.5 Finally, we create nd data points from the resulting document, by pairing the document once with each of its nd components’ reference summaries. 4Train, validation and test documents are assembled from non-overlapping sets of articles. 5# aspects/proportion: 1/0.107, 2/0.203, 3/0.297, 4/0.393 We construct 284,701 documents for training and use 1,000 documents each for validation and test. In order to keep training and evaluation fast, we only consider CNN/DM documents of length 1000 words or less, and restrict the length of assembled MA-News documents to up to 1500 words. Note that the average MA-News article (1350 words) is longer than CNN/DM (770 words), increasing the difficulty of the summarization task, and emphasizing the importance of learning a good segmentation model, which allows the summarizer to focus on relevant parts of the input. We present evidence for this in Section 5.3. 5 Evaluation This section evaluates whether our models generate concise, aspect-relevant summaries for synthetic multi-aspect documents (Section 5.1), as well as natural documents (Sections 5.3, 5.4). 
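As a concrete illustration of the Section 4 construction, here is a minimal sketch of the interleaving procedure under the sampling choices stated above (1-4 aspects per document, paragraphs of 1-5 sentences, chronological order preserved within each source article). The input format, function name, and toy data are hypothetical; the released MA-News script at the URL given above is the authoritative recipe.

```python
import random

def make_ma_news_doc(articles_by_aspect, rng=random.Random(0)):
    """Assemble one synthetic multi-aspect document by interleaving paragraphs.

    articles_by_aspect: dict mapping an aspect label to a list of
    (sentences, summary) pairs, where `sentences` is a list of sentence
    strings (an assumed format). Returns (document, [(aspect, summary), ...]).
    """
    n_d = min(rng.randint(1, 4), len(articles_by_aspect))  # number of aspects, ~U(1, 4)
    chosen = rng.sample(list(articles_by_aspect), n_d)
    picked = {a: rng.choice(articles_by_aspect[a]) for a in chosen}
    remaining = {a: list(picked[a][0]) for a in chosen}    # unused sentences per article

    document = []
    while any(remaining.values()):
        # Emit a 1-5 sentence "paragraph" from an aspect that still has sentences,
        # preserving each source article's own chronological order.
        a = rng.choice([x for x in chosen if remaining[x]])
        para_len = rng.randint(1, 5)
        document.extend(remaining[a][:para_len])
        del remaining[a][:para_len]

    # One training pair per component article: the document paired with that
    # article's reference summary and aspect.
    return document, [(a, picked[a][1]) for a in chosen]


# Toy usage with dummy two-aspect data.
data = {
    "sport":  [(["S1 about sport.", "S2 about sport.", "S3 about sport."], "sport summary")],
    "health": [(["H1 about health.", "H2 about health."], "health summary")],
}
doc, aspect_summary_pairs = make_ma_news_doc(data)
print(doc)
print(aspect_summary_pairs)
```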
We additionally explore the quality of the induced latent aspect structure, by (a) evaluating our models on document segmentation (Section 5.2), and (b) demonstrating the benefit of structure for summarizing long natural documents (Section 5.3). Model parameters. We extend the implementation of pointer-generator networks6, and use their training parameters. We set the maximum encoder steps to 2000 because our interleaved training and test documents are longer on average than the original CNN/DM articles. We use the development set for early stopping. We do not use coverage (See et al., 2017) in any of our models to minimize interaction with the aspect-attention mechanisms. We also evaluated systems trained with all combinations of our three aspect-awareness mechanisms, but we did not observe systematic improvements over the single-mechanism systems. Hence, we will only report results on those. 6https://github.com/abisee/pointer-generator 6268 5.1 Summarization This section evaluates the quality of produced summaries using the Rouge metric (Lin, 2004). Model Comparison. We compare the aspectaware models with decoder aspect attention (decattn), encoder attention (enc-attn), and source factors (sf) we introduced in Section 3.1 against a baseline which extracts a summary as the first three sentences in the article (lead-3). We expect any lead-n baseline to be weaker for aspectspecific summarization than for classical summarization, where the first n sentences typically provide a good generic summary. We also apply the original pointer-generator network (PGnet), which is aspect-agnostic. In addition to the abstractive summarization setup, we also derive extractive summaries from the aspect-based attention distributions of two of our models (encattn-extract and dec-attn-extract). We iteratively extract sentences in the input which received the highest attention until a maximum length of 100 words (same threshold as for abstractive) is reached. Sentence attention as is computed as average word attention aw for words in s: as = 1 |s| P w∈s aw. Finally, as an upper bound, we train our models on the subset of the original CNN/DM documents from which the MA-News documents were created (prefixed with ub-). Table 1 (top) presents results of models trained and tested on the synthetic multi-aspect dataset. All aspect-aware models beat both baselines by a large margin. For classical summarization, the lead-3 baseline remains a challenge to beat even by state-of-the-art systems, and also on multiaspect documents we observe that, unlike our systems, PG-net performs worse than lead-3. Unsurprisingly, the extractive aspect-aware models outperform their abstractive counterparts in terms of ROUGE, and the decoder attention distributions are more amenable to extraction than encoder attention scores. Overall, our structured models enable both abstractive and extractive aspectaware summarization at a quality clearly exceeding structure-agnostic baselines. To assess the impact of the synthetic multiaspect setup, we apply all models to the original CNN/DM documents from which MA-news was assembled (Table 1, bottom). 
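For concreteness, the sketch below shows one way the extractive variants described above (enc-attn-extract and dec-attn-extract) can be derived from token-level attention: score each sentence by its mean word attention and add sentences greedily until the 100-word budget is reached. The function name, input format, and exact stopping rule are interpretations of the description rather than the authors' code.

```python
def extract_summary(sentences, word_attn, max_words=100):
    """Greedy extractive summary from token-level attention scores.

    `sentences` is a list of token lists and `word_attn` a parallel list of
    per-token attention values (an assumed format). A sentence's score is the
    mean attention of its tokens; sentences are added in descending score
    order until the word budget is reached.
    """
    scores = [sum(a) / max(len(a), 1) for a in word_attn]
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen, used = [], 0
    for i in ranked:
        if used >= max_words:
            break
        chosen.append(i)
        used += len(sentences[i])
    return [" ".join(sentences[i]) for i in sorted(chosen)]  # restore document order

# Toy usage with made-up attention values.
sents = [["good", "seats"], ["bad", "food"], ["nice", "crew"]]
attn = [[0.9, 0.8], [0.1, 0.2], [0.5, 0.4]]
print(extract_summary(sents, attn, max_words=4))
```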
Both baselines show a substantial performance boost, suggesting that they are well-suited for general summarization but do not generalize well to aspect-based summarizaRouge 1 Rouge 2 Rouge L lead-3 0.2150 0.0690 0.1410 PG-net 0.1757 0.0472 0.1594 enc-attn 0.2750 0.1027 0.2502 dec-attn 0.2734 0.1005 0.2509 sf 0.2802 0.1046 0.2536 enc-attn-extract 0.3033 0.1092 0.2732 dec-attn-extract 0.3326 0.1379 0.3026 ub-lead-3 0.3836 0.1765 0.2468 ub-PG-net 0.3446 0.1495 0.3159 ub-enc-attn 0.3603 0.1592 0.3282 ub-dec-attn 0.3337 0.1427 0.3039 ub-sf 0.3547 0.1570 0.3262 Table 1: Quantitative comparison (Rouge 1, 2 and L) of models on aspect-specific summarization. tion. The performance of our own models degrades more gracefully. Note that some of our aspect-aware methods outperform the PG-net on natural documents, showing that our models can pick up and leverage their less pronounced structure (compared to synthetic documents) as well. Aspect-based summarization requires models to leverage topical document structure to produce relevant summaries, and as such a baseline focusing on the beginning of the article, which typically summarizes its main content, is no longer viable. 5.2 Segmentation The model attention distribution over the input document, conditioned on a target aspect, allows us to qualitatively inspect the model’s aspect representation, and to derive a document segmentation. Since we know the true aspect segmentations for documents in our synthetic dataset, we can evaluate our models on this task, using all test documents with > 1 aspect (896 in total). We decode each test document multiple times conditioned on each of its aspects, and use the attention distributions over the input document under different target aspects to derive a document segmentation. Figure 1 visualizes induced segmentations of two documents. We omit the source-factor model in this evaluation, because it does not provide us with a latent document representation. For the encoder attention model, we obtain nd attention distributions (one per input aspect), and assign each word the aspect under which it received highest attention. For the decoder aspect attention model, we obtain nd × T attention dis6269 model Pk WD acc w acc s ratio global-max 0.694 0.694 0.138 0.142 10.8 sent-max 0.694 0.694 0.474 0.503 10.8 word-max 0.694 0.694 0.487 0.488 10.8 Considering only aspects ∈input x LDA 0.375 0.789 0.294 0.282 0.722 MNB 0.223 0.594 0.753 0.732 0.553 enc-attn 0.270 0.348 0.793 0.784 0.784 dec-attn 0.285 0.385 0.727 0.780 0.697 Considering all global aspects ∈A LDA 0.590 0.697 0.250 0.204 3.725 MNB 0.268 0.784 0.591 0.564 0.398 enc-attn 0.337 0.482 0.667 0.663 0.580 dec-attn 0.454 0.708 0.385 0.424 0.374 Table 2: Text segmentation results: Segmentation metrics Pk and windiff (WD; lower is better), aspect label accuracies (acc w, acc s), and the ratio of system to summary segments (ratio). Three majority baselines (global-max, word-max, sent-max), and a topic model (LDA) and classification baseline (MNB). The majority baselines assign the same aspect to all words (sentences) in a doc, so that Pk and WD scores are identical. tributions, one for each decoder step t and input aspect. For each aspect we assign each word the maximum attention it received over the T decoder steps.7 Since our gold standard provides us with sentence-level aspect labels, we derive sentencelevel aspect labels as the most prevalent wordlevel aspect in the sentence. Baselines. 
global-max assigns each word to the globally most prevalent aspect in the corpus. A second baseline assigns each word to the document’s most prevalent aspect on word- (wordmax) or sentence level (sent-max). An unsupervised topic model baseline (LDA) is trained on the training portion of our synthetic data set (K = 6; topics were mapped manually to aspects). At decode time, we assign each word its most likely topic and derive sentence labels as the topic assigned to most of its words. Finally, a supervised classification baseline (multinomial naive Bayes; MNB) is trained to classify sentences into aspects. Metrics. We either consider the set of aspects present in a document (Table 2 center) or all possible aspects in the data set (Table 2 bottom). We measure traditional segmentation met7We also experimented with mean instead of max, but observed very similar results. PG-net enc-attn dec-attn src-fct 0.2 0.25 0.3 0.35 Rouge 1 avg truee long truee avg beste long best Figure 3: Models trained on synthetic data evaluated on original CNN/DM documents, of either <1000 words (short) or >2000 words (long). True uses the summary under the document’s true aspect. ‘Best’ takes the bestscoring summary under all possible input aspects. rics Pk (Beeferman et al., 1999) and windiff (WD; Pevzner and Hearst (2002)) (lower is better) which estimate the accuracy of segmentation boundaries, but do not evaluate whether a correct aspect has been assigned to any segment. Hence, we also include aspect label accuracy on the word level (acc w) and sentence level (acc s) (higher is better). We also compute the ratio of the true number of segments to the predicted number of segments (ratio). The attention-aware summarization models outperform all baselines across the board (Table 2). LDA outperforms the most basic global-max baseline, but not the more informed per-document majority baselines. Unsurprisingly, MNB as a supervised model trained specifically to classify sentences performs competitively. Overall, the performance drops when considering the larger set of all six aspects (bottom) compared to only aspects present in the document (between 2 and 4; center). 5.3 Long Documents Accurately encoding long documents is a known challenge for encoder-decoder models. We hypothesize that access to a structured intermediate document representation would help alleviate this issue. To this end, we compare our models against the aspect-agnostic PG-net on natural average and long documents from CNN/DM. All models are trained on the multi-aspect data set. We construct two test datasets: (i) the CNN/DM documents underlying our test set (up to 1000 words; avg), and (ii) CNN/DM documents which are at least 2000 words long (long) and are tagged with one of our target aspects. The total number of average and long documents is 527 and 4560, respectively. Results (Figure 3) confirm that our aspect-aware 6270 rand max LDA MNB enc-attn dec-attn 0.34 0.71 0.40 0.53 0.75 0.37 Table 3: Sentence labelling accuracy of aspects present in the input article. models indeed degrade more gracefully in performance when applied to long documents, and that the source-factor model (R1=0.236) outperforms the PG (R1=0.226) model by one ROUGE point on long documents (red bars). We finally explore our aspect-aware models on the task of aspect-agnostic summarization, decoding test documents under all possible aspects, and selected the aspect with the highest-scoring summary in terms of ROUGE (avg best and long best, respectively). 
In this setup, all our models outperform the PG-baseline by a large margin, both on long and average documents. 5.4 Evaluation on Reuters News Finally, we evaluate our models on documents with multiple gold-annotated aspects, using the Reuters RCV1 dataset (Lewis et al., 2004). Our target aspects sport, health, sciencetech and travel are identically annotated in the Reuters data set. We map the remaining tags tvshowbiz and news to their most relevant Reuters counterparts.8 We obtain 792 document (with average length of 12.2 sentences), which were labeled with two or more aspects. Figure 1 (bottom) shows an example of generated summaries for a multi-aspect Reuters document. Automatic evaluation. We evaluate how well our models recover aspects actually present in the documents. We use the approach described in Section 5.2 to assign aspects to sentences in a document, then collect all of the aspects we discover in each document. We compare aspect to document assignment accuracy against two baselines, one assigning random aspects to sentences (rand), and one always assigning the globally most prominent aspect in the corpus (max). Note that we do not include PG-net or the source-factor model because neither can assign aspects to input tokens. Table 3 shows that the encoder attention model outperforms all other systems and both baselines. 8tvshowbiz →fashion, biographies personalities people, art culture entertainment news → disasters accidents, crime lawenforcement, international relations model acc diversity fluency info lead-2 0.540 0.127 1.930 1.647 enc-attn 0.543 0.177∗ 1.567 1.317 enc-attn ex 0.436 0.129 1.924 1.367 dec-attn 0.553∗ 0.197∗ 1.447 1.277 dec-attn ex 0.440 0.151 1.889 1.448 sf 0.553 0.133 1.667 1.433 Table 4: Human evaluation: aspect label accuracy (acc), aspect label diversity for two summaries (diversity), and fluency and informativeness (info) scores. Systems performing significantly better than the lead-2 baseline are marked with a ∗(p < 0.05, paired t-test; Dror et al. (2018)). The global majority baseline shows that the gold aspect distribution in the RCV1 corpus is peaked (the most frequent aspect, news, occurs in about 70% of the test documents), and majority class assignment leads to a strong baseline. Human evaluation. We measure the quality and aspect diversity in aspect-specific summaries of RCV1 articles through human evaluation, using Amazon Mechanical Turk. We randomly select a subset of 50 articles with at least two aspects from the Reuters RCV1 data, and present Turkers with a news article and two summaries. We ask the Turkers to (1) select a topic for each summary from the set of six target topics;9; (2) rate the summary with respect to its fluency (0=not fluent, 1=somewhat fluent, 2=very fluent); and (3) analogously rate its informativeness. We evaluate the extractive and abstractive versions of our three aspect aware models. We do not include the original PG-net, because it is incapable of producing distinct, aspect-conditioned summaries for the same document. Like in our automatic summarization evaluation we include a lead baseline. Since the annotators are presented with two summaries for each article, we adopt a lead-2 baseline, and present the first two sentences of a document as a summary each (lead-2). This baseline has two advantages over our systems: first, it extracts summaries as single, complete sentences which are typically semantically coherent units; second, the two sentences (i.e., summaries) do not naturally map to a gold aspect each. 
We consider both mappings, and score the best. Results are displayed in Table 4. As expected, the extractive models score higher on fluency, and 9A random baseline would achieve acc=0.17. 6271 consequently on aspect-agnostic informativeness. Our abstractive models, however, outperform all other systems in terms of aspect-labeling accuracy (acc), and annotators more frequently assign distinct aspects to two summaries of an article (diversity). The results corroborate our conclusion that the proposed aspect-aware summarization models produce summaries aspect-focused summaries with and distinguishable and human interpretable focus. 6 Conclusions This paper presented the task of aspect-based summarization, where a system summarizes a document with respect to a given input aspect of interest. We introduced neural models for abstractive, aspect-driven document summarization. Our models induce latent document structure, to identify aspect-relevant segments of the input document. Treating document structure as latent allows for efficient training with no need for subdocument level topic annotations. The latent document structure is induced jointly with the summarization objective. Sizable datasets of documents paired with aspect-specific summaries do not exist and are expensive to create. We proposed a scalable synthetic training setup, adapting an existing summarization data set to our task. We demonstrated the benefit of document structure aware models for summarization through a diverse set of evaluations. Document structure was shown to be particularly useful for long documents. Evaluation further showed that models trained on synthetic data generalize to natural test documents. An interesting challenge, and open research question, concerns the extent to which synthetic training impacts the overall model generalizability. The aspects considered in this work, as well as the creation process of synthetic data by interleaving documents which are maximally distinct with respect to the target aspects leave room for refinement. Ideas for incorporating more realistic topic structure in artificial documents include leveraging more fine-grained (or hierarchical) topics in the source data; or adopting a more sophisticated selection of article segments to interleave by controlling for confounding factors like author, time period, or general theme.10 We believe that 10E.g., constructing articles about a fixed theme (Barack Obama) from different aspects (politics and showbiz). training models on heuristic, but inexpensive data sets is a valuable approach which opens up exciting opportunities for future research. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Tal Baumel, Matan Eyal, and Michael Elhadad. 2018. Query focused abstractive summarization: Incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models. arXiv preprint arXiv:1801.07704. Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning - Special issue on natural language learning, 34(1-3):177–210. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics. Freddy Y. Y. 
Choi. 2000. Advances in domain independent linear text segmentation. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, NAACL 2000, pages 26–33, Stroudsburg, PA, USA. Association for Computational Linguistics. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98, San Diego, California. Association for Computational Linguistics. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615–621, New Orleans, Louisiana. Association for Computational Linguistics. Hoa T. Dang. 2005. Overview of DUC 2005. In Proceedings of the Document Understanding Conference. Hal Daum´e III and Daniel Marcu. 2006. Bayesian query-focused summarization. In Proceedings of the 21st International Conference on Computational 6272 Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 305–312. Association for Computational Linguistics. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392. Association for Computational Linguistics. Shima Gerani, Giuseppe Carenini, and Raymond T Ng. 2016. Modeling content and structure for abstractive review summarization. Computer Speech & Language. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM. Kundan Krishna and Balaji Vasan Srinivasan. 2018. Generating topic-oriented summaries using neural attention. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1697–1705, New Orleans, Louisiana. Association for Computational Linguistics. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. J. Mach. Learn. Res., 5:361–397. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations. Yan Liu, Sheng-hua Zhong, and Wenjie Li. 2012. 
Query-oriented multi-document summarization via unsupervised deep learning. In AAAI. Yue Lu, ChengXiang Zhai, and Neel Sundaresan. 2009. Rated aspect summarization of short comments. In Proceedings of the 18th International Conference on World Wide Web, WWW ’09, pages 131–140, New York, NY, USA. ACM. Ahmed A. S. Mohamed and Sanguthevar Rajasekaran. 2006. Query-based summarization based on document graphs. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, and Shay B. Cohen. 2017. Neural extractive summarization with side information. CoRR, abs/1704.04530. Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19– 36. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL-08: HLT, pages 308– 316, Columbus, Ohio. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Lu Wang, Hema Raghavan, Claire Cardie, and Vittorio Castelli. 2014. Query-focused opinion summarization for user-generated content. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1660–1669. Min Yang, Qiang Qu, Ying Shen, Qiao Liu, Wei Zhao, and Jia Zhu. 2018. Aspect and sentiment aware abstractive review summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1110–1120, Santa Fe, New Mexico, USA. Association for Computational Linguistics. 6273 Jingbo Zhu, Muhua Zhu, Huizhen Wang, and Benjamin K. Tsou. 2009. Aspect-based sentence segmentation for sentiment summarization. In Proceedings of the 1st International CIKM Workshop on Topic-sentiment Analysis for Mass Opinion, TSA ’09, pages 65–72, New York, NY, USA. ACM. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM international conference on Information and knowledge management, pages 43–50. ACM.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6274–6283 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6274 Incorporating Priors with Feature Attribution on Text Classification Frederick Liu Besim Avci Google {frederickliu, besim}@google.com Abstract Feature attribution methods, proposed recently, help users interpret the predictions of complex models. Our approach integrates feature attributions into the objective function to allow machine learning practitioners to incorporate priors in model building. To demonstrate the effectiveness our technique, we apply it to two tasks: (1) mitigating unintended bias in text classifiers by neutralizing identity terms; (2) improving classifier performance in a scarce data setting by forcing the model to focus on toxic terms. Our approach adds an L2 distance loss between feature attributions and task-specific prior values to the objective. Our experiments show that i) a classifier trained with our technique reduces undesired model biases without a tradeoff on the original task; ii) incorporating priors helps model performance in scarce data settings. 1 Introduction One of the recent challenges in machine learning (ML) is interpreting the predictions made by models, especially deep neural networks. Understanding models is not only beneficial, but necessary for wide-spread adoption of more complex (and potentially more accurate) ML models. From healthcare to financial domains, regulatory agencies mandate entities to provide explanations for their decisions (Goodman and Flaxman, 2016). Hence, most machine learning progress made in those areas is hindered by a lack of model explainability – causing practitioners to resort to simpler, potentially low-performance models. To supply for this demand, there has been many attempts for model interpretation in recent years for tree-based algorithms (Lundberg et al., 2018) and deep learning algorithms (Lundberg and Lee, 2017; Smilkov et al., 2017; Sundararajan et al., 2017; Bach et al., 2015; Kim et al., 2018; Dhurandhar et al., 2018). Method Sentence Probability Baseline I am gay 0.915 I am straight 0.085 Our Method I am gay 0.141 I am straight 0.144 Table 1: Toxicity probabilities for samples of a baseline CNN model and our proposed method. Words are shaded based on their attribution and italicized if attribution is > 0. On the other hand, the amount of research focusing on explainable natural language processing (NLP) models (Li et al., 2016; Murdoch et al., 2018; Lei et al., 2016) is modest as opposed to image explanation techniques. Inherent problems in data emerge in a trained model in several ways. Model explanations can show that the model is not inline with human judgment or domain expertise. A canonical example is model unfairness, which stems from biases in the training data. Fairness in ML models rightfully came under heavy scrutiny in recent years (Zhang et al., 2018a; Dixon et al., 2018; Angwin et al., 2016). Some examples include sentiment analysis models weighing negatively for inputs containing identity terms such as “jew” and “black”, and hate speech classifiers leaning to predict any sentence containing “islam” as toxic (Waseem and Hovy, 2016). If employed, explanation techniques help divulge these issues, but fail to offer a remedy. For instance, the sentence “I am gay” receives a high score on a toxicity model as seen in Table 1. 
The Integrated Gradients (Sundararajan et al., 2017) explanation method attributes the majority of this decision to the word “gay.” However, none of the explanations methods suggest next steps to fix the issue. Instead, researchers try to reduce biases indirectly by mostly adding more data (Dixon et al., 6275 2018; Chen et al., 2018), using unbiased word vectors (Park et al., 2018), or directly optimizing for a fairness proxy with adversarial training (Madras et al., 2018; Zhang et al., 2018a). These methods either offer to collect more data, which is costly in many cases, or make a tradeoff between original task performance and fairness. In this paper, we attempt to enable injecting priors through model explanations to rectify issues in trained models. We demonstrate our approach on two problems in text classification settings: (1) model biases towards protected identity groups; (2) low classification performance due to lack of data. The core idea is to add L2 distance between Path Integrated Gradients attributions for pre-selected tokens and a target attribution value in the objective function as a loss term. For model fairness, we impose the loss on keywords identifying protected groups with target attribution of 0, so the trained model is penalized for attributing model decisions to those keywords. Our main intuition is that undesirable correlations between toxicity labels and instances of identity terms cause the model to learn unfair biases which can be corrected by incorporating priors on these identity terms. Moreover, our approach allows practitioners to impose priors in the other direction to tackle the problem of training a classifier when there is only a small amount of data. As shown in our experiments, by setting a positive target attribution for known toxic words 1, one can improve the performance of a toxicity classifier in a scarce data regime. We validate our approach on the Wikipedia toxic comments dataset (Wulczyn et al., 2017). Our fairness experiments show that the classifiers trained with our method achieve the same performance, if not better, on the original task, while improving AUC and fairness metrics on a synthetic, unbiased dataset. Models trained with our technique also show lower attributions to identity terms on average. Our technique produces much better word vectors as a by-product when compared to the baseline. Lastly, by setting an attribution target of 1 on toxic words, a classifier trained with our objective function achieves better performance when only a subset of the data is present. 1Full list of identity terms and toxic terms used as priors can be found in supplemental material. Please note the toxic terms are not censored. 2 Feature Attribution In this section, we give formal definitions of feature attribution and a primer on [Path] Integrated Gradients (IG), which is the basis for our method. Definition 2.1. Given a function f : Rn → [0, 1] that represents a model, and an input x = (x1, ..., xn) ∈Rn. An attribution of the prediction at input x is a vector a = (a1, ..., an) and ai is defined as the attribution of xi. Feature attribution methods have been studied to understand the contribution of each input feature to the output prediction score. This contribution, then, can further be used to interpret model decisions. Linear models are considered to be more desirable because of their implicit interpretability, where feature attribution is the product of the feature value and the coefficient. 
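As a small illustration of the linear case just mentioned, a linear model's attribution reduces to coefficient times feature value (relative to an all-zero baseline), and the attributions trivially satisfy the completeness property discussed next. The numbers below are made up.

```python
import numpy as np

w = np.array([0.5, -1.2, 2.0])   # hypothetical coefficients of a linear model
x = np.array([1.0, 0.3, -0.5])   # input features
x0 = np.zeros_like(x)            # uninformative all-zero baseline

def f(z):
    return float(w @ z)          # linear model without a bias term, for illustration

attribution = w * (x - x0)       # per-feature attribution: coefficient * feature value
# Completeness: the attributions sum exactly to the change in model output
# between the baseline and the input.
assert np.isclose(attribution.sum(), f(x) - f(x0))
print(attribution)
```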
To some, non-linear models such as gradient boosting trees and neural networks are less favorable due to the fact that they do not enjoy such transparent contribution of each feature and are harder to interpret (Lou et al., 2012). Despite the complexity of these models, prior work has been able to extract attributions with gradient based methods (Smilkov et al., 2017), Shapley values from game theory (SHAP) (Lundberg and Lee, 2017), or other similar methods (Bach et al., 2015; Shrikumar et al., 2017). Some of these attributions methods, for example Path Intergrated Gradients and SHAP, not only follow Definition 2.1, but also satisfy axioms or properties that resemble linear models. One of these axioms is completeness, which postulates that the sum of attributions should be equal to the difference between uncertainty and model output. Integrated Gradients Integrated Gradients (Sundararajan et al., 2017) is a model attribution technique applicable to all models that have differentiable inputs w.r.t. outputs. IG produces feature attributions relative to an uninformative baseline. This baseline input is designed to produce a high-entropy prediction representing uncertainty. IG, then, interpolates the baseline towards the actual input, with the prediction moving from uncertainty to certainty in the process. Building on the notion that the gradient of a function, f, with respect to input can characterize sensitivity of f for each input dimension, IG simply aggregates the gradients of f with respect to the input along this path using a path integral. 6276 The crux of using path integral rather than overall gradient at the input is that f’s gradients might have been saturated around the input and integrating over a path alleviates this phenomenon. Even though there can be infinitely many paths from a baseline to input point, Integrated Gradients takes the straight path between the two. We give the formal definition from the original paper in 2.2. Definition 2.2. Given an input x and baseline x′, the integrated gradient along the ith dimension is defined as follows. IGi(x, x′) ::= (xi −x′i) × Z 1 α=0 ∂f(x′+α×(x−x′)) ∂xi dα (1) where ∂f(x) ∂xi represents the gradient of f along the ith dimension at x. In the NLP setting, x is the concatenated embedding of the input sequence. The attribution of each token is the sum of the attributions of its embedding. There are other explainability methods that attribute a model’s decision to its features, but we chose IG in this framework due to several of its characteristics. First, it is both theoretically justified (Sundararajan et al., 2017) and proven to be effective in NLP-related tasks (Mudrakarta et al., 2018). Second, the IG formula in 2.2 is differentiable everywhere with respect to model parameters. Lastly, it is lightweight in terms of implementation and execution complexity. 3 Incorporating Priors Problems in data manifest themselves in a trained model’s performance on classification or fairness metrics. Traditionally, model deficiencies were addressed by providing priors through extensive feature engineering and collecting more data. Recently, attributions help uncover deficiencies causing models to perform poorly, but do not offer actionability. To this end, we propose to add an extra term to the objective function to penalize the L2 distance between model attributions on certain features and target attribution values. This modification allows model practitioners to inject priors. 
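To make Definition 2.2 concrete, here is a minimal sketch of a Riemann approximation of Equation 1 for a toy scalar-output model. The numerical gradient, toy model, and default step count are illustrative assumptions; in practice the gradients come from the trained network, and the authors report using 50 interpolation steps.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50):
    """Riemann (midpoint) approximation of Equation 1 for a scalar-output f.

    Gradients are estimated numerically here purely for illustration; a real
    implementation would use the model's analytic gradients instead.
    """
    def grad(z, eps=1e-5):
        g = np.zeros_like(z)
        for i in range(len(z)):
            step = np.zeros_like(z)
            step[i] = eps
            g[i] = (f(z + step) - f(z - step)) / (2 * eps)
        return g

    diff = x - baseline
    grad_sum = np.zeros_like(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps          # midpoints along the straight-line path
        grad_sum += grad(baseline + alpha * diff)
    return diff * grad_sum / steps         # (x_i - x'_i) * average gradient

# Toy model: tanh of a weighted sum. The attributions should approximately
# satisfy completeness: their sum is close to f(x) - f(x').
w = np.array([1.0, -2.0, 0.5])
f = lambda z: float(np.tanh(w @ z))
x, x0 = np.array([0.5, 0.1, -0.3]), np.zeros(3)
attributions = integrated_gradients(f, x, x0)
print(attributions, attributions.sum(), f(x) - f(x0))
```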
For example, consider a model that tends to predict every sentence containing “gay” as toxic in a comment moderation system. Penalizing non-zero attributions on the tokens identifying protected groups would force the model to focus more on the context words rather than mere existence of certain tokens. We give the formal definition of the new objective function that incorporates priors as the follows: Definition 3.1. Given a vector t of size n, where n is the length of the input sequence and ti is the attribution target value for the ith token in the input sequence. The prior loss for a scalar output is defined as: Lprior(a, t) = n X i (ai −ti)2 (2) where ai refers to attribution of the ith token as in Definition 2.1. For a multi-class problem, we train our model with the following joint objective, Ljoint = L(y, p) + λ C X c Lprior(ac, tc) (3) where ac and tc are the attribution and attribution target for class c, λ is the hyperparameter that controls the stength of the prior loss and L is the crossentropy loss defined as follows: L(y, p) = C X c −yc log(pc) (4) where y is an indicator vector of the ground truth label and pc is the posterior probability of class c. The joint objective function is differentiable w.r.t. model parameters when attribution is calculated through Equation 1 and can be trained with most off-the-shelf optimizers. The proposed objective is not dataset-dependent and is applicable to different problem settings such as sentiment classification, abuse detection, etc. It only requires users to specify the target attribution value for tokens of interest in the corpus. We illustrate the effectiveness of our method by applying it to a toxic comment classification problem. In the next section, we first show how we set the target attribution value for identity terms to remove unintended biases while retaining the same performance on the original task. Then, using the same technique, we show how to set target attribution for toxic words to improve classifier performance in a scarce data setting. 6277 Identity Base Imp TOK Ours gay .272 .353 -.006 .000 homosexual .085 .388 -.006 -.000 queer .071 .28 -.006 .000 teenage .030 -0.02 -.006 -.001 lesbian .012 .046 -.006 .001 vocab avg -.002 -.001 -.004 -.001 Table 2: Subset of identity terms we used and their mean attribution value on the test set. Method names are abbreviated with the prefix. The last row is the average across all vocabularies. 4 Experiments We incorporate human prior in model building on two applications. First, we tackle the problem of unintended bias in toxic comment classification (Dixon et al., 2018) with our proposed method. For our experiments, we aim to mitigate the issue of neutral sentences with identity terms being classified as toxic for a given a set of identity terms. A subset of the identity terms are listed in the first column of Table 2. Second, we force the model to focus on a list of human-selected toxic terms under scarce data scenario to increase model performance. In the following section, we introduce the dataset we train and evaluate on along with a synthetic dataset to further validate our fairness improvements. After that, we describe our experimental setup. Then, we compare our method to a classifier trained without the prior loss and 2 other baselines. Lastly, we show the results demonstrating usefulness of our approach with data scarcity. 4.1 Dataset In this work, we use a dataset containing comments from Wikipedia Talk Pages (Dixon et al., 2018). 
Number of samples are 95,692 / 32,128 / 31,866 in the train/dev/test sets respectively. The ratio of positive (toxic) labels in the training set is 9.7%. The dataset was annotated by human raters, where toxicity was defined as a “rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion” per Dixon et al. (2018). Please refer to the corresponding paper for more details about collection methodology, biases present in the data, and toxicity distribution per comment length. We also use a synthetically generated dataset to validate our approach on fairness as in Park et al. Template Examples Labels I am Identity Non-toxic Hug Identity Non-toxic Being Identity is wonderful Non-toxic I hate Identity Toxic You are a moronic Identity Toxic Name is a disgusting Identity Toxic Table 3: Example templates used to generate the synthetic dataset for evaluation. (2018); Dixon et al. (2018). The dataset is created using a set of templates, developed by Dixon et al. (2018) 2, where placeholders are replaced with different identity terms. We show a subset of example templates in Table 3 and selected identity terms along with their mean attributions across the test set in Table 2. We mainly evaluate the effectiveness of our debiasing technique on this dataset because the original test sets follow the same biased distribution. Intuition is that predictions returned for sentences containing different identity terms in the exact same context should be similar. Hence, this dataset enables us to quantify the performance of a classifier in more detail when controlled on identity. 4.2 Experimental Setup For the text classifier, we built a convolutional neural network (CNN) classifier as in Kim (2014). The network contains a convolution layer with 128 2-, 3-, 4-gram filters for a sequence length of 100 followed by a max-pooling layer and softmax function. Embeddings were randomly initialized and their size was set to 128. Shorter sequences are padded with <pad> token and longer sequences are truncated. Tokens occurring 5 times or more are retained in the vocabulary. We set dropout as 0.2 and used Adam (Kingma and Ba, 2015) as our optimizer with initial learning rate set to 0.001. We didn’t perform extensive network architecture search to improve the performance as it is a reasonably strong classifier with the initial performance of 95.5% accuracy. The number of interpolating steps for IG is set to 50 (as in the original paper) for calculating Riemann approximation of the integral. Since the output of the binary classification can be reduced to a single scalar output by taking the posterior of the 2https://github.com/conversationai/ unintended-ml-bias-analysis 6278 Whole Dataset Acc F1 AUC FP FN Baseline .955 .728 .948 .010 .035 Importance .957 .739 .953 .009 .034 TOK Replace .939 .607 .904 .014 .047 Our Method .958 .752 .960 .009 .032 Fine-tuned .955 .720 .954 .007 .038 Table 4: Performance on the Wikipedia toxic comment dataset. Columns represent Accuracy, F-1 score, Area Under ROC curve, False Positive, and False Negative. Numbers represent the mean of 5 runs. Maximum variance is .012. positive (toxic) class, the prior is only added to the positive class in equation 3 . We set ti = ( k, if xi ∈I ai, otherwise , (5) where I is the set of selected terms and xi being the i th token in the sequence. For fairness experiments, we set k to be 0 and I to the set of identity terms with the hope that these terms should be as neutral as possible when making predictions. 
The hyperparameter λ is searched in the range (1, 10^8), increasing from 1 by factors of 10 on the dev set, and we pick the value with the best F-1 score. λ is set to 10^6 for the final model. For the data scarcity experiments, we set k to 1 and I to the set of toxic terms, to force the model to make high attributions on these terms. The hyperparameter λ is set to 10^5 across all data-size experiments by tuning on the dev set with the model given 1% of the training data. Each experiment was repeated for 5 runs with 10 epochs, and the best model is selected according to the dev set. Training takes 1 minute for a model with the cross-entropy loss and 30 minutes for a model with the joint loss on an NVidia V100 GPU. However, reducing the number of steps in the Riemann approximation of the IG integral to 10 reduces the training time to 6 minutes. Lastly, training with the joint loss reaches its best performance in later epochs than training with the cross-entropy loss.

Identity       Acc     F1      AUC     FP      FN
Baseline       .931    .692    .910    .011    .057
Importance     .933    .704    .945    .012    .055
TOK Replace    .910    .528    .882    .008    .081
Our Method     .934    .697    .949    .008    .058
Finetuned      .928    .660    .940    .007    .064

Table 5: Performance statistics of all approaches on the Wikipedia dataset, filtered on samples including identity terms. Numbers represent the mean of 5 runs. Maximum variance is .001.

Implementation Decisions. When taking the derivative with respect to the loss, we treat the interpolated embeddings as constants. Thus, the prior loss does not back-propagate to the embedding parameters. There are two reasons for this decision: (i) taking the gradient of the interpolation operation would break the axioms that IG guarantees; (ii) the Hessian of the embedding matrix is slow to compute. This implementation decision does not imply that the prior loss has no effect on the word embeddings, though. During training, the model parameters are updated with respect to both losses; therefore, the word embeddings still have to adjust to the new model parameters, and the embedding parameters themselves are updated through the cross-entropy loss.
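The implementation decision above roughly corresponds to the following Integrated Gradients sketch, in which the interpolation path is detached so that the prior loss does not back-propagate into the embedding parameters; forward_from_embeddings is a hypothetical hook on the classifier that accepts embeddings instead of token ids.

```python
import torch

def integrated_gradients(model, emb, baseline, steps=50, target=1):
    # emb, baseline: (seq_len, dim) input and baseline token embeddings.
    delta = (emb - baseline).detach()          # treated as a constant
    grads = []
    for k in range(1, steps + 1):
        point = (baseline + k / steps * delta).detach().requires_grad_(True)
        score = model.forward_from_embeddings(point.unsqueeze(0))[0, target]
        # create_graph keeps the graph so the prior loss can still update
        # the non-embedding parameters of the model.
        grads.append(torch.autograd.grad(score, point, create_graph=True)[0])
    avg_grad = torch.stack(grads).mean(dim=0)
    return (delta * avg_grad).sum(dim=-1)      # (seq_len,) attributions
```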
4.3 Results on Incorporating Fairness Priors

We compare our work to three models with the same CNN architecture but different training settings:
• Baseline: A classifier trained with the cross-entropy loss.
• Importance: A classifier trained with the cross-entropy loss, but the loss for samples containing identity words is weighted in the range (1, 10^8), where the actual coefficient is determined to be 10 on the dev set based on the F-1 score.
• TOK Replace: A common technique for making models blind to identity terms (Garg et al., 2018). All identity terms are replaced with a special <id> token.

We also explore a different training schedule for cases where a model has already been trained to optimize a classification loss:
• Finetuned: An already-trained classifier is fine-tuned with the joint loss for several epochs. The aim of this experiment is to show that our method is also applicable for tweaking trained models, which could be useful if the original model had been trained for a long time.

Table 6: Top 10 nearest neighbors for the tokens "gay" and "homosexual", and for <id> under TOK Replace. All asterisks are inserted by the authors to replace certain characters.
gay, Baseline: a**hole, f*ck, pathetic, fu*king, fa**ot, bas**rd, cr*p, suck, sh*t, a*s
gay, Our method: <pad>, jus, tweaking, sess, ridiculous, 'do, manhood, dub, heartening, desire
gay, Importance: sh*t, f*cking, b*tch, f*ck, penis, suck, pu**y, d*ckhead, moron, fa**ot
homosexual, Baseline: b*tch, cr*p, f*g, bulls***, dumba*s, sh*t, penis, moron, retard, gay
homosexual, Our method: scorecard, dutchman, 'oh, 678, nitrites, poured, nuts, gubernatorial, convincing, strung
homosexual, Importance: f*ck, b*tch, pu**y, sucks, f*cked, pathetic, c*ck, fart, a**hole, fa**ot
<id>, Tok Replace: 456, messengers, louie, dome, accumulation, ink, usher, wikiepedia, schizophrenics, notables

Synthetic      AUC     FPED    FNED
Baseline       .885    2.77    3.51
Importance     .850    2.90    3.06
TOK Replace    .930    0.00    0.00
Our Method     .952    0.01    0.11
Finetuned      .925    0.00    0.19

Table 7: AUC and bias mitigation metrics on the synthetic dataset. Lower is better for the bias mitigation metrics, which are bounded below by 0. Numbers represent the mean of 5 runs. Maximum variance is 0.013.

4.3.1 Evaluation on Original Data

We first verify that the prior loss term does not adversely affect overall classifier performance on the main task, using general performance metrics such as accuracy and F-1. Results are shown in Table 4. Unlike previous approaches (Park et al., 2018; Dixon et al., 2018; Madras et al., 2018), our method does not degrade classifier performance (it even improves it) in terms of all reported metrics.

We also look at samples containing identity terms. Table 5 shows classifier performance metrics for such samples. The importance weighting approach slightly outperforms the baseline classifier. Replacing identity words with a special token, on the other hand, hurts performance on the main task. One of the reasons might be that replacing all identity terms with a single token potentially removes other useful information the model can rely on. If we were to make an analogy between the token replacement method and hard ablation, then the same analogy can be made between our method and soft ablation. Hence, the information pertaining to identity terms is not completely lost for our method, but this comes at a cost. Results for the fine-tuning experiments show the performance after 2 epochs: the model converges to performance similar to joint training after only 2 epochs, albeit slightly poorer.

4.3.2 Evaluation on Synthetic Data

Now we run our experiments on the template-based synthetic data. As stated, this dataset is used to measure biases in the model since it is unbiased towards identities. We use AUC along with False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED), which measure a proxy of Equality of Odds (Hardt et al., 2016), as in Dixon et al. (2018) and Park et al. (2018). FPED sums the absolute differences between the overall false positive rate and the false positive rate for each identity term; FNED calculates the same for false negatives. Results on this dataset are shown in Table 7. Our method provides a substantial improvement on AUC and almost completely eliminates false positive and false negative inequality across identities. The fine-tuned model also outperforms the baseline in mitigating the bias. The token replacement method comes out as a good baseline for mitigating the bias since it treats all identities the same. The importance weighting approach fails to produce an unbiased model.
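The two equality-difference metrics reduce to a few lines of code; the following is a sketch with our own function name and inputs.

```python
def equality_differences(overall_fpr, overall_fnr, fpr_by_identity, fnr_by_identity):
    # FPED / FNED: sum of absolute gaps between the overall false positive
    # (resp. false negative) rate and the rate on the synthetic examples
    # containing each identity term.
    fped = sum(abs(overall_fpr - r) for r in fpr_by_identity.values())
    fned = sum(abs(overall_fnr - r) for r in fnr_by_identity.values())
    return fped, fned
```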
4.4 Nearest Neighbors of Identity Terms

Models convert input tokens to embeddings before providing them to the convolutional layers. As the embeddings make up the majority of the parameters of the network and can be exported for use in other tasks, we are interested in how they change for the identity terms. We show the 10 nearest neighbors of the terms <id> (for the token replacement method), "gay", and "homosexual", the top two identity terms with the largest mean attribution difference (our method vs. baseline), in Table 6. The word embedding of the term "gay" shifts from having swear words as its neighbors to having the <pad> token as the closest neighbor. Although the term "homosexual" has a lower mean attribution, its neighboring words are still mostly swear words in the baseline embedding space. "homosexual" also moved to more neutral terms that should not play a role in deciding whether a comment is toxic or not. Although the embeddings are not as high quality as one would expect general-purpose word embeddings to be, possibly due to the data size and the model having a different objective, the results show that our method yields inherently unbiased embeddings. It removes the necessity to initialize word embeddings with pre-debiased embeddings as proposed in Bolukbasi et al. (2016). The importance weighting technique penalizes the model at the sentence level instead of focusing on the token level; therefore, the word embedding of "gay" does not seem to shift to neutral words. The token replacement method, on the other hand, replaces the identity terms with a token that is surrounded by neutral words in the embedding space, so it results in a greater improvement on the synthetic dataset. However, since all identity terms are collapsed into one, it is harder for the model to capture the context, and as a result, classification performance on the original dataset drops.

Ratio        1%               5%               10%
Toxic        Base    Ours     Base    Ours     Base    Ours
hell         -.002   .035     .002    .673     .076    .624
moron        -.002   .044     .002    .462     .077    .290
sh*t         -.003   .078     .006    .575     .098    .437
f*ck         -.003   .142     .013    .643     .282    .682
b*tch        -.003   .051     .002    .397     .065    .362

Table 8: Subset of toxic terms used in the experiments and their mean attribution value on the test set for different training sizes.

4.5 Results on Incorporating Priors in Different Training Sizes

We now demonstrate our approach on encouraging higher attributions on toxic words to increase model performance in the scarce-data regime. We down-sample the dataset with different ratios to simulate a data scarcity scenario. To directly validate the effectiveness of the prior loss on attributions, we first show in Table 8 that the attributions of the toxic words have higher values for our method than for the baseline across different data ratios. We also show that the attributions for these terms increase as the training data increases for the baseline method.

Figure 1: Test accuracy for different training sizes (x-axis: training data percentage; y-axis: test accuracy; curves: Baseline, Our method, Rule based). The rule-based method gives a positive prediction if the comment includes any of the toxic terms.

We then show the model performance on the test data for different data-size ratios for the baseline and our method in Figure 1. Our method outperforms the baseline by a big margin at the 1% and 5% ratios. However, the impact of our approach diminishes after adding more data, since the model starts to learn to focus on the toxic words itself for predicting toxicity, without the need for prior injection. We can also see that both the baseline and our method start to catch up with the rule-based approach, where we give a positive prediction if a toxic word is in the sentence, and eventually outperform it.
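The rule-based reference curve in Figure 1 amounts to a one-line classifier; a sketch:

```python
def rule_based_predict(comment_tokens, toxic_terms):
    # Predict toxic (1) iff the comment contains any of the selected toxic terms.
    return int(any(tok in toxic_terms for tok in comment_tokens))

def rule_based_accuracy(comments, labels, toxic_terms):
    preds = [rule_based_predict(toks, toxic_terms) for toks in comments]
    return sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)
```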
5 Discussion and Related Work

For explaining ML models, recent research offers techniques ranging from building inherently interpretable models (Kim et al., 2014), to building a proxy model for explaining a more complex model (Ribeiro et al., 2016; Frosst and Hinton, 2017), to explaining the inner mechanics of mostly uninterpretable neural networks (Sundararajan et al., 2017; Bach et al., 2015). One family of interpretability methods uses the sensitivity of the network with respect to data points (Koh and Liang, 2017) or features (Ribeiro et al., 2016) as a form of explanation. These methods rely on small, local perturbations and check how the network's response changes. Explaining text models has another layer of complexity due to the lack of a proper technique to generate counterfactuals in the form of small perturbations. Hence, interpretability methods tailored for text are quite sparse (Mudrakarta et al., 2018; Jia and Liang, 2017; Murdoch et al., 2018).

On the other hand, there are many papers criticizing the aforementioned methods by questioning their faithfulness, correctness (Adebayo et al., 2018; Kindermans et al., 2017) and usefulness. Smilkov et al. (2017) show that gradient-based methods are susceptible to saturation and can be fooled by adversarial techniques. Another set of papers (Miller, 2019; Gilpin et al., 2018) attacks model explanation papers from a philosophical perspective. However, the actionability angle is often overlooked. Lipton (2018) briefly questions the practical benefit of having model explanations from a practitioner's perspective. There are several works taking advantage of model explanations, namely, using model explanations to aid doctors in diagnosing retinopathy patients (Sayres et al., 2018), and removing minimal features, called pathologies, from neural networks by tuning the model to have high entropy on pathologies (Feng et al., 2018). Ross et al. (2017) propose an idea similar to our approach in that they regularize input gradients to alter the decision boundary of the model to make it more consistent with domain knowledge. However, the input gradients technique has been shown to be an inaccurate explanation technique (Adebayo et al., 2018).

Addressing and mitigating bias in NLP models are paramount tasks, as the effects of these models adversely affect protected subpopulations (Schmidt and Wiegand, 2017). One of the earliest works is Calders and Verwer (2010). Later, Bolukbasi et al. (2016) proposed to unbias word vectors from gender stereotypes. Park et al. (2018) also try to address gender bias in abusive language detection models by debiasing word vectors, augmenting more data, and changing the model architecture. While their results seem to show promise for removing gender bias, their method does not scale to other identity dimensions such as race and religion. Dixon et al. (2018) highlight the bias in toxic comment classifier models originating from the dataset. They also supplement the training dataset with Wikipedia articles to shift the positive class imbalance for sentences containing identity terms towards the dataset average. Similarly, their approach alleviates the issue to a certain extent, but does not scale to similar problems, as their augmentation technique is too data-specific.
Also, both methods trade original task accuracy for fairness, while our method does not. Lastly, there are several works (Davidson et al., 2017; Zhang et al., 2018b) offering methodologies or datasets to evaluate models for unintended bias, but they fail to offer a general framework.

One of the main reasons our approach improves the model on the original task is that the model is now more robust, thanks to the reinforcement provided by the model builder through attributions. From a fairness angle, our technique shares similarities with adversarial training (Zhang et al., 2018a; Madras et al., 2018) in asking the model to optimize for an additional objective that transitively unbiases the classifier. However, those approaches work to remove protected attributes from the representation layer, which is unstable. Our approach, on the other hand, works with basic human-interpretable units of information: tokens. Also, those approaches propose to sacrifice main task performance for fairness as well.

While our method enables model builders to inject priors to aid a model, it has several limitations. In solving the fairness problem in question, it causes the classifier not to focus on the identity terms even in cases where an identity term itself is being used as an insult. Moreover, our approach requires the prior terms to be manually provided, which bears resemblance to blacklist approaches and suffers from the same drawbacks. Lastly, the evaluation methodology that we and previous papers (Dixon et al., 2018; Park et al., 2018) rely on is based on a synthetically generated dataset, which may contain biases of the individuals creating it.

6 Conclusion and Future Work

In this paper, we proposed actionability of model explanations, enabling ML practitioners to enforce priors on their model. We apply this technique to model fairness in toxic comment classification. Our method incorporates Path Integrated Gradients attributions into the objective function with the aim of stopping the classifier from carrying along false positive bias from the data, by punishing it when it focuses on identity words. Our experiments indicate that models trained jointly with the cross-entropy and prior losses do not suffer a performance drop on the original task, while achieving better performance in fairness metrics on the template-based dataset. Applying model attribution as a fine-tuning step on a trained classifier makes it converge to a more debiased classifier in just a few epochs. Additionally, we show that the model can also be forced to focus on pre-determined tokens.

There are several avenues we can explore as future research. Our technique can be applied to build a more robust model by penalizing attributions falling outside of tokens annotated to be relevant to the predicted class. Another avenue is to incorporate different model attribution strategies, such as DeepLRP (Bach et al., 2015), into the objective function. Finally, it would be worthwhile to invest in a technique to extract problematic terms from the model automatically rather than providing prescribed identity or toxic terms.

Acknowledgments

We thank Salem Haykal, Ankur Taly, Diego Garcia-Olano, Raz Mathias, and Mukund Sundararajan for their valuable feedback and insightful discussions.

References

Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Proceedings of NeurIPS.

Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016.
Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica.

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, Wojciech Samek, and Oscar Deniz Suarez. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE.

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of NIPS.

Toon Calders and Sicco Verwer. 2010. Three naive Bayes approaches for discrimination-free classification. In Proceedings of Data Mining and Knowledge Discovery, Hingham, MA, USA. Kluwer Academic Publishers.

Irene Chen, Fredrik D. Johansson, and David Sontag. 2018. Why is my classifier discriminatory? In Proceedings of NeurIPS.

Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of ICWSM.

Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, and Payel Das. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Proceedings of NeurIPS.

Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of AIES.

Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of EMNLP.

Nicholas Frosst and Geoffrey E. Hinton. 2017. Distilling a neural network into a soft decision tree. arXiv, 1711.09784.

Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed Huai hsin Chi, and Alex Beutel. 2018. Counterfactual fairness in text classification through robustness. In Proceedings of AIES.

Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In Proceedings of DSAA.

Bryce Goodman and Seth Flaxman. 2016. European Union regulations on algorithmic decision-making and a right to explanation. In Proceedings of the ICML Workshop on Human Interpretability in Machine Learning.

Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. In Proceedings of NIPS.

Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of EMNLP.

Been Kim, Cynthia Rudin, and Julie Shah. 2014. The Bayesian case model: A generative approach for case-based reasoning and prototype classification. In Proceedings of NIPS.

Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda B. Viégas, and Rory Sayres. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In Proceedings of ICML.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP.

Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. 2017. The (un)reliability of saliency methods. In Proceedings of the NIPS Workshop on Explaining and Visualizing Deep Learning.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.

Pang Wei Koh and Percy Liang.
2017. Understanding black-box predictions via influence functions. In Proceedings of ICML.

Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of EMNLP.

Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of NAACL-HLT.

Zachary C. Lipton. 2018. The mythos of model interpretability. In Queue, New York, NY, USA. ACM.

Yin Lou, Rich Caruana, and Johannes Gehrke. 2012. Intelligible models for classification and regression. In Proceedings of KDD.

Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. 2018. Consistent individualized feature attribution for tree ensembles. In Proceedings of KDD.

Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of NIPS.

David Madras, Elliot Creager, Toniann Pitassi, and Richard S. Zemel. 2018. Learning adversarially fair and transferable representations. In Proceedings of ICML.

Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. In Proceedings of Artificial Intelligence.

Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the model understand the question? In Proceedings of ACL.

W. James Murdoch, Peter J. Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposition to extract interactions from LSTMs. In Proceedings of ICLR.

Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of EMNLP.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of KDD.

Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In Proceedings of IJCAI, IJCAI'17, pages 2662–2670. AAAI Press.

Rory Sayres, Ankur Taly, Ehsan Rahimy, Katy Blumer, David Coz, Naama Hammel, Jonathan Krause, Arunachalam Narayanaswamy, Zahra Rastegar, Derek Wu, Shawn Xu, Scott Barb, Anthony Joseph, Michael Shumski, Jesse Smith, Arjun B. Sood, Greg S. Corrado, Lily Peng, and Dale R. Webster. 2018. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. In Proceedings of American Academy of Ophthalmology.

Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the International Workshop on Natural Language Processing for Social Media.

Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In Proceedings of ICML.

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. 2017. SmoothGrad: Removing noise by adding noise. In Proceedings of the ICML Workshop on Visualization for Deep Learning.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of ICML.

Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop.

Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of WWW.

Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018a. Mitigating unwanted biases with adversarial learning. In Proceedings of AIES.
Ziqi Zhang, David Robinson, and Jonathan A. Tepper. 2018b. Detecting hate speech on Twitter using a convolution-GRU based deep neural network. In Proceedings of ESWC.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6284–6294, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Matching Article Pairs with Graphical Decomposition and Convolutions

Bang Liu†, Di Niu†, Haojie Wei‡, Jinghong Lin‡, Yancheng He‡, Kunfeng Lai‡, Yu Xu‡
†University of Alberta, Edmonton, AB, Canada
{bang3, dniu}@ualberta.ca
‡Platform and Content Group, Tencent, Shenzhen, China
{fayewei, daphnelin, collinhe, calvinlai, henrysxu}@tencent.com

Abstract

Identifying the relationship between two articles, e.g., whether two articles published by different sources describe the same breaking news, is critical to many document understanding tasks. Existing approaches for modeling and matching sentence pairs do not perform well in matching longer documents, which embody more complex interactions between the enclosed entities than a sentence does. To model article pairs, we propose the Concept Interaction Graph to represent an article as a graph of concepts. We then match a pair of articles by comparing the sentences that enclose the same concept vertex through a series of encoding techniques, and aggregate the matching signals through a graph convolutional network. To facilitate the evaluation of long article matching, we have created two datasets, each consisting of about 30K pairs of breaking news articles covering diverse topics in the open domain. Extensive evaluations of the proposed methods on the two datasets demonstrate significant improvements over a wide range of state-of-the-art methods for natural language matching.

1 Introduction

Identifying the relationship between a pair of articles is an essential natural language understanding task, which is critical to news systems and search engines. For example, a news system needs to cluster various articles on the Internet reporting the same breaking news (probably with different wording and narratives), remove redundancy and form storylines (Shahaf et al., 2013; Liu et al., 2017; Zhou et al., 2015; Vossen et al., 2015; Bruggermann et al., 2016). The rich semantic and logical structures in longer documents make matching a pair of articles a different and more challenging task than matching a pair of sentences or a query-document pair in information retrieval.

Traditional term-based matching approaches estimate the semantic distance between a pair of text objects via unsupervised metrics, e.g., via TF-IDF vectors, BM25 (Robertson et al., 2009), LDA (Blei et al., 2003) and so forth. These methods have achieved success in query-document matching, information retrieval and search. In recent years, a wide variety of deep neural network models have also been proposed for text matching (Hu et al., 2014; Qiu and Huang, 2015; Wan et al., 2016; Pang et al., 2016), which can capture the semantic dependencies (especially sequential dependencies) in natural language through layers of recurrent or convolutional neural networks. However, existing deep models are mainly designed for matching sentence pairs, e.g., for paraphrase identification or answer selection in question answering, and omit the complex interactions among keywords, entities or sentences that are present in a longer article. Therefore, article pair matching remains under-explored in spite of its importance.
In this paper, we apply the divide-and-conquer philosophy to matching a pair of articles and bring deep text understanding from the currently dominant sequential modeling of language elements to a new level of graphical document representation, which is more suitable for longer articles. Specifically, we have made the following contributions:

First, we propose the so-called Concept Interaction Graph (CIG) to represent a document as a weighted graph of concepts, where each concept vertex is either a keyword or a set of tightly connected keywords. The sentences in the article associated with each concept serve as the features for local comparison to the same concept appearing in another article. Furthermore, two concept vertices in an article are also connected by a weighted edge which indicates their interaction strength. The CIG does not only capture the essential semantic units in a document but also offers a way to perform anchored comparison between two articles along the common concepts found.

Second, we propose a divide-and-conquer framework to match a pair of articles based on the constructed CIGs and graph convolutional networks (GCNs). The idea is that for each concept vertex that appears in both articles, we first obtain the local matching vectors through a range of text pair encoding schemes, including both neural encoding and term-based encoding. We then aggregate the local matching vectors into the final matching result through graph convolutional layers (Kipf and Welling, 2016; Defferrard et al., 2016). In contrast to RNN-based sequential modeling, our model factorizes the matching process into local matching sub-problems on a graph, each focusing on a different concept, and, by using GCN layers, generates matching results based on a holistic view of the entire graph.

Although there exist many datasets for sentence matching, semantic matching between longer articles is a largely unexplored area. To the best of our knowledge, to date, there does not exist a labeled public dataset for long document matching. To facilitate evaluation and further research on document and especially news article matching, we have created two labeled datasets¹, one annotating whether two news articles found on the Internet (from different media sources) report the same breaking news event, and the other annotating whether they belong to the same news story (yet not necessarily reporting the same breaking news event). These articles were collected from major Internet news providers in China, including Tencent, Sina, WeChat, Sohu, etc., covering diverse topics, and were labeled by professional editors. Note that, similar to most other natural language matching models, all the approaches proposed in this paper can easily work on other languages as well.

Through extensive experiments, we show that our proposed algorithms have achieved significant improvements on matching news article pairs, as compared to a wide range of state-of-the-art methods, including both term-based and deep text matching algorithms. With the same encoding or term-based feature representation of a pair of articles, our approach based on graphical decomposition and convolutions can improve the classification accuracy by 17.31% and 23.09% on the two datasets, respectively.

¹ Our code and datasets are available at: https://github.com/BangLiu/ArticlePairMatching

Figure 1: An example showing a piece of text and its Concept Interaction Graph representation. Text: [1] Rick asks Morty to travel with him in the universe. [2] Morty doesn't want to go as Rick always brings him dangerous experiences. [3] However, the destination of this journey is the Candy Planet, which is a fascinating place that attracts Morty. [4] The planet is full of delicious candies. [5] Summer wishes to travel with Rick. [6] However, Rick doesn't like to travel with Summer. Concept vertices: (Rick, Morty) with sentences [1, 2]; (Rick, Summer) with sentences [5, 6]; (Morty, Candy Planet) with sentences [3, 4].
2 Concept Interaction Graph

In this section, we present our Concept Interaction Graph (CIG) to represent a document as an undirected weighted graph, which decomposes a document into subsets of sentences, each subset focusing on a different concept. Given a document D, a CIG is a graph G_D, where each vertex in G_D is called a concept, which is a keyword or a set of highly correlated keywords in document D. Each sentence in D will be attached to the single concept vertex that it is most related to, which most frequently is the concept the sentence mentions. Hence, vertices will have their own sentence sets, which are disjoint. The weight of the edge between a pair of concepts denotes how much the two concepts are related to each other and can be determined in various ways.

As an example, Fig. 1 illustrates how we convert a document into a Concept Interaction Graph. We can extract the keywords Rick, Morty, Summer, and Candy Planet from the document using standard keyword extraction algorithms, e.g., TextRank (Mihalcea and Tarau, 2004). These keywords are further clustered into three concepts, where each concept is a subset of highly correlated keywords. After grouping keywords into concepts, we attach each sentence in the document to its most related concept vertex. For example, in Fig. 1, sentences 1 and 2 are mainly talking about the relationship between Rick and Morty, and are thus attached to the concept (Rick, Morty). Other sentences are attached to vertices in a similar way. The attachment of sentences to concepts naturally dissects the original document into multiple disjoint sentence subsets. As a result, we have represented the original document with a graph of key concepts, each with a sentence subset, as well as the interaction topology among them.

Figure 2: An overview of our approach for constructing the Concept Interaction Graph (CIG) from a pair of documents and classifying it by Graph Convolutional Networks. The pipeline consists of (a) Representation: construct a KeyGraph by word co-occurrence, detect concepts by community detection, assign sentences by similarities, and get edge weights by vertex similarities; (b) Encoding: a Siamese encoder and a term-based feature extractor produce vertex features; (c) Transformation: GCN layers; (d) Aggregation: an aggregation layer followed by classification.

Fig. 2 (a) illustrates the construction of CIGs for a pair of documents aligned by the discovered concepts. Here we first describe the detailed steps to construct a CIG for a single document:
KeyGraph Construction. Given a document D, we first extract the named entities and keywords by TextRank (Mihalcea and Tarau, 2004). After that, we construct a keyword co-occurrence graph, called KeyGraph, based on the set of found keywords. Each keyword is a vertex in the KeyGraph. We connect two keywords by an edge if they co-occur in the same sentence. We could further improve our model by performing co-reference resolution and synonym analysis to merge keywords with the same meaning; however, we do not apply these operations due to their time complexity.

Concept Detection (Optional). The structure of the KeyGraph reveals the connections between keywords. If a subset of keywords is highly correlated, they will form a densely connected subgraph in the KeyGraph, which we call a concept. Concepts can be extracted by applying community detection algorithms on the constructed KeyGraph. Community detection is able to split a KeyGraph G_key into a set of communities C = {C_1, C_2, ..., C_|C|}, where each community C_i contains the keywords for a certain concept. By using overlapping community detection, each keyword may appear in multiple concepts. As the number of concepts in different documents varies a lot, we utilize the betweenness centrality score based algorithm (Sayyadi and Raschid, 2013) to detect keyword communities in the KeyGraph. Note that this step is optional, i.e., we can also use each keyword directly as a concept. The benefit brought by concept detection is that it reduces the number of vertices in a graph and speeds up matching, as will be shown in Sec. 4.

Sentence Attachment. After the concepts are discovered, the next step is to group sentences by concepts. We calculate the cosine similarity between each sentence and each concept, where sentences and concepts are represented by TF-IDF vectors. We assign each sentence to the concept that is most similar to it. Sentences that do not match any concepts in the document are attached to a dummy vertex that does not contain any keywords.

Edge Construction. To construct edges that reveal the correlations between different concepts, for each vertex, we represent its sentence set as the concatenation of the sentences attached to it, and calculate the edge weight between any two vertices as the TF-IDF similarity between their sentence sets. Although edge weights may be decided in other ways, our experience shows that constructing edges by TF-IDF similarity generates a CIG that is more densely connected.

When performing article pair matching, the above steps are applied to a pair of documents D_A and D_B, as is shown in Fig. 2 (a). The only additional step is that we align the CIGs of the two articles by their concept vertices, and for each common concept vertex, merge the sentence sets from D_A and D_B for local comparison.

3 Article Pair Matching through Graph Convolutions

Given the merged CIG G_AB of two documents D_A and D_B described in Sec. 2, we match a pair of articles in a "divide-and-conquer" manner by matching the sentence sets from D_A and D_B associated with each concept and aggregating the local matching results into a final result through multiple graph convolutional layers. Our approach overcomes the limitation of previous text matching algorithms by extending text representation from a sequential (or grid) point of view to a graphical view, and can therefore better capture the rich semantic interactions in longer text.
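To make the CIG construction steps in Sec. 2 concrete, here is a rough sketch of sentence attachment and edge construction for a single document, assuming keyword extraction and (optional) concept grouping have already been done; the use of scikit-learn for the TF-IDF vectors is our choice, and the dummy vertex for unmatched sentences is omitted for brevity.

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_cig(sentences, concepts):
    # sentences: list of sentence strings
    # concepts:  list of concepts, each a list of keywords
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(sentences + [" ".join(c) for c in concepts])
    sent_vecs, concept_vecs = tfidf[:len(sentences)], tfidf[len(sentences):]

    # Attach each sentence to its most similar concept vertex.
    sims = cosine_similarity(sent_vecs, concept_vecs)
    vertices = {i: [] for i in range(len(concepts))}
    for s_idx, row in enumerate(sims):
        vertices[int(row.argmax())].append(sentences[s_idx])

    # Edge weight = TF-IDF similarity between the concatenated sentence sets.
    vertex_vecs = vec.transform([" ".join(vertices[i]) for i in range(len(concepts))])
    edges = {(i, j): float(cosine_similarity(vertex_vecs[i], vertex_vecs[j])[0, 0])
             for i, j in combinations(range(len(concepts)), 2)}
    return vertices, edges
```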
Fig. 2 illustrates the overall architecture of our proposed method, which consists of four steps: (a) representing a pair of documents by a single merged CIG, (b) learning multi-viewed matching features for each concept vertex, (c) structurally transforming the local matching features by graph convolutional layers, and (d) aggregating the local matching features to get the final result. Steps (b)-(d) can be trained end-to-end.

Encoding Local Matching Vectors. Given the merged CIG G_AB, our first step is to learn an appropriate matching vector of a fixed length for each individual concept v ∈ G_AB to express the semantic similarity between S_A(v) and S_B(v), the sentence sets of concept v from documents D_A and D_B, respectively. This way, the matching of two documents is converted to matching the pair of sentence sets on each vertex of G_AB. Specifically, we generate local matching vectors based on both neural networks and term-based techniques.

Siamese Encoder: we apply a Siamese neural network encoder (Neculoiu et al., 2016) onto each vertex v ∈ G_AB to convert the word embeddings (Mikolov et al., 2013) of {S_A(v), S_B(v)} into a fixed-sized hidden feature vector m_AB(v), which we call the match vector. We use a Siamese structure to take S_A(v) and S_B(v) (which are two sequences of word embeddings) as inputs, and encode them into two context vectors through context layers that share the same weights, as shown in Fig. 2 (b). The context layer usually contains one or multiple bidirectional LSTM (BiLSTM) or CNN layers with max pooling, aiming to capture the contextual information in S_A(v) and S_B(v). Let c_A(v) and c_B(v) denote the context vectors obtained for S_A(v) and S_B(v), respectively. Then, the matching vector m_AB(v) for vertex v is given by the subsequent aggregation layer, which concatenates the element-wise absolute difference and the element-wise multiplication of the two context vectors, i.e.,

$$m_{AB}(v) = \big(|c_A(v) - c_B(v)|,\; c_A(v) \circ c_B(v)\big), \quad (1)$$

where ∘ denotes the Hadamard product.

Term-based Similarities: we also generate another matching vector for each v by directly calculating term-based similarities between S_A(v) and S_B(v), based on 5 metrics: the TF-IDF cosine similarity, TF cosine similarity, BM25 cosine similarity, Jaccard similarity of 1-grams, and the Ochiai similarity measure. These similarity scores are concatenated into another matching vector m'_AB(v) for v, as shown in Fig. 2 (b).
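A minimal sketch of the Siamese vertex encoder and the aggregation in Equation 1, using a 1-D CNN with max pooling as the context layer; the layer sizes here are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SiameseMatcher(nn.Module):
    """Per-vertex Siamese encoder with the Eq. 1 aggregation."""

    def __init__(self, emb_dim=64, hidden=32):
        super().__init__()
        self.context = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))

    def encode(self, x):                      # x: (batch, seq_len, emb_dim)
        return self.context(x.transpose(1, 2)).squeeze(-1)   # (batch, hidden)

    def forward(self, sa, sb):                # word embeddings of S_A(v), S_B(v)
        ca, cb = self.encode(sa), self.encode(sb)             # shared weights
        return torch.cat([(ca - cb).abs(), ca * cb], dim=-1)  # Equation 1
```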
Matching Aggregation via GCN. The local matching vectors must be aggregated into a final matching score for the pair of articles. We propose to utilize the ability of Graph Convolutional Network (GCN) filters (Kipf and Welling, 2016) to capture the patterns exhibited in the CIG G_AB at multiple scales. In general, the input to the GCN is a graph G = (V, E) with N vertices v_i ∈ V, and edges e_ij = (v_i, v_j) ∈ E with weights w_ij. The input also contains a vertex feature matrix denoted by X = {x_i}_{i=1}^N, where x_i is the feature vector of vertex v_i. For a pair of documents D_A and D_B, we input their CIG G_AB (with N vertices), with a (concatenated) matching vector on each vertex, into the GCN, such that the feature vector of vertex v_i in the GCN is given by x_i = (m_AB(v_i), m'_AB(v_i)).

Now let us briefly describe the GCN layers (Kipf and Welling, 2016) used in Fig. 2 (c). Denote the weighted adjacency matrix of the graph as A ∈ R^{N×N}, where A_ij = w_ij (in a CIG, this is the TF-IDF similarity between vertices i and j). Let D be a diagonal matrix such that D_ii = Σ_j A_ij. The input layer to the GCN is H^(0) = X, which contains the original vertex features. Let H^(l) ∈ R^{N×M_l} denote the matrix of hidden representations of the vertices in the l-th layer. Then each GCN layer applies the following graph convolutional filter onto the previous hidden representations:

$$H^{(l+1)} = \sigma\big(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\big), \quad (2)$$

where $\tilde{A} = A + I_N$, I_N is the identity matrix, and $\tilde{D}$ is a diagonal matrix such that $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$; they are the adjacency matrix and the degree matrix of graph G (with self-loops added), respectively. W^(l) is the trainable weight matrix in the l-th layer, and σ(·) denotes an activation function such as the sigmoid or ReLU function. Such a graph convolutional rule is motivated by the first-order approximation of localized spectral filters on graphs (Kipf and Welling, 2016) and, when applied recursively, can extract interaction patterns among vertices.

Finally, the hidden representations in the final GCN layer are merged into a single vector (called a graphically merged matching vector) of a fixed length, denoted by m_AB, by taking the mean of the hidden vectors of all vertices in the last layer. The final matching score is computed based on m_AB through a classification network, e.g., a multi-layered perceptron (MLP).

In addition to the graphically merged matching vector m_AB described above, we may also append other global matching features to m_AB to expand the feature set. These additional global features can be calculated, e.g., by encoding the two documents directly with state-of-the-art language models like BERT (Devlin et al., 2018) or by directly computing their term-based similarities. However, we show in Sec. 4 that such global features can hardly bring any more benefit to our scheme, as the graphically merged matching vectors are already sufficiently expressive in our problem.
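The propagation rule in Equation 2 can be sketched as follows (dense matrices for clarity; the ReLU choice and sizes are illustrative):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One layer of Equation 2: H' = sigma(D~^-1/2 A~ D~^-1/2 H W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, H, A):
        # A: (N, N) weighted adjacency of the merged CIG (TF-IDF edge weights)
        A_tilde = A + torch.eye(A.size(0))                 # add self-loops
        d_inv_sqrt = torch.diag(A_tilde.sum(dim=1).pow(-0.5))
        A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt          # normalized adjacency
        return torch.relu(A_hat @ self.linear(H))          # sigma = ReLU here
```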
4 Evaluation

Tasks. We evaluate the proposed approach on the task of identifying whether a pair of news articles report the same breaking news (or event) and whether they belong to the same series of news story, which is motivated by a real-world news app. In fact, the proposed article pair matching schemes have been deployed in the anonymous news app for news clustering, with more than 110 million daily active users.

Note that traditional methods for document clustering include unsupervised text clustering and text classification into predefined topics. However, a large number of breaking news articles emerge on the Internet every day with their topics/themes unknown, so it is not possible to predefine their topics; thus, supervised text classification cannot be used here. It is even impossible to determine how many news clusters there are. Therefore, the task of classifying whether two news articles report the same breaking news event or belong to the same story is critical to news apps and search engines for clustering, redundancy removal and topic summarization.

In our task, an "event" refers to a piece of breaking news on which multiple media sources may publish articles with different narratives and wording. Furthermore, a "story" consists of a series of logically related breaking news events. It is worth noting that our objective is fundamentally different from the traditional event coreference literature, e.g., (Bejan and Harabagiu, 2010; Lee et al., 2013, 2012) or SemEval-2018 Task 5 (Counting Events) (Postma et al., 2018), where the task is to detect all the events (or, in fact, "actions" like shootings or car crashes) a document mentions. In contrast, although a news article may mention multiple entities and even previous physical events, the "event" in our dataset always refers to the breaking news that the article intends to report or the incident that triggers the media's coverage, and our task is to identify whether two articles intend to report the same breaking news. For example, two articles "University of California system libraries break off negotiations with Elsevier, will no longer order their journals" and "University of California Boycotts Publishing Giant Elsevier" from two different sources are apparently intended to report the same breaking news event of UC dropping its subscription to Elsevier, although other actions may be peripherally mentioned in these articles, e.g., "eight months of unsuccessful negotiations." In addition, we do not attempt to perform reading comprehension question answering tasks either, e.g., finding out how many killing incidents or car crashes there are in a year (SemEval-2018 Task 5 (Postma et al., 2018)).

Dataset    Pos Samples    Neg Samples    Train    Dev     Test
CNSE       12865          16198          17438    5813    5812
CNSS       16887          16616          20102    6701    6700

Table 1: Description of the evaluation datasets.

As a typical example, Fig. 3 shows the events contained in the story "2016 U.S. presidential election", where each tag shows a breaking news event possibly reported by multiple articles with different narratives (articles not shown here). We group highly coherent events together. For example, there are multiple events about Election television debates. One of our objectives is to identify whether two news articles report the same event, e.g., a yes when they are both reporting Trump and Hilary's first television debate, though with different wording, or a no when one article is reporting Trump and Hilary's second television debate while the other is talking about Donald Trump being elected president.

Figure 3: The events contained in the story "2016 U.S. presidential election". Event groups shown include: Presidential candidates; Hilary's "mail door"; Hilary's health condition; Trump's speech about contempt for woman; Election television debates; Trump avoid tax; Voting for new president.

Datasets. To the best of our knowledge, there is no publicly available dataset for long document matching tasks. We created two datasets: the Chinese News Same Event dataset (CNSE) and the Chinese News Same Story dataset (CNSS), which are labeled by professional editors.
They contain long Chinese news articles collected from major Internet news providers in China, covering diverse topics in the open domain. The CNSE dataset contains 29,063 pairs of news articles with labels representing whether a pair of news articles report the same breaking news event. Similarly, the CNSS dataset contains 33,503 pairs of articles with labels representing whether two documents fall into the same news story. The average number of words over all documents in the datasets is 734 and the maximum is 21,791.

In our datasets, we only labeled the major event (or story) that a news article is reporting, since in the real world, each breaking news article on the Internet is intended to report some specific breaking news that has just happened to attract clicks and views. Our objective is to determine whether two news articles intend to report the same breaking news. Note that the negative samples in the two datasets are not randomly generated: we select document pairs that contain similar keywords, and exclude samples with TF-IDF similarity below a certain threshold. The datasets have been made publicly available for research purposes.

Table 1 shows a detailed breakdown of the two datasets. For both datasets, we use 60% of all the samples as the training set, 20% as the development (validation) set, and the remaining 20% as the test set. We carefully ensure that different splits do not contain any overlaps to avoid data leakage. The metrics used for performance evaluation are the accuracy and F1 scores of the binary classification results. For each evaluated method, we perform training for 10 epochs and then choose the epoch with the best validation performance to be evaluated on the test set.

Baselines. We test the following baselines:
• Matching by representation-focused or interaction-focused deep neural network models: DSSM (Huang et al., 2013), C-DSSM (Shen et al., 2014), DUET (Mitra et al., 2017), MatchPyramid (Pang et al., 2016), ARC-I (Hu et al., 2014), ARC-II (Hu et al., 2014). We use the implementations from MatchZoo (Fan et al., 2017) for the evaluation of these models.
• Matching by term-based similarities: BM25 (Robertson et al., 2009), LDA (Blei et al., 2003) and SimNet (which extracts the five text-pair similarities mentioned in Sec. 3 and classifies with a multi-layer feed-forward neural network; see the sketch below).
• Matching by a large-scale pre-trained language model: BERT (Devlin et al., 2018).

Note that we focus on the capability of long text matching. Therefore, we do not use any short text information, such as titles, in our approach or in any baselines. In fact, the "relationship" between two documents is not limited to "whether they report the same event or not". Our algorithm is able to identify a general relationship between documents, e.g., whether two episodes are from the same season of a TV series. The definition of the relationship (e.g., same event/story, same chapter of a book) is solely defined and supervised by the labeled training data. For these tasks, the availability of other information such as titles cannot be assumed.

As shown in Table 2, we evaluate different variants of our own model to show the effect of different sub-modules. In model names, "CIG" means that we directly use keywords as concepts without community detection, whereas "CIGcd" means that each concept vertex in the CIG contains a set of keywords grouped via community detection.
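The five term-based similarity features used by SimNet and by the "Sim" vertex encoder can be computed roughly as follows; the exact tokenization, IDF estimation and BM25 parameters (k1, b) are our assumptions.

```python
import math
from collections import Counter

def _cosine(u, v):
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def term_features(tokens_a, tokens_b, idf, avg_len, k1=1.2, b=0.75):
    # tokens_a / tokens_b: token lists of the two sentence sets;
    # idf: corpus-level token -> IDF weight; avg_len: average document length.
    tf_a, tf_b = Counter(tokens_a), Counter(tokens_b)

    def tfidf(tf):
        return {w: c * idf.get(w, 0.0) for w, c in tf.items()}

    def bm25(tf, length):   # BM25-weighted term vector
        return {w: idf.get(w, 0.0) * c * (k1 + 1)
                / (c + k1 * (1 - b + b * length / avg_len)) for w, c in tf.items()}

    set_a, set_b = set(tokens_a), set(tokens_b)
    jaccard = len(set_a & set_b) / len(set_a | set_b) if set_a | set_b else 0.0
    ochiai = (len(set_a & set_b) / math.sqrt(len(set_a) * len(set_b))
              if set_a and set_b else 0.0)
    return [_cosine(tfidf(tf_a), tfidf(tf_b)),                       # TF-IDF cosine
            _cosine(dict(tf_a), dict(tf_b)),                         # TF cosine
            _cosine(bm25(tf_a, len(tokens_a)), bm25(tf_b, len(tokens_b))),  # BM25 cosine
            jaccard,                                                 # 1-gram Jaccard
            ochiai]                                                  # Ochiai
```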
CIG-Siam-GCN 74.58 73.69 78.91 80.72 III. DUET 55.63 51.94 52.33 60.67 XIII. CIGcd-Siam-GCN 73.25 73.10 76.23 76.94 IV. DSSM 58.08 64.68 61.09 70.58 XIV. CIG-Sim 72.58 71.91 75.16 77.27 V. C-DSSM 60.17 48.57 52.96 56.75 XV. CIG-Sim-GCN 83.35 80.96 87.12 87.57 VI. MatchPyramid 66.36 54.01 62.52 64.56 XVI. CIGcd-Sim-GCN 81.33 78.88 86.67 87.00 VII. BM25 69.63 66.60 67.77 70.40 XVII. CIG-Sim&Siam-GCN 84.64 82.75 89.77 90.07 VIII. LDA 63.81 62.44 62.98 69.11 XVIII. CIG-Sim&Siam-GCN-Simg 84.21 82.46 90.03 90.29 IX. SimNet 71.05 69.26 70.78 74.50 XIX. CIG-Sim&Siam-GCN-BERTg 84.68 82.60 89.56 89.97 X. BERT fine-tuning 81.30 79.20 86.64 87.08 XX. CIG-Sim&Siam-GCN-Simg&BERTg 84.61 82.59 89.47 89.71 Table 2: Accuracy and F1-score results of different algorithms on CNSE and CNSS datasets. means that each concept vertex in the CIG contains a set of keywords grouped via community detection. To generate the matching vector on each vertex, “Siam” indicates the use of Siamese encoder, while “Sim” indicates the use of term-based similarity encoder, as shown in Fig. 2. “GCN” means that we convolve the local matching vectors on vertices through GCN layers. Finally, “BERTg” or “Simg” indicates the use of additional global features given by BERT or the five termbased similarity metrics mentioned in Sec. 3, appended to the graphically merged matching vector mAB, for final classification. Implementation Details. We use Stanford CoreNLP (Manning et al., 2014) for word segmentation (on Chinese text) and named entity recognition. For Concept Interaction Graph construction with community detection, we set the minimum community size (number of keywords contained in a concept vertex) to be 2, and the maximum size to be 6. Our neural network model consists of word embedding layer, Siamese encoding layer, Graph transformation layers, and classification layer. For embedding, we load the pre-trained word vectors and fix it during training. The embeddings of out of vocabulary words are set to be zero vectors. For the Siamese encoding network, we use 1-D convolution with number of filters 32, followed by an ReLU layer and Max Pooling layer. For graph transformation, we utilize 2 layers of GCN (Kipf and Welling, 2016) for experiments on the CNSS dataset, and 3 layers of GCN for experiments on the CNSE dataset. When the vertex encoder is the five-dimensional features, we set the output size of GCN layers to be 16. When the vertex encoder is the Siamese network encoder, we set the output size of GCN layers to be 128 except the last layer. For the last GCN layer, the output size is always set to be 16. For the classification module, it consists of a linear layer with output size 16, an ReLU layer, a second linear layer, and finally a Sigmoid layer. Note that this classification module is also used for the baseline method SimNet. As we mentioned in Sec. 1, our code and datasets have been open sourced. We implement our model using PyTorch 1.0 (Paszke et al., 2017). The experiments without BERT are carried out on an MacBook Pro with a 2 GHz Intel Core i7 processor and 8 GB memory. We use L2 weight decay on all the trainable variables, with parameter λ = 3 × 10−7. The dropout rate between every two layers is 0.1. We apply gradient clipping with maximum gradient norm 5.0. We use the ADAM optimizer (Kingma and Ba, 2014) with β1 = 0.8, β2 = 0.999, ϵ = 108. 
We use a learning rate warm-up scheme with an inverse exponential increase from 0.0 to 0.001 in the first 1000 steps, and then maintain a constant learning rate for the remainder of training. For all the experiments, we set the maximum number of training epochs to be 10. 4.1 Results and Analysis Table 2 summarizes the performance of all the compared methods on both datasets. Our model achieves the best performance on both two datasets and significantly outperforms all other methods. This can be attributed to two reasons. First, as the input of article pairs are re-organized into Concept Interaction Graphs, the two documents are aligned along the corresponding semantic units for easier concept-wise comparison. Second, our model encodes local comparisons around different semantic units into local matching vectors, and aggregate them via graph convolutions, taking semantic topologies into consideration. Therefore, it solves the problem of matching documents via divide-and-conquer, which is suitable for handling long text. 6291 Impact of Graphical Decomposition. Comparing method XI with methods I-VI in Table 2, they all use the same word vectors and use neural networks for text encoding. The key difference is that our method XI compares a pair of articles over a CIG in per-vertex decomposed fashion. We can see that the performance of method XI is significantly better than methods I-VI. Similarly, comparing our method XIV with methods VII-IX, they all use the same term-based similarities. However, our method achieves significantly better performance by using graphical decomposition. Therefore, we conclude that graphical decomposition can greatly improve long text matching performance. Note that the deep text matching models IVI lead to bad performance, because they were invented mainly for sequence matching and can hardly capture meaningful semantic interactions in article pairs. When the text is long, it is hard to get an appropriate context vector representation for matching. For interaction-focused neural network models, most of the interactions between words in two long articles will be meaningless. Impact of Graph Convolutions. Compare methods XII and XI, and compare methods XV and XIV. We can see that incorporating GCN layers has significantly improved the performance on both datasets. Each GCN layer updates the hidden vector of each vertex by integrating the vectors from its neighboring vertices. Thus, the GCN layers learn to graphically aggregate local matching features into a final result. Impact of Community Detection. By comparing methods XIII and XII, and comparing methods XVI and XV, we observe that using community detection, such that each concept is a set of correlated keywords instead of a single keyword, leads to slightly worse performance. This is reasonable, as using each keyword directly as a concept vertex provides more anchor points for article comparison . However, community detection can group highly coherent keywords together and reduces the average size of CIGs from 30 to 13 vertices. This helps to reduce the total training and testing time of our models by as much as 55%. Therefore, one may choose whether to apply community detection to trade accuracy off for speedups. Impact of Multi-viewed Matching. Comparing methods XVII and XV, we can see that the concatenation of different graphical matching vectors (both term-based and Siamese encoded features) can further improve performance. This demonstrates the advantage of combining multiviewed matching vectors. 
Impact of Added Global Features. Comparing methods XVIII, XIX, XX with method XVII, we can see that adding more global features, such as global similarities (Simg) and/or global BERT encodings (BERTg) of the article pair, can hardly improve performance any further. This shows that graphical decomposition and convolutions are the main factors that contribute to the performance improvement. Since they already learn to aggregate local comparisons into a global semantic relationship, additionally engineered global features cannot help. Model Size and Parameter Sensitivity: Our biggest model without BERT is XVIII, which contains only ∼34K parameters. In comparison, BERT contains 110M-340M parameters. However, our model significantly outperforms BERT. We tested the sensitivity of different parameters in our model. We found that 2 to 3 layers of GCN layers gives the best performance. Further introducing more GCN layers does not improve the performance, while the performance is much worse with zero or only one GCN layer. Furthermore, in GCN hidden representations of a size between 16 and 128 yield good performance. Further increasing this size does not show obvious improvement. For the optional community detection step in CIG construction, we need to choose the minimum size and the maximum size of communities. We found that the final performance remains similar if we vary the minimum size from 2∼3 and the maximum size from 6∼10. This indicates that our model is robust and insensitive to these parameters. Time complexity. For keywords of news articles, in real-world industry applications, they are usually extracted in advance by highly efficient off-the-shelf tools and pre-defined vocabulary. For CIG construction, let Ns be the number of sentences in two documents, Nw be the number of unique words in documents, and Nk represents the number of unique keywords in a document. Building keyword graph requires O(NsNk + N2 w) complexity (Sayyadi and Raschid, 2013), and betweenness-based community detection requires O(N3 k). The complexity of sentence assignment 6292 and weight calculation is O(NsNk + N2 k). For graph classification, our model size is not big and can process document pairs efficiently. 5 Related Work Graphical Document Representation. A majority of existing works can be generalized into four categories: word graph, text graph, concept graph, and hybrid graph. Word graphs use words in a document as vertices, and construct edges based on syntactic analysis (Leskovec et al., 2004), cooccurrences (Zhang et al., 2018; Rousseau and Vazirgiannis, 2013; Nikolentzos et al., 2017) or preceding relation (Schenker et al., 2003). Text graphs use sentences, paragraphs or documents as vertices, and establish edges by word cooccurrence, location (Mihalcea and Tarau, 2004), text similarities (Putra and Tokunaga, 2017), or hyperlinks between documents (Page et al., 1999). Concept graphs link terms in a document to real world concepts based on knowledge bases such as DBpedia (Auer et al., 2007), and construct edges based on syntactic/semantic rules. Hybrid graphs (Rink et al., 2010; Baker and Ellsworth, 2017) consist of different types of vertices and edges. Text Matching. Traditional methods represent a text document as vectors of bag of words (BOW), term frequency inverse document frequency (TF-IDF), LDA (Blei et al., 2003) and so forth, and calculate the distance between vectors. However, they cannot capture the semantic distance and usually cannot achieve good performance. 
In recent years, different neural network architectures have been proposed for text pair matching tasks. For representation-focused models, they usually transform text pairs into context representation vectors through a Siamese neural network, followed by a fully connected network or score function which gives the matching result based on the context vectors (Qiu and Huang, 2015; Wan et al., 2016; Liu et al., 2018; Mueller and Thyagarajan, 2016; Severyn and Moschitti, 2015). For interaction-focused models, they extract the features of all pair-wise interactions between words in text pairs, and aggregate the interaction features by deep networks to give a matching result (Hu et al., 2014; Pang et al., 2016). However, the intrinsic structural properties of long text documents are not fully utilized by these neural models. Therefore, they cannot achieve good performance for long text pair matching. There are also research works which utilize knowledge (Wu et al., 2018), hierarchical property (Jiang et al., 2019) or graph structure (Nikolentzos et al., 2017; Paul et al., 2016) for long text matching. In contrast, our method represents documents by a novel graph representation and combines the representation with GCN. Finally, pre-training models such as BERT (Devlin et al., 2018) can also be utilized for text matching. However, the model is of high complexity and is hard to satisfy the speed requirement in real-world applications. Graph Convolutional Networks. We also contributed to the use of GCNs to identify the relationship between a pair of graphs, whereas previously, different GCN architectures have mainly been used for completing missing attributes/links (Kipf and Welling, 2016; Defferrard et al., 2016) or for node clustering or classification (Hamilton et al., 2017), but all within the context of a single graph, e.g., a knowledge graph, citation network or social network. In this work, the proposed Concept Interaction Graph takes a simple approach to represent a document by a weighted undirected graph, which essentially helps to decompose a document into subsets of sentences, each subset focusing on a different sub-topic or concept. 6 Conclusion We propose the Concept Interaction Graph to organize documents into a graph of concepts, and introduce a divide-and-conquer approach to matching a pair of articles based on graphical decomposition and convolutional aggregation. We created two new datasets for long document matching with the help of professional editors, consisting of about 60K pairs of news articles, on which we have performed extensive evaluations. In the experiments, our proposed approaches significantly outperformed an extensive range of state-of-theart schemes, including both term-based and deepmodel-based text matching algorithms. Results suggest that the proposed graphical decomposition and the structural transformation by GCN layers are critical to the performance improvement in matching article pairs. 6293 References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. The semantic web, pages 722–735. Collin Baker and Michael Ellsworth. 2017. Graph methods for multilingual framenets. In Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing, pages 45–50. Cosmin Adrian Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. 
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412–1422. Association for Computational Linguistics. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Daniel Bruggermann, Yannik Hermey, Carsten Orth, Darius Schneider, Stefan Selzer, and Gerasimos Spanakis. 2016. Storyline detection and tracking using dynamic latent dirichlet allocation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016), pages 9–19. Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844–3852. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Yixing Fan, Liang Pang, JianPeng Hou, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2017. Matchzoo: A toolkit for deep text matching. arXiv preprint arXiv:1707.07270. William L Hamilton, Rex Ying, and Jure Leskovec. 2017. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in neural information processing systems, pages 2042–2050. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2333–2338. ACM. Jyun-Yu Jiang, Mingyang Zhang, Cheng Li, Mike Bendersky, Nadav Golbandi, and Marc Najork. 2019. Semantic text matching for long-form documents. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885–916. Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489–500. Association for Computational Linguistics. Jure Leskovec, Marko Grobelnik, and Natasa MilicFrayling. 2004. Learning sub-structures of document semantic graphs for document summarization. Bang Liu, Di Niu, Kunfeng Lai, Linglong Kong, and Yu Xu. 2017. Growing story forest online from massive breaking news. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 777–785. ACM. Bang Liu, Ting Zhang, Fred X Han, Di Niu, Kunfeng Lai, and Yu Xu. 2018. Matching natural language sentences with hierarchical sentence factorization. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1237–1246. International World Wide Web Conferences Steering Committee. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. 
The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceedings of the 26th International Conference on World 6294 Wide Web, pages 1291–1299. International World Wide Web Conferences Steering Committee. Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Thirtieth AAAI Conference on Artificial Intelligence. Paul Neculoiu, Maarten Versteegh, Mihai Rotaru, and Textkernel BV Amsterdam. 2016. Learning text similarity with siamese recurrent networks. ACL 2016, page 148. Giannis Nikolentzos, Polykarpos Meladianos, Franc¸ois Rousseau, Yannis Stavrakas, and Michalis Vazirgiannis. 2017. Shortest-path graph kernels for document similarity. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1890–1900. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2016. Text matching as image recognition. In AAAI, pages 2793–2799. Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. 2017. Pytorch: Tensors and dynamic neural networks in python with strong gpu acceleration. PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration. Christian Paul, Achim Rettinger, Aditya Mogadala, Craig A Knoblock, and Pedro Szekely. 2016. Efficient graph-based document similarity. In European Semantic Web Conference, pages 334–349. Springer. Marten Postma, Filip Ilievski, and Piek Vossen. 2018. Semeval-2018 task 5: Counting events and participants in the long tail. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 70–80. Jan Wira Gotama Putra and Takenobu Tokunaga. 2017. Evaluating text coherence based on semantic similarity graph. In Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing, pages 76–85. Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for communitybased question answering. In IJCAI, pages 1305– 1311. Bryan Rink, Cosmin Adrian Bejan, and Sanda M Harabagiu. 2010. Learning textual graph patterns to detect causal event relations. In FLAIRS Conference. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends R⃝in Information Retrieval, 3(4):333–389. Franc¸ois Rousseau and Michalis Vazirgiannis. 2013. Graph-of-word and tw-idf: new approach to ad hoc ir. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management, pages 59–68. ACM. Hassan Sayyadi and Louiqa Raschid. 2013. A graph analytical approach for topic detection. ACM Transactions on Internet Technology (TOIT), 13(2):4. Adam Schenker, Mark Last, Horst Bunke, and Abraham Kandel. 2003. Clustering of web documents using a graph model. 
SERIES IN MACHINE PERCEPTION AND ARTIFICIAL INTELLIGENCE, 55:3–18. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 373– 382. ACM. Dafna Shahaf, Jaewon Yang, Caroline Suen, Jeff Jacobs, Heidi Wang, and Jure Leskovec. 2013. Information cartography: creating zoomable, large-scale maps of information. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1097–1105. ACM. Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. Learning semantic representations using convolutional neural networks for web search. In Proceedings of the 23rd International Conference on World Wide Web, pages 373– 374. ACM. Piek Vossen, Tommaso Caselli, and Yiota Kontzopoulou. 2015. Storylines for structuring massive streams of news. In Proceedings of the First Workshop on Computing News Storylines, pages 40–49. Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2016. A deep architecture for semantic matching with multiple positional sentence representations. In AAAI, volume 16, pages 2835–2841. Yu Wu, Wei Wu, Can Xu, and Zhoujun Li. 2018. Knowledge enhanced hybrid neural network for text matching. In Thirty-Second AAAI Conference on Artificial Intelligence. Ting Zhang, Bang Liu, Di Niu, Kunfeng Lai, and Yu Xu. 2018. Multiresolution graph attention networks for relevance matching. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 933–942. ACM. Deyu Zhou, Haiyang Xu, and Yulan He. 2015. An unsupervised bayesian modelling approach for storyline detection on news articles. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1943–1948.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6295–6300 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6295 Hierarchical Transfer Learning for Multi-label Text Classification Siddhartha Banerjee, Cem Akkaya, Francisco Perez-Sorrosal, Kostas Tsioutsiouliklis Yahoo Research 701 First Avenue Sunnyvale, CA, USA {siddb, cakkaya, fperez, kostas}@verizonmedia.com Abstract Multi-Label Hierarchical Text Classification (MLHTC) is the task of categorizing documents into one or more topics organized in an hierarchical taxonomy. MLHTC can be formulated by combining multiple binary classification problems with an independent classifier for each category. We propose a novel transfer learning based strategy, HTrans, where binary classifiers at lower levels in the hierarchy are initialized using parameters of the parent classifier and fine-tuned on the child category classification task. In HTrans, we use a Gated Recurrent Unit (GRU)-based deep learning architecture coupled with attention. Compared to binary classifiers trained from scratch, our HTrans approach results in significant improvements of 1% on micro-F1 and 3% on macro-F1 on the RCV1 dataset. Our experiments also show that binary classifiers trained from scratch are significantly better than single multi-label models. 1 Introduction Two main approaches for Multi-Label Hierarchical Text Classification (MLHTC) have been proposed (Tsoumakas and Katakis, 2007): 1. transforming the problem to a collection of independent binary classification problems by training a classifier for each category 2. training a single multilabel model that can predict all categories for instances simultaneously. In a hierarchical taxonomy of categories, dependencies exist between parent and child categories that should be exploited when training classifiers. Recent work on MLHTC uses a Deep Graph-based Convolutional Neural Network (DGCNN) (Peng et al., 2018) -based single multilabel model with a recursive regularization component to model dependencies between parent and child categories. However, multi-label models suffer on categories with very few training examples (Krawczyk, 2016) due to data imbalance. Due to a large prediction space (all categories) of multilabel models, it is very difficult to optimize class weights to handle data imbalance. By contrast, binary classifiers provide more flexibility as class weights for each classifier can easily be optimized based on validation metrics. With a reasonable number of categories (few hundreds), collection of binary classifiers are a feasible option to solve MLHTC problems. Influenced by recent progress of transfer learning on Natural Language Processing (NLP) tasks (Howard and Ruder, 2018; Mou et al., 2016), we present HTrans, a Hierarchical Transfer Learning approach. We hypothesize that introducing dependencies between parent and child categories is possible using transfer learning. Therefore, we initialize parameters of the child category classifier from the binary parent category classifier and later fine-tune the model. The transfer of parameters can provide a better starting point for the child category classifier than training from scratch using randomly initialized parameters. Without any loss of generality, we propose a simple classification model using Gated Recurrent Unit (GRU) (Cho et al., 2014) coupled with attention (Dzmitry et al., 2015). 
We also select optimal class weights for each category to account for class imbalance (Burez and Van den Poel, 2009) in the data. Our experiments on the RCV1 (Lewis et al., 2004) dataset show that HTrans improves over training models from scratch by 1% and 3% on micro-F1 and macro-F1 scores, respectively. Furthermore, we also show that binary models based on our architecture surpass DGCNN (state-of-theart multi label model on RCV1 dataset) by 4% and 19% on micro-F1 and macro-F1 scores, respectively. Class weight optimization in itself produces 6296 Figure 1: Architecture of our Proposed Model an improvement of ∼9% on macro-F1 scores. 2 Related Work A major focus of multi-label text classification research has been exploiting possible label dependencies to improve predictive performance. To account for label dependencies, some approaches utilize label correlations found in the training data (Tsoumakas et al., 2009; Huang and Zhou, 2012; Zhang and Zhang, 2010; Guo and Gu, 2011). Others make use of pre-defined label hierarchies. These approaches usually employ hierarchy-induced model regularization by putting constraints on the weight vectors of adjacent models, a type of transfer learning (Zhou et al., 2011; Gopal and Yang, 2013; Peng et al., 2018). HTrans is similar to the latter category of work as it uses transfer learning. We utilize fine-tuning to introduce inductive bias from a parent category to its children, whereas previous approaches use model regularization. Results are compared to the state-of-the-art DGCNN (Peng et al., 2018) model where a graph-based Convolutional neural network model is deployed in combination with recursive model regularization. Fine-tuning of pre-trained models has shown promising results on various NLP tasks. Some of these approaches employ supervised pretraining transferring knowledge between related tasks (Mou et al., 2016; Min et al., 2017; Conneau et al., 2017). Another set of research focuses on a more general transfer task where models are pretrained on a language modeling task on large unsupervised corpora and later fine-tuned to a supervised downstream task (Howard and Ruder, 2018; Devlin et al., 2018; Radford et al., 2018). Our work is more similar to the former, since we finetune a parent category model in order to obtain a model for its subcategory – transfer from supervised data. 3 Proposed Approach We propose a minimalistic model architecture based on Gated Recurrent Unit (GRU) (Cho et al., 2014) combined with an attention (Dzmitry et al., 2015) mechanism. We use a bidirectional GRU to encode both forward and backward sequences. GRU can memoize the context of the text documents while the attention layer allows the model to selectively focus on important elements in the text. Our attention model closely follows the word attention module from (Yang et al., 2016). Our model architecture is shown in Figure 1. The word sequences are fed into the GRU as embeddings. We use pre-trained embeddings from Glove (Pennington et al., 2017). Each state st produced by the GRU is a combination of sbt and sft, where b and f denote the backward and forward hidden states, respectively, for each timestep t. As shown in the equations below, S denotes states for all the timesteps (1, 2, ...., T). We apply attention on top of the GRU states to produce a fixed-dimensional vector representation Att(S). 
Furthermore, we combine a max-pooled (Maxpool) and mean-pooled (Meanpool) representation of all the GRU hidden states along with the Att(S) vector to produce R – the sequence representation that is fed into the output layer. S = [s1, s2, s3, ...sT ] R = [Att(S), Maxpool(S), Meanpool(S)] 6297 Finally, the output layer of the model includes a fully connected layer with sigmoid activations. The dimensionality of the fully-connected layer is determined by the number of categories in the classification task. HTrans (Hierarchical Transfer Learning) is based on a recursive strategy of training parent and child category classifiers. Say, P1 is a top-level category with C1 as one of its children. Also, lets consider C12 as a child of C1. First, we train a binary classifier for P1. Documents in the training data that contain P1 as one of the labels are treated as positive instances, the rest are all negative. Next, we initialize the C1 binary classifier with the final model parameters of P1 classifier. After training the C1 classifier, the C12 classifier is initialized with parameters from C1 and so on. Following recent work on transfer learning in other domains (Hoo-Chang et al., 2016), we re-initialize the parameters of the final output layer randomly but retain the parameters of other layers. Recent work on transfer learning (Howard and Ruder, 2018) suggested to use different learning rates for different layers. Based on recent findings in transfer learning (Bowman et al., 2015), we apply lower learning rates to the transferred parameters (from the parent classifier) and higher learning rates to the final fully connected classification (output) layer. We use Adam (Kingma and Ba, 2014) as our optimizer. We set the learning rate of the fully connected layer to 0.001 (high) as all the parameters in the layer are randomly initialized and they should be readjusted to the best possible values. In contrast, the learning rate for the other layers (GRU and attention) are changed to 0.0005 (low) to retain parent classification knowledge. In addition to different learning rates, we also freeze the embedding layer (Hu et al., 2014) after the top level classifiers have been trained. Layer freezing prevents over-fitting classifiers for categories in lower levels of the taxonomy. 4 Experimental Results In this section, first, we describe the characteristics of the dataset followed by implementation details. Thereafter, we describe the experiments we conduct along with the results obtained. Dataset: We use the Reuters dataset (RCV-v1) as provided in (Lewis et al., 2004). The dataset is a human-labeled collection of Reuters News articles from 1996-1997. There are a total of 103 catModel Micro-F1 Macro-F1 DGCNN 0.7618 0.4334 GRU-Att-basic 0.7980 0.5166 GRU-Att (class weights) 0.7974 0.5669 HTrans 0.8051† 0.5849† Table 1: Comparison of Models on RCV1 dataset (†: Statistically significant at p≤0.05 compared to GRUAtt (with class weights)) egories according to the taxonomy. The dataset consists of 23,149 training and 784,446 testing documents, respectively. Implementation and Metrics: We implemented our proposed network using PyTorch1. We use a 1 layer GRU with 96 hidden units and attention was added on top of the GRU layer. A dropout probability of 0.4 was applied on the GRU output. We use 100-dimensional pretrained word embeddings from Glove (Pennington et al., 2014). Each of the binary classifiers is trained for 10 epochs with early stopping (Caruana et al., 2001) with patience level 3. 
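As an illustration, the following is a minimal PyTorch sketch of the GRU-Att classifier and the HTrans parent-to-child transfer described above. The attention scoring is simplified relative to the word attention of Yang et al. (2016), and the helper names and exact tensor shapes are assumptions of the sketch, not the authors' implementation.

```python
# Sketch of GRU-Att (BiGRU + attention + max/mean pooling) and HTrans transfer.
import torch
import torch.nn as nn


class GRUAtt(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=96):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # initialized from GloVe
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.4)
        self.att = nn.Linear(2 * hidden, 1)                  # simplified attention scorer
        self.output = nn.Linear(3 * 2 * hidden, 1)           # R = [Att(S), Maxpool(S), Meanpool(S)]

    def forward(self, tokens):                               # tokens: (batch, seq_len)
        s, _ = self.gru(self.embedding(tokens))              # S: (batch, seq_len, 2*hidden)
        s = self.dropout(s)
        a = torch.softmax(self.att(s), dim=1)                # attention weights over timesteps
        r = torch.cat([(a * s).sum(dim=1), s.max(dim=1).values, s.mean(dim=1)], dim=-1)
        return torch.sigmoid(self.output(r)).squeeze(-1)


def make_child(parent: GRUAtt, vocab_size: int) -> GRUAtt:
    """Initialize a child-category classifier from its parent's parameters."""
    child = GRUAtt(vocab_size)
    transferred = {k: v for k, v in parent.state_dict().items()
                   if not k.startswith("output.")}           # output layer re-initialized
    child.load_state_dict(transferred, strict=False)
    child.embedding.weight.requires_grad = False             # freeze embedding layer
    return child


def make_optimizer(child: GRUAtt):
    """Lower learning rate for transferred layers, higher for the fresh output layer."""
    return torch.optim.Adam([
        {"params": list(child.gru.parameters()) + list(child.att.parameters()), "lr": 0.0005},
        {"params": child.output.parameters(), "lr": 0.001},
    ])


parent = GRUAtt(vocab_size=5000)          # assumed to be trained on the parent category
child = make_child(parent, vocab_size=5000)
optimizer = make_optimizer(child)
```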
We use a batch size of 128 units for all our experiments. Models are trained on 2 Tesla V100 GPUs. The data corresponding to each category was randomly split into 85% training and 15% validation instances. We restrict the documents in the dataset to a maximum of 100 words from the body of the documents2. We use Binary Cross Entropy as the loss function for the classification problem. Due to significant data imbalance in several categories, we experiment with multiple class weights – 1, 2, 3, 5, 10, 30, 50 for each binary classifier and finally choose the best model based on validation metrics. Metrics: We follow the most recent work (Peng et al., 2018) on RCV1 dataset and report Micro-F1 and Macro-F1 scores for our experiments. MicroF1 considers the global precision and recall of the categories while Macro-F1 computes the average of the F1 scores obtained by individual categories. 4.1 Comparison of Different Models We show the comparison of different approaches on the RCV1 dataset in table 13. We refer to a version of GRU-Att without class weight optimization (default: 1) as GRU-Att-basic. As can be seen from the table, GRU-Att-basic performs significantly better than DGCNN on both Micro1https://pytorch.org/ 2We tokenize using spacy: https://spacy.io/ 3For comparisons with other models, please refer to (Peng et al., 2018) 6298 Figure 2: RCV1 dataset Levels 2 and 3: Macro-F1 without and with Transfer Learning F1 (0.7980 vs 0.7618) and Macro-F1 (0.5166 vs 0.4334) scores, respectively. Using binary classifiers with a very basic architecture beats DGCNN easily. Addition of class weights during model training (GRU-Att) further improves the binary models. We optimize the class weights based on the F1score on the validation data. As can be seen from the table, Macro-F1 improves by close to 10% after incorporating class weights. The Micro-F1 remains unchanged, though. Therefore, the biggest benefit of using class weights is observed in categories where the number of instances during training is very low. HTrans, our proposed technique that uses transfer learning (with embedding freezing and differential learning rates), further improves on GRU-Att by more than 3% on the Macro-F1 scores. Our initial conjecture was that transfer learning should help categories located at lower levels in the taxonomy. Therefore, we wanted to see the impact of HTrans on categories in different levels. Figure 2 shows the differences in Macro-F1 scores for the GRU-Att model (with class weights) and HTrans across different levels - Combined (level 2 and 3 both), level 2 and level 3. As can be seen from the Macro-F1 scores, HTrans outperforms GRU-Att at both levels – level 2 (0.587 vs 0.584) and 3 (0.556 vs 0.517). As expected, the improvement is visible in level 3 (∼7%) with more clarity as level 3 contains the least number of training instances in the hierarchy. Multi-label Model: We realize that training and inference using multiple binary classifiers might be a bottleneck due to resource constraints. In Model Micro-F1 Macro-F1 DGCNN 0.7618 0.4334 GRU-Att-Multi (no weights) 0.7407 0.3937 GRU-Att-Multi (weights) 0.7654 0.4842 Table 2: Comparison of Multi-label Models on RCV1 dataset (weights imply the use of class weights during training) such cases, a single multi-label model might be preferred over multiple binary classifiers. To this end, we build a multi-label version of GRU-Att, GRU-Att-Multi, by replacing the output layer. 
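Before the multi-label variant is detailed below, a short sketch of the class-weighted binary cross entropy used for the individual binary classifiers: the exact weighting form and the clamping constant are assumptions, and the sweep over candidate weights with validation-F1 selection is only indicated in the comments.

```python
# Sketch of class-weighted binary cross entropy; in practice the positive-class
# weight is swept over {1, 2, 3, 5, 10, 30, 50} per binary classifier and the
# best model is chosen on the validation F1-score (sweep loop omitted).
import torch


def weighted_bce(probs: torch.Tensor, targets: torch.Tensor, pos_weight: float) -> torch.Tensor:
    """Binary cross entropy with an up-weighted positive (minority) class."""
    probs = probs.clamp(1e-7, 1 - 1e-7)
    loss = -(pos_weight * targets * probs.log() + (1 - targets) * (1 - probs).log())
    return loss.mean()


# Toy usage with a positive-class weight of 10.
probs = torch.tensor([0.9, 0.2, 0.7, 0.1])
targets = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(weighted_bce(probs, targets, pos_weight=10.0))
```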
Instead of a single output, it contains 103 output nodes (for the number of classes) for the RCV1 dataset. We wanted to investigate the use of class weights on the multi-label model. To select class weights on the multi-label model using a search over user-provided weights, we will have to evaluate an intractable number of class weight combinations. For example, say, we have two class weight options for each category. For 103 categories, it would result in trying out 2103 combinations of class weights making it impractical. Instead, we propose using the optimal class weights obtained from training the binary models and using them for the multi-label model training. We optimize the weighted F1-score during training the multi-label model. Loss function and optimizers are kept unchanged. As can be seen from table 2, the use of the optimal class weights obtained from binary classifiers improve the Micro-F1 and Macro-F1 scores significantly on the multi-label model. The Macro-F1 scores suffer without the use of class weights. A more interesting observation is that our GRU-Att-Multi model trained using class weights outperforms the state-of-the-art multilabel model (DGCNN) on both metrics. The improvement of 12% seen in Macro-F1 score over DGCNN can be totally attributed to the class weighting scheme. We employ a much simpler architecture without the use of any regularization constraint but still can outperform DGCNN on both metrics. 5 Conclusions and Future Work In this work, we propose HTrans, a hierarchical transfer learning-based strategy to train binary classifiers for categories in a taxonomy. Our approach relies on re-using model parameters trained at upper levels in the taxonomy and fine-tuning them for classifying categories at lower levels. 6299 Our experiments on the RCV1 dataset show that classifiers of categories with less training examples benefit using pre-trained model parameters from upper level categories. Furthermore, we show that binary classifiers greatly outperform multi-label models. Finally, we show improvement over the state of the art multi-label model by using optimized class weights obtained when training the binary classifiers. As future work, we will investigate approaches to hyperparameter tuning to find better model architectures for hierarchical multi-label text classification tasks. References Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Jonathan Burez and Dirk Van den Poel. 2009. Handling class imbalance in customer churn prediction. Expert Systems with Applications, 36(3):4626– 4636. Rich Caruana, Steve Lawrence, and C Lee Giles. 2001. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Advances in neural information processing systems, pages 402– 408. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bahdanau Dzmitry, Cho Kyunghyun, and B Yoshua. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Siddharth Gopal and Yiming Yang. 2013. Recursive regularization for large-scale classification with hierarchical and graphical dependencies. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 257–265. ACM. Yuhong Guo and Suicheng Gu. 2011. Multi-label classification using conditional dependency networks. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, page 1300. Shin Hoo-Chang, Holger R Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M Summers. 2016. Deep convolutional neural networks for computeraided detection: Cnn architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging, 35(5):1285. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 328–339. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in neural information processing systems, pages 2042–2050. Sheng-Jun Huang and Zhi-Hua Zhou. 2012. Multilabel learning by exploiting label correlations locally. In Twenty-sixth AAAI conference on artificial intelligence. Diederik P Kingma and Jimmy Lei Ba. 2014. Adam: Amethod for stochastic optimization. In Proc. 3rd Int. Conf. Learn. Representations. Bartosz Krawczyk. 2016. Learning from imbalanced data: open challenges and future directions. Progress in Artificial Intelligence, 5(4):221–232. David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361–397. Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question answering through transfer learning from large fine-grained supervision data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 510–517. Association for Computational Linguistics. Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How transferable are neural networks in nlp applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 479–489, Austin, Texas. Association for Computational Linguistics. 6300 Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1063–1072. International World Wide Web Conferences Steering Committee. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. 
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2017. Glove: Global vectors for word representation. 2014. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014). Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, Technical report, OpenAI. Grigorios Tsoumakas, Anastasios Dimou, Eleftherios Spyromitros, Vasileios Mezaris, Ioannis Kompatsiaris, and Ioannis Vlahavas. 2009. Correlationbased pruning of stacked binary relevance models for multi-label learning. In Proceedings of the 1st International Workshop on Learning from Multilabel Data, pages 101–116. Grigorios Tsoumakas and Ioannis Katakis. 2007. Multi-label classification: An overview. International Journal of Data Warehousing and Mining (IJDWM), 3(3):1–13. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Min-Ling Zhang and Kun Zhang. 2010. Multi-label learning by exploiting label dependency. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 999–1008. ACM. Denny Zhou, Lin Xiao, and Mingrui Wu. 2011. Hierarchical classification via orthogonal transfer.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6301–6306 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6301 Bias Analysis and Mitigation in the Evaluation of Authorship Verification Janek Bevendorff ∗ Benno Stein∗ Matthias Hagen† Martin Potthast‡ ∗Bauhaus-Universität Weimar †Martin-Luther-Universität Halle-Wittenberg ‡Leipzig University <first>.<last>@uni-{weimar, leipzig}.de <first>.<last>@informatik.uni-halle.de Abstract The PAN series of shared tasks is well known for its continuous and high quality research in the field of digital text forensics. Among others, PAN contributions include original corpora, tailored benchmarks, and standardized experimentation platforms. In this paper we review, theoretically and practically, the authorship verification task and conclude that the underlying experiment design cannot guarantee pushing forward the state of the art—in fact, it allows for top benchmarking with a surprisingly straightforward approach. In this regard, we present a “Basic and Fairly Flawed” (BAFF) authorship verifier that is on a par with the best approaches submitted so far, and that illustrates sources of bias that should be eliminated. We pinpoint these sources in the evaluation chain and present a refined authorship corpus as effective countermeasure. 1 Introduction When tackling a problem in empirical research, a sound and reliable evaluation of competing solution approaches is a prerequisite to achieve agreement on the state-of-the-art performance. For authorship verification, the PAN series of shared tasks caters for the most important benchmarks to which new approaches refer and compare against. The fundamental problem in authorship verification is to decide whether two given texts were written by the same author. When experimenting within the PAN setting, we learned that one can quickly achieve a competitive performance for this task—with one of the most basic approaches: a TFIDF-weighted character 3-gram model. By extending this model with a few additional features, such as the KullbackLeibler divergence and related measures, we were able to reach the performance of the best verifiers submitted so far.1 However, reality caught up with us when we applied our verifier to other authorship verification problems with little success. To 1https://www.tira.io/task/authorship-verification/ get to the bottom of this rather baffling outcome, we carried out a systematic analysis of the entire evaluation chain, its problem definition, its corpora, its evaluation procedure, and of course our model, in search of any sources of bias that may have artificially inflated the performance of our approach. The paper in hand introduces our “Basic and Fairly Flawed” (BAFF) model and reports on our bias analysis. Moreover, in an attempt to improve the situation and call for better data, we not only contribute a new and carefully curated authorship verification corpus,2 but also collect a few best practices for the creation of such corpora. The outlined situation calls into question a lot of what we believed to know about the state of the art, and future PAN tasks on verification will have to rectify these issues in order to provide for a more valid assessment of the state of the art. 2 Related Work Authorship verification is a young task in the field of authorship analysis. 
Proposed by Koppel and Schler (2004), and mostly solved on book-sized texts right away, it remains a challenging task on short texts. The numerous verification approaches developed over the years employ a wide array of features, methods, and corpora (Stamatatos, 2009), rendering a comparison between approaches difficult. A dedicated shared task series at PAN (Stamatatos et al., 2015, 2014; Juola and Stamatatos, 2013; Argamon and Juola, 2011) was a key enabler for comparability and reproducibility. The verifiers submitted by Bagnall (2015), Fréry et al. (2014), and Modaresi and Gross (2014) form the state of the art. While new verifiers are run against the shared task’s data to assess their performance against these baselines (e.g., Halvani et al., 2017; Kocher and Savoy, 2017), PAN continues to develop new benchmarks on closely related tasks.3 2Code and corpus: https://github.com/webis-de/acl-19 3See http://pan.webis.de for an overview of these tasks. 6302 3 BAFF: A Baffling Authorship Verifier In authorship verification, the most basic question to answer is whether two given texts p and q have been written by the same author.4 Key to solving the task is finding a good representation r of the style difference between p and q. We resort to seven well-known measures for this purpose. 3.1 Features: Style Difference Measures To compute the style difference measures listed below, we first represent p and q as character trigram vectors p and q; character n-grams are considered robust style indicators across many authorship analysis tasks (Stamatatos, 2013). Given p and q, we calculate the following well-known measures:5 1. Cosine similarity (TF-weighted) 2. Cosine similarity (TFIDF-weighted) 3. Kullback-Leibler divergence (KLD) 4. Skew divergence (skew-balanced KLD) 5. Jensen-Shannon divergence 6. Hellinger distance 7. Avg. logarithmic sentence length difference (a feature frequently used by PAN participants) After assembling r as a 7-dimensional vector from these difference measures, we rescale all computed features to the interval [0, 1] with respect to the dataset so as to align the diverse value ranges. We fully expect the divergence measures to be correlated to a greater or lesser extent; the learning algorithm will select the best-performing ones. 3.2 Performance Results Table 1a shows the performance of four WEKA classifiers based on our model on the PAN15 test dataset. The decision tree performs best, beating Bagnall’s winning deep learning approach in terms of accuracy by one percentage point for an overall second place (Table 1e). We can produce similar results on the PAN14 novels dataset (Table 1f), and, switching to a random forest, even claim first place on the essays dataset (Table 1g). Altogether, with very little effort, our model outperforms the 31 approaches submitted to PAN in 2014 and 2015, competing with much more elaborate solutions. 4In forensic applications, a text of unknown authorship and one or more texts known to be written by a given author are considered (van Halteren, 2004). If solved, other authorshiprelated tasks, such as authorship attribution, would be solved as well, since they can be reduced to a series of verifications. 5Except for the cosine similarity and the average sentence length difference, the other statistical difference measures we use have rarely been considered for verification to date. 
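A compact sketch of how the seven style-difference measures listed above could be computed from character trigram distributions follows; the sentence splitting, the add-one smoothing, the skew-divergence mixing parameter, and the fact that document frequencies come from only the two input texts are simplifying assumptions, not the exact BAFF implementation.

```python
# Sketch: seven style-difference features over character trigrams of texts p and q.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from scipy.spatial.distance import cosine, jensenshannon


def style_difference(p: str, q: str) -> np.ndarray:
    tf = CountVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform([p, q]).toarray().astype(float)
    tfidf = TfidfVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform([p, q]).toarray()
    smoothed = tf + 1.0                                   # add-one smoothing
    P, Q = smoothed / smoothed.sum(axis=1, keepdims=True)

    def kld(a, b):                                        # Kullback-Leibler divergence
        return float(np.sum(a * np.log(a / b)))

    def mean_log_sentence_length(text):
        lengths = [max(len(s.split()), 1) for s in text.split(".") if s.strip()]
        return float(np.mean(np.log(lengths)))

    return np.array([
        1 - cosine(tf[0], tf[1]),                         # 1. cosine similarity (TF)
        1 - cosine(tfidf[0], tfidf[1]),                   # 2. cosine similarity (TFIDF)
        kld(P, Q),                                        # 3. Kullback-Leibler divergence
        kld(P, 0.9 * Q + 0.1 * P),                        # 4. skew divergence (mixing weight assumed)
        float(jensenshannon(P, Q)) ** 2,                  # 5. Jensen-Shannon divergence
        float(np.sqrt(0.5 * np.sum((np.sqrt(P) - np.sqrt(Q)) ** 2))),  # 6. Hellinger distance
        abs(mean_log_sentence_length(p) - mean_log_sentence_length(q)),  # 7. avg. log sentence length difference
    ])


print(style_difference("Call me Ishmael. Some years ago, I went to sea.",
                       "It was the best of times. It was the worst of times."))
```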
4 Bias Analysis Unable to reproduce these outstanding results on other verification problems, our ensuing analysis of the evaluation chain revealed several interdependent sources of bias in all its components, namely our model, the data, and the evaluation procedure. In what follows, we discuss these biases, outline their underlying flaws, and ways to mitigate them. 4.1 Model Bias In an attempt to pinpoint which feature contributes how much to the overall performance, we ran an ablation test. While the removal of each feature causes some performance loss, the removal of Feature 2, the TFIDF-weighted cosine similarity, resulted in the loss of 19 percentage points, by far the largest among all features. What makes TFIDF special is its IDF factor, which was the key to identify two sources of bias in our model: (B1) Corpus-relative features. TFIDF is used so matter-of-factly throughout machine learning that hardly anyone discusses the origin of its document frequency (DF) values. In the absence of any explanation, one may assume that they are computed from the currently processed dataset. This is perfectly alright for most tasks, but crucially not for authorship verification where computing DF from the evaluation datasets at runtime is both unrealistic and prone to overfit. The rather small number of test cases in the PAN datasets combined with Bias B4 allows the learning algorithm to “reverseengineer” part of the ground-truth from the DF values, while in practice, a forensic linguist analyzes only one case at a time, not many (see Bias B6). Table 1c (“scaled” rows) shows BAFF’s performance when computing DF from the processed corpus, and when using the Brown corpus instead, revealing a severe drop of performance. Hence, corpus-relative features should be avoided. (B2) Feature scaling. Another machine learning technique that is often applied without second thought is scale normalization of all features. However, applying the same reasoning as for the (I)DF calculation, scale normalization biases our features towards corpus specifics. Table 1c shows BAFF’s performance with and without scale normalization. We experience a massive performance drop in combination with corpus-relative IDF, but much less so with “external” IDF from the Brown corpus. This aggravation of Bias B1 through feature scaling is most likely influenced by Biases B3–B6. 6303 (a) BAFF on PAN15 corpus Acc. Prec. F1 ROC Naive Bayes 0.674 0.675 0.674 0.771 SVM 0.700 0.700 0.700 0.700 Decision Tree 0.768 0.773 0.767 0.746 Random Forest 0.660 0.661 0.660 0.717 (b) BAFF on Gutenberg corpus (unscaled, w/o TFIDF) Naive Bayes 0.934 0.634 0.634 0.756 SVM 0.695 0.701 0.693 0.695 Decision Tree 0.695 0.765 0.674 0.695 Random Forest 0.683 0.687 0.681 0.741 (c) Corpus-relative IDF against external IDF Corpus IDF (scaled) 0.768 0.773 0.767 0.746 Corpus IDF (unscaled) 0.622 0.684 0.651 0.639 Brown IDF (scaled) 0.598 0.611 0.586 0.598 Brown IDF (unscaled) 0.590 0.605 0.575 0.590 (d) 10-fold cross-val. naive Bayes on corpus-rel. IDF PAN15 Test (scaled) 0.742 0.749 0.740 0.796 Gutenberg (scaled) 0.570 0.628 0.515 0.599 (e) PAN15 submissions C@1 ROC Final Bagnall 0.757 0.811 0.614 BAFF 0.768 0.746 0.573 Castro et al. 0.694 0.750 0.520 Gutierrez et al. 0.694 0.740 0.513 Kocher and Savoy 0.690 0.738 0.508 Halvani and Winter 0.601 0.762 0.458 (f) PAN14 novels submissions Modaresi and Gross 0.715 0.711 0.508 Zamani et al. 0.650 0.733 0.476 BAFF 0.651 0.715 0.465 Khonji and Iraqi 0.610 0.750 0.458 Mayor et al. 0.614 0.664 0.407 Castillo et al. 
0.615 0.628 0.386 (g) PAN14 essays submissions BAFF 0.722 0.761 0.549 Fréry et al. 0.710 0.723 0.513 Satyam et al. 0.657 0.699 0.459 Moreau et al. 0.600 0.620 0.372 Layton 0.610 0.595 0.363 Modaresi and Gross 0.580 0.603 0.350 (h) PAN15/14 and our Gutenberg corpus statistics Num. Cases Avg. Words / Text Training Test Training Test PAN15 100 500 340 510 PAN14 Novels 100 200 1,540 6,000 PAN14 Essays 200 200 830 820 PAN14 Essaysa 200 200 3,040 2,940 Gutenberg 192 82 3,900 3,930 (i) Gutenberg corpus subsets (genre and time period) Corpus subset Num. Cases Unique Authors 19th cent. adventures 118 177 19th cent. sci-fi 60 90 20th cent. sci-fi 96 144 Total 274 390b aCounting “known” texts as a single large text. A case in the essays corpus has one “unknown” and up to five “known” texts. bNot all authors are unique across subsets. Table 1: Column 1 shows the results of different classifiers on the PAN15 (a) and our Gutenberg corpus (b), an analysis of BAFF on the PAN15 corpus with different IDF values (c), and a comparison of 10-fold cross-validation naive Bayes with corpus-relative TFIDF as the only feature between the two corpora (d). Column 2 ranks BAFF against the top-5 PAN15 (e) and PAN14 (f / g) submissions (final score = C@1 · ROC). Column 3 lists general statistics for all corpora (h) and genres and time periods covered by our Gutenberg corpus (i). 4.2 Data Bias Just as the creators of a verification model should mitigate bias by avoiding unsuitable features and techniques, so should the creators of an evaluation dataset take precautions not to make it readily exploitable. The reason why Biases B1 and B2 inflated the performance of our model is largely due to the fact that the data is biased, too, or else the model’s biased features would not have had such a significant positive effect. Reviewing PAN’s datasets, we identify three sources of bias. (B3) Plain text heterogeneity. Inspecting the plain text files of the datasets, many of them carry artifacts that are unlikely to signal authorial style, but rather originate from the plain text converter used or the human transcriber. Examples we observed include mixed use of ASCII and Unicode ellipsis markers (some as iconic as “. . . .”), a wide variety of quotation marks and em dashes (also mixed encodings), and curly braces for parentheses. Moreover, the texts are formatted to be human-readable by preserving white space, including indentations and line breaks, which vary greatly across authors, but were not necessarily introduced by them. Given that many verification models use character n-grams as basic style representation, ngrams covering these artifacts may indicate authorship even across cases. To mitigate this bias, the texts in a dataset should be fully homogenized (particularly in the presence of Bias B4). (B4) Population homogeneity. Many monographs are required to construct a verification dataset. But the sources tapped so far lack scale, so that three shortcuts are commonly applied to maximize yield:6 For same-author cases, more than one case is constructed for a given author, (1) by systematically pairing more than two texts by that author, and/or (2) by splitting long texts (e.g., books) to obtain more text chunks from that author. For different-authors cases, (3) texts from authors for whom same-author cases exist are reused, using different, or even the same chunks also found in same-author cases. Such imbalance causes authors’ styles to be over-/underrepresented. Steady use of these shortcuts also gives rise to Bias B5. 
(B5) Accidental text overlap. The strong contribution of the TFIDF-weighted cosine similarity points to text overlap in same-author cases that renders them easier-to-discriminate from differentauthors cases. Caused by Bias B4, text overlap includes named entities (e.g., speaker names in the plays of PAN15), topic words shared between text chunks taken from the same source text, repeated phrases, and unique character sequences. The fanfiction used for PAN14 contains text reuse from the original books. Accidental overlap between cases may lead a learning algorithm astray, especially in the presence of Biases B1 and B6. For mitigation, a text overlap analysis and correction is necessary. 6E.g., the PAN15 dataset consists of hundreds of cases constructed from only 15 stage plays by six different authors. 6304 4.3 Evaluation Bias Lastly, the evaluation procedure itself is biased. (B6) Test conflation. At testing time, authorship verifiers can usually access the entire test dataset. This is unrealistic; a forensic linguist works on a case-by-case basis, and cases are independent of one another, or their underlying population is unknown. Emulating this scenario, a verifier should process only one test case at a time, without referring to previously processed cases to solve the next one. Incidentally, this policy would mitigate many of the aforementioned biases. While not enforcible in individual evaluations and shared tasks with run submissions, at PAN, it may indeed be, by adjusting the TIRA platform (Potthast et al., 2019) to handle the software runs accordingly. 5 The Webis Authorship Verification Corpus With the goal of avoiding all data biases, we constructed a new authorship verification corpus based on books obtained from Project Gutenberg:7 the Webis Authorship Verification Corpus 2019. We validate the corpus using our BAFF approach. 5.1 Corpus Construction At Project Gutenberg, transcriptions of many public domain books are provided. Given their diversity, we limit our choice to fiction books from the 19th and 20th century and the two specific genres adventure and science fiction, controlling for respective style variation. Table 1h and i compare the corpus statistics with the three PAN corpora. To avoid Bias B4, we ensured that each author is unique within, though not necessarily across any combination of time period and genre. Moreover, no texts were reused to construct different-authors cases, but texts from previously unused authors were collected. The same-author cases were created so that both texts are from different books, and where possible, neither book is from the same series of books. Altogether, we created a total of 274 verification cases of which 50 % are sameauthor and the rest different-authors cases, with a 70/30 split of training and test. The size of each text varies between 3,500 and 4,000 words (21,870 characters on average), with a few individual texts being shorter due to insufficient material. Unlike the PAN datasets, we aimed for a corpus that can also be processed by Koppel and Schler’s unmasking, an important state-of-the-art approach. 7https://www.gutenberg.org/ To avoid Bias B3, all texts were carefully normalized to remove editorial and non-authorial artifacts. We stripped book and chapter titles, illustration placeholders, ASCII art, repeated character runs, footnotes, and obvious quotations from the texts (to also avoid Bias B5), as well as any Gutenbergrelated front pages and additions to the original text. 
Gutenberg books make use of underscores to signify italic text; we removed those as well. Special characters like ellipses and quotation marks were manually replaced by a consistent ASCII representation. We further collapsed all newlines and other white space into a single space character to avoid incidental and inadvertent bias due to formatting. 5.2 Corpus Validation As per Bias B1, a high performance of TFIDFweighted cosine similarity hints at a biased dataset. To validate our corpus in this respect, we crossvalidated a naive Bayes classifier using only this feature (Table 1d), which achieved merely 57 % accuracy compared to 74 % on PAN15. Excluding cosine similarity, BAFF still gets up to 70 % accuracy (Table 1b), which marks statistical divergence measures as promising features for future verifiers. 6 Conclusion In shared tasks, sometimes basic approaches outperform more sophisticated ones. This is frequently the case when machine learning meets small data. Inadvertent properties of the data act as confounders that a learning algorithm will gladly fit onto if they are not controlled. In the case of authorship verification as per PAN, this was a major part of the problem. As long as much larger corpora remain out of reach for lack of a sufficient source of monographs, extra care needs to be taken in preparing the data, as exemplified for our corpus. Another important take-away message is that model authors in authorship verification need to be extra careful about their feature selection. Fortunately, this will come naturally to researchers in the field as they are already trained to avoid features that encode topic rather than style. In particular, we strongly suggest that future evaluations should adopt a stateless one-case-at-a-time test policy. Finally, in a spin-off study on unmasking, we generalized the algorithm to work on short, essaylength texts (Bevendorff et al., 2019): it achieves an accuracy of 0.73, an F1 of 0.69, and a precision of 0.82, marking the first baseline for our corpus. 6305 References Shlomo Argamon and Patrick Juola. 2011. Overview of the International Authorship Identification Competition at PAN-2011. In Notebook Papers of CLEF 2011 Labs and Workshops, pages 19-22. Douglas Bagnall. 2015. Author Identification using multi-headed Recurrent Neural Networks — Notebook for PAN at CLEF 2015. In CLEF 2015 Working Notes Papers. Janek Bevendorff, Benno Stein, Matthias Hagen, and Martin Potthast. 2019. Generalizing Unmasking for Short Texts. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 654–659. Association for Computational Linguistics. Esteban Castillo, Ofelia Cervantes, Darnes Vilariño, David Pinto, , and Saul León. 2014. Unsupervised Method for the Authorship Identification Task — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Daniel Castro, Yaritza Adame, María Pelaez, and Rafael Muñoz. 2015. Authorship Verification, Combining Linguistic Features and Different Similarity Functions — Notebook for PAN at CLEF 2015. In CLEF 2015 Working Notes Papers. Jordan Fréry, Christine Largeron, and Mihaela Juganaru-Mathieu. 2014. UJM at CLEF in Author Identification — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Josue Gutierrez, Jose Casillas, Paola Ledesma, Gibran Fuentes, and Ivan Meza. 2015. Homotopy Based Classification for Author Verification Task — Notebook for PAN at CLEF 2015. 
In CLEF 2015 Working Notes Papers. Oren Halvani and Christian Winter. 2015. A Generic Authorship Verification Scheme Based on Equal Error Rates — Notebook for PAN at CLEF 2015. In CLEF 2015 Working Notes Papers. Oren Halvani, Christian Winter, and Lukas Graner. 2017. Authorship verification based on compression-models. CoRR, abs/1706.00516. Patrick Juola and Efstathios Stamatatos. 2013. Overview of the Author Identification Task at PAN 2013. In CLEF 2013 Working Notes Papers. Mahmoud Khonji and Youssef Iraqi. 2014. A Slightly-modified GI-based Author-verifier with Lots of Features (ASGALF) — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Mirco Kocher and Jacques Savoy. 2015. UniNE at CLEF 2015: Author Identification — Notebook for PAN at CLEF 2015. In CLEF 2015 Working Notes Papers. Mirco Kocher and Jacques Savoy. 2017. A simple and efficient algorithm for authorship verification. JASIST, 68(1):259–269. Moshe Koppel and Jonathan Schler. 2004. Authorship Verification as a One-Class Classification Problem. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 1–7. Robert Layton. 2014. A simple Local n-gram Ensemble for Authorship Verification — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Cristhian Mayor, Josue Gutierrez, Angel Toledo, Rodrigo Martinez, Paola Ledesma, Gibran Fuentes, and Ivan Meza. 2014. A Single Author Style Representation for the Author Verification Task — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Pashutan Modaresi and Philipp Gross. 2014. A Language Independent Author Verifier Using Fuzzy C-Means Clustering — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Erwan Moreau, Arun Jayapal, , and Carl Vogel. 2014. Author Verification: Exploring a Large set of Parameters using a Genetic Algorithm — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Martin Potthast, Tim Gollub, Matti Wiegmann, and Benno Stein. 2019. TIRA Integrated Research Architecture. In Nicola Ferro and Carol Peters, editors, Information Retrieval Evaluation in a Changing World - Lessons Learned from 20 Years of CLEF. Springer. Satyam, Anand, Arnav Kumar Dawn, , and Sujan Kumar Saha. 2014. Statistical Analysis Approach to Author Identification Using Latent Semantic Analysis — Notebook for PAN at CLEF 2014. In CLEF 2014 Working Notes Papers. Efstathios Stamatatos. 2009. A Survey of Modern Authorship Attribution Methods. Journal of the American Society for Information Science and Technology, 60(3):538–556. Efstathios Stamatatos. 2013. On the robustness of authorship attribution based on character n-gram features. Journal of Law and Policy, 21(2):421–439. Efstathios Stamatatos, Walter Daelemans, Ben Verhoeven, Patrick Juola, Aurelio López López, Martin Potthast, and Benno Stein. 2015. Overview of the Author Identification Task at PAN 2015. In CLEF 2015 Working Notes Papers. Efstathios Stamatatos, Walter Daelemans, Ben Verhoeven, Martin Potthast, Benno Stein, Patrick Juola, Miguel A. Sanchez-Perez, and Alberto Barrón-Cedeño. 2014. Overview of the Author Identification Task at PAN 2014. In CLEF 2014 Working Notes Papers. 6306 Hans van Halteren. 2004. Linguistic profiling for author recognition and verification. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL 2004). Hamed Zamani, Samira Abnar, Mostafa Dehghani, Mahsa Forati, and Pariya Babaei. 2014. Submission to the Author Identification Task at PAN 2014. In CLEF 2014 Working Notes Papers.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6307–6313 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6307 Numeracy-600K: Learning Numeracy for Detecting Exaggerated Information in Market Comments Chung-Chi Chen,1 Hen-Hsen Huang,2,4 Hiroya Takamura,3 Hsin-Hsi Chen1,4 1 Department of Computer Science and Information Engineering National Taiwan University, Taiwan 2 Department of Computer Science, National Chengchi University, Taiwan 3 Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Japan 4 MOST Joint Research Center for AI Technology and All Vista Healthcare, Taiwan [email protected], [email protected], [email protected], [email protected] Abstract In this paper, we attempt to answer the question of whether neural network models can learn numeracy, which is the ability to predict the magnitude of a numeral at some specific position in a text description. A large benchmark dataset, called Numeracy-600K, is provided for the novel task. We explore several neural network models including CNN, GRU, BiGRU, CRNN, CNN-capsule, GRU-capsule, and BiGRU-capsule in the experiments. The results show that the BiGRU model gets the best micro-averaged F1 score of 80.16%, and the GRU-capsule model gets the best macroaveraged F1 score of 64.71%. Besides discussing the challenges through comprehensive experiments, we also present an important application scenario, i.e., detecting exaggerated information, for the task. 1 Introduction As a prior research from a dataset obtained from Reuters, one of the largest international news agencies, over 65.66% of market comments contain numerals. Without the numerals in market comments, we will miss a lot of useful information. Table 1 lists some instances of real-time market comments. The topics include the descriptions of market data (S1), financial statements (S2), products (S3), analyst reports (S4), and events (S5). From the table, we can see that numerals provide more detailed information than words do. For example, in comment (S1) we can learn that the share price of Apple Inc. (AAPL) has fallen, but we cannot obtain the percentage change or the price quote without the numerals. Furthermore, (S3) provides crucial information such as the date (Q2) and the amount of sales with numerals (4.6 (S1) <AAPL> SHARES DOWN 4 PCT AT $113.7IN MORNING TRADE (S2) <AAPL> Q1 REV VIEW $75.08 BLN (S3) <AAPL> - Q2 MAC SALES OF 4.6 MLN UNITS VS 4.1 MLN UNITS LAST YEAR (S4) <AAPL>: CANACCORD GENUITY RAISES PRICE TARGET TO $600 (S5) <AAPL> CFO SAYS REVENUE EXPECTED TO BE DOWN BETWEEN 5-10% IN CONSTANT CURRENCY FOR Q1 Table 1: Instances of market comments. (S6) S&P 500 <.SPX> UP 1.53 POINTS AT AFTER MARKET OPEN (S7) DOW JONES <.DJI> UP 8.70 POINTS AT AFTER MARKET OPEN (S8) U.S. Q3 GDP rises pct Table 2: Instances for the proposed task. and 4.1). These examples show the crucial roles of numerals in financial narratives. Table 2 lists three market comments selected from our dataset as examples. Investors would know from their experiences that the blanks in (S6) and (S7) should be filled with quotes of the opening indices of the S&P 500 and Dow Jones Industrial Average (DJIA), respectively. Accordingly, they would insert a 4th-magnitude numeral, 1840, into (S6), and a 5th-magnitude numeral, 16163, into (S7). We call such an interpretation as numeracy, which is the ability to interpret simple numerical concepts at some given positions. 
There are two challenging issues in (S6) and (S7): to detect the target entity, and to understand the type of information to insert into the blanks. A 6308 more fine-grained question is shown in (S8). After getting involved in markets and reading much more news and market data, investors gain intuition about market information. For example, investors can intuitively select a 1st-magnitude numeral, 2.9, to fill in the blank in (S8). We are interested in knowing if neural network (NN) models can learn this kind of numeracy from the numerous market comments. The contributions of this paper are four-fold: (1) providing a novel task and a benchmark dataset, called Numeracy-600K; (2) setting a strong baseline with thorough evaluation of several neural network models, including the state-of-the-art models, on the proposed task; (3) discussing the details of the challenges; and (4) indicating an important application scenario, i.e., detecting exaggerated information, for the proposed task. The rest of this paper is organized as follows. Section 2 surveys the related work on the identification of numerals and misinformation. Section 3 defines the task and introduces the dataset used in this study. Section 4 shows and discusses the experimental results in the comprehensive experiments. Section 5 presents an application scenario of detecting exaggerated numerals in market comments. Besides, we also extend the methodology in the market comment dataset to the general article title dataset. Section 6 concludes the remarks. 2 Related Work Murakami et al. (2017) attempted to generate market comments from stock prices. Their work used only two kinds of numerals: the latest price, and the difference of closing price between two days. As seen in Table 1, however, market comments describe various kinds of topics along with numerals. In this paper, we will provide experimental results for general market comments and show the numeracy of various NN models. Spithourakis and Riedel (2018) used language models to predict numerals in clinical and scientific datasets. They do not touch on numeral prediction in financial market comments. In this paper, we examine whether NN models can learn numeracy to insert proper information into market comments, rather than predicting exact numerals. We will discuss the reasons in Section 4.3. Our results give a positive answer to this question. Several different approaches have been used to detect false information and fake news. Wang et al. (2018a) used both text information and images in tweets to detect misleading information. Tschiatschek et al. (2018) identified fake news via crowd signals, namely, Facebook users flags of fake news. As mentioned in Shu et al. (2017), “the underlying characteristics of fake news have not been fully understood.” In this paper, we concentrate on market comments, and focus on exaggerated numeral identification in the comments. 3 Task Setting and Dataset The task is defined as to test whether NN models can learn numeracy by inserting the proper magnitude of numerals into a market comment. From the human perspective, we may feel that something makes sense intuitively, but this kind of feeling is not precise. In (S9), human experience suggests that inserting 7 into the blank would be better than inserting 10. Even experienced investors may be confused, however, if the candidates are 6.9 and 7. Therefore, to test the numeracy of a model, we separate numerals into eight classes by the magnitude and ask models to predict a suitable range. 
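To make this labelling scheme concrete, here is a small sketch that buckets a numeral into the eight magnitude classes of Table 3 and masks the target numeral with the <TRT> token later fed to the models (Section 4.1). The helper names are ours and the rules are simplified, e.g., signs and thousands separators are assumed to have been stripped beforehand, so this is not the exact preprocessing used to build the dataset.

```python
def magnitude_class(numeral: str) -> int:
    """Map a numeral to one of the eight classes of Table 3:
    0 = decimal (0 <= m < 1), 1..6 = number of digits before the
    decimal point, 7 = the '>6' class (m >= 10^6)."""
    m = abs(float(numeral))  # assumes signs/commas were already stripped
    if m < 1:
        return 0
    digits_before_point = len(str(int(m)))
    return min(digits_before_point, 7)

def mask_target(comment: str, target: str) -> str:
    """Replace the target numeral with the special token <TRT>."""
    return comment.replace(target, "<TRT>", 1)

# Example, using comment (S1): "4" is a 1st-magnitude numeral, "113.7" a 3rd,
# and 10.08 falls into the 2nd-magnitude class, as described in Section 3.
assert magnitude_class("4") == 1
assert magnitude_class("113.7") == 3
assert magnitude_class("10.08") == 2
print(mask_target("SHARES DOWN 4 PCT AT $113.7 IN MORNING TRADE", "113.7"))
```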
(S9) CHINA H1 GDP + PCT Y/Y For the experiments, we collected 600K market comments from Reuters. Numeracy means the approximate range of a numeral at some given position. In our task setting, we classify numerals, denoted as m, into eight classes by their magnitudes, as listed in Table 3. That is, we will examine whether NN models can insert a proper range of numerals into a market comment, rather than inserting the exact number. In addition to the eight classes, Table 3 also lists their distribution. We predefine some extraction rules to extract the numerals in the dataset automatically. Signs (+, -, and /) were separated from numerals. Furthermore, we only considered the magnitude beMagnitude Range Ratio Decimal 0 ≤m < 1 23.24 1 1 ≤m < 10 37.53 2 10 ≤m < 102 25.36 3 102 ≤m < 103 12.21 4 103 ≤m < 104 1.12 5 104 ≤m < 105 0.29 6 105 ≤m < 106 0.23 > 6 106 ≤m 0.01 Table 3: Distribution of numerals in the dataset. 6309 fore the decimal point, i.e., 10.08 was classified as a 2nd magnitude. Finally, we separate the dataset into training set and test set of sizes 500k and 100k, respectively. 4 Empirical Study 4.1 Models We adopt seven different architectures for our task, including CNN (Kim, 2014), GRU (Cho et al., 2014), BiGRU, CRNN (Choi et al., 2017), CNNcapsule (Sabour et al., 2017), GRU-capsule, and BiGRU-capsule (Wang et al., 2018b). In our models, each word in the input sentence is represented as a d-dimensional vector with word embeddings, and all the words are concatenated in as a d × l matrix, where l denotes the sentence length. Some preprocessing was performed on the data. We transformed all characters to lowercase. The sentence representation was padded to the maximum length of an instance. The target numeral to be inferred is replaced with a special token <TRT>. Appendices illustrate the detailed model settings. 4.2 Experimental Results For our task settings, each model outputs the result of the eight-way classification. We report the performance of the models in F1 scores and analyze the results by using confusion matrices. Table 4 summarizes the experimental results. Logistic regression (LR) with bag of words, which are composed of top-1K frequent words, sets a baseline for the proposed task. The BiGRU model beats the other models with a micro-averaged F1 score of 80.16%, and the GRU-capsule model performs the best with a macro-averaged F1 score of 64.71%. The RNN-based models outperform the CNN-based models in both the general NN framework and the capsule network framework. The results account for the importance of the order of the context in market comments when inserting numerical information. Further evidence supporting this statement is that the CRNN model obtains a higher performance than the CNN model does. Figure 1 provides the evidence for the GRUcapsule model performing the best with macroaveraged F1 score. Comparing to the other models, the GRU-capsule model correctly predicts 54% of the data in the 6th-magnitude class, which constitute 0.23% of the entire data. This result indicates that the GRU-capsule model is able to find some clues with the small size of training data. Model Micro-F1 Macro-F1 LR 71.25% 60.80% CNN 77.17% 58.49% GRU 78.25% 58.08% BiGRU 80.16% 62.74% CRNN 78.00% 64.62% CNN-capsule 75.89% 59.22% GRU-capsule 77.36% 64.71% BiGRU-capsule 77.97% 64.34% Table 4: Experimental results. 
Figure 1: Confusion matrices 4.3 Error Analysis and Future Research In this subsection, we analyze some frequent errors and point out some open issues for future research on machine learning with market comments. Table 5 lists some instances. (E1) indicates the problem of a different contract for the same financial instrument. That is, the government may publish the same bond with a different coupon rate. Whether we should replace <TRT> with 0.75 or 2.25 depends on the time of the auction described in the market comment. As another problem, the DJI in (E2) is different every day, making it hard to predict the actual amount of change. As indicated by the confusion matrix, however, the BiGRU model makes sensible predictions near the truth. (E3) shows that models should learn the past patterns (the change of the previous Disney quarterly revenue are always in 2nd magnitude) of a target companys financial statements. (S10) VOLVO <VOLVb.ST>: HSBC RAISES PRICE TARGET TO SEK 105 FROM <TRT> The numeral 10 is the ground truth for instance (E4), and 95 should be inserted into (S10), but the model predicted a 1st magnitude for (E4) and a 3rd magnitude for (S10). Both cases show that models may tend to refer to previously occurring numerals, 8 in (E4) and 105 in (S10), to decide the magnitude of the target numeral. 6310 T P Market comment Issue E1 0 1 CANADA <TRT> PCT 2014 BOND AUCTION YIELD LOW 1.110 PCT,HIGH 1.121 PCT Different contract E2 0 2 DOW JONES <.DJI> UNOFFICIALLY CLOSES UP <TRT> POINTS Market Data E3 1 2 Disney quarterly revenue rises <TRT> pct Past patterns E4 2 1 BILL BARRETT CORP <BBG.N>: BMO CUTS PRICE TARGET TO 8 FROM <TRT> Reference to other numerals E5 3 2 Maersk Drilling wins $ <TRT> mln contract from Eni Main event E6 7 6 OCC SAYS EXCHANGE-LISTED OPTIONS VOLUME REACHED <TRT> CONTRACTS IN MAY Varying amounts Table 5: Error analysis (T: truth; P: prediction; 0: the decimal; 7: magnitude greater than 6) 0 1 2 3 4 5 6 7 Market Drilling 0 1 0 2 0 0 0 0 wins 0 489 177 266 17 1 3 1 mln contract 0 39 46 30 1 0 0 0 Eni 0 51 11 2 12 0 4 0 Table 6: Co-occurrence statistics of (E5) Table 6 lists the co-occurrence statistics of the keywords and each class label for (E5). From the prediction of the 2nd magnitude, we find that models do not focus on the most frequent word (wins) but on the key term (mln contract) in this comment. Besides, the influence of company names (Maersk Drilling and Eni) may be less than that of the key term. Therefore, we infer that the models can capture the main event in a market comment. In (E6), the <TRT> label should be replaced by 377,539,997. Volume patterns vary, however, for different financial instruments. For example, the trading volume of Alphabet Inc. (GOOG) was about 4,760K (the 7th magnitude) on 2018/04/24 but about 899K (the 6th magnitude) on 2018/05/25. This indicates that trading volume can be diverse even for the same stock. The task setting in this paper is the coarsegrained setting for numeracy. More fine-grained settings toward numeracy can be extended in future works. For example, leveraging the taxonomy of the numeral information (Chen et al., 2018) and understanding the relationship between the named entities and the numbers (Chen et al., 2019) may be able to improve the performance of learning numeracy. 5 Discussion Fake news has brought negative effects, especially in the 2016 U.S. presidential election (Bakir and McStay, 2018). In the financial domain, even one piece of negative information can cause a stock price to crash. 
If someone with bad intentions introduces fake information about a company, its stock price can be influenced violently. Especially during trading hours, investors might not have enough time to verify such news, and the company could not declare its falsehood rapidly enough. In this section, we provide a first report of the simulated experimental results focusing on financial market comments, suggesting the capability of the models to detect such exaggerated numerals in market comments. We further experiment on The Examiner dataset1 to show the numeracy of models toward the article titles of crowdsourced journalism. 5.1 Exaggerated Numeral Detection To examine the BiGRU models reasoning ability, we multiply the numerals in market comments by different distortion factors. Then, the model aims to detect whether a numeral is correct, overstated or understated. For example, 138 in (S11) with 10% distortion factor will become 124.2 (-10%) and 151.8 (+10%), and both are considered as exaggerated numerals. (S11) SPLUNK INC <SPLK.O> SEES Q2 2016 REVENUE $138 MLN TO $140 MLN In this experiment, we release the boundary limitation, and test the numeracy for all real numbers. For instance, the altered results of 138 with 10% distortion factor are in the same magnitude, and that with 30% distortion factor, 96.6 and 179.4, are in different magnitude. Table 7 lists the experimental results. We find that the model obtained better performance for numerals distorted by more than 50%, with more confusion in the range below that. Furthermore, according to the microand macro-averaged F1 scores, the performance is similar among the three different cases (i.e., overstated, understated, and correct). In summary, our experiments show that we can not only learn the concept of magnitude, but also 1https://www.kaggle.com/therohk/examine-the-examiner 6311 Distortion factor Micro-F1 Macro-F1 ±10% 58.54% 57.87% ±30% 56.94% 56.11% ±50% 57.69% 56.85% ±70% 70.92% 70.85% ±90% 76.91% 76.94% Table 7: Results for exaggerated numeral detection. M 0 1 2 3 4 5 6 7 % 0.08 35.18 30.94 8.71 24.21 0.57 0.31 0.01 Table 8: Distribution of numerals in the title dataset. M.: magnitude; 7: M > 6. discover the concept of the reasonableness of the numerals in financial tweets. This kind of numeracy can be applied to many potential application scenarios, e.g., avoiding fat-finger error in the financial market, detecting the carelessly wrong of dosage in the doctor’s advice, and so on. 5.2 Numeracy in Open-Domain Article Titles The distribution of the numerals in the article title dataset is shown in Table 8. Comparing with the distribution of market comments, few article titles use decimal. On the other hand, writers of articles use more 4th-magnitude numerals than those in market comments. Total 23.25% of titles contain at least one numeral. Although the proportion is lower than that in the financial narrative, it still shows that numerals are important and informative in the general description. The experimental results are shown in Table 9. The BiGRU model outperforms the other models in both Micro-F1 and Macro-F1. Based on the experimental results on both datasets, BiGRU may be the best model for learning numeracy. In general, models perform relatively worse in the article title dataset than in the market comment dataset. The performance gaps may be caused by the following reasons. (1) The topics in titles are more diverse than those in market comments. 
(2) To attract more clicks, title writers may use a catchy numeral, which can be an exaggerated number. The illogical numbers may not only confuse humans, but also models. We leave the in-depth experiment on applying numeracy to detect illogical numbers in the future work, because more fine-grained annotations are needed. We further adopt the BiGRU model to test the numeracy with the cross-source data, i.e., one Model Micro-F1 Macro-F1 LR 62.49% 30.81% CNN 69.27% 35.96% GRU 70.92% 38.43% BiGRU 71.49% 39.94% CRNN 69.50% 36.15% CNN-capsule 63.11% 29.41% GRU-capsule 70.73% 33.57% BiGRU-capsule 71.49% 34.18% Table 9: Experimental results of titles. Training Test set Micro-F1 Macro-F1 Comment Title 31.38% 11.08% Title Comment 25.59% 10.58% Table 10: Results of learning cross-source numeracy. serves as the training set, and the other as the test set. The results in Table 10 show the difficulty of transferring numeracy toward different sources. 6 Conclusion We present a novel task of learning numeracy with the Numeracy-600K,2 including the market comments and the ariticle titles. The experimental results show that NN models can learn the proper range for a target numeral from contextual information. An experiment on an application scenario of exaggerated numeral detection suggests the capability of the proposed NN models. In future work, we plan to extend our work to further applications such as detecting exaggerated statements by investors in social media data. Acknowledgments This research was partially supported by Ministry of Science and Technology, Taiwan, under grants MOST-106-2923-E-002-012-MY3, MOST-107-2634-F-002-011-, MOST-108-2634F-002-008-, and MOST 107-2218-E-009-050-, and by Academia Sinica, Taiwan, under grant AS-TP-107-M05. This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). 2https://github.com/aistairc/Numeracy-600K 6312 References Vian Bakir and Andrew McStay. 2018. Fake news and the economy of emotions: Problems, causes, solutions. Digital Journalism, 6(2):154–175. Chung-Chi Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2019. Numeral attachment with auxiliary tasks. In The 42nd International ACM SIGIR Conference on Research & Development in Information Retrieval. ACM. Chung-Chi Chen, Hen-Hsen Huang, Yow-Ting Shiue, and Hsin-Hsi Chen. 2018. Numeral understanding in financial tweets for fine-grained crowd-based forecasting. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 136– 143. IEEE. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Keunwoo Choi, Gy¨orgy Fazekas, Mark Sandler, and Kyunghyun Cho. 2017. Convolutional recurrent neural networks for music classification. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2392–2396. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Association for Computational Linguistics. Soichiro Murakami, Akihiko Watanabe, Akira Miyazawa, Keiichi Goshima, Toshihiko Yanase, Hiroya Takamura, and Yusuke Miyao. 2017. Learning to generate market comments from stock prices. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1374–1384. Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In Advances in Neural Information Processing Systems, pages 3856–3866. Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1):22–36. Georgios Spithourakis and Sebastian Riedel. 2018. Numeracy for language models: Evaluating and improving their ability to predict numbers. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2104–2115. Sebastian Tschiatschek, Adish Singla, Manuel Gomez Rodriguez, Arpit Merchant, and Andreas Krause. 2018. Fake news detection in social networks via crowd signals. In Proceedings of the 2018 Web Conference, pages 517–524. Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, and Jing Gao. 2018a. Eann: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 849– 857. Yequan Wang, Aixin Sun, Jialong Han, Ying Liu, and Xiaoyan Zhu. 2018b. Sentiment analysis by capsules. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1165–1174. A Appendices We report the details for the replication of the experiments in the following appendices. A.1 Convolutional Neural Network (CNN) We construct a CNN model for numeracy. Modified from the CNN for sentence classification (Kim, 2014), in our model, each word in the input sentence is represented as a d−dimensional vector, and all the words are concatenated in as a d × l matrix, where l denotes the sentence length. The target numeral to be inferred is replaced with a special token <TRT>. The output of our CNN model is a softmax layer that generates the probability distribution over the magnitudes for the target numeral. The details of our CNN model are described as follows. The size of the first layer, the embedding layer, is set as d = 300. We set l = 73, which is the longest sentence in the dataset. Padding is performed for shorter sentences. The second layer is a convolutional layer with filter size 8. The third layer is a fully connected layer with dimension 32, which functions as a max-pooling layer. To avoid overfitting, a dropout layer is added with a dropout rate of 0.3. Finally, two activation functions, the rectified linear unit (ReLU) and softmax, are used in the last two layers. We chose to use the Adam optimizer. A.2 Gated Recurrent Unit (GRU) We construct an RNN-based model for numeracy with GRU. The tokens in the sentence are input as a sequence. Each token is represented as a ddimensional vector. The target numeral is replaced with the special token <TRT>. The architecture of the GRU model in this paper consists of a 300dimensional embedding layer, a 64-dimensional 6313 GRU layer, and a dropout layer with a dropout rate of 0.3. The final two layers and the optimizer are the same as those in the CNN model. A.3 Bidirectional GRU (BiGRU) The bidirectional RNN model, BiGRU, merges the outputs from both directions of the GRU model. Because units of measurement provide the important clues for numeral, a bidirectional architecture is expected to be useful with the right to left inputs. 
For example, the difference between (C1) and (C2) is the unit of measurement (i.e., POINTS and PERCENT), and it leads to different results of the magnitude of numerals. (C1) DOW JONES <.DJI> UP 8.70 POINTS (C2) DOW JONES <.DJI> UP 0.05 PERCENT A.4 Convolutional Recurrent Neural Network (CRNN) In our CRNN model, a CNN layer extracts features for each segment. Then, a max-pooling layer in the CNN model is replaced by an RNN layer and aggregates the extracted features. To examine whether replacing the pooling layer with the RNN layer can improve performance in our task, we keep the other components of the CRNN model the same as those in the CNN model, and replace the max-pooling layer with the 64-dimension BiGRU layer. A.5 CNN-capsule We also introduce one of the latest architectures, capsule network, to the task of numeracy. We combine the capsule network with either of the CNN and the GRU models. The structure of the CNN-capsule model begins with a 300dimensional embedding layer. The second layer is a convolutional layer having a kernel size of 9 and using the ReLU activation function. The third layer, called the primary layer, is used to retain the order of context information, including one convolutional layer with 32 channels. Finally, the capsule layer outputs an n × dim matrix, where n is the number of classes, set to 8 for this paper, and dim is the dimension of each capsule, set to 16. A.6 GRU-capsule The GRU-capsule model begins with a 300dimensional embedding layer, followed by a 64dimensional GRU layer, which returns the full sequence of outputs. To compare the impacts of the CNN and RNN frameworks in the CapsNet architecture, we keep the primary and capsule layers the same as those in the CNN-capsule model. A.7 BiGRU-capsule We further explore the bidirectional GRU model with the addition of capsule network. The BiGRUcapsule model consists of a 300-dimensional embedding layer, bidirectional GRU layers with a 64-dimensional hidden state, and the primary and capsule layers described above.
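To make the appendix descriptions concrete, here is a rough Keras-style sketch of the BiGRU classifier of Appendices A.2 and A.3. Layer sizes follow the text (300-dimensional embeddings, a 64-dimensional GRU read in both directions, dropout 0.3, ReLU and softmax output layers, Adam), but the vocabulary size, the 32-unit dense layer, and the choice of framework are our assumptions rather than details given by the authors.

```python
import tensorflow as tf

VOCAB_SIZE = 50_000   # assumption: the actual vocabulary size is not reported
NUM_CLASSES = 8       # the eight magnitude classes of Table 3

# Inputs are integer word-id sequences, padded to the longest comment
# (73 tokens, Appendix A.1), with the target numeral replaced by <TRT>.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 300),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),   # assumed width, carried over from A.1
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```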
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6314–6322 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6314 Large-Scale Multi-Label Text Classification on EU Legislation Ilias Chalkidis Manos Fergadiotis Prodromos Malakasiotis Ion Androutsopoulos Department of Informatics, Athens University of Economics and Business, Greece [ihalk,fergadiotis,rulller,ion]@aueb.gr Abstract We consider Large-Scale Multi-Label Text Classification (LMTC) in the legal domain. We release a new dataset of 57k legislative documents from EUR-LEX, annotated with ∼4.3k EUROVOC labels, which is suitable for LMTC, few- and zero-shot learning. Experimenting with several neural classifiers, we show that BIGRUs with label-wise attention perform better than other current state of the art methods. Domain-specific WORD2VEC and context-sensitive ELMO embeddings further improve performance. We also find that considering only particular zones of the documents is sufficient. This allows us to bypass BERT’s maximum text length limit and finetune BERT, obtaining the best results in all but zero-shot learning cases. 1 Introduction Large-scale multi-label text classification (LMTC) is the task of assigning to each document all the relevant labels from a large set, typically containing thousands of labels (classes). Applications include building web directories (Partalas et al., 2015), labeling scientific publications with concepts from ontologies (Tsatsaronis et al., 2015), assigning diagnostic and procedure labels to medical records (Mullenbach et al., 2018; Rios and Kavuluru, 2018). We focus on legal text processing, an emerging NLP field with many applications (e.g., legal judgment (Nallapati and Manning, 2008; Aletras et al., 2016), contract element extraction (Chalkidis et al., 2017), obligation extraction (Chalkidis et al., 2018)), but limited publicly available resources. Our first contribution is a new publicly available legal LMTC dataset, dubbed EURLEX57K, containing 57k English EU legislative documents from the EUR-LEX portal, tagged with ∼4.3k labels (concepts) from the European Vocabulary (EUROVOC).1 EUROVOC contains approx. 7k labels, but most of them are rarely used, hence they are under-represented (or absent) in EURLEX57K, making the dataset also appropriate for few- and zero-shot learning. EURLEX57K can be viewed as an improved version of the dataset released by Mencia and F¨urnkranzand (2007), which has been widely used in LMTC research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old. As a second contribution, we experiment with several neural classifiers on EURLEX57K, including the Label-Wise Attention Network of Mullenbach et al. (2018), called CNN-LWAN here, which was reported to achieve state of the art performance in LMTC on medical records. We show that a simpler BIGRU with self-attention (Xu et al., 2015) outperforms CNN-LWAN by a wide margin on EURLEX57K. However, by replacing the CNN encoder of CNN-LWAN with a BIGRU, we obtain even better results on EURLEX57K. Domainspecific WORD2VEC (Mikolov et al., 2013) and context-sensitive ELMO embeddings (Peters et al., 2018) yield further improvements. We thus establish strong baselines for EURLEX57K. 
As a third contribution, we investigate which zones of the documents are more informative on EURLEX57K, showing that considering only the title and recitals of each document leads to almost the same performance as considering the full document. This allows us to bypass BERT’s (Devlin et al., 2018) maximum text length limit and finetune BERT, obtaining the best results for all but zero-shot learning labels. To our knowledge, this is the first application of BERT to an LMTC task, which provides further evidence of the superiority of pretrained language models with task-specific 1See https://eur-lex.europa.eu/ for EURLEX, and https://publications.europa.eu/en/ web/eu-vocabularies for EUROVOC. 6315 fine-tuning, and establishes an even stronger baseline for EURLEX57K and LMTC in general. 2 Related Work You et al. (2018) explored RNN-based methods with self-attention on five LMTC datasets that had also been considered by Liu et al. (2017), namely RCV1 (Lewis et al., 2004), Amazon-13K, (McAuley and Leskovec, 2013), Wiki-30K and Wiki-500K (Zubiaga, 2012), as well as the previous EUR-LEX dataset (Mencia and F¨urnkranzand, 2007), reporting that attention-based RNNs produced the best results overall (4 out of 5 datasets). Mullenbach et al. (2018) investigated the use of label-wise attention in LMTC for medical code prediction on the MIMIC-II and MIMIC-III datasets (Johnson et al., 2017). Their best method, Convolutional Attention for Multi-Label Classification, called CNN-LWAN here, employs one attention head per label and was shown to outperform weak baselines, namely logistic regression, plain BIGRUs, CNNs with a single convolution layer. Rios and Kavuluru (2018) consider few- and zero-shot learning on the MIMIC datasets. They propose Zero-shot Attentive CNN, called ZEROCNN-LWAN here, a method similar to CNN-LWAN, which also exploits label descriptors. Although ZERO-CNN-LWAN did not outperform CNN-LWAN overall on MIMIC-II and MIMIC-III, it had much improved results in few-shot and zero-shot learning, among other variations of ZERO-CNN-LWAN that exploit the hierarchical relations of the labels with graph convolutions. We note that the label-wise attention methods of Mullenbach et al. (2018) and Rios and Kavuluru (2018) were not compared to strong generic text classification baselines, such as attention-based RNNs (You et al., 2018) or Hierarchical Attention Network (HAN) (Yang et al., 2016), which we investigate below. 3 The New Dataset As already noted, EURLEX57K contains 57k legislative documents from EUR-LEX2 with an average length of 727 words (Table 1).3 Each document contains four major zones: the header, which includes the title and name of the legal body 2Our dataset is available at http://nlp.cs. aueb.gr/software_and_datasets/EURLEX57K, with permission of reuse under European Union c⃝, https://eur-lex.europa.eu, 1998–2019. 3See Appendix A for more statistics. Subset Documents (D) Words/D Labels/D Train 45,000 729 5 Dev. 6,000 714 5 Test 6,000 725 5 Total 57,000 727 5 Table 1: Statistics of the EUR-LEX dataset. enforcing the legal act; the recitals, which are legal background references; the main body, usually organized in articles; and the attachments (e.g., appendices, annexes). Some of the LMTC methods we consider need to be fed with documents split into smaller units. These are often sentences, but in our experiments they are sections, thus we preprocessed the raw text, respectively. 
We treat the header, the recitals zone, each article of the main body, and the attachments as separate sections. All the documents of the dataset have been annotated by the Publications Office of EU4 with multiple concepts from EUROVOC. While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. Similar distributions were reported by Rios and Kavuluru (2018) for the MIMIC datasets. We split EURLEX57K into training (45k documents), development (6k), and test subsets (6k). We also divide the 4,271 labels into frequent (746 labels), few-shot (3,362), and zeroshot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively. 4 Methods Exact Match, Logistic Regression: A first naive baseline, Exact Match, assigns only labels whose descriptors can be found verbatim in the document. A second one uses Logistic Regression with feature vectors containing TF-IDF scores of n-grams (n = 1, 2, . . . , 5). BIGRU-ATT: The first neural method is a BIGRU with self-attention (Xu et al., 2015). Each document is represented as the sequence of its word embeddings, which go through a stack of BIGRUs (Figure 1a). A document embedding (h) is computed as the sum of the resulting context-aware embeddings (h = P i aihi), weighted by the selfattention scores (ai), and goes through a dense 4See https://publications.europa.eu/en. 6316 Figure 1: Illustration of (a) BIGRU-ATT, (b) HAN, (c) BIGRU-LWAN, and (d) BERT. layer of L = 4, 271 output units with sigmoids, producing L probabilities, one per label. HAN: The Hierarchical Attention Network (Yang et al., 2016) is a strong baseline for text classification. We use a slightly modified version, where a BIGRU with self-attention reads the words of each section, as in BIGRU-ATT but separately per section, producing section embeddings. A second-level BIGRU with self-attention reads the section embeddings, producing a single document embedding (h) that goes through a similar output layer as in BIGRU-ATT (Figure 1b). CNN-LWAN, BIGRU-LWAN: In the original Label-Wise Attention Network (LWAN) of Mullenbach et al. (2018), called CNN-LWAN here, the word embeddings of each document are first converted to a sequence of vectors hi by a CNN encoder. A modified version of CNN-LWAN that we developed, called BIGRU-LWAN, replaces the CNN encoder with a BIGRU (Figure 1c), which converts the word embeddings into context-sensitive embeddings hi, much as in BIGRU-ATT. Unlike BIGRU-ATT, however, both CNN-LWAN and BIGRU-LWAN use L independent attention heads, one per label, generating L document embeddings (h(l) = P i al,ihi, l = 1, . . . , L) from the sequence of vectors hi produced by the CNN or BIGRU encoder, respectively. Each document embedding (h(l)) is specialized to predict the corresponding label and goes through a separate dense layer (L dense layers in total) with a sigmoid, to produce the probability of the corresponding label. ZERO-CNN-LWAN, ZERO-BIGRU-LWAN: Rios and Kavuluru (2018) designed a model similar to CNN-LWAN, called ZACNN in their work and ZERO-CNN-LWAN here, to deal with rare labels. In ZERO-CNN-LWAN, the attention scores (al,i) and the label probabilities are produced by comparing the hi vectors that the CNN encoder produces and the label-specific document embeddings (h(l)), respectively, to label embeddings. 
Each label embedding is the centroid of the pretrained word embeddings of the label’s descriptor; consult Rios and Kavuluru (2018) for further details. By contrast, CNN-LWAN and BIGRU-LWAN do not consider the descriptors of the labels. We also experiment with a variant of ZERO-CNN-LWAN that we developed, dubbed ZERO-BIGRU-LWAN, where the CNN encoder is replaced by a BIGRU. BERT: BERT (Devlin et al., 2018) is a language model based on Transformers (Vaswani et al., 2017) pretrained on large corpora. For a new target task, a task-specific layer is added on top of BERT. The extra layer is trained jointly with BERT by fine-tuning on task-specific data. We add a dense layer on top of BERT, with sigmoids, that produces a probability per label. Unfortunately, BERT can currently process texts up to 512 wordpieces, which is too small for the documents of EURLEX57K. Hence, BERT can only be applied to truncated versions of our documents (see below). 5 Experiments Evaluation measures: Common LMTC evaluation measures are precision (P@K) and recall (R@K) at the top K predicted labels, averaged over test documents, micro-averaged F1 over all labels, and nDCG@K (Manning et al., 2009). However, P@K and R@K unfairly penalize methods when the gold labels of a document are fewer or more than K, respectively. Similar concerns have led to the introduction of R-Precision and nDCG@K in Information Retrieval (Manning et al., 2009), which we believe are also more appropriate for LMTC. Note, however, that R-Precision requires the number of gold labels per document to be known beforehand, which is unrealistic in practical applications. Therefore we propose using R-Precision@K (RP@K), where 6317 ALL LABELS FREQUENT FEW ZERO RP@5 nDCG@5 Micro-F1 RP@5 nDCG@5 RP@5 nDCG@5 RP@5 nDCG@5 Exact Match 0.097 0.099 0.120 0.219 0.201 0.111 0.074 0.194 0.186 Logistic Regression 0.710 0.741 0.539 0.767 0.781 0.508 0.470 0.011 0.011 BIGRU-ATT 0.758 0.789 0.689 0.799 0.813 0.631 0.580 0.040 0.027 HAN 0.746 0.778 0.680 0.789 0.805 0.597 0.544 0.051 0.034 CNN-LWAN 0.716 0.746 0.642 0.761 0.772 0.613 0.557 0.036 0.023 BIGRU-LWAN 0.766 0.796 0.698 0.805 0.819 0.662 0.618 0.029 0.019 ZERO-CNN-LWAN 0.684 0.717 0.618 0.730 0.745 0.495 0.454 0.321 0.264 ZERO-BIGRU-LWAN 0.718 0.752 0.652 0.764 0.780 0.561 0.510 0.438 0.345 BIGRU-LWAN (L2V) 0.775 0.804 0.711 0.815 0.828 0.656 0.612 0.034 0.024 BIGRU-LWAN (L2V) * 0.770 0.796 0.709 0.811 0.825 0.641 0.600 0.047 0.030 BIGRU-LWAN (ELMO) * 0.781 0.811 0.719 0.821 0.835 0.668 0.619 0.044 0.028 BERT-BASE * 0.796 0.823 0.732 0.835 0.846 0.686 0.636 0.028 0.023 Table 2: Results on EURLEX57K for all, frequent, few-shot, zero-shot labels. Starred methods use the first 512 document tokens; all other methods use full documents. Unless otherwise stated, GLOVE embeddings are used. K is a parameter. This measure is the same as P@K if there are at least K gold labels, otherwise K is reduced to the number of gold labels. 1 2 3 4 5 6 7 8 9 10 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 K top predictions BIGRU-LWAN (ELMO) BIGRU-LWAN (L2V) BERT-BASE Figure 2: R@K (green lines), P@K (red), RP@K (black) of the best methods (BIGRU-LWAN (L2V), BIGRU-LWAN (ELMO), BERT-BASE), for K = 1 to 10. Figure 2 shows RP@K for the three best systems, macro-averaged over test documents. Unlike P@K, RP@K does not decline sharply as K increases, because it replaces K by the number of gold labels, when the latter is lower than K. For K = 1, RP@K is equivalent to P@K, as confirmed by Fig. 2. 
For large values of K that almost always exceed the number of gold labels, RP@K asymptotically approaches R@K, as also confirmed by Fig. 2.5 In our dataset, there are 5.07 labels per document, hence K = 5 is reasonable.6 5See Appendix C for a more detailed discussion on the evaluation measures. 6Evaluating at other values of K lead to similar conclusions (see Fig. 2 and Appendix D). Setup: Hyper-parameters are tuned using the HYPEROPT library selecting the values with the best loss on development data.7 For the best hyper-parameter values, we perform five runs and report mean scores on test data. For statistical significance tests, we take the run of each method with the best performance on development data, and perform two-tailed approximate randomization tests (Dror et al., 2018) on test data.8 Unless otherwise stated, we used 200-D pretrained GLOVE embeddings (Pennington et al., 2014). Full documents: The first five horizontal zones of Table 2 report results for full documents. The naive baselines are weak, as expected. Interestingly, for all, frequent, and even few-shot labels, the generic BIGRU-ATT performs better than CNNLWAN, which was designed for LMTC. HAN also performs better than CNN-LWAN for all and frequent labels. However, replacing the CNN encoder of CNN-LWAN with a BIGRU (BIGRU-LWAN) leads to the best results, indicating that the main weakness of CNN-LWAN is its vanilla CNN encoder. The zero-shot versions of CNN-LWAN and BIGRU-LWAN outperform all other methods on zero-shot labels (Table 2), in line with the findings of Rios and Kavuluru (2018), because they exploit label descriptors, but more importantly because they have a component that uses prior knowledge as is (i.e., label embeddings are frozen). Exact Match also performs better on zero-shot labels, for the same reason (i.e., the prior knowledge is 7We implemented all neural methods in KERAS (https: //keras.io/). Code available at https://github. com/iliaschalkidis/lmtc-eurlex57k.git. See Appendix B for details on hyper-parameter tuning. 8We perform 10k iterations, randomly swapping in each iteration the responses (sets of returned labels) of the two compared systems for 50% of the test documents. 6318 intact). BIGRU-LWAN, however, is still the best method in few-shot learning. All the differences between the best (bold) and other methods in Table 2 are statistically significant (p < 0.01). Table 3 shows that using WORD2VEC embeddings trained on legal texts (L2V) (Chalkidis and Kampas, 2018) or ELMO embeddings (Peters et al., 2018) trained on generic texts further improve the performance of BIGRU-LWAN. Document zones: Table 4 compares the performance of BIGRU-LWAN on the development set for different combinations of document zones (Section 3): header (H), recitals (R), main body (MB), full text. Surprisingly H+R leads to almost the same results as full documents,9 indicating that H+R provides most of the information needed to assign EUROVOC labels. RP@5 nDCG@5 Micro-F1 GLOVE 0.766 0.796 0.698 L2V 0.775 0.804 0.711 GLOVE + ELMO 0.777 0.808 0.714 L2V + ELMO 0.781 0.811 0.719 Table 3: BIGRU-LWAN with GLOVE, L2V, ELMO. µwords RP@5 nDCG@5 Micro-F1 H 43 0.747 0.782 0.688 R 317 0.734 0.765 0.669 H+R 360 0.765 0.796 0.701 MB 187 0.643 0.674 0.590 Full 727 0.766 0.797 0.702 Table 4: BIGRU-LWAN with different document zones. 
First 512 tokens: Given that H+R contains enough information and is shorter than 500 tokens in 83% of our dataset’s documents, we also apply BERT to the first 512 tokens of each document (truncated to BERT’s max. length), comparing to BIGRU-LWAN also operating on the first 512 tokens. Table 2 (bottom zone) shows that BERT outperforms all other methods, even though it considers only the first 512 tokens. It fails, however, in zero-shot learning, since it does not have a component that exploits prior knowledge as is (i.e., all the components are fine-tuned on training data). 6 Limitations and Future Work One major limitation of the investigated methods is that they are unsuitable for Extreme Multi-Label Text Classification where there are hundreds of thousands of labels (Liu et al., 2017; Zhang et al., 9The approximate randomization tests detected no statistically significant difference in this case (p = 0.20). 2018; Wydmuch et al., 2018), as opposed to the LMTC setting of our work where the labels are in the order of thousands. We leave the investigation of methods for extremely large label sets for future work. Moreover, RNN (and GRU) based methods have high computational cost, especially for long documents. We plan to investigate more computationally efficient methods, e.g., dilated CNNs (Kalchbrenner et al., 2017) and Transformers (Vaswani et al., 2017; Dai et al., 2019). We also plan to experiment with hierarchical flavors of BERT to surpass its length limitations. Furthermore, experimenting with more datasets e.g., RCV1, Amazon-13K, Wiki-30K, MIMIC-III will allow us to confirm our conclusions in different domains. Finally, we plan to investigate Generalized Zero-Shot Learning (Liu et al., 2018). Acknowledgements This work was partly supported by the Research Center of the Athens University of Economics and Business. References Nikolaos Aletras et al. 2016. Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective. PeerJ Computer Science, 2:e93. Ilias Chalkidis, Ion Androutsopoulos, and Achilleas Michos. 2017. Extracting Contract Elements. In Proceedings of the 16th Edition of the International Conference on Articial Intelligence and Law, pages 19–28, London, UK. Ilias Chalkidis, Ion Androutsopoulos, and Achilleas Michos. 2018. Obligation and Prohibition Extraction Using Hierarchical RNNs. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 254–259, Melbourne, Australia. Ilias Chalkidis and Dimitrios Kampas. 2018. Deep learning in law: early adaptation and legal word embeddings trained on large corpora. Artificial Intelligence and Law. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. CoRR, abs/1901.02860. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the Conference of the North American Chapter of the Association for 6319 Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The Hitchhiker’s Guide to Testing Statistical Significance in Natural Language Processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392, Melbourne, Australia. 
Alistair EW Johnson, David J. Stone, Leo A. Celi, and Tom J. Pollard. 2017. MIMIC-III, a freely accessible critical care database. Nature. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A¨aron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2017. Neural Machine Translation in Linear Time. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A New Benchmark Collection for Text Categorization Research. J. Mach. Learn. Res., 5:361–397. Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep Learning for Extreme Multi-label Text Classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’17, pages 115–124, New York, NY, USA. Shichen Liu, Mingsheng Long, Jianmin Wang, and Michael I Jordan. 2018. Generalized Zero-Shot Learning with Deep Calibration Network. In Advances in Neural Information Processing Systems 31, pages 2005–2015. Curran Associates, Inc. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schtze. 2009. Introduction to Information Retrieval. Cambridge University Press. Julian McAuley and Jure Leskovec. 2013. Hidden Factors and Hidden Topics: Understanding Rating Dimensions with Review Text. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys ’13, pages 165–172, New York, NY, USA. Eneldo Loza Mencia and Johannes F¨urnkranzand. 2007. An Evaluation of Efficient Multilabel Classification Algorithms for Large-Scale Problems in the Legal Domain. In Proceedings of the LWA 2007, pages 126–132, Halle, Germany. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In Proceedings of the International Conference on Learning Representations (ICLR), Scottsdale, AZ. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable Prediction of Medical Codes from Clinical Text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101–1111, New Orleans, Louisiana. Association for Computational Linguistics. Ramesh Nallapati and Christopher D. Manning. 2008. Legal Docket Classification: Where Machine Learning Stumbles. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 438–446, Honolulu, Hawaii. Association for Computational Linguistics. Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, Thierry Arti`eres, Georgios Paliouras, ´Eric Gaussier, Ion Androutsopoulos, Massih-Reza Amini, and Patrick Gallinari. 2015. LSHTC: A Benchmark for Large-Scale Text Classification. CoRR, abs/1503.08581. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, USA. Association for Computational Linguistics. 
Anthony Rios and Ramakanth Kavuluru. 2018. FewShot and Zero-Shot Multi-Label Learning for Structured Label Spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3132–3142, Brussels, Belgium. Association for Computational Linguistics. George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Arti`eres, Axel-Cyrille Ngonga Ngomo, Norman Heino, ´Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16(138). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Proceedings of the 31th Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA. 6320 Marek Wydmuch, Kalina Jasinska, Mikhail Kuznetsov, R´obert Busa-Fekete, and Krzysztof Dembczynski. 2018. A no-regret generalization of hierarchical softmax to extreme multi-label classification. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 6355–6366. Curran Associates, Inc. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2048–2057, Lille, France. PMLR. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical Attention Networks for Document Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics. Ronghui You, Suyang Dai, Zihan Zhang, Hiroshi Mamitsuka, and Shanfeng Zhu. 2018. AttentionXML: Extreme Multi-Label Text Classification with Multi-Label Attention Based Recurrent Neural Networks. CoRR, abs/1811.01727. Wenjie Zhang, Junchi Yan, Xiangfeng Wang, and Hongyuan Zha. 2018. Deep Extreme Multi-label Learning. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, ICMR ’18, pages 100–107, New York, NY, USA. Arkaitz Zubiaga. 2012. Enhancing Navigation on Wikipedia with Social Tags. CoRR, abs/1202.5469. Appendix A EURLEX57K statistics Figure 3 shows the distribution of labels across EURLEX57K documents. From the 7k labels fewer than 50% appear in more than 10 documents. Such an aggressive Zipfian distribution has also been noted in medical code predictions (Rios and Kavuluru, 2018), where such thesauri are used to classify documents, demonstrating the practical importance of few-shot and zero-shot learning. B Hyper-paramater tuning Table 5 shows the best hyper-parameters returned by HYPEROPT. Concerning BERT, we set the dropout rate and learning rate to 0.1 and 5e-5, respectively, as suggested by Devlin et al. (2018), while batch size was set to 8 due to GPU memory limitations. 
Finally, we noticed that the model did Figure 3: Distribution of EUROVOC concepts across EURLEX57K documents not converge in the fourth epoch, as suggested by Devlin et al. (2018). Thus we used early-stopping with no patience and trained the model for eight to nine epochs on average among the five runs. C Evaluation Measures The macro-averaged versions of R@K and P@K are defined as follows: R@K = 1 T T X t=1 St(K) Rt (1) P@K = 1 T T X t=1 St(K) K (2) where T is the total number of test documents, K is the number of labels to be selected per document, St(K) is the number of correct labels among those ranked as top K for the t-th document, and Rt is the number of gold labels for each document. Although these measures are widely used in LMTC, we question their appropriateness for the following reasons: 1. R@K leads to excessive penalization when documents have more than K gold labels. For example, evaluating at K = 1 for a single document with 5 gold labels returns R@1 = 0.20, if the system managed to return a correct label. The system is penalized, even though it was not allowed to return more than one label. 2. P@K does the same for documents with fewer than K gold labels. For example, evaluating at K = 5 for a single document with a single gold label returns P@1 = 0.20. 3. Both measures over- or under-estimate performance on documents whose number of gold la6321 Hyper parameters BIGRU-ATT HAN CNN-LWAN BIGRU-LWAN ZACNN * ZAGRU * BERT-BASE + Nl ∈[1, 2] 1 (1,1) 1 1 1 1 12 HU ∈[200, 300, 400] 300 (300,300) 200 300 200 100 768 Dd ∈[0.1, 0.2, . . . , 0.5] 0.2 0.3 0.1 0.4 0.1 0.1 0.1 Dwe ∈[0.00, 0.01, 0.02] 0.02 0.02 0.01 0.00 0.00 0.00 0.00 BS ∈[8, 12, 16] 12 16 12 16 16 16 8 Table 5: Best hyper parameters for neural methods. Nl: number of layers, HU: hidden units size, Dd: dropout rate across dimensions, Dwe: dropout rate of word embeddings, BS: batch size. * Hidden units size is fixed to word embedding dimensionality, + Nl, HU are fixed from the pre-trained model. Dropout rate fixed as suggested by Devlin et al. (2018). OVERALL FREQUENT FEW ZERO @1 @5 @10 @1 @5 @10 @1 @5 @10 @1 @5 @10 Exact Match 0.131 0.084 0.080 0.194 0.166 0.141 0.037 0.037 0.036 0.178 0.042 0.022 Logistic Regression 0.861 0.613 0.378 0.864 0.604 0.368 0.458 0.169 0.094 0.011 0.002 0.002 BIGRU-ATT 0.899 0.654 0.407 0.893 0.627 0.382 0.551 0.212 0.121 0.015 0.008 0.007 HAN 0.894 0.643 0.401 0.889 0.620 0.378 0.510 0.199 0.114 0.020 0.011 0.008 CNN-LWAN 0.853 0.617 0.395 0.849 0.596 0.374 0.521 0.204 0.117 0.011 0.007 0.007 BIGRU-LWAN 0.907 0.661 0.414 0.900 0.631 0.387 0.599 0.222 0.124 0.011 0.006 0.006 ZERO-CNN-LWAN 0.842 0.589 0.371 0.837 0.572 0.355 0.447 0.164 0.094 0.202 0.069 0.040 ZERO-BIGRU-LWAN 0.874 0.619 0.386 0.867 0.599 0.367 0.488 0.184 0.107 0.247 0.093 0.057 BIGRU-LWAN (L2V) 0.913 0.669 0.417 0.905 0.639 0.390 0.593 0.219 0.122 0.013 0.007 0.008 BIGRU-LWAN (L2V) * 0.915 0.664 0.413 0.905 0.637 0.387 0.586 0.214 0.120 0.013 0.010 0.010 BIGRU-LWAN (ELMO) * 0.921 0.674 0.419 0.912 0.644 0.391 0.595 0.226 0.127 0.011 0.009 0.007 BERT-BASE * 0.922 0.687 0.424 0.914 0.656 0.394 0.611 0.229 0.129 0.019 0.006 0.007 Table 6: P@1, P@5 and P@10 results on EURLEX57K for all, frequent, few-shot, zero-shot labels. Starred methods use the first 512 document tokens; all other methods use full documents. Unless otherwise stated, GLOVE embeddings are used. bels largely diverges from K. This is clearly illustrated in Figure 2 of the main article. 4. 
Because of these drawbacks, both measures do not correctly single out the best methods. Based on the above arguments, we believe that R-Precision@K (RP@K) and nDCG@K lead to a more informative and fair evaluation. Both measures adjust to the number of gold labels per document, without over- or under-estimating performance when documents have few or many gold labels. The macro-averaged versions of the two measures are defined as follows: RP@K = 1 T T X t=1 St(K) min (K, Rt) (3) nDCG@K = 1 T T X t=1 K X k=1 2St(k) −1 log (1 + k) (4) Again, T is the total number of test documents, K is the number of labels to be selected, St(K) is the number of correct labels among those ranked as top K for the t-th document, and Rt is the number of gold labels for each document. In the main article we report results for K = 5. The reason is that the majority of the documents of EURLEX57K (57.7%) have at most 5 labels. The detailed distributions can be seen in Figure 4. 1 5 10 15 20 26 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Number of labels per document Probability distribution Cumulative distribution Figure 4: Distribution of number of labels per document in EURLEX57K. D Experimental Results In Tables 6–9, we present additional results for the main measures used across the LMTC literature (P@K, R@K, RP@K, nDGC@K). 6322 OVERALL FREQUENT FEW ZERO @1 @5 @10 @1 @5 @10 @1 @5 @10 @1 @5 @10 Exact Match 0.026 0.087 0.168 0.045 0.207 0.344 0.022 0.111 0.214 0.161 0.194 0.206 Logistic Regression 0.195 0.641 0.764 0.234 0.719 0.845 0.313 0.507 0.560 0.011 0.011 0.022 BIGRU-ATT 0.204 0.685 0.824 0.242 0.749 0.880 0.382 0.629 0.703 0.015 0.040 0.062 HAN 0.203 0.675 0.811 0.241 0.740 0.871 0.355 0.596 0.673 0.018 0.051 0.079 CNN-LWAN 0.193 0.647 0.800 0.229 0.713 0.862 0.360 0.612 0.681 0.011 0.036 0.061 BIGRU-LWAN 0.205 0.692 0.836 0.243 0.755 0.891 0.420 0.661 0.725 0.011 0.029 0.060 ZERO-CNN-LWAN 0.189 0.617 0.752 0.223 0.683 0.820 0.300 0.494 0.556 0.189 0.321 0.376 ZERO-BIGRU-LWAN 0.197 0.648 0.782 0.232 0.716 0.847 0.335 0.560 0.635 0.231 0.438 0.531 BIGRU-LWAN (L2V) 0.207 0.700 0.842 0.246 0.764 0.898 0.414 0.655 0.716 0.012 0.034 0.066 BIGRU-LWAN (L2V) * 0.207 0.696 0.835 0.245 0.760 0.891 0.409 0.640 0.707 0.013 0.047 0.084 BIGRU-LWAN (ELMO) * 0.208 0.705 0.844 0.249 0.770 0.900 0.410 0.667 0.732 0.011 0.044 0.061 BERT-BASE * 0.209 0.719 0.855 0.250 0.784 0.908 0.428 0.684 0.752 0.018 0.028 0.068 Table 7: R@1, R@5 and R@10 results on EURLEX57K for all, frequent, few-shot, zero-shot labels. Starred methods use the first 512 document tokens; all other methods use full documents. Unless otherwise stated, GLOVE embeddings are used. 
OVERALL FREQUENT FEW ZERO @1 @5 @10 @1 @5 @10 @1 @5 @10 @1 @5 @10 Exact Match 0.131 0.097 0.168 0.194 0.219 0.344 0.037 0.111 0.214 0.178 0.194 0.206 Logistic Regression 0.861 0.710 0.765 0.864 0.767 0.846 0.458 0.508 0.560 0.011 0.011 0.022 BIGRU-ATT 0.899 0.758 0.824 0.893 0.799 0.880 0.551 0.631 0.703 0.015 0.040 0.062 HAN 0.894 0.746 0.811 0.889 0.789 0.872 0.510 0.597 0.673 0.020 0.051 0.079 CNN-LWAN 0.853 0.716 0.801 0.849 0.761 0.862 0.521 0.613 0.681 0.011 0.036 0.061 BIGRU-LWAN 0.907 0.766 0.836 0.900 0.805 0.891 0.599 0.662 0.725 0.011 0.029 0.060 ZERO-CNN-LWAN 0.842 0.684 0.753 0.837 0.730 0.820 0.447 0.495 0.556 0.202 0.321 0.376 ZERO-BIGRU-LWAN 0.874 0.718 0.782 0.867 0.764 0.847 0.488 0.561 0.635 0.247 0.438 0.531 BIGRU-LWAN (L2V) 0.913 0.775 0.842 0.905 0.815 0.898 0.593 0.657 0.716 0.013 0.034 0.066 BIGRU-LWAN (L2V) * 0.915 0.770 0.836 0.905 0.811 0.891 0.586 0.641 0.707 0.013 0.047 0.084 BIGRU-LWAN (ELMO) * 0.921 0.781 0.845 0.912 0.821 0.901 0.595 0.668 0.732 0.011 0.044 0.061 BERT-BASE * 0.922 0.796 0.856 0.914 0.835 0.908 0.611 0.686 0.752 0.019 0.028 0.068 Table 8: RP@1, RP@5 and RP@10 results on EURLEX57K for all, frequent, few-shot, zero-shot labels. Starred methods use the first 512 document tokens; all other methods use full documents. Unless otherwise stated, GLOVE embeddings are used. OVERALL FREQUENT FEW ZERO @1 @5 @10 @1 @5 @10 @1 @5 @10 @1 @5 @10 Exact Match 0.131 0.099 0.134 0.194 0.201 0.262 0.037 0.074 0.112 0.178 0.186 0.189 Logistic Regression 0.861 0.741 0.766 0.864 0.781 0.819 0.458 0.470 0.489 0.011 0.011 0.014 BIGRU-ATT 0.899 0.789 0.819 0.893 0.813 0.853 0.551 0.580 0.608 0.015 0.027 0.034 HAN 0.894 0.778 0.808 0.889 0.805 0.845 0.510 0.544 0.573 0.020 0.034 0.043 CNN-LWAN 0.853 0.746 0.786 0.849 0.772 0.822 0.521 0.557 0.583 0.011 0.023 0.032 BIGRU-LWAN 0.907 0.796 0.829 0.900 0.819 0.861 0.599 0.618 0.643 0.011 0.019 0.029 ZERO-CNN-LWAN 0.842 0.717 0.749 0.837 0.745 0.789 0.447 0.454 0.478 0.202 0.264 0.281 ZERO-BIGRU-LWAN 0.874 0.752 0.781 0.867 0.780 0.819 0.488 0.510 0.539 0.247 0.345 0.375 BIGRU-LWAN (L2V) 0.913 0.804 0.836 0.905 0.828 0.869 0.593 0.612 0.635 0.013 0.024 0.035 BIGRU-LWAN (L2V) * 0.915 0.801 0.832 0.905 0.825 0.864 0.586 0.600 0.625 0.013 0.030 0.042 BIGRU-LWAN (ELMO) * 0.921 0.811 0.841 0.912 0.835 0.874 0.595 0.619 0.643 0.011 0.028 0.034 BERT-BASE * 0.922 0.823 0.851 0.914 0.846 0.882 0.611 0.636 0.662 0.019 0.023 0.036 Table 9: nDCG@1, nDCG@5 and nDCG@10 results on EURLEX57K for all, frequent, few-shot, zero-shot labels. Starred methods use the first 512 document tokens; all other methods use full documents. Unless otherwise stated, GLOVE embeddings are used.
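The four measures reported in Tables 6-9 can be summarized in one short sketch. The list-based input format below is an assumption made for illustration, and nDCG@K is written in the standard binary-gain form normalized by the ideal DCG, which may differ in minor details from the compressed definition printed above.

```python
import numpy as np

def ranking_measures(ranked_labels, gold_labels, k=5):
    """Macro-averaged P@K, R@K, RP@K and nDCG@K over T test documents.

    ranked_labels: per-document label lists sorted by predicted score.
    gold_labels:   per-document collections of gold labels.
    """
    p, r, rp, ndcg = [], [], [], []
    discounts = 1.0 / np.log2(np.arange(2, k + 2))       # 1 / log2(1 + rank)
    for ranked, gold in zip(ranked_labels, gold_labels):
        gold = set(gold)
        hits = np.array([1.0 if lab in gold else 0.0 for lab in ranked[:k]])
        s_t_k = hits.sum()                               # correct labels in top K
        p.append(s_t_k / k)                              # Eq. 2
        r.append(s_t_k / len(gold))                      # Eq. 1
        rp.append(s_t_k / min(k, len(gold)))             # Eq. 3
        dcg = float(hits @ discounts[: len(hits)])
        idcg = float(discounts[: min(k, len(gold))].sum())
        ndcg.append(dcg / idcg if idcg > 0 else 0.0)     # Eq. 4, binary gains
    return tuple(float(np.mean(x)) for x in (p, r, rp, ndcg))
```

For a document with a single gold label returned at rank 1 and K = 5, this gives P@5 = 0.2 but RP@5 = nDCG@5 = 1.0, which is exactly the adjustment argued for above.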
2019
636
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6323–6330 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6323 Why Didn’t You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models Varun Kumar ∗ Amazon Alexa Cambridge, MA [email protected] Alison Smith-Renner University of Maryland College Park, MD [email protected] Leah Findlater University of Washington Seattle, Washington [email protected] Kevin Seppi Brigham Young University Provo, UT [email protected] Jordan Boyd-Graber University of Maryland College Park, MD [email protected] Abstract To address the lack of comparative evaluation of Human-in-the-Loop Topic Modeling (HLTM) systems, we implement and evaluate three contrasting HLTM modeling approaches using simulation experiments. These approaches extend previously proposed frameworks, including constraints and informed prior-based methods. Users should have a sense of control in HLTM systems, so we propose a control metric to measure whether refinement operations’ results match users’ expectations. Informed prior-based methods provide better control than constraints, but constraints yield higher quality topics. 1 Human-in-the-Loop Topic Modeling Topic models help explore large, unstructured text corpora by automatically discovering the topics discussed in the documents (Blei et al., 2003). However, generated topic models are not perfect; they may contain incoherent or loosely connected topics (Chang et al., 2009; Mimno et al., 2011; Boyd-Graber et al., 2014). Human-in-the-Loop Topic Modeling (HLTM) addresses these issues by incorporating human knowledge into the modeling process. Existing HLTM systems expose topic models as their topic words and documents, and users provide feedback to improve the models using varied refinement operations, such as adding words to topics, merging topics, or removing documents (Smith et al., 2018; Wang et al., 2019). Systems also vary in how they incorporate feedback, such as “must∗Work performed at University of Maryland, College Park link” and “cannot-link” constraints (Andrzejewski et al., 2009; Hu et al., 2014), informed priors (Smith et al., 2018), or document labels (Yang et al., 2015). However, evaluations of these systems are either not comparative (Choo et al., 2013; Lee et al., 2017) or compare against noninteractive models (Hoque and Carenini, 2015; Hu et al., 2014) or for only a limited set of refinements (Yang et al., 2015; Xie et al., 2015). Evaluations are thus silent on which HLTM system best supports users in improving topic models: they ignore whether refinements are applied correctly or how they compare with other approaches. Moreover, comparative evaluations can be difficult because existing HLTM systems support diverse refinement operations with little overlap. To address these issues, we implement three HLTM systems that differ in the techniques for incorporating prior knowledge (informed priors vs. constraints) and for inference (Gibbs sampling vs. variational EM), but that all support seven refinement operations preferred by end users (Lee et al., 2017; Musialek et al., 2016). We compare these systems through experiments simulating random and “good” user behavior. The two Gibbs sampling-based systems extend prior work (Yang et al., 2015; Smith et al., 2018), but to our knowledge, the combination of informed priors and variational inference in an HLTM system is new. Additionally, while Yang et al. 
incorporate word correlation knowledge and document label knowledge into topic models, this paper extends their modeling approach with the implementation of seven new user refinements. We also introduce metrics to assess the degree to which HLTM systems listen to users (user control), a key user interface design principle for human-in-the-loop systems (Amershi et al., 2014; Du et al., 2017). In general, informed priors provide more control while constraints produce higher quality topics. This paper provides three contributions: (1) implementation of an HLTM system using informed priors and variational inference, (2) experimental comparison of three HLTM systems, and (3) metrics to evaluate user control in HLTM systems.

2 Human Feedback and LDA

We briefly describe Latent Dirichlet Allocation (Blei et al., 2003, LDA) and outline the experimental conditions and our implementation.

2.1 LDA Inference

LDA is generative, modeling documents as mixtures of $k$ topics, where each topic is a multinomial distribution $\phi_z$ over the vocabulary $V$. Each document $d$ is an admixture of topics $\theta_d$. Each word indexed by $i$ in document $d$ is generated by first sampling a topic assignment $z_{d,i}$ from $\theta_d$ and then sampling a word from the corresponding topic $\phi_{z_{d,i}}$. Collapsed Gibbs sampling (Griffiths and Steyvers, 2004) and variational Expectation-Maximization (Blei et al., 2003, EM) are two popular inference methods for computing the posterior $p(z, \phi, \theta \mid w, \alpha, \beta)$. Gibbs sampling iteratively samples a topic assignment $z_{d,i} = t$, given an observed token $w_{d,i}$ in document $d$ and the other topic assignments $z_{-d,n}$, with probability

$P(z_{d,i} = t \mid z_{-d,n}, w) \propto (n_{d,t} + \alpha) \, \frac{n_{w,t} + \beta}{n_t + V\beta}$   (1)

Here, $n_{d,t}$ is the count of tokens in document $d$ assigned to topic $t$, $n_{w,t}$ is the count of token $w$ in topic $t$, and $n_t$ is the marginal count of tokens assigned to topic $t$. Alternatively, variational EM approximates the posterior using a tractable family of distributions by first defining a mean-field variational distribution

$q(z, \phi, \theta \mid \lambda, \gamma, \pi) = \prod_{k=1}^{K} q(\phi_k \mid \lambda_k) \prod_{d=1}^{D} q(\theta_d \mid \gamma_d) \prod_{n=1}^{N_d} q(z_{d,n} \mid \pi_{d,n})$   (2)

where $\gamma_d$ and $\pi_d$ are local parameters of the distribution $q$ for document $d$, and $\lambda$ is a global parameter. Inference minimizes the KL divergence between the variational distribution and the true posterior. While there are many LDA variants for specific applications (Boyd-Graber et al., 2017), we focus on models that interactively refine an initial topic clustering.

2.2 HLTM Modeling Approaches

To investigate adherence to user feedback and topic quality improvements, we compare HLTM systems based on three modeling approaches. Each of these approaches incorporates user feedback by first forgetting what the model learned before, by unassigning words from topics (Hu et al., 2014), and then injecting new information based on user feedback into the model. We compare two existing techniques for injecting new information: (1) asymmetric priors (or informed priors), which are used extensively for injecting knowledge into topic models (Fan et al., 2017; Zhai et al., 2012; Pleplé, 2013; Smith et al., 2018; Wang et al., 2019) by modifying the Dirichlet parameters $\alpha$ and $\beta$, and (2) constraints (Yang et al., 2015), in which a knowledge source $m$ is incorporated as a potential function $f_m(z, w, d)$ of the hidden topic $z$ of word type $w$ in document $d$.
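Equation 1 can be turned into code in a few lines. The sketch below performs one collapsed Gibbs update over the count tables $n_{d,t}$, $n_{w,t}$, and $n_t$; the array layout and function signature are our own illustrative choices rather than the authors' implementation, and the informed-prior and constraint variants discussed next would respectively modify the $\beta$ term or re-weight the resulting distribution.

```python
import numpy as np

def gibbs_resample_token(d, i, w, z, n_dt, n_wt, n_t, alpha, beta, rng):
    """One collapsed Gibbs update for token i of word type w in document d (Eq. 1).

    n_dt: (num_docs, K) document-topic counts
    n_wt: (vocab_size, K) word-topic counts
    n_t:  (K,) marginal topic counts
    """
    V = n_wt.shape[0]
    old_t = z[d][i]
    # Remove the token's current assignment from the count tables.
    n_dt[d, old_t] -= 1
    n_wt[w, old_t] -= 1
    n_t[old_t] -= 1
    # P(z_{d,i} = t | z_{-d,n}, w) proportional to (n_{d,t} + alpha)(n_{w,t} + beta) / (n_t + V*beta)
    probs = (n_dt[d] + alpha) * (n_wt[w] + beta) / (n_t + V * beta)
    probs = probs / probs.sum()
    new_t = rng.choice(len(n_t), p=probs)
    # Record the new assignment and restore the counts.
    z[d][i] = new_t
    n_dt[d, new_t] += 1
    n_wt[w, new_t] += 1
    n_t[new_t] += 1
    return new_t
```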
While other frameworks exist (Foulds et al., 2015; Andrzejewski et al., 2009; Hu et al., 2014; Xie et al., 2015; Roberts et al., 2014), we focus on informed priors and constraints, as these are flexible to support the refinement operations preferred by users and reasonably fast enough to support “rapid interaction cycles” required for effective interactive systems (Amershi et al., 2014). We also compare two inference techniques for topic models (1) Gibbs sampling and (2) variational EM inference. Because HLTM requires forgetting existing topic assignments (Hu et al., 2014), we use two different methods to forget existing topic assignments. In Gibbs sampling, information is forgotten by adjusting topic-word assignments, zi. In variational EM, λt,w encodes how closely the word w is related to topic t. In the E-step, the model assigns latent topics based on the current value of λ, and in the M-step, the model updates λ using the current topic assignments. Because the model relies on a fixed λ for topic assignment, information for a word w in a topic t can be forgotten by resetting λt,w to the prior βt,w. Together, these injection and inference techniques result in three HLTM modeling approaches: 6325 Informed priors using Gibbs sampling (infogibbs) forgets topic-word assignments zi and injects new information by modifying Dirichlet parameters, α and β. Smith et al. (2018) implement seven refinements for this approach. We extend their work with a create topic refinement. Informed priors using variational inference (info-vb) forgets topic-word assignments for a word w in topic t by resetting the value of λt,w. This approach manipulates priors, α and β, to incorporate new knowledge like info-gibbs. We define and implement seven user-preferred refinement operations for this approach. Constraints using Gibbs sampling (const-gibbs) forgets topic assignments like in info-gibbs, but instead of prior manipulation, injects new information into the model using potential functions, fm(z, m, d) (Yang et al., 2015). We define and implement seven user-preferred refinement operations for this approach. 2.3 Refinement Implementations Our three systems support the following seven refinements that users request in HLTM systems (Musialek et al., 2016; Lee et al., 2017): Remove word w from topic t. For all three systems, first forget all w’s tokens wi from t. Then, for info-gibbs and info-vb, assign a very small prior1 ϵ to w in t. For const-gibbs, add a constraint2 fm(z, w, d), such that fm(z, w, d) = log(ϵ) if z = t and w = x, else assign 0. Add word w to topic t. For all three systems, first forget w from all other topics. Then, for infogibbs and info-vb, increase the prior of w in t by the difference between the topic-word counts of w and topic's top word ˆw in t. For const-gibbs, add a constraint fm(z, w, d), such that fm(z, w, d) = 0 if z = t and w = x, else assign log(ϵ). Remove document d from topic t. For all models, first forget the topic assignment for all words in the document d. Then, for info-gibbs and infovb, overwrite the previous prior value with a very small prior ϵ, to t in αd. For const-gibbs, add a constraint fm(z, w, d), such that fm(z, w, d) = log(ϵ) if z = t and d = x, else assign 0. 1We use ϵ = 10−8 2We use log(ϵ) to make it a soft constraint. Replacing it with -∞will make it a hard constraint. Merge topics t1 and t2 into a single topic, t1. For info-gibbs and const-gibbs, assign t1 to all tokens previously assigned to t2. 
This effectively removes t2 and updates t1, which should represent both t1 and t2. For info-vb, add the counts from $\lambda_{t_2}$ to $\lambda_{t_1}$ and remove the row of $\lambda$ corresponding to t2.

Split topic t, given seed words s, into two topics: tn, containing s, and t, without s. For each vocabulary word, move a fraction of probability mass from t to tn, as proposed by Pleplé (2013). Then, for info-gibbs and info-vb, assign a high prior to all s in tn. Following Fan et al. (2017), we use 100 as the high prior. For const-gibbs, to assign s to tn, add a constraint $f_m(z, w, d)$ such that $f_m(z, w, d) = 0$ if $z = t_n$ and $w = w_i \in s$, else assign $\log(\epsilon)$.

Change word order, such that w2 is ranked higher than w1 in topic t. In info-gibbs, increase the prior of w2 in t by the topic-word count difference $n_{w_1,t} - n_{w_2,t}$. In info-vb, increase the prior by $\lambda_{t,w_1} - \lambda_{t,w_2}$. For const-gibbs, compute the ratio $r$ between the topic-word count difference $n_{w_1,t} - n_{w_2,t}$ and the count of tokens of word w2 assigned to any topic other than t, $n_{w_2,x}, x \neq t$. Then, add a constraint $f_m(z, w, d)$ such that $f_m(z, w, d) = 0$ if $z = t$ and $w = w_2$, else assign $\delta$, where $\delta = \log(\epsilon)$ if $r > 1$, else $\delta = 1.0 - r$.

Create topic tn, given seed words s. First forget the topic assignment for all s. Then, for info-gibbs and info-vb, assign a high prior to s. For const-gibbs, to assign s to tn, add a constraint $f_m(z, w, d)$ such that $f_m(z, w, d) = 0$ if $z = t_n$ and $w = w_i \in s$, else assign $\log(\epsilon)$.

3 Measuring Control

Prior work in interactive systems emphasizes the importance of doing what users ask, that is, end-user control (Shneiderman, 2010; Amershi et al., 2014). However, HLTM, which must balance modeling the data well and fulfilling users' desires, can frustrate users when refinements are not applied as expected (Smith et al., 2018). Evaluation metrics such as topic coherence, perplexity, and log-likelihood measure how well topics model the data, but they are not sufficient to measure whether user feedback is incorporated as expected. Therefore, we propose new control metrics to measure how well models reflect users' refinement intentions. Consider a topic t as a ranked word list sorted in descending order of word probabilities in t. Let $r^{M_1}_{w,t}$ denote the rank of a word w in topic t in model $M_1$. After applying a word-level refinement, the rank of w in the updated model $M_2$ is $r^{M_2}_{w,t}$. For word-level refinements, such as add word, remove word, and change word order, compute control as the ratio of the actual rank change, the difference $r^{M_1}_{w,t} - r^{M_2}_{w,t}$, to the expected rank change. A score of 1.0 indicates that the model perfectly applied the refinement, while a negative score indicates the model did the opposite of what was desired. For remove document, use the same definition as remove word, except consider a topic as a ranked document list. For create topic, compute control as the ratio of the number of seed words in the created topic to the total number of provided seed words. For merge topics, control is defined as the ratio of the number of words in the merged topic that came from either of the parent topics to the total number of words shown to a user. For split topic, control is the average of the control scores of the parent topic and the child topic, computed using the control definition for create topic.

4 HLTM System Comparison

To compare how the three HLTM systems model data and adhere to user feedback (i.e., provide control), we need user data; however, real user interaction is expensive to obtain.
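Before turning to how we obtain that user data, the control scores just defined can be written compactly. In the sketch below, the ranked word lists and the expected rank change are passed in explicitly; the function names and this calling convention are illustrative choices, not the authors' released code.

```python
def word_level_control(ranked_before, ranked_after, word, expected_change):
    """Control for add word / remove word / change word order (Section 3).

    ranked_before, ranked_after: topic word lists from models M1 and M2,
    sorted by probability with the most probable word first.
    expected_change: the rank change the refinement should have produced.
    """
    r_m1 = ranked_before.index(word)        # rank of the word in M1 (0 = top)
    r_m2 = ranked_after.index(word)         # rank of the word in M2
    actual_change = r_m1 - r_m2             # positive when the word moves up
    return actual_change / expected_change  # 1.0 = applied exactly; < 0 = opposite

def create_topic_control(created_topic_words, seed_words):
    """Control for create topic: fraction of seed words present in the new topic."""
    return len(set(created_topic_words) & set(seed_words)) / len(seed_words)
```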
So, we simulate a range of user behavior with these systems: users that aim to improve topics, “good users”, and those that behave unexpectedly, “random users”. The simulations use a data set of 7000 news articles, 500 articles each for fourteen different news categories, such as business, law, and money, collected using the Guardian API.3 4.1 Simulated Users The “random user” refines randomly. For example, remove document, deletes a randomly selected document from a randomly selected topic. Our “good user” reflects a realistic user behavior pattern: identify a mixed category topic and apply refinements to focus the topic on its most dominant category. Thus the “good user”—with access to true document categories—first chooses a topic associated with multiple categories of documents and determines the dominant category of the top documents for the topic. Then, refinement operations push the topic to the dominant category. For 3https://open-platform.theguardian.com example, the “good user” may remove a document which does not belong to the dominant category. Additional simulation are found in Appendix A. 4.2 Method We train forty initial LDA models, twenty with ten topics and twenty with twenty topics for the news articles, resulting in models with less and more topics than the true number of categories. For each of the three HLTM systems and each of the seven refinement types, we randomly select one of the pre-trained models. The create and split topic refinement types select from the models with ten topics, ensuring that topics have overlapping categories, while the others select from the models with twenty topics. We then apply a refinement as dictated by the simulated user. For the “random user”, we randomly select refinement parameters, such as topic and word (Appendix A.1), and for the “good user”, we choose topic and refinement parameters intending to improve the topics (Appendix A.2). We apply the refinement (Section 2.3) and run inference until the model converges or reaches a threshold of twenty Gibbs sampling and three EM iterations. We compute control (Section 3) of the refinement and change in topic coherence using NPMI derived from Wikipedia for the top twenty topic words (Lau et al., 2014). We repeat this process 100 times for each refinement type, simulated user, and HLTM system. 5 Informed Priors Listen to Users, while Constraints Produce Coherent Topics Table 1 shows the per-refinement control and coherence deltas for the three different HLTM systems. As detailed in Appendix B, Kruskal-Wallis tests show that HLTM systems have significantly different (p < .05) control scores for all refinements for the “good user” and for all but remove word for the “random user.” Coherence deltas were also significantly different for all refinements except add word, where const-gibbs yields consistently higher coherence improvements than the other conditions aside from remove document. For remove word, and merge topics, all methods provide good control (scores close to 1.0). However, the informed prior methods, info-vb and info-gibbs, provide more control, for both the random (CRand) and good (CGood) users, compared to const-gibbs. 
Informed prior methods also excel at refinements that promote topic words, 6327 const-gibbs info-gibbs info-vb CRand CGood QGood∗ CRand CGood QGood∗ CRand CGood QGood∗ remove w 1.0 (0.0) 1.0 (0.0) 5.4 (9.7) 1.0 (0.0) 1.0 (0.0) 3.0 (8.9) 1.0 (0.0) 1.0, (0.0) 1.2 (5.0) remove d 1.0 (0.0) 1.0 (0.0) -1.7 (10.8) 1.0 (0.0) 1.0 (0.0) .8 (4.5) .72 (.4) .85 (.25) -6.0 (13.2) merge t .97 (.05) 1.0 (0.0) 6.3 (8.7) .96 (.05) 1.0 (0.0) -.43 (9.3) .99 (.02) .99 (.02) 1.4 (9.8) add w .82 (.29) .86 (.24) 3.0 (9.4) 1.0 (0.0) .98 (.03) 3.1 (6.4) .98 (.04) .98 (.02) 1.7 (5.6) create t .08 (.10) .81 (.13) -6.6 (13.7) .98 (.11) .98 (.04) -11 (10.4) 1.0 (0.0) 1.0 (0.0) -13.0 (8.4) split t .91 (.09) .79 (.19) 1.9 (17.9) .93 (.06) .87 (.19) -7.9 (13.5) 1.0 (0.0) .93 (.16) -1.6 (8) reorder w .41 (.53) .19 (.20) 1.6 (7) 1.19 (.46) .56 (.24) -1.0 (5.5) 1.02 (.27) .44 (.24) -1.0 (5.1) Table 1: Simulation results, reported as mean (SD): control with the random (CRand) and good (CGood) users, and coherence deltas (QGood) for the good user (we omit coherence for the random user as the goal there is not to improve the topics). ∗values reported as E-04. such as add word and create topic. On the other hand, const-gibbs supports defining token and document-level constraints, which ensure almost perfect control for refinements that require restricting certain words or documents, such as remove word and remove document. Additionally, comparing good and random users, all systems provide similar control except for const-gibbs for create topic: .81 for good (CGood) compared to .08 for random (CRand). This is because const-gibbs is limited by the underlying data and cannot generate topics containing random, unrelated seed words, lowering control for the “random user.” Informed prior models, however, inflate priors to adhere to user feedback, regardless of whether it aligns with the underlying data, so these methods provide higher control even for random input. Finally, for change word order, all three systems lack control. As topic models are probabilistic models, it is therefore difficult to maintain the exact user provided word order. 5.1 Why Informed Priors Offer Control Informed priors provide higher control than constraints for refinements that require promoting words, such as add word and create topic. To understand the difference between these two feedback techniques, we conduct an additional simulation to compare const-gibbs and info-gibbs: we generate an initial topic model of 10 topics and apply add word refinements to explore varied control of the feedback techniques. The initial model includes a law topic with the top ten words: “court, law, justice, rights, legal, case, police, human, public, courts”. A user wants to add the word “injustice”, initially ranked at 1035th position, to this topic using both constgibbs and info-gibbs models. While const-gibbs improves the ranking of the added word to 631, info-gibbs puts this word at the first position in the updated topic. The const-gibbs system tries to push tokens of “injustice” to the law topic; however, there just are not enough occurrences to put it in the first ten words. Even assigning all its occurrences to the law topic cannot improve its ranking further. On the other hand, info-gibbs can increase the prior for “injustice” enough to put the word in the top of the topic list; until overruled by data info-gibbs, can use high priors to incorporate user feedback, resulting in higher control. 
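To make the arithmetic behind this example explicit, the toy sketch below contrasts the two strategies for an add word refinement; the vocabulary size, counts, and priors are invented solely for illustration and are not taken from the Guardian model.

```python
# Toy numbers (hypothetical): a rare word with 15 occurrences in the whole
# corpus versus a topic whose current top word has 900 tokens in that topic.
V, beta, n_t = 25_000, 0.01, 50_000
top_word_count, rare_word_count = 900, 15

def topic_word_weight(count, prior=beta):
    """Unnormalized topic-word weight, (n_{w,t} + prior) / (n_t + V * beta)."""
    return (count + prior) / (n_t + V * beta)

# const-gibbs: even assigning every occurrence of the rare word to the topic
# leaves its weight far below the top word's, so its rank barely improves.
print(topic_word_weight(rare_word_count), topic_word_weight(top_word_count))

# info-gibbs: the add word refinement inflates the word's prior by the count
# gap to the topic's top word, so the two weights become comparable and the
# added word surfaces at the top of the displayed word list.
boosted_prior = beta + (top_word_count - rare_word_count)
print(topic_word_weight(rare_word_count, boosted_prior),
      topic_word_weight(top_word_count))
```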
6 Conclusion Informed prior models provide an effective way to incorporate different feedback into topic models, improving user control and topic coherence, while constraints yield higher quality topics, but with less control. While we simulate user behavior for good and random users, future work should compare these systems with end users, as well as compare end user ratings of control with our proposed automated metrics. Interactive models—by design—are balancing user insight with the truth of the data (and thus the world). An important question for future models, especially interactive ones, is how to signal to the user when their desires do not comport with reality. In such cases, control may not be a desired property of interactive systems. Acknowledgements This work was supported by the collaborative NSF Grant IIS-1409287 (UMD) and IIS-1409739 (BYU). Boyd-Graber is also supported by NSF grant IIS-1822494 and IIS-1748663. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor. 6328 References Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4):105–120. David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In Proceedings of the International Conference of Machine Learning. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Jordan Boyd-Graber, Yuening Hu, and David Mimno. 2017. Applications of Topic Models, volume 11 of Foundations and Trends in Information Retrieval. NOW Publishers. Jordan Boyd-Graber, David Mimno, David Newman, Edoardo M Airoldi, David Blei, and Elena A Erosheva. 2014. Care and feeding of topic models: Problems, diagnostics, and improvements. Handbook of mixed membership models and their applications, pages 3–34. Jonathan Chang, Jordan L Boyd-Graber, Sean Gerrish, Chong Wang, and David M Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of Advances in Neural Information Processing Systems. Jaegul Choo, Changhyun Lee, Chandan K Reddy, and Haesun Park. 2013. Utopian: User-driven topic modeling based on interactive nonnegative matrix factorization. IEEE transactions on visualization and computer graphics, 19(12):1992–2001. Fan Du, Catherine Plaisant, Neil Spring, and Ben Shneiderman. 2017. Finding similar people to guide life choices: Challenge, design, and evaluation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Angela Fan, Finale Doshi-Velez, and Luke Miratrix. 2017. Prior matters: simple and general methods for evaluating and improving topic quality in topic modeling. arXiv preprint arXiv:1701.03227. James Foulds, Shachi Kumar, and Lise Getoor. 2015. Latent topic networks: A versatile probabilistic programming framework for topic models. In Proceedings of the International Conference of Machine Learning. Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National academy of Sciences, 101(suppl 1):5228–5235. Enamul Hoque and Giuseppe Carenini. 2015. Convisit: Interactive topic modeling for exploring asynchronous online conversations. In International Conference on Intelligent User Interfaces. ACM. Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2014. 
Interactive topic modeling. Machine learning, 95(3). Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539. Tak Yeon Lee, Smith Alison, Kevin Seppi, Niklas Elmqvist, Jordan Boyd-Graber, and Leah Findlater. 2017. The human touch: How non-expert users perceive, interpret, and fix topic models. International Journal of Human-Computer Studies. David Mimno, Hanna M Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Chris Musialek, Philip Resnik, and S Andrew Stavisky. 2016. Using text analytic techniques to create efficiencies in analyzing qualitative data: A comparison between traditional content analysis and a topic modeling approach. American Association for Public Opinion Research. Quentin Plepl´e. 2013. Interactive topic modeling. Master’s thesis, UC San Diego. Margaret E Roberts, Brandon M Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G Rand. 2014. Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4):1064–1082. Ben Shneiderman. 2010. Designing the user interface: strategies for effective human-computer interaction. Pearson Education India. Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, and Leah Findlater. 2018. Closing the loop: User-centered design and evaluation of a human-in-the-loop topic modeling system. In International Conference on Intelligent User Interfaces. ACM. Jun Wang, Changsheng Zhao, Junfu Xiang, and Kanji Uchino. 2019. Interactive topic model with enhanced interpretability. In IUI Workshops. Pengtao Xie, Diyi Yang, and Eric P Xing. 2015. Incorporating word correlation knowledge into topic modeling. In Conference of the North American Chapter of the Association for Computational Linguistics. Yi Yang, Doug Downey, Jordan L Boyd-Graber, and Jordan Boyd Graber. 2015. Efficient methods for incorporating knowledge into topic models. In Proceedings of Empirical Methods in Natural Language Processing. 6329 Ke Zhai, Jordan L. Boyd-Graber, Nima Asadi, and Mohamad L. Alkhouja. 2012. Mr. LDA: a flexible large scale topic modeling package using variational inference in mapreduce. In Proceedings of the 21st International Conference on World Wide Web. A Simulation Details To simulate the behavior of the “random user” and “good user” for the three HLTM systems, we train 40 initial LDA models, 20 with 10 topics and 20 with 20 topics for the news articles, resulting in models with less and more topics than the true number of categories. A.1 Random User Simulation To simulate random user behavior, for each of the three systems and for each of the seven refinement types, we randomly select a pre-trained LDA model from the pool of models with 20 topics. Then, we apply a refinement of that refinement type to the selected model. We randomly select refinement specific parameters, such as candidate topic, word to be added, and document to be deleted. We run inference until the model converges or reaches a limit. For Gibbs sampling models, info-gibbs and const-gibbs, we use 20 iterations as limit and for the variational model, infovb, we use 3 EM iterations as the limit. 
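The loop structure of this procedure is sketched below; `sample_random_params`, `apply_refinement`, and `run_inference` are placeholders for the system-specific implementations of Section 2.3 and are not part of a released API.

```python
import random

ITER_LIMIT = {"info-gibbs": 20, "const-gibbs": 20, "info-vb": 3}

def random_user_step(system, refinement_type, models_20_topics):
    """One simulated random-user refinement (Appendix A.1)."""
    model = random.choice(models_20_topics)                 # pre-trained LDA model
    params = sample_random_params(model, refinement_type)   # random topic / word / doc
    updated = apply_refinement(system, model, refinement_type, params)
    updated = run_inference(updated, max_iters=ITER_LIMIT[system])
    return model, updated, params  # control and coherence are computed afterwards
```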
After applying the refinement, we compute the control and coherence given the updated and initial model. We perform this 100 times for each of the refinement types and HLTM systems. A.2 Good User Simulation For each category c of the 14 categories of the Guardian news dataset (art & design, business, education, environment, fashion, film, football, law, money, music, politics, science, sports, technology), we compute the most important words in c, Sc, using a Logistic regression classifier. We use Sc as a list of representative words for category c. Given a labeled corpus, we randomly choose one of the pre-trained models. When applying create or split topic refinement types, we select from the models with 10 topics, ensuring that topics have overlapping categories. While applying all other refinement types, we select from the models with 20 topics. We then simulate good user behavior for each of the refinement types as follows: 1. Add word: Randomly select a topic t from those where the top 20 documents are from more than one category. Then, find the corresponding labeled category c by analyzing top 20 documents in the selected category. To improve the topic coherence of t, add top ranked words (from one to five words) from Sc, which are not already in the top words of t. 2. Remove word: Randomly select a topic t from those where top 20 documents are from more than one category. Then, find the corresponding labeled category c by analyzing top 20 documents in the selected category. For selected topic t, remove words which are not part of Sc. 3. Change word order: Randomly select a topic t among all topics. Then, find the corresponding labeled category c by analyzing top 20 documents in the selected category. Then, find words between index 10 to 20, which are at higher rank in Sc. Promote such words to a higher rank using change word order. 4. Remove document: Randomly select a topic t from those where top 20 documents are from more than one category. Then, find the corresponding labeled category c by analyzing top 20 documents in the selected category. For selected topic t, delete documents (from one to five documents), which are not in c. 5. Merge topics: Randomly choose a topic pair to merge which represents a common category c. 6. Create topic: Randomly select a category c which is not a dominant category in any of the topics. Create a topic by providing top 10 words as seed words from Sc. 7. Split topic: Randomly select a topic from those which have documents from two different categories, c1 and c2. Split the top 20 words in that topic into two lists using the representative words from Sc1 and Sc2. Then, split the topic using one of the lists. B Kruskal Wallis Tests We provide details on the Kruskal Wallis tests used to assess whether there are significant differences in how the three HLTM systems, const-gibbs, info-gibbs, and info-vb, impact control and topic 6330 coherence. The means reported here repeat what is provided in the main paper, but with the additional χ2 and p values output from the Kruskal Wallis tests; p < .05 is considered to be significant. Because control values are not comparable across the seven user-preferred refinements, we conducted separate Kruskal Wallis tests for each refinement. The results include control for the simulated good user (Table 3) and for the simulated random user (Table 2), as well as quality improvements (coherence) for the simulated good user (Table 4). 
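The tests themselves are standard; a minimal sketch with scipy is shown below, where the per-system score lists are the 100-run control (or coherence-delta) samples produced by the simulation and the dictionary layout is only for illustration.

```python
from scipy.stats import kruskal

def compare_systems(scores_by_system):
    """Kruskal-Wallis H-test across the three HLTM systems for one refinement.

    scores_by_system: dict such as {"const-gibbs": [...], "info-gibbs": [...],
    "info-vb": [...]}, each value holding the 100 simulation scores.
    """
    statistic, p_value = kruskal(scores_by_system["const-gibbs"],
                                 scores_by_system["info-gibbs"],
                                 scores_by_system["info-vb"])
    return statistic, p_value  # compare p_value against .05, as in Tables 2-4
```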
const-gibbs info-gibbs info-vb χ2 p-value add w 0.82 1.00 0.99 249.35 < .001 remove w 1.00 1.00 1.00 0.42 .810 remove d 1.00 1.00 0.72 27.12 < .001 merge t 0.97 0.96 0.99 31.24 < .001 reorder w 0.41 1.19 1.03 113.52 < .001 create t 0.08 0.98 1.00 277.23 < .001 split t 0.91 0.93 1.00 119.47 < .001 Table 2: Average control provided by the three HLTM systems for seven user-preferred refinements and simulated random user behavior. Kruskal-Wallis tests (p < .05) show significant differences between the systems for all refinements except remove word. const-gibbs info-gibbs info-vb χ2 p-value add w 0.86 0.98 0.98 13.02 .001 remove w 0.99 0.99 0.99 6.22 .045 remove d 0.99 0.99 0.85 163.73 < .001 merge t 1.00 1.00 0.99 22.76 < .001 reorder w 0.19 0.56 0.44 103.44 < .001 create t 0.82 0.98 1.00 191.82 < .001 split t 0.77 0.87 0.93 81.71 < .001 Table 3: Average control provided by the three HLTM systems for seven user-preferred refinements and simulated good user behavior. Kruskal-Wallis tests (p < .05) show significant differences between the systems for all refinements. const-gibbs info-gibbs info-vb χ2 p-value add w 3.0E-04 3.1E-04 1.7E-04 2.93 .230 remove w 5.3E-04 3.0E-04 1.2E-04 25.51 < .001 remove d -1.7E-04 7.5E-05 -6.0E-04 19.29 < .001 merge t 6.3E-04 -4.3E-05 1.4E-04 30.66 < .001 reorder w 1.6E-04 -8.0E-05 -1.0E-05 7.67 .020 create t -6.6E-04 -1.1E-03 -1.2E-03 11.20 .004 split t 1.9E-04 -7.9E-04 -1.6E-04 22.19 < .001 Table 4: Average coherence provided by the three HLTM systems for seven user-preferred refinements and simulated good user behavior. Kruskal-Wallis tests (p < .05) show significant differences between the systems for all refinements except for add word.
2019
637
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6331–6338 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6331 Encouraging Paragraph Embeddings to Remember Sentence Identity Improves Classification Tu Vu and Mohit Iyyer College of Information and Computer Sciences University of Massachusetts Amherst {tuvu,miyyer}@cs.umass.edu Abstract While paragraph embedding models are remarkably effective for downstream classification tasks, what they learn and encode into a single vector remains opaque. In this paper, we investigate a state-of-the-art paragraph embedding method proposed by Zhang et al. (2017) and discover that it cannot reliably tell whether a given sentence occurs in the input paragraph or not. We formulate a sentence content task to probe for this basic linguistic property and find that even a much simpler bag-of-words method has no trouble solving it. This result motivates us to replace the reconstructionbased objective of Zhang et al. (2017) with our sentence content probe objective in a semisupervised setting. Despite its simplicity, our objective improves over paragraph reconstruction in terms of (1) downstream classification accuracies on benchmark datasets, (2) faster training, and (3) better generalization ability.1 1 Introduction Methods that embed a paragraph into a single vector have been successfully integrated into many NLP applications, including text classification (Zhang et al., 2017), document retrieval (Le and Mikolov, 2014), and semantic similarity and relatedness (Dai et al., 2015; Chen, 2017). However, downstream performance provides little insight into the kinds of linguistic properties that are encoded by these embeddings. Inspired by the growing body of work on sentence-level linguistic probe tasks (Adi et al., 2017; Conneau et al., 2018), we set out to evaluate a state-of-the-art paragraph embedding method using a probe task to measure how well it encodes the identity of the sentences within a paragraph. We discover that the method falls short of capturing this basic property, and that implementing a simple objective to 1Source code and data are available at https://github.com/ tuvuumass/SCoPE. fix this issue improves classification performance, training speed, and generalization ability. We specifically investigate the paragraph embedding method of Zhang et al. (2017), which consists of a CNN-based encoder-decoder model (Sutskever et al., 2014) paired with a reconstruction objective to learn powerful paragraph embeddings that are capable of accurately reconstructing long paragraphs. This model significantly improves downstream classification accuracies, outperforming LSTM-based alternatives (Li et al., 2015). How well do these embeddings encode whether or not a given sentence appears in the paragraph? Conneau et al. (2018) show that such identity information is correlated with performance on downstream sentence-level tasks. We thus design a probe task to measure the extent to which this sentence content property is captured in a paragraph embedding. Surprisingly, our experiments (Section 2) reveal that despite its impressive downstream performance, the model of Zhang et al. (2017) substantially underperforms a simple bagof-words model on our sentence content probe. Given this result, it is natural to wonder whether the sentence content property is actually useful for downstream classification. 
To explore this question, we move to a semi-supervised setting by pre-training the paragraph encoder in Zhang et al.’s model (2017) on either our sentence content objective or its original reconstruction objective, and then optionally fine-tuning it on supervised classification tasks (Section 3). Sentence content significantly improves over reconstruction on standard benchmark datasets both with and without fine-tuning; additionally, this objective is four times faster to train than the reconstruction-based variant. Furthermore, pre-training with sentence content substantially boosts generalization ability: fine-tuning a pre-trained model on just 500 labeled 6332 reviews from the Yelp sentiment dataset surpasses the accuracy of a purely supervised model trained on 100,000 labeled reviews. Our results indicate that incorporating probe objectives into downstream models might help improve both accuracy and efficiency, which we hope will spur more linguistically-informed research into paragraph embedding methods. 2 Probing paragraph embeddings for sentence content In this section, we first fully specify our probe task before comparing the model of Zhang et al. (2017) to a simple bag-of-words model. Somewhat surprisingly, the latter substantially outperforms the former despite its relative simplicity. 2.1 Probe task design Our proposed sentence content task is a paragraph-level analogue to the word content task of Adi et al. (2017): given embeddings2 p, s of a paragraph p and a candidate sentence s, respectively, we train a classifier to predict whether or not s occurs in p. Specifically, we construct a binary classification task in which the input is [p; s], the concatenation of p and s. This task is balanced: for each paragraph p in our corpus, we create one positive instance by sampling a sentence s+ from p and one negative instance by randomly sampling a sentence s− from another paragraph p′. As we do not perform any fine-tuning of the base embedding model, our methodology is agnostic to the choice of model. 2.2 Paragraph embedding models Armed with our probe task, we investigate the following embedding methods.3 Zhang et al. (2017) (CNN-R): This model uses a multi-layer convolutional encoder to compute a single vector embedding p of an input paragraph p and a multi-layer deconvolutional decoder that mirrors the convolutional steps in the encoding stage to reconstruct the tokens of p from p. We refer readers to Zhang et al. (2017) for a detailed description of the model architecture. For a more intuitive comparison in our experiments, we denote this model further as CNN-R instead of CNN2computed using the same embedding method 3We experiment with several other models in Appendix A.1, including an LSTM-based encoder-decoder model, a variant of Paragraph Vector (Le and Mikolov, 2014), and BOW models using pre-trained word representations. 100 300 500 700 900 Embedding dimensionality 50 60 70 80 90 Sentence content accuracy CNN-R BoW Figure 1: Probe task accuracies across representation dimensions. BoW surprisingly outperforms the more complex model CNN-R. DCNN as in the original paper. In all experiments, we use their publicly available code.4 Bag-of-words (BoW): This model is simply an average of the word vectors learned by a trained CNN-R model. BoW models have been shown to be surprisingly good at sentence-level probe tasks (Adi et al., 2017; Conneau et al., 2018). 
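As a concrete picture of the probe construction in Section 2.1, here is a minimal sketch; `embed` stands for any frozen embedding function (for example, the trained CNN-R encoder or the BoW average of its word vectors), and the function name and data layout are our own illustrative choices.

```python
import random
import numpy as np

def build_probe_examples(paragraphs, embed):
    """Construct the balanced sentence content probe set (Section 2.1).

    paragraphs: list of paragraphs, each given as a list of sentences.
    embed:      frozen embedding function mapping a text string to a vector.
    Returns (features, labels), where each feature is the concatenation [p; s].
    """
    features, labels = [], []
    for idx, paragraph in enumerate(paragraphs):
        p_vec = embed(" ".join(paragraph))
        # Positive instance: a sentence sampled from the paragraph itself.
        s_pos = random.choice(paragraph)
        features.append(np.concatenate([p_vec, embed(s_pos)]))
        labels.append(1)
        # Negative instance: a sentence sampled from a different paragraph.
        other = random.choice([j for j in range(len(paragraphs)) if j != idx])
        s_neg = random.choice(paragraphs[other])
        features.append(np.concatenate([p_vec, embed(s_neg)]))
        labels.append(0)
    return np.stack(features), np.array(labels)
```

A binary classifier is then trained on the concatenated [p; s] features, as described in Section 2.3.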
2.3 Probe experimental details Paragraphs to train our classifiers are extracted from the Hotel Reviews corpus (Li et al., 2015), which has previously been used for evaluating the quality of paragraph embeddings (Li et al., 2015; Zhang et al., 2017). We only consider paragraphs that have at least two sentences. Our dataset has 346,033 training paragraphs, 19,368 for validation, and 19,350 for testing. The average numbers of sentences per paragraph, tokens per paragraph, and tokens per sentence are 8.0, 123.9, and 15.6, respectively. The vocabulary contains 25,000 tokens. To examine the effect of the embedding dimensionality d on the results, we trained models with d ∈{100, 300, 500, 700, 900}. Each classifier is a feed-forward neural network with a single 300-d ReLu layer. We use a minibatch size of 32, Adam optimizer (Kingma and Ba, 2015) with a learning rate of 2e-4, and a dropout rate of 0.5 (Srivastava et al., 2014). We trained classifiers for a maximum of 100 epochs with early stopping based on validation performance. 2.4 BoW outperforms CNN-R on sentence content Our probe task results are displayed in Figure 1. Interestingly, BoW performs significantly better 4https://github.com/dreasysnail/textCNN public 6333 Paragraph(s) Encoder Encoder PRE-TRAINING TASK: Sentence content TARGET TASK: Paragraph classification Paragraph(s) Candidate sentence(s) Encoder Figure 2: A visualization of our semi-supervised approach. We first train the CNN encoder (shown as two copies with shared parameters) on unlabeled data using our sentence content objective. The encoder is then used for downstream classification tasks. Setting CNN-R BoW Without s+ excluded from p 61.2 82.3 With s+ excluded from p 57.5 61.7 Table 1: Probe task accuracies without and with s+ excluded from p, measured at d = 300. BoW’s accuracy degrades quickly in the latter case, suggesting that it relies much more on low-level matching. than CNN-R, achieving an accuracy of 87.2% at 900 dimensions, compared to only 66.4% for CNN-R. We hypothesize that much of BoW’s success is because it is easier for the model to perform approximate string matches between the candidate sentence and text segments within the paragraph than it is for the highly non-linear representations of CNN-R. To investigate this further, we repeat the experiment, but exclude the sentence s+ from the paragraph p during both training and testing. As we would expect (see Table 1), BoW’s performance degrades significantly (20.6% absolute) with s+ excluded from p, whereas CNN-R experiences a more modest drop (3.6%). While BoW still outperforms CNN-R in this new setting, the dramatic drop in accuracy suggests that it relies much more heavily on low-level matching. 3 Sentence content improves paragraph classification Motivated by our probe results, we further investigate whether incorporating the sentence content property into a paragraph encoder can help increase downstream classification accuracies. We propose a semi-supervised approach by pretraining the encoder of CNN-R using our sentence content objective, and optionally fine-tuning it on different classification tasks. A visualization of Dataset Type # classes # examples Yelp Sentiment 2 560K DBpedia Topic 14 560K Yahoo Topic 10 1.4M Table 2: Properties of the text classification datasets used for our evaluations. this procedure can be seen in Figure 2. 
We compare our approach (henceforth CNN-SC) without and with fine-tuning against CNN-R, which uses a reconstruction-based objective.5 We report comparisons on three standard paragraph classification datasets: Yelp Review Polarity (Yelp), DBPedia, and Yahoo! Answers (Yahoo) (Zhang et al., 2015), which are instances of common classification tasks, including sentiment analysis and topic classification. Table 2 shows the statistics for each dataset. Paragraphs from each training set without labels were used to generate training data for unsupervised pre-training. Sentence content significantly improves over reconstruction on both in-domain and out-ofdomain data We first investigate how useful each pre-training objective is for downstream classification without any fine-tuning by simply training a classifier on top of the frozen pre-trained CNN encoder. We report the best downstream performance for each model across different numbers of pre-training epochs. The first row of Table 3 shows the downstream accuracy on Yelp when the whole unlabeled data of the Yelp training set is used for unsupervised pre-training. Strikingly, 5Here, we use unsupervised pre-training as it allows us to isolate the effects of the unsupervised training objectives. Zhang et al. (2017) implemented auxiliary unsupervised training as an alternative form of semi-supervised learning. We tried both strategies and found that they performed similarly. 6334 0.1% 1% 10% 100% Yelp training set size 60 70 80 90 Yelp test accuracy CNN (no pre-training) CNN-R CNN-SC 0.1% 1% 10% 100% DBpedia training set size 30 40 50 60 70 80 90 DBpedia test accuracy 0.1% 1% 10% 100% Yahoo training set size 20 30 40 50 60 70 Yahoo test accuracy Figure 3: CNN-SC substantially improves generalization ability. Results of CNN-R are taken from Zhang et al. (2017). Pre-training CNN-R CNN-SC On Yelp 67.4 90.0 On Wikipedia 61.4 65.7 Wall-clock speedup 1x 4x Table 3: Yelp test accuracy (without fine-tuning). CNN-SC significantly improves over CNN-R. CNN-SC achieves an accuracy of 90.0%, outperforming CNN-R by a large margin. Additionally, sentence content is four times as fast to train as the computationally-expensive reconstruction objective.6 Are representations obtained using these objectives more useful when learned from in-domain data? To examine the dataset effect, we repeat our experiments using paragraph embeddings pre-trained using these objectives on a subset of Wikipedia (560K paragraphs). The second row of Table 3 shows that both approaches suffer a drop in downstream accuracy when pre-trained on out-of-domain data. Interestingly, CNN-SC still performs best, indicating that sentence content is more suitable for downstream classification. Another advantage of our sentence content objective over reconstruction is that it better correlates to downstream accuracy (see Appendix A.2). For reconstruction, there is no apparent correlation between BLEU and downstream accuracy; while BLEU increases with the number of epochs, the downstream performance quickly begins to decrease. This result indicates that early stopping based on BLEU is not feasible with reconstruction-based pre-training objectives. With fine-tuning, CNN-SC substantially boosts accuracy and generalization We switch gears 6This objective requires computing a probability distribution over the whole vocabulary for every token of the paragraph, making it prohibitively slow to train. 
Model Yelp DBPedia Yahoo purely supervised w/o external data ngrams TFIDF 95.4 98.7 68.5 Large Word ConvNet 95.1 98.3 70.9 Small Word ConvNet 94.5 98.2 70.0 Large Char ConvNet 94.1 98.3 70.5 Small Char ConvNet 93.5 98.0 70.2 SA-LSTM (word level) NA 98.6 NA Deep ConvNet 95.7 98.7 73.4 CNN (Zhang et al., 2017) 95.4 98.2 72.6 pre-training + fine-tuning w/o external data CNN-R (Zhang et al., 2017) 96.0 98.8 74.2 CNN-SC (ours) 96.6 99.0 74.9 pre-training + fine-tuning w/ external data ULMFiT (Howard and Ruder, 2018) 97.8 99.2 NA Table 4: CNN-SC outperforms other baseline models that do not use external data, including CNN-R. All baseline models are taken from Zhang et al. (2017). now to our fine-tuning experiments. Specifically, we take the CNN encoder pre-trained using our sentence content objective and then fine-tune it on downstream classification tasks with supervised labels. While our previous version of CNNSC created just a single positive/negative pair of examples from a single paragraph, for our finetuning experiments we create a pair of examples from every sentence in the paragraph to maximize the training data. For each task, we compare against the original CNN-R model in (Zhang et al., 2017). Figure 3 shows the model performance with fine-tuning on 0.1% to 100% of the training set of each dataset. One interesting result is that CNN-SC relies on very few training examples to achieve comparable accuracy to the purely supervised CNN model. For instance, fine-tuning CNN-SC using just 500 labeled training examples surpasses the accuracy of training from scratch on 100,000 labeled examples, indicating that the sentence content encoder generalizes well. CNN-SC also outperforms CNN-R by large margins when only small amounts of labeled training data are 6335 available. Finally, when all labeled training data is used, CNN-SC achieves higher classification accuracy than CNN-R on all three datasets (Table 4). While CNN-SC exhibits a clear preference for target task unlabeled data (see Table 3), we can additionally leverage large amounts of unlabeled general-domain data by incorporating pretrained word representations from language models into CNN-SC. Our results show that further improvements can be achieved by training the sentence content objective on top of the pre-trained language model representations from ULMFiT (Howard and Ruder, 2018) (see Appendix A.3), indicating that our sentence content objective learns complementary information. On Yelp, it exceeds the performance of training from scratch on the whole labeled data (560K examples) with only 0.1% of the labeled data. CNN-SC implicitly learns to distinguish between class labels The substantial difference in downstream accuracy between pre-training on indomain and out-of-domain data (Table 3) implies that the sentence content objective is implicitly learning to distinguish between class labels (e.g., that a candidate sentence with negative sentiment is unlikely to belong to a paragraph with positive sentiment). If true, this result implies that CNNSC prefers not only in-domain data but also a representative sample of paragraphs from all class labels. To investigate, we conduct an additional experiment that restricts the class label from which negative sentence candidates s−are sampled. We experiment with two sources of s−: (1) paragraphs of the same class label as the probe paragraph (CNN-SC−), and (2) paragraphs from a different class label (CNN-SC+). 
Figure 4 reveals that the performance of CNN-SC drops dramatically when trained on the first dataset and improves when trained on the second dataset, which confirms our hypothesis. 4 Related work Text embeddings and probe tasks A variety of methods exist for obtaining fixed-length dense vector representations of words (e.g., Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2018), sentences (e.g., Kiros et al., 2015; Conneau et al., 2017; Subramanian et al., 2018; Cer et al., 2018), and larger bodies of text (e.g., Le and Mikolov, 2014; Dai et al., 2015; Iyyer et al., 2015; Li et al., 2015; Chen, 2017; Zhang et al., 2017) that 0.1% 1% 10% 100% Yelp training set size 60 70 80 90 Yelp test accuracy CNN-SC CNN-SC CNN-SC+ Figure 4: CNN-SC implicitly learns to distinguish between class labels. significantly improve various downstream tasks. To analyze word and sentence embeddings, recent work has studied classification tasks that probe them for various linguistic properties (Shi et al., 2016; Adi et al., 2017; Belinkov et al., 2017a,b; Conneau et al., 2018; Tenney et al., 2019). In this paper, we extend the notion of probe tasks to the paragraph level. Transfer learning Another line of related work is transfer learning, which has been the driver of recent successes in NLP. Recently-proposed objectives for transfer learning include surrounding sentence prediction (Kiros et al., 2015), paraphrasing (Wieting and Gimpel, 2017), entailment (Conneau et al., 2017), machine translation (McCann et al., 2017), discourse (Jernite et al., 2017; Nie et al., 2017), and language modeling (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018). 5 Conclusions and Future work In this paper, we evaluate a state-of-the-art paragraph embedding model, based on how well it captures the sentence identity within a paragraph. Our results indicate that the model is not fully aware of this basic property, and that implementing a simple objective to fix this issue improves classification performance, training speed, and generalization ability. Future work can investigate other embedding methods with a richer set of probe tasks, or explore a wider range of downstream tasks. Acknowledgments We thank the anonymous reviewers, Kalpesh Krishna, Nader Akoury, and the members of the UMass NLP reading group for their helpful comments. 6336 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In International Conference on Learning Representations (ICLR). Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017a. What do neural machine translation models learn about morphology? In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 861–872. Yonatan Belinkov, Llu´ıs M`arquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP), pages 1–10. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175. Minmin Chen. 2017. Efficient vector representation for documents through corruption. 
In International Conference on Learning Representations (ICLR). Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 2126–2136. Andrew M. Dai, Christopher Olah, and Quoc V. Le. 2015. Document embedding with paragraph vectors. CoRR, abs/1507.07998. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 328–339. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1681–1691. Yacine Jernite, Samuel R. Bowman, and David Sontag. 2017. Discourse-based objectives for fast unsupervised sentence representation learning. CoRR, abs/1705.00557. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems (NIPS), pages 3294–3302. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the International Conference on Machine Learning (ICML), volume 32, pages 1188–1196. Jiwei Li, Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1106– 1115. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL), pages 142–150. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems (NIPS), pages 6294– 6305. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), pages 3111–3119. Allen Nie, Erin D. Bennett, and Noah D. Goodman. 2017. Dissent: Sentence representation learning from explicit discourse relations. CoRR, abs/1710.04334. 
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. 6337 Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1526–1534. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. In International Conference on Learning Representations (ICLR). Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 3104–3112. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations (ICLR). John Wieting and Kevin Gimpel. 2017. Revisiting recurrent networks for paraphrastic sentence embeddings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 2078–2088. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS), pages 649–657. Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017. Deconvolutional paragraph representation learning. In Advances in Neural Information Processing Systems (NIPS), pages 4169–4179. A Appendices A.1 BoW models outperform more complex models on our sentence content probe In addition to the paragraph embedding models presented in the main paper, we also experiment Model Dimensionality Accuracy Random – 50.0 trained on paragraphs from Hotel Reviews CNN-R 900 66.4 BoW (CNN-R) 900 87.2 LSTM-R 900 65.4 Doc2VecC 900 90.8 pre-trained on other datasets Word2Vec-avg 300 83.2 GloVe-avg 300 84.6 ELMo-avg 1024 88.1 Table 5: Sentence content accuracy for different paragraph embedding methods. BoW models outperform more complex models. with the following embedding methods: LSTM-R: We consider an LSTM (Hochreiter and Schmidhuber, 1997) encoder-decoder model paired with a reconstruction objective. Specifically, we implement a single-layer bidirectional LSTM encoder and a two-layer unidirectional LSTM decoder. Paragraph representations are computed from the encoder’s final hidden state. Doc2VecC: This model (Chen, 2017) represents a document as an average of randomly-sampled words from within the document. 
The method introduces a corruption mechanism that favors rare but important words while suppressing frequent but uninformative ones. Doc2VecC was found to outperform other unsupervised BoW-style algorithms, including Paragraph Vector (Le and Mikolov, 2014), on downstream tasks.
Other BoW models: We also consider other BoW models with pre-trained word embeddings or contextualized word representations, including Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and ELMo (Peters et al., 2018). Paragraph embeddings are computed as the average of the word vectors. For ELMo, we take the average of the layers. The results of our sentence content probe task are summarized in Table 5.
A.2 Sentence content better correlates to downstream accuracy than reconstruction
See Figure 5. Figure 5: Pre-training performance vs. downstream accuracy on Yelp (reconstruction BLEU and sentence content accuracy plotted against downstream accuracy over pre-training epochs, measured on validation data). There is no apparent correlation between BLEU and downstream accuracy.
Figure 6: Further improvements can be achieved by training sentence content (SC) on top of the pre-trained language model (LM) representations from ULMFiT (Howard and Ruder, 2018); panels show Yelp and IMDB test accuracy against training set size for LM only and LM + SC.
A.3 Further improvements by training sentence content on top of pre-trained language model representations
Figure 6 shows that further improvements can be achieved by training sentence content on top of the pre-trained language model representations from ULMFiT (Howard and Ruder, 2018) on the Yelp and IMDB (Maas et al., 2011) datasets, indicating that our sentence content objective learns complementary information.7 On Yelp, it exceeds the performance of training from scratch on the whole labeled data (560K examples) with only 0.1% of the labeled data.
7Here, we do not perform target task classifier fine-tuning to isolate the effects of our sentence content objective.
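For reference, the averaging baselines in Table 5 (Word2Vec-avg, GloVe-avg, ELMo-avg) reduce to a few lines of code. The sketch below is a minimal version that assumes the pre-trained vectors are already loaded into a token-to-vector dictionary; for ELMo, the per-layer averaging is assumed to have been applied before the vectors enter the dictionary.

```python
import numpy as np

def average_embedding(tokens, word_vectors, dim=300):
    """Paragraph embedding as the unweighted average of pre-trained word
    vectors, as in the Word2Vec-avg / GloVe-avg baselines of Table 5.
    word_vectors maps token -> np.ndarray of length dim; out-of-vocabulary
    tokens are simply skipped."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# Toy usage with random stand-in vectors
vocab = {w: np.random.randn(300) for w in ["the", "food", "was", "great"]}
print(average_embedding("the food was great".split(), vocab).shape)  # (300,)
```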
2019
638
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6339–6344 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6339 A Multi-Task Architecture on Relevance-based Neural Query Translation Sheikh Muhammad Sarwar Hamed Bonab James Allan Center for Intelligent Information Retrieval College of Information and Computer Sciences University of Massachusetts Amherst Amherst, MA 01003 {smsarwar, bonab, allan}@cs.umass.edu Abstract We describe a multi-task learning approach to train a Neural Machine Translation (NMT) model with a Relevance-based Auxiliary Task (RAT) for search query translation. The translation process for Cross-lingual Information Retrieval (CLIR) task is usually treated as a black box and it is performed as an independent step. However, an NMT model trained on sentence-level parallel data is not aware of the vocabulary distribution of the retrieval corpus. We address this problem with our multitask learning architecture that achieves 16% improvement over a strong NMT baseline on Italian-English query-document dataset. We show using both quantitative and qualitative analysis that our model generates balanced and precise translations with the regularization effect it achieves from multi-task learning paradigm. 1 Introduction CLIR systems retrieve documents written in a language that is different from search query language (Nie, 2010). The primary objective of CLIR is to translate or project a query into the language of the document repository (Sokokov et al., 2013), which we refer to as Retrieval Corpus (RC). To this end, common CLIR approaches translate search queries using a Machine Translation (MT) model and then use a monolingual IR system to retrieve from RC. In this process, a translation model is treated as a black box (Sokolov et al., 2014), and it is usually trained on a sentence level parallel corpus, which we refer to as Translation Corpus (TC). We address a pitfall of using existing MT models for query translation (Sokokov et al., 2013). An MT model trained on TC does not have any knowledge of RC. In an extreme setting, where there are no common terms between the target side of TC and RC, a well trained and tested translation model would fail because of vocabulary mismatch between the translated query and documents of RC. Assuming a relaxed scenario where some commonality exists between two corpora, a translation model might still perform poorly, favoring terms that are more likely in TC but rare in RC. Our hypothesis is that a search query translation model would perform better if a translated query term is likely to appear in the both retrieval and translation corpora, a property we call balanced translation. To achieve balanced translations, it is desired to construct an MT model that is aware of RC vocabulary. Different types of MT approaches have been adopted for CLIR task, such as dictionarybased MT, rule-based MT, statistical MT etc. (Zhou et al., 2012). However, to the best of our knowledge, a neural search query translation approach has yet to be taken by the community. NMT models with attention based encoder-decoder techniques have achieved state-of-the-art performance for several language pairs (Bahdanau et al., 2015). We propose a multi-task learning NMT architecture that takes RC vocabulary into account by learning Relevance-based Auxiliary Task (RAT). 
RAT is inspired from two word embedding learning approaches: Relevance-based Word Embedding (RWE) (Zamani and Croft, 2017) and Continuous Bag of Words (CBOW) embedding (Mikolov et al., 2013). We show that learning NMT with RAT enables it to generate balanced translation. NMT models learn to encode the meaning of a source sentence and decode the meaning to generate words in a target language (Luong et al., 2015). In the proposed multi-task learning model, RAT shares the decoder embedding and final representation layer with NMT. Our architecture answers the following question: In the decoding stage, can we restrict an NMT model so that it does not only generate terms that are highly likely in TC?. We show that training a strong baseline NMT with RAT 6340 roughly achieves 16% improvement over the baseline. Using a qualitative analysis, we further show that RAT works as a regularizer and prohibits NMT to overfit to TC vocabulary. 2 Balanced Translation Approach We train NMT with RAT to achieve better query translations. We improve a recently proposed NMT baseline, Transformer, that achieves state-of-theart results for sentence pairs in some languages (Vaswani et al., 2017). We discuss Transformer, RAT, and our multi-task learning architecture that achieves balanced translation. 2.1 NMT and Transformer In principle, we could adopt any NMT and combine it with RAT. An NMT system directly models the conditional probability P(ti|si) of translating a source sentence, si = s1 i , . . . , sn i , to a target sentence ti = t1 i , . . . , tn i . A basic form of NMT comprises two components: (a) an encoder that computes the representations or meaning of si and (b) a decoder that generates one target word at a time. State-of-the-art NMT models have an attention component that “searches for a set of positions in a source sentence where the most relevant information is concentrated” (Bahdanau et al., 2015). For this study, we use a state-of-the-art NMT model, Transformer (Vaswani et al., 2017), that uses positional encoding and self attention mechanism to achieve three benefits over the existing convolutional or recurrent neural network based models: (a) reduced computational complexity of each layer, (b) parallel computation, and (c) path length between long-range dependencies. 2.2 Relevance-based Auxiliary Task (RAT) We define RAT a variant of word embedding task (Mikolov et al., 2013). Word embedding approaches learn high dimensional dense representations for words and their objective functions aim to capture contextual information around a word. Zamani and Croft (2017) proposed a model that learns word vectors by predicting words in relevant documents retrieved against a search query. We follow the same idea but use a simpler learning approach that is suitable for our task. They tried to predict words from the relevance model (Lavrenko and Croft, 2001) computed from a query, which does not work for our task because the connection between a query and ranked sentences falls rapidly Figure 1: The architecture of our multi-task NMT. Note that, rectangles indicate data sources and rectangles with rounded corners indicate functions or layers. after the top one (see below). We consider two data sources for learning NMT and RAT jointly. The first one is a sentence-level parallel corpus, which we refer to as translation corpus, TC = {(si, ti); i = 1, 2, . . . m}. The second one is the retrieval corpus, which is a collection of k documents RC = {D1, D2, . . . Dk} in the same language as ti. 
Our word-embedding approach takes each ti ∈TC, uses it as a query to retrieve the top document Dtop i . After that we obtain t′ i by concatenating ti with Dtop i and randomly shuffling the words in the combined sequence. We then augment TC using t′ i and obtain a dataset, TC′ = {(si, ti, t′ i); i = 1, 2, . . . m}. We use t′ i to learn a continuous bag of words (CBOW) embedding as proposed by Mikolov et al. (2013). This learning component shares two layers with the NMT model. The goal is to expose the retrieval corpus’ vocabulary to the NMT model. We discuss layer sharing in the next section. We select the single top document retrieved against a sentence ti because a sentence is a weak representation of information need. As a result, documents at lower ranks show heavy shift from the context of the sentence query. We verified this by observing that a relevance model constructed from the top k documents does not perform well in this setting. We thus deviate from the relevance model based approach taken by Zamani and Croft (2017) and learn over the random shuffling of ti and a single document. Random shuffling has shown reasonable effectiveness for word embedding construction for comparable corpus (Vuli´c and Moens, 2015). 6341 2.3 Multi-task NMT Architecture Our balanced translation architecture is presented in Figure 1. This architecture is NMT-model agnostic as we only propose to share two layers common to most NMTs: the trainable target embedding layer and the transformation function (Luong et al., 2015) that outputs a probability distribution over the union of the vocabulary of TC and RC. Hence, the size of the vocabulary, |RC ∪TC|, is much larger compared to TC and it enables the model to access RC. In order to show task sharing clearly we placed two shared layers between NMT and RAT in Figure 1. We also show the two different paths taken by two different tasks at training time: the NMT path in shown with red arrows while the RAT path is shown in green arrows. On NMT path training loss is computed as the sum of term-wise softmax with cross-entropy loss of the predicted translation and the human translation and it summed over a batch of sentence pairs, LNMT = P (si,ti)∈T P|ti| j=1 −log P(tj i|t<j i , si). We also use a similar loss function to train word embedding over a set of context (ctx) and pivot (pvt) pairs formed using ti as query to retrieve Dtop i using Query Likelihood (QL) ranker, LWE = α P (ctx,pvt) −log P(pvt | ctx). This objective is similar to CBOW word embedding as context is used to predict pivot word Here, we use a scaling factor α, to have a balance between the gradients from the NMT loss and RAT loss. For RAT, the context is drawn from a context window following Mikolov et al. (2013). In the figure, (si, ti) ∈TC and Dtop i represents the top document retrieved against ti. The shuffler component shuffles ti and Dtop i and creates (context, pivot) pairs. After that those data points are passed through a fully connected linear projection layer and eventually to the transformation function. Intuitively, the word embedding task is similar to NMT as it tries to assign a large probability mass to a target word given a context. However, it enables the transformation function and decoding layer to assign probability mass not only to terms from TC, but also to terms from RC. This implicitly prohibits NMT to overfit and provides a regularization effect. A similar technique was proposed by Katsuki Chousa (2018) to handle out-of-vocabulary or less frequent words for NMT. 
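Concretely, the place where the two tasks interact can be sketched with a small shared output head. The module below is a schematic PyTorch illustration, not the authors' implementation: it keeps only the two shared layers (the target embedding and the transformation function over the joint vocabulary of TC and RC) and omits the Transformer encoder, decoder, attention, and data pipeline; the class and argument names are invented for the example, and alpha = 0.1 follows the setting reported later in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSoftmaxHead(nn.Module):
    """Layers shared by the NMT decoder and the relevance-based auxiliary
    task (RAT): a target-side embedding and a linear transformation onto
    the joint vocabulary of the translation and retrieval corpora."""
    def __init__(self, joint_vocab_size, hidden_dim):
        super().__init__()
        self.tgt_embed = nn.Embedding(joint_vocab_size, hidden_dim)  # shared target embedding
        self.transform = nn.Linear(hidden_dim, joint_vocab_size)     # shared transformation function

    def nmt_loss(self, decoder_states, target_ids):
        # decoder_states: (batch, seq_len, hidden); target_ids: (batch, seq_len)
        logits = self.transform(decoder_states)
        return F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))

    def rat_loss(self, context_ids, pivot_ids, alpha=0.1):
        # CBOW-style loss on (context, pivot) pairs drawn from the shuffled
        # concatenation of a target sentence and its top retrieved document.
        ctx = self.tgt_embed(context_ids).mean(dim=1)   # (batch, hidden)
        logits = self.transform(ctx)                    # (batch, joint_vocab)
        return alpha * F.cross_entropy(logits, pivot_ids)
```

Because both losses terminate in the same transformation layer, probability mass learned from retrieval-corpus terms is visible to the decoder at translation time, which is the mechanism behind the regularization effect described above.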
For these terms they enabled the transformation (also called the softmax cross-entropy layer) to fairly distribute probability mass among similar words. In contrast, we focus on relevant terms rather than similar terms. 3 Experiments and Results Data. We experiment on two language pairs: {Italian, Finnish} →English. Topics and relevance judgments are obtained from the Cross-Language Evaluation Forum (CLEF) 2000-2003 campaigns for bilingual ad-hoc retrieval tracks1. The Italian and French topics are human translations of a set of two hundred English topics. Our retrieval corpus is the Los Angeles Times (LAT94) comprising over 113k news articles. Topics without any relevant documents on LAT94 are excluded resulting in 151 topics for both Italian and Finnish language. Among the 151 topics in our dataset, we randomly selected 50 queries for validation and 101 queries for test. In the CLEF literature, queries are constructed from either the title field or a concatenation of title and description fields of the topic sets. Following Vuli´c and Moens (2015), we work on the longer queries. For TC we use Europarl v7 sentence-aligned corpus (Koehn, 2005). TC statistics in Table 1 indicates that we had around two million sentence pairs for each language pairs. Lang. Pair Resource #Inst. |VF | |VE| Ita-Eng Europarl 1,894,217 146,036 77,441 Fin-Eng Europarl 1,905,683 637,902 75,851 Table 1: Statistics of resources used for training. |VF | and |VE| are the vocabulary size for the source language and the target English language, respectively. Text Pre-processing. For having text consistency across TC and RC, we apply the following pre-processing steps. Characters are normalized by mapping diacritic characters to the corresponding unmarked characters and lower-casing. We remove non-alphabetic, non-printable, and punctuation characters from each word. The NLTK library (Bird and Loper, 2004) is used for tokenization and stop-word removal. No stemming is performed. Retrieval. For ranking documents, after query translation, we use the Galago’s implementation2 of query likelihood using Dirichlet smoothing (Zhai and Lafferty, 2004) with default parameters. 1catalog.elra.info/en-us/repository/browse/ELRA-E0008/ 2https://www.lemurproject.org/galago.php 6342 Italian →English Finnish →English Models Val Test Val Test Transformer 0.192 0.179 0.127 0.077 Our model 0.230 0.211 0.126 0.097 Table 2: Results for ranking with query translation models, in terms of MAP. Training Technique. Before applying multitasking we train the transformer to obtain a reasonable MAP on the Val set. Then we spawn our multi-task transformer from that point, also continuing to train the transformer. We use an early stopping criterion to stop both the models, and evaluate performance on the test set. For NMT training we use Stochastic Gradient Descent (SGD) with Adam Optimizer and learning rate of 0.01. We found that a learning rate of 10−5 with the same optimizer works well for the word embedding loss minimization. From a training batch (we use dynamic size training batches), more data points are actually created for the word embedding task because of large number of (context, pivot) pairs. We allow the gradients from word embedding loss to pass through the multi-tasking model at first, and then apply NMT loss. Setting a lower learning rate for the word embedding optimizer, and α = 0.1 allows the NMT gradient updates to be competitive. Evaluation. 
Given that in CLIR the primary goal is to get a better ranked list of documents against a translated query, we only report Mean Average Precision (MAP). 3.1 Results and Analysis Table 2 shows the effectiveness of our model (multitask transformer) over the baseline transformer (Vaswani et al., 2017). Our model achieves significant performance gains in the test sets over the baseline for both Italian and Finnish query translation. The overall low MAP for NMT can possibly be improved with larger TC. Moreover, our model validation approach requires access to RC index, and it slows down overall training process. Hence, we could not train our model for a large number of epochs - it may be another cause of the low performance. Balance of Translations. We want to show that translation terms generated by our multi-task transformer are roughly equally likely to be seen in the Europarl corpus (TC) or the CLEF corpus (RC). Given a translation term t, we compute the ratio of Figure 2: Balance values of a sample of val queries Figure 3: Balance values of a sample of test queries the probability of seeing t in TC and RC, PT C(t) PRC(t). Here, PTC(t) = countT C(t) P t∈T C countT C(t) and PRC(t) is calculated similarly. Given a query qi and its translation Tm(qi) provided by model m, we calculate the balance of m, B(Tm(qi)) = P t∈Tm(q) PT C (t) PRC (t) |Tm(q)| . If B(Tm(qi)) is close to 1, the translation terms are as likely in TC as in RC. Figure 2 shows the balance values for transformer and our model for a random sample of 20 queries from the validation set of Italian queries, respectively. Figure 3 shows the balance values for transformer and our model for a random sample of 20 queries from the test set of Italian queries, respectively. It is evident that our model achieves better balance compared to baseline transformer, except for a very few cases. Precision and Recall of Translations. Given a query Q , consider Q′ = {q′ 1, q′ 1, . . . , q′ p} as the set of terms from human translation of Q and QM = {qM 1 , qM 2 , . . . , qM q } as the set of translation terms generated by model M. We define PM(Q) = QM∩Q′ |QM| and RM(Q) = QM∩Q′ |Q′| as precision and recall of Q for model M. In Table 3, we report average precision and recall for both trans6343 Italian →English Finnish →English Models Val Test Val Test Transformer (0.44, 0.45) (0.43, 0.46) (0.24, 0.23) (0.25, 0.26) Our model (0.62, 0.45) (0.57, 0.41) (0.31, 0.25) (0.30, 0.24) Table 3: Average precision and recall of translated queries, respectively reported in tuples. former and our model across our train and validation query set over two language pairs. Our model generates precise translation, i.e. it avoids terms that might be useless or even harmful for retrieval. Generally, from our observation, avoided terms are highly likely terms from TC and they are generated because of translation model overfitting. Our model achieves a regularization effect through an auxiliary task. This confirms results from existing multi-tasking literature (Ruder, 2017). To explore translation quality, consider pair of sample translations provided by two models. For example, against an Italian query, medaglia oro super vinse medaglia oro super olimpiadi invernali lillehammer, translated term set from our model is {gold, coin, super, free, harmonising, won, winter, olympics}, while transformer output is {olympic, gold, one, coin, super, years, won, parliament, also, two, winter}. 
Term set from human translation is: {super, gold, medal, won, lillehammer, olypmic, winter, games}. Transformer comes up with terms like parliament, also, two and years that never appears in human translation. We found that these terms are very likely in Europarl and rare in CLEF. Our model also generates terms such as harmonising, free, olympics that not generated by transformer. However, we found that these terms are equally likely in Europarl and CLEF. 4 Conclusion We present a multi-task learning architecture to learn NMT for search query translation. As the motivating task is CLIR, we evaluated the ranking effectiveness of our proposed architecture. We used sentences from the target side of the parallel corpus as queries to retrieve relevant document and use terms from those documents to train a word embedding model along with NMT. One big challenge in this landscape is to sample meaningful queries from sentences as sentences do not directly convey information need. In the future, we hope to learn models that are able to sample search queries or information needs from sentences and use the output of that model to get relevant documents. Acknowledgments This work was supported in part by the Center for Intelligent Information Retrieval and in part by the Air Force Research Laboratory (AFRL) and IARPA under contract #FA8650-17-C-9118 under subcontract #14775 from Raytheon BBN Technologies Corporation. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions. Satoshi Nakamura Katsuki Chousa, Katsuhito Sudoh. 2018. Training neural machine translation using word embedding-based loss. arXiv preprint arXiv:1807.11219. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79–86. Victor Lavrenko and W. Bruce Croft. 2001. Relevancebased language models. In SIGIR 2001: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 120–127, New Orleans, Louisiana, USA. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421, Lisbon, Portugal. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems,NIPS, pages 3111– 3119, Lake Tahoe, Nevada, USA. Jian-Yun Nie. 2010. Cross-Language Information Retrieval. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. 6344 Artem Sokokov, Laura Jehl, Felix Hieber, and Stefan Riezler. 2013. Boosting cross-language retrieval by learning bilingual phrase associations from relevance rankings. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1688–1699, Seattle, Washington, USA. Artem Sokolov, Felix Hieber, and Stefan Riezler. 2014. Learning to translate queries for CLIR. In The 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1179–1182, Gold Coast , QLD, Australia. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Ivan Vuli´c and Marie-Francine Moens. 2015. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 363–372, Santiago, Chile. Hamed Zamani and W. Bruce Croft. 2017. Relevancebased word embedding. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Tokyo, Japan. Chengxiang Zhai and John Lafferty. 2004. A study of smoothing methods for language models applied to information retrieval. ACM Transactions on Information Systems (TOIS), 22(2):179–214. Dong Zhou, Mark Truran, Tim J. Brailsford, Vincent Wade, and Helen Ashman. 2012. Translation techniques in cross-language information retrieval. ACM Computing Surveys, 45(1):1:1–1:44. A Loss Function and Validation Performance Analysis We show the loss function analysis of transformer and our model. Figure 7 shows the validation performance of transformer against global training steps. Figure 5 show the validation performance of our model for the same number of global steps. Figure 6 shows that NMT loss is going down with the number of steps, while Figure 4 shows the degradation of the loss of our proposed RAT task. Figure 4: RAT loss of our model on Italian-English training data Figure 5: Validation performance of our model on Italian-English validation data Figure 6: NMT loss of our model on Italian-English training data Figure 7: Validation set performance of Transformer on Italian-English training data
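For completeness, the balance and precision/recall diagnostics of Section 3.1 can be recomputed from raw term counts. The sketch below is a re-implementation of the definitions in the paper rather than the original evaluation script; it assumes simple term-frequency estimates of P_TC(t) and P_RC(t) and adds a small epsilon for translated terms unseen in the retrieval corpus.

```python
from collections import Counter

def term_prob(term, counts):
    total = sum(counts.values())
    return counts.get(term, 0) / total if total else 0.0

def balance(translated_terms, tc_counts, rc_counts, eps=1e-12):
    """B(T_m(q)): mean of P_TC(t) / P_RC(t) over the translated query terms.
    Values near 1 mean a term is roughly as likely in the translation corpus
    (TC) as in the retrieval corpus (RC)."""
    ratios = [term_prob(t, tc_counts) / max(term_prob(t, rc_counts), eps)
              for t in translated_terms]
    return sum(ratios) / len(ratios)

def precision_recall(model_terms, human_terms):
    """P_M(Q) and R_M(Q): overlap of the model's translated term set with
    the human translation's term set."""
    model_set, human_set = set(model_terms), set(human_terms)
    overlap = len(model_set & human_set)
    return overlap / len(model_set), overlap / len(human_set)

# Toy usage with made-up counts
tc_counts = Counter("gold gold coin won parliament parliament".split())
rc_counts = Counter("gold medal won winter olympics".split())
print(balance(["gold", "won"], tc_counts, rc_counts))
print(precision_recall(["gold", "coin", "won"], ["gold", "medal", "won"]))
```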
2019
639
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 660–665 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 660 End-to-end Deep Reinforcement Learning Based Coreference Resolution Hongliang Fei, Xu Li, Dingcheng Li, Ping Li Cognitive Computing Lab, Baidu Research {hongliangfei,lixu13,lidingcheng,liping11}@baidu.com Abstract Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are typically trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higherorder mention ranking approach in Lee et al. (2018) to a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark. 1 Introduction Coreference resolution is one of the most fundamental tasks in natural language processing (NLP), which has a significant impact on many downstream applications including information extraction (Dai et al., 2019), question answering (Weston et al., 2015), and entity linking (Hajishirzi et al., 2013). Given an input text, coreference resolution aims to identify and group all the mentions that refer to the same entity. In recent years, deep neural network models for coreference resolution have been prevalent (Wiseman et al., 2016; Clark and Manning, 2016b). These models, however, either assumed mentions were given and only developed a coreference linking model (Clark and Manning, 2016b) or built a pipeline system to detect mention first then resolved coreferences (Haghighi and Klein, 2010). In either case, they depend on hand-crafted features and syntactic parsers that may not generalize well or may even propagate errors. To avoid the cascading errors of pipeline systems, recent NLP researchers have developed endto-end approaches (Lee et al., 2017; Luan et al., 2018; Lee et al., 2018; Zhang et al., 2018), which directly consider all text spans, jointly identify entity mentions and cluster them. The core of those end-to-end models are vector embeddings to represent text spans in the document and scoring functions to compute the mention scores for text spans and antecedent scores for pairs of spans. Depending on how the span embeddings are computed, the end-to-end coreference models could be further divided into first order methods (Lee et al., 2017; Luan et al., 2018; Zhang et al., 2018) or higher order methods (Lee et al., 2018). Although recent end-to-end neural coreference models have advanced the state-of-the-art performance for coreference resolution, they are still trained with heuristic loss functions and make a sequence of local decisions for each pair of mentions. However as studied in Clark and Manning (2016a); Yin et al. (2018), most coreference resolution evaluation measures are not accessible over local decisions, but can only be known until all other decisions have been made. 
Therefore, the next key research question is how to integrate and directly optimize coreference evaluation metrics in an end-to-end manner. In this paper, we propose a goal-directed endto-end deep reinforcement learning framework to resolve coreference as shown in Figure 1. Specifically, we leverage the neural architecture in Lee et al. (2018) as our policy network, which includes learning span representation, scoring potential entity mentions, and generating a probability distribution over all possible coreference linking actions from the current mention to its antecedents. Once a sequence of linking actions are made, our 661 Policy network Environment State Next state Trajectory Reward function Exploration Gradient Action at Next State St+1 St R(a1:T) pθ(at|St) Reward rt Figure 1: The basic framework of our policy gradient model for one trajectory. The policy network is an end-to-end neural module that can generate probability distributions over actions of coreference linking. The reward function computes a reward given a trajectory of actions based on coreference evaluation metrics. Solid line indicates the model exploration and (red) dashed line indicates the gradient update. reward function is used to measure how good the generated coreference clusters are, which is directly related to coreference evaluation metrics. Besides, we introduce an entropy regularization term to encourage exploration and prevent the policy from prematurely converging to a bad local optimum. Finally, we update the regularized policy network parameters based on the rewards associated with sequences of sampled actions, which are computed on the whole input document. We evaluate our end-to-end reinforced coreference resolution model on the English OntoNotes v5.0 benchmark. Our model achieves the new state-of-the-art F1-score of 73.8%, which outperforms previous best-published result (73.0%) of Lee et al. (2018) with statistical significance. 2 Related Work Closely related to our work are the end-to-end coreference models developed by Lee et al. (2017) and Lee et al. (2018). Different from previous pipeline approaches, Lee et al. used neural networks to learn mention representations and calculate mention and antecedent scores without using syntactic parsers. However, their models optimize a heuristic loss based on local decisions rather than the actual coreference evaluation metrics, while our reinforcement model directly optimizes the evaluation metrics based on the rewards calculated from sequences of actions. Our work is also inspired by Clark and Manning (2016a) and Yin et al. (2018), which resolve coreferences with reinforcement learning techniques. They view the mention-ranking model as an agent taking a series of actions, where each action links each mention to a candidate antecedent. They also use pretraining for initialization. Nevertheless, their models assume mentions are given while our work is end-to-end. Furthermore, we add entropy regularization to encourage more exploration (Mnih et al.; Eysenbach et al., 2019) and prevent our model from prematurely converging to a sub-optimal (or bad) local optimum. 3 Methodology 3.1 Task definition Given a document, the task of end-to-end coreference resolution aims to identify a set of mention clusters, each of which refers to the same entity. Following Lee et al. 
(2017), we formulate the task as a sequence of linking decisions for each span i to the set of its possible antecedents, denoted as Y(i) = {ϵ, 1, · · · , i −1}, a dummy antecedent ϵ and all preceding spans. In particular, the use of dummy antecedent ϵ for a span is to handle two possible scenarios: (i) the span is not an entity mention or (ii) the span is an entity mention but it is not coreferent with any previous spans. The final coreference clusters can be recovered with a backtracking step on the antecedent predictions. 3.2 Our Model Figure 2 illustrates a demonstration of our iterative coreference resolution model on a document. Given a document, our model first identifies top scored mentions, and then conducts a sequence of actions a1:T = {a1, a2, · · · , aT } over them, where T is the number of mentions and each action at assigns mention t to a candidate antecedent yt in Yt = {ϵ, 1, · · · , t −1}. The state at time t is defined as St = {g1, · · · , gt−1, gt}, where gi is the mention i’s representation. Once our model has finished all the actions, it observes a reward R(a1:T ). The calculated gradients are then propagated to update model parameters. We use the average of the three metrics: MUC (Grishman and Sundheim, 1995), B3 (Recasens and Hovy, 2011) and CEAFφ4 (Cai and 662 (1) (2) (3) (4) (5) Observe Sample (2) Act (1) (2) (3) (4) (5) Env update (1) (2) (3) (4) (5) (a) State: St (b) Policy network: pθ(at|St) (c) Action (e) Update env and compute reward (d) Execute action Observe Sample Act Env update Stop:1 Reward: r Stop:0 (1) (2) (3) (4) (5) (6) (1) (2) (3) (4) (5) (6) (1) (2) (3) (4) (5) (6) Figure 2: A demonstration of our reinforced coreference resolution method on a document with 6 mentions. The upper and lower rows correspond to step 5 and 6 respectively, in which the policy network selects mention (2) as the antecedent of mention (5) and leaves mention (6) as a singleton mention. The red (gray) nodes represent processed (current) mentions and edges between them indicate current predicted coreferential relations. The gray rectangles around circles are span embeddings and the reward is calculated at the trajectory end. Char & word emb encoder: BiLSTM Head-Finding Attention Span Representation (g) Mention Score (sm) Antecedent Score (sa) Coreference Score (s) Masked softmax (pθ) Self Attention Gate Gate Iterative refinement FFNNm FFNNa Figure 3: Architecture of the policy network. The components in dashed square iteratively refine span representations. The last layer is a masked softmax layer that computes probability distribution only over the candidate antecedents for each mention. We omit the span generation and pruning component for simplicity. Strube, 2010) as the reward. Following Clark and Manning (2016a), we assume actions are independent and the next state St+1 is generated based on the natural order of the starting position and then the end position of mentions regardless of action at. Policy Network: We adopt the state-of-the-art end-to-end neural coreferene scoring architecture from Lee et al. (2018) and add a masked softmax layer to compute the probability distribution over actions, as illustrated in Figure 3. The success of their approach lies in two aspects: (i) a coarse-tofine pruning to reduce the search space, and (ii) an iterative procedure to refine the span representation with an self-attention mechanism that averages over the previous round’s representations weighted by the normalized coreference scores. 
Given the state St and current network parameters θ, the probability of action at choosing yt is: pθ(at = yt|St) = exp (s(t, yt)) P y′∈Yt exp (s(t, y′)) (1) where s(i, j) is the pairwise coreference score between span i and span j defined as following: s(i, j) = sm(i) + sm(j) + sc(i, j) + sa(i, j) (2) For the dummy antecedent, the score s(i, ϵ) is fixed to 0. Here sm(.) is the mention score function, sc(., .) is a bilinear score function used to prune antecedents, and sa(., .) is the antecedent score function. Let gi denote the refined representation for span i after gating, the three functions are sm(i) = θT mFFNNm(gi), sc(i, j) = gT i Θcgj, and sa(i, j) is: sa(i, j) = θT a FFNNa([gi, gj, gi ◦gj, φ(i, j)]) where FFNN denotes a feed-forward neural network and ◦denotes the element-wise product. θm, Θc and θa are network parameters. φ(i, j) is the feature vector encoding speaker and genre information from metadata. The Reinforced Algorithm: We explore using the policy gradient algorithm to maximize the expected reward: J(θ) = Ea1:T ∼pθ(a)R(a1:T ) (3) Computing the exact gradient of J(θ) is infeasible due to the expectation over all possible action sequences. Instead, we use Monte-Carlo methods 663 Model MUC B3 CEAFφ4 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Avg. F1 Wiseman et al. (2016) 77.5 69.8 73.4 66.8 57.0 61.5 62.1 53.9 57.7 64.2 Clark and Manning (2016a) 79.2 70.4 74.6 69.9 58.0 63.4 63.5 55.5 59.2 65.7 Clark and Manning (2016b) 79.9 69.3 74.2 71.0 56.5 63.0 63.8 54.3 58.7 65.3 Lee et al. (2017) 78.4 73.4 75.8 68.6 61.8 65.0 62.7 59.0 60.8 67.2 Zhang et al. (2018) 79.4 73.8 76.5 69.0 62.3 65.5 64.9 58.3 61.4 67.8 Luan et al. (2018)* 78.6 77.1 77.9 66.3 65.4 65.9 66.0 63.1 64.5 69.4 Lee et al. (2018)* 81.4 79.5 80.4 72.2 69.5 70.8 68.2 67.1 67.6 73.0 Our base reinforced model 79.0 76.9 77.9 66.8 64.9 65.8 66.5 63.0 64.7 69.5 + Entropy Regularization 79.6 77.2 78.4 70.7 65.1 67.8 67.6 63.4 65.4 70.5 + ELMo embedding* 85.4 77.9 81.4 77.9 66.4 71.7 70.6 66.3 68.4 73.8 Table 1: Experimental results with MUC, B3 and CEAFφ4 metrics on the test set of English OntoNotes. The models marked with * utilized word embedding from deep language model ELMo (Peters et al., 2018). The F1 improvement is statistically significant under t-test with p < 0.05, compared with Lee et al. (2018). to approximate the actual gradient by randomly sampling Ns trajectories according to pθ and compute the gradient only over the sampled trajectories. Meanwhile, following Clark and Manning (2016a), we subtract a baseline value from the reward to reduce the variance of gradient estimation. The gradient estimate is as follows: ∇θJ(θ) ≈1 Ns Ns X i=1 T X t=1 ∇θ log pθ(ait|Sit)(Rτi −b) where Ns is the number of sampled trajectories, τi = {ai1, · · · aiT } is the ith sampled trajectory and b = PNs i=1 R(τi)/Ns is the baseline reward. The Entropy Regularization: To prevent our model from being stuck in highly-peaked polices towards a few actions, an entropy regularization term is added to encourage exploration. The final regularized policy gradient estimate is as follows: ∇θJ(θ) ≈1 Ns Ns X i=1 T X t=1 ∇θ  log pθ(ait|Sit) + λexpr pθ(ait|Sit) log pθ(ait|Sit)  (Rτi −b) where λexpr ≥0 is the regularization parameter that controls how diverse our model can explore. The larger the λexpr is, the more diverse our model can explore. If λexpr →∞, all actions will be sampled uniformly regardless of current policies. To the contrary, if λexpr = 0, all actions will be sampled based on current polices. 
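A schematic surrogate loss whose gradient matches the entropy-regularized estimate above is sketched below in PyTorch. It assumes the policy network has already produced, for each of the Ns sampled trajectories, a tensor of log-probabilities of the chosen actions, and that the trajectory-level reward (the average of MUC, B3 and CEAFφ4) has been computed externally; λexpr = 10−4 follows the value used in the experiments. This is an illustrative re-implementation, not the authors' code.

```python
import torch

def reinforced_loss(trajectory_log_probs, trajectory_rewards, lambda_expr=1e-4):
    """trajectory_log_probs: list of length Ns; element i is a 1-D tensor of
    log p_theta(a_it | S_it) for the actions taken in sampled trajectory i.
    trajectory_rewards: list of Ns floats, R(tau_i).
    Returns a loss whose gradient is the negative of the regularized
    policy-gradient estimate, so it can be minimized with any optimizer."""
    rewards = torch.tensor(trajectory_rewards, dtype=torch.float)
    baseline = rewards.mean()                      # b = average sampled reward
    loss = torch.zeros(())
    for log_p, r in zip(trajectory_log_probs, rewards):
        advantage = (r - baseline).detach()        # constant with respect to theta
        # log p term plus the entropy-style regularizer lambda * p * log p
        per_step = log_p + lambda_expr * log_p.exp() * log_p
        loss = loss - advantage * per_step.sum()
    return loss / len(trajectory_log_probs)
```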
Pretraining: We pretrain the policy network parameterized by θ using the loss function below: L(θ) = − N X i=1 X j∈Yi I(i, j) log (p(j|i; θ)) (4) where N is the number of mentions, I(i, j) = 1 if mention i and j are coreferred, and 0 otherwise. Yi is the set of candidate antecedents of mention i. 4 Experiments We evaluate our model on the English OntoNotes v5.0 (Pradhan et al., 2011), which contains 2,802 training documents, 343 development documents, and 348 test documents. We reuse the hyperparameters and evaluation metrics from Lee et al. (2018) with a few exceptions. First, we pretrain our model using Eq. (4) for around 200K steps and use the learned parameters for initialization. Besides, we set the number of sampled trajectories Ns = 100, tune the regularization parameter λexpr in {10−5, 10−4, 0.001, 0.01, 0.1, 1} and set it to 10−4 based on the development set. We use three standard metrics: MUC (Grishman and Sundheim, 1995), B3 (Recasens and Hovy, 2011) and CEAFφ4 (Cai and Strube, 2010). For each metric, we report the precision, recall and F1 score. The final evaluation is the average F1 of the above three metrics. 4.1 Results In Table 1, we compare our model with the coreference systems that have produced significant improvement over the last 3 years on the OntoNotes benchmark. The reported results are either adopted from their papers or reproduced from their code. The first section of the table lists the pipeline models, while the second section lists the end-to-end approaches. The third section lists the results of our model with different variants. Note that Luan et al. (2018)’s method contains 3 tasks: named entity recognition, relation inference and coreference resolution and we disable the relation inference task and train the other two tasks. Built on top of the model in Lee et al. (2018) but excluding ELMo, our base reinforced model improves the average F1 score around 2 points (statistical significant t-test with p < 0.05) compared 664 with Lee et al. (2017); Zhang et al. (2018). Besides, it is even comparable with the end-to-end multi-task coreference model that has ELMo support (Luan et al., 2018), which demonstrates the power of reinforcement learning combined with the state-of-the-art end-to-end model in Lee et al. (2018). Regarding our model, using entropy regularization to encourage exploration can improve the result by 1 point. Moreover, introducing the context-dependent ELMo embedding to our base model can further boosts the performance, which is consistent with the results in Lee et al. (2018). We also notice that our full model’s improvement is mainly from higher precision scores and reasonably good recall scores, which indicates that our reinforced model combined with more active exploration produces better coreference scores to reduce false positive coreference links. Overall, our full model achieves the state-ofthe-art performance of 73.8% F1-score when using ELMo and entropy regularization (compared to models marked with * in Table 1), and our approach simultaneously obtains the best F1-score of 70.5% when using fixed word embedding only. Model Prec. Rec. F1 Our full model 89.6 82.2 85.7 Lee et al. (2018) 86.2 83.7 84.9 Table 2: The overall mention detection results on the test set of OntoNotes. The F1 improvement is statistically significant under t-test with p < 0.05. Since mention detection is a subtask of coreference resolution, it is worthwhile to study the performance. Table 2 shows the mention detection results on the test set. 
Similar to coreference linking results, our model achieves higher precision and F1 score, which indicates that our model can significantly reduce false positive mentions while it can still find a reasonable number of mentions. 4.2 Analysis and Discussion Ablation Study: To understand the effect of different components, we conduct an ablation study on the development set as illustrated in Table 3. Clearly, removing entropy regularization deteriorates the average F1 score by 1%. Also, disabling coarse-to-fine pruning or second-order inference decreases 0.3/0.5 F1 score. Among all the components, ELMo embedding makes the most contribution and improves the result by 3.1%. Model Avg. F1 Full Model 74.1 w/o entropy regularization 73.1 w/o coarse-to-fine pruning 73.8 w/o second-order inference 73.6 w/o ELMo embedding 71.0 Table 3: Ablation study on the development set. “Coarse-to-fine pruning” and “second-order inference” are adopted from Lee et al. (2018) Impact of the parameter λexpr: Since the parameter λexpr directly controls how diverse the model is explored during training, it is necessary to study its effect on the model performance. Figure 4 shows the avg. F1 score on the development set for our full model and Lee et al. (2018). We observe that λexpr does have a strong effect on the performance and the best value is around 10−4. Besides, our full model consistently outperforms Lee et al. (2018) over a wide range of λexpr. 1e-5 1e-4 0.001 0.01 0.1 1 72 72.5 73 73.5 74 74.5 75 Avg. F1 Our model Lee et al. (2018) Figure 4: Avg. F1 score on the development set with different regularization parameter λexpr. The result of Lee et al. (2018) is also plotted for comparison, which is a flat line since it does not depend on λexpr. 5 Conclusion We present the first end-to-end reinforcement learning based coreference resolution model. Our model transforms the supervised higher order coreference model to a policy gradient model that can directly optimizes coreference evaluation metrics. Experiments on the English OntoNotes benchmark demonstrate that our full model integrated with entropy regularization significantly outperforms previous coreference systems. There are several potential improvements to our model as future work, such as incorporating mention detection result as a part of the reward. Another interesting direction would be introducing intermediate step rewards for each action to better guide the behaviour of the RL agent. 665 References Jie Cai and Michael Strube. 2010. Evaluation metrics for end-to-end coreference resolution systems. In Proceedings the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 28–36, Tokyo, Japan. Kevin Clark and Christopher D. Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2256–2262, Austin, TX. Kevin Clark and Christopher D Manning. 2016b. Improving coreference resolution by learning entitylevel distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 643–653, Berlin, Germany. Zeyu Dai, Hongliang Fei, and Ping Li. 2019. Coreference aware representation learning for neural named entity recognition. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Macau. Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. 2019. 
Diversity is all you need: Learning skills without a reward function. In Seventh International Conference on Learning Representations (ICLR), New Orleans, LA. Ralph Grishman and Beth Sundheim. 1995. Design of the muc-6 evaluation. In Proceedings of the 6th conference on Message understanding (MUC), pages 1– 11, Columbia, MD. Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In Proceedings of Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics (NAACL), pages 385–393, Los Angeles, CA. Hannaneh Hajishirzi, Leila Zilles, Daniel S. Weld, and Luke S. Zettlemoyer. 2013. Joint coreference resolution and named-entity linking with multi-pass sieves. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 289–299, Seattle, WA. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 188–197, Copenhagen, Denmark. Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-tofine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 687–692, New Orleans, LA. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3219–3232, Brussels, Belgium. Volodymyr Mnih, Adri`a Puigdom`enech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33nd International Conference on Machine Learning (ICML), New York, NY. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227– 2237, New Orleans, LA. Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. Conll-2011 shared task: Modeling unrestricted coreference in ontonotes. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–27, Portland, OR. Marta Recasens and Eduard Hovy. 2011. Blanc: Implementing the rand index for coreference evaluation. Natural Language Engineering, 17(4):485– 510. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698. Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 994– 1004, San Diego, CA. Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018. Deep reinforcement learning for chinese zero pronoun resolution. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 569–578, Melbourne, Australia. Rui Zhang, Cícero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, and Dragomir R. Radev. 2018. Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 102–107, Melbourne, Australia.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6345–6381 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6345 Topic Modeling with Wasserstein Autoencoders Feng Nan†, Ran Ding ∗‡, Ramesh Nallapati†, Bing Xiang† Amazon Web Services†, Compass Inc.‡ {nanfen, rnallapa, bxiang}@amazon.com†, [email protected]‡ Abstract We propose a novel neural topic model in the Wasserstein autoencoders (WAE) framework. Unlike existing variational autoencoder based models, we directly enforce Dirichlet prior on the latent document-topic vectors. We exploit the structure of the latent space and apply a suitable kernel in minimizing the Maximum Mean Discrepancy (MMD) to perform distribution matching. We discover that MMD performs much better than the Generative Adversarial Network (GAN) in matching high dimensional Dirichlet distribution. We further discover that incorporating randomness in the encoder output during training leads to significantly more coherent topics. To measure the diversity of the produced topics, we propose a simple topic uniqueness metric. Together with the widely used coherence measure NPMI, we offer a more wholistic evaluation of topic quality. Experiments on several real datasets show that our model produces significantly better topics than existing topic models. 1 Introduction Probabilistic topic models (Hoffman et al., 2010) have been widely used to explore large collections of documents in an unsupervised manner. They can discover the underlying themes and organize the documents accordingly. The most popular probabilistic topic model is the Latent Dirichlet Allocation (LDA) (Blei et al., 2003), where the authors developed a variational Bayesian (VB) algorithm to perform approximate inference; subsequently (Griffiths and Steyvers, 2004) proposed an alternative inference method using collapsed Gibbs sampling. More recently, deep neural networks have been successfully used for such probabilistic models with the emergence of variational autoencoders ∗This work was done when the author was with Amazon. (VAE) (Kingma and Welling, 2013). The key advantage of such neural network based models is that inference can be carried out easily via a forward pass of the recognition network, without the need for expensive iterative inference scheme per example as in VB and collapsed Gibbs sampling. Topic models that fall in this framework include NVDM (Miao et al., 2016), ProdLDA (Srivastava and Sutton, 2017) and NTM-R (Ding et al., 2018). At a high level, these models consist of an encoder network that maps the Bag-of-Words (BoW) input to a latent document-topic vector and a decoder network that maps the document-topic vector to a discrete distribution over the words in the vocabulary. They are autoencoders in the sense that the output of the decoder aims to reconstruct the word distribution of the input BoW representation. Besides the reconstruction loss, VAE-based methods also minimize a KL-divergence term between the prior and posterior of the latent vector distributions. Despite their popularity, these VAEbased topic models suffer from several conceptual and practical challenges. First, the Auto-Encoding Variational Bayes (Kingma and Welling, 2013) framework of VAE relies on a reparameterization trick that only works with the “location-scale” family of distributions. Unfortunately, the Dirichlet distribution, which largely accounted for the modeling success of LDA, does not belong to this family. 
The Dirichlet prior on the latent document-topic vector nicely captures the intuition that a document typically belongs to a sparse subset of topics. The VAE-based topic models have to resort to various Gaussian approximations to this effect. For example, NVDM and NTM-R simply use a Gaussian instead of a Dirichlet prior; ProdLDA uses a Laplace approximation of the Dirichlet distribution in the softmax basis as the prior. Second, the KL divergence term in the VAE objective forces the posterior distributions for all examples to match the prior, essentially making the encoder output independent of the input. This leads to the problem commonly known as posterior collapse (He et al., 2019). Although various heuristics such as KL-annealing (Bowman et al., 2016) have been proposed to address this problem, they are shown to be ineffective on more complex datasets (Kim et al., 2018).

In this work we leverage the expressive power and efficiency of neural networks and propose a novel neural topic model to address the above difficulties. Our neural topic model belongs to a broader family of Wasserstein autoencoders (WAE) (Tolstikhin et al., 2017). We name our neural topic model W-LDA to emphasize the connection with WAE. Compared to the VAE-based topic models, our model has a few advantages. First, we encourage the latent document-topic vectors to follow the Dirichlet prior directly via distribution matching, without any Gaussian approximation; by preserving the Dirichlet prior, our model represents a much more faithful generalization of LDA to neural network based topic models. Second, our model matches the aggregated posterior to the prior. As a result, the latent codes of different examples get to stay away from each other, promoting a better reconstruction (Tolstikhin et al., 2017). We are thus able to avoid the problem of posterior collapse.

To evaluate the quality of the topics from W-LDA and other models, we measure the coherence of the representative words of the topics using the widely accepted Normalized Pointwise Mutual Information (NPMI) (Aletras and Stevenson, 2013) score, which is shown to closely match human judgments (Lau et al., 2014). While NPMI captures topic coherence, it is also important that the discovered topics are diverse (not repetitive). Yet such a measure has been missing in the topic model literature.1 We therefore propose a simple Topic Uniqueness (TU) measure for this purpose. Given a set of representative words from all the topics, the TU score is inversely proportional to the number of times each word is repeated in the set. A high TU score means the representative words are rarely repeated and the topics are unique to each other. Using both TU and NPMI, we are able to provide a more wholistic measure of topic quality.

1 Most papers on topic modeling only present a selected small subset of non-repetitive topics for qualitative evaluation. The diversity among the topics is not measured.

To summarize our main contributions:
• We introduce a uniqueness measure to evaluate topic quality more wholistically.
• W-LDA produces significantly better quality topics than existing topic models in terms of topic coherence and uniqueness.
• We experiment with both the WAE-GAN and WAE-MMD variants (Tolstikhin et al., 2017) for distribution matching and demonstrate a key performance advantage of the latter with a carefully chosen kernel, especially in high-dimensional settings.
• We discover a novel technique of adding noise to W-LDA to significantly boost topic coherence. This technique can potentially be applied to WAE in general and is of independent interest.
2 Related Work
Adversarial Autoencoder (AAE) (Makhzani et al., 2015) was proposed as an alternative to VAE. The main difference is that AAE regularizes the aggregated posterior to be close to a prior distribution, whereas VAE regularizes the posterior to be close to the prior. Wasserstein autoencoders (WAE) (Tolstikhin et al., 2017) provide a justification for AAE from the Wasserstein distance minimization point of view. In addition to the adversarial training used in AAE, the authors also suggested using Maximum Mean Discrepancy (MMD) for distribution matching. Compared to VAE, AAE/WAEs are shown to produce better quality samples. AAE has been applied to the tasks of unaligned text style transfer and semi-supervised natural language inference by ARAE (Kim et al., 2017). To the best of our knowledge, W-LDA is the first topic model based on the WAE framework.

Recently, the Adversarial Topic model (ATM) (Wang et al., 2018) proposes using a GAN with a Dirichlet prior to learn topics. The generator takes in samples from the Dirichlet distribution and maps them to a document-word distribution layer to form the fake samples. The discriminator tries to distinguish the real documents from the fake documents. It also pre-processes the BoW representation of documents using TF-IDF. The evaluation is limited to topic coherence. A critical difference between W-LDA and ATM is that ATM tries to perform distribution matching in the vocabulary space whereas W-LDA does so in the latent document-topic space. Since the vocabulary space has much higher dimension (the size of the vocabulary) than the latent document-topic space (the number of topics), we believe it is much more challenging for ATM to train and perform well compared to W-LDA.

Our work is also related to the topic of learning disentangled representations. A disentangled representation can be defined as one where single latent units are sensitive to changes in single generative factors, while being relatively invariant to changes in other factors (Bengio et al., 2013). In topic modeling, such disentanglement means that the learned topics are coherent and distinct. Rubenstein et al. (2018) demonstrated that WAE learns better disentangled representations than VAE. Interestingly, Rubenstein et al. (2018) argue for adding randomness to the encoder output to address the dimensionality mismatch between the intrinsic data and the latent space. One of our contributions is to discover that by properly adding randomness, we can significantly improve the disentanglement (topic coherence and uniqueness) of WAE. Therefore we offer yet more evidence for the advantage of randomized WAE.

3 Background
3.1 Latent Dirichlet Allocation
LDA is the most popular topic model. Suppose there are V words in the vocabulary; each document is represented as a BoW w = (w_1, . . . , w_N), where w_n is the word at position n and N is the number of words in the document. The number of topics K is pre-specified. Each topic β_k, k = 1, . . . , K, is a probability distribution over the words in the vocabulary. Each document is assumed to have a mixed membership of the topics θ ∈ R^K with Σ_k θ_k = 1 and θ_k ≥ 0. The generative process for each document starts with drawing a document-topic vector θ from the Dirichlet prior distribution with parameter α. To generate the nth word in the document, a topic z_n ∈ {1, . . . , K} is drawn according to the multinomial distribution θ, and the word is then drawn according to the multinomial distribution β_{z_n}.
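To make this generative process concrete, here is a minimal NumPy sketch of it; the sizes and Dirichlet parameters below are illustrative choices of ours, not settings taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
V, K, N = 100, 5, 30                              # vocabulary size, number of topics, words per document
alpha = np.full(K, 0.1)                           # Dirichlet prior over document-topic vectors
beta = rng.dirichlet(np.full(V, 0.05), size=K)    # K topic-word distributions (each row sums to 1)

def generate_document():
    theta = rng.dirichlet(alpha)                  # document-topic vector theta
    words = []
    for _ in range(N):
        z = rng.choice(K, p=theta)                # draw a topic for this word position
        words.append(rng.choice(V, p=beta[z]))    # draw a word from that topic's distribution
    return theta, words

theta, doc = generate_document()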
Thus, the marginal likelihood of the document is

p(w \mid \alpha, \beta) = \int_\theta \Big( \prod_{n=1}^{N} \sum_{z_n=1}^{K} p(w_n \mid z_n, \beta)\, p(z_n \mid \theta) \Big)\, p(\theta \mid \alpha)\, d\theta.

Given a document w, the inference task is to determine the conditional distribution p(θ | w).

3.2 Wasserstein Auto-encoder
The latent variable generative model posits that a target domain example (e.g., a document w) is generated by first sampling a latent code θ from a prior distribution P_Θ and then passing it through a decoder network. The resulting distribution in the target domain is P_dec with density

p_{dec}(w) = \int_\theta p_{dec}(w \mid \theta)\, p(\theta)\, d\theta.   (1)

The key result of Tolstikhin et al. (2017) is that minimizing the optimal transport distance between P_dec and the target distribution P_w is equivalent to minimizing the following objective for some scalar value of λ:

\inf_{Q(\theta \mid w)} \; \mathbb{E}_{P_w} \mathbb{E}_{Q(\theta \mid w)} \big[ c(w, \mathrm{dec}(\theta)) \big] + \lambda \cdot D_\Theta(Q_\Theta, P_\Theta),   (2)

where c is a cost function and Q_Θ := E_{P_w} Q(θ | w) is the aggregated posterior, or the encoded distribution of the examples; D_Θ(Q_Θ, P_Θ) is an arbitrary divergence between Q_Θ and P_Θ. Similar to VAE, the WAE objective consists of a reconstruction term and a regularization term. Note that the key difference is that the regularization term for WAE is on the aggregated posterior, whereas the term for VAE is on the posterior distribution. Two different divergences were proposed for D_Θ(Q_Θ, P_Θ). The first is GAN-based, setting D_Θ(Q_Θ, P_Θ) = D_JS(Q_Θ, P_Θ) (Goodfellow et al., 2014). A discriminator (an adversary) is introduced trying to separate "true" points sampled from P_Θ and "fake" ones sampled from Q_Θ. The second is Maximum Mean Discrepancy (MMD) based (Gretton et al., 2012), setting D_Θ(Q_Θ, P_Θ) = MMD_k(Q_Θ, P_Θ). For a kernel function k : Θ × Θ → R, the MMD is defined as

\mathrm{MMD}_k(Q_\Theta, P_\Theta) = \Big\| \int_\Theta k(\theta, \cdot)\, dP_\Theta(\theta) - \int_\Theta k(\theta, \cdot)\, dQ_\Theta(\theta) \Big\|_{\mathcal{H}_k},   (3)

where H_k is the Reproducing Kernel Hilbert Space (RKHS) of real-valued functions mapping Θ to R and k is the kernel function; k(θ, ·) can be considered as the feature mapping of θ to a higher dimensional space.

4 W-LDA
We now introduce our W-LDA model. We consider the BoW representation of documents. With a slight abuse of notation, a document is a BoW w, where w_i is the number of occurrences of the ith vocabulary word in the document.

4.1 Encoder-decoder
The encoder of W-LDA consists of a Multi-Layer Perceptron (MLP) mapping w to an output layer of K units before applying softmax to obtain the document-topic vector θ ∈ S^{K−1}. The encoder acts as the recognition network to perform efficient inference: Q(θ | w) ≈ p(θ | w). Unlike VAE-based methods, we have the option to use a deterministic encoder θ = enc(w), which is conceptually and computationally simpler. In this case Q(θ | w) is a Dirac delta distribution. Given θ, the decoder consists of a single-layer neural network mapping θ to an output layer of V units before applying softmax to obtain ŵ ∈ S^{V−1}; ŵ is a probability distribution over the words in the vocabulary. Mathematically, we have

\hat{w}_i = \frac{\exp h_i}{\sum_{j=1}^{V} \exp h_j}, \qquad h = \beta \theta + b,   (4)

where β = [β_1, . . . , β_K] is the matrix of topic-word vectors as in LDA and b is an offset vector. The reconstruction loss for the autoencoder is simply the negative cross-entropy loss between the BoW w and the ŵ from the decoder:

c(w, \hat{w}) = -\sum_{i=1}^{V} w_i \log \hat{w}_i.   (5)
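A minimal PyTorch sketch of the encoder-decoder in Eqs. (4)-(5) is given below; the layer sizes and the class name are illustrative choices of ours, and this is not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WLDAAutoencoder(nn.Module):
    def __init__(self, vocab_size, num_topics, hidden=100):
        super().__init__()
        # Deterministic encoder: BoW counts -> document-topic vector on the simplex.
        self.encoder = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_topics),
        )
        # Single-layer decoder holding the topic-word matrix beta and offset b, as in Eq. (4).
        self.decoder = nn.Linear(num_topics, vocab_size)

    def forward(self, bow):                                   # bow: (batch, vocab_size) word counts
        theta = F.softmax(self.encoder(bow), dim=-1)          # document-topic vector theta
        log_w_hat = F.log_softmax(self.decoder(theta), dim=-1)
        recon_loss = -(bow * log_w_hat).sum(dim=-1).mean()    # Eq. (5), averaged over the batch
        return theta, recon_loss

During training this reconstruction loss would be combined with the distribution-matching term of Eq. (2), which is discussed next.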
4.2 Distribution matching
We explored both GAN and MMD-based options for D_Θ(Q_Θ, P_Θ). For GAN, we additionally introduce an MLP as a discriminator network. We alternate between minimization and maximization as done in (Tolstikhin et al., 2017; Makhzani et al., 2015). Unfortunately, we are unable to train the GAN-based W-LDA, as we face a vanishing gradient problem and the encoder fails to update for distribution matching. We investigate this issue further in Section 6.4 and demonstrate through a toy example that MMD is better suited than GAN for matching high-dimensional Dirichlet distributions. We therefore focus on the MMD-based method.

The immediate question is which kernel function to use for MMD. Since our task is to match the Dirichlet distribution, it is natural to seek kernel functions that are based on meaningful distance metrics on the simplex. We therefore choose to use the information diffusion kernel (Lafferty and Lebanon, 2002), which uses the geodesic distance

d(\theta, \theta') = 2 \arccos\Big( \sum_{k=1}^{K} \sqrt{\theta_k \theta'_k} \Big).

Intuitively, it first maps points on the simplex to a sphere via θ_k → √θ_k and then measures the distance between points on the curved surface. Compared to the more common L-2 distance, the geodesic distance is much more sensitive to points near the boundary of the simplex, which is especially important for sparse data (Lafferty and Lebanon, 2002). The information diffusion kernel we use is

k(\theta, \theta') = \exp\Big( -\arccos^2\Big( \sum_{k=1}^{K} \sqrt{\theta_k \theta'_k} \Big) \Big).   (6)

The MMD in (3) can be unbiasedly estimated using m samples via

\widehat{\mathrm{MMD}}_k(Q_\Theta, P_\Theta) = \frac{1}{m(m-1)} \sum_{i \neq j} k(\theta_i, \theta_j) + \frac{1}{m(m-1)} \sum_{i \neq j} k(\theta'_i, \theta'_j) - \frac{2}{m^2} \sum_{i,j} k(\theta_i, \theta'_j),   (7)

where {θ_1, . . . , θ_m} are sampled from Q_Θ and {θ'_1, . . . , θ'_m} are sampled from P_Θ. This form can be more easily understood by writing the norm in (3) in terms of an inner product and expanding the product of sums.

In practice, the reconstruction loss (5) can be orders of magnitude larger than the regularization term D_Θ(Q_Θ, P_Θ). We therefore need to multiply the reconstruction loss by a scaling factor in order to balance the two terms. Yet, we would like to avoid introducing an additional hyperparameter. Consider a baseline case where the document has length s and contains only one unique word; further assume the output of the decoder is completely uninformative, i.e., ŵ_i = 1/V, i = 1, . . . , V; then s log V is the reconstruction loss. By setting the scaling factor to 1/(s log V), we can normalize the reconstruction loss to 1 with respect to this baseline case. Empirical study suggests that such a choice works well across multiple datasets.
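The kernel in Eq. (6) and the estimate in Eq. (7) take only a few lines to compute. The NumPy sketch below uses function names of our own choosing and assumes the two sample sets have the same size m; it illustrates the formulas rather than reproducing the authors' code.

import numpy as np

def diffusion_kernel(a, b):
    # Information diffusion kernel, Eq. (6): a is (m, K), b is (n, K), rows on the simplex.
    inner = np.clip(np.sqrt(a) @ np.sqrt(b).T, 0.0, 1.0)   # sum_k sqrt(theta_k * theta'_k)
    return np.exp(-np.arccos(inner) ** 2)

def mmd_estimate(q_samples, p_samples):
    # Unbiased estimate of Eq. (7); assumes q_samples and p_samples each hold m vectors.
    m = q_samples.shape[0]
    k_qq = diffusion_kernel(q_samples, q_samples)
    k_pp = diffusion_kernel(p_samples, p_samples)
    k_qp = diffusion_kernel(q_samples, p_samples)
    off_diag = lambda gram: (gram.sum() - np.trace(gram)) / (m * (m - 1))
    return off_diag(k_qq) + off_diag(k_pp) - 2.0 * k_qp.mean()

rng = np.random.default_rng(0)
theta_q = rng.dirichlet(np.full(50, 0.1), size=256)   # stand-in for encoder outputs from Q
theta_p = rng.dirichlet(np.full(50, 0.1), size=256)   # samples from the Dirichlet prior P
print(mmd_estimate(theta_q, theta_p))                 # close to zero when the two sets match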
4.3 Adding noise
One of the key discoveries of this paper is that adding noise to the document-topic vectors during training leads to substantially better topics. Specifically, for each training example we sample a random Dirichlet vector from the prior, θ_noise ∼ P_Θ, and mix it with the encoder output θ = enc(w):

\theta^{+} = (1 - \alpha)\, \theta + \alpha\, \theta_{noise},   (8)

where α ∈ [0, 1] is the mixing proportion. α = 0 is equivalent to not adding any noise; α = 1 is equivalent to using pure noise and ignoring the encoder output altogether. We use θ+ as the input to the decoder and compute the reconstruction loss for stochastic gradient optimization. Note that although adding noise appears similar to the reparameterization trick in VAEs, it is much more flexible and not restricted to the "location-scale" family of distributions as in VAEs.

5 Topic extraction and TU measure
We can extract the top words based on the decoder matrix weights. Specifically, the representative words of the kth topic are those corresponding to the top entries of β_k sorted in descending order. As explained in the introduction, we evaluate the quality of the topics in terms of both topic uniqueness (TU) and coherence (NPMI). We propose a simple measure of TU defined as follows. Given the top L words from each of the K topics, the TU for topic k is

\mathrm{TU}(k) = \frac{1}{L} \sum_{l=1}^{L} \frac{1}{\mathrm{cnt}(l, k)}, \qquad k = 1, . . . , K,

where cnt(l, k) is the total number of times the lth top word in topic k appears in the top words across all topics. For example, if the lth top word in topic k appears only in topic k, then cnt(l, k) = 1; on the other hand, if the word appears in all the topics then cnt(l, k) = K. Finally, the average TU is computed as TU = (1/K) Σ_{k=1}^{K} TU(k). The range of the TU value is between 1/K and 1. A higher TU value means the produced topics are more diverse.
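The TU measure above is straightforward to implement; the short Python sketch below is ours, and the input format (one list of top-L words per topic) is an assumption made for illustration.

from collections import Counter

def topic_uniqueness(top_words_per_topic):
    # top_words_per_topic: list of K lists, each holding the distinct top-L words of one topic.
    counts = Counter(w for topic in top_words_per_topic for w in topic)   # cnt(l, k) per word
    tu_per_topic = [
        sum(1.0 / counts[w] for w in topic) / len(topic)   # TU(k) = (1/L) * sum_l 1/cnt(l, k)
        for topic in top_words_per_topic
    ]
    return sum(tu_per_topic) / len(tu_per_topic)           # average TU over the K topics

# Two topics that share one of their three top words: TU = (1/3)(1 + 1 + 1/2) ≈ 0.83.
print(topic_uniqueness([["game", "team", "season"], ["game", "court", "judge"]]))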
6 Experiments and Results
We conduct experiments on a synthetic corpus generated according to the LDA model and six widely used real-world benchmark datasets: 20NG (the same version as (Srivastava and Sutton, 2017)), AGNews,2 DBpedia (Lehmann et al., 2013), Yelp review polarity from the Yelp Dataset Challenge in 2015, NYTimes (Dheeru and Karra Taniskidou, 2017) and Wikitext-103 (Merity et al., 2016). We use the same version of AGNews, DBpedia and Yelp review polarity as (Zhang et al., 2015). These datasets have very different characteristics in terms of vocabulary size, document length and the number of samples. Four of them have class labels associated with the documents. Table 1 summarizes the basic statistics.

2 http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

dataset         #train   #test   vocab    avg.doc.len   #class
Synthetic LDA   10000    -       100      30            -
20NG            10926    7266    1995     52.5          20
AGNews          96000    7600    31827    17.6          4
DBPedia         448000   70000   10248    21.3          14
Yelp P.         448000   38000   20000    57.5          2
NYTimes         242798   29977   102660   330.6         -
Wikitext-103    28472    60      20000    1392.2        -
Table 1: Dataset summary

LDA (C.G.)   Online LDA   ProdLDA   NTM-R   W-LDA
0.88         0.98         0.76      0.52    0.94
Table 2: Precision in topic recovery: W-LDA is competitive with the best models.

6.1 Baselines
We evaluate W-LDA against existing topic model methods:
1. Collapsed Gibbs Sampling LDA as implemented in the Mallet package (McCallum, 2002);
2. Online LDA as implemented in the Gensim package (Řehůřek and Sojka, 2010);
3. ProdLDA (Srivastava and Sutton, 2017): VAE-based, uses a Gaussian approximation of the Dirichlet prior in the softmax space;
4. NTM-R (Ding et al., 2018): VAE-based, an improvement of NVDM (Miao et al., 2016), uses pretrained word embeddings for coherence regularization.

6.2 Synthetic topic recovery
We first verify the ability of W-LDA to recover topics via a synthetic experiment. We construct a corpus of 10000 documents following the LDA generative process. The vocabulary size is 100, there are 5 topics, and the Dirichlet parameters are 0.1. We run all methods with 5 latent topics and compare the recovered top 10 words for each topic against the ground truth. We compute the maximum precision among all permutations to align the topics and report the result in Table 2. Note that a top-10 word in a predicted topic is a false positive if it is not among the top-10 words in the ground truth topic. We also compare the topic words produced by W-LDA against the ground truth in Table 3. W-LDA clearly recovers the ground truth very well, even the relative importance of most top words. Details of the experiments can be found in the Appendix.

46, 4, 44, 30, 81, 40, 87, 13, 58, 62
46, 4, 44, 30, 81, 40, 13, 87, 62, 58

13, 81, 29, 33, 27, 1, 7, 83, 2, 39
13, 81, 29, 27, 33, 1, 7, 83, 39, 2

88, 67, 16, 13, 14, 3, 75, 8, 61, 71
88, 67, 16, 13, 14, 3, 75, 8, 44, 32

38, 17, 57, 48, 23, 56, 50, 83, 16, 82
38, 17, 57, 48, 23, 50, 56, 83, 16, 82

44, 86, 32, 62, 20, 99, 83, 88, 51, 31
44, 86, 32, 62, 20, 88, 99, 83, 16, 31
Table 3: Top 10 word indices ordered in decreasing importance. Each cell corresponds to a topic, in which the first row is the ground truth and the second row is the W-LDA output. The false positives are in bold. W-LDA recovers the ground truth topics very well.

Figure 1: W-LDA: TU and NPMI for various Dirichlet parameters and noise α for 20NG (top row); NYTimes (2nd row) and Wikitext-103 (bottom row). Adding Dirichlet noise generally improves topic NPMI. Minimizing reconstruction loss only (without distribution matching in latent space) generally leads to mode collapse of the latent space where only one dimension is nonzero and the failure to learn the topics.

6.3 Parameter settings for benchmarking
The parameter settings used to run the real-world datasets are as follows. For LDA with collapsed Gibbs sampling, we use the default Mallet parameter settings and run 2000 iterations. For Online LDA, we use the default Gensim parameter settings and run 100 passes. For ProdLDA, we use the original implementation provided by the authors.3 We tune the dropout probability on the latent vector (the keep_prob parameter in the original implementation) as we find it has a significant impact on topic quality. We vary it from 0.4 (the recommended value in the original paper) to 1. We find that setting it to 0.4 gives the highest NPMI; setting it to 1 gives better TU but much lower NPMI. For NTM-R, we vary the Word Embedding Topic Coherence (WETC) coefficient in [0, 1, 2, 5, 10, 50] and observe that setting it to 10 usually gives the best results in terms of NPMI and TU; setting it to 50 indeed raises the NPMI but the TU becomes very low and the topics consist of repetitive and generic words. For W-LDA, we set the Dirichlet parameter to 0.1 and 0.2 and use MMD with the information diffusion kernel (6); we set the noise coefficient α = {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6}. Similar to ProdLDA, we use the Adam optimizer with high momentum β1 = 0.99 and a learning rate of 0.002, as these can overcome initial local minima. To be consistent, we set the encoder layers of W-LDA and ProdLDA the same as NTM-R, with two hidden layers and 100 neurons in each layer. For the evaluation of topic quality, we monitor the NPMI and TU for all algorithms over a reasonable number of iterations (either until the topic quality begins to deteriorate or stops improving over a number of iterations) and report the best results from the different parameter settings.

3 https://github.com/akashgit/autoencoding_vi_for_topic_models
6.4 Benchmark results and ablation study
The benchmark results are summarized in Table 4. We observe that LDA with collapsed Gibbs sampling produces similar topics as Online LDA. Although the NPMI of the topics produced by ProdLDA is high, the TU score is low, which means the topics are repetitive. For a qualitative inspection, we identify several repetitive topics that ProdLDA produces on Wikitext-103 in Table 6, together with the best aligned topics from W-LDA. The topics from W-LDA are much more unique. A complete comparison of the topics from all of the methods can be found in the Appendix. NTM-R generally achieves a higher TU than the LDAs and ProdLDA but has lower NPMI. Overall, W-LDA achieves much higher NPMI as well as TU than existing methods, especially on NYTimes and Wikitext-103.

               LDA (C.G.)   Online LDA   ProdLDA      NTM-R        W-LDA
20NG           0.264/0.85   0.252/0.79   0.267/0.58   0.240/0.62   0.252/0.86
AGNews         0.239/0.76   0.213/0.80   0.245/0.68   0.220/0.69   0.270/0.89
DBpedia        0.257/0.81   0.230/0.81   0.334/0.49   0.222/0.71   0.295/1.00
Yelp P.        0.238/0.68   0.233/0.74   0.215/0.63   0.224/0.40   0.235/0.82
NYTimes        0.300/0.81   0.291/0.80   0.319/0.67   0.218/0.88   0.356/1.00
Wikitext-103   0.289/0.75   0.282/0.78   0.400/0.62   0.215/0.91   0.464/1.00
Table 4: Benchmark results for 50 topics. The numbers in each cell are NPMI/TU. Overall our method (W-LDA) achieves much higher NPMI as well as TU than existing methods.

Document classification: Since W-LDA is not based on variational inference, we cannot compute the ELBO-based perplexity as a performance metric as in (Miao et al., 2016; Srivastava and Sutton, 2017; Ding et al., 2018). To compare the predictive performance of the latent document-topic vectors across all models, we use document classification accuracy instead. The detailed setup can be found in the Appendix to save space. The accuracies on the test set are summarized in Table 7. We observe that the latent vectors from W-LDA have competitive classification accuracy with the LDAs and NTM-R. ProdLDA performs particularly poorly on the DBpedia dataset; further inspection shows that the distributions of the document-topic vectors produced by ProdLDA on test and training data are quite different.

           LDA (C.G.)   Online LDA   ProdLDA   NTM-R   W-LDA
20NG       0.513        0.473        0.213     0.433   0.431
AGNews     0.848        0.825        0.827     0.857   0.853
DBpedia    0.906        0.890        0.112     0.916   0.938
Yelp P.    0.869        0.865        0.777     0.862   0.856
Table 5: Test accuracies for the document classification task. W-LDA is competitive with the best models.

Next, we carry out an ablation study on W-LDA.

Distribution matching: What if we only minimize the reconstruction loss of the auto-encoder, without the loss term associated with the distribution matching? We found that across all datasets in general, the learning tends to get stuck in bad local minima where only one dimension in the latent space is non-zero. The decoder weights also fail to produce meaningful topics at all.
The NPMI and TU values are plotted in dashed lines in Figure 3. This confirms the importance of distribution matching in our topic model.

season, playoff, league, nhl, game, rookie, touchdown, player, coach, goaltender
season, nhl, playoff, game, rookie, shutout, player, league, roster, goaltender
touchdown, fumble, quarterback, kickoff, punt, yardage, cornerback, linebacker, rushing, preseason

infantry, casualty, troop, battalion, artillery, reinforcement, brigade, flank, division, army
brigade, casualty, troop, infantry, artillery, flank, battalion, commanded, division, regiment
artillery, casualty, destroyer, battalion, squadron, reinforcement, troop, regiment, guadalcanal, convoy
battalion, brigade, infantry, platoon, bridgehead, regiment, panzer, rok, pusan, counterattack

mph, km, tropical, westward, landfall, flooding, northwestward, rainfall, northeastward, extratropical
mph, km, landfall, tropical, storm, hurricane, rainfall, flooding, extratropical, saffir
km, mph, tropical, westward, rainfall, flooding, convection, landfall, extratropical, storm
dissipating, tropical, dissipated, extratropical, cyclone, shear, northwestward, southwestward, saffir, convection
Table 6: Comparison of select ProdLDA and W-LDA topics on Wikitext-103. ProdLDA topics are repetitive (above the dashed line in each cell); W-LDA topics are unique (below the dashed line in each cell).

Dirichlet parameter and noise effects: We study the effect of the Dirichlet parameter that controls the sparsity and of the amount of noise added to the latent vector during training. Due to the space limit, we only plot the TU and NPMI curves for 3 datasets in Figure 3. The full set of plots on all datasets can be found in the Appendix. We observe that NPMI can be significantly improved by setting the noise coefficient α to 0.5 compared to 0 (no added noise). It may appear surprising that such a high level of noise is beneficial; however, we note that due to the sparsity of the Dirichlet noise, the significant elements of the encoder output θ would remain significant in θ+ in Eq. (8). In other words, the variance from the noise does not wash out the signal; it helps spread out the latent space to benefit the training of the decoder network. This highlights the importance of randomness in the WAE framework on the one hand and the importance of the Dirichlet assumption in the topic model on the other hand. The effect of setting the Dirichlet parameter to 0.1 or 0.2 is more mixed, signaling that the inherent topic sparsity in these datasets can be different.

MMD vs GAN: We encountered a vanishing gradient problem for the GAN-based W-LDA. The encoder was not able to learn to match the prior distribution. To investigate further, we compare MMD and GAN in distribution matching via a toy
Next, we increase the number of neurons in each hidden and output layer to 50 and set the prior to a 50D Dirichlet distribution of parameter 0.1. Since there is no easy way to visualize the 50D distribution, we use t-SNE (Maaten and Hinton, 2008) to reduce the vectors to 2D and scatter plot the encoder output vectors (red) together with samples from the true Dirichlet prior (green) in Figure 5. Since the samples from the 50D Dirichlet prior tends to be sparse, there are roughly 50 green clusters corresponding to the 50 modes. We see that GAN (first row) fails to match the Dirichlet prior. On the other hand, MMD (second row) is able to gradually match the Dirichlet prior by capturing more and more clusters (modes). Given recent report that GAN learns challenging distributions much better than MMD (Li et al., 2017), our model offers an alternative view in support of the latter. The success of using MMD in W-LDA is perhaps not surprising; the Dirichlet distribution is supported in the space of simplex, which behaves much more regularly than the space of pixels in images. Furthermore, the information diffusion kernel that we choose is able to exploit such regularity in the geometry. 7 Conclusion and Future Work We have proposed W-LDA, a neural network based topic model. Unlike existing neural network based models, W-LDA can directly enforce Dirichlet prior, which plays a central role in the sparse mixed membership model of LDA. To measure topic diversity, we have proposed a topic uniqueness measure in addition to the widely used NPMI for coherence. We report significant improvement of topic quality in both coherence and diversity over existing topic models. We further make two novel discoveries: first, MMD out-performs GAN in matching high dimensional Dirichlet distributions; second, carefully adding noise to the encoder output can significantly boost topic coherence without harming diversity. We believe these discoveries are of independent interest to the broader research on MMD, GAN and WAE. While we were not successful in training WLDA using the GAN-based method, we acknowledge that many new formulations of GAN have been proposed to overcome mode collapse and vanishing gradient such as (Arjovsky et al., 2017; Gulrajani et al., 2017). A future direction is to improve the GAN-based training of W-LDA. Another future direction is to experiment with more complex priors than the Dirichlet prior. The W-LDA framework that we have proposed offers the flexibility of matching more sophisticated prior distributions via MMD or GAN. For example, the nested Chinese restaurant process can be used as a nonparametric prior to induce hierarchical topic models (Griffiths et al., 2004). 6353 References Nikolaos Aletras and Mark Stevenson. 2013. Evaluating topic coherence using distributional semantics. In Proceedings of the 10th International Conference on Computational Semantics, IWCS 2013, March 19-22, 2013, University of Potsdam, Potsdam, Germany, pages 13–22. Martin Arjovsky, Soumith Chintala, and L´eon Bottou. 2017. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214–223. Y Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35:1798–1828. David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:2003. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. 
Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. Dua Dheeru and EfiKarra Taniskidou. 2017. UCI machine learning repository. Ran Ding, Ramesh Nallapati, and Bing Xiang. 2018. Coherence-aware neural topic modeling. In EMNLP, pages 830–836. Association for Computational Linguistics. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Sch¨olkopf, and Alexander Smola. 2012. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773. T. L. Griffiths and M. Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Science, 101:5228–5235. Thomas L. Griffiths, Michael I. Jordan, Joshua B. Tenenbaum, and David M. Blei. 2004. Hierarchical topic models and the nested chinese restaurant process. In S. Thrun, L. K. Saul, and B. Sch¨olkopf, editors, Advances in Neural Information Processing Systems 16, pages 17–24. MIT Press. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. 2017. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pages 5767–5777. Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. arXiv preprint arXiv:1901.05534. Matthew Hoffman, Francis R. Bach, and David M. Blei. 2010. Online learning for latent dirichlet allocation. In J. D. Lafferty, C. K. I. Williams, J. ShaweTaylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 856–864. Curran Associates, Inc. Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. 2018. Semi-amortized variational autoencoders. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2678–2687, Stockholmsm¨assan, Stockholm Sweden. PMLR. Yoon Kim, Kelly Zhang, Alexander M Rush, Yann LeCun, et al. 2017. Adversarially regularized autoencoders. arXiv preprint arXiv:1706.04223. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. John D. Lafferty and Guy Lebanon. 2002. Information diffusion kernels. In Advances in Neural Information Processing Systems 15 [Neural Information Processing Systems, NIPS 2002, December 914, 2002, Vancouver, British Columbia, Canada], pages 375–382. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S¨oren Auer, and Christian Bizer. 2013. Dbpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web – Interoperability, Usability, Applicability. Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnab´as P´oczos. 2017. Mmd gan: Towards deeper understanding of moment matching network. 
In Advances in Neural Information Processing Systems, pages 2203–2213. L.J.P.V.D. Maaten and GE Hinton. 2008. Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9:2579–2605. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. 2015. Adversarial autoencoders. arXiv preprint arXiv:1511.05644. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. Http://mallet.cs.umass.edu. 6354 Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. CoRR, abs/1609.07843. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International Conference on Machine Learning, pages 1727–1736. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Paul K Rubenstein, Bernhard Schoelkopf, and Ilya Tolstikhin. 2018. On the latent space of wasserstein auto-encoders. arXiv preprint arXiv:1802.03761. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. arXiv preprint arXiv:1703.01488. Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. 2017. Wasserstein autoencoders. arXiv preprint arXiv:1711.01558. Rui Wang, Deyu Zhou, and Yulan He. 2018. Atm: Adversarial-neural topic model. arXiv preprint arXiv:1811.00265. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 649–657, Cambridge, MA, USA. MIT Press. 6355 8 Appendix: synthetic topic recovery experiment details We construct a synthetic corpus of 10000 documents following the LDA generative process. The vocabulary size is 100 and there are 5 topics and Dirichlet parameters is 0.1. For all models we set the number of topics to be 5. For LDA with collapsed Gibbs sampling, we use the default parameters of Mallet and run 2000 iterations. For Online LDA we run 200 iterations using the default parameters. We set the encoder network to have two hidden layers with 10 units each for the NTMR, ProdLDA and W-LDA. For these 3 methods, we run 50 epochs and evaluate the topics every 10 epochs to choose the best epoch. We disable the WETC parameter for NTM-R because there is no word embedding. We set the Dirichlet parameter to 0.1 for W-LDA without adding noise. For ProdLDA we set the keep_prob parameter to 1. 9 Appendix: additional TU and NPMI plots for W-LDA Due to space limit, we only provided TU and NPMI plots for 3 datasets in Figure 1 in the main paper. Here we provide the complete plots for all datasets in Figure 3. Note that even though the NPMI in Yelp P. without distribution matching is high, the TU is very low. The topics turn out to consist of highly repetitive words such as “good”, “nice”, “love”. 10 Appendix: document classification Besides exploring the corpus using interpretable topics, another usage for topic model is to act as a feature transformation of documents for downstream task such as document classification. We compare the predictive performance of the latent document-topic vectors across all models. We set the number of topics for all models to be 50. 
For the neural network based models, we extract the output of the encoder as the features for document classification. For LDA, we extract the inferred document-topic vectors. A linear multiclass classifier with cross-entropy loss is minimized using the Adam optimizer with a learning rate of 0.01 for 100 iterations for all models. Finally, we choose the best parameter setting for each model based on the accuracy on a separate validation set. For NTM, we vary the topic coherence parameter between 0 and 50; for ProdLDA we vary the keep_prob parameter between 0.4 and 1. For W-LDA, we set the Dirichlet parameter to 0.1 and vary the Dirichlet prior parameter between 0.1 and 0.7. The accuracies on the test set are summarized in Table 7. We observe that the latent vectors from W-LDA have competitive classification accuracy with the LDAs and NTM-R. ProdLDA performs particularly poorly on the DBpedia dataset; further inspection shows that the distributions of the document-topic vectors produced by ProdLDA on test and training data are quite different.

Figure 3: W-LDA: TU and NPMI for various Dirichlet parameters and noise α for 20NG (top row); NYTimes (2nd row) and Wikitext-103 (bottom row). Adding Dirichlet noise generally improves topic NPMI. Minimizing reconstruction loss only (without distribution matching in latent space) generally leads to mode collapse of the latent space where only one dimension is nonzero and the failure to learn the topics.

           LDA (C.G.)   Online LDA   ProdLDA   NTM-R    W-LDA
20NG       0.5129       0.4725       0.2133    0.4334   0.4308
AGNews     0.8478       0.8253       0.8265    0.8567   0.8529
DBpedia    0.9059       0.8902       0.1124    0.9159   0.9382
Yelp P.    0.8685       0.8652       0.7773    0.8616   0.8563
Table 7: Test accuracies for the document classification task. W-LDA is competitive with the best models.

11 Appendix: MMD vs GAN in distribution matching
In our experiments we encountered a vanishing gradient problem for the GAN-based W-LDA. The encoder was not able to learn to match the prior distribution. To investigate further, we compare MMD and GAN in distribution matching via a synthetic experiment. We show that both approaches perform well for a low-dimensional Dirichlet distribution, yet MMD performs much better than GAN in the higher-dimensional setting. Our setup is as follows. 100,000 input vectors are drawn from a 2D spherical Gaussian distribution. The encoder network consists of two hidden layers with 2 neurons in each layer and a 2D output layer with softmax. The goal is to train the encoder network so that the output appears to come from a 2D Dirichlet prior distribution of parameter 0.1. Since the 2 dimensions of the output vector sum to 1, we can visualize the resulting distribution via the histogram of the first dimension. The histogram from the true 2D Dirichlet prior of parameter 0.1 is shown in the rightmost sub-figure on the second row of Figure 4. After 20 epochs of GAN training, the encoder output distribution is able to match that of the prior as shown in the first row of Figure 4. Similarly, MMD training is able to match that of the prior as shown in the second row of Figure 4. Next, we increase the number of neurons in each hidden and output layer to 50 and set the prior to a Dirichlet distribution of parameter 0.1. Since there is no easy way to visualize the 50-dimensional distribution, we use t-SNE (Maaten and Hinton, 2008) to reduce the vectors to 2D and scatter plot the encoder output vectors (red) together with samples from the true Dirichlet prior (green). Figure 5 shows such a plot.
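As a rough illustration of the MMD side of this toy setup, here is a hypothetical PyTorch sketch; it substitutes a standard RBF kernel and a simple (biased) MMD estimate for the paper's information diffusion kernel and unbiased estimator, and the layer sizes follow the 50-dimensional configuration described above.

import torch
import torch.nn as nn

def rbf_mmd(x, y, gamma=1.0):
    # Biased MMD estimate with an RBF kernel (a stand-in for Eqs. (6)-(7)).
    def sq_dists(a, b):
        return (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2.0 * a @ b.T
    def gram(a, b):
        return torch.exp(-gamma * sq_dists(a, b))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

dim = 50
encoder = nn.Sequential(nn.Linear(2, dim), nn.ReLU(),
                        nn.Linear(dim, dim), nn.ReLU(),
                        nn.Linear(dim, dim))
prior = torch.distributions.Dirichlet(torch.full((dim,), 0.1))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(2000):
    inputs = torch.randn(256, 2)                          # 2D spherical Gaussian inputs
    encoded = torch.softmax(encoder(inputs), dim=-1)      # encoder output on the simplex
    loss = rbf_mmd(encoded, prior.sample((256,)))         # match encoded batch to prior draws
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()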
Since the samples from the 50 dimensional Dirichlet prior tends to be sparse, there are roughly 50 green clusters corresponding to the 50 modes. We see that GAN (first row) fails to match the Dirichlet prior. On the other hand, MMD (second row) is able to gradually match the Dirichlet prior by capturing more and more clusters (modes). 6357 Figure 4: Histogram for the encoded latent distribution over epochs. First row corresponds to epochs 0, 10, 20 and 50 of GAN training; second row corresponds to epochs 0, 10, 20 and 50 of MMD training; the right most figure on the second row corresponds to the histogram of the prior distribution: 2D Dirichlet of parameter 0.1 Figure 5: t-SNE plot of encoder output vectors (red) and samples from the Dirichlet prior (green) over epochs. First row corresponds to epochs 0,10,30,99 of GAN training; second row corresponds to those of MMD training 6358 12 Appendix: topic words The numbers at the beginning of each row are topic ID, TU and NPMI for each topic. 12.1 Topic words on 20NG LDA Collapsed Gibbs sampling: NPMI=0.264, TU=0.854 [ 0 - 0.85 - 0.30594]: ['question', 'answer', 'correct', 'order', 'wrong', 'claim', 'knowledge', 'doubt', 'original', 'reason'] ,→ [ 1 - 0.49762 - 0.28311]: ['thing', 'find', 'idea', 'couple', 'make', 'ago', 'put', 'guess', 'read', 'happy'] [ 2 - 1 - 0.41173]: ['god', 'jesus', 'bible', 'christian', 'church', 'christ', 'faith', 'christianity', 'lord', 'sin'] [ 3 - 0.88333 - 0.27473]: ['life', 'hell', 'man', 'death', 'love', 'body', 'dead', 'world', 'die', 'point'] [ 4 - 0.79762 - 0.32737]: ['case', 'point', 'fact', 'make', 'situation', 'clear', 'avoid', 'idea', 'simply', 'position'] [ 5 - 0.95 - 0.35637]: ['drive', 'scsi', 'disk', 'hard', 'controller', 'floppy', 'ide', 'rom', 'tape', 'card'] [ 6 - 0.95 - 0.19716]: ['mr', 'president', 'stephanopoulos', 'package', 'today', 'house', 'press', 'myers', 'george', 'continue'] ,→ [ 7 - 0.61429 - 0.26655]: ['work', 'job', 'lot', 'school', 'year', 'business', 'experience', 'make', 'learn', 'time'] [ 8 - 0.73333 - 0.29981]: ['time', 'day', 'long', 'times', 'week', 'end', 'give', 'night', 'stop', 'rest'] [ 9 - 0.95 - 0.28707]: ['gun', 'control', 'police', 'crime', 'carry', 'rate', 'weapon', 'defense', 'times', 'firearm'] [ 10 - 0.9 - 0.32923]: ['windows', 'dos', 'os', 'screen', 'software', 'driver', 'mode', 'pc', 'ibm', 'memory'] [ 11 - 0.93333 - 0.27132]: ['price', 'offer', 'sale', 'interested', 'buy', 'sell', 'mail', 'shipping', 'company', 'condition'] ,→ [ 12 - 0.88333 - 0.29119]: ['team', 'hockey', 'season', 'league', 'nhl', 'year', 'game', 'division', 'city', 'pick'] [ 13 - 0.95 - 0.30172]: ['evidence', 'argument', 'true', 'exist', 'truth', 'science', 'existence', 'theory', 'atheism', 'statement'] ,→ [ 14 - 0.9 - 0.30554]: ['israel', 'jews', 'jewish', 'israeli', 'peace', 'arab', 'land', 'state', 'islam', 'human'] [ 15 - 0.75833 - 0.32471]: ['state', 'government', 'law', 'rights', 'bill', 'states', 'federal', 'public', 'court', 'united'] [ 16 - 0.81667 - 0.24906]: ['small', 'large', 'size', 'type', 'area', 'difference', 'free', 'order', 'work', 'set'] [ 17 - 0.93333 - 0.26958]: ['health', 'medical', 'number', 'food', 'disease', 'care', 'pain', 'blood', 'study', 'msg'] [ 18 - 0.85833 - 0.23656]: ['chip', 'encryption', 'clipper', 'government', 'law', 'technology', 'enforcement', 'escrow', 'privacy', 'phone'] ,→ [ 19 - 0.9 - 0.14479]: ['period', 'la', 'power', 'pp', 'win', 'van', 'play', 'ny', 'cal', 'de'] [ 20 - 0.83333 - 0.24515]: ['good', 'pretty', 'nice', 'worth', 'bad', 'level', 'class', 
'quality', 'luck', 'thing'] [ 21 - 1 - 0.22251]: ['car', 'bike', 'engine', 'speed', 'dod', 'road', 'ride', 'front', 'oil', 'dealer'] [ 22 - 0.80833 - 0.19851]: ['file', 'output', 'entry', 'program', 'build', 'section', 'info', 'read', 'int', 'number'] [ 23 - 0.93333 - 0.21397]: ['window', 'server', 'motif', 'application', 'widget', 'display', 'subject', 'mit', 'sun', 'set'] [ 24 - 0.85833 - 0.23623]: ['post', 'article', 'group', 'posting', 'news', 'newsgroup', 'reply', 'read', 'response', 'mail'] [ 25 - 0.9 - 0.26876]: ['image', 'graphics', 'version', 'ftp', 'color', 'format', 'package', 'jpeg', 'gif', 'contact'] [ 26 - 0.76429 - 0.31668]: ['sense', 'make', 'moral', 'choice', 'person', 'personal', 'human', 'means', 'objective', 'understand'] ,→ [ 27 - 0.8 - 0.27119]: ['back', 'side', 'left', 'put', 'head', 'end', 'turn', 'top', 'hand', 'picture'] [ 28 - 0.59762 - 0.28526]: ['people', 'person', 'make', 'live', 'thing', 'talk', 'give', 'stop', 'realize', 'means'] [ 29 - 0.73333 - 0.25073]: ['book', 'word', 'read', 'law', 'reference', 'find', 'matthew', 'text', 'context', 'david'] [ 30 - 0.88333 - 0.2469]: ['water', 'war', 'military', 'time', 'air', 'south', 'plan', 'nuclear', 'force', 'ago'] [ 31 - 0.9 - 0.18688]: ['cs', 'uk', 'ed', 'ac', 'john', 'david', 'ca', 'mark', 'jim', 'tom'] [ 32 - 0.78333 - 0.20401]: ['key', 'bit', 'number', 'public', 'des', 'message', 'algorithm', 'security', 'part', 'block'] [ 33 - 0.66429 - 0.26271]: ['big', 'bad', 'make', 'lot', 'stuff', 'remember', 'back', 'gm', 'guy', 'guess'] [ 34 - 0.9 - 0.25369]: ['home', 'woman', 'wife', 'building', 'left', 'mother', 'door', 'remember', 'family', 'leave'] [ 35 - 0.95 - 0.21317]: ['power', 'ground', 'current', 'wire', 'cable', 'supply', 'circuit', 'hot', 'box', 'run'] [ 36 - 0.9 - 0.30954]: ['system', 'data', 'systems', 'software', 'computer', 'design', 'analysis', 'level', 'digital', 'high'] ,→ [ 37 - 0.95 - 0.22316]: ['university', 'research', 'national', 'information', 'center', 'april', 'california', 'office', 'washington', 'conference'] ,→ [ 38 - 0.875 - 0.33608]: ['armenian', 'turkish', 'armenians', 'people', 'turkey', 'armenia', 'turks', 'greek', 'genocide', 'government'] ,→ [ 39 - 0.83333 - 0.24725]: ['game', 'year', 'play', 'hit', 'baseball', 'goal', 'player', 'average', 'flyers', 'shot'] [ 40 - 1 - 0.22238]: ['black', 'fire', 'light', 'white', 'face', 'fbi', 'red', 'local', 'thought', 'koresh'] [ 41 - 0.93333 - 0.22892]: ['code', 'line', 'source', 'set', 'include', 'simple', 'library', 'language', 'write', 'object'] [ 42 - 0.85 - 0.23873]: ['card', 'video', 'mac', 'bit', 'apple', 'monitor', 'board', 'ram', 'memory', 'modem'] [ 43 - 0.83333 - 0.26739]: ['mail', 'list', 'send', 'information', 'internet', 'email', 'anonymous', 'request', 'ftp', 'address'] ,→ [ 44 - 0.65833 - 0.30522]: ['reason', 'wrong', 'agree', 'point', 'true', 'feel', 'find', 'opinion', 'reading', 'experience'] [ 45 - 0.9 - 0.13391]: ['db', 'call', 'copy', 'al', 'section', 'mov', 'cs', 'place', 'bh', 'dangerous'] [ 46 - 0.9 - 0.21481]: ['world', 'history', 'media', 'germany', 'german', 'europe', 'usa', 'american', 'great', 'part'] [ 47 - 0.85833 - 0.21134]: ['problem', 'work', 'advance', 'fine', 'friend', 'find', 'recently', 'error', 'machine', 'cross'] [ 48 - 1 - 0.37787]: ['space', 'nasa', 'earth', 'launch', 'satellite', 'shuttle', 'orbit', 'moon', 'mission', 'lunar'] [ 49 - 0.83929 - 0.30421]: ['money', 'cost', 'pay', 'support', 'insurance', 'make', 'private', 'million', 'administration', 'government'] ,→ Online LDA: NPMI=0.252, TU=0.788 [ 0 - 
0.78333 - 0.33403]: ['drive', 'disk', 'scsi', 'hard', 'controller', 'ide', 'floppy', 'tape', 'system', 'bus'] [ 1 - 0.9 - 0.25403]: ['jews', 'greek', 'jewish', 'turkish', 'turkey', 'greece', 'turks', 'adam', 'western', 'movement'] [ 2 - 0.86667 - 0.17326]: ['new', 'period', 'york', 'chicago', 'st', 'pp', 'second', 'pittsburgh', 'los', 'power'] [ 3 - 0.68333 - 0.24503]: ['encryption', 'government', 'law', 'technology', 'enforcement', 'privacy', 'security', 'new', 'clipper', 'escrow'] ,→ [ 4 - 0.76429 - 0.23766]: ['widget', 'application', 'window', 'use', 'display', 'set', 'server', 'xt', 'motif', 'resource'] [ 5 - 0.95 - 0.23512]: ['good', 'article', 'book', 'read', 'ago', 'paper', 'reading', 'reference', 'excellent', 'bob'] [ 6 - 0.825 - 0.24322]: ['card', 'video', 'monitor', 'bit', 'screen', 'port', 'mode', 'vga', 'color', 'bus'] [ 7 - 0.9 - 0.29799]: ['available', 'ftp', 'graphics', 'software', 'data', 'information', 'also', 'version', 'contact', 'package'] ,→ [ 8 - 0.62708 - 0.31925]: ['one', 'people', 'think', 'true', 'may', 'question', 'say', 'point', 'evidence', 'even'] [ 9 - 0.82292 - 0.18638]: ['bike', 'dod', 'pain', 'day', 'one', 'side', 'back', 'ride', 'like', 'first'] [ 10 - 1 - 0.37787]: ['space', 'nasa', 'launch', 'earth', 'satellite', 'orbit', 'shuttle', 'moon', 'lunar', 'mission'] [ 11 - 1 - 0.1984]: ['line', 'radio', 'tv', 'mark', 'audio', 'try', 'end', 'two', 'edge', 'center'] [ 12 - 0.80625 - 0.24895]: ['power', 'board', 'memory', 'supply', 'ram', 'case', 'battery', 'motherboard', 'one', 'pin'] [ 13 - 0.5625 - 0.26666]: ['people', 'right', 'government', 'rights', 'state', 'well', 'society', 'system', 'law', 'militia'] [ 14 - 0.36042 - 0.3857]: ['like', 'people', 'think', 'get', 'know', 'one', 'really', 'want', 'say', 'something'] [ 15 - 0.9 - 0.35912]: ['god', 'religion', 'believe', 'atheism', 'christian', 'religious', 'exist', 'belief', 'islam', 'existence'] ,→ [ 16 - 0.71429 - 0.26711]: ['image', 'color', 'jpeg', 'gif', 'file', 'format', 'quality', 'use', 'bit', 'convert'] [ 17 - 0.91667 - 0.22672]: ['black', 'man', 'cover', 'white', 'art', 'frank', 'red', 'jim', 'new', 'green'] [ 18 - 0.8 - 0.2531]: ['thanks', 'please', 'anyone', 'know', 'help', 'mail', 'like', 'advance', 'post', 'need'] [ 19 - 0.57054 - 0.20668]: ['chip', 'number', 'phone', 'clipper', 'use', 'serial', 'company', 'one', 'get', 'want'] 6359 [ 20 - 0.8 - 0.24168]: ['university', 'program', 'research', 'national', 'conference', 'science', 'new', 'april', 'organization', 'billion'] ,→ [ 21 - 0.85 - 0.11704]: ['year', 'last', 'win', 'la', 'cal', 'min', 'det', 'van', 'mon', 'tor'] [ 22 - 0.775 - 0.22759]: ['game', 'goal', 'scsi', 'play', 'shot', 'puck', 'flyers', 'net', 'penalty', 'bit'] [ 23 - 0.65208 - 0.39624]: ['god', 'jesus', 'one', 'church', 'bible', 'christ', 'christian', 'us', 'faith', 'people'] [ 24 - 0.60625 - 0.30005]: ['money', 'buy', 'one', 'price', 'pay', 'insurance', 'cost', 'get', 'like', 'new'] [ 25 - 0.95 - 0.1792]: ['ca', 'uk', 'cs', 'david', 'de', 'michael', 'ac', 'tom', 'john', 'andrew'] [ 26 - 0.81667 - 0.19665]: ['sale', 'price', 'offer', 'new', 'shipping', 'condition', 'dos', 'cd', 'sell', 'interested'] [ 27 - 0.9 - 0.24906]: ['sound', 'mike', 'record', 'oh', 'night', 'okay', 're', 'last', 'eric', 'sorry'] [ 28 - 0.51458 - 0.34604]: ['much', 'time', 'one', 'like', 'good', 'better', 'think', 'get', 'well', 'really'] [ 29 - 0.85625 - 0.13969]: ['db', 'al', 'cs', 'mov', 'bh', 'channel', 'byte', 'pop', 'push', 'one'] [ 30 - 0.76875 - 0.34827]: ['armenian', 'armenians', 'turkish', 
'people', 'genocide', 'armenia', 'one', 'russian', 'soviet', 'azerbaijan'] ,→ [ 31 - 0.9 - 0.25769]: ['list', 'internet', 'mail', 'address', 'news', 'email', 'send', 'posting', 'anonymous', 'information'] ,→ [ 32 - 0.67262 - 0.28665]: ['windows', 'dos', 'software', 'use', 'system', 'mac', 'problem', 'pc', 'file', 'driver'] [ 33 - 0.85833 - 0.27275]: ['gun', 'file', 'crime', 'bill', 'law', 'control', 'police', 'weapon', 'states', 'firearm'] [ 34 - 0.74762 - 0.24866]: ['study', 'health', 'number', 'rate', 'use', 'april', 'among', 'report', 'page', 'risk'] [ 35 - 0.76667 - 0.21434]: ['window', 'sun', 'keyboard', 'server', 'mouse', 'motif', 'xterm', 'font', 'mit', 'get'] [ 36 - 0.62292 - 0.26581]: ['car', 'engine', 'speed', 'front', 'oil', 'one', 'may', 'get', 'like', 'right'] [ 37 - 0.95 - 0.1132]: ['vs', 'gm', 'la', 'pt', 'pm', 'ma', 'mg', 'md', 'tm', 'mi'] [ 38 - 0.9 - 0.28248]: ['israel', 'israeli', 'arab', 'san', 'land', 'arabs', 'francisco', 'palestinian', 'state', 'jews'] [ 39 - 0.9 - 0.27416]: ['medical', 'disease', 'public', 'soon', 'cancer', 'trial', 'treatment', 'health', 'gordon', 'medicine'] ,→ [ 40 - 0.81875 - 0.2017]: ['fire', 'fbi', 'koresh', 'gas', 'dog', 'batf', 'compound', 'one', 'people', 'story'] [ 41 - 0.73125 - 0.2485]: ['key', 'des', 'public', 'algorithm', 'bit', 'nsa', 'encryption', 'one', 'rsa', 'ripem'] [ 42 - 0.85 - 0.36136]: ['team', 'game', 'season', 'hockey', 'league', 'year', 'play', 'nhl', 'player', 'baseball'] [ 43 - 0.66429 - 0.19931]: ['entry', 'section', 'must', 'use', 'cross', 'program', 'info', 'number', 'source', 'may'] [ 44 - 0.77917 - 0.24336]: ['us', 'war', 'country', 'government', 'military', 'american', 'people', 'world', 'nuclear', 'america'] ,→ [ 45 - 0.9 - 0.16519]: ['master', 'feature', 'slave', 'pin', 'systems', 'tank', 'model', 'jumper', 'drive', 'japanese'] [ 46 - 0.56042 - 0.22238]: ['mr', 'people', 'know', 'president', 're', 'us', 'one', 'stephanopoulos', 'think', 'go'] [ 47 - 0.82054 - 0.21982]: ['ground', 'wire', 'hot', 'circuit', 'use', 'one', 'wiring', 'neutral', 'cable', 'current'] [ 48 - 0.80833 - 0.24895]: ['output', 'file', 'program', 'int', 'printf', 'char', 'entry', 'input', 'oname', 'stream'] [ 49 - 0.90625 - 0.19526]: ['code', 'media', 'call', 'one', 'object', 'stuff', 'date', 'btw', 'way', 'deal'] ProdLDA : NPMI=0.268, TU=0.59 [ 0 - 0.58333 - 0.21393]: ['int', 'char', 'oname', 'buf', 'printf', 'output', 'null', 'entry', 'file', 'stream'] [ 1 - 0.7 - 0.19171]: ['stephanopoulos', 'administration', 'president', 'senior', 'sector', 'congress', 'mr', 'russian', 'package', 'russia'] ,→ [ 2 - 0.43333 - 0.095146]: ['tor', 'det', 'que', 'pit', 'nj', 'min', 'la', 'buf', 'van', 'cal'] [ 3 - 0.65 - 0.18382]: ['bike', 'brake', 'gear', 'gateway', 'rider', 'manual', 'quadra', 'filter', 'mhz', 'motherboard'] [ 4 - 0.345 - 0.46605]: ['interface', 'rom', 'controller', 'disk', 'ram', 'floppy', 'motherboard', 'mb', 'slot', 'scsi'] [ 5 - 0.70833 - 0.40336]: ['israel', 'israeli', 'arab', 'arabs', 'islamic', 'lebanon', 'lebanese', 'palestinian', 'jew', 'murder'] ,→ [ 6 - 0.56667 - 0.32953]: ['privacy', 'escrow', 'encryption', 'security', 'wiretap', 'enforcement', 'secure', 'encrypt', 'anonymous', 'ripem'] ,→ [ 7 - 0.43333 - 0.3356]: ['jesus', 'passage', 'matthew', 'doctrine', 'scripture', 'holy', 'prophet', 'church', 'prophecy', 'pope'] ,→ [ 8 - 0.55 - 0.273]: ['export', 'ftp', 'mit', 'xt', 'widget', 'server', 'unix', 'directory', 'vendor', 'font'] [ 9 - 0.425 - 0.36579]: ['jesus', 'faith', 'passage', 'god', 'doctrine', 'belief', 'christ', 
'existence', 'church', 'biblical'] ,→ [ 10 - 0.60833 - 0.12043]: ['app', 'professor', 'rider', 'annual', 'league', 'genocide', 'francisco', 'armenian', 'art', 'arab'] ,→ [ 11 - 0.65 - 0.20985]: ['stephanopoulos', 'mr', 'president', 'senate', 'consideration', 'meeting', 'myers', 'promise', 'decision', 'package'] ,→ [ 12 - 0.71667 - 0.29307]: ['xt', 'image', 'xlib', 'amiga', 'toolkit', 'processing', 'resource', 'jpeg', 'workstation', 'server'] ,→ [ 13 - 0.8 - 0.31247]: ['anonymous', 'privacy', 'cryptography', 'rsa', 'cipher', 'electronic', 'ftp', 'ripem', 'internet', 'pgp'] ,→ [ 14 - 0.56667 - 0.16196]: ['stephanopoulos', 'president', 'clipper', 'scheme', 'mr', 'escrow', 'myers', 'restriction', 'nsa', 'wiretap'] ,→ [ 15 - 0.395 - 0.40117]: ['armenians', 'turkish', 'armenian', 'turks', 'armenia', 'genocide', 'massacre', 'muslim', 'turkey', 'jews'] ,→ [ 16 - 0.5 - 0.29259]: ['holy', 'jesus', 'son', 'father', 'lord', 'spirit', 'matthew', 'prophecy', 'satan', 'prophet'] [ 17 - 0.95 - 0.16966]: ['health', 'hus', 'among', 'child', 'culture', 'md', 'volume', 'laboratory', 'age', 'safety'] [ 18 - 0.31667 - 0.34482]: ['jesus', 'god', 'matthew', 'passage', 'prophecy', 'christ', 'holy', 'faith', 'lord', 'prophet'] [ 19 - 0.85 - 0.11794]: ['db', 'byte', 'mov', 'bh', 'cs', 'ax', 'pop', 'push', 'west', 'ah'] [ 20 - 0.45 - 0.092708]: ['tor', 'det', 'que', 'pit', 'van', 'nj', 'cal', 'la', 'gm', 'min'] [ 21 - 0.83333 - 0.3036]: ['conclude', 'universe', 'existence', 'atheism', 'atheist', 'religious', 'belief', 'conclusion', 'evidence', 'truth'] ,→ [ 22 - 0.4 - 0.35819]: ['hitter', 'season', 'defensive', 'puck', 'braves', 'baseball', 'playoff', 'league', 'coach', 'team'] [ 23 - 0.63333 - 0.32329]: ['windows', 'colormap', 'window', 'microsoft', 'application', 'menu', 'dos', 'screen', 'widget', 'default'] ,→ [ 24 - 0.37833 - 0.34674]: ['scsi', 'motherboard', 'ide', 'quadra', 'ram', 'vga', 'meg', 'mhz', 'adapter', 'isa'] [ 25 - 0.53333 - 0.30136]: ['hitter', 'coach', 'offense', 'career', 'team', 'season', 'baseball', 'pitcher', 'dog', 'defensive'] ,→ [ 26 - 0.56667 - 0.16056]: ['detroit', 'winnipeg', 'det', 'playoff', 'calgary', 'tor', 'vancouver', 'pp', 'rangers', 'gm'] [ 27 - 0.69167 - 0.37335]: ['god', 'belief', 'faith', 'truth', 'reject', 'absolute', 'bible', 'christianity', 'christian', 'revelation'] ,→ [ 28 - 0.44167 - 0.24637]: ['turkish', 'jews', 'greece', 'greek', 'muslims', 'jewish', 'matthew', 'lebanese', 'pope', 'christ'] ,→ [ 29 - 0.63333 - 0.16721]: ['wiring', 'wire', 'oname', 'buf', 'entry', 'char', 'outlet', 'int', 'output', 'printf'] [ 30 - 0.575 - 0.33351]: ['rom', 'disk', 'controller', 'floppy', 'feature', 'interface', 'connector', 'slot', 'mb', 'jumper'] [ 31 - 0.67 - 0.21175]: ['armenians', 'apartment', 'woman', 'neighbor', 'troops', 'secretary', 'armenian', 'girl', 'armenia', 'afraid'] ,→ [ 32 - 0.42 - 0.35202]: ['greek', 'turks', 'armenian', 'greece', 'minority', 'armenians', 'muslim', 'muslims', 'genocide', 'lebanese'] ,→ [ 33 - 0.5 - 0.30162]: ['puck', 'flyers', 'season', 'score', 'hitter', 'braves', 'coach', 'team', 'nhl', 'career'] [ 34 - 0.545 - 0.21961]: ['ide', 'scsi', 'meg', 'bus', 'isa', 'dos', 'hd', 'controller', 'adapter', 'slave'] [ 35 - 0.68333 - 0.29779]: ['os', 'server', 'pixel', 'vendor', 'image', 'processing', 'documentation', 'xterm', 'unix', 'mit'] ,→ [ 36 - 0.8 - 0.2106]: ['file', 'gun', 'united', 'congress', 'handgun', 'journal', 'prohibit', 'february', 'firearm', 'senate'] ,→ [ 37 - 0.68333 - 0.3552]: ['winnipeg', 'calgary', 'montreal', 'detroit', 'rangers', 'nhl', 
'hockey', 'leafs', 'louis', 'minnesota']
[ 38 - 0.63333 - 0.27956]: ['heaven', 'god', 'eternal', 'braves', 'christ', 'christianity', 'pray', 'sin', 'dog', 'satan']
[ 39 - 0.7 - 0.3484]: ['satellite', 'mission', 'space', 'nasa', 'shuttle', 'lunar', 'spacecraft', 'launch', 'international', 'earth']
[ 40 - 0.395 - 0.22319]: ['hockey', 'nhl', 'league', 'armenian', 'massacre', 'turkish', 'draft', 'armenians', 'genocide', 'turks']
[ 41 - 0.7 - 0.1999]: ['motherboard', 'amp', 'hd', 'brake', 'mhz', 'monitor', 'tire', 'upgrade', 'bike', 'compatible']
[ 42 - 0.56667 - 0.25204]: ['widget', 'visual', 'resource', 'xt', 'application', 'colormap', 'app', 'export', 'default', 'converter']
[ 43 - 0.7 - 0.28056]: ['earth', 'space', 'shuttle', 'mission', 'orbit', 'km', 'nasa', 'sky', 'lunar', 'foundation']
[ 44 - 0.60333 - 0.34564]: ['mhz', 'scsi', 'modem', 'ram', 'vga', 'processor', 'cache', 'port', 'screen', 'printer']
[ 45 - 0.66667 - 0.19954]: ['encryption', 'key', 'escrow', 'clipper', 'algorithm', 'enforcement', 'des', 'secure', 'wiretap', 'session']
[ 46 - 0.73333 - 0.086758]: ['mw', 'db', 'wm', 'na', 'rg', 'van', 'md', 'mov', 'sl', 'bh']
[ 47 - 0.40333 - 0.40371]: ['scsi', 'controller', 'mb', 'cache', 'disk', 'card', 'windows', 'floppy', 'vga', 'ram']
[ 48 - 0.57 - 0.21702]: ['armenians', 'father', 'armenian', 'apartment', 'armenia', 'february', 'azerbaijan', 'woman', 'soviet', 'investigation']
[ 49 - 0.64167 - 0.31175]: ['militia', 'sentence', 'jews', 'constitution', 'arab', 'israeli', 'lebanese', 'arabs', 'israel', 'nazi']
NTM-R: NPMI=0.24, TU=0.624
[0-0.78333-0.22157]: ['marriage', 'exist', 'evidence', 'surely', 'sick', 'perhaps', 'appear', 'air', 'serious', 'raise']
[1-0.465-0.16851]: ['monitor', 'jesus', 'surrender', 'lot', 'dave', 'drive', 'put', 'disk', 'love', 'soon']
[2-0.33167-0.37993]: ['ide', 'controller', 'vga', 'card', 'floppy', 'adapter', 'hd', 'scsi', 'mb', 'video']
[3-0.73667-0.23189]: ['lebanon', 'surrender', 'evidence', 'reaction', 'islamic', 'death', 'soon', 'government', 'happen', 'effect']
[4-0.83333-0.39082]: ['armenian', 'armenians', 'turks', 'armenia', 'turkish', 'genocide', 'turkey', 'israel', 'arab', 'israeli']
[5-0.79-0.20377]: ['mask', 'punishment', 'surrender', 'try', 'religious', 'guess', 'patient', 'always', 'islam', 'bible']
[6-0.64167-0.24654]: ['year', 'consider', 'certain', 'besides', 'day', 'blame', 'pretty', 'evidence', 'damage', 'go']
[7-0.28667-0.33766]: ['ide', 'scsi', 'drive', 'disk', 'controller', 'floppy', 'isa', 'card', 'bus', 'ram']
[8-0.83333-0.3028]: ['hockey', 'toronto', 'cal', 'coach', 'game', 'league', 'winnipeg', 'rangers', 'detroit', 'playoff']
[9-0.54167-0.25723]: ['fan', 'season', 'team', 'toronto', 'game', 'last', 'year', 'braves', 'hit', 'miss']
[10-0.80833-0.18641]: ['insurance', 'false', 'difficult', 'find', 'clipper', 'relatively', 'regard', 'chip', 'etc', 'damn']
[11-0.395-0.24484]: ['please', 'sale', 'email', 'version', 'mail', 'modem', 'thanks', 'mailing', 'macintosh', 'ftp']
[12-0.88333-0.19338]: ['weapon', 'federal', 'military', 'warrant', 'population', 'government', 'judge', 'worry', 'attitude', 'ago']
[13-0.55278-0.22393]: ['interested', 'advance', 'os', 'dos', 'thanks', 'box', 'apple', 'windows', 'monitor', 'file']
[14-0.75833-0.18371]: ['round', 'year', 'go', 'else', 'money', 'digital', 'air', 'lot', 'wait', 'clinton']
[15-0.62833-0.18964]: ['mail', 'ftp', 'sale', 'workstation', 'email', 'eric', 'via', 'project', 'thanks', 'test']
[16-0.775-0.17498]: ['san', 'nasa', 'clipper',
'administration', 'americans', 'houston', 'gun', 'gm', 'closer', 'president'] [17-0.60333-0.24235]: ['realize', 'arab', 'israeli', 'jews', 'religious', 'surrender', 'shall', 'raise', 'atheism', 'carry'] [18-0.68333-0.29585]: ['hitter', 'hit', 'baseball', 'coach', 'team', 'flyers', 'staff', 'braves', 'season', 'player'] [19-0.55333-0.24486]: ['motif', 'image', 'mode', 'thanks', 'appreciate', 'pc', 'widget', 'vga', 'available', 'graphics'] [20-0.41667-0.24645]: ['cable', 'disk', 'ram', 'thanks', 'board', 'mb', 'modem', 'video', 'sale', 'adapter'] [21-0.7-0.26178]: ['widget', 'input', 'key', 'toolkit', 'chip', 'window', 'menu', 'error', 'default', 'int'] [22-0.565-0.38053]: ['god', 'christian', 'heaven', 'faith', 'christianity', 'jesus', 'hell', 'sin', 'interpretation', 'bible'] [23-0.44278-0.18352]: ['appreciate', 'thanks', 'card', 'windows', 'post', 'luck', 'vga', 'anybody', 'advance', 'thank'] [24-0.71667-0.19024]: ['window', 'toolkit', 'server', 'key', 'motif', 'pgp', 'mit', 'session', 'utility', 'stream'] [25-0.76167-0.23294]: ['properly', 'catholic', 'thanks', 'bible', 'sex', 'easy', 'moral', 'religion', 'mine', 'appropriate'] [26-0.83333-0.18606]: ['design', 'doctor', 'car', 'alive', 'imagine', 'brain', 'go', 'suppose', 'something', 'student'] [27-0.78333-0.22233]: ['israel', 'kill', 'jews', 'arab', 'woman', 'americans', 'responsible', 'nothing', 'civil', 'gordon'] [28-0.44778-0.27568]: ['windows', 'modem', 'server', 'version', 'vga', 'appreciate', 'client', 'binary', 'file', 'mouse'] [29-0.69167-0.27641]: ['key', 'escrow', 'encryption', 'clipper', 'chip', 'secure', 'enforcement', 'privacy', 'crypto', 'algorithm'] ,→ [30-0.5-0.18861]: ['hit', 'year', 'last', 'baseball', 'pick', 'love', 'address', 'ago', 'thanks', 'anyone'] [31-0.38167-0.4242]: ['jesus', 'god', 'christ', 'belief', 'faith', 'christian', 'bible', 'scripture', 'sin', 'church'] [32-0.50278-0.19305]: ['windows', 'client', 'font', 'advance', 'info', 'thanks', 'graphics', 'color', 'appreciate', 'anybody'] [33-0.57333-0.17343]: ['driver', 'file', 'help', 'anybody', 'anyone', 'hello', 'ftp', 'cool', 'jesus', 'set'] [34-0.88333-0.18867]: ['win', 'chicago', 'game', 'average', 'tie', 'car', 'bike', 'yeah', 'nice', 'hot'] [35-0.575-0.32561]: ['serious', 'christ', 'mary', 'eternal', 'god', 'faith', 'truth', 'freedom', 'scripture', 'man'] [36-0.57778-0.22004]: ['reply', 'windows', 'driver', 'version', 'file', 'thanks', 'find', 'ask', 'legal', 'switch'] [37-0.34778-0.32627]: ['controller', 'scsi', 'ide', 'bus', 'motherboard', 'port', 'mb', 'windows', 'isa', 'card'] [38-0.45333-0.24598]: ['cable', 'drive', 'rom', 'ftp', 'printer', 'pc', 'scsi', 'cd', 'disk', 'thanks'] [39-0.75833-0.22125]: ['proposal', 'encryption', 'clipper', 'secure', 'fairly', 'expensive', 'far', 'government', 'enough', 'traffic'] ,→ [40-0.95-0.10837]: ['det', 'van', 'pit', 'tor', 'period', 'min', 'pp', 'gm', 'que', 'ny'] [41-0.50333-0.33905]: ['satan', 'christian', 'jesus', 'scripture', 'moral', 'eternal', 'objective', 'truth', 'christ', 'belief'] ,→ [42-0.54167-0.15676]: ['bh', 'hd', 'rg', 'bus', 'ide', 'isa', 'db', 'md', 'floppy', 'drive'] [43-0.26778-0.21658]: ['printer', 'vga', 'card', 'anybody', 'windows', 'monitor', 'sale', 'controller', 'isa', 'port'] [44-0.75-0.1717]: ['motorola', 'db', 'ac', 'contact', 'toolkit', 'sale', 'xt', 'clock', 'macintosh', 'hr'] [45-0.61167-0.2934]: ['morality', 'moral', 'atheism', 'cause', 'bible', 'person', 'god', 'accurate', 'sin', 'disease'] [46-0.88333-0.25513]: ['stuff', 'ahead', 'fall', 'disease', 'food', 'thing', 'know', 
'actually', 'anyone', 'expect']
[47-0.8-0.24088]: ['gun', 'trust', 'gang', 'something', 'blame', 'child', 'reading', 'avoid', 'abuse', 'pretty']
[48-0.68111-0.15852]: ['dos', 'hear', 'bob', 'package', 'anyway', 'windows', 'david', 'consider', 'surrender', 'site']
[49-0.41444-0.21188]: ['ftp', 'site', 'sale', 'monitor', 'windows', 'thanks', 'email', 'please', 'gif', 'newsgroup']
W-LDA: NPMI=0.252, TU=0.856
[0-0.9-0.31117]: ['leafs', 'stanley', 'coach', 'nhl', 'hockey', 'team', 'wings', 'roger', 'cup', 'rangers']
[1-0.9-0.21338]: ['char', 'entry', 'widget', 'toolkit', 'int', 'oname', 'printf', 'contest', 'xlib', 'mit']
[2-0.9-0.2541]: ['xterm', 'window', 'colormap', 'expose', 'widget', 'client', 'xlib', 'null', 'button', 'server']
[3-0.9-0.21862]: ['amp', 'wave', 'voltage', 'audio', 'electronics', 'circuit', 'heat', 'cycle', 'bell', 'noise']
[4-0.9-0.19192]: ['plane', 'voltage', 'motif', 'edge', 'instruction', 'tube', 'algorithm', 'input', 'draw', 'surface']
[5-0.69-0.30885]: ['sorry', 'guess', 'like', 'get', 'anyone', 'think', 'know', 'someone', 'thanks', 'one']
[6-0.65-0.2681]: ['dos', 'driver', 'printer', 'card', 'windows', 'video', 'microsoft', 'isa', 'mode', 'pc']
[7-0.81667-0.20558]: ['helmet', 'bike', 'ride', 'detector', 'rider', 'motorcycle', 'road', 'radar', 'eye', 'cop']
[8-0.85-0.18271]: ['apartment', 'armenians', 'azerbaijan', 'neighbor', 'armenian', 'floor', 'afraid', 'secretary', 'building', 'woman']
[9-0.86667-0.28224]: ['orbit', 'earth', 'theory', 'mass', 'star', 'universe', 'space', 'moon', 'physical', 'material']
[10-0.9-0.30376]: ['scsi', 'ide', 'controller', 'bus', 'isa', 'jumper', 'drive', 'mhz', 'mb', 'disk']
[11-0.83333-0.20827]: ['neutral', 'outlet', 'wire', 'wiring', 'ground', 'electrical', 'panel', 'circuit', 'lunar', 'orbit']
[12-0.9-0.21013]: ['drive', 'floppy', 'meg', 'cd', 'motherboard', 'hd', 'external', 'boot', 'supply', 'brand']
[13-0.8-0.22575]: ['oh', 'yeah', 'guess', 'sick', 'hey', 'employer', 'sorry', 'disclaimer', 'wonder', 'excuse']
[14-0.82-0.181]: ['advance', 'gif', 'convert', 'format', 'graphic', 'graphics', 'thanks', 'ftp', 'site', 'anybody']
[15-0.70333-0.17932]: ['ford', 'curious', 'anyone', 'manual', 'recall', 'band', 'ago', 'paint', 'car', 'stuff']
[16-0.85-0.33088]: ['jesus', 'god', 'christ', 'matthew', 'spirit', 'lord', 'holy', 'passage', 'heaven', 'eternal']
[17-0.64-0.20301]: ['thanks', 'anybody', 'hello', 'appreciate', 'excuse', 'thread', 'friend', 'anyone', 'adams', 'mirror']
[18-0.9-0.30013]: ['homosexual', 'sexual', 'punishment', 'gay', 'sex', 'murder', 'commit', 'islamic', 'male', 'penalty']
[19-0.85-0.23501]: ['resurrection', 'hell', 'kent', 'eternal', 'evidence', 'body', 'heaven', 'koresh', 'claim', 'death']
[20-0.9-0.26142]: ['dog', 'ball', 'hitter', 'hr', 'pitcher', 'hit', 'braves', 'hall', 'ryan', 'ab']
[21-0.75333-0.13538]: ['uucp', 'curious', 'anyone', 'al', 'dave', 'compare', 'hear', 'someone', 'office', 'mine']
[22-0.85-0.21743]: ['doctor', 'pain', 'koresh', 'compound', 'fbi', 'tear', 'batf', 'fire', 'gas', 'treatment']
[23-0.95-0.22785]: ['sale', 'shipping', 'condition', 'offer', 'excellent', 'pair', 'sell', 'manual', 'inch', 'price']
[24-0.85-0.30941]: ['hitter', 'puck', 'defensive', 'season', 'offense', 'score', 'braves', 'game', 'team', 'career']
[25-0.72333-0.14718]: ['connector', 'curious', 'newsgroup', 'help', 'pin', 'anyone', 'soul', 'greatly', 'hello', 'thanks']
[26-1-0.38384]: ['israel', 'israeli', 'arabs', 'arab', 'lebanon', 'lebanese', 'civilian', 'peace', 'palestinian', 'war']
[27-0.76667-0.41534]: ['mission', 'satellite', 'shuttle', 'lunar', 'nasa', 'space', 'spacecraft', 'launch', 'orbit', 'solar']
[28-0.95-0.23404]: ['msg', 'morality', 'objective', 'moral', 'food', 'science', 'absolute', 'existence', 'scientific', 'definition']
[29-0.85-0.29938]: ['monitor', 'apple', 'vga', 'quadra', 'video', 'card', 'motherboard', 'mac', 'simm', 'cache']
[30-0.95-0.31918]: ['church', 'catholic', 'pope', 'doctrine', 'worship', 'authority', 'scripture', 'christ', 'lewis', 'tradition']
[31-1-0.087238]: ['mw', 'tor', 'det', 'que', 'ax', 'pit', 'rg', 'van', 'min', 'wm']
[32-0.85-0.18213]: ['tony', 'yeah', 'honda', 'student', 'watch', 'hear', 'listen', 'david', 'liberal', 'ticket']
[33-0.9-0.35688]: ['christianity', 'christian', 'bible', 'religion', 'faith', 'gay', 'belief', 'homosexual', 'islam', 'truth']
[34-0.95-0.23899]: ['keyboard', 'anonymous', 'usenet', 'privacy', 'internet', 'mailing', 'request', 'injury', 'posting', 'user']
[35-0.7-0.36042]: ['medicine', 'disease', 'drug', 'patient', 'medical', 'treatment', 'study', 'health', 'doctor', 'scientific']
[36-0.85-0.39602]: ['turkish', 'turks', 'armenian', 'genocide', 'armenians', 'armenia', 'greece', 'turkey', 'azerbaijan', 'greek']
[37-0.95-0.20527]: ['mouse', 'modem', 'printer', 'port', 'serial', 'print', 'hp', 'postscript', 'connect', 'resolution']
[38-0.85-0.22659]: ['surrender', 'gordon', 'soon', 'patient', 'eat', 'brain', 'girl', 'medicine', 'disease', 'treat']
[39-0.77-0.2276]: ['mail', 'please', 'address', 'mailing', 'advance', 'thanks', 'email', 'interested', 'appreciate', 'thank']
[40-1-0.24035]: ['stephanopoulos', 'president', 'mr', 'george', 'senate', 'myers', 'bush', 'meeting', 'consideration', 'clinton']
[41-0.85-0.25791]: ['swap', 'windows', 'nt', 'gateway', 'dos', 'memory', 'screen', 'menu', 'ram', 'microsoft']
[42-0.88333-0.15345]: ['apr', 'tom', 'frank', 'nasa', 'article', 'gmt', 'trial', 'space', 'university', 'id']
[43-0.95-0.28355]: ['escrow', 'encryption', 'clipper', 'key', 'wiretap', 'encrypt', 'des', 'nsa', 'rsa', 'algorithm']
[44-0.8-0.30457]: ['car', 'brake', 'tire', 'ford', 'engine', 'oil', 'saturn', 'dealer', 'transmission', 'fuel']
[45-0.71667-0.27262]: ['bike', 'bmw', 'battery', 'honda', 'rear', 'tank', 'ride', 'seat', 'sport', 'engine']
[46-1-0.23824]: ['handgun', 'homicide', 'gun', 'firearm', 'insurance', 'crime', 'ban', 'billion', 'seattle', 'fund']
[47-0.95-0.28678]: ['winnipeg', 'calgary', 'montreal', 'louis', 'philadelphia', 'rangers', 'minnesota', 'pittsburgh', 'ottawa', 'detroit']
[48-1-0.29992]: ['militia', 'amendment', 'constitution', 'bear', 'court', 'libertarian', 'federal', 'violate', 'rights', 'shall']
[49-0.71667-0.20957]: ['motorcycle', 'dod', 'bmw', 'ride', 'bike', 'truck', 'tire', 'lock', 'shop', 'module']
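Each bracketed entry in these listings gives the topic index followed by that topic's uniqueness (TU) and its NPMI coherence over the top-10 words; the score in each model header is the average over all 50 topics. As a reading aid, the short sketch below shows how the per-topic TU values can be recomputed from the printed word lists alone, assuming the usual definition (the mean, over a topic's top words, of the inverse number of top-word lists containing that word). NPMI additionally requires co-occurrence statistics from a reference corpus, so it is not reproduced here; the function and example are illustrative only and are not taken from the released code.

from collections import Counter

def topic_uniqueness(topics):
    """Per-topic TU for a list of top-L word lists (one list per topic)."""
    # Number of top-word lists each word appears in (words are unique within a topic).
    counts = Counter(word for topic in topics for word in topic)
    return [sum(1.0 / counts[word] for word in topic) / len(topic) for topic in topics]

# Toy usage on two of the W-LDA topics listed above:
topics = [
    ['leafs', 'stanley', 'coach', 'nhl', 'hockey', 'team', 'wings', 'roger', 'cup', 'rangers'],
    ['char', 'entry', 'widget', 'toolkit', 'int', 'oname', 'printf', 'contest', 'xlib', 'mit'],
]
per_topic_tu = topic_uniqueness(topics)
average_tu = sum(per_topic_tu) / len(per_topic_tu)  # the headers report this average over all 50 topics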
12.2 Topic words on NYTimes:
LDA Collapsed Gibbs sampling: NPMI=0.30, TU=0.808
[ 0 - 0.78333 - 0.20576]: ['cup', 'food', 'minutes', 'add', 'oil', 'tablespoon', 'wine', 'sugar', 'water', 'fat']
[ 1 - 0.80333 - 0.2849]: ['race', 'won', 'team', 'zzz_olympic', 'sport', 'track', 'gold', 'win', 'racing', 'medal']
[ 2 - 0.55667 - 0.37877]: ['team', 'yard', 'game', 'season', 'play', 'player', 'quarterback', 'football', 'zzz_nfl', 'coach']
[ 3 - 1 - 0.34611]: ['car', 'driver', 'truck', 'road', 'drive', 'seat', 'driving', 'vehicle', 'vehicles', 'wheel']
[ 4 - 0.67833 - 0.36089]: ['company', 'business', 'sales', 'product', 'customer', 'million', 'market', 'companies', 'consumer', 'industry']
[ 5 - 0.80833 - 0.31807]: ['meeting', 'question', 'asked', 'told', 'official', 'decision',
'interview', 'talk', 'reporter', 'comment'] ,→ [ 6 - 0.875 - 0.27026]: ['art', 'century', 'history', 'french', 'artist', 'painting', 'museum', 'show', 'collection', 'zzz_london'] ,→ [ 7 - 0.87 - 0.21794]: ['zzz_new_york', 'building', 'resident', 'area', 'million', 'mayor', 'project', 'zzz_los_angeles', 'local', 'center'] ,→ [ 8 - 0.78333 - 0.19082]: ['daily', 'question', 'american', 'newspaper', 'beach', 'palm', 'statesman', 'information', 'today', 'zzz_washington'] ,→ [ 9 - 0.9 - 0.35134]: ['family', 'father', 'home', 'son', 'friend', 'wife', 'mother', 'daughter', 'brother', 'husband'] [ 10 - 0.875 - 0.29023]: ['hair', 'fashion', 'wear', 'designer', 'shirt', 'show', 'wearing', 'black', 'red', 'suit'] [ 11 - 0.7 - 0.27419]: ['government', 'zzz_china', 'zzz_united_states', 'country', 'countries', 'foreign', 'political', 'european', 'leader', 'chinese'] ,→ [ 12 - 0.83333 - 0.29267]: ['sense', 'fact', 'zzz_america', 'power', 'perhap', 'history', 'question', 'view', 'moment', 'real'] ,→ [ 13 - 0.88333 - 0.25602]: ['water', 'fish', 'weather', 'boat', 'bird', 'wind', 'miles', 'storm', 'air', 'light'] [ 14 - 0.85833 - 0.30065]: ['show', 'television', 'network', 'series', 'zzz_nbc', 'viewer', 'media', 'broadcast', 'station', 'night'] ,→ [ 15 - 0.9 - 0.41115]: ['palestinian', 'zzz_israel', 'peace', 'zzz_israeli', 'israeli', 'zzz_yasser_arafat', 'leader', 'israelis', 'violence', 'attack'] ,→ [ 16 - 0.75333 - 0.27807]: ['power', 'energy', 'oil', 'plant', 'gas', 'zzz_california', 'prices', 'million', 'water', 'environmental'] ,→ [ 17 - 0.95 - 0.26389]: ['fight', 'hand', 'left', 'pound', 'body', 'weight', 'head', 'arm', 'hard', 'face'] [ 18 - 1 - 0.35861]: ['drug', 'patient', 'doctor', 'medical', 'cell', 'cancer', 'hospital', 'health', 'treatment', 'care'] [ 19 - 0.925 - 0.30406]: ['religious', 'church', 'zzz_god', 'gay', 'group', 'jewish', 'priest', 'faith', 'religion', 'jew'] [ 20 - 0.62333 - 0.3712]: ['run', 'season', 'hit', 'team', 'game', 'inning', 'baseball', 'yankees', 'player', 'games'] [ 21 - 0.62833 - 0.32998]: ['company', 'million', 'companies', 'firm', 'deal', 'zzz_enron', 'stock', 'business', 'billion', 'financial'] ,→ [ 22 - 0.95 - 0.27674]: ['guy', 'bad', 'feel', 'thought', 'big', 'kid', 'kind', 'dog', 'word', 'remember'] [ 23 - 0.85833 - 0.25315]: ['job', 'worker', 'employees', 'contract', 'manager', 'business', 'union', 'working', 'company', 'executive'] ,→ [ 24 - 0.775 - 0.26369]: ['percent', 'number', 'study', 'found', 'result', 'survey', 'article', 'level', 'problem', 'group'] [ 25 - 0.70833 - 0.21533]: ['black', 'white', 'zzz_mexico', 'american', 'country', 'immigrant', 'zzz_united_states', 'mexican', 'group', 'flag'] ,→ [ 26 - 0.83333 - 0.36285]: ['zzz_bush', 'president', 'zzz_white_house', 'bill', 'zzz_clinton', 'zzz_senate', 'zzz_congress', 'administration', 'republican', 'political'] ,→ [ 27 - 0.62 - 0.23437]: ['round', 'won', 'shot', 'player', 'tour', 'play', 'golf', 'zzz_tiger_wood', 'win', 'set'] [ 28 - 0.86667 - 0.33338]: ['film', 'movie', 'character', 'actor', 'movies', 'director', 'zzz_hollywood', 'play', 'minutes', 'starring'] ,→ [ 29 - 0.37333 - 0.37747]: ['team', 'game', 'point', 'season', 'coach', 'play', 'player', 'games', 'basketball', 'win'] [ 30 - 0.85 - 0.34187]: ['court', 'case', 'law', 'lawyer', 'decision', 'legal', 'lawsuit', 'judge', 'zzz_florida', 'ruling'] [ 31 - 0.83333 - 0.23369]: ['room', 'hotel', 'house', 'town', 'restaurant', 'wall', 'home', 'tour', 'trip', 'night'] [ 32 - 0.95 - 0.28975]: ['women', 'children', 'child', 'parent', 'girl', 'age', 
'young', 'woman', 'mother', 'teen'] [ 33 - 0.84167 - 0.36553]: ['music', 'song', 'play', 'band', 'musical', 'show', 'album', 'sound', 'stage', 'record'] [ 34 - 0.68333 - 0.26919]: ['law', 'group', 'government', 'official', 'federal', 'public', 'rules', 'agency', 'states', 'issue'] ,→ 6362 [ 35 - 0.95 - 0.28208]: ['web', 'site', 'www', 'mail', 'information', 'online', 'sites', 'zzz_internet', 'internet', 'telegram'] ,→ [ 36 - 0.78333 - 0.27546]: ['fire', 'night', 'hour', 'dead', 'police', 'morning', 'street', 'left', 'building', 'killed'] [ 37 - 0.75833 - 0.30963]: ['zzz_afghanistan', 'zzz_taliban', 'war', 'bin', 'laden', 'government', 'official', 'zzz_pakistan', 'forces', 'zzz_u_s'] ,→ [ 38 - 0.83333 - 0.28581]: ['school', 'student', 'program', 'teacher', 'high', 'college', 'education', 'class', 'test', 'public'] ,→ [ 39 - 0.95 - 0.13694]: ['fax', 'syndicate', 'con', 'article', 'purchased', 'zzz_canada', 'una', 'publish', 'zzz_paris', 'representatives'] ,→ [ 40 - 0.80333 - 0.32636]: ['money', 'million', 'tax', 'plan', 'pay', 'billion', 'cut', 'fund', 'cost', 'program'] [ 41 - 0.88333 - 0.41706]: ['campaign', 'zzz_al_gore', 'zzz_george_bush', 'election', 'voter', 'vote', 'political', 'presidential', 'republican', 'democratic'] ,→ [ 42 - 0.85833 - 0.32241]: ['computer', 'system', 'technology', 'software', 'zzz_microsoft', 'window', 'digital', 'user', 'company', 'program'] ,→ [ 43 - 0.9 - 0.35597]: ['police', 'case', 'death', 'officer', 'investigation', 'prison', 'charges', 'trial', 'prosecutor', 'zzz_fbi'] ,→ [ 44 - 0.85 - 0.31374]: ['percent', 'market', 'stock', 'economy', 'quarter', 'growth', 'economic', 'analyst', 'rate', 'rates'] ,→ [ 45 - 0.60833 - 0.26599]: ['attack', 'military', 'zzz_u_s', 'zzz_united_states', 'terrorist', 'zzz_bush', 'official', 'war', 'zzz_american', 'security'] ,→ [ 46 - 0.44 - 0.30756]: ['team', 'point', 'game', 'season', 'play', 'player', 'games', 'goal', 'shot', 'zzz_laker'] [ 47 - 0.85 - 0.27431]: ['human', 'scientist', 'anthrax', 'animal', 'disease', 'found', 'test', 'food', 'research', 'virus'] [ 48 - 0.9 - 0.3247]: ['flight', 'plane', 'airport', 'passenger', 'pilot', 'travel', 'security', 'air', 'airline', 'crew'] [ 49 - 0.9 - 0.31896]: ['book', 'writer', 'author', 'wrote', 'read', 'word', 'writing', 'magazine', 'newspaper', 'paper'] Online LDA: NPMI=0.291, TU=0.804 [ 0 - 0.93333 - 0.29401]: ['women', 'gay', 'sex', 'girl', 'woman', 'look', 'fashion', 'female', 'wear', 'hair'] [ 1 - 0.95 - 0.35632]: ['car', 'driver', 'truck', 'race', 'vehicle', 'vehicles', 'zzz_ford', 'wheel', 'driving', 'road'] [ 2 - 0.44 - 0.30627]: ['point', 'game', 'team', 'play', 'season', 'games', 'zzz_laker', 'shot', 'player', 'basketball'] [ 3 - 0.75 - 0.31578]: ['election', 'ballot', 'zzz_florida', 'vote', 'votes', 'recount', 'court', 'zzz_al_gore', 'voter', 'count'] ,→ [ 4 - 0.88333 - 0.34432]: ['computer', 'web', 'zzz_internet', 'site', 'online', 'system', 'mail', 'internet', 'sites', 'software'] ,→ [ 5 - 1 - 0.27207]: ['con', 'una', 'las', 'mas', 'por', 'dice', 'como', 'los', 'anos', 'sus'] [ 6 - 0.87 - 0.2716]: ['study', 'test', 'found', 'data', 'percent', 'researcher', 'evidence', 'result', 'finding', 'scientist'] ,→ [ 7 - 0.93333 - 0.28208]: ['show', 'television', 'network', 'zzz_nbc', 'series', 'viewer', 'zzz_cb', 'zzz_abc', 'broadcast', 'producer'] ,→ [ 8 - 0.8 - 0.3368]: ['court', 'case', 'law', 'lawyer', 'police', 'trial', 'death', 'officer', 'prosecutor', 'prison'] [ 9 - 0.82 - 0.33016]: ['percent', 'tax', 'economy', 'money', 'cut', 'fund', 'market', 'stock', 'billion', 
'economic'] [ 10 - 0.64 - 0.30092]: ['team', 'player', 'million', 'season', 'contract', 'deal', 'manager', 'agent', 'fan', 'league'] [ 11 - 0.88333 - 0.30657]: ['need', 'feel', 'word', 'question', 'look', 'right', 'mean', 'kind', 'fact', 'course'] [ 12 - 0.81667 - 0.25913]: ['religious', 'zzz_american', 'jewish', 'zzz_god', 'religion', 'jew', 'american', 'german', 'political', 'zzz_america'] ,→ [ 13 - 0.9 - 0.22142]: ['cup', 'minutes', 'add', 'tablespoon', 'food', 'oil', 'pepper', 'wine', 'sugar', 'teaspoon'] [ 14 - 0.69167 - 0.25718]: ['zzz_china', 'zzz_united_states', 'zzz_u_s', 'chinese', 'zzz_japan', 'zzz_american', 'countries', 'foreign', 'japanese', 'official'] ,→ [ 15 - 0.72333 - 0.21791]: ['match', 'tennis', 'set', 'boat', 'won', 'point', 'zzz_pete_sampras', 'final', 'game', 'player'] [ 16 - 0.76667 - 0.16417]: ['zzz_texas', 'telegram', 'com', 'zzz_austin', 'zzz_houston', 'visit', 'www', 'services', 'web', 'file'] ,→ [ 17 - 0.81667 - 0.27063]: ['room', 'building', 'house', 'look', 'wall', 'floor', 'door', 'home', 'small', 'light'] [ 18 - 0.93333 - 0.42167]: ['music', 'song', 'band', 'album', 'musical', 'sound', 'singer', 'record', 'jazz', 'show'] [ 19 - 0.95 - 0.28682]: ['water', 'weather', 'air', 'wind', 'storm', 'feet', 'snow', 'rain', 'mountain', 'miles'] [ 20 - 0.76667 - 0.30412]: ['military', 'attack', 'war', 'terrorist', 'zzz_u_s', 'laden', 'zzz_american', 'bin', 'zzz_pentagon', 'forces'] ,→ [ 21 - 1 - 0.37961]: ['drug', 'patient', 'doctor', 'health', 'medical', 'disease', 'hospital', 'care', 'cancer', 'treatment'] [ 22 - 0.93333 - 0.26762]: ['black', 'white', 'flag', 'zzz_black', 'racial', 'irish', 'protest', 'crowd', 'american', 'african'] ,→ [ 23 - 0.64 - 0.32174]: ['company', 'companies', 'million', 'business', 'market', 'percent', 'stock', 'sales', 'analyst', 'customer'] ,→ [ 24 - 0.95 - 0.35968]: ['book', 'magazine', 'newspaper', 'author', 'wrote', 'writer', 'writing', 'published', 'read', 'reader'] ,→ [ 25 - 0.72 - 0.2336]: ['company', 'zzz_enron', 'firm', 'zzz_microsoft', 'million', 'lawsuit', 'companies', 'lawyer', 'case', 'settlement'] ,→ [ 26 - 0.5 - 0.25483]: ['official', 'zzz_fbi', 'government', 'agent', 'terrorist', 'information', 'zzz_cuba', 'attack', 'security', 'zzz_united_states'] ,→ [ 27 - 0.83333 - 0.26933]: ['art', 'zzz_new_york', 'artist', 'century', 'painting', 'show', 'museum', 'collection', 'history', 'director'] ,→ [ 28 - 0.88333 - 0.20031]: ['priest', 'church', 'horse', 'race', 'horses', 'bishop', 'abuse', 'zzz_kentucky_derby', 'pope', 'won'] ,→ [ 29 - 0.65833 - 0.22441]: ['government', 'zzz_mexico', 'country', 'zzz_united_states', 'mexican', 'immigrant', 'border', 'countries', 'president', 'worker'] ,→ [ 30 - 0.715 - 0.25157]: ['goal', 'shot', 'play', 'game', 'king', 'round', 'zzz_tiger_wood', 'player', 'fight', 'win'] [ 31 - 0.88333 - 0.34572]: ['family', 'home', 'friend', 'father', 'children', 'mother', 'son', 'wife', 'told', 'daughter'] [ 32 - 0.87 - 0.20923]: ['land', 'town', 'animal', 'farm', 'fish', 'bird', 'local', 'farmer', 'million', 'miles'] [ 33 - 0.70333 - 0.35313]: ['run', 'game', 'hit', 'inning', 'season', 'yankees', 'games', 'pitcher', 'home', 'zzz_dodger'] [ 34 - 0.95 - 0.20151]: ['plant', 'mayor', 'zzz_rudolph_giuliani', 'zzz_los_angeles', 'flower', 'garden', 'tree', 'trees', 'zzz_southern_california', 'seed'] ,→ [ 35 - 0.95 - 0.28879]: ['cell', 'scientist', 'research', 'human', 'science', 'stem', 'brain', 'space', 'technology', 'experiment'] ,→ [ 36 - 0.75 - 0.14192]: ['com', 'daily', 'palm', 'beach', 'question', 'statesman', 
'american', 'information', 'zzz_eastern', 'austin'] ,→ [ 37 - 0.75 - 0.41055]: ['zzz_george_bush', 'zzz_al_gore', 'president', 'zzz_bush', 'campaign', 'zzz_clinton', 'zzz_white_house', 'presidential', 'zzz_bill_clinton', 'republican'] ,→ [ 38 - 0.87 - 0.28937]: ['school', 'student', 'program', 'teacher', 'children', 'high', 'education', 'college', 'job', 'percent'] ,→ [ 39 - 0.825 - 0.29646]: ['zzz_taliban', 'zzz_afghanistan', 'zzz_pakistan', 'zzz_russia', 'government', 'zzz_russian', 'afghan', 'country', 'zzz_vladimir_putin', 'leader'] ,→ [ 40 - 0.825 - 0.33129]: ['film', 'movie', 'character', 'play', 'actor', 'movies', 'director', 'book', 'zzz_hollywood', 'love'] ,→ [ 41 - 0.82 - 0.29137]: ['oil', 'power', 'plant', 'energy', 'gas', 'prices', 'zzz_california', 'fuel', 'million', 'cost'] [ 42 - 0.875 - 0.40189]: ['palestinian', 'zzz_israel', 'zzz_israeli', 'peace', 'israeli', 'zzz_yasser_arafat', 'leader', 'israelis', 'official', 'violence'] ,→ [ 43 - 0.87 - 0.26507]: ['food', 'product', 'drink', 'eat', 'weight', 'pound', 'smoking', 'diet', 'percent', 'tobacco'] [ 44 - 0.65 - 0.19242]: ['com', 'www', 'information', 'site', 'fax', 'web', 'article', 'syndicate', 'visit', 'contact'] [ 45 - 0.78333 - 0.33252]: ['zzz_olympic', 'games', 'sport', 'medal', 'team', 'gold', 'athletes', 'event', 'won', 'competition'] ,→ 6363 [ 46 - 0.65833 - 0.20844]: ['flight', 'plane', 'airport', 'passenger', 'attack', 'zzz_new_york', 'building', 'security', 'worker', 'official'] ,→ [ 47 - 0.7 - 0.43799]: ['campaign', 'political', 'election', 'vote', 'democratic', 'voter', 'zzz_party', 'republican', 'zzz_republican', 'governor'] ,→ [ 48 - 0.54 - 0.37981]: ['game', 'team', 'season', 'play', 'coach', 'yard', 'player', 'football', 'games', 'quarterback'] [ 49 - 0.825 - 0.30119]: ['bill', 'zzz_congress', 'zzz_bush', 'plan', 'federal', 'government', 'administration', 'law', 'group', 'zzz_senate'] ,→ ProdLDA: NPMI=0.319, TU=0.668 [0-1-0.19456]: ['zzz_discover', 'molecules', 'data', 'zzz_eric_haseltine', 'ion', 'gigahertz', 'computing', 'zzz_dna', 'horsepower', 'molecule'] ,→ [1-0.95-0.18699]: ['zzz_focus', 'zzz_mississippi_valley', 'zzz_national_forecast', 'zzz_ohio_valley', 'torque', 'moisture', 'gusty', 'zzz_bernard_gladstone', 'zzz_middle_atlantic', 'zzz_winston_cup'] ,→ [2-0.625-0.14594]: ['prosecutor', 'murder', 'distinguishable', 'zzz_bantam', 'zzz_how_to_and_miscellaneous', 'zzz_ray_lewis', 'zzz_my_cheese', 'zzz_bill_phillip', 'zzz_michael_d_orso', 'zzz_fiction'] ,→ [3-0.85-0.30119]: ['film', 'comedy', 'movie', 'zzz_fare', 'zzz_judi_dench', 'zzz_billy_bob_thornton', 'zzz_steve_buscemi', 'starring', 'adaptation', 'zzz_cable_cast'] ,→ [4-0.9-0.29068]: ['zzz_federal_energy_regulatory_commission', 'zzz_enron', 'administration', 'megawatt', 'zzz_congress', 'utilities', 'lawmaker', 'legislation', 'zzz_southern_california_edison', 'zzz_senate'] ,→ [5-1-0.43922]: ['constitutional', 'justices', 'zzz_supreme_court', 'zzz_ruth_bader_ginsburg', 'ruling', 'federal', 'zzz_chief_justice_william_h_rehnquist', 'zzz_justices_sandra_day_o_connor', 'zzz_u_s_circuit_court', 'zzz_florida_supreme_court'] ,→ ,→ [6-0.9-0.26193]: ['victorian', 'artist', 'sculptures', 'painting', 'garden', 'decorative', 'zzz_post_office_box', 'zzz_gothic', 'boutiques', 'galleries'] ,→ [7-0.7-0.50355]: ['zzz_afghanistan', 'qaida', 'zzz_pentagon', 'zzz_taliban', 'bin', 'laden', 'zzz_rumsfeld', 'zzz_osama', 'zzz_defense_secretary_donald_rumsfeld', 'terrorism'] ,→ [8-0.50833-0.33875]: ['zzz_federal_reserve', 'prices', 'zzz_fed', 'stock', 'companies', 'rates', 
'economy', 'investor', 'billion', 'inflation'] ,→ [9-0.50833-0.096284]: ['distinguishable', 'zzz_bantam', 'zzz_how_to_and_miscellaneous', 'bookstores', 'wholesaler', 'zzz_phillip_mcgraw', 'zzz_nonfiction', 'zzz_anne_stephenson', 'zzz_berkley', 'zzz_harpercollin'] ,→ [10-0.88333-0.55375]: ['zzz_winston_cup', 'zzz_nascar', 'zzz_daytona', 'championship', 'zzz_dale_earnhardt_jr', 'zzz_nascar_winston_cup', 'zzz_tony_stewart', 'zzz_jeff_gordon', 'restrictor', 'zzz_dale_jarrett'] ,→ [11-1-0.3249]: ['chiffon', 'zzz_randolph_duke', 'strapless', 'tulle', 'zzz_valentino', 'beaded', 'dresses', 'couture', 'zzz_hal_rubenstein', 'zzz_versace'] ,→ [12-1-0.077046]: ['gutty', 'zzz_ansel_williamson', 'zzz_caracas_cannonball', 'zzz_rosa_hoot', 'zzz_osage_indian', 'zzz_black_gold', 'arrestingly', 'zzz_canonero_ii', 'zzz_david_alexander', 'zzz_aristides'] ,→ [13-0.9-0.02489]: ['oped', 'zzz_andy_alexander', 'zzz_kaplow', 'zzz_bessonette', 'andya', 'zzz_eyman', 'zzz_news_questions_q', 'zzz_lee_may_this', 'pica', 'zzz_alan_gordon'] ,→ [14-0.51667-0.47783]: ['zzz_republican', 'election', 'zzz_al_gore', 'democratic', 'republican', 'votes', 'democrat', 'voter', 'zzz_gop', 'ballot'] ,→ [15-0.43333-0.36588]: ['inning', 'season', 'scored', 'playoff', 'scoring', 'game', 'postseason', 'homer', 'goaltender', 'baseman'] ,→ [16-0.40833-0.16908]: ['distinguishable', 'zzz_how_to_and_miscellaneous', 'zzz_bantam', 'zzz_dave_pelzer', 'zzz_nonfiction', 'bookstores', 'wholesaler', 'zzz_lost_boy', 'zzz_fiction', 'zzz_berkley'] ,→ [17-0.95-0.35634]: ['user', 'software', 'zzz_microsoft', 'zzz_internet', 'zzz_aol', 'provider', 'consumer', 'download', 'zzz_microsoft_corp', 'zzz_napster'] ,→ [18-0.83333-0.3176]: ['zzz_arthur_andersen', 'zzz_justice_department', 'zzz_enron', 'prosecutor', 'auditor', 'zzz_securities', 'defendant', 'zzz_sec', 'litigation', 'plaintiff'] ,→ [19-0.56667-0.29763]: ['rebound', 'layup', 'pointer', 'halftime', 'touchdown', 'coach', 'tournament', 'zzz_laker', 'seeded', 'championship'] ,→ [20-0.56667-0.45302]: ['zzz_al_gore', 'zzz_republican', 'election', 'democratic', 'votes', 'democrat', 'voter', 'zzz_democrat', 'zzz_bush', 'ballot'] ,→ [21-0.95-0.33733]: ['nutrient', 'biotechnology', 'zzz_drug_administration', 'protein', 'pesticides', 'zzz_starlink', 'biotech', 'bacteria', 'genetically', 'species'] ,→ [22-0.51667-0.23056]: ['tiene', 'una', 'mas', 'sobre', 'anos', 'representantes', 'publicar', 'comprar', 'tienen', 'ventas'] [23-0.45833-0.36537]: ['companies', 'stock', 'investor', 'analyst', 'company', 'shareholder', 'billion', 'zzz_thomson_financial_first_call', 'zzz_exchange_commission', 'zzz_securities'] ,→ [24-0.7-0.37406]: ['zzz_taliban', 'zzz_afghanistan', 'zzz_attorney_general_john_ashcroft', 'zzz_ashcroft', 'qaida', 'zzz_pentagon', 'tribunal', 'terrorism', 'missiles', 'zzz_rumsfeld'] ,→ [25-0.36667-0.36524]: ['inning', 'season', 'scoring', 'playoff', 'scored', 'game', 'postseason', 'homer', 'defenseman', 'fielder'] ,→ [26-0.71-0.26107]: ['zzz_robert_kagan', 'unilateral', 'zzz_jeane_kirkpatrick', 'zzz_yasser_arafat', 'democracy', 'zzz_israel', 'palestinian', 'zzz_norman_levine', 'zzz_conservative', 'zzz_nlevineiip'] ,→ [27-0.65-0.35567]: ['tax', 'trillion', 'surpluses', 'zzz_fed', 'zzz_federal_reserve', 'surplus', 'zzz_social_security', 'inflation', 'economy', 'stimulus'] ,→ [28-1-0.17428]: ['zzz_technobuddy_popular', 'zzz_husted', 'zzz_cleere_rudd', 'zzz_netwatch', 'zzz_tech_savvy', 'zzz_technobuddy', 'pageex', 'zzz_bizmags_a', 'zzz_texas_consumer_q', 'zzz_tech_tools_software'] ,→ [29-0.31833-0.55049]: 
['palestinian', 'zzz_israeli', 'zzz_yasser_arafat', 'zzz_west_bank', 'israelis', 'zzz_israel', 'militant', 'zzz_prime_minister_ariel_sharon', 'zzz_gaza_strip', 'zzz_palestinian'] ,→ [30-0.59167-0.3032]: ['companies', 'analyst', 'automaker', 'stock', 'consumer', 'zzz_daimlerchrysler', 'zzz_first_call_thomson_financial', 'billion', 'zzz_daimlerchrysler_ag', 'company'] ,→ [31-0.9-0.18023]: ['zzz_doubles', 'zzz_eat', 'painting', 'artist', 'decor', 'designer', 'painter', 'zzz_sightseeing', 'sculpture', 'spangly'] ,→ [32-0.46667-0.31749]: ['mas', 'sobre', 'anos', 'una', 'como', 'otros', 'ventas', 'tienen', 'sus', 'todo'] [33-0.36833-0.54809]: ['zzz_israeli', 'palestinian', 'zzz_west_bank', 'israelis', 'zzz_palestinian', 'zzz_yasser_arafat', 'militant', 'zzz_israel', 'zzz_prime_minister_ariel_sharon', 'zzz_gaza'] ,→ [34-0.5-0.28574]: ['tablespoon', 'teaspoon', 'saucepan', 'pepper', 'cholesterol', 'cup', 'chopped', 'garlic', 'sodium', 'browned'] ,→ [35-0.41667-0.30011]: ['mas', 'anos', 'sobre', 'tiene', 'sus', 'como', 'ventas', 'todo', 'representantes', 'una'] [36-0.9-0.1902]: ['toder', 'zzz_tom_oder', 'andya', 'zzz_andy_alexander', 'artd', 'tduncan', 'zzz_dalglish', 'zzz_todd_duncan', 'zzz_rick_christie', 'rickc'] ,→ [37-0.56667-0.32578]: ['pointer', 'layup', 'touchdown', 'halftime', 'tournament', 'semifinal', 'championship', 'coach', 'zzz_ncaa', 'seeded'] ,→ [38-0.43333-0.42183]: ['season', 'playoff', 'game', 'inning', 'scoring', 'scored', 'defenseman', 'scoreless', 'games', 'shutout'] ,→ [39-0.50833-0.086353]: ['distinguishable', 'zzz_how_to_and_miscellaneous', 'zzz_dave_pelzer', 'zzz_bantam', 'zzz_lost_boy', 'zzz_phillip_mcgraw', 'zzz_hyperion', 'zzz_nonfiction', 'zzz_bill_phillip', 'zzz_robert_atkin'] ,→ [40-0.635-0.37871]: ['zzz_barak', 'zzz_israel', 'zzz_yasser_arafat', 'palestinian', 'zzz_ariel_sharon', 'zzz_prime_minister_ehud_barak', 'parliamentary', 'zzz_pri', 'israelis', 'democracy'] ,→ [41-0.45-0.35843]: ['tablespoon', 'teaspoon', 'cup', 'saucepan', 'pepper', 'garlic', 'cloves', 'onion', 'chopped', 'minced'] [42-0.61667-0.40298]: ['zzz_medicare', 'tax', 'zzz_republican', 'zzz_social_security', 'prescription', 'trillion', 'zzz_senate', 'zzz_house_republican', 'republican', 'democrat'] ,→ [43-0.85-0.26198]: ['film', 'movie', 'album', 'comedy', 'actress', 'zzz_merle_ginsberg', 'genre', 'debut', 'zzz_nicole_kidman', 'musical'] ,→ [44-0.95-0.34236]: ['anthrax', 'spores', 'inhalation', 'antibiotic', 'zzz_drug_administration', 'zzz_disease_control', 'zzz_fda', 'zzz_cdc', 'zzz_ernesto_blanco', 'zzz_cipro'] ,→ [45-0.3-0.41301]: ['season', 'inning', 'playoff', 'game', 'scoring', 'scored', 'games', 'coach', 'postseason', 'defenseman'] 6364 [46-0.45833-0.3414]: ['companies', 'stock', 'shareholder', 'investor', 'analyst', 'company', 'merger', 'billion', 'zzz_at', 'zzz_securities'] ,→ [47-0.45-0.31771]: ['tablespoon', 'teaspoon', 'saucepan', 'cholesterol', 'pepper', 'parsley', 'cup', 'garlic', 'cloves', 'onion'] ,→ [48-1-0.4116]: ['bishop', 'priest', 'catholic', 'zzz_vatican', 'zzz_cardinal_bernard_f_law', 'jew', 'religious', 'zzz_christianity', 'zzz_roman_catholic', 'dioceses'] ,→ [49-0.36833-0.55235]: ['zzz_israeli', 'palestinian', 'zzz_west_bank', 'militant', 'zzz_yasser_arafat', 'zzz_gaza_strip', 'israelis', 'zzz_ramallah', 'zzz_palestinian', 'zzz_israel'] ,→ NTM-R: NPMI=0.218, TU=0.874 [0-1-0.16553]: ['zzz_dow_jones', 'zzz_first_call_thomson_financial', 'zzz_thomson_financial_first_call', 'composite', 'zzz_tom_walker', 'indexes', 'zzz_sach', 'annualized', 'zzz_fed', 
'zzz_prudential_securities'] ,→ [1-0.69769-0.1441]: ['zzz_held', 'advisory', 'redevelopment', 'renovated', 'premature', 'occupancy', 'sicheianytimes', 'suites', 'una', 'zzz_atentamente'] ,→ [2-0.95-0.40759]: ['zzz_playstation', 'gameplay', 'zzz_we_want', 'zzz_dreamcast', 'gamer', 'zzz_national_geographic_today_list', 'ps2', 'zzz_publish_a_story', 'zzz_natgeo_list', 'zzz_know_about'] ,→ [3-0.95-0.20454]: ['zzz_new_hampshire', 'zzz_budget_office', 'caucuses', 'uninsured', 'zzz_sooner', 'zzz_john_mccain', 'zzz_south_carolina', 'zzz_mccain', 'milligram', 'seeded'] ,→ [4-0.95-0.20167]: ['studios', 'zzz_dvd', 'zzz_vh', 'zzz_recording_industry_association', 'soundtrack', 'zzz_paramount', 'zzz_fare', 'zzz_dreamwork', 'zzz_metallica', 'zzz_warner_brother'] ,→ [5-0.8-0.11388]: ['zzz_joseph_ellis', 'zzz_lance_armstrong', 'zzz_my_cheese', 'zzz_bill_phillip', 'zzz_crown', 'clinton', 'zzz_michael_d_orso', 'zzz_doubleday', 'zzz_mitch_albom', 'noticias'] ,→ [6-0.8-0.092516]: ['zzz_mike_scioscia', 'minced', 'zzz_secret', 'coarsely', 'zzz_scribner', 'zzz_my_cheese', 'combine', 'zzz_chronicle', 'zzz_mitch_albom', 'zzz_michael_d_orso'] ,→ [7-1-0.24547]: ['zzz_touch_tone', 'astrascope', 'zzz_news_america', 'zzz_xii', 'zzz_sagittarius', 'zzz_capricorn', 'zzz_clip_and_save', 'zzz_birthday', 'zzz_aquarius', 'zzz_pisces'] ,→ [8-0.48269-0.17408]: ['undatelined', 'zzz_held', 'misidentified', 'zzz_attn_editor', 'zzz_boston_globe', 'zzz_killed', 'herbert', 'zzz_states_news_service', 'publication', 'dowd'] ,→ [9-1-0.37552]: ['megawatt', 'zzz_opec', 'zzz_petroleum_exporting_countries', 'renewable', 'zzz_federal_communications_commission', 'refineries', 'deregulation', 'zzz_federal_energy_regulatory_commission', 'deregulated', 'pipelines'] ,→ ,→ [10-0.95-0.19122]: ['species', 'zzz_anne_stephenson', 'ecological', 'habitat', 'archaeologist', 'mammal', 'biologist', 'genes', 'zzz_duplication', 'conservationist'] ,→ [11-1-0.16745]: ['zzz_phoenix', 'zzz_rudolph_giuliani', 'zzz_army', 'zzz_brooklyn', 'station', 'stadium', 'officer', 'apartment', 'zzz_kansas_city', 'zzz_manhattan'] ,→ [12-1-0.076741]: ['zzz_technobuddy_popular', 'zzz_husted', 'zzz_netwatch', 'zzz_cleere_rudd', 'zzz_tech_savvy', 'zzz_technobuddy', 'computing', 'hacker', 'zzz_greig', 'zzz_texas_consumer_q'] ,→ [13-1-0.33571]: ['zzz_national_transportation_safety_board', 'zzz_american_airlines_flight', 'zzz_defense_secretary_donald_rumsfeld', 'zzz_federal_aviation_administration', 'zzz_joint_chief', 'zzz_david_wood', 'zzz_rumsfeld', 'zzz_u_s_central_command', 'zzz_pentagon', 'cockpit'] ,→ ,→ [14-0.95-0.18476]: ['zzz_mccain_feingold', 'zzz_common_cause', 'zzz_ir', 'zzz_recording_industry_association', 'zzz_internal_revenue_service', 'taxable', 'deduction', 'debtor', 'infringement', 'zzz_russell_feingold'] ,→ [15-0.95-0.38946]: ['zzz_troy_glaus', 'zzz_mike_scioscia', 'zzz_david_eckstein', 'zzz_edison_field', 'zzz_garret_anderson', 'zzz_angel', 'psychiatry', 'zzz_adam_kennedy', 'zzz_troy_percival', 'zzz_scott_spiezio'] ,→ [16-1-0.38596]: ['zzz_northern_alliance', 'zzz_tajik', 'zzz_pashtun', 'zzz_uzbek', 'warlord', 'zzz_kashmir', 'zzz_taliban', 'zzz_kabul', 'zzz_afghan', 'caves'] ,→ [17-1-0.47243]: ['winemaker', 'wines', 'winery', 'vineyard', 'wineries', 'zzz_publisher', 'grape', 'tannin', 'grapes', 'zzz_harry_potter_and_the_sorcerer_s_stone'] ,→ [18-0.95-0.2147]: ['zzz_o_neal', 'zzz_kobe_bryant', 'zzz_robert_horry', 'zzz_phil_jackson', 'zzz_shaquille_o_neal', 'psychiatrist', 'screenplay', 'sexuality', 'zzz_derek_fisher', 'zzz_anne_stephenson'] ,→ [19-0.90769-0.14241]: 
['zzz_held', 'goalkeeper', 'midfielder', 'zzz_ml', 'midfield', 'referee', 'zzz_olympian', 'zzz_dick_ebersol', 'zzz_galaxy', 'zzz_nbc_sport'] ,→ [20-0.95-0.18766]: ['fue', 'inversiones', 'gracias', 'las', 'latinoamericanas', 'angulos', 'finanzas', 'transmitida', 'backhand', 'industrias'] ,→ [21-0.90769-0.44435]: ['zzz_gaza_strip', 'zzz_nablus', 'oslo', 'zzz_palestinian_controlled', 'zzz_hebron', 'zzz_ramallah', 'zzz_west_bank', 'fatah', 'zzz_held', 'zzz_gaza'] ,→ [22-0.75769-0.066675]: ['zzz_karl_horwitz', 'zzz_lifebeat', 'shopper', 'homeowner', 'telex', 'zzz_nonsubscriber', 'pet', 'conditioner', 'zzz_dru_sefton', 'zzz_held'] ,→ [23-0.95-0.2593]: ['filibuster', 'bipartisanship', 'zzz_lott', 'zzz_pri', 'zzz_tom_daschle', 'zzz_mccain', 'zzz_daschle', 'zzz_sen_tom_daschle', 'centrist', 'zzz_jefford'] ,→ [24-1-0.38059]: ['holes', 'fairway', 'birdies', 'birdied', 'birdie', 'bogey', 'zzz_valentino', 'putted', 'putt', 'designation'] ,→ [25-0.85-0.28633]: ['zzz_chechnya', 'zzz_chechen', 'zzz_boris_yeltsin', 'zzz_vladimir_putin', 'choreographer', 'choreography', 'dancer', 'zzz_russian', 'costumes', 'zzz_kremlin'] ,→ [26-0.85-0.2319]: ['zzz_kgb', 'zzz_kremlin', 'zzz_jiang_zemin', 'zzz_boris_yeltsin', 'zzz_hainan', 'espionage', 'zzz_alberto_fujimori', 'zzz_wen_ho_lee', 'zzz_vladimir_putin', 'zzz_taiwan'] ,→ [27-0.54936-0.11824]: ['zzz_held', 'misidentified', 'zzz_attn_editor', 'zzz_killed', 'obituary', 'misspelled', 'zzz_washington_datelined', 'slugged', 'polygraph', 'publication'] ,→ [28-0.95-0.24534]: ['segregation', 'ordination', 'zzz_lazaro_gonzalez', 'protestant', 'dioceses', 'zzz_anthony_kennedy', 'parishes', 'zzz_juan_miguel_gonzalez', 'seminaries', 'priesthood'] ,→ [29-0.8-0.23328]: ['zzz_cox_news_campaign', 'zzz_jeb_bush', 'zzz_rev_al_sharpton', 'zzz_state_katherine_harris', 'chad', 'zzz_miami_dade', 'canvassing', 'zzz_super_tuesday', 'absentee', 'zzz_pat_buchanan'] ,→ [30-0.56603-0.15392]: ['zzz_held', 'zzz_attn_editor', 'undatelined', 'zzz_washington_datelined', 'zzz_anaconda', 'zzz_boston_globe', 'zzz_taloqan', 'zzz_international_space_station', 'zzz_killed', 'crewmen'] ,→ [31-1-0.16598]: ['zzz_gibsonburg', 'eschuett', 'nwonline', 'zzz_west_madison', 'zzz_elizabeth_schuett', 'zzz_marty_kurzfeld', 'zzz_lester', 'fumble', 'zzz_lester_pozz', 'downfield'] ,→ [32-0.93333-0.18509]: ['zzz_boston_globe', 'zzz_ralph_nader', 'jobless', 'employer', 'tuition', 'productivity', 'misstated', 'advertiser', 'tonight', 'recession'] ,→ [33-0.56436-0.10917]: ['zzz_held', 'advisory', 'premature', 'publication', 'sicheianytimes', 'guard', 'internacional', 'representantes', 'zzz_cada', 'industria'] ,→ [34-0.9-0.19004]: ['manhunt', 'arraignment', 'detectives', 'released', 'zzz_karachi', 'gunshot', 'semiautomatic', 'zzz_juan_miguel_gonzalez', 'arraigned', 'slaying'] ,→ [35-0.90769-0.28004]: ['zzz_held', 'zzz_david_pelletier', 'zzz_ottavio_cinquanta', 'zzz_jamie_sale', 'zzz_jacques_rogge', 'zzz_bob_arum', 'zzz_international_skating_union', 'doping', 'zzz_anton_sikharulidze', 'zzz_u_s_olympic_committee'] ,→ [36-0.51436-0.13181]: ['publication', 'premature', 'zzz_held', 'advisory', 'guard', 'send', 'released', 'zzz_broadway', 'zzz_lance_armstrong', 'zzz_tennessee_valley'] ,→ [37-1-0.33657]: ['zzz_fda', 'zzz_d_vt', 'zzz_security_council', 'zzz_ashcroft', 'zzz_senate_judiciary_committee', 'zzz_drug_administration', 'zzz_judiciary_committee', 'statutory', 'justices', 'zzz_attorney_general_john_ashcroft'] ,→ [38-1-0.12492]: ['zzz_focus', 'zzz_lost_boy', 'zzz_diet_revolution', 'zzz_dave_pelzer', 'zzz_jared_diamond', 
'zzz_don_miguel_ruiz', 'zzz_seat', 'physiologist', 'zzz_robert_kiyosaki', 'zzz_soul'] ,→ [39-1-0.26288]: ['zzz_north_american_free_trade_agreement', 'migrant', 'saharan', 'zzz_nafta', 'undocumented', 'zzz_vicente_fox', 'afghan', 'zzz_naturalization_service', 'trafficker', 'zzz_revolutionary_party'] ,→ [40-0.95-0.17834]: ['zzz_teepen_column', 'zzz_schuett', 'carbohydrates', 'natgeo', 'zzz_national_geographic_today', 'zzz_nethaway', 'additionally', 'zzz_mccarty_column', 'zzz_mccarty', 'zzz_publish_a_story'] ,→ [41-0.95-0.059983]: ['zzz_andy_alexander', 'andya', 'toder', 'zzz_tom_oder', 'zzz_dalglish', 'artd', 'zzz_rick_christie', 'zzz_carl_rauscher', 'crausher', 'eta'] ,→ [42-1-0.14993]: ['zzz_red_sox', 'unionist', 'zzz_bill_belichick', 'zzz_david_trimble', 'zzz_richard_riordan', 'zzz_southern_california_edison', 'zzz_sinn_fein', 'zzz_pacific_gas', 'zzz_carl_everett', 'walkout'] ,→ 6365 [43-0.61436-0.13404]: ['zzz_held', 'premature', 'advisory', 'publication', 'periodicos', 'llamar', 'latinoamericanas', 'cubriendo', 'semanal', 'cubrir'] ,→ [44-0.85-0.16905]: ['zzz_karl_horwitz', 'telex', 'zzz_isabel_amorim_sicherle', 'zzz_governor_bush', 'zzz_nonsubscriber', 'zzz_ariel_sharon', 'zzz_ehud_barak', 'zzz_judaism', 'zzz_ana_pena', 'zzz_camp_david'] ,→ [45-0.95-0]: ['rickc', 'zzz_paul_foutch', 'zzz_firestone', 'pfoutch', 'zzz_layout_s_done', 'zzz_news_questions_q', 'paginated', 'zzz_bessonette', 'zzz_rick_christie', 'zzz_langhenry'] ,→ [46-0.8-0.26723]: ['canvassing', 'dimpled', 'zzz_miami_dade', 'zzz_broward', 'zzz_state_katherine_harris', 'chad', 'undervotes', 'recount', 'zzz_volusia', 'layup'] ,→ [47-0.90769-0.43632]: ['zzz_wba', 'zzz_oscar_de_la_hoya', 'zzz_ioc', 'zzz_held', 'zzz_wbc', 'zzz_international_boxing_federation', 'middleweight', 'zzz_ibf', 'zzz_world_boxing_association', 'welterweight'] ,→ [48-0.38936-0.11527]: ['zzz_attn_editor', 'zzz_held', 'misidentified', 'zzz_washington_datelined', 'zzz_los_angeles_daily_new', 'undatelined', 'premature', 'advisory', 'publication', 'imprecisely'] ,→ [49-1-0.32124]: ['zzz_pete_carroll', 'zzz_cleveland_brown', 'lineman', 'zzz_bill_parcell', 'cornerback', 'zzz_bud_selig', 'zzz_offensive', 'zzz_trojan', 'zzz_sugar_bowl', 'zzz_al_groh'] ,→ W-LDA: NPMI=0.356, TU=0.998 [0-1-0.3425]: ['touchdown', 'interception', 'cornerback', 'quarterback', 'patriot', 'linebacker', 'receiver', 'yard', 'zzz_cowboy', 'zzz_ram'] ,→ [1-1-0.27811]: ['como', 'comprar', 'una', 'tiene', 'mas', 'distinguishable', 'publicar', 'sobre', 'tienen', 'prohibitivo'] [2-1-0.38656]: ['zzz_elian', 'zzz_juan_miguel_gonzalez', 'zzz_cuba', 'cuban', 'zzz_elian_gonzalez', 'zzz_fidel_castro', 'zzz_cuban_american', 'zzz_little_havana', 'zzz_lazaro_gonzalez', 'exiles'] ,→ [3-1-0.38253]: ['zzz_red_sox', 'yankees', 'zzz_world_series', 'zzz_baseball', 'baseball', 'outfielder', 'zzz_dan_duquette', 'zzz_met', 'clubhouse', 'zzz_george_steinbrenner'] ,→ [4-1-0.24506]: ['zzz_microsoft', 'antitrust', 'zzz_judge_thomas_penfield_jackson', 'monopoly', 'monopolist', 'breakup', 'remedy', 'browser', 'zzz_u_s_district_judge_thomas_penfield_jackson', 'zzz_fcc'] ,→ [5-1-0.26442]: ['zzz_security_council', 'rebel', 'colombian', 'zzz_iraq', 'zzz_colombia', 'zzz_u_n', 'zzz_congo', 'iraqi', 'zzz_andres_pastrana', 'guerrillas'] ,→ [6-1-0.4521]: ['zzz_john_mccain', 'zzz_mccain', 'zzz_bill_bradley', 'zzz_al_gore', 'primaries', 'zzz_governor_bush', 'zzz_new_hampshire', 'caucuses', 'zzz_george_bush', 'zzz_bob_jones_university'] ,→ [7-1-0.14137]: ['zzz_bernard_gladstone', 'moisture', 'astronomer', 'species', 'zzz_caption', 
'zzz_focus', 'bloom', 'particles', 'shrub', 'soil'] ,→ [8-1-0.29295]: ['couture', 'dresses', 'paginated', 'skirt', 'chiffon', 'designer', 'fashion', 'beaded', 'gown', 'zzz_layout_s_done'] ,→ [9-1-0.23167]: ['zzz_falun_gong', 'unionist', 'zzz_sinn_fein', 'zzz_islamic', 'zzz_northern_ireland', 'zzz_ulster', 'zzz_islam', 'reformist', 'zzz_ira', 'iranian'] ,→ [10-1-0.56851]: ['zzz_israeli', 'zzz_yasser_arafat', 'palestinian', 'zzz_palestinian', 'zzz_west_bank', 'israelis', 'zzz_gaza', 'zzz_israel', 'zzz_barak', 'zzz_ramallah'] ,→ [11-1-0.1902]: ['zzz_andy_alexander', 'andya', 'artd', 'zzz_tom_oder', 'toder', 'zzz_dalglish', 'tduncan', 'zzz_todd_duncan', 'rickc', 'zzz_rick_christie'] ,→ [12-0.95-0.36367]: ['zzz_fbi', 'indictment', 'zzz_justice_department', 'prosecutor', 'investigation', 'pardon', 'indicted', 'investigator', 'hijacker', 'wrongdoing'] ,→ [13-1-0.37572]: ['patient', 'embryos', 'cell', 'genes', 'gene', 'embryo', 'symptom', 'zzz_national_institutes', 'disease', 'tumor'] ,→ [14-1-0.49672]: ['zzz_taliban', 'zzz_northern_alliance', 'afghan', 'zzz_kabul', 'zzz_afghanistan', 'zzz_pakistan', 'zzz_kandahar', 'zzz_pashtun', 'bin', 'laden'] ,→ [15-1-0.4287]: ['defenseman', 'puck', 'goalie', 'goaltender', 'zzz_nhl', 'zzz_stanley_cup', 'zzz_andy_murray', 'zzz_ken_hitchcock', 'zzz_ziggy_palffy', 'defensemen'] ,→ [16-1-0.30709]: ['ballot', 'recount', 'canvassing', 'zzz_florida_supreme_court', 'absentee', 'elector', 'zzz_miami_dade', 'zzz_state_katherine_harris', 'zzz_broward', 'votes'] ,→ [17-1-0.36916]: ['zzz_enron', 'zzz_securities', 'zzz_enron_corp', 'zzz_exchange_commission', 'auditor', 'accounting', 'zzz_arthur_andersen', 'zzz_sec', 'creditor', 'bankruptcy'] ,→ [18-1-0.37785]: ['missile', 'zzz_north_korea', 'zzz_anti_ballistic_missile_treaty', 'warhead', 'zzz_abm', 'zzz_vladimir_putin', 'ballistic', 'missiles', 'zzz_taiwan', 'treaty'] ,→ [19-1-0.2819]: ['zzz_ncaa', 'zzz_florida_state', 'athletic', 'zzz_bc', 'zzz_usc', 'pac', 'zzz_bowl_championship_series', 'zzz_ucla', 'zzz_big_east', 'coaches'] ,→ [20-1-0.47272]: ['album', 'guitarist', 'guitar', 'song', 'band', 'bassist', 'songwriter', 'ballad', 'zzz_grammy', 'singer'] [21-1-0.28145]: ['zzz_cb', 'zzz_nbc', 'zzz_abc', 'sitcom', 'zzz_upn', 'zzz_cable_cast', 'zzz_fare', 'episodes', 'zzz_craig_kilborn', 'zzz_fox'] ,→ [22-1-0.37844]: ['medal', 'zzz_olympic', 'medalist', 'swimmer', 'freestyle', 'athletes', 'zzz_olympian', 'zzz_sydney', 'zzz_winter_olympic', 'gold'] ,→ [23-1-0.3913]: ['film', 'movie', 'starring', 'zzz_oscar', 'screenplay', 'actor', 'filmmaking', 'comedy', 'actress', 'zzz_oscar_winning'] ,→ [24-1-0.53088]: ['zzz_tiger_wood', 'putt', 'birdie', 'bogey', 'zzz_pga', 'birdies', 'par', 'zzz_u_s_open', 'tee', 'fairway'] [25-1-0.2886]: ['composer', 'repertory', 'literary', 'musical', 'conductor', 'choreographer', 'choreography', 'playwright', 'orchestra', 'zzz_anne_stephenson'] ,→ [26-1-0.29966]: ['zzz_fed', 'zzz_dow_jones', 'zzz_nasdaq', 'index', 'zzz_federal_reserve', 'composite', 'indexes', 'zzz_dow', 'inflation', 'stock'] ,→ [27-1-0.15688]: ['zzz_doubles', 'breakfast', 'zzz_nicholas', 'lodging', 'sleigh', 'dining', 'zzz_marty_kurzfeld', 'inn', 'excursion', 'sightseeing'] ,→ [28-1-0.26239]: ['zzz_at', 'merger', 'zzz_time_warner', 'zzz_compaq', 'acquisition', 'zzz_aol_time_warner', 'zzz_aol', 'cent', 'shareholder', 'zzz_first_call_thomson_financial'] ,→ [29-1-0.5289]: ['justices', 'zzz_supreme_court', 'zzz_chief_justice_william_h_rehnquist', 'zzz_ruth_bader_ginsburg', 'unconstitutional', 'zzz_justice_antonin_scalia', 
'zzz_u_s_circuit_court', 'constitutional', 'zzz_justice_sandra_day_o_connor', 'zzz_first_amendment']
[30-1-0.45378]: ['inning', 'zzz_dodger', 'homer', 'zzz_rbi', 'bullpen', 'grounder', 'fastball', 'zzz_mike_scioscia', 'zzz_anaheim_angel', 'hander']
[31-1-0.33095]: ['zzz_opec', 'electricity', 'barrel', 'zzz_petroleum_exporting_countries', 'emission', 'gasoline', 'megawatt', 'utilities', 'gas', 'deregulation']
[32-1-0.40058]: ['tax', 'zzz_medicare', 'zzz_social_security', 'surplus', 'surpluses', 'trillion', 'taxes', 'zzz_budget', 'zzz_budget_office', 'stimulus']
[33-1-0.44259]: ['priest', 'bishop', 'parish', 'zzz_cardinal_bernard_f_law', 'zzz_vatican', 'church', 'clergy', 'catholic', 'priesthood', 'parishes']
[34-1-0.43725]: ['zzz_slobodan_milosevic', 'zzz_serbian', 'zzz_serb', 'zzz_yugoslav', 'zzz_serbia', 'zzz_belgrade', 'albanian', 'zzz_kosovo', 'zzz_vojislav_kostunica', 'submarine']
[35-1-0.40499]: ['zzz_winston_cup', 'zzz_daytona', 'colt', 'lap', 'racing', 'zzz_kentucky_derby', 'zzz_nascar', 'zzz_jeff_gordon', 'zzz_dale_earnhardt', 'zzz_preakness']
[36-1-0.43098]: ['torque', 'horsepower', 'liter', 'sedan', 'zzz_suv', 'zzz_royal_ford', 'rear', 'engine', 'wheel', 'cylinder']
[37-1-0.39637]: ['airport', 'airlines', 'passenger', 'zzz_federal_aviation_administration', 'airline', 'traveler', 'flight', 'fares', 'aviation', 'baggage']
[38-1-0.45111]: ['painting', 'curator', 'exhibition', 'sculpture', 'museum', 'sculptures', 'galleries', 'zzz_modern_art', 'painter', 'gallery']
[39-1-0.49464]: ['zzz_laker', 'zzz_phil_jackson', 'zzz_nba', 'zzz_o_neal', 'zzz_shaquille_o_neal', 'zzz_kobe_bryant', 'zzz_shaq', 'zzz_knick', 'zzz_los_angeles_laker', 'zzz_kobe']
[40-1-0.25086]: ['layoff', 'customer', 'employer', 'worker', 'manufacturing', 'supplier', 'retail', 'rent', 'retailer', 'shopper']
[41-1-0.22187]: ['acres', 'environmentalist', 'forest', 'environmental', 'land', 'germ', 'radioactive', 'timber', 'biological', 'wildlife']
[42-1-0.14201]: ['zzz_playstation', 'gamer', 'zzz_birthday', 'astrascope', 'zzz_news_america', 'zzz_touch_tone', 'brompton', 'zzz_clip_and_save', 'zzz_astrologer', 'zzz_xii']
[43-1-0.41952]: ['pointer', 'layup', 'jumper', 'rebound', 'outrebounded', 'halftime', 'fouled', 'foul', 'basket', 'buzzer']
[44-1-0.34321]: ['tablespoon', 'teaspoon', 'cup', 'pepper', 'chopped', 'saucepan', 'onion', 'garlic', 'oven', 'sauce']
[45-1-0.40768]: ['megabytes', 'user', 'download', 'modem', 'desktop', 'mp3', 'software', 'computer', 'digital', 'files']
[46-0.95-0.33261]: ['juror', 'execution', 'jury', 'murder', 'inmates', 'defendant', 'prosecutor', 'robbery', 'penalty', 'zzz_timothy_mcveigh']
[47-1-0.39516]: ['zzz_senate', 'zzz_house_republican', 'bill', 'zzz_mccain_feingold', 'zzz_d_wis', 'amendment', 'filibuster', 'zzz_r_ariz', 'unregulated', 'legislation']
[48-1-0.3287]: ['zzz_aid', 'zzz_hiv', 'infected', 'zzz_fda', 'genetically', 'epidemic', 'crop', 'medicines', 'zzz_world_health_organization', 'drug']
[49-1-0.35675]: ['student', 'teacher', 'curriculum', 'school', 'classroom', 'math', 'standardized', 'colleges', 'educator', 'faculty']
12.3 Topic words on Wikitext-103:
LDA Collapsed Gibbs sampling: NPMI=0.289, TU=0.754
[ 0 - 0.8 - 0.27197]: ['design', 'model', 'vehicle', 'coin', 'engine', 'version', 'production', 'power', 'car', 'machine']
[ 1 - 0.80625 - 0.21883]: ['specie', 'bird', 'ha', 'plant', 'brown', 'tree', 'white', 'nest', 'genus', 'fruit']
[ 2 - 0.78333 - 0.30133]: ['film', 'role', 'award', 'production', 'movie', 'actor',
'million', 'director', 'scene', 'release'] ,→ [ 3 - 0.80333 - 0.25923]: ['al', 'empire', 'city', 'emperor', 'army', 'roman', 'greek', 'byzantine', 'war', 'arab'] [ 4 - 0.85625 - 0.2574]: ['star', 'planet', 'earth', 'sun', 'mass', 'space', 'moon', 'light', 'ha', 'surface'] [ 5 - 0.95 - 0.32639]: ['storm', 'tropical', 'hurricane', 'wind', 'km', 'cyclone', 'damage', 'mph', 'day', 'depression'] [ 6 - 0.9 - 0.35437]: ['child', 'family', 'life', 'woman', 'father', 'mother', 'friend', 'death', 'wife', 'son'] [ 7 - 0.85 - 0.3176]: ['police', 'day', 'people', 'death', 'prison', 'murder', 'report', 'killed', 'trial', 'reported'] [ 8 - 0.82 - 0.28613]: ['german', 'war', 'soviet', 'germany', 'russian', 'french', 'polish', 'poland', 'russia', 'france'] [ 9 - 0.78958 - 0.28276]: ['god', 'church', 'christian', 'temple', 'religious', 'century', 'religion', 'text', 'ha', 'saint'] [ 10 - 0.68667 - 0.24509]: ['american', 'state', 'war', 'york', 'washington', 'united', 'virginia', 'john', 'fort', 'general'] ,→ [ 11 - 0.8 - 0.29384]: ['king', 'henry', 'england', 'john', 'royal', 'edward', 'william', 'english', 'son', 'scotland'] [ 12 - 0.68667 - 0.31251]: ['match', 'championship', 'event', 'world', 'team', 'won', 'title', 'wrestling', 'champion', 'final'] ,→ [ 13 - 0.75 - 0.31763]: ['island', 'ship', 'french', 'british', 'sea', 'navy', 'captain', 'port', 'fleet', 'coast'] [ 14 - 0.95 - 0.31526]: ['chinese', 'china', 'japanese', 'japan', 'vietnam', 'singapore', 'kong', 'philippine', 'government', 'vietnamese'] ,→ [ 15 - 0.85625 - 0.17395]: ['food', 'ice', 'harry', 'restaurant', 'ha', 'product', 'wine', 'meat', 'king', 'potter'] [ 16 - 0.8 - 0.36086]: ['state', 'president', 'election', 'republican', 'campaign', 'vote', 'senate', 'governor', 'house', 'party'] ,→ [ 17 - 0.86667 - 0.25547]: ['route', 'road', 'highway', 'state', 'county', 'north', 'ny', 'east', 'street', 'south'] [ 18 - 0.72 - 0.26666]: ['ship', 'gun', 'fleet', 'mm', 'inch', 'war', 'german', 'class', 'navy', 'ton'] [ 19 - 0.95 - 0.35007]: ['air', 'aircraft', 'flight', 'force', 'no.', 'squadron', 'fighter', 'pilot', 'operation', 'wing'] [ 20 - 0.77 - 0.24508]: ['race', 'stage', 'team', 'lap', 'car', 'point', 'driver', 'lead', 'won', 'place'] [ 21 - 0.86667 - 0.25987]: ['san', 'spanish', 'la', 'california', 'texas', 'mexico', 'state', 'el', 'american', 'francisco'] [ 22 - 0.475 - 0.39568]: ['album', 'song', 'music', 'track', 'released', 'record', 'single', 'release', 'chart', 'number'] [ 23 - 0.62292 - 0.27391]: ['century', 'castle', 'wall', 'building', 'built', 'church', 'stone', 'house', 'site', 'ha'] [ 24 - 0.73958 - 0.22785]: ['element', 'nuclear', 'ha', 'energy', 'metal', 'number', 'form', 'gas', 'group', 'chemical'] [ 25 - 0.55667 - 0.42035]: ['club', 'match', 'season', 'team', 'league', 'cup', 'goal', 'final', 'scored', 'player'] [ 26 - 0.9 - 0.44177]: ['force', 'army', 'division', 'battle', 'battalion', 'attack', 'infantry', 'troop', 'brigade', 'regiment'] ,→ [ 27 - 0.75333 - 0.23014]: ['british', 'london', 'australian', 'australia', 'war', 'wale', 'royal', 'victoria', 'world', 'britain'] ,→ [ 28 - 0.85625 - 0.22846]: ['black', 'white', 'horse', 'red', 'flag', 'dog', 'blue', 'breed', 'green', 'ha'] [ 29 - 0.55667 - 0.31724]: ['game', 'team', 'season', 'yard', 'point', 'player', 'play', 'coach', 'goal', 'football'] [ 30 - 0.70833 - 0.41145]: ['band', 'song', 'rock', 'album', 'guitar', 'tour', 'music', 'record', 'group', 'recording'] [ 31 - 0.52125 - 0.24719]: ['episode', 'series', 'season', 'character', 'ha', 'scene', 'television', 'viewer', 
'michael', 'rating'] ,→ [ 32 - 0.78958 - 0.24362]: ['ha', 'language', 'word', 'theory', 'social', 'world', 'term', 'human', 'form', 'idea'] [ 33 - 0.81667 - 0.34101]: ['court', 'law', 'state', 'case', 'act', 'legal', 'justice', 'judge', 'decision', 'united'] [ 34 - 0.75625 - 0.19829]: ['specie', 'animal', 'ha', 'female', 'male', 'shark', 'large', 'long', 'population', 'water'] [ 35 - 0.9 - 0.31353]: ['book', 'work', 'published', 'story', 'art', 'writing', 'painting', 'writer', 'poem', 'magazine'] [ 36 - 0.71667 - 0.25432]: ['building', 'park', 'city', 'street', 'house', 'museum', 'foot', 'room', 'hotel', 'center'] [ 37 - 0.95 - 0.34335]: ['station', 'line', 'train', 'bridge', 'railway', 'service', 'passenger', 'construction', 'built', 'tunnel'] ,→ [ 38 - 0.85 - 0.30801]: ['school', 'university', 'student', 'college', 'program', 'member', 'education', 'national', 'research', 'science'] ,→ [ 39 - 0.61667 - 0.29633]: ['government', 'party', 'political', 'minister', 'member', 'national', 'country', 'leader', 'state', 'power'] ,→ [ 40 - 0.64333 - 0.28726]: ['game', 'season', 'league', 'run', 'baseball', 'hit', 'home', 'team', 'series', 'major'] [ 41 - 0.60125 - 0.24256]: ['character', 'series', 'story', 'man', 'bond', 'comic', 'ha', 'set', 'star', 'effect'] [ 42 - 0.775 - 0.33565]: ['music', 'work', 'opera', 'musical', 'performance', 'play', 'composer', 'theatre', 'orchestra', 'piece'] ,→ [ 43 - 0.9 - 0.30473]: ['company', 'million', 'business', 'market', 'bank', 'cost', 'sale', 'price', 'country', 'industry'] [ 44 - 0.57125 - 0.23648]: ['episode', 'series', 'television', 'simpson', 'homer', 'season', 'ha', 'character', 'network', 'bart'] ,→ [ 45 - 0.75625 - 0.2758]: ['river', 'water', 'area', 'lake', 'mountain', 'park', 'creek', 'ha', 'mile', 'valley'] [ 46 - 0.33458 - 0.24179]: ['game', 'player', 'character', 'released', 'series', 'version', 'video', 'final', 'release', 'ha'] ,→ [ 47 - 0.75625 - 0.27016]: ['cell', 'disease', 'ha', 'protein', 'treatment', 'risk', 'effect', 'blood', 'people', 'case'] [ 48 - 0.62292 - 0.19694]: ['city', 'ha', 'town', 'area', 'population', 'local', 'school', 'india', 'century', 'district'] [ 49 - 0.59167 - 0.32599]: ['song', 'video', 'number', 'single', 'chart', 'music', 'week', 'performance', 'madonna', 'performed'] ,→ Online LDA: NPMI=0.282, TU=0.776 [ 0 - 1 - 0.34845]: ['chinese', 'japanese', 'china', 'japan', 'singapore', 'kong', 'hong', 'korean', 'malaysia', 'emperor'] [ 1 - 0.65667 - 0.36749]: ['season', 'club', 'game', 'team', 'football', 'league', 'goal', 'yard', 'cup', 'match'] [ 2 - 0.86667 - 0.3889]: ['music', 'work', 'opera', 'musical', 'performance', 'composer', 'orchestra', 'theatre', 'concert', 'piano'] ,→ [ 3 - 0.73333 - 0.43867]: ['force', 'division', 'army', 'battalion', 'battle', 'war', 'brigade', 'attack', 'infantry', 'regiment'] ,→ [ 4 - 0.83056 - 0.26379]: ['film', 'role', 'production', 'award', 'movie', 'actor', 'best', 'director', 'released', 'ha'] [ 5 - 0.91667 - 0.34322]: ['german', 'soviet', 'war', 'germany', 'russian', 'polish', 's', 'hitler', 'jew', 'nazi'] [ 6 - 0.88333 - 0.23038]: ['art', 'painting', 'work', 'oxford', 'artist', 'museum', 'cambridge', 'blue', 'london', 'van'] [ 7 - 0.80333 - 0.23849]: ['australia', 'match', 'australian', 'test', 'run', 'england', 'wicket', 'cricket', 'team', 'inning'] ,→ [ 8 - 1 - 0.30542]: ['company', 'million', 'business', 'bank', 'market', 'sale', 'sold', 'food', 'product', 'price'] [ 9 - 0.41556 - 0.24511]: ['series', 'episode', 'character', 'scene', 'star', 'doctor', 'ha', 'television', 'set', 
'season'] 6367 [ 10 - 0.82 - 0.24655]: ['race', 'second', 'lap', 'team', 'car', 'stage', 'driver', 'point', 'lead', 'place'] [ 11 - 0.51556 - 0.23493]: ['episode', 'season', 'series', 'television', 'character', 'ha', 'rating', 'homer', 'simpson', 'scene'] ,→ [ 12 - 0.50556 - 0.20267]: ['country', 'world', 'state', 'government', 'ha', 'national', 'international', 'united', 'woman', 'people'] ,→ [ 13 - 0.45389 - 0.24179]: ['game', 'player', 'character', 'released', 'series', 'version', 'video', 'ha', 'release', 'final'] ,→ [ 14 - 0.9 - 0.27909]: ['la', 'el', 'latin', 'puerto', 'mexico', 'american', 'spanish', 'del', 'brazil', 'argentina'] [ 15 - 0.68889 - 0.21473]: ['water', 'sea', 'shark', 'fish', 'ha', 'ft', 'island', 'area', 'whale', 'specie'] [ 16 - 0.74 - 0.27888]: ['man', 'comic', 'story', 'issue', 'book', 'magazine', 'character', 'spider', 'series', 'harry'] [ 17 - 0.52333 - 0.31687]: ['game', 'season', 'team', 'league', 'player', 'run', 'point', 'career', 'second', 'played'] [ 18 - 0.68889 - 0.29798]: ['book', 'work', 'published', 'novel', 'ha', 'writing', 'wrote', 'life', 'story', 'poem'] [ 19 - 0.83056 - 0.28515]: ['cell', 'disease', 'ha', 'virus', 'protein', 'cause', 'treatment', 'study', 'used', 'symptom'] [ 20 - 0.73889 - 0.29223]: ['church', 'god', 'christian', 'century', 'king', 'bishop', 'religious', 'catholic', 'ha', 'death'] ,→ [ 21 - 0.95 - 0.3451]: ['station', 'line', 'service', 'train', 'railway', 'bridge', 'construction', 'passenger', 'opened', 'built'] ,→ [ 22 - 0.85 - 0.24117]: ['island', 'spanish', 'san', 'french', 'colony', 'dutch', 'bay', 'spain', 'francisco', 'colonial'] [ 23 - 0.86667 - 0.34357]: ['ship', 'gun', 'fleet', 'navy', 'war', 'inch', 'mm', 'class', 'naval', 'battleship'] [ 24 - 0.85 - 0.24488]: ['british', 'expedition', 'ship', 'royal', 'britain', 'captain', 'sir', 'london', 'ice', 'party'] [ 25 - 0.75833 - 0.45318]: ['band', 'album', 'song', 'rock', 'record', 'music', 'guitar', 'released', 'recording', 'tour'] [ 26 - 0.78056 - 0.26289]: ['used', 'energy', 'nuclear', 'metal', 'gas', 'element', 'water', 'ha', 'chemical', 'carbon'] [ 27 - 0.61556 - 0.21828]: ['character', 'ha', 'storyline', 'series', 'season', 'relationship', 'tell', 'said', 'paul', 'dr.'] ,→ [ 28 - 0.73889 - 0.20052]: ['animal', 'specie', 'fossil', 'known', 'bone', 'specimen', 'like', 'ha', 'genus', 'skull'] [ 29 - 0.95 - 0.22467]: ['design', 'coin', 'model', 'version', 'dollar', 'structure', 'computer', 'window', 'mint', 'user'] [ 30 - 1 - 0.14881]: ['manchester', 'bach', 'leigh', 'liverpool', 'wheeler', 'cantata', 'movement', 'naruto', 'christmas', 'shaw'] ,→ [ 31 - 0.93333 - 0.34847]: ['air', 'aircraft', 'flight', 'squadron', 'no.', 'force', 'pilot', 'wing', 'fighter', 'mission'] [ 32 - 0.68056 - 0.2899]: ['used', 'number', 'use', 'ha', 'example', 'using', 'set', 'section', 'different', 'case'] [ 33 - 0.73889 - 0.26738]: ['building', 'century', 'house', 'castle', 'built', 'church', 'wall', 'ha', 'tower', 'st'] [ 34 - 0.75 - 0.30962]: ['said', 'police', 'case', 'day', 'people', 'court', 'trial', 'report', 'right', 'murder'] [ 35 - 0.72222 - 0.23387]: ['school', 'university', 'student', 'college', 'state', 'program', 'national', 'center', 'ha', 'city'] ,→ [ 36 - 0.80833 - 0.21909]: ['horse', 'dog', 'breed', 'animal', 'parson', 'used', 'century', 'wolf', 'pony', 'sheep'] [ 37 - 0.75 - 0.34074]: ['state', 'party', 'court', 'election', 'law', 'government', 'president', 'act', 'committee', 'vote'] [ 38 - 0.68333 - 0.24268]: ['american', 'state', 'war', 'york', 'washington', 'united', 
'virginia', 'john', 'white', 'fort'] [ 39 - 0.73889 - 0.17892]: ['specie', 'ha', 'bird', 'male', 'female', 'white', 'tree', 'brown', 'population', 'genus'] [ 40 - 0.65833 - 0.38782]: ['song', 'album', 'music', 'single', 'number', 'chart', 'video', 'track', 'released', 'week'] [ 41 - 0.7 - 0.27283]: ['king', 'empire', 'battle', 'army', 'henry', 'son', 'war', 'roman', 'french', 'greek'] [ 42 - 0.75556 - 0.24232]: ['river', 'area', 'city', 'park', 'ha', 'town', 'creek', 'mile', 'south', 'county'] [ 43 - 0.86667 - 0.25378]: ['route', 'highway', 'road', 'u', 'state', 'ny', 'north', 'county', 'street', 'east'] [ 44 - 0.63333 - 0.25196]: ['government', 'military', 'force', 'war', 'croatian', 'vietnam', 'croatia', 'vietnamese', 'army', 'state'] ,→ [ 45 - 0.85556 - 0.27502]: ['star', 'planet', 'earth', 'sun', 'space', 'mass', 'ha', 'orbit', 'light', 'moon'] [ 46 - 1 - 0.26647]: ['al', 'india', 'temple', 'indian', 'arab', 'muslim', 'tamil', 'ibn', 'egyptian', 'israeli'] [ 47 - 0.80333 - 0.32352]: ['match', 'championship', 'team', 'event', 'world', 'won', 'wrestling', 'title', 'tournament', 'champion'] ,→ [ 48 - 0.95 - 0.32639]: ['storm', 'tropical', 'hurricane', 'wind', 'km', 'cyclone', 'damage', 'mph', 'day', 'depression'] [ 49 - 0.9 - 0.34157]: ['child', 'family', 'woman', 'life', 'father', 'mother', 'wife', 'friend', 'home', 'daughter'] ProdLDA: NPMI=0.4, TU=0.624 [0-0.85-0.43559]: ['legislature', 'gubernatorial', 'nomination', 'republican', 'statewide', 'governor', 'democrat', 'candidacy', 'senate', 'legislative'] ,→ [1-0.48333-0.35108]: ['game', 'player', 'metacritic', 'sequel', 'ign', 'gameplay', 'character', 'film', 'visuals', 'grossing'] [2-0.95-0.46624]: ['glacial', 'basalt', 'volcanic', 'glaciation', 'temperature', 'lava', 'pyroclastic', 'magma', 'sedimentary', 'sediment'] ,→ [3-0.75-0.45842]: ['uefa', 'cup', 'scored', 'midfielder', 'goalkeeper', 'victory', 'equaliser', 'wembley', 'fa', 'goalless'] [4-0.58667-0.25822]: ['specie', 'secretion', 'tissue', 'genus', 'vertebrate', 'taxonomy', 'phylogenetic', 'gland', 'symptom', 'habitat'] ,→ [5-0.43333-0.50367]: ['terminus', 'intersects', 'highway', 'intersection', 'interchange', 'concurrency', 'northeast', 'roadway', 'renumbering', 'junction'] ,→ [6-0.81667-0.45658]: ['touchdown', 'bcs', 'overtime', 'season', 'fumble', 'yard', 'playoff', 'fumbled', 'halftime', 'defensive'] ,→ [7-0.78333-0.47278]: ['aircraft', 'squadron', 'reconnaissance', 'sortie', 'raaf', 'bomber', 'avionics', 'operational', 'airfield', 'airframe'] ,→ [8-0.43333-0.51711]: ['highway', 'intersects', 'intersection', 'interchange', 'terminus', 'renumbering', 'concurrency', 'northeast', 'roadway', 'realigned'] ,→ music TV [9-0.40667-0.47917]: ['chart', 'peaked', 'billboard', 'mtv', 'debuted', 'song', 'video', 'album', 'riaa', 'cinquemani'] [10-0.48333-0.40831]: ['aircraft', 'squadron', 'sortie', 'mm', 'reconnaissance', 'aft', 'torpedo', 'knot', 'destroyer', 'armament'] ,→ [11-0.37-0.15898]: ['taxonomy', 'intersects', 'specie', 'whitish', 'phylogenetic', 'iucn', 'highway', 'genus', 'underpart', 'habitat'] ,→ [12-0.7-0.17913]: ['k˜A¶ppen', 'census', 'demography', 'campus', 'population', 'hectare', 'km2', 'constituency', 'enrollment', 'borough'] ,→ [13-0.52333-0.36319]: ['album', 'music', 'studio', 'lyric', 'allmusic', 'recording', 'song', 'musical', 'filmfare', 'bassist'] [14-0.43333-0.4715]: ['km', 'mph', 'tropical', 'westward', 'rainfall', 'flooding', 'convection', 'landfall', 'extratropical', 'storm'] ,→ [15-0.75-0.38372]: ['reign', 'ecclesiastical', 'archbishop', 'vassal', 
'papacy', 'legate', 'ruler', 'papal', 'earldom', 'chronicler'] ,→ [16-0.51667-0.40183]: ['artillery', 'casualty', 'destroyer', 'battalion', 'squadron', 'reinforcement', 'troop', 'regiment', 'guadalcanal', 'convoy'] ,→ [17-0.93333-0.29541]: ['doctrine', 'parliament', 'hitler', 'socialism', 'philosopher', 'constitutional', 'theologian', 'critique', 'bucer', 'marxism'] ,→ [18-0.71667-0.49424]: ['championship', 'rematch', 'pinfall', 'shawn', 'disqualification', 'wwe', 'smackdown', 'backstage', 'referee', 'match'] ,→ [19-0.73667-0.28076]: ['temperature', 'diameter', 'density', 'oxidation', 'latitude', 'acidic', 'specie', 'dioxide', 'molecular', 'carbonate'] ,→ [20-0.6-0.33918]: ['championship', 'match', 'defeated', 'rematch', 'randy', 'referee', 'backstage', 'storyline', 'ign', 'summerslam'] ,→ [21-0.53333-0.42777]: ['game', 'player', 'sequel', 'metacritic', 'ign', 'gameplay', 'visuals', 'character', 'protagonist', 'gamespot'] ,→ [22-0.7-0.42088]: ['inning', 'batting', 'scored', 'unbeaten', 'batted', 'debut', 'scoring', 'wicket', 'bowled', 'opener'] [23-0.48333-0.4502]: ['mph', 'km', 'landfall', 'tropical', 'storm', 'hurricane', 'rainfall', 'flooding', 'extratropical', 'saffir'] ,→ [24-0.53333-0.38377]: ['episode', 'funny', 'decides', 'actor', 'nielsen', 'aired', 'filming', 'comedy', 'discovers', 'asks'] 6368 [25-0.58667-0.40961]: ['glee', 'chart', 'futterman', 'billboard', 'peaked', 'debuted', 'slezak', 'mtv', 'lyrically', 'song'] [26-0.7-0.11981]: ['demography', 'k˜A¶ppen', 'railway', 'stadium', 'infrastructure', 'census', 'constituency', 'campus', 'km2', 'stadion'] ,→ [27-0.58333-0.38244]: ['episode', 'actor', 'filming', 'script', 'comedy', 'funny', 'discovers', 'producer', 'sepinwall', 'film'] ,→ [28-0.68333-0.34977]: ['legislature', 'constitutional', 'governorship', 'appoint', 'election', 'legislative', 'treaty', 'diplomatic', 'elected', 'democrat'] ,→ [29-0.46667-0.41143]: ['season', 'playoff', 'league', 'nhl', 'game', 'rookie', 'touchdown', 'player', 'coach', 'goaltender'] [30-0.48333-0.49504]: ['mph', 'km', 'tropical', 'westward', 'landfall', 'flooding', 'northwestward', 'rainfall', 'northeastward', 'extratropical'] ,→ [31-0.65-0.44581]: ['amidships', 'conning', 'frigate', 'fleet', 'broadside', 'waterline', 'casemates', 'torpedo', 'mm', 'knot'] ,→ [32-0.40667-0.47498]: ['chart', 'peaked', 'billboard', 'album', 'video', 'debuted', 'song', 'riaa', 'mtv', 'phonographic'] [33-0.58333-0.52608]: ['interchange', 'terminus', 'intersects', 'highway', 'intersection', 'roadway', 'eastbound', 'westbound', 'freeway', 'route'] ,→ [34-0.48333-0.46306]: ['brigade', 'casualty', 'troop', 'infantry', 'artillery', 'flank', 'battalion', 'commanded', 'division', 'regiment'] ,→ [35-0.63333-0.35619]: ['episode', 'actor', 'filming', 'realizes', 'nielsen', 'discovers', 'asks', 'mulder', 'scully', 'viewer'] ,→ [36-0.52333-0.43397]: ['album', 'recording', 'allmusic', 'song', 'music', 'lyric', 'studio', 'musical', 'vocal', 'guitarist'] [37-0.75-0.43839]: ['bishopric', 'archbishop', 'ecclesiastical', 'clergy', 'consecrated', 'chronicler', 'papacy', 'lordship', 'archbishopric', 'papal'] ,→ [38-0.8-0.50362]: ['batting', 'inning', 'batted', 'hitter', 'batsman', 'fielder', 'nl', 'outfielder', 'unbeaten', 'rbi'] [39-0.6-0.44881]: ['mm', 'knot', 'torpedo', 'aft', 'amidships', 'boiler', 'conning', 'waterline', 'cruiser', 'horsepower'] [40-0.53667-0.29499]: ['specie', 'habitat', 'genus', 'iucn', 'taxonomy', 'vegetation', 'morphology', 'mammal', 'underpart', 'plumage'] ,→ [41-0.48333-0.41737]: ['infantry', 'casualty', 
'troop', 'battalion', 'artillery', 'reinforcement', 'brigade', 'flank', 'division', 'army'] ,→ [42-0.51667-0.4231]: ['season', 'nhl', 'playoff', 'game', 'rookie', 'shutout', 'player', 'league', 'roster', 'goaltender'] [43-0.83333-0.31359]: ['treaty', 'mamluk', 'politburo', 'diplomatic', 'sovereignty', 'constitutional', 'militarily', 'abbasid', 'emir', 'gdp'] ,→ [44-0.71667-0.30233]: ['finite', 'soluble', 'integer', 'infinity', 'molecule', 'protein', 'infinite', 'molecular', 'computational', 'oxidation'] ,→ [45-0.56667-0.37782]: ['midfielder', 'cup', 'match', 'defeat', 'midfield', 'uefa', 'fa', 'defeated', 'championship', 'debut'] [46-0.71667-0.47156]: ['molecule', 'membrane', 'protein', 'eukaryote', 'oxidation', 'molecular', 'soluble', 'metabolism', 'metabolic', 'microscopy'] ,→ [47-0.85333-0.36357]: ['continuo', 'cantata', 'soundtrack', 'chorale', 'bwv', 'recitative', 'album', 'guitar', 'bach', 'music'] ,→ [48-0.58667-0.39756]: ['taxonomy', 'specie', 'genus', 'morphology', 'morphological', 'phylogenetic', 'clade', 'taxonomic', 'phylogeny', 'iucn'] ,→ [49-0.95-0.47451]: ['prognosis', 'diagnostic', 'behavioral', 'clinical', 'symptom', 'diagnosis', 'cognitive', 'abnormality', 'therapy', 'intravenous'] ,→ NTM-R: NPMI=0.215, TU=0.912 [0-0.85-0.13957]: ['m', 'enterprise', 'commander', 'bungie', 'generation', 'election', 'candidate', 'hd', 'roddenberry', 'society'] ,→ [1-0.95-0.18795]: ['liturgical', 'altarpiece', 'liturgy', 'fugue', 'cetacean', 'picts', 'anatomical', 'pictish', 'riata', 'grammatical'] ,→ [2-0.95-0.31937]: ['colfer', 'futterman', 'monteith', 'herodotus', 'slezak', 'karofsky', 'cheerleading', 'santana', 'xerxes', 'plutarch'] ,→ [3-0.7-0.15281]: ['cleveland', 'maryland', 'kentucky', 'iowa', 'harrison', 'mar', 'ford', 'pa', 'olivia', 'tech'] [4-0.9-0.15532]: ['sr', 'pembroke', 'mersey', 'plough', 'whitby', 'gateshead', 'humber', 'altrincham', 'peterborough', 'lichtenstein'] ,→ [5-0.73333-0.076084]: ['md', 'indonesian', 'svalbard', 'kepler', 'runway', 'm', 'jenna', 'ice', 'antarctic', 'wider˜A¸e'] [6-0.95-0.16751]: ['resonator', 'impedance', 'goebbels', 'bormann', 'jAzef', 'maunsell', 'heydrich', 'duAan', 'fAhrer', 'waveguide'] ,→ [7-1-0.26747]: ['sired', 'ranulf', 'anjou', 'blois', 'thessalonica', 'andronikos', 'rabi', 'nicaea', 'angevin', 'bohemond'] [8-0.66667-0.15309]: ['nelson', 'mexican', 'iowa', 'swift', 'lewis', 'jackson', 'moore', 'mar', 'texas', 'dog'] [9-0.8-0.23519]: ['leng', 'tgs', 'inglis', 'donaghy', 'beatle', 'overdubs', 'fey', 'snl', 'futterman', 'clapton'] [10-1-0.40066]: ['refuel', 'floatplane', 'grumman', 'refueling', 'sonar', 'transatlantic', 'rendezvoused', 'tf', 'leyte', 'tinian'] ,→ [11-0.71667-0.058102]: ['lichtenstein', 'etty', 'pa', 'md', 'nude', 'aftershock', 'jovanovi¨A\x87', 'eruptive', 'dreaming', 'weyden'] ,→ [12-0.95-0.18683]: ['sauk', 'brig.', 'galena', 'seminole', 'frankfort', 'kentuckian', 'hoosier', 'holliday', 'punted', 'maj.'] [13-0.95-0.15503]: ['wider˜A¸e', 'dupont', 'brest', 'tripoli', 'madras', 'guadalcanal', 'cherbourg', 'yorktown', 'hannibal', 'bombay'] ,→ [14-0.95-0.26828]: ['vijayanagara', 'ghat', 'batik', 'madurai', 'coimbatore', 'varanasi', 'cetacean', 'thanjavur', 'uttar', 'marathi'] ,→ [15-0.8-0.21604]: ['johnson', 'van', 'jackson', 'taylor', 'smith', 'dutch', 'martin', 'nelson', 'adam', 'lewis'] [16-1-0.18763]: ['canuck', 'nhl', 'tampa', 'mlb', 'canadiens', 'rbi', 'cantata', 'bermuda', 'sox', 'athletics'] [17-0.88333-0.079676]: ['banksia', 'hd', 'thrower', 'pam', 'halo', 'bowler', 'scoring', 'spike', 'mar', 'quadruple'] 
[18-0.95-0.14555]: ['reelected', 'accredited', 'reelection', 'senatorial', 'sorority', 'unionist', 'phi', 'bsa', 'appointee', 'briarcliff'] ,→ [19-0.88333-0.12676]: ['wheelchair', 'iowa', 'wsdot', 'ssh', 'plutonium', 'psh', 'paralympics', 'sr', 'freestyle', 'ub'] [20-0.83333-0.0696]: ['ny', 'md', 'jna', 'henriksen', 'veronica', 'labial', 'torv', 'zng', 'm1', 'lindelof'] [21-0.95-0.13199]: ['squad', 'jordan', 'hamilton', 'shark', 'johnson', 'teammate', 'kansa', 'rochester', 'ranger', 'hockey'] [22-0.9-0.17853]: ['theater', 'doctor', 'texas', 'orchestra', 'san', 'grand', 'theatre', 'disney', 'arthur', 'bar'] [23-1-0.18027]: ['mintage', 'mycena', 'cheilocystidia', 'cystidia', 'breen', 'spongebob', 'numismatic', 'capon', 'obverse', 'ellipsoid'] ,→ [24-1-0.71697]: ['duchovny', 'vitaris', 'spotnitz', 'mulder', 'gillian', 'paranormal', 'shearman', 'pileggi', 'scully', 'handlen'] ,→ [25-0.95-0.30171]: ['tardis', 'eastenders', 'gillan', 'torchwood', 'catesby', 'walford', 'luftwaffe', 'moffat', 'daleks', 'dalek'] ,→ [26-1-0.36124]: ['martyn', 'swartzwelder', 'mirkin', 'wiggum', 'kirkland', 'sauropod', 'smithers', 'jacobson', 'milhouse', 'theropod'] ,→ [27-1-0.11983]: ['cookery', 'hindenburg', 'povenmire', 'kratos', 'blamey', 'plankton', 'hillenburg', 'alamein', 'tulagi', 'rearguard'] ,→ [28-0.93333-0.47153]: ['stravinsky', 'clarinet', 'berlioz', 'debussy', 'oratorio', 'op.', 'liszt', 'op˜A©ra', 'elgar', 'orchestration'] ,→ [29-0.95-0.27002]: ['phylum', 'fumble', 'yardage', 'bcs', 'scrimmage', 'bivalve', 'sportswriter', 'fumbled', 'fiba', 'punted'] [30-1-0.20817]: ['rican', 'afanasieff', 'fatale', 'dupri', 'myrmecia', 'femme', 'wallonia', 'musicnotes.com', 'erotica', 'intercut'] ,→ [31-0.95-0.12071]: ['maunsell', 'navigable', 'naktong', 'sprinter', 'hauling', 'doncaster', 'bridgwater', 'rijeka', 'lswr', 'stretford'] ,→ [32-1-0.2357]: ['constitutionality', 'habeas', 'scalia', 'appellate', 'unreasonable', 'brownlee', 'harlan', 'sotomayor', 'newt', 'brahman'] ,→ [33-0.95-0.20657]: ['csx', 'stub', 'resurfaced', 'legislated', 'widen', 'rejoining', 'widens', 'pulaski', 'drawbridge', 'leng'] ,→ 6369 [34-0.85-0.13878]: ['harrison', 'jersey', 'summit', 'flag', 'disney', 'doggett', 'beatles', 'township', 'amusement', 'roller'] [35-0.9-0.15949]: ['dia>x87m', 'uematsu', 'petACn', 'naruto', 'nobuo', 'nhu', 'itza', 'sasuke', 'kenshin', 'texians'] [36-0.93333-0.12961]: ['pulp', 'sf', 'delaware', 'wasp', 'reprint', 'ant', 'cent', 'herg˜A©', 'tintin', 'pa'] [37-0.85-0.32081]: ['mi', 'oricon', 'rpgfan', 'nobuo', 'uematsu', 'enix', 'dengeki', 'maeda', 'hamauzu', 'ovum'] [38-1-0.13418]: ['astronomical', 'michigan', 'roof', 'coaster', 'window', 'saginaw', 'lansing', 'bl', 'usher', 'stadium'] [39-0.83333-0.17217]: ['highness', 'medici', 'dodo', 'palatine', 'weyden', 'cosimo', 'mascarene', 'huguenot', 'op˜A©ra', 'catesby'] ,→ [40-1-0.22306]: ['edda', 'fragmentary', 'thanhouser', 'loki', 'odin', 'cameraman', 'eline', 'heming', 'norse', 'ua'] [41-0.85-0.23907]: ['tgs', 'tornado', 'poehler', 'donaghy', 'pawnee', 'jenna', 'offerman', 'schur', 'tate', 'severe'] [42-0.95-0.30993]: ['eruptive', 'riparian', 'pyroclastic', 'glaciation', 'volcanism', 'tectonic', 'headwater', 'andes', 'drier', 'tropic'] ,→ [43-0.93333-0.19441]: ['ctw', 'muppets', 'filmography', 'muppet', 'repertory', 'cooney', 'heterosexual', 'op˜A©ra', 'goldwyn', 'professorship'] ,→ [44-0.95-0.44768]: ['gamesradar', 'unlockable', 'gametrailers', 'novelization', 'dengeki', 'famitsu', 'rpgs', 'ps3', 'cg', 'overworld'] ,→ [45-0.9-0.14062]: ['stanley', 'tiger', 
'harvard', 'hudson', 'baltimore', 'maryland', 'kg', 'morrison', 'nba', 'lb'] [46-0.85-0.15527]: ['angelou', 'eurovision', 'zng', 'sao', 'milo˚A¡evi¨A\x87', 'svalbard', 'tu¨A\x91man', 'knin', 'bahraini', 'jna'] ,→ [47-0.95-0.17175]: ['atrium', 'pv', 'stucco', 'cornice', 'emu', 'pilaster', 'pediment', 'neoclassical', 'briarcliff', 'biomass'] ,→ [48-0.85-0.13668]: ['flag', 'vietnam', 'enterprise', 'singapore', 'slave', 'korean', 'philippine', 'stewart', 'zero', 'nba'] [49-1-0.40665]: ['harvick', 'hamlin', 'biffle', 'rAikkAnen', 'sauber', 'kenseth', 'trulli', 'heidfeld', 'verstappen', 'fisichella'] ,→ W-LDA: NPMI=0.464, TU=0.998 [0-0.95-0.51584]: ['jma', 'outage', 'gust', 'typhoon', 'landfall', 'floodwaters', 'jtwc', 'saffir', 'rainbands', 'overflowed'] [1-1-0.51968]: ['byzantine', 'caliphate', 'caliph', 'abbasid', 'ibn', 'byzantium', 'constantinople', 'nikephoros', 'emir', 'alexios'] ,→ [2-0.95-0.60175]: ['dissipating', 'tropical', 'dissipated', 'extratropical', 'cyclone', 'shear', 'northwestward', 'southwestward', 'saffir', 'convection'] ,→ [3-1-0.54757]: ['purana', 'vishnu', 'shiva', 'sanskrit', 'worshipped', 'hindu', 'deity', 'devotee', 'mahabharata', 'temple'] [4-1-0.49348]: ['beatle', 'beatles', 'leng', 'clapton', 'lennon', 'harrison', 'mccartney', 'overdubs', 'ringo', 'spector'] [5-1-0.46882]: ['torpedoed', 'grt', 'ub', 'destroyer', 'flotilla', 'convoy', 'escorting', 'refit', 'kriegsmarine', 'narvik'] [6-1-0.42421]: ['campus', 'enrollment', 'undergraduate', 'alumnus', 'faculty', 'accredited', 'student', 'semester', 'graduate', 'tuition'] ,→ [7-1-0.39366]: ['politburo', 'stalin', 'soviet', 'sejm', 'lithuania', 'ussr', 'lithuanian', 'polish', 'ssr', 'gorbachev'] [8-1-0.42948]: ['protein', 'receptor', 'prognosis', 'symptom', 'intravenous', 'mrna', 'medication', 'diagnosis', 'abnormality', 'nucleotide'] ,→ [9-1-0.50012]: ['fuselage', 'avionics', 'airframe', 'boeing', 'airline', 'lbf', 'takeoff', 'cockpit', 'undercarriage', 'mach'] [10-1-0.45672]: ['raaf', 'jagdgeschwader', 'bf', 'messerschmitt', 'staffel', 'luftwaffe', 'oberleutnant', 'no.', 'usaaf', 'squadron'] ,→ [11-1-0.46824]: ['constitutionality', 'statute', 'appellate', 'unconstitutional', 'defendant', 'amendment', 'judicial', 'court', 'plaintiff', 'statutory'] ,→ [12-1-0.72662]: ['lap', 'sauber', 'ferrari', 'rAikkAnen', 'rosberg', 'heidfeld', 'barrichello', 'vettel', 'trulli', 'massa'] [13-1-0.45447]: ['ny', 'renumbering', 'realigned', 'routing', 'cr', 'hamlet', 'truncated', 'intersects', 'unsigned', 'intersecting'] ,→ [14-1-0.50035]: ['beyonc˜A©', 'madonna', 'rihanna', 'cinquemani', 'carey', 'musicnotes.com', 'mariah', 'idolator', 'gaga', 'britney'] ,→ [15-1-0.31089]: ['gatehouse', 'castle', 'chancel', 'anglesey', 'stonework', 'nave', 'moat', 'antiquarian', 'earthwork', 'bastion'] ,→ [16-1-0.48429]: ['freeway', 'interchange', 'md', 'undivided', 'concurrency', 'cloverleaf', 'northbound', 'southbound', 'sr', 'highway'] ,→ [17-1-0.41763]: ['electrification', 'railway', 'locomotive', 'tramway', 'electrified', 'freight', 'intercity', 'train', 'nsb', 'footbridge'] ,→ [18-1-0.29094]: ['shakira', 'minogue', 'sugababes', 'airplay', 'chart', 'oricon', 'amor', 'salsa', 'stefani', 'tejano'] [19-1-0.55855]: ['ihp', 'conning', 'amidships', 'casemates', 'barbette', 'waterline', 'ironclad', 'krupp', 'hotchkiss', 'battlecruisers'] ,→ [20-1-0.68992]: ['wwe', 'smackdown', 'pinfall', 'tna', 'ringside', 'wrestlemania', 'heavyweight', 'wrestling', 'summerslam', 'wrestled'] ,→ [21-1-0.35687]: ['plumage', 'underpart', 'viviparous', 'pectoral', 'iucn', 
'upperparts', 'nestling', 'passerine', 'copulation', 'gestation'] ,→ [22-1-0.61635]: ['hitter', 'mlb', 'baseman', 'rbi', 'nl', 'strikeout', 'outfielder', 'fastball', 'pitcher', 'slugging'] [23-1-0.49182]: ['nomura', 'manga', 'famitsu', 'anime', 'enix', 'sh˚A\x8dnen', 'fantasy', 'rpgfan', 'dengeki', 'nobuo'] [24-1-0.3997]: ['ebert', 'film', 'imax', 'afi', 'disney', 'grossing', 'spielberg', 'grossed', 'pixar', 'screenplay'] [25-1-0.62758]: ['multiplayer', 'platforming', 'nintendo', 'gamepro', 'gamerankings', 'eurogamer', 'gamecube', 'gamespot', 'gamespy', 'gameplay'] ,→ [26-1-0.47722]: ['parsec', 'orbit', 'orbiting', 'astronomer', 'kepler', 'luminosity', 'planetary', 'brightest', 'constellation', 'brightness'] ,→ [27-1-0.38927]: ['wicket', 'batsman', 'bowled', 'bowler', 'wisden', 'selector', 'equalised', 'cricketer', 'unbeaten', 'midfielder'] ,→ [28-1-0.22046]: ['puritan', 'congregation', 'settler', 'colony', 'rabbi', 'synagogue', 'massachusetts', 'colonist', 'virginia', 'hampshire'] ,→ [29-1-0.62088]: ['volcano', 'lava', 'magma', 'volcanic', 'eruption', 'pyroclastic', 'eruptive', 'caldera', 'volcanism', 'basalt'] ,→ [30-1-0.23967]: ['tardis', 'eastenders', 'sayid', 'rhimes', 'soap', 'walford', 'moffat', 'lindelof', 'realises', 'torchwood'] [31-1-0.49605]: ['finite', 'equation', 'theorem', 'impedance', 'algebraic', 'integer', 'mathematical', 'computation', 'multiplication', 'inverse'] ,→ [32-1-0.43899]: ['pilaster', 'pediment', 'portico', 'facade', 'cornice', 'facsade', 'architectural', 'architect', 'gable', 'marble'] ,→ [33-1-0.65639]: ['cystidia', 'spored', 'cheilocystidia', 'edibility', 'basidium', 'mycologist', 'hypha', 'hyaline', 'hymenium', 'spore'] ,→ [34-1-0.32705]: ['frigate', 'brig', 'musket', 'indiaman', 'privateer', 'ticonderoga', 'loyalist', 'cadiz', 'texians', 'rigging'] ,→ [35-1-0.51317]: ['marge', 'homer', 'bart', 'swartzwelder', 'wiggum', 'stewie', 'scully', 'groening', 'milhouse', 'simpson'] [36-1-0.58321]: ['krasinski', 'liz', 'halpert', 'jenna', 'rainn', 'tgs', 'dunder', 'pam', 'schrute', 'carell'] [37-1-0.46533]: ['halide', 'isotope', 'oxidation', 'oxide', 'aqueous', 'lanthanide', 'h2o', 'chloride', 'hydride', 'hydroxide'] ,→ [38-1-0.41953]: ['thrash', 'kerrang', 'bassist', 'frontman', 'band', 'guitarist', 'album', 'christgau', 'riff', 'nirvana'] [39-1-0.46206]: ['battalion', 'brigade', 'infantry', 'platoon', 'bridgehead', 'regiment', 'panzer', 'rok', 'pusan', 'counterattack'] ,→ [40-1-0.64531]: ['touchdown', 'fumble', 'quarterback', 'kickoff', 'punt', 'yardage', 'cornerback', 'linebacker', 'rushing', 'preseason'] ,→ [41-1-0.58504]: ['nhl', 'goaltender', 'defenceman', 'canuck', 'ahl', 'blackhawks', 'whl', 'hockey', 'defencemen', 'canadiens'] [42-1-0.44545]: ['inflorescence', 'banksia', 'pollinator', 'pollination', 'seedling', 'nectar', 'pollen', 'follicle', 'flowering', 'thiele'] ,→ [43-1-0.40939]: ['gubernatorial', 'republican', 'democrat', 'reelection', 'candidacy', 'senate', 'mintage', 'caucus', 'congressman', 'democratic'] ,→ [44-1-0.23055]: ['alamo', 'cyclotron', 'implosion', 'metallurgical', 'physicist', 'laboratory', 'physic', 'reactor', 'oppenheimer', 'testified'] ,→ 6370 [45-1-0.36894]: ['poem', 'angelou', 'poetry', 'prose', 'literary', 'poet', 'narrator', 'wollstonecraft', 'poetic', 'preface'] [46-1-0.40576]: ['northumbria', 'mercia', 'archbishop', 'papacy', 'earldom', 'bishopric', 'mercian', 'overlordship', 'papal', 'kingship'] ,→ [47-1-0.37166]: ['menu', 'gb', 'burger', 'apps', 'software', 'iphone', 'processor', 'user', 'apple', 'app'] [48-1-0.18914]: 
['dia>x87m', 'labour', 'ngA', 'mp', 'liberal', 'nhu', 'rhodesia', 'protester', 'alberta', 'saigon']
[49-1-0.48897]: ['cantata', 'recitative', 'concerto', 'bach', 'libretto', 'berlioz', 'soloist', 'chorale', 'oboe', 'symphony']
12.4 AGnews
Online LDA: NPMI=0.213
[ 0 - 0.68803 - 0.22231]: ['microsoft', 'software', 'window', 'security', 'version', 'new', 'ha', 'server', 'company', 'application']
[ 1 - 0.65048 - 0.19395]: ['season', 'los', 'angeles', 'player', 'holiday', 'new', 'team', 'sport', 'forward', 'wa']
[ 2 - 0.74088 - 0.13022]: ['year', 'ago', 'wa', 'ha', 'family', 'focus', 'com', 'saddam', 'month', 'british']
[ 3 - 0.73333 - 0.22068]: ['trade', 'tax', 'organization', 'world', 'fund', 'u', 'year', 'boeing', 'international', 'enron']
[ 4 - 0.83214 - 0.20167]: ['east', 'middle', 'country', 'new', 'king', 'world', 'saudi', 'approach', 'annual', 'era']
[ 5 - 0.83333 - 0.28552]: ['israeli', 'palestinian', 'drug', 'gaza', 'israel', 'minister', 'strip', 'west', 'bank', 'prime']
[ 6 - 0.85588 - 0.23562]: ['search', 'google', 'site', 'web', 'internet', 'public', 'engine', 'ha', 'yahoo', 'offering']
[ 7 - 0.64636 - 0.19748]: ['scientist', 'study', 'say', 'researcher', 'new', 'human', 'ha', 'ap', 'expert', 'science']
[ 8 - 0.72255 - 0.25]: ['court', 'federal', 'case', 'charge', 'judge', 'trial', 'wa', 'said', 'law', 'ha']
[ 9 - 1 - 0.19845]: ['japan', 'japanese', 'tokyo', 'texas', 'powerful', 'heavy', 'rain', 'indonesia', 'networking', 'typhoon']
[ 10 - 1 - 0.18393]: ['aid', 'mark', 'worker', 'italian', 'italy', 'wake', 'relief', 'forest', 'doubt', 'option']
[ 11 - 0.66667 - 0.2254]: ['iraq', 'hostage', 'said', 'iraqi', 'militant', 'french', 'group', 'release', 'islamic', 'wa']
[ 12 - 0.9 - 0.19566]: ['state', 'united', 'press', 'canadian', 'canada', 'cp', 'toronto', 'nation', 'ottawa', 'martin']
[ 13 - 0.66833 - 0.19107]: ['game', 'olympic', 'athens', 'point', 'coach', 'night', 'team', 'wa', 'football', 'gold']
[ 14 - 0.63755 - 0.25872]: ['billion', 'million', 'company', 'said', 'deal', 'bid', 'ha', 'group', 'buy', 'agreed']
[ 15 - 0.56969 - 0.19212]: ['company', 'executive', 'chief', 'said', 'new', 'york', 'amp', 'ha', 'financial', 'exchange']
[ 16 - 1 - 0.22169]: ['according', 'report', 'released', 'university', 'school', 'book', 'published', 'student', 'survey', 'newspaper']
[ 17 - 0.95 - 0.16021]: ['news', 'german', 'germany', 'nyse', 'nasdaq', 'gold', 'dutch', 'field', 'corporation', 'berlin']
[ 18 - 0.76548 - 0.13072]: ['gt', 'lt', 'http', 'reuters', 'york', 'new', 'post', 'm', 'font', 'sans']
[ 19 - 0.84048 - 0.16701]: ['house', 'white', 'new', 'national', 'ap', 'hong', 'kong', 'intelligence', 'republican', 'senate']
[ 20 - 0.80588 - 0.18459]: ['ha', 'moon', 'earth', 'scientist', 'planet', 'mile', 'mar', 'titan', 'nasa', 'image']
[ 21 - 0.73803 - 0.20273]: ['computer', 'world', 'pc', 'drive', 'personal', 'new', 'ibm', 'power', 'hard', 'ha']
[ 22 - 1 - 0.20013]: ['free', 'agent', 'pick', 'pair', 'single', 'centre', 'sweep', 'choice', 'crowd', 'carter']
[ 23 - 0.76303 - 0.18581]: ['music', 'online', 'digital', 'apple', 'store', 'ha', 'new', 'industry', 'player', 'ipod']
[ 24 - 0.59588 - 0.24328]: ['president', 'minister', 'bush', 'prime', 'john', 'said', 'government', 'war', 'iraq', 'ha']
[ 25 - 0.95 - 0.17342]: ['giant', 'oil', 'russian', 'gas', 'baseball', 'yukos', 'bond', 'major', 'moscow', 'auction']
[ 26 - 0.80667 - 0.24213]: ['space', 'nasa', 'flight', 'station', 'said', 'plane', 'launch', 'international', 'airport', 'commercial']
[ 27 - 0.80667 -
0.26711]: ['people', 'said', 'killed', 'attack', 'police', 'baghdad', 'city', 'force', 'iraqi', 'official'] [ 28 - 0.67255 - 0.22167]: ['quot', 'wa', 'said', 'thing', 'want', 'better', 'know', 'say', 'ha', 'need'] [ 29 - 0.95 - 0.23242]: ['england', 'champion', 'match', 'goal', 'stage', 'league', 'home', 'wednesday', 'trophy', 'captain'] [ 30 - 1 - 0.18207]: ['european', 'hurricane', 'union', 'florida', 'ivan', 'eu', 'france', 'coast', 'storm', 'island'] [ 31 - 0.80667 - 0.23308]: ['change', 'nuclear', 'iran', 'agency', 'said', 'program', 'global', 'weapon', 'nation', 'security'] ,→ [ 32 - 0.68803 - 0.23633]: ['service', 'phone', 'technology', 'mobile', 'wireless', 'company', 'new', 'internet', 'ha', 'chip'] ,→ [ 33 - 0.83922 - 0.13841]: ['ha', 'turning', 'heat', 'team', 'bar', 'managed', 'seattle', 'lewis', 'connecticut', 'allen'] [ 34 - 1 - 0.20403]: ['san', 'francisco', 'johnson', 'diego', 'stewart', 'hotel', 'testing', 'living', 'room', 'jose'] [ 35 - 0.79 - 0.25845]: ['job', 'cut', 'airline', 'said', 'plan', 'u', 'million', 'cost', 'air', 'bankruptcy'] [ 36 - 1 - 0.17114]: ['victim', 'taiwan', 'blow', 'philippine', 'suffered', 'steve', 'singapore', 'overnight', 'delivered', 'gate'] ,→ [ 37 - 0.74714 - 0.18515]: ['india', 'new', 'radio', 'pakistan', 'indian', 'satellite', 'minister', 'la', 'delhi', 'said'] [ 38 - 0.95 - 0.37642]: ['election', 'presidential', 'president', 'party', 'vote', 'campaign', 'candidate', 'political', 'opposition', 'russia'] ,→ [ 39 - 0.85667 - 0.21321]: ['china', 'south', 'north', 'korea', 'said', 'talk', 'chinese', 'beijing', 'africa', 'official'] [ 40 - 0.925 - 0.26416]: ['cup', 'world', 'open', 'round', 'final', 'championship', 'win', 'race', 'second', 'grand'] [ 41 - 0.71548 - 0.26602]: ['sunday', 'ap', 'game', 'touchdown', 'season', 'yard', 'quarterback', 'new', 'running', 'victory'] ,→ [ 42 - 0.81667 - 0.15514]: ['research', 'quote', 'profile', 'black', 'wa', 'property', 'williams', 'heavyweight', 'said', 'accepted'] ,→ [ 43 - 0.64881 - 0.22719]: ['price', 'oil', 'reuters', 'stock', 'new', 'u', 'york', 'rate', 'high', 'dollar'] [ 44 - 0.70714 - 0.25335]: ['series', 'red', 'new', 'sox', 'york', 'game', 'night', 'boston', 'yankee', 'run'] [ 45 - 0.83088 - 0.19549]: ['game', 'video', 'announcement', 'watch', 'paul', 'ha', 'nintendo', 'mass', 'lose', 'fact'] [ 46 - 0.765 - 0.26435]: ['sale', 'percent', 'profit', 'said', 'reported', 'quarter', 'share', 'year', 'earnings', 'reuters'] [ 47 - 0.71588 - 0.17818]: ['manager', 'club', 'ha', 'united', 'manchester', 'league', 'arsenal', 'old', 'wa', 'chelsea'] [ 48 - 0.76667 - 0.21224]: ['australia', 'test', 'leader', 'arafat', 'australian', 'yasser', 'wa', 'palestinian', 'day', 'said'] ,→ [ 49 - 0.74088 - 0.23108]: ['just', 'like', 'big', 'year', 'look', 'time', 'wa', 'good', 'little', 'ha'] uniqueness=0.802 LDA Collapsed Gibbs sampling: npmi=0.23902729002814144 [ 0 - 0.81667 - 0.32929]: ['palestinian', 'leader', 'israeli', 'gaza', 'west', 'israel', 'official', 'arafat', 'yasser', 'sunday'] ,→ [ 1 - 0.9 - 0.24206]: ['space', 'nasa', 'international', 'station', 'scientist', 'launch', 'earth', 'mission', 'moon', 'star'] ,→ [ 2 - 0.82778 - 0.19331]: ['chief', 'executive', 'company', 'bid', 'rival', 'oracle', 'board', 'ha', 'peoplesoft', 'offer'] [ 3 - 0.69167 - 0.23957]: ['sunday', 'game', 'season', 'touchdown', 'sport', 'yard', 'running', 'quarterback', 'network', 'left'] ,→ [ 4 - 0.79 - 0.23012]: ['dollar', 'reuters', 'rate', 'economic', 'growth', 'federal', 'economy', 'reserve', 'euro', 'tuesday'] ,→ [ 5 - 0.505 - 
0.24069]: ['china', 'news', 'japan', 'reuters', 'monday', 'thursday', 'wednesday', 'reported', 'tuesday', 'report'] ,→ [ 6 - 0.76944 - 0.16612]: ['game', 'industry', 'ha', 'player', 'video', 'sun', 'today', 'latest', 'sony', 'movie'] [ 7 - 0.69444 - 0.20597]: ['phone', 'ha', 'market', 'mobile', 'world', 'company', 'maker', 'electronics', 'device', 'cell'] [ 8 - 0.56944 - 0.22276]: ['ha', 'year', 'world', 'past', 'today', 'number', 'grand', 'month', 'time', 'half'] [ 9 - 1 - 0.2544]: ['drug', 'health', 'heart', 'food', 'study', 'risk', 'researcher', 'child', 'medical', 'died'] [ 10 - 0.83333 - 0.18319]: ['iraq', 'group', 'british', 'french', 'hostage', 'held', 'worker', 'militant', 'release', 'american'] ,→ 6371 [ 11 - 0.78333 - 0.31725]: ['billion', 'million', 'company', 'deal', 'group', 'buy', 'agreed', 'sell', 'cash', 'stake'] [ 12 - 0.73333 - 0.23883]: ['government', 'country', 'region', 'nation', 'security', 'talk', 'peace', 'rebel', 'darfur', 'end'] ,→ [ 13 - 0.81111 - 0.21487]: ['ha', 'make', 'big', 'making', 'television', 'question', 'doe', 'tv', 'work', 'set'] [ 14 - 0.70833 - 0.2839]: ['point', 'coach', 'night', 'team', 'scored', 'game', 'university', 'football', 'season', 'victory'] ,→ [ 15 - 0.72333 - 0.22315]: ['stock', 'share', 'york', 'street', 'investor', 'market', 'reuters', 'wall', 'higher', 'wednesday'] ,→ [ 16 - 0.85 - 0.27922]: ['city', 'people', 'killed', 'iraq', 'iraqi', 'baghdad', 'force', 'bomb', 'attack', 'car'] [ 17 - 0.7 - 0.23109]: ['san', 'hit', 'run', 'francisco', 'ap', 'night', 'home', 'victory', 'win', 'texas'] [ 18 - 0.85 - 0.26333]: ['minister', 'prime', 'country', 'party', 'leader', 'pakistan', 'president', 'tony', 'afp', 'foreign'] ,→ [ 19 - 0.83333 - 0.30335]: ['computer', 'technology', 'ibm', 'chip', 'intel', 'product', 'pc', 'announced', 'power', 'business'] ,→ [ 20 - 0.54278 - 0.16654]: ['ha', 'press', 'change', 'ap', 'canadian', 'global', 'tuesday', 'thursday', 'international', 'year'] ,→ [ 21 - 0.88333 - 0.33114]: ['software', 'microsoft', 'security', 'window', 'version', 'application', 'linux', 'operating', 'source', 'user'] ,→ [ 22 - 0.56333 - 0.15669]: ['gt', 'lt', 'reuters', 'http', 'york', 'thursday', 'washington', 'tuesday', 'wednesday', 'post'] [ 23 - 1 - 0.23855]: ['oil', 'price', 'high', 'record', 'crude', 'supply', 'barrel', 'concern', 'future', 'energy'] [ 24 - 0.83333 - 0.23538]: ['plan', 'cut', 'airline', 'air', 'job', 'cost', 'line', 'bankruptcy', 'union', 'million'] [ 25 - 0.6 - 0.29362]: ['service', 'network', 'wireless', 'company', 'internet', 'technology', 'business', 'communication', 'customer', 'announced'] ,→ [ 26 - 0.56111 - 0.15573]: ['wa', 'ap', 'contract', 'ha', 'yesterday', 'left', 'list', 'monday', 'free', 'signed'] [ 27 - 0.95 - 0.38325]: ['court', 'federal', 'case', 'judge', 'lawsuit', 'law', 'filed', 'legal', 'claim', 'trial'] [ 28 - 0.86667 - 0.27833]: ['president', 'election', 'bush', 'john', 'presidential', 'ap', 'campaign', 'vote', 'kerry', 'house'] ,→ [ 29 - 0.68333 - 0.21267]: ['world', 'lead', 'championship', 'cup', 'sunday', 'round', 'shot', 'saturday', 'title', 'tiger'] [ 30 - 0.80833 - 0.28389]: ['red', 'series', 'boston', 'game', 'sox', 'league', 'york', 'yankee', 'baseball', 'houston'] [ 31 - 0.83333 - 0.19603]: ['state', 'united', 'nation', 'nuclear', 'program', 'iran', 'secretary', 'weapon', 'washington', 'official'] ,→ [ 32 - 0.80833 - 0.26289]: ['police', 'wa', 'attack', 'man', 'accused', 'war', 'charged', 'arrested', 'terrorist', 'yesterday'] ,→ [ 33 - 0.81667 - 0.21302]: ['people', 'hurricane', 
'thousand', 'home', 'coast', 'storm', 'florida', 'missing', 'official', 'powerful'] ,→ [ 34 - 0.725 - 0.24607]: ['month', 'report', 'consumer', 'government', 'showed', 'september', 'job', 'august', 'week', 'october'] ,→ [ 35 - 0.70833 - 0.21241]: ['research', 'group', 'firm', 'quote', 'bank', 'profile', 'company', 'business', 'monday', 'investment'] ,→ [ 36 - 0.95 - 0.21915]: ['quot', 'thing', 'called', 'word', 'don', 'good', 'story', 'told', 'work', 'staff'] [ 37 - 0.72778 - 0.16637]: ['ap', 'motor', 'ha', 'scientist', 'plant', 'general', 'human', 'long', 'great', 'remains'] [ 38 - 0.95 - 0.25207]: ['percent', 'sale', 'profit', 'quarter', 'reported', 'earnings', 'store', 'loss', 'retailer', 'rose'] [ 39 - 0.75667 - 0.17671]: ['russian', 'thursday', 'school', 'russia', 'los', 'angeles', 'ap', 'major', 'wednesday', 'california'] ,→ [ 40 - 0.53 - 0.24683]: ['reuters', 'week', 'south', 'north', 'friday', 'tuesday', 'monday', 'wednesday', 'thursday', 'korea'] ,→ [ 41 - 0.525 - 0.22387]: ['wa', 'year', 'time', 'ago', 'yesterday', 'day', 'week', 'earlier', 'long', 'history'] [ 42 - 0.64444 - 0.23892]: ['million', 'security', 'company', 'public', 'ha', 'fund', 'pay', 'exchange', 'commission', 'regulator'] ,→ [ 43 - 0.625 - 0.1675]: ['day', 'test', 'today', 'australia', 'india', 'australian', 'yesterday', 'england', 'saturday', 'team'] ,→ [ 44 - 0.6 - 0.24946]: ['open', 'world', 'final', 'set', 'cup', 'champion', 'saturday', 'reach', 'round', 'win'] [ 45 - 0.85 - 0.23868]: ['league', 'champion', 'club', 'goal', 'manager', 'england', 'real', 'manchester', 'madrid', 'arsenal'] ,→ [ 46 - 0.53333 - 0.27788]: ['week', 'time', 'season', 'start', 'year', 'home', 'day', 'early', 'end', 'weekend'] [ 47 - 0.81667 - 0.24605]: ['european', 'trade', 'union', 'german', 'tax', 'world', 'eu', 'germany', 'organization', 'commission'] ,→ [ 48 - 0.85 - 0.27251]: ['online', 'search', 'web', 'google', 'internet', 'site', 'music', 'apple', 'user', 'service'] [ 49 - 0.86667 - 0.24667]: ['olympic', 'athens', 'gold', 'medal', 'won', 'american', 'men', 'woman', 'world', 'olympics'] uniqueness=0.7559999999999999 ProdLDA: [ 0 - 0.78667 - 0.27803]: ['directory', 'netscape', 'flaw', 'xp', 'itunes', 'server', 'midrange', 'user', 'gmail', 'fujitsu'] [ 1 - 0.17 - 0.28389]: ['lt', 'gt', 'serif', 'arial', 'helvetica', 'verdana', 'font', 'sans', 'm', 'http'] [ 2 - 0.69167 - 0.20085]: ['moon', 'lunar', 'spacecraft', 'saturn', 'rover', 'mar', 'lived', 'utah', 'parachute', 'shuttle'] [ 3 - 0.66167 - 0.2175]: ['touchdown', 'yard', 'scored', 'dodger', 'inning', 'st', 'pujols', 'seahawks', 'slam', 'astros'] [ 4 - 0.93333 - 0.19771]: ['trent', 'jumper', 'tennessee', 'overcame', 'keith', 'cub', 'touchdown', 'milwaukee', 'season', 'mvp'] ,→ [ 5 - 0.44167 - 0.24495]: ['crude', 'barrel', 'oil', 'price', 'nikkei', 'opec', 'midsession', 'stock', 'heating', 'rose'] [ 6 - 0.75833 - 0.19283]: ['allawi', 'iyad', 'abuja', 'nepal', 'yonhap', 'pervez', 'eta', 'militant', 'sudan', 'iraqi'] [ 7 - 0.17 - 0.28389]: ['lt', 'gt', 'http', 'font', 'serif', 'arial', 'helvetica', 'verdana', 'sans', 'm'] [ 8 - 0.825 - 0.14285]: ['cup', 'phelps', 'scored', 'qualifier', 'cardinal', 'homered', 'federer', 'colt', 'magic', 'roger'] [ 9 - 0.87 - 0.22113]: ['sharapova', 'wimbledon', 'unbeaten', 'roddick', 'inning', 'champion', 'brett', 'postseason', 'homer', 'rivera'] ,→ [ 10 - 0.49167 - 0.33562]: ['insurgent', 'stronghold', 'baghdad', 'killed', 'iraqi', 'gaza', 'raid', 'israeli', 'killing', 'palestinian'] ,→ [ 11 - 0.80333 - 0.23854]: ['ipod', 'imac', 'desktop', 'xp', 
'pt', 'embedded', 'apple', 'erp', 'com', 'window'] [ 12 - 0.9 - 0.19538]: ['abuja', 'sudanese', 'hideout', 'kabul', 'jerusalem', 'karzai', 'ariel', 'captive', 'hamid', 'damascus'] ,→ [ 13 - 0.8 - 0.28904]: ['msn', 'priority', 'server', 'hd', 'lan', 'infoworld', 'user', 'notebook', 'workstation', 'linux'] [ 14 - 0.44167 - 0.22566]: ['oil', 'crude', 'nikkei', 'inventory', 'price', 'barrel', 'trader', 'output', 'greenspan', 'opec'] ,→ [ 15 - 0.81667 - 0.34766]: ['telescope', 'spacecraft', 'relativity', 'earth', 'hubble', 'backwards', 'planet', 'circling', 'planetary', 'cassini'] ,→ [ 16 - 0.17 - 0.28389]: ['lt', 'gt', 'http', 'serif', 'arial', 'helvetica', 'verdana', 'font', 'sans', 'm'] [ 17 - 0.87 - 0.20465]: ['pitched', 'rutherford', 'piscataway', 'pedro', 'felix', 'shutout', 'pete', 'martinez', 'inning', 'kazmir'] ,→ [ 18 - 0.68667 - 0.28964]: ['version', 'smart', 'msn', 'antivirus', 'window', 'browser', 'feature', 'malicious', 'compatible', 'xp'] ,→ [ 19 - 0.325 - 0.24661]: ['crude', 'oil', 'barrel', 'heating', 'output', 'price', 'nikkei', 'opec', 'stock', 'inventory'] [ 20 - 0.81667 - 0.16414]: ['docomo', 'conspiracy', 'atomic', 'tehran', 'unused', 'iran', 'nuclear', 'regulatory', 'ntt', 'protocol'] ,→ [ 21 - 0.78333 - 0.33018]: ['java', 'server', 'kodak', 'cingular', 'software', 'microsystems', 'apps', 'microsoft', 'ibm', 'mobile'] ,→ [ 22 - 0.83333 - 0.21765]: ['cia', 'musharraf', 'yushchenko', 'tehran', 'pervez', 'enrichment', 'iran', 'conciliatory', 'irna', 'blair'] ,→ [ 23 - 0.95 - 0.17406]: ['pitcher', 'acc', 'premiership', 'curt', 'tampa', 'jim', 'supersonics', 'raucous', 'cal', 'oakland'] 6372 [ 24 - 0.71667 - 0.23467]: ['capsule', 'soyuz', 'cosmonaut', 'solar', 'astronaut', 'titan', 'lore', 'atmosphere', 'mar', 'genesis'] ,→ [ 25 - 1 - 0.20041]: ['safin', 'marat', 'busch', 'cincinnati', 'aaron', 'singled', 'sidelined', 'raptor', 'hamstring', 'guillermo'] ,→ [ 26 - 0.675 - 0.2152]: ['nordegren', 'astronaut', 'space', 'earth', 'pitcairn', 'moon', 'orbit', 'elin', 'nasa', 'craft'] [ 27 - 0.44167 - 0.37017]: ['gaza', 'baghdad', 'israeli', 'wounded', 'militant', 'palestinian', 'muqtada', 'wounding', 'insurgent', 'jabalya'] ,→ [ 28 - 0.17 - 0.28389]: ['lt', 'gt', 'serif', 'arial', 'helvetica', 'verdana', 'font', 'http', 'sans', 'm'] [ 29 - 0.73667 - 0.26993]: ['xp', 'nvidia', 'window', 'processor', 'msn', 'java', 'tool', 'chipset', 'stack', 'modeling'] [ 30 - 0.25 - 0.23847]: ['lt', 'gt', 'http', 'serif', 'arial', 'helvetica', 'verdana', 'font', 'sans', 'quarterly'] [ 31 - 0.525 - 0.25439]: ['mysterious', 'mar', 'solar', 'cassini', 'nasa', 'earth', 'fossil', 'saturn', 'soyuz', 'moon'] [ 32 - 0.51667 - 0.31844]: ['baghdad', 'israeli', 'gaza', 'wounding', 'iraqi', 'insurgent', 'wounded', 'bomb', 'policeman', 'troop'] ,→ [ 33 - 0.17 - 0.28389]: ['lt', 'gt', 'http', 'font', 'serif', 'arial', 'helvetica', 'verdana', 'sans', 'm'] [ 34 - 0.95 - 0.15975]: ['liverpool', 'vaughan', 'nash', 'blackburn', 'gerrard', 'locker', 'notre', 'nba', 'lomana', 'lualua'] ,→ [ 35 - 0.85833 - 0.16806]: ['knockout', 'scored', 'kicker', 'fc', 'timberwolves', 'ticker', 'defending', 'semifinal', 'rooney', 'astros'] ,→ [ 36 - 0.67 - 0.19242]: ['homered', 'alcs', 'brave', 'yard', 'sox', 'schnyder', 'cup', 'victory', 'inning', 'finale'] [ 37 - 0.575 - 0.27208]: ['ansari', 'prize', 'astronaut', 'spacecraft', 'pitcairn', 'spaceshipone', 'nasa', 'parachute', 'moon', 'atmosphere'] ,→ [ 38 - 0.81667 - 0.22488]: ['nuclear', 'putin', 'censure', 'standoff', 'prime', 'minister', 'thabo', 'darfur', 'hostage', 'iran'] ,→ [ 39 
- 0.60833 - 0.3156]: ['gaza', 'moqtada', 'militant', 'hamas', 'wounding', 'killing', 'wounded', 'sharon', 'ariel', 'grenade'] ,→ [ 40 - 0.9 - 0.33229]: ['interoperability', 'provider', 'sender', 'authentication', 'microsystems', 'subscriber', 'adobe', 'enterprise', 'software', 'ietf'] ,→ [ 41 - 0.73333 - 0.30305]: ['militant', 'wounding', 'sunni', 'mosque', 'killed', 'shiite', 'strip', 'multan', 'palestinian', 'suicide'] ,→ [ 42 - 0.83333 - 0.22536]: ['mcgahee', 'referee', 'linebacker', 'elbow', 'willis', 'dame', 'astros', 'notre', 'rib', 'martinez'] ,→ [ 43 - 1 - 0.11352]: ['larkin', 'clubhouse', 'chelsea', 'defensive', 'dolphin', 'wei', 'owen', 'dunlop', 'league', 'coordinator'] ,→ [ 44 - 0.70333 - 0.3561]: ['firefox', 'compatible', 'browser', 'mozilla', 'desktop', 'user', 'platform', 'worm', 'xp', 'edition'] ,→ [ 45 - 0.49167 - 0.26583]: ['oil', 'crude', 'price', 'barrel', 'opec', 'inventory', 'eased', 'heating', 'gasoline', 'disruption'] ,→ [ 46 - 0.85833 - 0.248]: ['preseason', 'pass', 'match', 'quarterback', 'ahman', 'nedbank', 'touchdown', 'valencia', 'jacksonville', 'scored'] ,→ [ 47 - 0.95 - 0.16433]: ['championship', 'fitchburg', 'colby', 'oliver', 'celtic', 'endicott', 'playoff', 'coach', 'victory', 'pga'] ,→ [ 48 - 0.88333 - 0.25886]: ['recep', 'tayyip', 'erdogan', 'bosnian', 'nuclear', 'equatorial', 'minister', 'thatcher', 'anwar', 'elbaradei'] ,→ [ 49 - 0.77 - 0.1631]: ['wismilak', 'wta', 'yankee', 'sox', 'omega', 'oakland', 'gatlin', 'calf', 'sharapova', 'inning'] NTM-R: [0-0.5-0.17034]: ['eisner', 'zook', 'coaching', 'disney', 'walt', 'jaguar', 'willingham', 'notre', 'vacant', 'tyrone'] [1-0.65-0.2067]: ['lt', 'gt', 'http', 'font', 'serif', 'arial', 'helvetica', 'verdana', 'br', 'm'] [2-0.85-0.27743]: ['d', 'nintendo', 'cassini', 'saturn', 'playstation', 'console', 'sony', 'portable', 'andreas', 'moon'] [3-1-0.19087]: ['critic', 'treatment', 'committee', 'university', 'responsibility', 'fallen', 'item', 'public', 'medicine', 'undergo'] ,→ [4-0.54762-0.19074]: ['sox', 'pedro', 'saddam', 'kerry', 'martinez', 'hussein', 'red', 'george', 'fallujah', 'allawi'] [5-1-0.36219]: ['xp', 'browser', 'mozilla', 'firefox', 'beta', 'desktop', 'processor', 'window', 'msn', 'flaw'] [6-0.47-0.13705]: ['warming', 'vijay', 'arctic', 'climate', 'singh', 'radar', 'specie', 'pt', 'importance', 'bird'] [7-0.68667-0.31398]: ['telescope', 'orbiting', 'saturn', 'ansari', 'mojave', 'astronaut', 'antenna', 'hubble', 'cassini', 'shuttle'] ,→ [8-0.68333-0.24017]: ['chelsea', 'madrid', 'mutu', 'spanish', 'striker', 'camacho', 'banned', 'jol', 'cska', 'referee'] [9-0.47667-0.19242]: ['striker', 'mutu', 'ferguson', 'harry', 'trafford', 'rooney', 'manchester', 'arsene', 'hamid', 'karzai'] [10-0.68667-0.18856]: ['administration', 'crew', 'human', 'shuttle', 'atomic', 'food', 'flu', 'russia', 'hubble', 'soyuz'] [11-0.32417-0.15603]: ['greenspan', 'priority', 'ryder', 'alan', 'curt', 'schilling', 'pedro', 'martinez', 'sox', 'pt'] [12-0.52-0.08896]: ['upgrading', 'arctic', 'vijay', 'helen', 'zdnet', 'volcano', 'bird', 'simulator', 'mount', 'pt'] [13-0.5025-0.21493]: ['rooney', 'manchester', 'trafford', 'coaching', 'football', 'greenspan', 'wayne', 'auburn', 'blackburn', 'eriksson'] ,→ [14-0.875-0.13414]: ['blair', 'athlete', 'nasa', 'football', 'florida', 'tony', 'dangerous', 'watchdog', 'patriot', 'informed'] ,→ [15-0.5-0.2087]: ['willingham', 'tyrone', 'zook', 'ron', 'eisner', 'jeffrey', 'notre', 'dame', 'meyer', 'sirius'] [16-0.85833-0.16508]: ['motogp', 'nicholls', 'premiership', 'qualifying', 'newell', 
'newcastle', 'pole', 'graeme', 'kieron', 'bannister'] ,→ [17-0.66012-0.26168]: ['challenger', 'greenspan', 'liberal', 'convention', 'kerry', 'campaign', 'hostile', 'candidate', 'democrat', 'poll'] ,→ [18-1-0.3008]: ['medal', 'gold', 'safin', 'marat', 'federer', 'lleyton', 'phelps', 'seed', 'athens', 'henman'] [19-0.88333-0.14774]: ['bernie', 'jaguar', 'ferrari', 'racing', 'prix', 'hopkins', 'ovitz', 'hoya', 'association', 'brazilian'] ,→ [20-0.78095-0.19085]: ['kerry', 'republican', 'appropriate', 'bush', 'greece', 'safe', 'columbia', 'saddam', 'hostage', 'regard'] ,→ [21-0.84762-0.13865]: ['celebration', 'simply', 'kerry', 'museum', 'represented', 'thanksgiving', 'korea', 'college', 'coast', 'mount'] ,→ [22-0.61167-0.19311]: ['shuttle', 'astronaut', 'nasa', 'endangered', 'capsule', 'moscow', 'soyuz', 'malaysia', 'warn', 'sean'] [23-0.44833-0.31216]: ['rooney', 'ferguson', 'blackburn', 'liverpool', 'arsenal', 'arsene', 'premiership', 'wenger', 'benitez', 'manchester'] ,→ [24-1-0.20159]: ['quarterly', 'earnings', 'profit', 'forecast', 'offset', 'nikkei', 'income', 'profile', 'higher', 'weighed'] [25-0.93333-0.20844]: ['corruption', 'genetic', 'handling', 'social', 'legislation', 'merck', 'dna', 'independent', 'cloning', 'vioxx'] ,→ [26-0.9-0.24207]: ['enrichment', 'uranium', 'tehran', 'iran', 'nuclear', 'suspend', 'sanction', 'freeze', 'atomic', 'negotiator'] ,→ [27-0.7125-0.15795]: ['mutu', 'hugo', 'greenspan', 'jailed', 'overturn', 'madrid', 'ottawa', 'chavez', 'conviction', 'spanish'] ,→ [28-0.56167-0.36871]: ['genesis', 'capsule', 'shuttle', 'space', 'soyuz', 'crew', 'nasa', 'spaceshipone', 'manned', 'astronaut'] ,→ [29-0.93333-0.16113]: ['kobe', 'eliot', 'attorney', 'bryant', 'guilty', 'ovitz', 'spitzer', 'milosevic', 'slobodan', 'enron'] [30-0.46167-0.11766]: ['obtaining', 'helen', 'erp', 'mount', 'priority', 'upgrading', 'radar', 'pyongyang', 'zdnet', 'pt'] [31-0.65-0.26263]: ['arial', 'verdana', 'helvetica', 'serif', 'font', 'sans', 'm', 'br', 'post', 'reg'] [32-0.49333-0.26054]: ['ferguson', 'trafford', 'manchester', 'alan', 'alex', 'newcastle', 'singh', 'tottenham', 'rooney', 'skipper'] ,→ [33-0.59583-0.25973]: ['republican', 'voter', 'convention', 'tax', 'congressional', 'poll', 'web', 'saddam', 'greenspan', 'social'] ,→ [34-0.95-0.20341]: ['oracle', 'peoplesoft', 'java', 'verizon', 'cingular', 'acquire', 'microsystems', 'hostile', 'takeover', 'conway'] ,→ 6373 [35-0.51429-0.19294]: ['martinez', 'sox', 'pedro', 'schilling', 'happen', 'curt', 'kerry', 'yankee', 'red', 'moon'] [36-0.95-0.22597]: ['ariel', 'sharon', 'manmohan', 'gaza', 'allawi', 'najaf', 'settler', 'aziz', 'iyad', 'kashmir'] [37-0.68667-0.29305]: ['climate', 'emission', 'kyoto', 'arctic', 'carbon', 'warming', 'dioxide', 'shuttle', 'hubble', 'scientific'] ,→ [38-0.44333-0.4275]: ['rooney', 'trafford', 'everton', 'ferguson', 'nistelrooy', 'arsene', 'striker', 'ruud', 'manchester', 'wenger'] ,→ [39-0.47333-0.18842]: ['meyer', 'trafford', 'tyrone', 'willingham', 'dame', 'notre', 'vogts', 'ferguson', 'berti', 'ron'] [40-0.50833-0.40095]: ['newcastle', 'premier', 'bolton', 'arsenal', 'premiership', 'chelsea', 'everton', 'blackburn', 'charlton', 'rooney'] ,→ [41-0.78333-0.21501]: ['putin', 'russian', 'chechen', 'beslan', 'vladimir', 'moscow', 'jakarta', 'spanish', 'canadian', 'kong'] ,→ [42-0.50417-0.17625]: ['importance', 'greenspan', 'priority', 'republican', 'legislative', 'poverty', 'alan', 'democratic', 'ryder', 'obtaining'] ,→ [43-1-0.30192]: ['homered', 'inning', 'homer', 'astros', 'touchdown', 'nl', 'peyton', 
'pitched', 'clemens', 'yard'] [44-1-0.36627]: ['wounding', 'bomber', 'detonated', 'exploded', 'wounded', 'suicide', 'killing', 'injuring', 'mosque', 'bomb'] [45-0.35512-0.11672]: ['ryder', 'priority', 'pt', 'erp', 'vijay', 'obtaining', 'com', 'importance', 'greenspan', 'kerry'] [46-0.64345-0.20764]: ['assessment', 'academic', 'social', 'hong', 'kong', 'infrastructure', 'convention', 'kerry', 'greenspan', 'welfare'] ,→ [47-0.6-0.18425]: ['eisner', 'willingham', 'zook', 'tyrone', 'ovitz', 'spurrier', 'coordinator', 'chief', 'vice', 'walt'] [48-0.825-0.11548]: ['material', 'phone', 'biodegradable', 'hypersonic', 'asaravala', 'nasa', 'huygens', 'genesis', 'audiovox', 'iran'] ,→ [49-0.75833-0.15448]: ['hispano', 'madrid', 'barcelona', 'psv', 'charlton', 'kiev', 'premiership', 'russian', 'abbey', 'hartson'] ,→ W-LDA: [0-1-0.17838]: ['sale', 'quarter', 'retailer', 'idc', 'grew', 'slower', 'seasonally', 'unemployment', 'compared', 'july'] [1-1-0.50711]: ['najaf', 'baghdad', 'insurgent', 'shiite', 'fallujah', 'muqtada', 'mosul', 'iraqi', 'sadr', 'wounding'] [2-1-0.17183]: ['mae', 'fannie', 'vioxx', 'arthritis', 'enron', 'merck', 'accounting', 'celebrex', 'conrad', 'sanjay'] [3-1-0.3828]: ['arsene', 'wenger', 'arsenal', 'ferguson', 'premiership', 'nistelrooy', 'manchester', 'chelsea', 'striker', 'newcastle'] ,→ [4-1-0.2062]: ['bakar', 'arrested', 'hamza', 'suspect', 'jakarta', 'indonesian', 'bashir', 'murder', 'filmmaker', 'guantanamo'] ,→ [5-1-0.2292]: ['copyright', 'kazaa', 'copyrighted', 'piracy', 'movie', 'recording', 'lycos', 'liable', 'sharman', 'riaa'] [6-1-0.11278]: ['submarine', 'helen', 'kathmandu', 'volcano', 'maoist', 'earthquake', 'locust', 'mount', 'airliner', 'chicoutimi'] ,→ [7-1-0.51741]: ['prix', 'formula', 'schumacher', 'ecclestone', 'barrichello', 'rubens', 'ferrari', 'silverstone', 'jenson', 'bernie'] ,→ [8-1-0.29278]: ['enrichment', 'uranium', 'iran', 'tehran', 'atomic', 'nuclear', 'vienna', 'freeze', 'iaea', 'iranian'] [9-1-0.29095]: ['ipod', 'apple', 'nintendo', 'd', 'itunes', 'portable', 'music', 'obtaining', 'playstation', 'sony'] [10-1-0.3764]: ['saturn', 'spacecraft', 'cassini', 'moon', 'capsule', 'nasa', 'genesis', 'astronaut', 'space', 'orbit'] [11-0.18905-0.1813]: ['year', 'ha', 'say', 'time', 'new', 'make', 'world', 'ap', 'wa', 'state'] [12-1-0.18423]: ['slobodan', 'milosevic', 'augusto', 'pinochet', 'nobel', 'cloning', 'wangari', 'maathai', 'yugoslav', 'embryo'] ,→ [13-1-0.52732]: ['lleyton', 'federer', 'hewitt', 'mauresmo', 'wta', 'amelie', 'agassi', 'marat', 'sharapova', 'safin'] [14-1-0.19904]: ['equatorial', 'guinea', 'thatcher', 'norodom', 'pitcairn', 'coup', 'sihanouk', 'prince', 'throne', 'mercenary'] ,→ [15-1-0.45693]: ['speedway', 'nascar', 'dale', 'earnhardt', 'busch', 'talladega', 'kurt', 'raceway', 'breeder', 'nextel'] [16-1-0.13999]: ['martha', 'stewart', 'prison', 'kobe', 'sentence', 'quattrone', 'ghraib', 'lying', 'bryant', 'steroid'] [17-1-0.25499]: ['medal', 'athens', 'olympic', 'phelps', 'hamm', 'gymnastics', 'kenteris', 'sprinter', 'olympics', 'freestyle'] ,→ [18-1-0.30382]: ['manmohan', 'kashmir', 'shaukat', 'aziz', 'musharraf', 'pervez', 'jintao', 'kyoto', 'hu', 'erdogan'] [19-1-0.14067]: ['peoplesoft', 'eliot', 'mclennan', 'spitzer', 'oracle', 'marsh', 'cingular', 'tender', 'ipo', 'initial'] [20-1-0.21738]: ['ryder', 'wicket', 'pga', 'montgomerie', 'icc', 'langer', 'birdie', 'vijay', 'indie', 'jimenez'] [21-0.35571-0.15125]: ['say', 'year', 'ha', 'new', 'wa', 'make', 'outsourcing', 'time', 'quot', 'report'] [22-1-0.3434]: ['darfur', 'sudan', 
'sudanese', 'khartoum', 'kofi', 'annan', 'congo', 'bin', 'osama', 'powell']
[23-1-0.20598]: ['eisner', 'ovitz', 'walt', 'disney', 'antitrust', 'microsystems', 'kodak', 'eastman', 'contentguard', 'java']
[24-1-0.2336]: ['willingham', 'tyrone', 'spurrier', 'notre', 'nhl', 'dame', 'zook', 'coaching', 'coach', 'mutu']
[25-1-0.1955]: ['profile', 'quote', 'research', 'yukos', 'lukoil', 'conocophillips', 'earnings', 'quarterly', 'gazprom', 'profit']
[26-0.22238-0.20984]: ['year', 'ha', 'time', 'say', 'new', 'check', 'wa', 'world', 'make', 'said']
[27-0.9-0.22791]: ['greenspan', 'alan', 'reserve', 'chairman', 'federal', 'social', 'budget', 'boom', 'economy', 'survey']
12.5 DBPedia
LDA Collapsed Gibbs sampling: NPMI=0.257
[ 0 - 0.71667 - 0.24385]: ['company', 'group', 'based', 'international', 'owned', 'founded', 'service', 'airline', 'largest', 'operates']
[ 1 - 0.85 - 0.26205]: ['island', 'area', 'coast', 'small', 'bay', 'western', 'northern', 'long', 'water', 'pacific']
[ 2 - 0.80909 - 0.25008]: ['wa', 'car', 'produced', 'model', 'motor', 'sport', 'engine', 'sold', 'production', 'vehicle']
[ 3 - 0.76667 - 0.25635]: ['city', 'york', 'located', 'building', 'street', 'center', 'hotel', 'tower', 'park', 'hall']
[ 4 - 0.86667 - 0.28198]: ['journal', 'hospital', 'research', 'medical', 'established', 'society', 'published', 'field', 'health', 'science']
[ 5 - 0.9 - 0.24606]: ['south', 'north', 'america', 'east', 'central', 'africa', 'eastern', 'southern', 'europe', 'carolina']
[ 6 - 0.92 - 0.22337]: ['state', 'united', 'washington', 'american', 'massachusetts', 'kingdom', 'jersey', 'oregon', 'maryland', 'boston']
[ 7 - 0.80909 - 0.29375]: ['wa', 'november', 'october', 'march', 'august', 'september', 'december', 'april', 'june', 'july']
[ 8 - 0.8 - 0.14349]: ['german', 'ha', 'germany', 'people', 'municipality', 'time', 'swedish', 'norwegian', 'village', 'norway']
[ 9 - 0.76667 - 0.29049]: ['minister', 'president', 'served', 'born', 'general', 'politician', 'government', 'court', 'chief', 'office']
[ 10 - 0.725 - 0.19019]: ['county', 'texas', 'ohio', 'district', 'city', 'florida', 'community', 'located', 'west', 'virginia']
[ 11 - 0.93333 - 0.23736]: ['family', 'moth', 'white', 'black', 'mm', 'brown', 'red', 'green', 'adult', 'feed']
[ 12 - 0.77 - 0.27565]: ['american', 'michael', 'david', 'john', 'smith', 'robert', 'james', 'scott', 'tom', 'mark']
[ 13 - 0.66667 - 0.28403]: ['historic', 'house', 'national', 'built', 'place', 'register', 'building', 'listed', 'located', 'home']
[ 14 - 0.81667 - 0.22537]: ['award', 'chinese', 'ha', 'china', 'international', 'hong', 'kong', 'received', 'traditional', 'academy']
[ 15 - 0.78333 - 0.30854]: ['series', 'book', 'written', 'comic', 'child', 'story', 'published', 'set', 'character', 'manga']
[ 16 - 0.71667 - 0.33138]: ['born', 'play', 'played', 'league', 'footballer', 'club', 'professional', 'football', 'player', 'major']
[ 17 - 0.80909 - 0.2279]: ['wa', 'canadian', 'canada', 'british', 'ontario', 'columbia', 'quebec', 'son', 'toronto', 'september']
[ 18 - 0.91667 - 0.29944]: ['church', 'england', 'st.', 'catholic', 'parish', 'st', 'christian', 'roman', 'located', 'saint']
[ 19 - 1 - 0.30692]: ['california', 'san', 'la', 'spanish', 'mexico', 'brazil', 'los', 'angeles', 'francisco', 'el']
[ 20 - 0.8 - 0.34877]: ['album', 'released', 'record', 'single', 'label', 'music', 'studio', 'hit', 'debut', 'country']
[ 21 - 0.70909 - 0.27038]: ['wa', 'john', 'william', 'british', 'george', 'charles', 'james', 'thomas',
'robert', 'edward'] [ 22 - 0.85909 - 0.23405]: ['wa', 'year', 'early', 'late', 'time', 'century', 'originally', 'bridge', 'period', 'date'] [ 23 - 0.86667 - 0.21604]: ['mountain', 'range', 'located', 'hill', 'ft', 'peak', 'park', 'mount', 'metre', 'valley'] [ 24 - 0.86667 - 0.23436]: ['school', 'high', 'public', 'student', 'located', 'secondary', 'grade', 'academy', 'middle', 'independent'] ,→ [ 25 - 0.83667 - 0.23066]: ['work', 'art', 'museum', 'artist', 'american', 'history', 'painter', 'ha', 'modern', 'library'] [ 26 - 0.85 - 0.30557]: ['born', 'world', 'won', 'summer', 'team', 'championship', 'event', 'medal', 'olympics', 'competed'] [ 27 - 0.69167 - 0.25058]: ['member', 'politician', 'born', 'house', 'party', 'representative', 'served', 'elected', 'january', 'district'] ,→ [ 28 - 0.86667 - 0.31682]: ['university', 'college', 'education', 'campus', 'institute', 'private', 'program', 'founded', 'institution', 'science'] ,→ [ 29 - 0.73667 - 0.27641]: ['music', 'singer', 'born', 'musician', 'american', 'producer', 'jazz', 'blue', 'band', 'composer'] ,→ [ 30 - 1 - 0.23266]: ['french', 'life', 'france', 'needed', 'young', 'le', 'woman', 'citation', 'man', 'paris'] [ 31 - 0.81667 - 0.29395]: ['company', 'business', 'founded', 'service', 'product', 'inc.', 'firm', 'corporation', 'industry', 'headquartered'] ,→ [ 32 - 0.78333 - 0.29084]: ['specie', 'family', 'genus', 'plant', 'snail', 'endemic', 'sea', 'marine', 'gastropod', 'mollusk'] ,→ [ 33 - 0.73409 - 0.13352]: ['wa', 'republic', 'hockey', 'national', 'ice', 'turkey', 'czech', 'arabic', 'april', 'central'] [ 34 - 0.85 - 0.2474]: ['river', 'lake', 'tributary', 'romania', 'flow', 'km', 'creek', 'mile', 'area', 'water'] [ 35 - 0.85 - 0.2667]: ['specie', 'plant', 'habitat', 'native', 'forest', 'common', 'tree', 'tropical', 'endemic', 'natural'] [ 36 - 0.8 - 0.43845]: ['album', 'released', 'band', 'rock', 'studio', 'live', 'song', 'recorded', 'track', 'release'] [ 37 - 0.71742 - 0.2448]: ['navy', 'war', 'ship', 'world', 'royal', 'launched', 'wa', 'ii', 'named', 'built'] [ 38 - 0.70076 - 0.1717]: ['india', 'indian', 'ha', 'wa', 'english', 'government', 'union', 'national', 'tamil', 'sri'] [ 39 - 0.90909 - 0.15204]: ['wa', 'london', 'king', 'brother', 'irish', 'dutch', 'age', 'ireland', 'philippine', 'scottish'] [ 40 - 0.56167 - 0.19999]: ['born', 'american', 'football', 'russian', 'national', 'played', 'player', 'professional', 'michigan', 'free'] ,→ [ 41 - 0.9 - 0.20425]: ['japanese', 'italian', 'japan', 'game', 'television', 'video', 'based', 'production', 'medium', 'entertainment'] ,→ [ 42 - 0.76742 - 0.25129]: ['wa', 'class', 'built', 'line', 'railway', 'locomotive', 'service', 'station', 'operated', 'unit'] ,→ [ 43 - 0.76742 - 0.25105]: ['wa', 'aircraft', 'designed', 'built', 'design', 'world', 'air', 'force', 'light', 'construction'] ,→ [ 44 - 0.76667 - 0.32288]: ['published', 'book', 'magazine', 'story', 'writer', 'newspaper', 'author', 'short', 'fiction', 'science'] ,→ [ 45 - 0.86667 - 0.16642]: ['ha', 'australia', 'australian', 'zealand', 'store', 'wale', 'centre', 'south', 'chain', 'mall'] [ 46 - 0.81667 - 0.1886]: ['ha', 'bank', 'small', 'form', 'crater', 'large', 'greek', 'named', 'called', 'meaning'] [ 47 - 0.9 - 0.35836]: ['film', 'directed', 'starring', 'star', 'drama', 'comedy', 'role', 'produced', 'written', 'movie'] [ 48 - 0.74167 - 0.27242]: ['mi', 'village', 'km', 'county', 'poland', 'approximately', 'district', 'kilometre', 'administrative', 'gmina'] ,→ [ 49 - 0.725 - 0.29974]: ['district', 'village', 'province', 'county', 
'population', 'census', 'rural', 'iran', 'persian', 'family'] ,→ uniqueness=0.8080000000000002 Online LDA: npmi=0.23031030285194948 [ 0 - 0.81845 - 0.24681]: ['wa', 'son', 'john', 'born', 'william', 'george', 'father', 'died', 'henry', 'law'] [ 1 - 0.81667 - 0.26355]: ['located', 'center', 'hotel', 'city', 'building', 'street', 'store', 'tower', 'centre', 'opened'] [ 2 - 1 - 0.15847]: ['swedish', 'poet', 'republic', 'danish', 'sweden', 'nova', 'congo', 'nigeria', 'israel', 'kenya'] [ 3 - 0.72417 - 0.18097]: ['wa', 'england', 'london', 'english', 'british', 'irish', 'ireland', 'county', 'cricketer', 'great'] ,→ [ 4 - 0.76845 - 0.22279]: ['won', 'russian', 'born', 'summer', 'wa', 'world', 'olympics', 'medal', 'championship', 'competed'] ,→ [ 5 - 0.88333 - 0.26265]: ['river', 'tributary', 'flow', 'mile', 'creek', 'km', 'water', 'bay', 'near', 'north'] [ 6 - 0.51583 - 0.22305]: ['wa', 'historic', 'house', 'building', 'built', 'national', 'place', 'register', 'located', 'county'] ,→ [ 7 - 0.7625 - 0.24764]: ['wa', 'aircraft', 'designed', 'built', 'design', 'engine', 'developed', 'produced', 'light', 'fighter'] ,→ [ 8 - 0.825 - 0.16935]: ['class', 'railway', 'locomotive', 'municipality', 'line', 'service', 'bus', 'serbian', 'czech', 'built'] ,→ [ 9 - 0.85 - 0.25305]: ['california', 'san', 'sea', 'snail', 'marine', 'family', 'gastropod', 'specie', 'mollusk', 'mexico'] [ 10 - 1 - 0.26365]: ['italian', 'la', 'spanish', 'italy', 'spain', 'el', 'del', 'arabic', 'mexican', 'turkish'] [ 11 - 1 - 0.32138]: ['chinese', 'china', 'hong', 'kong', 'traditional', 'pinyin', 'radio', 'taiwan', 'singapore', 'vietnam'] [ 12 - 0.8375 - 0.25622]: ['journal', 'research', 'published', 'society', 'peer-reviewed', 'study', 'academic', 'established', 'wa', 'field'] ,→ [ 13 - 1 - 0.1873]: ['le', 'hall', 'rose', 'albert', 'belgian', 'awarded', 'fame', 'jean', 'ray', 'philip'] [ 14 - 0.8375 - 0.22743]: ['art', 'museum', 'wa', 'century', 'early', 'history', 'late', 'castle', 'work', 'known'] [ 15 - 1 - 0.16756]: ['island', 'king', 'martin', 'scottish', 'scotland', 'prince', 'alabama', 'miller', 'rhode', 'isle'] [ 16 - 1 - 0.23044]: ['bank', 'financial', 'puerto', 'branch', 'exchange', 'prison', 'stock', 'real', 'investment', 'rico'] [ 17 - 0.71429 - 0.34343]: ['born', 'play', 'played', 'footballer', 'football', 'professional', 'club', 'player', 'currently', 'league'] ,→ [ 18 - 0.745 - 0.28877]: ['mi', 'village', 'km', 'poland', 'kilometre', 'district', 'county', 'administrative', 'gmina', 'voivodeship'] ,→ [ 19 - 0.72917 - 0.24186]: ['wa', 'navy', 'ship', 'built', 'royal', 'war', 'class', 'launched', 'named', 'commissioned'] [ 20 - 0.90417 - 0.15579]: ['french', 'france', 'needed', 'citation', 'airline', 'wa', 'norwegian', 'paris', 'air', 'international'] ,→ [ 21 - 0.61845 - 0.25364]: ['wa', 'born', 'politician', 'minister', 'president', 'party', 'served', 'member', 'national', 'government'] ,→ [ 22 - 0.7875 - 0.25469]: ['magazine', 'published', 'wa', 'newspaper', 'comic', 'news', 'daily', 'medium', 'issue', 'weekly'] [ 23 - 0.41012 - 0.17541]: ['member', 'house', 'district', 'wa', 'representative', 'born', 'politician', 'served', 'state', 'american'] ,→ [ 24 - 0.70417 - 0.13367]: ['family', 'moth', 'genus', 'specie', 'described', 'mm', 'brown', 'wa', 'bulbophyllum', 'feed'] [ 25 - 0.53512 - 0.24432]: ['american', 'played', 'league', 'wa', 'football', 'born', 'major', 'professional', 'baseball', 'season'] ,→ [ 26 - 0.77083 - 0.16115]: ['church', 'hockey', 'parish', 'wa', 'st', 'ice', 'christian', 'located', 'cathedral', 
'england'] [ 27 - 0.83333 - 0.21282]: ['game', 'service', 'los', 'video', 'software', 'technology', 'angeles', 'network', 'based', 'medium'] ,→ [ 28 - 0.72083 - 0.26739]: ['world', 'war', 'wa', 'ii', 'military', 'force', 'army', 'union', 'american', 'civil'] [ 29 - 0.93333 - 0.16956]: ['crater', 'dutch', 'painter', 'far', 'moon', 'netherlands', 'ha', 'rim', 'wall', 'active'] [ 30 - 0.69917 - 0.29778]: ['district', 'village', 'province', 'population', 'wa', 'county', 'census', 'rural', 'iran', 'persian'] ,→ [ 31 - 0.85 - 0.24835]: ['lake', 'mountain', 'located', 'range', 'peak', 'hill', 'area', 'north', 'park', 'mount'] [ 32 - 1 - 0.17088]: ['polish', 'golden', 'gordon', 'camp', 'hero', 'knight', 'gate', 'super', 'princess', 'blood'] [ 33 - 0.75 - 0.28135]: ['specie', 'family', 'genus', 'plant', 'endemic', 'habitat', 'tropical', 'forest', 'natural', 'subtropical'] ,→ [ 34 - 0.7375 - 0.31567]: ['book', 'novel', 'published', 'wa', 'story', 'author', 'written', 'series', 'writer', 'fiction'] 6375 [ 35 - 0.93333 - 0.17547]: ['south', 'australia', 'australian', 'north', 'carolina', 'western', 'wale', 'africa', 'african', 'jersey'] ,→ [ 36 - 0.95 - 0.15799]: ['new', 'zealand', 'hampshire', 'don', 'wave', 'stewart', 'brunswick', 'carter', 'barry', 'auckland'] [ 37 - 0.86667 - 0.21505]: ['state', 'united', 'texas', 'kingdom', 'florida', 'georgia', 'oregon', 'ohio', 'virginia', 'american'] ,→ [ 38 - 0.77083 - 0.20659]: ['company', 'wa', 'founded', 'group', 'based', 'owned', 'ha', 'corporation', 'product', 'business'] ,→ [ 39 - 0.7875 - 0.20195]: ['japanese', 'wa', 'series', 'japan', 'car', 'manga', 'model', 'motor', 'produced', 'van'] [ 40 - 1 - 0.22157]: ['german', 'germany', 'portuguese', 'wilson', 'berlin', 'von', 'austria', 'jewish', 'austrian', 'nelson'] ,→ [ 41 - 0.95 - 0.22885]: ['india', 'canada', 'canadian', 'indian', 'ontario', 'columbia', 'quebec', 'british', 'toronto', 'tamil'] ,→ [ 42 - 0.81667 - 0.22995]: ['new', 'york', 'city', 'connecticut', 'queen', 'manhattan', 'morris', 'american', 's˜A£o', 'hudson'] ,→ [ 43 - 0.59762 - 0.20615]: ['born', 'known', 'music', 'american', 'singer', 'best', 'ha', 'artist', 'musician', 'band'] [ 44 - 0.80417 - 0.37837]: ['album', 'released', 'wa', 'record', 'band', 'studio', 'label', 'song', 'single', 'music'] [ 45 - 1 - 0.24352]: ['st.', 'catholic', 'roman', 'philippine', 'saint', 'louis', 'paul', 'lady', 'mary', 'sister'] [ 46 - 0.73333 - 0.2347]: ['specie', 'known', 'native', 'plant', 'common', 'leaf', 'tree', 'family', 'flower', 'grows'] [ 47 - 0.66583 - 0.18822]: ['school', 'high', 'located', 'public', 'student', 'district', 'secondary', 'county', 'grade', 'wa'] ,→ [ 48 - 0.72083 - 0.28238]: ['film', 'directed', 'wa', 'starring', 'star', 'written', 'drama', 'based', 'comedy', 'produced'] [ 49 - 0.82083 - 0.24591]: ['university', 'college', 'education', 'located', 'hospital', 'institute', 'wa', 'science', 'campus', 'degree'] ,→ uniqueness=0.81 ProdLDA: [ 0 - 0.45 - 0.29022]: ['football', 'league', 'played', 'born', 'hockey', 'nhl', 'player', 'draft', 'olympics', 'footballer'] [ 1 - 0.45 - 0.35073]: ['politician', 'served', 'representative', 'elected', 'senate', 'constituency', 'assembly', 'election', 'minister', 'representing'] ,→ [ 2 - 0.44167 - 0.30271]: ['leaf', 'grows', 'specie', 'plant', 'cm', 'mm', 'flowering', 'perennial', 'native', 'herb'] [ 3 - 0.29333 - 0.46587]: ['album', 'released', 'chart', 'billboard', 'track', 'band', 'studio', 'release', 'compilation', 'label'] ,→ [ 4 - 0.38333 - 0.34704]: ['league', 'born', 'football', 'played', 
'hockey', 'professional', 'footballer', 'playing', 'nhl', 'player'] ,→ [ 5 - 0.44333 - 0.32381]: ['film', 'directed', 'story', 'written', 'starring', 'fantasy', 'horror', 'fiction', 'manga', 'series'] ,→ [ 6 - 0.86667 - 0.26989]: ['peer-reviewed', 'journal', 'editor-in-chief', 'scientific', 'springer', 'research', 'magazine', 'publication', 'aspect', 'review'] ,→ [ 7 - 0.56667 - 0.21863]: ['tributary', 'river', 'flow', 'mountain', 'crater', 'lake', 'sawtooth', 'rim', 'permit', 'southwest'] ,→ [ 8 - 0.49333 - 0.31468]: ['film', 'directed', 'starring', 'written', 'story', 'supporting', 'cannes', 'series', 'book', 'drama'] ,→ [ 9 - 0.64167 - 0.28355]: ['album', 'released', 'manga', 'comic', 'edition', 'anime', 'volume', 'series', 'serialized', 'song'] ,→ [ 10 - 0.40833 - 0.33874]: ['grows', 'leaf', 'flowering', 'specie', 'plant', 'tall', 'native', 'flower', 'shrub', 'erect'] [ 11 - 0.26667 - 0.24182]: ['mi', 'kilometre', 'voivodeship', 'gmina', 'lie', 'administrative', 'km', 'approximately', 'village', 'poland'] ,→ [ 12 - 0.48333 - 0.36666]: ['historic', 'register', 'building', 'built', 'added', 'dwelling', 'revival', 'roof', 'listed', 'gable'] ,→ [ 13 - 0.71667 - 0.31923]: ['university', 'education', 'institution', 'peer-reviewed', 'undergraduate', 'college', 'affiliated', 'journal', 'graduate', 'academic'] ,→ [ 14 - 0.35 - 0.24971]: ['mi', 'lie', 'km', 'voivodeship', 'gmina', 'kilometre', 'approximately', 'administrative', 'poland', 'regional'] ,→ [ 15 - 0.6 - 0.34841]: ['navy', 'ship', 'commissioned', 'laid', 'launched', 'submarine', 'hm', 'bremen', 'twenty-four', 'naval'] ,→ [ 16 - 0.41667 - 0.28862]: ['school', 'college', 'student', 'high', 'public', 'grade', 'university', 'republican', 'education', 'senate'] ,→ [ 17 - 0.53333 - 0.34759]: ['historic', 'register', 'built', 'porch', 'revival', 'added', 'brick', 'church', 'dwelling', 'listed'] ,→ [ 18 - 0.81667 - 0.25104]: ['peer-reviewed', 'journal', 'quarterly', 'indexed', 'topic', 'publishes', 'provides', 'technology', 'healthcare', 'privately'] ,→ [ 19 - 0.43333 - 0.35964]: ['league', 'played', 'football', 'born', 'player', 'professional', 'season', 'fc', 'footballer', 'nba'] ,→ [ 20 - 0.29167 - 0.39506]: ['district', 'census', 'romanized', 'population', 'iran', 'persian', 'rural', 'province', 'village', 'county'] ,→ [ 21 - 0.39333 - 0.48241]: ['album', 'released', 'peaked', 'band', 'chart', 'release', 'ep', 'billboard', 'label', 'studio'] [ 22 - 0.29167 - 0.39506]: ['district', 'romanized', 'census', 'population', 'iran', 'persian', 'rural', 'province', 'county', 'village'] ,→ [ 23 - 0.39333 - 0.43828]: ['album', 'released', 'song', 'studio', 'band', 'release', 'chart', 'music', 'record', 'dvd'] [ 24 - 0.61667 - 0.22685]: ['mountain', 'river', 'tributary', 'lake', 'divide', 'confluence', 'flow', 'lunar', 'km2', 'westward'] ,→ [ 25 - 0.45 - 0.3877]: ['politician', 'served', 'assembly', 'minister', 'constituency', 'elected', 'legislative', 'election', 'deputy', 'republican'] ,→ [ 26 - 0.26667 - 0.24182]: ['mi', 'lie', 'kilometre', 'gmina', 'voivodeship', 'km', 'administrative', 'approximately', 'village', 'poland'] ,→ [ 27 - 0.56667 - 0.21374]: ['school', 'high', 'public', 'grade', 'located', 'student', 'unincorporated', 'co-educational', 'four-year', 'secondary'] ,→ [ 28 - 0.24167 - 0.25698]: ['mi', 'village', 'district', 'voivodeship', 'gmina', 'lie', 'kilometre', 'county', 'population', 'administrative'] ,→ [ 29 - 0.29167 - 0.39506]: ['district', 'romanized', 'census', 'population', 'iran', 'persian', 'rural', 'province', 'village', 
'county'] ,→ [ 30 - 0.31833 - 0.47689]: ['album', 'released', 'studio', 'song', 'band', 'billboard', 'release', 'chart', 'track', 'recorded'] ,→ [ 31 - 0.56 - 0.31184]: ['film', 'directed', 'starring', 'story', 'written', 'silent', 'comedy', 'star', 'award', 'upcoming'] [ 32 - 0.43333 - 0.35178]: ['league', 'played', 'football', 'born', 'player', 'won', 'season', 'professional', 'footballer', 'baseball'] ,→ [ 33 - 0.35833 - 0.45949]: ['grows', 'leaf', 'stem', 'perennial', 'herb', 'centimeter', 'shrub', 'flowering', 'flower', 'plant'] ,→ [ 34 - 0.85 - 0.31775]: ['aircraft', 'engine', 'kit', 'cc', 'conventional', 'convertible', 'car', 'kw', 'mid-size', 'configuration'] ,→ [ 35 - 0.5 - 0.38277]: ['politician', 'elected', 'legislative', 'served', 'election', 'constituency', 'representative', 'cabinet', 'democratic', 'minister'] ,→ [ 36 - 0.41667 - 0.3537]: ['habitat', 'specie', 'threatened', 'family', 'tropical', 'subtropical', 'moist', 'loss', 'endemic', 'natural'] ,→ [ 37 - 0.34167 - 0.35938]: ['leaf', 'perennial', 'stem', 'flower', 'centimeter', 'plant', 'tall', 'grows', 'herb', 'specie'] [ 38 - 0.41667 - 0.37624]: ['specie', 'habitat', 'tropical', 'subtropical', 'family', 'moist', 'threatened', 'endemic', 'lowland', 'loss'] ,→ 6376 [ 39 - 0.46 - 0.34544]: ['film', 'directed', 'written', 'novel', 'starring', 'story', 'drama', 'novella', 'comedy', 'fantasy'] ,→ [ 40 - 0.95 - 0.27988]: ['software', 'company', 'headquartered', 'investment', 'inc.', 'provider', 'operates', 'product', 'develops', 'privately'] ,→ [ 41 - 0.55 - 0.35021]: ['navy', 'ship', 'warship', 'commissioned', 'destroyer', 'hm', 'laid', 'launched', 'lt.', 'war'] [ 42 - 0.56667 - 0.25608]: ['school', 'grade', 'high', 'public', 'located', 'student', 'preparatory', 'caters', 'secondary', 'coeducational'] ,→ [ 43 - 0.34333 - 0.45168]: ['album', 'released', 'chart', 'hit', 'song', 'record', 'band', 'billboard', 'studio', 'compilation'] ,→ [ 44 - 0.51667 - 0.23861]: ['flow', 'lake', 'rim', 'elevation', 'river', 'crater', 'tributary', 'mountain', 'tidal', 'lunar'] [ 45 - 0.76 - 0.31916]: ['film', 'directed', 'starring', 'hai', 'role', 'remake', 'hindi', 'lead', 'telugu', 'sen'] [ 46 - 0.51667 - 0.28064]: ['specie', 'habitat', 'tropical', 'subtropical', 'family', 'moist', 'mollusk', 'threatened', 'gastropod', 'montane'] ,→ [ 47 - 0.58333 - 0.34982]: ['historic', 'register', 'building', 'two-story', 'built', 'brick', 'doric', 'listed', 'roof', 'pile'] ,→ [ 48 - 0.55 - 0.33823]: ['navy', 'laid', 'ship', 'commissioned', 'destroyer', 'sponsored', 'launched', 'mrs.', 'hm', 'command'] ,→ [ 49 - 0.85 - 0.37107]: ['motor', 'vehicle', 'engine', 'bmw', 'manufactured', 'motorcycle', 'aircraft', 'hp', 'car', 'automaker'] ,→ NTM-R: [0-1-0.17993]: ['muricidae', 'murex', 'snail', 'gastropod', 'mollusk', 'thrash', 'melodic', 'mordella', 'superfamily', 'peaked'] ,→ [1-0.41644-0.13796]: ['taxonomy', 'algae', 'specifically', 'tephritid', 'tephritidae', 'ray-finned', 'fruit', 'bromeliad', 'coordinate', 'fly'] ,→ [2-0.62333-0.21805]: ['policy', 'suggest', 'obama', 'israeli', 'recognition', 'banking', 'firm', 'intelligence', 'african', 'advice'] ,→ [3-1-0.24395]: ['league', 'afl', 'football', 'batsman', 'right-handed', 'rugby', 'right-arm', 'vfl', 'premiership', 'midfielder'] ,→ [4-0.66012-0.11442]: ['baron', 'bates', 'ray-finned', 'pc', 'chacteau', 'gcmg', 'statesman', 'mcgill', 'cooke', 'mildred'] [5-0.43644-0.22398]: ['specifically', 'algae', 'ray-finned', 'taxonomy', 'suggest', 'seeking', 'reduce', 'increasing', 'aim', 'objective'] ,→ 
[6-0.29739-0.070157]: ['ray-finned', 'taxonomy', 'bates', 'tillandsia', 'algae', 'viscount', 'specifically', 'schaus', 'earl', 'pc'] ,→ [7-0.62958-0.20941]: ['algae', 'taxonomy', 'avoid', 'achieve', 'unique', 'balance', 'finding', 'laying', 'everyday', 'feel'] [8-0.78095-0.10182]: ['bates', 'peck', 'brendan', 'fraser', 'lillian', 'sylvia', 'archibald', 'tillandsia', 'carabidae', 'mabel'] ,→ [9-0.73125-0.19545]: ['algae', 'israeli', 'keeping', 'sort', 'meant', 'approach', 'arab', 'equivalent', 'dealing', 'south-western'] ,→ [10-1-0.14948]: ['faboideae', 'scotia', 'quebec', 'ftse', 'ferry', 'cruise', 'halifax', 'olsztyn.before', 'nova', 'm'] [11-0.555-0.2645]: ['economic', 'aim', 'policy', 'civil', 'responsibility', 'keeping', 'weapon', 'diplomatic', 'turning', 'possibility'] ,→ [12-0.67436-0.25298]: ['seeking', 'continuing', 'effort', 'diplomatic', 'specifically', 'maintain', 'culture', 'regarding', 'monitoring', 'cell'] ,→ [13-0.51429-0.10161]: ['deh', 'tillandsia', 'viscount', 'bates', 'meyrick', 'talbot', 'mildred', 'earl', 'archibald', 'eliza'] [14-0.68103-0.22395]: ['economic', 'improved', 'critical', 'lack', 'emphasis', 'specifically', 'preparing', 'taxonomy', 'protest', 'immigration'] ,→ [15-0.86429-0.16632]: ['bates', 'incomplete', 'smith', 'watson', 'mccarthy', 'johnston', 'perkins', 'gould', 'editor', 'mann'] [16-0.41644-0.21118]: ['algae', 'taxonomy', 'specifically', 'establishing', 'handling', 'increase', 'economic', 'keeping', 'difficult', 'ray-finned'] ,→ [17-1-0.088445]: ['eupithecia', 'geometridae', 'scopula', 'baluchestan', 'sistan', 'coleophora', 'coleophoridae', 'urdu', 'pterophoridae', 'arctiidae'] ,→ [18-0.74762-0.11147]: ['marquess', 'styled', 'bates', 'meyrick', 'viscount', 'nobleman', 'deh', 'engraver', 'pietro', 'bavaria'] ,→ [19-0.52061-0.223]: ['taxonomy', 'algae', 'specifically', 'unable', 'aim', 'funding', 'analysis', 'maintain', 'finding', 'priority'] ,→ [20-1-0.21449]: ['olympics', 'fencer', 'bulgarian', 'swimmer', 'competed', 'gymnast', 'eurovision', 'medalist', 'handball', 'budapest'] ,→ [21-1-0.31143]: ['senate', 'republican', 'constituency', 'representing', 'janata', 'attorney', 'election', 'legislative', 'delegate', 'caucus'] ,→ [22-1-0.22003]: ['clinical', 'healthcare', 'campus', 'peer-reviewed', 'undergraduate', 'theological', 'coeducational', 'publishes', 'adventist', 'preparatory'] ,→ [23-0.86-0.2594]: ['possibility', 'risk', 'counter', 'regime', 'need', 'profile', 'minimum', 'meant', 'mission', 'relevant'] [24-1-0.3246]: ['painting', 'sculpture', 'poem', 'drawing', 'museum', 'art', 'exhibition', 'illustrator', 'collection', 'poetry'] ,→ [25-0.785-0.25122]: ['tax', 'intelligence', 'controversial', 'possibility', 'reason', 'situation', 'security', 'credit', 'keeping', 'grass'] ,→ [26-0.39978-0.079958]: ['ray-finned', 'tephritidae', 'algae', 'tephritid', 'taxonomy', 'tillandsia', 'ulmus', 'elm', 'specifically', 'lago'] ,→ [27-0.59458-0.24169]: ['crisis', 'difficult', 'algae', 'iraq', 'driven', 'possibility', 'identification', 'instance', 'policy', 'change'] ,→ [28-0.74762-0.16082]: ['bates', 'firm', 'fowler', 'economist', 'nicholson', 'consulting', 'reynolds', 'banking', 'watkins', 'reid'] ,→ [29-0.63061-0.22814]: ['taxonomy', 'specifically', 'algae', 'contact', 'possibility', 'mind', 'prepare', 'robust', 'increasingly', 'significant'] ,→ [30-1-0.42495]: ['romania', 'tributary', 'valea', 'olt', 'river', 'mica83', 'pacracul', 'izvorul', 'racul', 'headwater'] [31-1-0.152]: ['bony', 'epoch', 'centimetre', 'grape', 'prehistoric', 'glacier', 'grevillea', 
'volcanic', 'massif', 'hispanicized'] ,→ [32-0.57667-0.28435]: ['crisis', 'allow', 'possibility', 'increased', 'virtually', 'balance', 'belonging', 'difficult', 'protection', 'gain'] ,→ [33-0.65625-0.15701]: ['algae', 'castle', 'bringing', 'chacteau', 'energy', 'taxonomy', 'campaign', 'possibility', 'affected', 'assigned'] ,→ [34-1-0.35761]: ['hm', 'destroyer', 'minesweeper', 'sloop', 'navy', 'frigate', 'hmcs', 'patrol', 'admiral', 'clemson-class'] [35-0.715-0.27075]: ['committee', 'protection', 'planning', 'advisory', 'policy', 'virtually', 'movement', 'suggest', 'intervention', 'wroca82aw'] ,→ [36-0.44894-0.22927]: ['algae', 'taxonomy', 'suggest', 'virtually', 'balance', 'showing', 'specifically', 'ideal', 'purpose', 'build'] ,→ [37-0.26644-0.1139]: ['ray-finned', 'chacteau', 'bromeliad', 'algae', 'taxonomy', 'tephritidae', 'tillandsia', 'tephritid', 'pitcairnia', 'specifically'] ,→ [38-1-0.41582]: ['homebuilt', 'ultralight', 'trike', 'undercarriage', 'ready-to-fly-aircraft', 'low-wing', 'two-seat', 'single-engine', 'monoplane', 'single-seat'] ,→ [39-0.41728-0.19018]: ['taxonomy', 'polish', 'algae', 'specifically', 'striking', 'netherlands', 'suggest', 'finding', 'maintain', 'possibility'] ,→ [40-0.44208-0.059865]: ['tephritid', 'taxonomy', 'ray-finned', 'tephritidae', 'ulidiidae', 'algae', 'neoregelia', 'tillandsia', 'mantis', 'picture-winged'] ,→ [41-0.66833-0.27628]: ['virtually', 'sector', 'requires', 'showing', 'monitoring', 'emphasis', 'resulting', 'impact', 'possibility', 'concern'] ,→ [42-0.83333-0.15431]: ['incomplete', 'firm', 'jenkins', 'dixon', 'emma', 'nigel', 'watkins', 'consultant', 'investment', 'dc'] 6377 [43-1-0.49868]: ['threatened', 'ecuador.its', 'forests.it', 'habitat', 'arecaceae', 'loss', 'family.it', 'montane', 'moist', 'subtropical'] ,→ [44-0.635-0.28883]: ['effectively', 'difficult', 'emphasis', 'possibility', 'potential', 'diplomatic', 'concerned', 'illegal', 'emerging', 'crisis'] ,→ [45-1-0.37187]: ['horror', 'fantasy', 'thriller', 'drama', 'comedy', 'comedy-drama', 'starring', 'directed', 'anthology', 'sequel'] ,→ [46-1-0.15964]: ['pornographic', 'hop', 'clothing', 'hip', 'thoroughbred', 'retailer', 'arranger', 'dj', 'stand-up', 'store'] [47-0.59061-0.19672]: ['algae', 'allowing', 'taxonomy', 'improvement', 'charge', 'laying', 'invasion', 'policy', 'expensive', 'specifically'] ,→ [48-0.88333-0.10236]: ['bromeliad', 'oly¨A\x81', 'olya', 'bulbophyllum', 'pozna˚A\x84', 'poaceae', 'neoregelia', 'nowy', 'masovian', 'mazowiecki'] ,→ [49-1-0.12801]: ['herzegovina', 'bosnia', 'croatia', 'connected', 'estonia', 'municipality', 'kuyavian-pomeranian', 'northern-central', 'highway', 'kielce.the'] ,→ W-LDA: [0-1-0.3445]: ['tournament', 'championship', 'cup', 'tennis', 'career-high', 'ncaa', 'season', 'fifa', 'player', 'scoring'] [1-1-0.26173]: ['peer-reviewed', 'journal', 'publishes', 'wiley-blackwell', 'quarterly', 'opinion', 'editor-in-chief', 'topic', 'theoretical', 'biannual'] ,→ [2-1-0.28195]: ['snail', 'ally', 'fasciolariidae', 'gastropod', 'tulip', 'mollusk', 'spindle', 'circuit', 'muricidae', 'eulimidae'] ,→ [3-1-0.23924]: ['presenter', 'arranger', 'songwriter', 'multi-instrumentalist', 'performer', 'sitcom', 'actress', 'conductor', 'composer', 'comedian'] ,→ [4-1-0.40359]: ['pinyin', 'chinese', 'simplified', 'wade{giles', 'guangzhou', 'guangdong', 'yuan', 'jyutping', 'mandarin', 'taipei'] ,→ [5-1-0.23772]: ['shopping', 'mall', 'mixed-use', 'parking', 'm2', 'anchored', 'condominium', 'hotel', 'prison', 'high-rise'] [6-1-0.32526]: ['coleophora', 'coleophoridae', 
'wingspan', 'august.the', 'elachista', 'elachistidae', 'larva', 'iberian', 'year.the', 'hindwings'] ,→ [7-1-0.29773]: ['solution', 'software', 'provider', 'multinational', 'telecommunication', 'nasdaq', 'investment', 'outsourcing', 'semiconductor', 'asset'] ,→ [8-1-0.39978]: ['inflorescence', 'erect', 'raceme', 'ovate', 'panicle', 'stem', 'leaflet', 'toothed', 'frond', 'lanceolate'] [9-1-0.37453]: ['made-for-tv', 'documentary', 'made-for-television', 'directed', 'screenplay', 'starring', 'comedy-drama', 'technicolor', 'sundance', 'film'] ,→ [10-1-0.22754]: ['translator', 'essayist', 'poet', 'novelist', 'literary', 'poetry', 'screenwriter', 'short-story', 'bridgeport', 'siedlce'] ,→ [11-1-0.24646]: ['summit', 'hiking', 'glacier', 'subrange', 'snowdonia', 'traversed', 'peak', 'glacial', 'pas', 'mountain'] [12-1-0.44789]: ['thrash', 'punk', 'metal', 'band', 'drummer', 'melodic', 'bassist', 'hardcore', 'demo', 'line-up'] [13-1-0.18155]: ['shortlisted', 'booker', 'newbery', 'young-adult', 'nobel', 'qal', 'marriage', 'prize', 'bestseller', 'autobiographical'] ,→ [14-1-0.44883]: ['kapoor', 'dharmendra', 'tamil-language', 'pivotal', 'bollywood', 'khanna', 'vinod', 'sinha', 'mithun', 'shetty'] ,→ [15-1-0.21445]: ['congressional', 'republican', 'iowa', 'arizona', 'kansa', 'missouri', 'diego', 'tempore', 'dodge', 'wyoming'] ,→ [16-1-0.24376]: ['fc', 'sergei', 'ssr', 'midfielder', 'divisi´on', 'russian', 'footballer', 'aleksandrovich', 'belarusian', 'vladimirovich'] ,→ [17-1-0.39128]: ['indonesia', 'lankan', 'indonesian', 'malaysia', 'java', 'jakarta', 'brunei', 'sri', 'lanka', 'sinhala'] [18-1-0.48659]: ['two-seat', 'fuselage', 'single-engine', 'monoplane', 'prototype', 'kw', 'airliner', 'single-engined', 'twin-engined', 'aircraft'] ,→ [19-1-0.40455]: ['wale', 'sydney', 'australian', 'brisbane', 'australia', 'queensland', 'melbourne', 'adelaide', 'nsw', 'perth'] ,→ [20-1-0.39939]: ['kerman', 'persian', 'jonubi', 'tehran', 'kermanshah', 'iran', 'isfahan', 'romanized', 'razavi', 'rural'] [21-1-0.27091]: ['rhode', 'oahu', 'hawaii', 'hawaiian', 'maui', 'honolulu', 'hawaii', 'mordella', 'massachusetts', 'tenebrionoidea'] ,→ [22-1-0.34612]: ['brandenburg', 'schleswig-holstein', 'und', 'saxony', 'germany', 'f¨ur', 'hamburg', 'mecklenburg-vorpommern', 'german', 'austria'] ,→ [23-1-0.3073]: ['register', 'historic', 'added', 'two-story', 'brick', 'massachusetts.the', 'armory', 'one-story', 'three-story', 'revival'] ,→ [24-1-0.32644]: ['fantasy', 'universe', 'paperback', 'hardcover', 'marvel', 'comic', 'role-playing', 'conan', 'sword', 'dungeon'] ,→ [25-1-0.16965]: ['railway', 'brewing', 'newspaper', 'brewery', 'ferry', 'tabloid', 'caledonian', 'daily', 'railroad', 'roster'] ,→ [26-1-0.2635]: ['french', 'du', 'la', 'chˆateau', 'france', 'playstation', 'renault', 'le', 'et', 'french-language'] [27-1-0.087707]: ['orchid', 'trance', 'dj', 'zanjan', 'techno', 'tappeh', 'orchidaceae', 'baden-w¨urttemberg', 'fabric', 'wasp'] ,→ [28-1-0.28215]: ['poland', 'administrative', 'voivodeship', 'north-west', 'gmina', 'mi', 'kielce', 'masovian', 'west-central', 'pozna´n'] ,→ [29-1-0.14504]: ['moth', 'geometridae', 'arctiidae', 'notodontidae', 'turridae', 'turrids', 'crambidae', 'eupithecia', 'raphitomidae', 'scopula'] ,→ [30-1-0.45268]: ['compilation', 'chart', 'billboard', 'hit', 'peaked', 'itunes', 'charted', 'riaa', 'remixes', 'airplay'] [31-1-0.36847]: ['leptodactylidae', 'eleutherodactylus', 'ecuador.its', 'forests.it', 'brazil.its', 'high-altitude', 'shrubland', 'subtropical', 'rivers.it', 'frog'] ,→ [32-1-0.46631]: 
['vessel', 'patrol', 'navy', 'convoy', 'ship', 'anti-submarine', 'auxiliary', 'destroyer', 'escort', 'naval'] [33-1-0.36041]: ['undergraduate', 'postgraduate', 'doctoral', 'degree', 'faculty', 'bachelor', 'nursing', 'university', 'post-graduate', 'post-secondary'] ,→ [34-1-0.14203]: ['picture-winged', 'ulidiid', 'fly', 'tephritidae', 'firearm', 'tachinidae', 'ulidiidae', 'footwear', 'apparel', 'tephritid'] ,→ [35-1-0.15219]: ['prehistoric', 'bony', 'legume', 'faboideae', 'asteraceae', 'cephalopod', 'fabaceae', 'clam', 'daisy', 'bivalve'] ,→ [36-1-0.14339]: ['alberta', 'portland', 'oregon', 'columbia', 'vancouver', 'omaha', 'saskatchewan', 'davenport', 'hokkaid¯o', 'mysore'] ,→ [37-1-0.20091]: ['davidii', 'priory', 'dorset', 'exeter', 'surrey', 'buddleja', 'gloucestershire', 'deptford', 'wiltshire', 'edinburgh'] ,→ [38-1-0.39987]: ['church', 'diocese', 'parish', 'jesus', 'congregation', 'holy', 'christ', 'cathedral', 'deanery', 'roman'] [39-1-0.33474]: ['mascot', 'elementary', 'ib', 'kindergarten', 'enrollment', 'pre-kindergarten', 'school', 'secondary', 'preschool', 'high'] ,→ [40-1-0.19429]: ['pradesh', 'yugoslav', 'serbian', 'novi', 'andhra', 'india', 'cyrillic', 'mandal', 'maharashtra', 'kerala'] [41-1-0.10226]: ['bosnian', 'palm', 'turtle', 'thai', 'ready-to-fly-aircraft', 'supplied', 'lil', 'amateur', 'mixtape', 'rapper'] ,→ [42-1-0.42669]: ['sculpture', 'photography', 'gallery', 'painting', 'museum', 'exhibition', 'exhibited', 'curator', 'art', 'sculptor'] ,→ [43-1-0.38726]: ['tributary', 'pˆarˆaul', 'valea', 'romania', 'river', 'mures¸', 'mic', 'transylvania', 'mic˘a', 'olt'] [44-1-0.13235]: ['tillandsia', 'spider', 'salticidae', 'jumping', 'poaceae', 'praying', 'ant', 'neoregelia', 'mantis', 'neotropical'] ,→ [45-1-0.12438]: ['estonia', 'bistrit¸a', 'p¨arnu', 'ccm', 'michigan', 'estonian', 'tanzanian', 'lycaenidae', 'saare', 'tartu'] [46-1-0.38204]: ['santa', 'cruz', 'jos´e', 'luis', 'mar´ıa', 'mexican', 'carlos', 'cuba', 'juan', 'chilean'] [47-1-0.36108]: ['cabinet', 'minister', 'election', 'legislative', 'f´ail', 'secretary', 'conservative', 'constituency', 'd´ala', 'teachta'] ,→ [48-1-0.40629]: ['italian', 'di', 'francesco', 'italy', 'baroque', 'giuseppe', 'lombardy', 'rome', 'carlo', 'luca'] [49-1-0.17494]: ['greek', 'greece', 'baluchestan', 'sistan', 'sixth', 'yorkshire', 'status', 'khash', 'chabahar', 'specialist'] ,→ 6378 12.6 Yelp Review Polarity LDA Collapsed Gibbs sampling: npmi=0.23787181653390055 [ 0 - 0.85 - 0.25418]: ['water', 'dirty', 'clean', 'smell', 'door', 'bathroom', 'wall', 'floor', 'hand', 'cleaning'] [ 1 - 0.59167 - 0.38849]: ['steak', 'dish', 'restaurant', 'meal', 'dinner', 'cooked', 'potato', 'menu', 'lobster', 'dessert'] [ 2 - 0.58333 - 0.2649]: ['walked', 'guy', 'asked', 'counter', 'lady', 'looked', 'girl', 'wanted', 'walk', 'door'] [ 3 - 0.52 - 0.27734]: ['thing', 'make', 'ca', 'doe', 'kind', 'people', 'feel', 'wrong', 'stuff', 'big'] [ 4 - 0.67769 - 0.2165]: ['burger', 'fry', 'cheese', 'onion', 'hot', 'ordered', 'good', 'mac', 'sweet', 'potato'] [ 5 - 0.58667 - 0.19482]: ['wa', 'tasted', 'cold', 'dry', 'bland', 'ordered', 'taste', 'bad', 'looked', 'disappointed'] [ 6 - 0.61167 - 0.21671]: ['club', 'people', 'night', 'music', 'girl', 'guy', 'party', 'friend', 'group', 'crowd'] [ 7 - 0.75333 - 0.21353]: ['great', 'love', 'amazing', 'recommend', 'awesome', 'service', 'favorite', 'highly', 'loved', 'excellent'] ,→ [ 8 - 0.93333 - 0.26762]: ['money', 'pay', 'extra', 'charge', 'dollar', 'paid', 'worth', 'free', 'cost', 'tip'] [ 9 - 0.78429 - 0.1659]: ['vega', 
'le', 'la', 'strip', 'trip', 'place', 'service', 'pour', 'montreal', 'san'] [ 10 - 0.68333 - 0.24412]: ['car', 'work', 'guy', 'day', 'problem', 'needed', 'change', 'company', 'job', 'tire'] [ 11 - 0.66667 - 0.25441]: ['phone', 'card', 'called', 'day', 'credit', 'company', 'told', 'number', 'business', 'month'] [ 12 - 0.58095 - 0.19303]: ['staff', 'friendly', 'great', 'nice', 'coffee', 'super', 'clean', 'helpful', 'place', 'quick'] [ 13 - 0.6075 - 0.21777]: ['service', 'bad', 'wa', 'time', 'experience', 'horrible', 'terrible', 'worst', 'slow', 'poor'] [ 14 - 0.69167 - 0.25876]: ['drink', 'bar', 'night', 'happy', 'hour', 'friend', 'bartender', 'friday', 'saturday', 'cocktail'] ,→ [ 15 - 0.67333 - 0.19213]: ['table', 'server', 'waitress', 'waiter', 'ordered', 'food', 'restaurant', 'seated', 'drink', 'water'] ,→ [ 16 - 0.71103 - 0.21877]: ['pizza', 'sauce', 'cheese', 'wing', 'good', 'pasta', 'italian', 'slice', 'ordered', 'crust'] [ 17 - 0.76603 - 0.19741]: ['breakfast', 'egg', 'wa', 'good', 'bacon', 'brunch', 'coffee', 'french', 'morning', 'pancake'] [ 18 - 0.56583 - 0.2383]: ['line', 'time', 'people', 'hour', 'long', 'day', 'airport', 'late', 'wait', 'flight'] [ 19 - 0.88333 - 0.24362]: ['room', 'hotel', 'stay', 'pool', 'casino', 'bed', 'stayed', 'night', 'strip', 'desk'] [ 20 - 0.67679 - 0.19974]: ['place', 'love', 'super', 'dont', 'die', 'man', 'time', 'awesome', 'didnt', 'na'] [ 21 - 0.55417 - 0.20917]: ['wa', 'hair', 'cut', 'time', 'wanted', 'short', 'groupon', 'left', 'long', 'looked'] [ 22 - 0.73667 - 0.23585]: ['location', 'lot', 'parking', 'open', 'area', 'close', 'drive', 'street', 'ha', 'closed'] [ 23 - 0.95 - 0.27537]: ['store', 'shop', 'item', 'buy', 'product', 'sale', 'bought', 'stuff', 'shopping', 'sell'] [ 24 - 0.625 - 0.23123]: ['wa', 'husband', 'wife', 'friend', 'birthday', 'family', 'wanted', 'decided', 'mom', 'day'] [ 25 - 0.55269 - 0.23514]: ['food', 'buffet', 'good', 'wa', 'crab', 'dinner', 'eat', 'seafood', 'shrimp', 'worth'] [ 26 - 0.72917 - 0.23019]: ['dog', 'care', 'office', 'day', 'appointment', 'time', 'doctor', 'dr.', 'staff', 'patient'] [ 27 - 0.41603 - 0.25941]: ['wa', 'good', 'pretty', 'nice', 'bit', 'thing', 'thought', 'kind', 'ok.', 'big'] [ 28 - 0.84103 - 0.25157]: ['price', 'small', 'quality', 'high', 'portion', 'size', 'large', 'reasonable', 'worth', 'good'] [ 29 - 0.46198 - 0.25428]: ['food', 'restaurant', 'good', 'eat', 'service', 'place', 'fast', 'eating', 'meal', 'average'] [ 30 - 0.68667 - 0.17554]: ['ha', 'work', 'class', 'make', 'feel', 'gym', 'school', 'offer', 'member', 'doe'] [ 31 - 0.76103 - 0.25206]: ['taco', 'chip', 'mexican', 'bean', 'food', 'salsa', 'good', 'burrito', 'bbq', 'sauce'] [ 32 - 0.50417 - 0.18603]: ['wa', 'nail', 'time', 'day', 'massage', 'job', 'foot', 'work', 'experience', 'lady'] [ 33 - 0.53864 - 0.20588]: ['sushi', 'roll', 'fish', 'good', 'fresh', 'place', 'menu', 'wa', 'chef', 'eat'] [ 34 - 0.86667 - 0.24305]: ['review', 'star', 'yelp', 'experience', 'read', 'bad', 'reason', 'based', 'write', 'rating'] [ 35 - 0.58333 - 0.26748]: ['wa', 'told', 'asked', 'manager', 'wanted', 'left', 'called', 'offered', 'gave', 'point'] [ 36 - 0.35364 - 0.22002]: ['place', 'good', 'ha', 'pretty', 'people', 'friend', 'thing', 'lot', 'town', 'cheap'] [ 37 - 0.85 - 0.33754]: ['cream', 'ice', 'chocolate', 'cake', 'tea', 'sweet', 'flavor', 'dessert', 'taste', 'delicious'] [ 38 - 0.75333 - 0.20119]: ['local', 'phoenix', 'town', 'city', 'ha', 'live', 'street', 'area', 'downtown', 'valley'] [ 39 - 0.7825 - 0.23043]: ['time', 'year', 'ha', 'visit', 
'ago', 'week', 'couple', 'past', 'month', 'coming'] [ 40 - 0.75 - 0.19678]: ['nice', 'area', 'decor', 'seating', 'inside', 'patio', 'feel', 'atmosphere', 'beautiful', 'bit'] [ 41 - 0.81667 - 0.27044]: ['kid', 'game', 'watch', 'fun', 'big', 'play', 'tv', 'movie', 'lot', 'child'] [ 42 - 0.69 - 0.21958]: ['customer', 'service', 'rude', 'business', 'owner', 'employee', 'attitude', 'care', 'people', 'manager'] ,→ [ 43 - 0.78333 - 0.41034]: ['dish', 'chicken', 'rice', 'soup', 'fried', 'thai', 'noodle', 'sauce', 'beef', 'chinese'] [ 44 - 0.48936 - 0.23629]: ['salad', 'chicken', 'wa', 'ordered', 'meal', 'food', 'soup', 'plate', 'dressing', 'good'] [ 45 - 0.63031 - 0.1956]: ['beer', 'great', 'wine', 'selection', 'good', 'place', 'glass', 'menu', 'bar', 'list'] [ 46 - 0.78269 - 0.25272]: ['sandwich', 'lunch', 'menu', 'option', 'bread', 'meat', 'fresh', 'special', 'good', 'choice'] [ 47 - 0.68333 - 0.27003]: ['wa', 'boyfriend', 'thought', 'decided', 'felt', 'surprised', 'disappointed', 'impressed', 'excited', 'looked'] ,→ [ 48 - 0.9 - 0.18477]: ['event', 'picture', 'seat', 'fun', 'art', 'ticket', 'photo', 'cool', 'music', 'stage'] [ 49 - 0.72917 - 0.23758]: ['minute', 'order', 'wait', 'time', 'waiting', 'waited', 'long', 'hour', 'finally', 'min'] uniqueness=0.6839999999999999 Online LDA: npmi=0.23341299435543492 [ 0 - 0.42909 - 0.22503]: ['customer', 'service', 'time', 'rude', 'people', 'place', 'employee', 'just', 'like', 'staff'] [ 1 - 0.93333 - 0.21528]: ['happy', 'hour', 'shrimp', 'crab', 'seafood', 'pita', 'oyster', 'gyro', 'greek', 'hummus'] [ 2 - 0.77917 - 0.21341]: ['airport', 'flight', 'ride', 'driver', 'cab', 'san', 'bus', 'u', 'hour', 'time'] [ 3 - 0.95 - 0.37322]: ['cake', 'chocolate', 'dessert', 'cupcake', 'sweet', 'butter', 'pie', 'bakery', 'cream', 'cheesecake'] [ 4 - 0.95 - 0.18921]: ['year', 'kid', 'old', 'ha', 'ago', 'family', 'used', 'daughter', 'son', 'child'] [ 5 - 0.37076 - 0.22117]: ['wa', 'u', 'minute', 'order', 'table', 'food', 'did', 'came', 'time', 'asked'] [ 6 - 0.90909 - 0.23602]: ['dirty', 'smell', 'clean', 'place', 'bathroom', 'sick', 'floor', 'smoke', 'hand', 'disgusting'] [ 7 - 0.80833 - 0.26455]: ['thai', 'bbq', 'pork', 'curry', 'rib', 'meat', 'spicy', 'indian', 'pad', 'chicken'] [ 8 - 0.78333 - 0.22653]: ['sandwich', 'bread', 'pho', 'meat', 'turkey', 'sub', 'wrap', 'beef', 'lunch', 'deli'] [ 9 - 1 - 0.2992]: ['die', 'im', 'und', 'da', 'der', 'man', 'ich', 'war', 'ist', 'nicht'] [ 10 - 0.5875 - 0.36542]: ['soup', 'rice', 'noodle', 'chinese', 'dish', 'chicken', 'bowl', 'fried', 'food', 'beef'] [ 11 - 0.95 - 0.24428]: ['breakfast', 'egg', 'sunday', 'brunch', 'pancake', 'bacon', 'toast', 'waffle', 'french', 'morning'] [ 12 - 0.62159 - 0.33782]: ['like', 'just', 'know', 'place', 'make', 'want', 'say', 'thing', 'look', 'people'] [ 13 - 0.32735 - 0.25599]: ['wa', 'place', 'good', 'really', 'just', 'like', 'review', 'pretty', 'did', 'star'] [ 14 - 1 - 0.19794]: ['dog', 'park', 'bagel', 'hot', 'course', 'pet', 'animal', 'cat', 'vet', 'golf'] [ 15 - 0.79242 - 0.23551]: ['steak', 'wa', 'lobster', 'rib', 'cooked', 'potato', 'meat', 'prime', 'medium', 'filet'] [ 16 - 0.71833 - 0.24819]: ['pizza', 'italian', 'crust', 'sauce', 'cheese', 'slice', 'order', 'pasta', 'good', 'topping'] [ 17 - 1 - 0.17949]: ['free', 'machine', 'soda', 'photo', 'crepe', 'coke', 'gluten', 'christmas', 'diet', 'picture'] [ 18 - 0.9 - 0.22138]: ['company', 'dress', 'shoe', 'work', 'house', 'home', 'shirt', 'new', 'apartment', 'wear'] [ 19 - 0.25159 - 0.25091]: ['food', 'place', 'good', 'time', 'great', 
'love', 'lunch', 'service', 'like', 'really'] [ 20 - 0.8375 - 0.26798]: ['taco', 'chip', 'mexican', 'salsa', 'burrito', 'bean', 'food', 'margarita', 'tortilla', 'cheese'] [ 21 - 0.6875 - 0.23791]: ['coffee', 'ice', 'cream', 'flavor', 'drink', 'like', 'cup', 'fruit', 'starbucks', 'yogurt'] [ 22 - 1 - 0.15334]: ['wing', 'blue', 'buffalo', 'ranch', 'draft', 'philly', 'wild', 'sam', 'pint', 'diamond'] [ 23 - 0.675 - 0.21682]: ['review', 'experience', 'ha', 'visit', 'make', 'star', 'quite', 'quality', 'staff', 'high'] [ 24 - 0.85 - 0.162]: ['vega', 'la', 'strip', 'best', 'massage', 'trip', 'spa', 'casino', 'mall', 'cirque'] [ 25 - 0.93333 - 0.22988]: ['night', 'friday', 'groupon', 'monday', 'truck', 'tuesday', 'thursday', 'deal', 'wednesday', 'flower'] ,→ [ 26 - 0.55159 - 0.24595]: ['wa', 'restaurant', 'wine', 'dinner', 'menu', 'food', 'dish', 'meal', 'good', 'service'] [ 27 - 1 - 0.12768]: ['queen', 'karaoke', 'frank', 'hockey', 'buzz', 'dairy', 'sing', 'jennifer', 'ave.', 'europe'] [ 28 - 0.72909 - 0.25806]: ['wa', 'told', 'said', 'called', 'did', 'day', 'phone', 'asked', 'manager', 'card'] [ 29 - 0.8375 - 0.19815]: ['music', 'ticket', 'movie', 'seat', 'fun', 'play', 'theater', 'time', 'playing', 'great'] [ 30 - 0.45159 - 0.20431]: ['great', 'friendly', 'service', 'place', 'staff', 'food', 'love', 'amazing', 'recommend', 'good'] 6379 [ 31 - 0.75909 - 0.19737]: ['nice', 'area', 'outside', 'table', 'inside', 'seating', 'patio', 'bar', 'place', 'view'] [ 32 - 0.35659 - 0.22838]: ['wa', 'chicken', 'sauce', 'good', 'flavor', 'fried', 'ordered', 'like', 'little', 'just'] [ 33 - 0.9 - 0.26674]: ['sushi', 'roll', 'fish', 'tuna', 'fresh', 'chef', 'salmon', 'japanese', 'rice', 'sashimi'] [ 34 - 0.875 - 0.19026]: ['water', 'tea', 'glass', 'cup', 'drink', 'bottle', 'refill', 'iced', 'green', 'boba'] [ 35 - 0.50985 - 0.18241]: ['club', 'night', 'drink', 'wa', 'people', 'girl', 'party', 'place', 'friend', 'line'] [ 36 - 0.7225 - 0.27453]: ['price', 'buffet', 'worth', 'food', 'money', 'pay', 'better', 'quality', 'good', 'cost'] [ 37 - 0.32318 - 0.21363]: ['wa', 'food', 'like', 'place', 'service', 'bad', 'ordered', 'tasted', 'good', 'just'] [ 38 - 0.44159 - 0.27947]: ['wa', 'did', 'time', 'went', 'got', 'u', 'friend', 'came', 'just', 'day'] [ 39 - 0.7375 - 0.21797]: ['class', 'office', 'care', 'time', 'doctor', 'dr.', 'appointment', 'gym', 'work', 'staff'] [ 40 - 0.65 - 0.32226]: ['salad', 'cheese', 'bread', 'tomato', 'soup', 'dressing', 'mac', 'chicken', 'fresh', 'menu'] [ 41 - 0.65167 - 0.21533]: ['burger', 'fry', 'cheese', 'onion', 'bun', 'ring', 'good', 'ordered', 'order', 'bacon'] [ 42 - 0.69159 - 0.19701]: ['wa', 'car', 'hair', 'nail', 'did', 'time', 'salon', 'cut', 'job', 'tire'] [ 43 - 0.83333 - 0.25519]: ['parking', 'car', 'line', 'door', 'lot', 'open', 'drive', 'closed', 'hour', 'sign'] [ 44 - 0.95 - 0.29985]: ['le', 'et', 'la', 'pour', 'pa', 'que', 'est', 'en', 'une', 'je'] [ 45 - 0.8125 - 0.2528]: ['store', 'shop', 'buy', 'item', 'sale', 'product', 'selection', 'price', 'shopping', 'like'] [ 46 - 0.56909 - 0.20493]: ['bar', 'beer', 'drink', 'game', 'bartender', 'place', 'good', 'tv', 'selection', 'great'] [ 47 - 0.95 - 0.16546]: ['box', 'package', 'post', 'jack', 'express', 'chris', 'hookah', 'office', 'ups', 'ship'] [ 48 - 0.75909 - 0.18445]: ['location', 'place', 'phoenix', 'local', 'best', 'town', 'scottsdale', 'new', 'downtown', 'area'] [ 49 - 0.79242 - 0.21996]: ['room', 'hotel', 'wa', 'stay', 'pool', 'bed', 'night', 'stayed', 'casino', 'desk'] uniqueness=0.738 ProdLDA: [0-0.40944-0.24083]: 
['rib', 'brisket', 'bbq', 'fish', 'taco', 'mexican', 'catfish', 'cajun', 'salsa', 'okra'] [1-0.55111-0.15347]: ['greek', 'gyro', 'bland', 'atmosphere', 'tasteless', 'filthy', 'greasy', 'shish', 'mold', 'souvlaki'] [2-0.28444-0.18454]: ['catfish', 'bbq', 'hush', 'corn', 'rib', 'mac', 'taco', 'cajun', 'brisket', 'texas'] [3-0.49278-0.18204]: ['bland', 'tasteless', 'overpriced', 'disgusting', 'flavorless', 'edible', 'food', 'overrated', 'atmosphere', 'mediocre'] ,→ [4-1-0.17619]: ['airline', 'theater', 'airport', 'terminal', 'trail', 'stadium', 'exhibit', 'flight', 'airway', 'museum'] [5-1-0.13442]: ['buffet', 'chinese', 'crab', 'leg', 'bacchanal', 'dim', 'mein', 'wicked', 'seafood', 'carving'] [6-0.44167-0.14681]: ['pizza', 'wedding', 'italian', 'gluten', 'coordinator', 'delicious', 'crust', 'amazing', 'florist', 'birthday'] ,→ [7-0.71944-0.37625]: ['asada', 'carne', 'salsa', 'taco', 'burrito', 'thai', 'mexican', 'enchilada', 'tortilla', 'refried'] [8-1-0.48323]: ['est', 'tr\\u00e8s', 'retournerai', 'sont', 'endroit', 'peu', 'une', 'vraiment', 'oeufs', 'qui'] [9-0.38111-0.16431]: ['mac', 'rib', 'taco', 'chowder', 'love', 'yummy', 'salsa', 'brisket', 'chip', 'texas'] [10-0.37929-0.24311]: ['hash', 'burger', 'egg', 'breakfast', 'benedict', 'biscuit', 'toast', 'pancake', 'scrambled', 'corned'] [11-0.95-0.21124]: ['warranty', 'insurance', 'repair', 'contract', 'car', 'vehicle', 'bbb', 'cancel', 'rental', 'email'] [12-0.26262-0.25198]: ['breakfast', 'hash', 'egg', 'benedict', 'burger', 'toast', 'biscuit', 'brunch', 'omelet', 'pancake'] [13-1-0.22042]: ['suite', 'shower', 'hotel', 'elevator', 'pool', 'housekeeping', 'jacuzzi', 'bed', 'tub', 'amenity'] [14-1-0.2429]: ['foie', 'filet', 'gras', 'scallop', 'mignon', 'risotto', 'lobster', 'amuse', 'wine', 'creamed'] [15-0.47083-0.15043]: ['ceremony', 'chapel', 'pizza', 'wedding', 'minister', 'gluten', 'florist', 'bouquet', 'bianco', 'photographer'] ,→ [16-1-0.24788]: ['beer', 'pub', 'brewery', 'ale', 'brew', 'ipa', 'craft', 'bartender', 'game', 'draft'] [17-0.36444-0.17536]: ['taco', 'delicious', 'crawfish', 'margarita', 'cajun', 'bbq', 'mac', 'amazing', 'corn', 'fun'] [18-0.49-0.17937]: ['disgusting', 'filthy', 'tasteless', 'dirty', 'inedible', 'bland', 'dry', 'mediocre', 'gyro', 'gross'] [19-0.20206-0.22668]: ['indian', 'italian', 'naan', 'masala', 'pasta', 'tikka', 'atmosphere', 'pizza', 'food', 'india'] [20-0.65333-0.20453]: ['wash', 'wash.', 'vacuuming', 'rag', 'wiped', 'filthy', 'wipe', 'vacuum', 'vacuumed', 'car'] [21-0.27611-0.15756]: ['catfish', 'bbq', 'brisket', 'rib', 'cob', 'corn', 'margarita', 'mac', 'taco', 'hush'] [22-0.6625-0.19396]: ['pizza', 'crust', 'pepperoni', 'burger', 'wing', 'domino', 'fry', 'dog', 'topping', 'soggy'] [23-0.39179-0.36182]: ['indian', 'naan', 'italian', 'masala', 'tandoori', 'tikka', 'india', 'lassi', 'paneer', 'dosa'] [24-0.40762-0.16713]: ['indian', 'naan', 'bland', 'masala', 'tikka', 'underwhelming', 'uninspired', 'mediocre', 'ambiance', 'overpriced'] ,→ [25-0.36944-0.11527]: ['taco', 'margarita', 'mac', 'salsa', 'yummy', 'chip', 'shake', 'catfish', 'carne', 'potatoe'] [26-0.875-0.20124]: ['pita', 'hummus', 'cardio', 'gym', 'falafel', 'gyro', 'workout', 'greek', 'sandwich', 'produce'] [27-0.41778-0.21467]: ['mac', 'taco', 'rib', 'cob', 'juicy', 'burger', 'bbq', 'carne', 'bomb.com', 'delish'] [28-0.95-0.20403]: ['clothing', 'thrift', 'dress', 'jewelry', 'store', 'clearance', 'accessory', 'merchandise', 'cupcake', 'alteration'] ,→ [29-0.61944-0.12706]: ['crawfish', 'margarita', 'yummy', 'yum', 'sundae', 'nacho', 
'delish', 'trifecta', 'love', 'taco'] [30-0.27607-0.29397]: ['indian', 'naan', 'tikka', 'masala', 'paneer', 'italian', 'food', 'india', 'korma', 'breakfast'] [31-0.49778-0.20012]: ['wash', 'atmosphere', 'dirty', 'filthy', 'wipe', 'wash.', 'rag', 'cleanliness', 'latte', 'cleaning'] [32-1-0.21306]: ['dr.', 'vet', 'doctor', 'dentist', 'instructor', 'dental', 'yoga', 'exam', 'nurse', 'grooming'] [33-1-0.41195]: ['sushi', 'yellowtail', 'nigiri', 'sashimi', 'tempura', 'miso', 'ayce', 'ramen', 'eel', 'tuna'] [34-0.37373-0.18235]: ['breakfast', 'benedict', 'excellent', 'toast', 'atmosphere', 'hash', 'highly', 'delicious', 'egg', 'brunch'] ,→ [35-1-0.20086]: ['community', 'institution', 'consistently', 'unmatched', 'management', 'culture', 'monopoly', 'estate', 'authentic', 'property'] ,→ [36-1-0.25813]: ['dance', 'bouncer', 'promoter', 'dj', 'x', 'dancing', 'club', 'dancer', 'dancefloor', 'guestlist'] [37-0.3704-0.1806]: ['indian', 'italian', 'pizza', 'atmosphere', 'pasta', 'naan', 'food', 'italy', 'ambiance', 'romantic'] [38-1-0.20674]: ['massage', 'manicure', 'pedicure', 'nail', 'salon', 'gel', 'stylist', 'pedi', 'cuticle', 'mani'] [39-1-0.20876]: ['manager', 'hostess', 'flagged', 'waited', 'seated', 'apology', 'acknowledged', 'rude', 'apologized', 'acknowledge'] ,→ [40-0.48333-0.1382]: ['wedding', 'chapel', 'ceremony', 'pizza', 'italian', 'gluten', 'photographer', 'minister', 'married', 'planner'] ,→ [41-0.62873-0.17883]: ['atmosphere', 'ambience', 'decor', 'food', 'indian', 'lawrenceville', 'cozy', 'ambiance', 'quaint', 'outdoor'] ,→ [42-0.33762-0.2659]: ['hash', 'breakfast', 'burger', 'benedict', 'egg', 'pancake', 'omelet', 'omelette', 'biscuit', 'brunch'] [43-0.41944-0.19644]: ['pizza', 'bianco', 'wedding', 'crust', 'italian', 'atmosphere', 'delicious', 'pepperoni', 'pizzeria', 'cibo'] ,→ [44-0.49-0.18319]: ['filthy', 'dirty', 'cleaner', 'bland', 'tasteless', 'mushy', 'disgusting', 'uneatable', 'gyro', 'rag'] [45-0.73929-0.23052]: ['frosting', 'cupcake', 'latte', 'bagel', 'coffee', 'barista', 'boba', 'pancake', 'donut', 'breakfast'] [46-1-0.26929]: ['cirque', 'acrobatics', 'soleil', 'performer', 'audience', 'stage', 'storyline', 'acrobatic', 'acrobat', 'tire'] ,→ [47-0.49595-0.19841]: ['burger', 'breakfast', 'hash', 'ronin', 'fry', 'shake', 'steak', 'bacon', 'toast', 'benedict'] [48-0.86111-0.082801]: ['edinburgh', 'atmosphere', 'cosy', 'acoustic', 'montreal', 'newington', 'landscaping', 'gameworks', 'ambience', 'pittsburgh'] ,→ [49-0.30429-0.33129]: ['indian', 'naan', 'italian', 'masala', 'tikka', 'paneer', 'tandoori', 'india', 'saag', 'pizza'] NTM-R: [0-0.1909-0.26952]: ['lincoln', 'proclaimed', 'proclaiming', 'rally', 'defended', 'civil', 'marching', 'marched', 'campaign', 'boycott'] ,→ [1-0.22741-0.22154]: ['independence', 'unsuccessfully', 'monument', 'proclaiming', 'marching', 'supported', 'challenged', 'tennessee', 'defended', 'emerged'] ,→ [2-0.14614-0.25689]: ['campaign', 'independence', 'defended', 'proclaiming', 'marched', 'missouri', 'drawn', 'marching', 'supported', 'proclaimed'] ,→ [3-0.21257-0.21085]: ['supported', 'fought', 'alabama', 'campaign', 'proclaiming', 'defended', 'marching', 'enthusiastically', 'mao', 'missouri'] ,→ 6380 [4-0.31407-0.17423]: ['proclaiming', 'campaign', 'nelson', 'marching', 'indiana', 'carolina', 'gay', 'unsuccessful', 'missouri', 'catholic'] ,→ [5-0.2586-0.25677]: ['defended', 'campaign', 'independence', 'marching', 'strongest', 'supported', 'proclaiming', 'sponsored', 'rally', 'leadership'] ,→ [6-0.11352-0.24402]: ['declaring', 'proclaiming', 
'marched', 'marching', 'arkansas', 'defended', 'strongest', 'missouri', 'campaign', 'proclaimed'] ,→ [7-1-0.22502]: ['dance', 'dancing', 'bouncer', 'dj', 'danced', 'song', 'dancer', 'ipa', 'bartender', 'promoter'] [8-0.19602-0.24114]: ['proclaiming', 'road', 'supported', 'marched', 'campaign', 'marching', 'fought', 'capitol', 'lincoln', 'defended'] ,→ [9-0.092567-0.23741]: ['campaign', 'independence', 'proclaiming', 'proclaimed', 'lincoln', 'catholic', 'tennessee', 'supported', 'marching', 'marched'] ,→ [10-0.15347-0.20531]: ['proclaiming', 'defended', 'marching', 'capitol', 'alabama', 'marched', 'mustang', 'campaign', 'missouri', 'unsuccessfully'] ,→ [11-0.11936-0.23289]: ['banner', 'campaign', 'missouri', 'defended', 'marching', 'supported', 'proclaiming', 'marched', 'alabama', 'emerged'] ,→ [12-1-0.18502]: ['refund', 'voicemail', 'refused', 'unprofessional', 'supervisor', 'cox', 'ontrac', 'reschedule', 'apology', 'rudely'] ,→ [13-0.12697-0.21198]: ['proclaiming', 'supported', 'marching', 'defended', 'missouri', 'renamed', 'sponsored', 'marched', 'indiana', 'campaign'] ,→ [14-0.27399-0.26035]: ['campaign', 'proclaimed', 'proclaiming', 'fought', 'national', 'emerged', 'marched', 'marching', 'declaring', 'predecessor'] ,→ [15-1-0.29774]: ['dentist', 'dental', 'suis', 'je', 'sont', 'choix', 'est', 'peu', 'qui', 'fait'] [16-1-0.18789]: ['mocha', 'dunkin', 'latte', 'bagel', 'croissant', 'tire', 'einstein', 'cone', 'maple', 'scone'] [17-0.32514-0.22424]: ['campaign', 'defended', 'marched', 'marching', 'unsuccessfully', 'dame', 'proclaiming', 'ralph', 'federal', 'army'] ,→ [18-0.25847-0.21798]: ['adopted', 'defended', 'proclaiming', 'marched', 'nelson', 'vietnamese', 'lincoln', 'campaign', 'unsuccessfully', 'marching'] ,→ [19-0.25763-0.22588]: ['independence', 'supported', 'campaign', 'defended', 'marching', 'unsuccessfully', 'enthusiastically', 'presidential', 'nelson', 'mississippi'] ,→ [20-0.31388-0.19339]: ['proclaiming', 'marching', 'marched', 'boldly', 'unsuccessfully', 'maroon', 'supported', 'proclaim', 'arkansas', 'verdun'] ,→ [21-1-0.14517]: ['bellagio', 'tower', 'suite', 'shuttle', 'elevator', 'paris', 'monorail', 'continental', 'ami', 'hilton'] [22-0.19364-0.22081]: ['missouri', 'supported', 'proclaiming', 'marching', 'defended', 'campaign', 'battle', 'marched', 'indiana', 'puerto'] ,→ [23-0.25503-0.23828]: ['challenged', 'defended', 'marching', 'proclaiming', 'declaring', 'campaign', 'fought', 'unsuccessfully', 'monroe', 'kentucky'] ,→ [24-0.24245-0.21527]: ['lincoln', 'banner', 'campaign', 'proclaiming', 'marching', 'declaring', 'football', 'roosevelt', 'marched', 'supported'] ,→ [25-0.16281-0.26664]: ['marching', 'proclaiming', 'proclaimed', 'defended', 'independence', 'campaign', 'supported', 'civil', 'marched', 'mormon'] ,→ [26-0.95-0.53788]: ['ayce', 'goyemon', 'nigiri', 'sushi', 'sashimi', 'teharu', 'amaebi', 'oyshi', 'sakana', 'auswahl'] [27-0.35443-0.23179]: ['exception', 'campaign', 'defended', 'marching', 'claimed', 'revolution', 'boldly', 'marched', 'proclaiming', 'arkansas'] ,→ [28-0.29895-0.21784]: ['marching', 'emerged', 'boldly', 'declaring', 'marched', 'civil', 'notre', 'waterloo', 'defended', 'proclaiming'] ,→ [29-0.95-0.13247]: ['circus', 'sum', 'imitation', 'dim', 'para', 'carnival', 'lo', 'nigiri', 'bacchanal', 'boba'] [30-0.27428-0.20607]: ['campaign', 'community', 'proclaiming', 'thrilled', 'marching', 'proclaimed', 'unsuccessful', 'defended', 'supported', 'arkansas'] ,→ [31-0.31752-0.20652]: ['schedule', 'proclaiming', 'campaign', 'missouri', 'marched', 
'revived', 'largely', 'marching', 'arkansas', 'unsuccessful'] ,→ [32-0.37102-0.22941]: ['proclaiming', 'defended', 'supported', 'campaign', 'mississippi', 'marching', 'marched', 'pancho', 'declared', 'illinois'] ,→ [33-1-0.16294]: ['mani', 'manicure', 'gel', 'pedicure', 'pedi', 'cuticle', 'asada', 'carne', 'waxing', 'eyebrow'] [34-0.22772-0.2491]: ['fought', 'voted', 'defended', 'marching', 'rally', 'campaign', 'proclaiming', 'independence', 'roosevelt', 'lincoln'] ,→ [35-1-0.34016]: ['paneer', 'der', 'und', 'zu', 'auch', 'nicht', 'ich', 'aber', 'essen', 'kann'] [36-0.17936-0.2285]: ['campaign', 'defended', 'convention', 'marching', 'nelson', 'proclaiming', 'lincoln', 'supported', 'catholic', 'marched'] ,→ [37-0.29847-0.19011]: ['lincoln', 'campaign', 'economy', 'indiana', 'proclaiming', 'marching', 'arkansas', 'avenue', 'dame', 'marched'] ,→ [38-1-0.15268]: ['mahi', 'mashed', 'undercooked', 'broccoli', 'wonton', 'chowder', 'overcooked', 'soggy', 'katsu', 'breading'] [39-0.25617-0.2291]: ['independence', 'campaign', 'defended', 'marching', 'civil', 'lincoln', 'proclaiming', 'popularity', 'marched', 'maryland'] ,→ [40-0.319-0.18811]: ['campaign', 'marching', 'begun', 'unsuccessfully', 'supported', 'mustang', 'alabama', 'proclaiming', 'tennessee', 'leaning'] ,→ [41-0.23145-0.21547]: ['indiana', 'chinese', 'fought', 'marched', 'marching', 'september', 'proclaimed', 'proclaiming', 'catholic', 'independence'] ,→ [42-0.24117-0.20143]: ['defended', 'colorado', 'marching', 'missouri', 'campaign', 'proclaiming', 'independence', 'marched', 'unsuccessfully', 'skyline'] ,→ [43-0.18681-0.22528]: ['campaign', 'independence', 'marching', 'proclaiming', 'rowdy', 'lincoln', 'defended', 'renamed', 'proclaimed', 'declaring'] ,→ [44-0.24283-0.20284]: ['chinese', 'defended', 'marched', 'proclaiming', 'independence', 'marching', 'universal', 'alabama', 'campaign', 'ralph'] ,→ [45-0.10685-0.22909]: ['marched', 'lincoln', 'proclaiming', 'unsuccessfully', 'marching', 'campaign', 'indiana', 'defended', 'proclaimed', 'revived'] ,→ [46-0.13688-0.2036]: ['campaign', 'marching', 'marched', 'emerged', 'indiana', 'puerto', 'proclaiming', 'tennessee', 'independence', 'missouri'] ,→ [47-0.2716-0.19729]: ['renamed', 'noodle', 'campaign', 'missouri', 'lincoln', 'defended', 'proclaiming', 'marched', 'resisted', 'proclaimed'] ,→ [48-0.35085-0.18812]: ['proclaiming', 'marching', 'campaign', 'boldly', 'marched', 'anti', 'arkansas', 'alamo', 'proclaim', 'kentucky'] ,→ [49-1-0.16444]: ['dog', 'cardio', 'grooming', 'vet', 'petsmart', 'gym', 'animal', 'membership', 'harkins', 'trainer'] W-LDA: [0-1-0.10334]: ['buffet', 'leg', 'wicked', 'crab', 'prime', 'station', 'bacchanal', 'wynn', 'carving', 'seafood'] [1-0.78333-0.19376]: ['register', 'cashier', 'employee', 'counter', 'starbucks', 'customer', 'barista', 'standing', 'store', 'stood'] ,→ [2-0.73333-0.23916]: ['music', 'dj', 'dance', 'band', 'chill', 'crowd', 'bar', 'fun', 'lounge', 'drink'] [3-0.65833-0.1593]: ['hostess', 'table', 'seated', 'u', 'minute', 'waited', 'server', 'sat', 'waitress', 'acknowledged'] [4-0.80833-0.23538]: ['cold', 'salad', 'lettuce', 'slow', 'sandwich', 'horrible', 'dressing', 'terrible', 'medium', 'steak'] [5-0.78333-0.42375]: ['starbucks', 'coffee', 'latte', 'espresso', 'baristas', 'barista', 'caffeine', 'mocha', 'iced', 'chai'] [6-0.65-0.33339]: ['asada', 'carne', 'burrito', 'taco', 'salsa', 'pastor', 'tortilla', 'mexican', 'pico', 'enchilada'] [7-0.58333-0.25355]: ['hash', 'pancake', 'breakfast', 'egg', 'toast', 'scrambled', 'omelet', 'biscuit', 
'benedict', 'bagel'] [8-0.93333-0.31394]: ['tire', 'brake', 'mechanic', 'car', 'repair', 'dealership', 'engine', 'vehicle', 'warranty', 'leak'] [9-0.5-0.18936]: ['car', 'sandwich', 'breakfast', 'coffee', 'wash', 'burger', 'latte', 'fry', 'friendly', 'awesome'] [10-0.78333-0.16338]: ['pho', 'excellent', 'delicious', 'authentic', 'indian', 'amazing', 'best', 'chinese', 'outstanding', 'favorite'] ,→ [11-0.83333-0.23699]: ['filthy', 'dirty', 'disgusting', 'worst', 'health', 'waste', 'suck', 'horrible', 'gross', 'nasty'] 6381 [12-0.9-0.31227]: ['roasted', 'vinaigrette', 'creamy', 'tomato', 'goat', 'chocolate', 'rich', 'caramelized', 'squash', 'topped'] ,→ [13-0.76667-0.28784]: ['tortilla', 'enchilada', 'salsa', 'bean', 'chip', 'taco', 'fish', 'canned', 'tasted', 'refried'] [14-1-0.37871]: ['et', 'est', 'une', 'je', 'mais', 'qui', 'und', 'que', 'avec', 'dans'] [15-0.69167-0.19405]: ['reservation', 'table', 'wine', 'waiter', 'hostess', 'restaurant', 'seated', 'dining', 'party', 'arrived'] ,→ [16-0.9-0.20611]: ['nail', 'manicure', 'pedicure', 'gel', 'cuticle', 'polish', 'salon', 'pedi', 'toe', 'acrylic'] [17-0.9-0.18233]: ['dance', 'club', 'bouncer', 'x', 'promoter', 'vip', 'tao', 'dj', 'marquee', 'dancing'] [18-0.93333-0.3198]: ['nigiri', 'sushi', 'roll', 'sashimi', 'yellowtail', 'ayce', 'tempura', 'eel', 'tuna', 'uni'] [19-0.83333-0.34693]: ['ramen', 'noodle', 'broth', 'pho', 'vietnamese', 'curry', 'tofu', 'dumpling', 'bo', 'vermicelli'] [20-0.475-0.15906]: ['sushi', 'margarita', 'happy', 'hour', 'seated', 'table', 'reservation', 'drink', 'salsa', 'wine'] [21-1-0.29525]: ['brisket', 'bbq', 'rib', 'pulled', 'mac', 'pork', 'slaw', 'coleslaw', 'cole', 'meat'] [22-0.65333-0.17083]: ['sushi', 'consistently', 'happy', 'mexican', 'quality', 'consistent', 'location', 'pizza', 'ha', 'great'] ,→ [23-0.95-0.24093]: ['flight', 'airline', 'shuttle', 'cab', 'airport', 'driver', 'plane', 'delayed', 'airway', 'rental'] [24-0.95-0.29522]: ['steak', 'filet', 'steakhouse', 'ribeye', 'bone-in', 'mignon', 'rare', 'creamed', 'lobster', 'gras'] [25-0.78667-0.16426]: ['attentive', 'calamari', 'pleasantly', 'appetizer', 'happy', 'pizza', 'wa', 'great', 'enjoyed', 'loved'] ,→ [26-0.95-0.25875]: ['beer', 'tap', 'brewery', 'brew', 'pub', 'sport', 'craft', 'ale', 'draft', 'ipa'] [27-0.70833-0.21733]: ['waitress', 'came', 'asked', 'ordered', 'server', 'u', 'brought', 'table', 'drink', 'said'] [28-0.51667-0.25582]: ['breakfast', 'pancake', 'bagel', 'brunch', 'toast', 'egg', 'benedict', 'omelet', 'coffee', 'hash'] [29-0.83333-0.20873]: ['great', 'staff', 'friendly', 'helpful', 'atmosphere', 'service', 'excellent', 'knowledgeable', 'environment', 'clean'] ,→ [30-0.92-0.32171]: ['pizza', 'crust', 'pepperoni', 'slice', 'topping', 'dough', 'pizzeria', 'oven', 'ny', 'mozzarella'] [31-0.875-0.29161]: ['burger', 'bun', 'in-n-out', 'patty', 'shake', 'fry', 'milkshake', 'dog', 'smashburger', 'cheeseburger'] [32-1-0.24596]: ['cirque', 'soleil', 'acrobatics', 'audience', 'performer', 'stage', 'exhibit', 'performance', 'museum', 'theater'] ,→ [33-0.775-0.17923]: ['pad', 'thai', 'gyro', 'sandwich', 'curry', 'sub', 'pita', 'panang', 'chicken', 'tom'] [34-0.85-0.26535]: ['salon', 'massage', 'stylist', 'hair', 'facial', 'haircut', 'waxing', 'pedicure', 'spa', 'barber'] [35-0.74167-0.16379]: ['mexican', 'burger', 'food', 'wing', 'average', 'taco', 'overpriced', 'asada', 'bad', 'mediocre'] [36-0.93333-0.20984]: ['produce', 'grocery', 'market', 'trader', 'farmer', 'organic', 'bulk', 'park', 'store', 'supermarket'] [37-0.9-0.20996]: ['room', 
'bed', 'shower', 'housekeeping', 'motel', 'hotel', 'stain', 'sheet', 'carpet', 'pillow'] [38-0.95-0.29802]: ['cupcake', 'frosting', 'cake', 'chocolate', 'cream', 'ice', 'yogurt', 'velvet', 'vanilla', 'boba'] [39-1-0.24957]: ['dr.', 'dentist', 'doctor', 'vet', 'dr', 'dental', 'patient', 'office', 'exam', 'clinic'] [40-0.8-0.16245]: ['bartender', 'game', 'bar', 'beer', 'band', 'dive', 'karaoke', 'football', 'song', 'jukebox'] [41-0.85-0.23321]: ['hotel', 'suite', 'spa', 'pool', 'casino', 'room', 'amenity', 'jacuzzi', 'stayed', 'spacious'] [42-1-0.16035]: ['view', 'fountain', 'bellagio', 'romantic', 'gabi', 'anniversary', 'ami', 'impeccable', 'pairing', 'mon'] [43-0.87-0.22016]: ['delivery', 'order', 'deliver', 'called', 'hung', 'pizza', 'phone', 'driver', 'delivered', 'answered'] [44-0.6-0.17233]: ['pho', 'closed', 'rude', 'bartender', 'customer', 'worst', 'suck', 'business', 'horrible', 'car'] [45-1-0.16971]: ['gym', 'contract', 'membership', 'cox', 'lease', 'fitness', 'apartment', 'account', 'tenant', 'trainer'] [46-0.88333-0.16832]: ['wine', 'bruschetta', 'tapa', 'cocktail', 'goat', 'date', 'martini', 'sangria', 'cozy', 'list'] [47-0.93333-0.23392]: ['clothing', 'clothes', 'shoe', 'accessory', 'store', 'dress', 'clearance', 'jewelry', 'pair', 'thrift'] [48-0.62-0.2303]: ['healthy', 'love', 'pizza', 'sandwich', 'favorite', 'hummus', 'gyro', 'fresh', 'pita', 'burger'] [49-0.9-0.20869]: ['chinese', 'mein', 'panda', 'bland', 'chow', 'rice', 'noodle', 'express', 'wonton', 'tasteless']
2019
640
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6382–6391 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6382 Dense Procedure Captioning in Narrated Instructional Videos Botian Shi1∗†, Lei Ji2,3†, Yaobo Liang3, Nan Duan3, Peng Chen4, Zhendong Niu1‡, Ming Zhou3 1Beijing Institute of Technology, Beijing, China 2Institute of Computing Technology, Chinese Academy of Science, Beijing, China 3Microsoft Research Asia, Beijing, China 4Microsoft Research and AI Group, Beijing, China [email protected],{leiji,yalia,nanduan,peche}@microsoft.com [email protected],[email protected] Abstract Understanding narrated instructional videos is important for both research and real-world web applications. Motivated by video dense captioning, we propose a model to generate procedure captions from narrated instructional videos which are a sequence of stepwise clips with description. Previous works on video dense captioning learn video segments and generate captions without considering transcripts. We argue that transcripts in narrated instructional videos can enhance video representation by providing fine-grained complimentary and semantic textual information. In this paper, we introduce a framework to (1) extract procedures by a cross-modality module, which fuses video content with the entire transcript; and (2) generate captions by encoding video frames as well as a snippet of transcripts within each extracted procedure. Experiments show that our model can achieve state-of-the-art performance in procedure extraction and captioning, and the ablation studies demonstrate that both the video frames and the transcripts are important for the task. 1 Introduction Narrated instructional videos provide rich visual, acoustic and language information for people to easily understand how to complete a task by procedures. An increasing amount of people resort to narrated instructional videos to learn skills and solve problems. For example, people would like to watch videos to repair a water damaged plasterboard / drywall ceiling1 or cook Cottage Pie2. This motivates us to investigate whether machines can understand narrated instructional videos like ∗This work was done during the first author’s intership in MSR Asia †Equal contribution ‡Corresponding Author 1https://goo.gl/QZFsfR 2https://goo.gl/2Z4Kb8 Video Clip Time Transcript ... get a little pecorino Romano then use three egg yolks ... use yolk and whip these eggs up together ... now fix your spaghetti and boil water … put sauce on top pasta ... grate some pecorino cheese and beat the eggs stir cheese into the eggs cook the spaghetti in the boiling water pour the egg sauce on the spaghetti and mix well ... ... ... ... Procedure and Captions 0:00:12 0:00:46 0:00:52 0:01:10 0:01:27 0:02:04 0:02:16 0:02:30 Figure 1: A showcase of video dense procedure captioning. In this task, the video frames and the transcript are given to (1) extract procedures in the video, (2) generate a descriptive and informative sentence as the caption of each procedure. humans. Besides, watching a long video is timeconsuming, captions of videos provide a quick overview of video content for people to learn the main steps rapidly. Inspired by this, our task is to generate procedure captions from narrated instructional videos which are a sequence of step-wise clips with a description as shown in Figure 1. 
Previous works on video understanding tend to recognize actions in video clips by detecting pose (Wang et al., 2013a; Packer et al., 2012) and motion (Wang et al., 2013b; Yang et al., 2013) or both (Wang et al., 2014) and fine-grained features(Rohrbach et al., 2016). These works take low-level vision features into account and can 6383 only detect human actions, instead of complicated events that occur in the scene. To deeply understand the video content, Video Dense Captioning (Krishna et al., 2017) is proposed to generate semantic captions for a video. The goal of this task is to identify all events inside a video and our target is the video dense captioning on narrated instructional videos which we call dense procedure captioning. Different from videos in the open domain, instructional videos contain an explicit sequential structure of procedures accompanied by a series of shots and descriptive transcripts. Moreover, they contain fine-grained information including actions, entities, and their interactions. According to our analysis, many fine-grained entities and actions also present in captions which are ignored by previous works like (Krishna et al., 2017; Zhou et al., 2018b). The procedure caption should be detailed and informative. Previous works (Krishna et al., 2017; Xu et al., 2016) for video captioning usually consist of two stages: (1) temporal event proposition; and (2) event captioning. However, there are two challenges for narrated instructional videos: one of the challenges is that video content fails to provide semantic information so as to extract procedures semantically; the other challenge is that it is hard to recognize fine-grained entities from the video content only, and thus tends to generate coarse captions. Previous models for dense video captioning only use video signals without considering transcripts. We argue that transcripts in narrated instructional videos can enhance video representation by providing fine-grained complimentary and semantic textual information. As shown in Figure 1, the task takes a video with a transcript as input and extracts the main procedures as well as these captions. The whole video is divided into four proposal procedure spans in sequential order including: (1) grate some pecorino cheese and beat the eggs during time span [0:00:12-0:00:46], (2) then stir cheese into the eggs during [0:00:52-0:01:10], and so on. Besides video content, transcripts can provide semantic information. Our model embeds transcript using a pre-trained context-aware model to provide rich semantic information. Furthermore, with the transcript, our model can directly ”copy” many fine-grained entities, e.g. pecorino cheese for procedure captioning. In this paper, we propose utilizing multi-modal content of videos including frame features and transcripts to conduct procedure extraction and captioning. First, we use the transcript of instructional videos as a global text feature and fuse it with video signals to construct context-aware features. Then we use temporal convolution to encode these features and generate procedure proposals. Next, the fused features of video and transcript tokens within the proposed time span are used to generate the final caption via a recurrent model. 
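In other words, the captioning stage needs the subset of transcript tokens whose timestamps fall inside a proposed span. A minimal sketch of this selection step, assuming the ASR transcript is available as (word, timestamp) pairs (the data format and function name here are illustrative, not taken from the authors' code):

```python
# Illustrative sketch, not the authors' code: pick the transcript tokens whose ASR
# timestamps fall inside a proposed procedure span [t_start, t_end], so the caption
# decoder can condition on this snippet together with the frames in the same span.
from typing import List, Tuple

Token = Tuple[str, float]  # (word, timestamp in seconds); the exact format is assumed

def transcript_snippet(tokens: List[Token], t_start: float, t_end: float) -> List[str]:
    """Return the words spoken within the proposed time span."""
    return [word for word, t in tokens if t_start <= t <= t_end]

if __name__ == "__main__":
    asr = [("grate", 12.0), ("some", 12.5), ("pecorino", 13.1), ("cheese", 13.6),
           ("and", 30.2), ("beat", 30.6), ("the", 30.9), ("eggs", 31.3)]
    print(transcript_snippet(asr, 12.0, 14.0))  # ['grate', 'some', 'pecorino', 'cheese']
```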
Experiments on the YouCookII dataset (Zhou et al., 2018a) (a cooking-domain instructional video corpus) are conducted to show that our model can achieve state-of-the-art results and the ablation studies demonstrate that the transcript can not only improve procedure proposition performance but also be very effective for procedure captioning. The contributions of this paper are as follows: 1. We propose a model fusing transcript of narrated instructional video during procedure extraction and captioning. 2. We employ the pre-trained BERT(Devlin et al., 2018) and self-attention(Vaswani et al., 2017) layer to embed transcript, and then integrate them to visual encoding during procedure extraction. 3. We adopt the sequence-to-sequence model to generate captions by merging tokens of the transcript with the aligned video frames. 2 Related Works Narrated Instructional Video Understanding Previous works aim to ground the description to the video. (Malmaud et al., 2015) adopted an HMM model to align the recipe steps to the narration. (Naim et al., 2015) utilize latent-variable based discriminative models (CRF, Structured Perceptron) for unsupervised alignment. Besides the alignment of transcripts with video, (Alayrac et al., 2016, 2018) propose to learn the main steps from a set of narrated instructional videos for five different tasks and formulate the problem into two clustering problems. Graph-based clustering is also adopted to learn the semantic storyline of instructional videos in (Sener et al., 2015). These works assume that ”one task” has the same procedures. Different from previous works, we focus on learning more complicated procedures for 6384 ResNet-34 + Transformer Input Video Context-Aware Fusion Module ... ... ... ... Transcript I’m going to show you ... boiling some water ... put potato into the pot ... cook with beef and sr ... I’m going for mashing potato and ... this is the recipe of ... Procedure Extracon Module ti start ti end …... putpotatoes and dump them into a pan. …… spread the mixture and the potatoes in a bowl. …… add salt and pepper …... Transcript: ….. mashing potato ... beef mixture ….. Encoder Decoder Predicon Score Offset Center Offset Length Average Pooling of Selected Frame Features Procedure Caponing Module Predicted Procedures Transcript Embedding Self-Aenon Layer n Self-Aenon Layer 1 Pre-trained BERT Snippets of Transcript LSTM for Procedure Predicon ti start ti end ti start ti end Feature Matrix Feature Matrix Score Feature Posion Embedding Figure 2: The main structure of our model. each video and propose a neural network model for step-wise summarization. Temporal action proposal is designed to divide a long video into contiguous segments as a sequence of actions, which is similar to the first stage of our model. (Shou et al., 2016) adopt 3D convolutional neural networks to generate multi-scale proposals. DAPs in (Escorcia et al., 2016) apply a sliding window and a Long Short-Term Memory (LSTM) network for video content encoding and predicting proposals covered by the window. SST in (Buch et al., 2017) effectively generates proposals in a single pass. However, previous methods do not consider context information to produce nonoverlapped procedures. (Zhou et al., 2018a) is the most similar work to ours, which is designed to detect long complicated event proposals rather than actions. We adopt this framework and inject the textual transcript of narrated instructional videos as our first step. 
Dense video caption aims to generate descriptive sentences for all events in the video. Different from video captioning and paragraph generation, dense video caption requires segmenting of each video into a sequence of temporal proposals with corresponding captions. (Krishna et al., 2017) resorts to the DAP method (Escorcia et al., 2016) for event detection and apply the contextaware S2VT model (Venugopalan et al., 2015). (Yu et al., 2018) propose to generate long and detailed description for sport videos. (Li et al., 2018) train jointly on unifying the temporal proposal localization and sentence generation for dense video captioning. (Xiong et al., 2018) assembles temporally localized description to produce a descriptive paragraph. (Duan et al., 2018) propose weakly supervised dense event captioning, which does not require temporal segment annotations, and decomposes the problem into a pair of dual tasks. (Wang et al., 2018a) exploit both past and future context for predicting accurate event proposals. (Zhou et al., 2018b) adopt a transformer for action proposing and captioning simultaneously. Besides, there are also some works try to incorporate multi-modal information (e.g. audio stream) for dense video captioning task(Ramanishka et al., 2016; Xu et al., 2017; Wang et al., 2018b). The major difference is that our work adopts a different model structure and fuses transcripts to further enhance semantic representation. Experiments show that transcripts can improve both procedure ex6385 traction and captioning. 3 Model In this section, we describe our framework and model details as shown in Figure 2. First, we adopt a context-aware video-transcript fusion module to generate features by fusing video information and transcript embedding; Then the procedure extraction module takes the embedded features and predicts procedures with various lengths; Finally, the procedure captioning module generates captions for each procedure by an encoder-decoder based model. 3.1 Context-Aware Fusion Module We first encode transcripts and video frames separately and then extract cross-modal features by feeding both embeddings into a context-aware model. To embed transcripts, we first split all tokens in the transcript by a sliding window and input them into a uncased BERT-large (Devlin et al., 2018) model. Next, we encode these sentences by a Transformer (Vaswani et al., 2017) and take the first output as the context-aware transcript embedding e ∈Re. To embed the videos, we uniformly sample T frames and encode each frame vt in V = {v1, · · · , vT } to an embedding representation by an ImageNet-pre-trained ResNet-32 (He et al., 2016) network. Then we adopt another Transformer model to further encode the context information, and output X = {x1, · · · , xT } ∈RT×d. Finally, we combine each of the frame features in X with transcript feature e to get the fused feature C = {c1, · · · , ct, · · · , cT |ct = {xt ◦e}} and feed it into a Bi-directional LSTM (Hochreiter and Schmidhuber, 1997) in order to encode past and future contextual information of video frames: F = Bi-LSTM(C) where F = {f1 · · · fT } ∈ RT×f, and f is the hidden size of the LSTM layers. 3.2 Procedure Extraction Module We take the encoded T feature vectors F of each video as the elementary units to generate procedure proposals. We follow the idea in (Zhou et al., 2018a; Krishna et al., 2017) that (1) generate a lot of anchors, i.e. proposals, with different lengths and (2) use the frame features within a proposal span to predict plausible scores. 
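The features F on which these proposals are scored come from the context-aware fusion of Section 3.1. A minimal sketch of that fusion, assuming precomputed frame features X and a single transcript embedding e (module names and hidden sizes are our assumptions, not the released implementation):

```python
# Minimal sketch of the Section 3.1 fusion (module names and hidden sizes are our
# assumptions, not the released implementation): tile the transcript embedding e over
# the T frame features, concatenate (c_t = x_t with e), and run a Bi-LSTM to get F.
import torch
import torch.nn as nn

class ContextAwareFusion(nn.Module):
    def __init__(self, frame_dim: int, text_dim: int, hidden_dim: int):
        super().__init__()
        self.bilstm = nn.LSTM(frame_dim + text_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, frames: torch.Tensor, transcript: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, frame_dim); transcript: (B, text_dim)
        T = frames.size(1)
        tiled = transcript.unsqueeze(1).expand(-1, T, -1)   # repeat e for every frame
        fused = torch.cat([frames, tiled], dim=-1)          # c_t = {x_t . e}
        F, _ = self.bilstm(fused)                           # (B, T, 2 * hidden_dim)
        return F

if __name__ == "__main__":
    fusion = ContextAwareFusion(frame_dim=512, text_dim=1024, hidden_dim=256)
    X = torch.randn(2, 500, 512)   # 512-d features for T = 500 frames, as in YouCookII
    e = torch.randn(2, 1024)       # transcript embedding; its exact size is not stated
    print(fusion(X, e).shape)      # torch.Size([2, 500, 512])
```

In the full model, e itself is produced by the pre-trained BERT encoder followed by self-attention layers, and the frame features are first passed through a Transformer, as described above; the paper's recurrent layers use a hidden size of 512, whereas the value in this sketch is arbitrary.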
3.2.1 Procedure Proposal Generation In order to generate different-sized procedure proposals, we adopt a 1D (temporal) convolutional layer with the setting of K different kernels; three output channels and zero padding to generate procedure candidates. The layer takes F ∈RT×f as input and outputs a list of M(k) ∈RT×3 for each k-th kernel. All these results are stacked as a tensor M ∈RK×T×3. Next, the tensor M is divided into three matrices: M = h ˆ Mm, ˆ Ml, ˆ Ms i where ˆ Mm, ˆ Ml, ˆ Ms ∈RK×T , They are designed to represent the offset of the proposal’s midpoint; the offset of the proposal’s length and the prediction score. We calculate the starting and ending timestamp of each proposal by the offset of midpoint and length. Finally, a non-linear projection is applied on each matrix: Mm = tanh( ˆ Mm), Ml = tanh( ˆ Ml), Ms = σ( ˆ Ms) where σ is the Sigmoid projection. 3.2.2 Procedure Proposal Prediction It is obvious that all proposed procedure candidates are co-related to each other. In order to encode this interaction, we follow the method in (Zhou et al., 2018a) which uses an LSTM model to predict a sequence from the K × T generated procedure proposal. The input of the recurrent prediction model for each time step consists of three parts: frame features, the position embedding, the plausibility score feature. Frame Features For a generated procedure proposal, the corresponding feature vectors F(k,t) are calculated as follows: F(k,t) =  fC(k,t)−L(k,t), · · · , fC(k,t)+L(k,t) (1) C(k, t) = ⌊t + k(k) × M(k,t) m ⌋ (2) L(k, t) = ⌊k(k) × M(k,t) l 2 ⌋ (3) where k = {k1, · · · , kK} is a list of different kernel sizes. The M(k,t) m and M(k,t) l represent the midpoint and length offset of the span for k-th kernel and t-th frame respectively and k(k) is the length of the k-th kernel. Position Embedding We treat all possible positions as a list of tokens and use an embedding layer to get a continuous representation. The [BOS] and [EOS], i.e. the begin of sentence and the end of sentence, are also added into the vocabulary for sequence prediction. 6386 Score Feature The score feature is a flatten of matrix Ms, i.e. s ∈RK·T×1. The input embedding of each time step is the concatenation of: 1. The averaged features of the proposal predicted in the previous step t: F(k,t) = 1 2L(k, t) L(k,t) X t′=−L(k,t) fC(k,t)+t′ (4) 2. The position embedding of the proposal. 3. The score feature s. Specifically, for the first step, the input frame feature is the averaged frame features of the entire video. F = 1 T PT t=1 ft and the position embedding is the encoding of [BOS]. The procedure extraction finishes when [EOS] is predicted, and the output of this module is a sequence of indexes of frames: P = {p1 · pL} where L is the maximum count of the predicted proposals. 3.3 Procedure Captioning Module We design an LSTM based sequence-to-sequence model (Sutskever et al., 2014) to generate captions for each extracted procedure. For the (k, t)-th extracted procedure, we calculate the starting time ts and ending time te separately and retrieve all tokens within the time span [ts, te]: E(ts, te) = {ets, · · · , ete} ⊂ {e1, · · · , eQ} where Q is the total word count of a video’s transcript. On each step, we concatenate the embedding representation of each token q ∈E(ts, te), i.e. q, with the nearest video frame feature fˆq into the input vector eq = {q ◦fˆq} of the encoder. 
We employ the hidden state of the last step after encoding all tokens in E(ts, te) and decode the caption of this extracted procedure as W = {w1, · · · , wZ} where Z is the word count of the decoded procedure caption. 3.4 Loss Functions The target of the model is to extract procedures and generate captions. The loss function consists of four parts: (1) Ls: a binary cross-entropy loss of each generated positive and negative procedure; (2) Lr: the regression loss with a smooth l1-loss (Ren et al., 2015) of a time span between the extracted and the ground-truth procedure. (3) Lp: the cross-entropy loss of each proposed procedure in the predicted sequence of proposals. (4) Lc: the cross-entropy loss of each token in the generated procedure captions. Here are the formulations: L = αsLs + αrLr + αpLp + αcLc (5) Ls = −1 CP CP X i=1 log(MP s ) − 1 CN CN X i=1 log(1 −MN s ) (6) Lr = 1 CP CP X i=1 ||Bpred i −Bgt i ||s−l1 (7) Lp = −1 L L X l=1 log(pl1(gtl) l ) (8) Lc = −1 L L X l=1 1 |Wl| X w∈Wl log(w1(gtw)) (9) where MP s and MN s are the scoring matrix of positive and negative samples in a video, and CP and CN represent the count separately. Here we regard a sample as positive if its IoU (Intersection of Union) with any ground-truth procedure is more than 0.8. If the IoU is less than 0.2, we treat it as negative. The loss Ls aims to enlarge the score of all positive samples and decrease the score otherwise. The Bpred i and Bgt i represent the boundary (calculated by the offset of midpoint and length) of the positive sample and ground-truth procedure separately. We only take positive samples into account and conduct the regression with Lr to shorten the distance between all positive samples and the ground-truth procedures. The pl is the classification result of the procedure extraction module and the value of 1 will be 1 if the predicted class of extracted procedure proposal is identical to the class of the groundtruth proposal with the maximal IoU and 0 otherwise. The cross-entropy loss Lp aims to exploit the model to correctly select the most similar proposal of each ground-truth procedure from many positive samples. Finally, W stores all decoded captions of procedures of a video. The Lc is designed for the captioning module based on the extracted procedures. 6387 4 Experiment and Case Study 4.1 Evaluation Metrics We separately evaluate the procedure extraction and captioning module. For procedure extraction, we adopt the widely used mJacc (mean of Jaccard) (Bojanowski et al., 2014) and mIoU (mean of IoU) metrics for evaluating the procedure proposition. The Jaccard calculates the intersection of the predicted and ground-truth procedure proposals over the length of the latter. The IoU replaces the denominator part with the union of predicted and ground-truth procedures. For procedure captioning, we adopt BLEU4(Papineni et al., 2002) and METEOR(Banerjee and Lavie, 2005) as the metrics to evaluate the performance on the result of captioning based on both extracted and ground-truth procedures. 4.2 Dataset In this paper, we use the YouCookII3 (Zhou et al., 2018a) dataset to conduct experiments. It contains 2000 videos dumped from YouTube which are all instructional cooking recipe videos. For each video, human annotators were asked to first label the starting and ending time of procedure segments, and then write captions for each procedure. 
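These annotated time spans are exactly what the predicted proposals are compared against: temporal IoU decides which proposals count as positive (above 0.8) or negative (below 0.2) samples for the loss Ls in Section 3.4, and Jaccard and IoU are the evaluation measures of Section 4.1. A small self-contained illustration (function names are ours):

```python
# Self-contained illustration (function names are ours) of the temporal overlap measures:
# Jaccard divides the intersection by the ground-truth length, IoU by the union, and the
# IoU thresholds 0.8 / 0.2 define positive / negative proposals for the score loss L_s.
def intersection(pred, gt):
    return max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))

def jaccard(pred, gt):
    return intersection(pred, gt) / (gt[1] - gt[0])

def iou(pred, gt):
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - intersection(pred, gt)
    return intersection(pred, gt) / union if union > 0 else 0.0

def proposal_label(pred, gt, pos_thresh=0.8, neg_thresh=0.2):
    score = iou(pred, gt)
    if score > pos_thresh:
        return "positive"
    if score < neg_thresh:
        return "negative"
    return "unused"   # the paper only specifies the two extreme cases

if __name__ == "__main__":
    gt, pred = (12.0, 46.0), (10.0, 44.0)   # spans in seconds
    print(round(jaccard(pred, gt), 3), round(iou(pred, gt), 3), proposal_label(pred, gt))
    # 0.941 0.889 positive
```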
This dataset contains pre-processed frame features (T = 500 frames for each video, each frame feature is a 512-d vector, extracted by ResNet-32) which were used in (Zhou et al., 2018a). In this paper, we also use these pre-computed video features for our task. Besides the video content, our proposed model also relies on transcripts to provide multi-modality information. Since the YouCookII dataset does not have transcripts, we crawl all transcripts automatically generated by YouTube’s ASR engine. YouCookII provides a partition on these 2000 videos: 1333 for training, 457 for validation and 210 for testing. However, the labels of 210 testing videos are unpublished, we can only adopt the training and validation dataset for our experiment. We also remove several videos which are unavailable on YouTube. In all, we use 1387 videos from the YouCookII dataset. We split these videos into 967 for training, 210 for validation and 210 for testing. As shown in Table 1, even though we use 3http://youcook2.eecs.umich.edu/ validation testing Methods mJacc mIoU mJacc mIoU YouCookII Partition SCNN-prop 46.3 28.0 45.6 26.7 vsLSTM 47.2 33.9 45.2 32.2 ProcNets 51.5 37.5 50.6 37.0 Our Partition ProcNets 50.9 38.2 49.1 37.0 Ours (Video Only) 53.3 38.0 52.8 37.1 Ours (Full Model) 56.5 41.4 56.4 41.8 Table 1: Result on Procedure Extraction less data for training, we can still obtain comparable results. 4.3 Implementation Details For the procedure extraction module, we follow the method in (Zhou et al., 2018a) to use 16 different kernel sizes for the temporal convolutional layer, i.e. from 3 to 123 with the interval step of 8, which can cover the different lengths. We also used a max-pooling layer with a kernel of [8, 5] after the convolutional layer. We extract at most 16 procedures for each video, and the maximum caption length of each extracted procedure is 50. The hidden size of all recurrent model (LSTM) is 512 and we conduct a dropout for each layer with a probability of 0.5. We use two transformer models with 2048 inner hidden sizes, 8 heads, and 6 layers to encode context-aware transcripts and video frame features separately. We adopt an Adam optimizer (Kingma and Ba, 2015) with a starting learning rate of 0.000025 and α = 0.8 and β = 0.999 to train the model. The batch size of training is 4 for each GPU and we use 4 GPUs to train our model so the overall batch size is 16. 4.4 Result on Procedure Extraction Ground-Truth Procedures Predicted Procedures Methods B@4 M B@4 M Bi-LSTM +TempoAttn 0.87 8.15 0.008 4.62 End-to-End Transformer 1.42 11.20 0.30 6.58 Ours (Video Only) 2.20 17.59 1.70 16.71 Ours (Full Model) 2.76 18.08 2.61 17.43 Table 2: Result on Procedure Captioning We demonstrate the result of the procedure extraction model by Table 1. We compare our model with several baseline methods: (1) SCNN-prop (Shou et al., 2016) is the Segment CNN for pro6388 Procedure Extraction Procedure Captioning Ground-Truth Procedures Predicted Procedures Methods mJacc mIoU B@4 M B@4 M 1. Video Only Model Proposal by Video Only & Caption by Video Only 52.80 37.13 2.20 17.59 1.70 16.72 2. Transcript Only Model Proposal by Transcript Only & Caption by Transcript Only 48.25 31.66 2.43 17.66 1.09 15.23 3. Caption by Video Model Proposal by Video+Transcript & Caption by Video Only 53.83 37.72 3.12 18.24 2.59 17.38 4. Caption by Transcript Model Proposal by Video+Transcript & Caption by Transcript Only 52.66 36.54 2.12 17.27 1.85 15.80 5. 
Full Model Proposal by Video+Transcript & Caption by Video+Transcript 56.37 41.76 2.76 18.08 2.61 17.43 Table 3: Ablation experiments of our model. (All experiments are conducted on testing dataset) Ground Truth 5. Full Model 3. Capon by Video 4. Capon by Transcript (a) (b) (c) (d) (e) (f) (g) Video: Spaghe Carbonara Recipe (5.1) (5.2) (5.3) (5.4) (5.5) (5.6) (5.7) (5.8) 1. Video Only (3.1) (3.2) (3.3) (3.4) (3.5) (3.6) (3.7) (4.1) (4.2) (4.4) (4.5) (4.6) (4.7) (4.8) (4.9) (1.1) (1.2) (1.3) (1.4) (1.5) (1.6) (4.3) Predicon of Procedures 2. Transcript Only (2.1) (2.2) (2.3) (2.4) (2.5) (2.6) Figure 3: The ground-truth and extracted procedures, which are generated by our full and ablated models. (best viewed in color) posals; (2) vsLSTM is an LSTM based video summarization model (Zhang et al., 2016); (3) ProcNets (Zhou et al., 2018a) which is the previous SOTA method. As shown in Table 1, we first show the results reported in (Zhou et al., 2018a) which use the full dataset with 2000 videos. In order to ensure a fair comparison, we first run the ProcNets on the validation dataset of YouCookII and get a comparable result. In further experiments, we directly use the subset (the our partition in the table) described in the previous section. Moreover, we conduct two experiments to demonstrate the effectiveness of incorporating transcripts in this task. The Ours (Full Model) is the final model we propose, which achieves state-of-the-art results. The Ours (Video Only) model considers video content without transcripts in the procedure extraction module. Compared with ProcNets, our video only model adds a captioning module, which helps the procedure extraction module to get a better result. 4.5 Result on Procedure Captioning For evaluating procedure captioning, we consider two baseline models: (1) Bi-LSTM with temporal attention (Yao et al., 2015) (2) an end-to-end transformer based video dense captioning model proposed in (Zhou et al., 2018b). We evaluate the performance of captioning on two different procedures: (1) the ground-truth procedure; (2) the procedure extracted by models. In Table 2, we demonstrate that using ground-truth procedures can generate better captions. Additionally, our model achieves the SOTA result on BLEU-4 and METEOR metrics when using the ground-truth procedures as well as the extracted procedures. 4.6 Ablation and analysis We conduct the ablation experiments to show the effectiveness of utilizing transcripts. Table 3 lists the results. The Video Only Model only relies on video information for all modules. The Captioning by Video Model fuses transcripts during the procedure extraction which shows the transcript is effective for the extracting procedure. The Caption by Transcript Model only uses transcripts for captioning. Compared with the Caption by Video Model, we find that only using transcripts for captioning decreases performance. The reason is that only using transcripts for captioning will miss several actions appearing in the video but not mentioned in the transcript. The full Model achieves state6389 (a) Capon of Extracted Procedures (b) Capon of Ground-Truth Procedures Ground Truth (a)grate some pecorino cheese and beat the eggs (b)sr cheese into the eggs (c)cut some bacon strips into small pieces (d)cook the spaghe in the boiling water (e)heat the pan put bacon and pepper in it and cook the bacon (f)mix the spaghe with the bacon (g)pour the egg sauce on the spaghe and mix well 1. 
Full Model (1.1)mix the eggs and mix in a bowl (1.2)mix the eggs in a bowl (1.3)cut the meat into pieces (1.4)mix some olive oil in a bowl (1.5)add salt and pepper and pepper to the bowl (1.6)mix the sauce and mix (1.7)pour the sauce in the pan and sr (1.8)add the pasta and mix it with the sauce 2. Capon by Video (2.1)add some oil in a pan and add some water (2.2)add a lile of oil and add a pan and add some oil (2.3)add oil and add to a pan and add some oil (2.4)add salt and pepper to the pan and sr (2.5)add the chicken to the pan and sr (2.6)add the sauce to the pan and sr (2.7)add the pasta and add the sauce and mix 3. Capon by Transcript (3.1)add the sauce and soy sauce and sugar to the rice (3.2)mix the onion garlic garlic powder and pepper and pepper to the bowl (3.3)add the rice and chopped onions and garlic paste (3.4)add salt and pepper and sr (3.5)add salt and pepper and pepper to the pan (3.6)add the pasta to the wok (3.7)coat the chicken in the flour and place the bread crumbs in the pan (3.8)add flour to the mixture and sr (3.9)add salt and pepper to the wok 4. Video Only (4.1)slice the potatoes and add some oil and pepper (4.2)add chopped garlic and garlic and add chopped onions and add the onions (4.3)add the onion and pepper and add the onion and sr (4.4)add the sauce and fry the noodles in the pan and add them to the pan (4.5)add the sauce and add the sauce and sr (4.6)add the sauce and add the sauce and sr Ground Truth (a)grate some pecorino cheese and beat the eggs (b)sr cheese into the eggs (c)cut some bacon strips into small pieces (d)cook the spaghe in the boiling water (e)heat the pan put bacon and pepper in it and cook the bacon (f)mix the spaghe with the bacon (g)pour the egg sauce on the spaghe and mix well 1. Full Model (a)mix the eggs in the bowl (b)mix some salt and mix in a bowl (c)cut the meat into a bowl (d)add salt and pepper to the bowl (e)add salt and pepper to the bowl and mix well (f)pour the sauce in the pan (g)add the pasta and mix it with the sauce 2. Capon by Video (a)add some oil and salt and pepper to a bowl (b)add a bowl of water and add to a bowl of water (c)add a lile of oil on a pan (d)add oil and a pan and add some oil (e)add oil and add to a pan and add some oil (f)add some oil and salt to the pan and sr (g)add the pasta and add the sauce to the pan and mix 3. Capon by Transcript (a)mix the eggs and soy sauce and sugar to the bowl (b)add some chili sauce and chili powder to the wok (c)place the sandwich on the bread (d)add the cheese and pepper to the salad (e)add the meat and pepper to the bowl and mix together (f)heat the pan in the pan (g)add soy sauce soy sauce soy sauce and sugar and mix together 4. Video Only (a)cut the potatoes into a bowl and add some oil and pepper (b)cut a pan and add some oil and add the pan (c)cut the potatoes into a bowl and add them (d)heat some oil in a pan and add some chopped onions and add some chopped onions and pepper (e)add chopped garlic and garlic and garlic and add to the pot (f)add the sauce and cook in the pan and sr (g)add the sauce and add the sauce and sr 5. Transcript Only (5.1)blend the pepper and a small pieces (5.2)mix cheese bread crumbs parmesan cheese egg yolks a bowl and whisk the mixture (5.3)add sugar cream ketchup and worcestershire sauce on a pan (5.4)add some tomato into a bowl (5.5)add salt and black pepper to the salad and mix (5.6)mix the cabbage and salt in a bowl 5. 
Transcript Only (a)mix the egg yolks milk and (b)add some milk and worcestershire sauce to the pan (c)place the bacon into a bowl (d) take the bread on top of the bread mixture with some cheese and top it (e) add some salt and pepper and an egg into the bowl (f)add beef into the pan and add the meat (g) pour the mixture parmesan cheese egg mixture and the mixture Figure 4: The procedure captions, which are generated based on the Extracted Procedures and the Ground-Truth Procedures. (best review in color) of-the-art results on procedure extraction and captioning, while Caption by Video Model gets better results on captioning for the ground-truth procedure. To sum up, both video frame frames and transcripts are important for the task. We study several captioning results and find that the Caption by Video Model tends to generate general descriptions such as ”add ...” for all steps. Nonetheless, our model tends to generate various fine-grained captions. Motivated by this, we conduct another experiment to use cherry picked sentence like add the chicken (or beef, carrot, onion, etc.) to the pan and stir or add pepper and salt to the bowl as the captions for all procedures and can still achieve a good result on BLEU (4.0+) and METEOR (16.0+). We find that the distribution of captions in this dataset is biased because there are many similar procedure descriptions even in different recipes. 4.7 Case study We also present a qualitative analysis based on the case study shown in Figures 3 and 4 (best viewed in color). Figure 3 visualizes the ground-truth procedures and the predicted procedures. The horizontal axis is the time and the number on each small ribbon is the ID of the procedure. We have slightly shifted the overlapping procedures in order to show the results more clearly. It can be seen that the extracted procedures by our full model have the most similar trend with the ground-truth procedures. Figure 4 presents the generated captions on extracted procedures (Fig.4a) and ground-truth procedures (Fig.4b) separately. Each column shows captioning results from one model, and the first column is the ground-truth result. On one hand, only the full model can generate eggs in the procedure (1.1) and (1.2), which is also an important ingredient entity in the ground-truth captions. On the other hand, the ingredient bacon in groundtruth caption (c) is ignored by all models. In fact, our Full Model predicts meat synonyms of bacon. Besides, the Full Model can also generate the action cut and the final state of ingredient pieces mentioned in transcript, while it is hard to recognize using only video signals. 5 Conclusion In this paper, we propose a framework for procedure extraction and captioning modeling in instructional videos. Our model use narrated tran6390 scripts of each video as the supplementary information and can help to predict and caption procedures better. The extensive experiments demonstrate that our model achieves state-of-the-art results on the YouCookII dataset, and ablation studies indicate the effectiveness of utilizing transcripts. Acknowledgments We thank the reviewers for their carefully reading and suggestions. This work was supported by the National Natural Science Foundation of China (No. 61370137), the National Basic Research Program of China (No.2012CB7207002), the Ministry of Education - China Mobile Research Foundation Project (2016/2-7). References Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2016. 
Unsupervised learning from narrated instruction videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4575–4583. Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2018. Learning from narrated instruction videos. IEEE transactions on pattern analysis and machine intelligence, 40(9):2194–2208. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Piotr Bojanowski, R´emi Lajugie, Francis Bach, Ivan Laptev, Jean Ponce, Cordelia Schmid, and Josef Sivic. 2014. Weakly supervised action labeling in videos under ordering constraints. In European Conference on Computer Vision, pages 628–643. Springer. Shyamal Buch, Victor Escorcia, Chuanqi Shen, Bernard Ghanem, and Juan Carlos Niebles. 2017. Sst: Single-stream temporal action proposals. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2911–2920. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Xuguang Duan, Wenbing Huang, Chuang Gan, Jingdong Wang, Wenwu Zhu, and Junzhou Huang. 2018. Weakly supervised dense event captioning in videos. In Advances in Neural Information Processing Systems, pages 3063–3073. Victor Escorcia, Fabian Caba Heilbron, Juan Carlos Niebles, and Bernard Ghanem. 2016. Daps: Deep action proposals for action understanding. In European Conference on Computer Vision, pages 768– 784. Springer. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations. Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 706–715. Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, and Tao Mei. 2018. Jointly localizing and describing events for dense video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7492–7500. Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin P Murphy. 2015. What’s cookin’? interpreting cooking videos using text, speech and vision. North American Chapter of the Association for Computational Linguistics, pages 143–152. Iftekhar Naim, Young C Song, Qiguang Liu, Liang Huang, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2015. Discriminative unsupervised alignment of natural language instructions with corresponding video segments. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 164–174. Benjamin Packer, Kate Saenko, and Daphne Koller. 2012. A combined pose, object, and feature model for action understanding. In CVPR, pages 1378– 1385. Citeseer. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Vasili Ramanishka, Abir Das, Dong Huk Park, Subhashini Venugopalan, Lisa Anne Hendricks, Marcus Rohrbach, and Kate Saenko. 2016. Multimodal video description. In Proceedings of the 24th 6391 ACM international conference on Multimedia, pages 1092–1096. ACM. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99. Marcus Rohrbach, Anna Rohrbach, Michaela Regneri, Sikandar Amin, Mykhaylo Andriluka, Manfred Pinkal, and Bernt Schiele. 2016. Recognizing finegrained and composite activities using hand-centric features and script data. International Journal of Computer Vision, 119(3):346–373. Ozan Sener, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. 2015. Unsupervised semantic parsing of video collections. In Proceedings of the IEEE International Conference on Computer Vision, pages 4480–4488. Zheng Shou, Dongang Wang, and Shih-Fu Chang. 2016. Temporal action localization in untrimmed videos via multi-stage cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1049–1058. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond J Mooney, and Kate Saenko. 2015. Translating videos to natural language using deep recurrent neural networks. North American Chapter of the Association for Computational Linguistics, pages 1494–1504. Chunyu Wang, Yizhou Wang, and Alan L Yuille. 2013a. An approach to pose-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 915–922. Jingwen Wang, Wenhao Jiang, Lin Ma, Wei Liu, and Yong Xu. 2018a. Bidirectional attentive fusion with context gating for dense video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7190–7198. LiMin Wang, Yu Qiao, and Xiaoou Tang. 2013b. Motionlets: Mid-level 3d parts for human motion recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2674–2681. Limin Wang, Yu Qiao, and Xiaoou Tang. 2014. Video action detection with relational dynamic-poselets. In European Conference on Computer Vision, pages 565–580. Springer. Xin Wang, Yuanfang Wang, and William Yang Wang. 2018b. Watch, listen, and describe: Globally and locally aligned cross-modal attentions for video captioning. North American Chapter of the Association for Computational Linguistics, 2:795–801. Yilei Xiong, Bo Dai, and Dahua Lin. 2018. Move forward and tell: A progressive generator of video descriptions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 468–483. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5288–5296. 
Jun Xu, Ting Yao, Yongdong Zhang, and Tao Mei. 2017. Learning multimodal attention lstm networks for video captioning. In Proceedings of the 25th ACM international conference on Multimedia, pages 537–545. ACM. Yang Yang, Imran Saleemi, and Mubarak Shah. 2013. Discovering motion primitives for unsupervised grouping and one-shot learning of human actions, gestures, and expressions. IEEE transactions on pattern analysis and machine intelligence, 35(7):1635–1648. Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. 2015. Describing videos by exploiting temporal structure. In Proceedings of the IEEE international conference on computer vision, pages 4507–4515. Huanyu Yu, Shuo Cheng, Bingbing Ni, Minsi Wang, Jian Zhang, and Xiaokang Yang. 2018. Fine-grained video captioning for sports narrative. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6006–6015. Ke Zhang, Wei-Lun Chao, Fei Sha, and Kristen Grauman. 2016. Video summarization with long shortterm memory. In European conference on computer vision, pages 766–782. Springer. Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018a. Towards automatic learning of procedures from web instructional videos. In Thirty-Second AAAI Conference on Artificial Intelligence. Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. 2018b. End-to-end dense video captioning with masked transformer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8739–8748.
2019
641
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6392–6405 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6392 Latent Variable Model for Multi-modal Translation Iacer Calixto Miguel Rios ILLC The University of Amsterdam {iacer.calixto,m.riosgaona,w.aziz}@uva.nl Wilker Aziz Abstract In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and K´ad´ar, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the KL term to promote models with nonnegligible mutual information between inputs and latent variable, and (iii) by training on additional target-language image descriptions (i.e. synthetic data). 1 Introduction Multi-modal machine translation (MMT) is an exciting novel take on machine translation (MT) where we are interested in learning to translate sentences in the presence of visual input (mostly images). In the last three years there have been shared tasks (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018) where many research groups proposed different techniques to integrate images into MT, e.g. Caglayan et al. (2017); Libovick´y and Helcl (2017). Most MMT models expand neural machine translation (NMT) architectures (Sutskever et al., 2014; Bahdanau et al., 2015) to additionally condition on an image in order to compute the likelihood of a translation in context. This gives the model a chance to exploit correlations in visual and language data, but also means that images must be available at test time. An exception to this rule is the work of Toyama et al. (2016) who exploit the framework of conditional variational auto-encoders (CVAEs) (Sohn et al., 2015) to decouple the encoder used for posterior inference at training time from the encoder used for generation at test time. Rather than conditioning on image features, the model of Elliott and K´ad´ar (2017) learns to rank image features using language data in a multi-task learning (MTL) framework, therefore sharing parameters between a translation (generative) and a sentence-image ranking model (discriminative). This similarly exploits correlations between the two modalities and has the advantage that images are also not necessary at test time. In this work, we also aim at translating without images at test time, yet learning a visually grounded translation model. To that end, we resort to probabilistic modelling instead of multi-task learning and estimate a joint distribution over translations and images. In a nutshell, we propose to model the interaction between visual and textual features through a latent variable. This latent variable can be seen as a stochastic embedding which is used in the target-language decoder, as well as to predict image features. 
Our experiments show that this joint formulation improves over an MTL approach (Elliott and K´ad´ar, 2017), which does model both modalities but not jointly, and over the CVAE of Toyama et al. (2016), which uses image features to condition an inference network but crucially does not model the images. The main contributions of this paper are:1 • we propose a novel multi-modal NMT model 1Code and pre-trained models available in https:// github.com/iacercalixto/variational_mmt. 6393 that incorporates image features through latent variables in a deep generative model. • our latent variable MMT formulation improves considerably over strong baselines, and compares favourably to the state-of-the-art. • we exploit correlations between both modalities at training time through a joint generative approach and do not require images at prediction time. The remainder of this paper is organised as follows. In §2, we describe our variational MMT models. In §3, we introduce the data sets we used and report experiments and assess how our models compare to prior work. In §4, we position our approach with respect to the literature. Finally, in §5 we draw conclusions and provide avenues for future work. 2 Variational Multi-modal NMT Similarly to standard NMT, in MMT we wish to translate a source sequence xm 1 ≜⟨x1, · · · , xm⟩ into a target sequence yn 1 ≜⟨y1, · · · , yn⟩. The main difference is the presence of an image v which illustrates the sentence pair ⟨xm 1 , yn 1 ⟩. We do not model images directly, but instead an 2048dimensional vector of pre-activations of a ResNet50’s pool5 layer (He et al., 2015). In our variational MMT models, image features are assumed to be generated by transforming a stochastic latent embedding z, which is also used to inform the RNN decoder in translating source sentences into a target language. Generative model We propose a generative model of translation and image generation where both the image v and the target sentence yn 1 are independently generated given a common stochastic embedding z. The generative story is as follows. We observe a source sentence xm 1 and draw an embedding z from a latent Gaussian model, Z|xm 1 ∼N(µ, diag(σ2)) µ = fµ(xm 1 ; θ) σ = fσ(xm 1 ; θ) , (1) where fµ(·) and fσ(·) map from a source sentence to a vector of locations µ ∈Rc and a vector of scales σ ∈Rc >0, respectively. We then proceed to draw the image features from a Gaussian observation model, V |z ∼N(ν, ς2I) ν = fν(z; θ) , (2) where fν(·) maps from z to a vector of locations ν ∈Ro, and ς ∈R>0 is a hyperparameter of the model (we use 1). Conditioned on z and on the source sentence xm 1 , and independently of v, we generate a translation by drawing each target word in context from a Categorical observation model, Yj|xm 1 , z, y<j ∼Cat(πj) πj = fπ(xm 1 , y<j, z; θ) , (3) where fπ(·) maps z, xm 1 , and a prefix translation y<j to the parameters πj of a categorical distribution over the target vocabulary. Functions fµ(·), fσ(·), fν(·), and fπ(·) are implemented as neural networks whose parameters are collectively denoted by θ. In particular, implementing fπ(·) is as simple as augmenting a standard NMT architecture (Bahdanau et al., 2015; Luong et al., 2015), i.e. encoder-decoder with attention, with an additional input z available at every time-step. 
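Concretely, the latent Gaussian model of Eq. (1) and the image likelihood location of Eq. (2) can be sketched as follows; the hidden sizes and module names are illustrative assumptions rather than the authors' implementation, and the sampled z would additionally be fed to the attentive decoder at every time step as just described:

```python
# Minimal sketch of Eqs. (1)-(2) (hidden sizes and module names are assumptions, not the
# authors' implementation): the average source encoder state is mapped to mu and sigma,
# z is sampled from N(mu, diag(sigma^2)), and f_nu(z) gives the mean of the Gaussian over
# the 2048-d ResNet pool5 image features. The same z is also used by the decoder (Eq. (3)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentGaussianModel(nn.Module):
    def __init__(self, enc_dim: int, latent_dim: int, img_dim: int = 2048, hidden: int = 256):
        super().__init__()
        self.loc = nn.Sequential(nn.Linear(enc_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
        self.scale = nn.Sequential(nn.Linear(enc_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
        self.img_loc = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, img_dim))

    def forward(self, h_avg: torch.Tensor):
        mu = self.loc(h_avg)                        # f_mu: linear output, Eq. (1)
        sigma = F.softplus(self.scale(h_avg))       # f_sigma: strictly positive scales
        z = mu + sigma * torch.randn_like(sigma)    # a draw from Z | x
        nu = self.img_loc(z)                        # f_nu: location of V | z, Eq. (2)
        return z, mu, sigma, nu

if __name__ == "__main__":
    model = LatentGaussianModel(enc_dim=1000, latent_dim=50)  # 50-D z as in the M30kT runs
    h_avg = torch.randn(4, 1000)   # average bidirectional encoder state; size is an assumption
    z, mu, sigma, nu = model(h_avg)
    print(z.shape, nu.shape)       # torch.Size([4, 50]) torch.Size([4, 2048])
```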
All other functions are single-layer MLPs that transform the average encoder hidden state to the dimensionality of the corresponding Gaussian variable followed by an appropriate activation.2 Note that in effect we model a joint distribution pθ(yn 1 , v, z|xm 1 ) = pθ(z|xm 1 )pθ(v|z)Pθ(yn 1 |xm 1 , z) (4) consisting of three components which we parameterise directly. As there are no observations for z, we cannot estimate these components directly. We must instead marginalise z out, which yields the marginal Pθ(yn 1 , v|xm 1 ) = Z pθ(z|xm 1 )pθ(v|z)Pθ(yn 1 |xm 1 , z)dz . (5) An important statistical consideration about this model is that even though yn 1 and v are conditionally independent given z, they are marginally dependent. This means that we have designed a data generating process where our observations 2Locations have support on the entire real space, thus we use linear activations, scales must be strictly positive, thus we use a softplus activation. 6394 y y< z xm 1 v θ n N (a) VMMTC: given the source text xm 1 , we model the joint likelihood of the translation yn 1 , the image (features) v, and a stochastic embedding z sampled from a conditional latent Gaussian model. Note that the stochastic embedding is the sole responsible for assigning a probability to the observation v, and it helps assign a probability to the translation. xm 1 yn 1 z v λ N (b) Inference model for VMMTC: to approximate the true posterior we have access to both modalities (text xm 1 , yn 1 and image v). Figure 1: Generative model of target text and image features (left), and inference model (right). yn 1 , v|xm 1 are not assumed to have been independently produced.3 This is in direct contrast with multi-task learning or joint modelling without latent variables—for an extended discussion see (Eikema and Aziz, 2019, § 3.1). Finally, Figure 1 (left) is a graphical depiction of the generative model: shaded circles denote observed random variables, unshaded circles indicate latent random variables, deterministic quantities are not circled; the internal plate indicates iteration over time-steps, the external plate indicates iteration over the training data. Note that deterministic parameters θ are global to all training instances, while stochastic embeddings z are local to each tuple ⟨xm 1 , yn 1 , v⟩. Inference Parameter estimation for our model is challenging due to the intractability of the marginal likelihood function (5). We can however employ variational inference (VI) (Jordan et al., 1999), in particular amortised VI (Kingma and Welling, 2014; Rezende et al., 2014), and estimate parameters to maximise a lowerbound Eqλ(z|xm 1 ,yn 1 ,v) [log pθ(v|z) + log Pθ(yn 1 |xm 1 , z)] −KL(qλ(z|xm 1 , yn 1 , v)||pθ(z|xm 1 )) (6) on the log-likelihood function. This evidence lowerbound (ELBO) is expressed in terms of an inference model qλ(z|xm 1 , yn 1 , v) which we design having tractability in mind. In particular, our ap3This is an aspect of the model we aim to explore more explicitly in the near future. proximate posterior is a Gaussian distribution qλ(z|xm 1 , yn 1 , v) = N(z|u, diag(s2)) u = gu(xm 1 , yn 1 , v; λ) s = gs(xm 1 , yn 1 , v; λ) (7) parametrised by an inference network, that is, an independently parameterised neural network (whose parameters we denote collectively by λ) which maps from observations, in our case a sentence pair and an image, to a variational location u ∈Rc and a variational scale s ∈Rc >0. Figure 1 (right) is a graphical depiction of the inference model. Location-scale variables (e.g. 
Gaussians) can be reparametrised, i.e. we can obtain a latent sample via a deterministic transformation of the variational parameters and a sample from the standard Gaussian distribution: z = u + ϵ ⊙s where ϵ ∼N(0, I) . (8) This reparametrisation enables backpropagation through stochastic units (Kingma and Welling, 2014; Titsias and L´azaro-Gredilla, 2014). In addition, for two Gaussians the KL term in the ELBO (6) can be computed in closed form (Kingma and Welling, 2014, Appendix B). Altogether, we can obtain a reparameterised gradient estimate of the ELBO, we use a single sample estimate of the first term, and count on stochastic gradient descent to attain a local optimum of (6). Architecture All of our parametric functions are neural network architectures. In particular, fπ is a standard sequence-to-sequence architecture with attention and a softmax output. We build upon OpenNMT (Klein et al., 2017), which we modify 6395 slightly by providing z as additional input to the target-language decoder at each time step. Location layers fµ, fν and gu, and scale layers fσ and gs, are feed-forward networks with a single ReLU hidden layer. Furthermore, location layers have a linear output while scale layers have a softplus output. For the generative model, fµ and fσ transform the average source-language encoder hidden state. We let the inference model condition on sourcelanguage encodings without updating them, and we use a target-language bidirectional LSTM encoder in order to also condition on the complete target sentence. Then gu and gs transform a concatenation of the average source-language encoder hidden state, the average target-language bidirectional encoder hidden state, and the image features. Fixed Gaussian prior We have just presented our variational MMT model in its full generality— we refer to that model as VMMTC. However, keeping in mind that MMT datasets are rather small, it is desirable to simplify some of our model’s components. In particular, the estimated latent Gaussian model (1) can be replaced by a fixed standard Gaussian prior, i.e., Z ∼N(0, I)—we refer to this model as VMMTF. Along with this change it is convenient to modify the inference model to condition on xm 1 alone, which allow us to use the inference model for both training and prediction. Importantly this also sidesteps the need for a target-language bidirectional LSTM encoder, which leaves us a smaller set of inference parameters λ to estimate. Interestingly, this model does not rely on features from v, instead only using it as learning signal through the objective in (6), which is in direct contrast with the model of Toyama et al. (2016). 3 Experiments Our encoder is a 2-layer 500D bidirectional RNN with GRU, the source and target word embeddings are 500D, and all are trained jointly with the model. We use OpenNMT to implement all our models (Klein et al., 2017). All model parameters are initialised sampling from a uniform distribution U(−0.1, +0.1) and bias vectors are initialised to ⃗0. Visual features are obtained by feeding images to the pre-trained ResNet-50 and using the activations of the pool5 layer (He et al., 2015). We apply dropout with a probability of 0.5 in the encoder bidirectional RNN, the image features, the decoder RNN, and before emitting a target word. All models are trained using the Adam optimiser (Kingma and Ba, 2014) with an initial learning rate of 0.002 and minibatches of size 40, where each training instance consists of one English sentence, one German sentence and one image (MMT). 
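Each such instance contributes a single-sample estimate of the ELBO in Eq. (6), combining a reparameterised draw as in Eq. (8) with the analytic KL between the two Gaussians. A minimal sketch of these two ingredients (variable names are ours; the prior defaults to the fixed N(0, I) used by VMMTF):

```python
# Sketch of the reparameterised sample in Eq. (8) and the closed-form Gaussian KL used in
# the ELBO of Eq. (6); variable names are ours, and the prior defaults to the fixed N(0, I)
# of VMMT_F (for VMMT_C, mu and sigma would come from the conditional prior in Eq. (1)).
import torch

def reparameterise(u: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """z = u + eps * s with eps ~ N(0, I), so gradients flow through u and s."""
    return u + torch.randn_like(s) * s

def kl_diag_gaussians(u, s, mu=None, sigma=None):
    """KL( N(u, diag(s^2)) || N(mu, diag(sigma^2)) ), summed over the latent dimensions."""
    if mu is None:
        mu = torch.zeros_like(u)
    if sigma is None:
        sigma = torch.ones_like(s)
    return (torch.log(sigma / s)
            + (s ** 2 + (u - mu) ** 2) / (2 * sigma ** 2)
            - 0.5).sum(dim=-1)

if __name__ == "__main__":
    u, s = torch.zeros(4, 50), torch.ones(4, 50)
    print(reparameterise(u, s).shape, kl_diag_gaussians(u, s))  # KL is 0 when q matches the prior
```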
Models are trained for up to 40 epochs and we perform model selection based on BLEU4, and use the best performing model on the validation set to translate test data. Moreover, we halt training if the model does not improve BLEU4 scores on the validation set for 10 epochs or more. We report mean and standard deviation over 4 independent runs for all models we trained ourselves (NMT, VMMTF, VMMTC), and other baseline results are the ones reported in the authors’ publications (Toyama et al., 2016; Elliott and K´ad´ar, 2017). We preprocess our data by tokenizing, lowercasing, and converting words to subword tokens using a bilingual BPE model with 10k merge operations (Sennrich et al., 2016b). We quantitatively evaluate translation quality using case-insensitive and tokenized outputs in terms of BLEU4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), chrF3 (Popovi´c, 2015), and BEER (Stanojevi´c and Sima’an, 2014). By using these, we hope to include word-level metrics which are traditionally used by the MT community (i.e. BLEU and METEOR), as well as more recent metrics which operate at the character level and that better correlate with human judgements of translation quality (i.e. chrF3 and BEER) (Bojar et al., 2017). 3.1 Datasets The Flickr30k dataset (Young et al., 2014) consists of images from Flickr and their English descriptions. We use the translated Multi30k (M30kT) dataset (Elliott et al., 2016), i.e. an extension of Flickr30k where for each image one of its English descriptions was translated into German by a professional translator. Training, validation and test sets contain 29k, 1014 and 1k images respectively, each accompanied by the original English sentence and its translation into German. In addition to the test set released for the first run of the multimodal translation shared task (Elliott et al., 2016), henceforth test2016, we also use test2017 released for the next run of this shared task (Elliott et al., 2017). Since this dataset is very small, we also investigate the effect of including more in-domain data to train our models. To that purpose, we use addi6396 Model BLEU4↑ METEOR↑ chrF↑ BEER↑ NMT 35.0 (0.4) 54.9 (0.2) 61.0 (0.2) 65.2 (0.1) Imagination 36.8 (0.8) 55.8 (0.4) – – Model G 36.5 56.0 – – VMMTF 37.7 (0.4) ↑0.9 56.0 (0.1) ↑0.0 62.1 (0.1) ↑1.1 66.6 (0.1) ↑1.4 VMMTC 37.5 (0.3) ↑0.7 55.7 (0.1) ↓0.3 61.9 (0.1) ↑0.9 66.5 (0.1) ↑1.3 Table 1: Results of applying variational MMT models to translate the Multi30k 2016 test set. For each model, we report the mean and standard deviation over 4 independent runs where models were selected using validation BLEU4 scores. Best mean baseline scores per metric are underlined and best overall results (i.e. means) are in bold. We highlight in green/red the improvement brought by our models compared to the best baseline mean score. tional 145K monolingual German descriptions released as part of the Multi30k dataset to the task of image description generation (Elliott et al., 2016). We refer to this dataset as comparable Multi30k (M30kC). Descriptions in the comparable Multi30k were collected independently of existing English descriptions and describe the same 29K images as in the M30kT dataset. In order to obtain features for images, we use ResNet-50 (He et al., 2015) pre-trained on ImageNet (Russakovsky et al., 2015). We report experiments using pool5 features as our image features, i.e. 2048-dimensional pre-activations of the last layer of the network. 
In order to investigate how well our models generalise, we also evaluate our models on the ambiguous MSCOCO test set (Elliott et al., 2017) which was designed with example sentences that are hard to translate without resorting to visual context available in the accompanying image. Finally, we use a 50D latent embedding z in our experiments with the translated Multi30k data, whereas in our ablative experiments and experiments with the comparable Multi30k data, we use a 500D stochastic embedding z. 3.2 Baselines We compare our work against three different baselines. The first one is a standard text-only sequenceto-sequence NMT model with attention (Luong et al., 2015), trained from scratch using hyperparameters described above. The second baseline is the variational multi-modal MT model Model G proposed by Toyama et al. (2016), where global image features are used as additional input to condition an inference network. Finally, a third baseline is the Imagination model of Elliott and K´ad´ar (2017), a multi-task MMT model which uses a shared source-language encoder RNN and is trained in two tasks: to translate from English into German and on image-sentence ranking (English↔image). 3.3 Translated Multi30k We now report on experiments conducted with models trained to translate from English into German using the translated Multi30k data set (M30kT). In Table 1, we compare our variational MMT models—VMMTC for the general case with a conditional Gaussian latent model, and VMMTF for the simpler case of a fixed Gaussian prior—to the three baselines described above. The general trend is that both formulations of our VMMT improve with respect to all three baselines. We note an improvement in BLEU and METEOR mean scores compared to the Imagination model (Elliott and K´ad´ar, 2017), as well as reduced variance (though note this is based on only 4 independent runs in our case, and 3 independent runs of Imagination). Both models VMMTF and VMMTC outperform Model G according to BLEU and perform comparably according to METEOR, especially since results reported by (Toyama et al., 2016) are based on a single run. Moreover, we also note that both our models outperform the text-only NMT baseline according to all four metrics, and by 1%–1.4% according chrF3 and BEER, both being metrics well-suited to measure the quality of translations into German and generated with subwords units. In Table 2, we report results when translating the Multi30k test2017 and the ambiguous MSCOCO test sets. Note that standard deviations for the conditional model VMMTC are considerably higher than those obtained for model VMMTF. We investigated the issue further and found out that one of the runs of VMMTC performed considerably 6397 Model BLEU4↑ METEOR↑ chrF↑ BEER↑ Multi30k 2017 test set VMMTF 30.1 (0.3) 49.9 (0.3) 57.2 (0.4) 62.2 (0.3) VMMTC 26.1 (6.6) 45.4 (7.3) 52.2 (8.4) 58.6 (5.8) Ambiguous MSCOCO 2017 test set VMMTF 25.5 (0.5) 44.8 (0.2) 52.0 (0.3) 58.3 (0.2) VMMTC 21.8 (5.6) 41.2 (6.3) 47.4 (7.6) 55.3 (5.2) Table 2: Results of applying variational MMT models to translate the Multi30k 2017 and the ambiguous MSCOCO test sets. For each model, we report the mean and standard deviation over 4 independent runs where models were selected using validation BLEU4 scores. Best overall results (i.e. means) are in bold. Note that standard deviations for the conditional model VMMTC are considerably higher than those obtained for model VMMTF. 
This is partly due to the fact that one of the runs of VMMTC underperformed compared to the other three. worse than the others; this caused the mean scores to be much lower and also increased the variance significantly. Finally, one interesting finding is that all four metrics indicate that the fixed-prior model VMMTF either performs slightly (Table 1) or considerably better (Table 2) than the conditional model VMMTC. We speculate this is partly due to VMMTF’s simpler parameterisation, after all, we have just about 29k training instances to estimate two sets of parameters (θ and λ) and the more complex VMMTC requires an additional bidirectional LSTM encoder for the target text. 3.4 Back-translated Comparable Multi30k Since the translated Multi30k dataset is very small, we also investigate the effect of including more in-domain data to train our models. For that purpose, we use additional 145K monolingual German descriptions released as part of the comparable Multi30k dataset (M30kC). We train a text-only NMT model to translate from German into English using the original 29K parallel sentences in the translated Multi30k (without images), and apply this model to back-translate the 145K German descriptions into English (Sennrich et al., 2016a). In this set of experiments, we explore how pretraining models NMT, VMMTF and VMMTC using both the translated and back-translated comparable Multi30k affects results. Models are pre-trained on mini-batches with a one-to-one ratio of translated and back-translated data.4 All three models NMT, VMMTF and VMMTC, are further fine4One pre-training epoch corresponds to about 290K examples, i.e. we up-sample the smaller translated Multi30k data set to achieve the one-to-one ratio. Figure 2: Validation set BLEU scores per number of pre-trained epochs for models VMMTC and VMMTF pre-trained using the comparable Multi30k and translated Multi30k data sets. The height of a bar represents the mean and the black vertical lines indicate ±1 std over 4 independent runs. tuned on the translated Multi30k until convergence, and model selection using BLEU is only applied during fine-tuning and not at the pre-training stage. In Figure 2, we inspect for how many epochs should a model be pre-trained using the additional noisy back-translated descriptions, and note that both VMMTF and VMMTC reach best BLEU scores on the validation set when pre-trained for about 3 epochs. As shown in Figure 2, we note that when using additional noisy data VMMTC, which uses a conditional prior, performs considerably better than its counterpart VMMTF, which has a fixed prior. These results indicate that VMMTC makes better use of additional synthetic data than VMMTF. Some of the reasons that explain these results are (i) the conditional prior p(z|x) can learn 6398 Model BLEU4↑ METEOR↑ # train sents. NMT 37.7 (0.5) 56.0 (0.3) 145K VMMTF 38.4 (0.6) -↑0.7 56.0 (0.3) -↑0.0 VMMTC 38.4 (0.2) -↑0.7 56.3 (0.2) -↑0.3 Imagination 37.8 (0.7) 57.1 (0.2) 654K Table 3: Results for models pre-trained using the translated and comparable Multi30k to translate the Multi30k test set. We report the mean and standard deviation over 4 independent runs. Our best overall results are highlighted in bold, and we highlight in green/red the improvement/decrease brought by our models compared to the baseline mean score. We additionally show results for the Imagination model trained on 4× more data (as reported in the authors’ paper). 
to be sensitive to whether x is gold-standard or synthetic, whereas p(z) cannot; (ii) in the conditional case the posterior approximation q(z|x, y, v) can directly exploit different patterns arising from a gold-standard versus a synthetic ⟨x, y⟩pair; and finally (iii) our synthetic data is made of targetlanguage gold-standard image descriptions, which help train the inference network’s target-language BiLSTM encoder. In Table 3, we show results when applying VMMTF and VMMTC to translate the Multi30k test set. Both models and the NMT baseline are pretrained on the translated and the back-translated comparable Multi30k data sets, and are selected according to validation set BLEU scores. For comparison, we also include results for Imagination (Elliott and K´ad´ar, 2017) when trained on the translated Multi30k, the WMT News Commentary English-German dataset (240K parallel sentence pairs) and the MSCOCO image description dataset (414K German descriptions of 83K images, i.e. 5 descriptions for each image). In contrast, our models observe 29K images (i.e. the same as the models evaluated in Section 3.3) plus 145K German descriptions only.5 3.5 Ablative experiments In our ablation we are interested in finding out to what extent the model makes use of the latent space, i.e. how important is the latent variable. KL free bits A common issue when training latent variable models with a strong decoder is having 5There are no additional images because the comparable Multi30k consists of additional German descriptions for the same 29K images already in the translated Multi30k. Model Number of BLEU4↑ free bits (KL) VMMTF 0 38.3 (0.2) 1 38.1 (0.3) 2 38.4 (0.4) 4 38.4 (0.4) 8 35.7 (3.1) VMMTC 0 38.5 (0.2) 1 38.3 (0.3) 2 38.2 (0.2) 4 36.8 (2.6) 8 38.6 (0.2) Table 4: Results of applying VMMT models trained with different numbers of free bits in the KL (Kingma et al., 2016) to translate the Multi30k validation set. the true posterior collapse to the prior and the KL term in the ELBO vanish to zero. In practice, that would mean the model has virtually not used the latent variable z to predict image features v, but mostly as a source of stochasticity in the decoder. This can happen because the model has access to informative features from the source bi-LSTM encoder and need not learn a difficult mapping from observations to latent representations predictive of image features. For that reason, we wish to measure how well can we train latent variable MMT models while ensuring that the KL term in the loss (Equation (6)) does not vanish to zero. We use the free bits heuristic (Kingma et al., 2016) to impose a constraint on the KL, which in turn promotes models with non-negligible mutual information between inputs and latent variables (Alemi et al., 2018). In Table 4, we see the results of different models trained using different number of free bits in the KL component. We note that including free bits improves translations slightly, but note that finding the optimal number of free bits requires hyper-parameter search. 3.6 Discussion In Table 5 we show how our different models translate two examples of the M30k test set. In the first example (id#801), training on additional backtranslated data improves variational models but not the NMT baseline, whereas in the second example (id#873) differences between baseline and variational models still persist even when training on 6399 Model Example #801 Example #873 source a man on a bycicle pedals through an archway . a man throws a fishing net into the bay . 
reference ein mann f¨ahrt auf einem fahrrad durch einen torbogen . ein mann wirft ein fischernetz in die bucht . M30kT M30kT NMT ein mann auf einem fahrrad f¨ahrt durch eine scheibe . ein mann wirft ein fischernetz in die luft . VMMTF ein mann auf einem fahrrad f¨ahrt durch einen torbogen . ein mann wirft ein fischernetz in die bucht . VMMTC ein mann auf einem fahrrad f¨ahrt durch einen bogen . ein mann wirft ein fischernetz in die bucht . M30kT + back-translated M30kC M30kT + back-translated M30kC NMT ein mann auf einem fahrrad f¨ahrt durch einen bogen . ein mann wirft ein fischernetz ins meer . VMMTF ein mann auf einem fahrrad f¨ahrt durch einen torbogen . ein mann wirft ein fischernetz in den wellen . VMMTC ein mann auf einem fahrrad f¨ahrt durch einen torbogen . ein mann wirft ein fischernetz in die bucht . Table 5: Translations for examples 801 and 873 of the M30k test set. In the first example, neither the NMT baseline (with or without back-translated data) nor model VMMTC (trained on limited data) could translate archway correctly; the NMT baseline translates it as “scheibe” (disk) and “bogen” (bow), and VMMTC also incorrectly translates it as “bogen” (bow). However, VMMTC translates without errors when trained on additional back-translated data, i.e. “torbogen” (archway). In the second example, the NMT baseline translates bay as “luft” (air) or “meer” (sea), whereas VMMTF translates it as “bucht” (bay) or “wellen” (waves) and VMMTC always as “bucht” (bay). additional back-translated data. 4 Related work Even though there has been growing interest in variational approaches to machine translation (Zhang et al., 2016; Schulz et al., 2018; Shah and Barber, 2018; Eikema and Aziz, 2019) and to tasks that integrate vision and language, e.g. image description generation (Pu et al., 2016; Wang et al., 2017), relatively little attention has been dedicated to variational models for multi-modal translation. This is partly due to the fact that multi-modal machine translation was only recently addressed by the MT community by means of a shared task (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018). Nevertheless, we now discuss relevant variational and deterministic multi-modal MT models in the literature. Fully supervised MMT models. All submissions to the three runs of the multi-modal MT shared tasks (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018) model conditional probabilities directly without latent variables. Perhaps the first MMT model proposed prior to these shared tasks is that of Hitschler et al. (2016), who used image features to re-rank translations of image descriptions generated by a phrase-based statistical MT model (PBSMT) and reported significant improvements. Shah et al. (2016) propose a similar model where image logits are used to rerank the output of PBSMT. Global image features, i.e. features computed over an entire image (such as pool5 ResNet-50 features used in this work), have been directly used as “tokens” in the source sentence, to initialise encoder RNN hidden states, or as additional information used to initialise the decoder RNN states (Huang et al., 2016; Libovick´y et al., 2016; Calixto and Liu, 2017). On the other hand, spatial visual features, i.e. 
local features that encode different parts of the image separately in different vectors, have been used in doubly-attentive models where there is one attention mechanism over the source RNN hidden states and another one over the image features (Caglayan et al., 2016; Calixto et al., 2017). Finally, Caglayan et al. (2017) proposed to interact image features with target word embeddings, more specifically to perform an element-wise multiplication of the (projected) global image features and the target word embeddings before feeding the target word embeddings into their decoder GRU. They reported significant improvements by using image features to gate target word embeddings and won the 2017 Multi-modal MT shared task (Elliott et al., 2017). Multi-task MMT models. Multi-task learning MMT models are easily applicable to translate sentences without images (at test time), which is an advantage over the above-mentioned models. Luong et al. (2016) proposed a multi-task approach where a model is trained using two tasks and a shared decoder: the main task is to translate from German into English and the secondary task is to generate English descriptions given an image. They show improvements in the main translation task when also training for the secondary image description task. Their model is large, i.e. a 4-layer encoder LSTM and a 4-layer decoder LSTM, and their best set up uses a ratio of 0.05 image description generation training data samples in comparison to translation training data samples. Elliott and K´ad´ar (2017) propose an MTL model trained 6400 to do translation (English→German) and sentenceimage ranking (English↔image), using a standard word cross-entropy and margin-based losses as its task objectives, respectively. Their model uses the pre-trained GoogleNet v3 CNN (Szegedy et al., 2016) to extract pool5 features, and has a 1-layer source-language bidirectional GRU encoder and a 1-layer GRU decoder. Variational MMT models. Toyama et al. (2016) proposed a variational MMT model that is likely the most similar model to the one we put forward in this work. They build on the variational neural MT (VNMT) model of Zhang et al. (2016), which is a conditional latent model where a Gaussiandistributed prior of z is parameterised as a function of the the source sentence xm 1 , i.e. p(z|xm 1 ), and both xm 1 and z are used at each time step in an attentive decoder RNN, P(yj|xm 1 , z, y<j). In Toyama et al. (2016), image features are used as input to the inference model qλ(z|xm 1 , yn 1 , v) that approximates the posterior over the latent variable, but otherwise are not modelled and not used in the generative network. Differently from their work, we use image features in all our generative models, and propose modelling them as random observed outcomes while still being able to use our model to translate without images at test time. In the conditional case, we further use image features for posterior inference. Additionally, we also investigate both conditional and fixed priors, i.e. p(z|xm 1 ) and p(z), whereas their model is always conditional. Interestingly, we found in our experiments that fixed-prior models perform slightly better than conditional ones under limited training data. Toyama et al. (2016) uses the pre-trained VGG19 CNN (Simonyan and Zisserman, 2015) to extract FC7 features, and additionally experiment with using additional features from object detections obtained with the Fast RCNN network (Girshick, 2015). 
One more difference between their work and ours is that we only use the ResNet-50 network to extract pool5 features, and no additional pretrained CNN nor object detections. 5 Conclusions and Future work We have proposed a latent variable model for multimodal neural machine translation and have shown benefits from both modelling images and promoting use of latent space. We also show that in the absence of enough data to train a more complex inference network a simple fixed prior suffices, whereas when more training data is available (even noisy data) a conditional prior is preferable. Importantly, our models compare favourably to the state-of-theart. In future work we will explore other generative models for multi-modal MT, as well as different ways to directly incorporate images into these models. We are also interested in modelling different views of the image, such as global vs. local image features, and also in using larger image collections and modelling images directly, i.e. pixel intensities. Acknowledgements This work is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 27789-002. References Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. 2018. Fixing a Broken ELBO. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, pages 159–168. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations, ICLR 2015, San Diego, California. Lo¨ıc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. 2018. Findings of the Third Shared Task on Multimodal Machine Translation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 304– 323, Belgium, Brussels. Association for Computational Linguistics. Ondˇrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 Metrics Shared Task. In Proceedings of the Second Conference on Machine Translation, pages 489–513, Copenhagen, Denmark. Association for Computational Linguistics. Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes Garc´ıa-Mart´ınez, Fethi Bougares, Lo¨ıc Barrault, Marc Masana, Luis Herranz, and Joost van de Weijer. 2017. LIUM-CVC Submissions for WMT17 Multimodal Translation 6401 Task. In Proceedings of the Second Conference on Machine Translation, pages 432–439, Copenhagen, Denmark. Association for Computational Linguistics. Ozan Caglayan, Lo¨ıc Barrault, and Fethi Bougares. 2016. Multimodal attention for neural machine translation. CoRR, abs/1609.03976. Iacer Calixto and Qun Liu. 2017. Incorporating Global Visual Features into Attention-based Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 992– 1003, Copenhagen, Denmark. Association for Computational Linguistics. Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-Attentive Decoder for Multi-modal Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1913–1924, Vancouver, Canada. Association for Computational Linguistics. Michael Denkowski and Alon Lavie. 2014. Meteor Universal: Language Specific Translation Evaluation for Any Target Language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Bryan Eikema and Wilker Aziz. 2019. Autoencoding variational neural machine translation. 
In 4th Workshop on Representation Learning for NLP. Desmond Elliott, Stella Frank, Lo¨ıc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In Proceedings of the Second Conference on Machine Translation, pages 215– 233, Copenhagen, Denmark. Association for Computational Linguistics. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30K: Multilingual English-German Image Descriptions. In Proceedings of the 5th Workshop on Vision and Language, VL@ACL 2016, Berlin, Germany. Desmond Elliott and ´Akos K´ad´ar. 2017. Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 130–141, Taipei, Taiwan. Asian Federation of Natural Language Processing. Ross Girshick. 2015. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15, pages 1440–1448, Washington, DC, USA. IEEE Computer Society. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep Residual Learning for Image Recognition. arXiv preprint arXiv:1512.03385. Julian Hitschler, Shigehiko Schamoni, and Stefan Riezler. 2016. Multimodal Pivots for Image Caption Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2399–2409, Berlin, Germany. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735–1780. Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attentionbased Multimodal Neural Machine Translation. In Proceedings of the First Conference on Machine Translation, pages 639–645, Berlin, Germany. MichaelI. Jordan, Zoubin Ghahramani, TommiS. Jaakkola, and LawrenceK. Saul. 1999. An introduction to variational methods for graphical models. Machine Learning, 37(2):183– 233. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4743– 4751. Curran Associates, Inc. 6402 Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In International Conference on Learning Representations. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL. Jindˇrich Libovick´y and Jindˇrich Helcl. 2017. Attention Strategies for Multi-Source Sequenceto-Sequence Learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 196–202, Vancouver, Canada. Association for Computational Linguistics. Jindˇrich Libovick´y, Jindˇrich Helcl, Marek Tlust´y, Ondˇrej Bojar, and Pavel Pecina. 2016. CUNI System for WMT16 Automatic Post-Editing and Multimodal Translation Tasks. In Proceedings of the First Conference on Machine Translation, pages 646–654, Berlin, Germany. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-Task Sequence to Sequence Learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2016, San Juan, Puerto Rico. 
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421, Lisbon, Portugal. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Philadelphia, Pennsylvania. Maja Popovi´c. 2015. chrF: character n-gram Fscore for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, and Lawrence Carin. 2016. Variational autoencoder for deep learning of images, labels and captions. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2352–2360. Curran Associates, Inc. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1278–1286. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252. Philip Schulz, Wilker Aziz, and Trevor Cohn. 2018. A stochastic decoder for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1243–1252. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Harshil Shah and David Barber. 2018. Generative neural machine translation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, 6403 N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1346–1355. Curran Associates, Inc. Kashif Shah, Josiah Wang, and Lucia Specia. 2016. SHEF-Multimodal: Grounding Machine Translation on Images. In Proceedings of the First Conference on Machine Translation, pages 660–665, Berlin, Germany. K. Simonyan and A. Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3483–3491. Curran Associates, Inc. 
Lucia Specia, Stella Frank, Khalil Sima’an, and Desmond Elliott. 2016. A Shared Task on Multimodal Machine Translation and Crosslingual Image Description. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, pages 543–553, Berlin, Germany. Miloˇs Stanojevi´c and Khalil Sima’an. 2014. Fitting sentence level translation evaluation with many dense features. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 202–206, Doha, Qatar. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V. V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. 2016. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826. Michalis Titsias and Miguel L´azaro-Gredilla. 2014. Doubly stochastic variational bayes for nonconjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1971–1979. Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. 2016. Neural machine translation with latent semantic of image and text. CoRR, abs/1611.08459. Liwei Wang, Alexander Schwing, and Svetlana Lazebnik. 2017. Diverse and accurate image description using a variational auto-encoder with an additive gaussian encoding space. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5756–5766. Curran Associates, Inc. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Biao Zhang, Deyi Xiong, jinsong su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 521–530, Austin, Texas. Association for Computational Linguistics. 6404 A Model Architecture Once again, we wish to translate a source sequence xm 1 ≜⟨x1, · · · , xm⟩into a target sequence yn 1 ≜ ⟨y1, · · · , yn⟩, and also predict image features v. x v y µ σ z y v λ θ inference model generative model ϵ ∼N (0, I) KL KL Figure 3: Illustration of multi-modal machine translation generative and inference models. The conditional model VMMTC includes dashed arrows; the fixed prior model VMMTF does not, i.e. its inference network only uses x. In Figure 3, we illustrate generative and inference networks for models VMMTC and VMMTF. A.1 Generative model Source-language encoder The source-language encoder is deterministic and implemented using a 2-layer bidirectional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997): fi = emb(xi; θemb-x), h0 = ⃗0, − → h i = LSTM(hi−1, fi; θlstmf-x), ← − h i = LSTM(hi+1, fi; θlstmb-x), hi = [− → h i, ← − h i], (9) where emb is the source look-up matrix, trained jointly with the model, and hm 1 are the final source hidden states. Target-language decoder Now we assume that z is given, and will discuss how to compute it later on. 
The translation model consists of a sequence of draws from a Categorical distribution over the target-language vocabulary (independently from image features v): Yj|z, x, y<j ∼Cat(fθ(z, x, y<j)), where fθ parameterises the distribution with an attentive encoder-decoder architecture: wj = emb(yj; θemb-y), s0 = tanh affine(hm 1 ; θinit-y)  , sj = LSTM(sj−1, [wj, z]; θlstm-y), ci,j = attention(hm 1 , sn 1; θattn), fθ(z, x, y<j) = softmax(affine([sj, cj]; θout-y)), where the attention mechanism is a bilinear attention (Luong et al., 2015), and the generative parameters are θ = {θemb-{x,y}, θlstm{f,b}-x, θinit-y, θlstm-y, θattn, θout-y}. Image decoder We do not model images directly, but instead as a 2048-dimensional feature vector v of pre-activations of a ResNet-50’s pool5 layer. We simply draw image features from a Gaussian observation model: V |z ∼N(ν, ς2I), ν = MLP(z; θ), (10) where a multi-layer perceptron (MLP) maps from z to a vector of locations ν ∈Ro, and ς ∈R>0 is a hyper-parameter of the model (we use 1). Conditional prior VMMTC Given a source sentence xm 1 , we draw an embedding z from a latent Gaussian model: Z|xm 1 ∼N(µ, diag(σ2)), µ = MLP(hm 1 ; θlatent), (11) σ = softplus(MLP(hm 1 ; θlatent)) , (12) where Equations (11) and (12) employ two multilayer perceptrons (MLPs) to map from a source sentence (i.e. source hidden states) to a vector of locations µ ∈Rc and a vector of scales σ ∈Rc >0, respectively. Fixed prior VMMTF In the MMT model VMMTF, we simply have a draw from a standard Normal prior: Z ∼N(0, I). All MLPs have one hidden layer and are implemented as below (eqs. (10) to (12)): MLP(·) = affine(ReLU(affine( · ; θ)); θ). A.2 Inference model The inference network shares the source-language encoder with the generative model and differs depending on the model (VMMTC or VMMTF). 6405 Conditional prior VMMTC Model VMMTC’s approximate posterior qλ(z|xm 1 , yn 1 , v) is a Gaussian distribution: Z|xm 1 , yn 1 , v ∼N(u, diag(s2); λ). We use two bidirectional LSTMs, one over sourceand the other over target-language words, respectively. To reduce the number of model parameters, we re-use the entire source-language BiLSTM and the target-language embeddings in the generative model but prevent updates to the generative model’s parameters by blocking gradients from being back-propagated (Equation 9). Concretely, the inference model is parameterised as below: hm 1 = detach(BiLSTM(xm 1 ; θemb-x,lstmf-x,lstmb-x)), wn 1 = detach(emb(yn 1 ; θemb-y)), hx = avg(affine(hm 1 ; λx)), hy = avg(BiLSTM(wn 1 ; λy)), hv = MLP(v; λv), hall = [hx, hy, hv], u = MLP(hall; λmu), s = softplus(MLP(hall; λsigma)), where the set of the inference network parameters are λ = {λx, λy, λv, λmu, λsigma}. Fixed prior VMMTF Model VMMTF’s approximate posterior qλ(z|xm 1 ) is also a Gaussian: Z|xm 1 ∼N(u, diag(s2); λ), where we re-use the source-language BiLSTM from the generative model but prevent updates to its parameters by blocking gradients from being backpropagated (Equation 9). Concretely, the inference model is parameterised as below: hm 1 = detach(BiLSTM(xm 1 ; θemb-x,lstmf-x,lstmb-x)), hx = avg(affine(hm 1 ; λx)), u = MLP(hx; λmu), s = softplus(MLP(hx; λsigma)), where the set of the inference network parameters are λ = {λx, λmu, λsigma}. Finally, all MLPs are implemented as below: MLP(·) = affine(ReLU(affine( · ; λ)); λ).
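The location and scale layers described in this appendix can be sketched as follows. This is a PyTorch illustration, not the released implementation, and the hidden size is an arbitrary assumption; each MLP has a single ReLU hidden layer, locations have a linear output, and scales pass through a softplus, yielding the conditional prior p(z | x_1^m) = N(µ, diag(σ²)).

```python
# Sketch of the latent Gaussian model: MLP(.) = affine(ReLU(affine(.))), with a
# linear output for the location and a softplus output for the scale, applied to
# the average source-language encoder hidden state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentGaussian(nn.Module):
    def __init__(self, enc_dim, latent_dim, hidden_dim=256):   # hidden_dim is assumed
        super().__init__()
        self.loc = nn.Sequential(nn.Linear(enc_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, latent_dim))
        self.scale = nn.Sequential(nn.Linear(enc_dim, hidden_dim), nn.ReLU(),
                                   nn.Linear(hidden_dim, latent_dim))

    def forward(self, enc_states):          # enc_states: (batch, m, enc_dim)
        h = enc_states.mean(dim=1)          # average source encoder hidden state
        mu = self.loc(h)                    # location: linear output
        sigma = F.softplus(self.scale(h))   # scale: strictly positive
        return mu, sigma
```

The inference network's location/scale layers have the same shape; the only difference is that their input concatenates the (detached) average source encoding with the target-language encoding and the image features.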
2019
642
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6406–6417 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6406 Identifying Visible Actions in Lifestyle Vlogs Oana Ignat1, Laura Burdick1, Jia Deng2, Rada Mihalcea1 1University of Michigan, 2Princeton University {oignat,wenlaura,mihalcea}@umich.edu, [email protected] Abstract We consider the task of identifying human actions visible in online videos. We focus on the widely spread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to identify if actions mentioned in the speech description of a video are visually present. We construct a dataset with crowdsourced manual annotations of visible actions, and introduce a multimodal algorithm that leverages information derived from visual and linguistic clues to automatically infer which actions are visible in a video. We demonstrate that our multimodal algorithm outperforms algorithms based only on one modality at a time. 1 Introduction There has been a surge of recent interest in detecting human actions in videos. Work in this space has mainly focused on learning actions from articulated human pose (Du et al., 2015; Vemulapalli et al., 2014; Zhang et al., 2017) or mining spatial and temporal information from videos (Simonyan and Zisserman, 2014; Wang et al., 2016). A number of resources have been produced, including Action Bank (Sadanand and Corso, 2012), NTU RGB+D (Shahroudy et al., 2016), SBU Kinect Interaction (Yun et al., 2012), and PKU-MMD (Liu et al., 2017). Most research on video action detection has gathered video information for a set of pre-defined actions (Fabian Caba Heilbron and Niebles, 2015; Real et al., 2017; Kay et al., 2017), an approach known as explicit data gathering (Fouhey et al., 2018). For instance, given an action such as “open door,” a system would identify videos that include a visual depiction of this action. While this approach is able to detect a specific set of actions, whose choice may be guided by downstream applications, it achieves high precision at the cost of low recall. In many cases, the set of predefined actions is small (e.g., 203 activity classes in Fabian Caba Heilbron and Niebles 2015), and for some actions, the number of visual depictions is very small. An alternative approach is to start with a set of videos, and identify all the actions present in these videos (Damen et al., 2018; Bregler, 1997). This approach has been referred to as implicit data gathering, and it typically leads to the identification of a larger number of actions, possibly with a small number of examples per action. In this paper, we use an implicit data gathering approach to label human activities in videos. To the best of our knowledge, we are the first to explore video action recognition using both transcribed audio and video information. We focus on the popular genre of lifestyle vlogs, which consist of videos of people demonstrating routine actions while verbally describing them. We use these videos to develop methods to identify if actions are visually present. The paper makes three main contributions. First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not. The dataset includes a total of 14,769 actions, 4,340 of which are visible. 
Second, we propose a set of strong baselines to determine whether an action is visible or not. Third, we introduce a multimodal neural architecture that combines information drawn from visual and linguistic clues, and show that it improves over models that rely on one modality at a time. By making progress towards automatic action recognition, in addition to contributing to video understanding, this work has a number of important and exciting applications, including sports analytics (Fani et al., 2017), human-computer inter6407 Dataset #Actions #Verbs #Actors Implicit Label types Ours 4340 580 10 ✓ ✓ VLOG (Fouhey et al., 2018) 10.7k ✓ ✓ Kinetics (Kay et al., 2017) 600 270 x x ActivityNet (Fabian Caba Heilbron and Niebles, 2015) 203 x x MIT (Monfort et al., 2019) 339 339 x x AVA (Gu et al., 2018) 80 80 192 ✓ x Charades (Sigurdsson et al., 2016) 157 30 267 x x MPII Cooking (Rohrbach et al., 2012) 78 78 12 ✓ x Table 1: Comparison between our dataset and other video human action recognition datasets. # Actions show either the number of action classes in that dataset (for the other datasets), or the number of unique visible actions in that dataset (ours); # Verbs shows the number of unique verbs in the actions; Implicit is the type of data gathering method (versus explicit); Label types are either post-defined (first gathering data and then annotating actions): ✓, or pre-defined (annotating actions before gathering data): x. action (Rautaray and Agrawal, 2015), and automatic analysis of surveillance video footage (Ji et al., 2012). The paper is organized as follows. We begin by discussing related work, then describe our data collection and annotation process. We next overview our experimental set-up and introduce a multimodal method for identifying visible actions in videos. Finally, we discuss our results and conclude with general directions for future work. 2 Related Work There has been substantial work on action recognition in the computer vision community, focusing on creating datasets (Soomro et al., 2012; Karpathy et al., 2014; Sigurdsson et al., 2016; Fabian Caba Heilbron and Niebles, 2015) or introducing new methods (Herath et al., 2017; Carreira and Zisserman, 2017; Donahue et al., 2015; Tran et al., 2015). Table 1 compares our dataset with previous action recognition datasets.1 The largest datasets that have been compiled to date are based on YouTube videos (Fabian Caba Heilbron and Niebles, 2015; Real et al., 2017; Kay et al., 2017). These actions cover a broad range of classes including human-object interactions such as cooking (Rohrbach et al., 2014; Das et al., 2013; Rohrbach et al., 2012) and playing tennis (Karpathy et al., 2014), as well as human-human interactions such as shaking hands and hugging (Gu et al., 2018). 1Note that the number of actions shown for our dataset reflects the number of unique visible actions in the dataset and not the number of action classes, as in other datasets. This is due to our annotation process (see §3). Similar to our work, some of these previous datasets have considered everyday routine actions (Fabian Caba Heilbron and Niebles, 2015; Real et al., 2017; Kay et al., 2017). However, because these datasets rely on videos uploaded on YouTube, it has been observed they can be potentially biased towards unusual situations (Kay et al., 2017). For example, searching for videos with the query “drinking tea” results mainly in unusual videos such as dogs or birds drinking tea. 
This bias can be addressed by paying people to act out everyday scenarios (Sigurdsson et al., 2016), but this can end up being very expensive. In our work, we address this bias by changing the approach used to search for videos. Instead of searching for actions in an explicit way, using queries such as “opening a fridge” or “making the bed,” we search for more general videos using queries such as “my morning routine.” This approach has been referred to as implicit (as opposed to explicit) data gathering, and was shown to result in a greater number of videos with more realistic action depictions (Fouhey et al., 2018). Although we use implicit data gathering as proposed in the past, unlike (Fouhey et al., 2018) and other human action recognition datasets, we search for routine videos that contain rich audio descriptions of the actions being performed, and we use this transcribed audio to extract actions. In these lifestyle vlogs, a vlogger typically performs an action while also describing it in detail. To the best of our knowledge, we are the first to build a video action recognition dataset using both transcribed audio and video information. Another important difference between our 6408 methodology and previously proposed methods is that we extract action labels from the transcripts. By gathering data before annotating the actions, our action labels are post-defined (as in Fouhey et al. 2018). This is unlike the majority of the existing human action datasets that use pre-defined labels (Sigurdsson et al., 2016; Fabian Caba Heilbron and Niebles, 2015; Real et al., 2017; Kay et al., 2017; Gu et al., 2018; Das et al., 2013; Rohrbach et al., 2012; Monfort et al., 2019). Postdefined labels allow us to use a larger set of labels, expanding on the simplified label set used in earlier datasets. These action labels are more inline with everyday scenarios, where people often use different names for the same action. For example, when interacting with a robot, a user could refer to an action in a variety of ways; our dataset includes the actions “stick it into the freezer,” “freeze it,” “pop into the freezer,” and “put into the freezer,” variations, which would not be included in current human action recognition datasets. In addition to human action recognition, our work relates to other multimodal tasks such as visual question answering (Jang et al., 2017; Wu et al., 2017), video summarization (Gygli et al., 2014; Song et al., 2015), and mapping text descriptions to video content (Karpathy and Fei-Fei, 2015; Rohrbach et al., 2016). Specifically, we use an architecture similar to (Jang et al., 2017), where an LSTM (Hochreiter and Schmidhuber, 1997) is used together with frame-level visual features such as Inception (Szegedy et al., 2016), and sequence-level features such as C3D (Tran et al., 2015). However, unlike (Jang et al., 2017) who encode the textual information (question-answers pairs) using an LSTM, we chose instead to encode our textual information (action descriptions and their contexts) using a large-scale language model ELMo (Peters et al., 2018). Similar to previous research on multimodal methods (Lei et al., 2018; Xu et al., 2015; Wu et al., 2013; Jang et al., 2017), we also perform feature ablation to determine the role played by each modality in solving the task. 
Consistent with earlier work, we observe that the textual modality leads to the highest performance across individual modalities, and that the multimodal model combining textual and visual clues has the best overall performance. Query Results my morning routine 28M+ my after school routine 13M+ my workout routine 23M+ my cleaning routine 13M+ DIY 78M+ Table 2: Approximate number of videos found when searching for routine and do-it-yourself queries on YouTube. 3 Data Collection and Annotation We collect a dataset of routine and do-it-yourself (DIY) videos from YouTube, consisting of people performing daily activities, such as making breakfast or cleaning the house. These videos also typically include a detailed verbal description of the actions being depicted. We choose to focus on these lifestyle vlogs because they are very popular, with tens of millions having been uploaded on YouTube; Table 2 shows the approximate number of videos available for several routine queries. Vlogs also capture a wide range of everyday activities; on average, we find thirty different visible human actions in five minutes of video. By collecting routine videos, instead of searching explicitly for actions, we do implicit data gathering, a form of data collection introduced by Fouhey et al. 2018. Because everyday actions are common and not unusual, searching for them directly does not return many results. In contrast, by collecting routine videos, we find many everyday activities present in these videos. 3.1 Data Gathering We build a data gathering pipeline (see Figure 1) to automatically extract and filter videos and their transcripts from YouTube. The input to the pipeline is manually selected YouTube channels. Ten channels are chosen for their rich routine videos, where the actor(s) describe their actions in great detail. From each channel, we manually select two different playlists, and from each playlist, we randomly download ten videos. The following data processing steps are applied: Transcript Filtering. Transcripts are automatically generated by YouTube. We filter out videos that do not contain any transcripts or that contain transcripts with an average (over the entire video) of less than 0.5 words per second. These videos do not contain detailed action descriptions so we cannot effectively leverage textual information. 6409 Extract Candidate Actions from Transcript. Starting with the transcript, we generate a noisy list of potential actions. This is done using the Stanford parser (Chen and Manning, 2014) to split the transcript into sentences and identify verb phrases, augmented by a set of hand-crafted rules to eliminate some parsing errors. The resulting actions are noisy, containing phrases such as “found it helpful if you” and “created before up the top you.” Segment Videos into Miniclips. The length of our collected videos varies from two minutes to twenty minutes. To ease the annotation process, we split each video into miniclips (short video sequences of maximum one minute). Miniclips are split to minimize the chance that the same action is shown across multiple miniclips. This is done automatically, based on the transcript timestamp of each action. Because YouTube transcripts have timing information, we are able to line up each action with its corresponding frames in the video. We sometimes notice a gap of several seconds between the time an action occurs in the transcript and the time it is shown in the video. 
To address this misalignment, we first map the actions to the miniclips using the time information from the transcript. We then expand the miniclip by 15 seconds before the first action and 15 seconds after the last action. This increases the chance that all actions will be captured in the miniclip. Motion Filtering. We remove miniclips that do not contain much movement. We sample one out of every one hundred frames of the miniclip, and compute the 2D correlation coefficient between these sampled frames. If the median of the obtained values is greater than a certain threshold (we choose 0.8), we filter out the miniclip. Videos with low movement tend to show people sitting in front of the camera, describing their routine, but not acting out what they are saying. There can be many actions in the transcript, but if they are not depicted in the video, we cannot leverage the video information. 3.2 Visual Action Annotation Our goal is to identify which of the actions extracted from the transcripts are visually depicted in the videos. We create an annotation task on Amazon Mechanical Turk (AMT) to identify actions that are visible. We give each AMT turker a HIT consisting of five miniclips with up to seven actions generated Figure 1: Overview of the data gathering pipeline. from each miniclip. The turker is asked to assign a label (visible in the video; not visible in the video; not an action) to each action. Because it is difficult to reliably separate not visible and not an action, we group these labels together. Each miniclip is annotated by three different turkers. For the final annotation, we use the label assigned by the majority of turkers, i.e., visible or not visible / not an action. To help detect spam, we identify and reject the turkers that assign the same label for every action in all five miniclips that they annotate. Additionally, each HIT contains a ground truth miniclip that has been pre-labeled by two reliable annotators. Each ground truth miniclip has more than four actions with labels that were agreed upon by both reliable annotators. We compute accuracy between a turker’s answers and the ground truth annotations; if this accuracy is less than 20%, we reject the HIT as spam. After spam removal, we compute the agreement score between the turkers using Fleiss kappa (Fleiss and Cohen, 1973). Over the entire data set, the Fleiss agreement score is 0.35, indicating fair agreement. On the ground truth data, the Fleiss kappa score is 0.46, indicating moderate agreement. This fair to moderate agreement indicates that the task is difficult, and there are cases where the visibility of the actions is hard to label. To illustrate, Figure 3 shows examples where the annotators had low agreement. Table 3 shows statistics for our final dataset of 6410 ... 03:24 you’re gonna actually cook it 03:27 and it you’re gonna bake it for 03:30 about six hours it’s definitely a 03:32 long time so keep in mind that it’s 03:34 basically just dehydrating it 03:50 after what seems like an eternity in 03:53 the oven you’re going to take it out 03:55 it’s actually dehydrated at that point 03:57 which is fabulous because you can 03:59 pull it right off the baking sheet and 04:01 you’re going to put it on to some 04:03 parchment paper and then you’re ... Action Visible? 
actually cook it ✓ bake it for ✓ take it out ✓ pull it right off ✓ the baking sheet put it on to some ✓ parchment paper so keep in mind that x seems like an eternity x in the oven dehydrated at that x point which Figure 2: Sample video frames, transcript, and annotations. Videos 177 Video hours 21 Transcript words 302,316 Miniclips 1,268 Actions 14,769 Visible actions 4,340 Non-visible actions 10,429 Table 3: Data statistics. Train Test Validation # Actions 11,403 1,999 1,367 # Miniclips 997 158 113 # Actions/ Miniclip 11.4 12.6 12.0 Table 4: Statistics for the experimental data split. videos labeled with actions, and Figure 2 shows a sample video and transcript, with annotations. For our experiments, we use the first eight YouTube channels from our dataset as train data, the ninth channel as validation data and the last channel as test data. Statistics for this split are shown in Table 4. 3.3 Discussion The goal of our dataset is to capture naturallyoccurring, routine actions. Because the same action can be identified in different ways (e.g., “pop into the freezer”, “stick into the freezer”), our dataset has a complex and diverse set of action labels. These labels demonstrate the language used by humans in everyday scenarios; because of that, we choose not to group our labels into a pre-defined set of actions. Table 1 shows the number of unique verbs, which can be considered a Action #1 #2 #3 GT make sure your skin x x ✓ x cleansed before you ✓ x ✓ ✓ do all that x x ✓ x absorbing all that x x ✓ x serum when there move on x x x x Figure 3: An example of low agreement. The table shows actions and annotations from workers #1, #2, and #3, as well as the ground truth (GT). Labels are: visible - ✓, not visible - x. The bottom row shows screenshots from the video. The Fleiss kappa agreement score is -0.2. lower bound for the number of unique actions in our dataset. On average, a single verb is used in seven action labels, demonstrating the richness of our dataset. The action labels extracted from the transcript are highly dependent on the performance of the constituency parser. This can introduce noise or ill-defined action labels. Some acions contain extra words (e.g., “brush my teeth of course”), or lack words (e.g., “let me just”). Some of this noise is handled during the annotation process; for example, most actions that lack words are labeled as “not visible” or “not an action” because they are hard to interpret. 6411 4 Identifying Visible Actions in Videos Our goal is to determine if actions mentioned in the transcript of a video are visually represented in the video. We develop a multimodal model that leverages both visual and textual information, and we compare its performance with several singlemodality baselines. 4.1 Data Processing and Representations Starting with our annotated dataset, which includes miniclips paired with transcripts and candidate actions drawn from the transcript, we extract several layers of information, which we then use to develop our multimodal model, as well as several baselines. Action Embeddings. To encode each action, we use both GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) embeddings. When using GloVe embeddings, we represent the action as the average of all its individual word embeddings. We use embeddings with dimension 50. When using ELMo, we represent the action as a list of words which we feed into the default ELMo embedding layer.2 This performs a fixed mean pooling of all the contextualized word representations in each action. 
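As a concrete illustration of the GloVe-based action representation described above, the sketch below averages the 50-dimensional word vectors of an action phrase. The file path and the whitespace tokenisation are illustrative assumptions, and the ELMo variant (mean-pooled contextualised representations) is omitted here.

```python
# Sketch: represent an action phrase as the average of its GloVe word embeddings.
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a text file of 'word v1 v2 ... vd' lines."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def action_embedding(action, glove, dim=50):
    """Average the GloVe embeddings of the words in an action phrase."""
    words = action.lower().split()
    vecs = [glove[w] for w in words if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)

# glove = load_glove("glove.6B.50d.txt")              # hypothetical local path
# emb = action_embedding("throw it into the washer", glove)
```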
Part-of-speech (POS). We use POS information for each action. Similar to word embeddings (Pennington et al., 2014), we train POS embeddings. We run the Stanford POS Tagger (Toutanova et al., 2003) on the transcripts and assign a POS to each word in an action. To obtain the POS embeddings, we train GloVe on the Google N-gram corpus (http://storage.googleapis.com/books/ngrams/books/datasetsv2.html) using POS information from the five-grams. Finally, for each action, we average together the POS embeddings for all the words in the action to form a POS embedding vector.

Context Embeddings. Context can be helpful to determine if an action is visible or not. We use two types of context information, action-level and sentence-level. Action-level context takes into account the previous action and the next action; we denote it as ContextA. These are each calculated by taking the average of the action's GloVe embeddings. Sentence-level context considers up to five words directly before the action and up to five words after the action (we do not consider words that are not in the same sentence as the action); we denote it as ContextS. Again, we average the GloVe embeddings of the preceding and following words to get two context vectors.

Concreteness. Our hypothesis is that the concreteness of the words in an action is related to its visibility in a video. We use a dataset of words with associated concreteness scores from (Brysbaert et al., 2014). Each word is labeled by a human annotator with a value between 1 (very abstract) and 5 (very concrete). The percentage of actions from our dataset that have at least one word in the concreteness dataset is 99.8%. For each action, we use the concreteness scores of the verbs and nouns in the action. We consider the concreteness score of an action to be the highest concreteness score of its corresponding verbs and nouns. Table 5 shows several sample actions along with their concreteness scores and their visibility.

Table 5: Visible actions with high concreteness scores (Con.), and non-visible actions with low concreteness scores. "cook things in water" (5.00, visible); "head right into my kitchen" (4.97, visible); "throw it into the washer" (4.70, visible); "told you what" (2.31, not visible); "share my thoughts" (2.96, not visible); "prefer them" (1.62, not visible). In the original table, the noun or verb with the highest concreteness score is marked in bold.

Video Representations. We use YOLO9000 (Redmon and Farhadi, 2017) to identify objects present in each miniclip. We choose YOLO9000 for its high and diverse number of labels (9,000 unique labels). We sample the miniclips at a rate of 1 frame-per-second, and we use the YOLO9000 model pre-trained on COCO (Lin et al., 2014) and ImageNet (Deng et al., 2009). We represent a video both at the frame level and the sequence level. For frame-level video features, we use the Inception V3 model (Szegedy et al., 2016) pre-trained on ImageNet. We extract the output of the very last layer before the Flatten operation (the "bottleneck layer"); we choose this layer because the following fully connected layers are too specialized for the original task they were trained for. We extract Inception V3 features from miniclips sampled at 1 frame-per-second. For sequence-level video features, we use the C3D model (Tran et al., 2015) pre-trained on the Sports-1M dataset (Karpathy et al., 2014).
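A hedged sketch of the frame-level feature extraction is shown below. It uses the Keras Inception V3 application with global average pooling as an approximation of the bottleneck-layer output described above; the exact layer choice and preprocessing in the actual pipeline may differ.

```python
# Sketch of per-frame Inception V3 feature extraction at 1 fps.
import numpy as np
import tensorflow as tf

# include_top=False drops the classification head; pooling="avg" yields a
# 2048-dimensional vector per frame, approximating the bottleneck layer.
inception = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")

def frame_features(frames):
    """frames: array of shape (num_frames, 299, 299, 3), sampled at 1 fps."""
    x = tf.keras.applications.inception_v3.preprocess_input(
        frames.astype(np.float32))
    return inception.predict(x)  # shape: (num_frames, 2048)
```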
For C3D, we similarly take the feature map of the sixth fully connected layer. Because C3D captures motion information, it is important that it is applied on consecutive frames. We take each frame used to extract the Inception features and extract C3D features from the 16 consecutive frames around it. We use this approach because combining Inception V3 and C3D features has been shown to work well in other video-based models (Jang et al., 2017; Carreira and Zisserman, 2017; Kay et al., 2017).

4.2 Baselines
Using the different data representations described in Section 4.1, we implement several baselines.

Concreteness. We label as visible all the actions that have a concreteness score above a certain threshold, and label as non-visible the remaining ones. We fine tune the threshold on our validation set; for fine tuning, we consider threshold values between 3 and 5. Table 6 shows the results obtained for this baseline.

Feature-based Classifier. For our second set of baselines, we run a classifier on subsets of all of our features. We use an SVM (Cortes and Vapnik, 1995), and perform five-fold cross-validation across the train and validation sets, fine tuning the hyper-parameters (kernel type, C, gamma) using a grid search. We run experiments with various combinations of features: action GloVe embeddings; POS embeddings; embeddings of sentence-level context (ContextS) and action-level context (ContextA); concreteness score. The combinations that perform best during cross-validation on the combined train and validation sets are shown in Table 6.

LSTM and ELMo. We also consider an LSTM model (Hochreiter and Schmidhuber, 1997) that takes as input the tokenized action sequences padded to the length of the longest action. These are passed through a trainable embedding layer, initialized with GloVe embeddings, before the LSTM. The LSTM output is then passed through a feed forward network of fully connected layers, each followed by a dropout layer (Srivastava et al., 2014) at a rate of 50%. We use a sigmoid activation function after the last hidden layer to get an output probability distribution. We fine tune the model on the validation set for the number of training epochs, batch size, size of LSTM, and number of fully-connected layers. We build a similar model that embeds actions using ELMo (composed of 2 bi-LSTMs). We pass these embeddings through the same feed forward network and sigmoid activation function. The results for both the LSTM and ELMo models are shown in Table 6.

YOLO Object Detection. Our final baseline leverages video information from the YOLO9000 object detector. This baseline builds on the intuition that many visible actions involve visible objects. We thus label an action as visible if it contains at least one noun similar to objects detected in its corresponding miniclip. To measure similarity, we compute both the Wu-Palmer (WUP) path-length-based semantic similarity (Wu and Palmer, 1994) and the cosine similarity on the GloVe word embeddings. For every action in a miniclip, each noun is compared to all detected objects and assigned a similarity score. As in our concreteness baseline, the action is assigned the highest score of its corresponding nouns.
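The sketch below illustrates the WUP-based matching used in this baseline, using NLTK's WordNet interface; the similarity threshold shown is only a placeholder, since the actual value is tuned on the validation data, and the GloVe cosine-similarity variant is omitted.

```python
# Sketch of WUP-based matching between action nouns and detected objects.
from nltk.corpus import wordnet as wn

def wup(word, obj_label):
    """Highest Wu-Palmer similarity between any noun synsets of the two words."""
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in wn.synsets(word, pos=wn.NOUN)
              for s2 in wn.synsets(obj_label, pos=wn.NOUN)]
    return max(scores, default=0.0)

def action_is_visible(action_nouns, detected_objects, threshold=0.8):
    """Label an action visible if any of its nouns matches a detected object."""
    best = max((wup(n, o) for n in action_nouns for o in detected_objects),
               default=0.0)
    return best >= threshold
```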
We use the validation data to fine tune the similarity threshold that decides if an action is visible or not. The results are reported in Table 6. Examples of actions that contain one or more words similar to detected objects by YOLO can be seen in Figure 4.

Figure 4: Example of frames, corresponding actions, object detected with YOLO, and the object-word pair with the highest WUP similarity score in each frame. Examples: action "brush my teeth", object detected "toothbrush", WUP(brush, toothbrush) = 0.94; action "chop my vegetables", object detected "carrot", WUP(vegetables, carrot) = 0.9.

5 Multimodal Model
Each of our baselines considers only a single modality, either text or video. While each of these modalities contributes important information, neither of them provides a full picture. The visual modality is inherently necessary, because it shows the visibility of an action. For example, the same spoken action can be labeled as either visible or non-visible, depending on its visual context; we find 162 unique actions that are labeled as both visible and not visible, depending on the miniclip. This ambiguity has to be captured using video information. However, the textual modality provides important clues that are often missing in the video. The words of the person talking fill in details that many times cannot be inferred from the video. For our full model, we combine both textual and visual information to leverage both modalities.

We propose a multimodal neural architecture that combines encoders for the video and text modalities, as well as additional information (e.g., concreteness). Figure 5 shows our model architecture. The model takes as input a (miniclip m, action a) pair and outputs the probability that action a is visible in miniclip m. We use C3D and Inception V3 video features extracted for each frame, as described in Section 4.1. These features are concatenated and run through an LSTM. To represent the actions, we use ELMo embeddings (see Section 4.1). These features are concatenated with the output from the video encoding LSTM, and run through a three-layer feed forward network with dropout. Finally, the result of the last layer is passed through a sigmoid function, which produces a probability distribution indicating whether the action is visible in the miniclip. We use an RMSprop optimizer (Tieleman and Hinton, 2012) and fine tune the number of epochs, batch size and size of the LSTM and fully-connected layers.

Figure 5: Overview of the multimodal neural architecture. + represents concatenation.
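A hedged Keras sketch of the fusion architecture in Figure 5 follows. It assumes pre-extracted per-frame Inception (2048-dimensional) and C3D (4096-dimensional) features and a 1024-dimensional mean-pooled ELMo action embedding; the layer sizes are illustrative, since these hyper-parameters are tuned on the validation set.

```python
# Illustrative sketch of the multimodal fusion model (sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, Model

video_in = layers.Input(shape=(None, 2048 + 4096))  # frames x (Inception|C3D)
action_in = layers.Input(shape=(1024,))             # mean-pooled ELMo action

video_enc = layers.LSTM(64)(video_in)               # video encoding LSTM
x = layers.Concatenate()([video_enc, action_in])    # fuse the two modalities
for units in (128, 64, 32):                         # three-layer feed forward
    x = layers.Dense(units, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
out = layers.Dense(1, activation="sigmoid")(x)      # P(action visible)

model = Model([video_in, action_in], out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss="binary_crossentropy", metrics=["accuracy"])
```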
6 Evaluation and Results
Table 6 shows the results obtained using the multimodal model for different sets of input features. The model that uses all the input features available leads to the best results, improving significantly over the text-only and video-only methods. (Significance is measured using a paired t-test: p < 0.005 when compared to the best text-only model; p < 0.0005 when compared to the best video-only model.)

We find that using only YOLO to find visible objects does not provide sufficient information to solve this task. This is due to both the low number of objects that YOLO is able to detect, and the fact that not all actions involve objects. For example, visible actions from our datasets such as "get up", "cut them in half", "getting ready", and "chopped up" cannot be correctly labeled using only object detection. Consequently, we need to use additional video information such as Inception and C3D information.

In general, we find that the text information plays an important role. ELMo embeddings lead to better results than LSTM embeddings, with a relative error rate reduction of 6.8%. This is not surprising given that ELMo uses two bidirectional LSTMs and has improved the state-of-the-art in many NLP tasks (Peters et al., 2018). Consequently, we use ELMo in our multimodal model. Moreover, the addition of extra information improves the results for both modalities. Specifically, the addition of context is found to bring improvements. The use of POS is also found to be generally helpful.

Table 6: Results from baselines and our best multimodal method on validation and test data. ActionG indicates action representation using GloVe embedding, and ActionE indicates action representation using ELMo embedding. ContextS indicates sentence-level context, and ContextA indicates action-level context. Columns: Accuracy / Precision / Recall / F1.
Baselines:
Majority (Action): 0.692 / 0.692 / 1.0 / 0.81
Threshold (Concreteness): 0.685 / 0.7 / 0.954 / 0.807
Feature-based Classifier (ActionG): 0.715 / 0.722 / 0.956 / 0.823
Feature-based Classifier (ActionG, POS): 0.701 / 0.702 / 0.986 / 0.820
Feature-based Classifier (ActionG, ContextS): 0.725 / 0.736 / 0.938 / 0.825
Feature-based Classifier (ActionG, ContextA): 0.712 / 0.722 / 0.949 / 0.820
Feature-based Classifier (ActionG, Concreteness): 0.718 / 0.729 / 0.942 / 0.822
Feature-based Classifier (ActionG, ContextS, Concreteness): 0.728 / 0.742 / 0.932 / 0.826
LSTM (ActionG): 0.706 / 0.753 / 0.857 / 0.802
ELMo (ActionG): 0.726 / 0.771 / 0.859 / 0.813
YOLO (Miniclip): 0.625 / 0.619 / 0.448 / 0.520
Multimodal neural architecture (Figure 5):
Multimodal Model (ActionE, Inception): 0.722 / 0.765 / 0.863 / 0.811
Multimodal Model (ActionE, Inception, C3D): 0.725 / 0.769 / 0.869 / 0.814
Multimodal Model (ActionE, POS, Inception, C3D): 0.731 / 0.763 / 0.885 / 0.820
Multimodal Model (ActionE, ContextS, Inception, C3D): 0.725 / 0.770 / 0.859 / 0.812
Multimodal Model (ActionE, ContextA, Inception, C3D): 0.729 / 0.757 / 0.895 / 0.820
Multimodal Model (ActionE, Concreteness, Inception, C3D): 0.723 / 0.768 / 0.860 / 0.811
Multimodal Model (ActionE, POS, ContextS, Concreteness, Inception, C3D): 0.737 / 0.758 / 0.911 / 0.827

7 Conclusion
In this paper, we address the task of identifying human actions visible in online videos. We focus on the genre of lifestyle vlogs, and construct a new dataset consisting of 1,268 miniclips and 14,769 actions out of which 4,340 have been labeled as visible. We describe and evaluate several text-based and video-based baselines, and introduce a multimodal neural model that leverages visual and linguistic information as well as additional information available in the input data. We show that the multimodal model outperforms the use of one modality at a time.

A distinctive aspect of this work is that we label actions in videos based on the language that accompanies the video. This has the potential to create a large repository of visual depictions of actions, with minimal human intervention, covering a wide spectrum of actions that typically occur in everyday life.

In future work, we plan to explore additional representations and architectures to improve the accuracy of our model, and to identify finer-grained alignments between visual actions and their verbal descriptions. The dataset and the code introduced in this paper are publicly available at http://lit.eecs.umich.edu/downloads.html.

Acknowledgments
This material is based in part upon work supported by the Michigan Institute for Data Science, by the National Science Foundation (grant #1815291), by the John Templeton Foundation (grant #61156), and by DARPA (grant #HR001117S0026-AIDA-FP-045). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science, the National Science Foundation, the John Templeton Foundation, or DARPA.

References
Christoph Bregler. 1997. Learning and recognizing human dynamics in video sequences. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 568–574. IEEE.
Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904–911. Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6299–6308. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740–750. Corinna Cortes and Vladimir Vapnik. 1995. Supportvector networks. Machine learning, 20(3):273–297. Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. 2018. Scaling egocentric vision: The epic-kitchens dataset. In Proceedings of the European Conference on Computer Vision (ECCV), pages 720–736. Pradipto Das, Chenliang Xu, Richard F Doell, and Jason J Corso. 2013. A thousand frames in just a few words: Lingual description of videos through latent topics and sparse object stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2634–2641. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. Ieee. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2625–2634. Yong Du, Wei Wang, and Liang Wang. 2015. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1110–1118. Bernard Ghanem Fabian Caba Heilbron, Victor Escorcia and Juan Carlos Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 961–970. Mehrnaz Fani, Helmut Neher, David A Clausi, Alexander Wong, and John Zelek. 2017. Hockey action recognition via integrated stacked hourglass network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 29–37. Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613– 619. David F Fouhey, Wei-cheng Kuo, Alexei A Efros, and Jitendra Malik. 2018. From lifestyle vlogs to everyday interactions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4991–5000. Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. 2018. Ava: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6047–6056. Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. 2014. Creating summaries from user videos. 
In European Conference on Computer Vision (ECCV), pages 505–520. Springer. Samitha Herath, Mehrtash Harandi, and Fatih Porikli. 2017. Going deeper into action recognition: A survey. Image and vision computing, 60:4–21. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatiotemporal reasoning in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2758–2766. Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 2012. 3d convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1):221–231. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3128–3137. 6416 Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. 2014. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732. Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. 2017. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. 2018. Tvqa: Localized, compositional video question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1369–1379. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European Conference on Computer Vision (ECCV), pages 740–755. Springer. Chunhui Liu, Yueyu Hu, Yanghao Li, Sijie Song, and Jiaying Liu. 2017. Pku-mmd: A large scale benchmark for continuous multi-modal human action understanding. arXiv preprint arXiv:1703.07475. Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Yan Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, et al. 2019. Moments in time dataset: one million videos for event understanding. IEEE transactions on pattern analysis and machine intelligence. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Siddharth S Rautaray and Anupam Agrawal. 2015. Vision based hand gesture recognition for human computer interaction: a survey. Artificial intelligence review, 43(1):1–54. Esteban Real, Jonathon Shlens, Stefano Mazzocchi, Xin Pan, and Vincent Vanhoucke. 2017. Youtubeboundingboxes: A large high-precision humanannotated data set for object detection in video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5296–5305. Joseph Redmon and Ali Farhadi. 2017. Yolo9000: better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7263–7271. 
Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. 2016. Grounding of textual phrases in images by reconstruction. In European Conference on Computer Vision (ECCV), pages 817–834. Springer. Anna Rohrbach, Marcus Rohrbach, Wei Qiu, Annemarie Friedrich, Manfred Pinkal, and Bernt Schiele. 2014. Coherent multi-sentence video description with variable level of detail. In German conference on pattern recognition, pages 184–195. Springer. Marcus Rohrbach, Sikandar Amin, Mykhaylo Andriluka, and Bernt Schiele. 2012. A database for fine grained activity detection of cooking activities. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1194–1201. IEEE. Sreemanananth Sadanand and Jason J Corso. 2012. Action bank: A high-level representation of activity in video. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1234– 1241. IEEE. Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang. 2016. Ntu rgb+ d: A large scale dataset for 3d human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1010–1019. Gunnar A Sigurdsson, G¨ul Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 2016. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision (ECCV), pages 510–526. Springer. Karen Simonyan and Andrew Zisserman. 2014. Twostream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pages 568–576. Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. 2015. Tvsum: Summarizing web videos using titles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5179–5187. Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826. 6417 Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 conference of the North American chapter of the association for computational linguistics on human language technologyvolume 1, pages 173–180. Association for Computational Linguistics. Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. 2015. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489–4497. Raviteja Vemulapalli, Felipe Arrate, and Rama Chellappa. 2014. Human action recognition by representing 3d skeletons as points in a lie group. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 588– 595. 
Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. 2016. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision (ECCV), pages 20–36. Springer. Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding, 163:21–40. Qiuxia Wu, Zhiyong Wang, Feiqi Deng, Zheru Chi, and David Dagan Feng. 2013. Realistic human action recognition with multimodal feature selection and fusion. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 43(4):875–885. Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics, pages 133–138. Association for Computational Linguistics. Xun Xu, Timothy Hospedales, and Shaogang Gong. 2015. Semantic embedding space for zero-shot action recognition. In 2015 IEEE International Conference on Image Processing (ICIP), pages 63–67. IEEE. Kiwon Yun, Jean Honorio, Debaleena Chattopadhyay, Tamara L Berg, and Dimitris Samaras. 2012. Twoperson interaction detection using body-pose features and multiple instance learning. In 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 28–35. IEEE. Pengfei Zhang, Cuiling Lan, Junliang Xing, Wenjun Zeng, Jianru Xue, and Nanning Zheng. 2017. View adaptive recurrent neural networks for high performance human action recognition from skeleton data. In Proceedings of the IEEE International Conference on Computer Vision, pages 2117–2126.
A Corpus for Reasoning About Natural Language Grounded in Photographs
Alane Suhr‡∗, Stephanie Zhou†∗, Ally Zhang‡, Iris Zhang‡, Huajun Bai‡, and Yoav Artzi‡
‡Cornell University, Department of Computer Science and Cornell Tech, New York, NY 10044
{suhr, yoav}@cs.cornell.edu {az346, wz337, hb364}@cornell.edu
†University of Maryland, Department of Computer Science, College Park, MD 20742
[email protected]
∗Contributed equally. †Work done as an undergraduate at Cornell University.

Abstract
We introduce a new dataset for joint reasoning about natural language and images, with a focus on semantic diversity, compositionality, and visual reasoning challenges. The data contains 107,292 examples of English sentences paired with web photographs. The task is to determine whether a natural language caption is true about a pair of photographs. We crowdsource the data using sets of visually rich images and a compare-and-contrast task to elicit linguistically diverse language. Qualitative analysis shows the data requires compositional joint reasoning, including about quantities, comparisons, and relations. Evaluation using state-of-the-art visual reasoning methods shows the data presents a strong challenge.

1 Introduction
Visual reasoning with natural language is a promising avenue to study compositional semantics by grounding words, phrases, and complete sentences to objects, their properties, and relations in images. This type of linguistic reasoning is critical for interactions grounded in visually complex environments, such as in robotic applications. However, commonly used resources for language and vision (e.g., Antol et al., 2015; Chen et al., 2016) focus mostly on identification of object properties and few spatial relations (Section 4; Ferraro et al., 2015; Alikhani and Stone, 2019). This relatively simple reasoning, together with biases in the data, removes much of the need to consider language compositionality (Goyal et al., 2017).

This motivated the design of datasets that require compositional visual reasoning, including NLVR (Suhr et al., 2017) and CLEVR (Johnson et al., 2017a,b). (In parts of this paper, we use the term compositional differently than it is commonly used in linguistics, to refer to reasoning that requires composition. This type of reasoning often manifests itself in highly compositional language.) These datasets use synthetic images, synthetic language, or both. The result is a limited representation of linguistic challenges: synthetic languages are inherently of bounded expressivity, and synthetic visual input entails limited lexical and semantic diversity.

We address these limitations with Natural Language Visual Reasoning for Real (NLVR2), a new dataset for reasoning about natural language descriptions of photos. The task is to determine if a caption is true with regard to a pair of images. Figure 1 shows examples from NLVR2.

Figure 1: Two examples from NLVR2. Each caption is paired with two images. The task is to predict if the caption is True or False. The examples require addressing challenging semantic phenomena, including resolving twice ... as to counting and comparison of objects, and composing cardinality constraints, such as at least two dogs in total and exactly two. The two example captions are: "The left image contains twice the number of dogs as the right image, and at least two dogs in total are standing." (True) and "One image shows exactly two brown acorns in back-to-back caps on green foliage." (False). (Appendix G contains license information for all photographs used in this paper.)
We use images with rich visual content and a data collection process designed to emphasize semantic diversity, compositionality, and visual reasoning challenges. Our process reduces the chance of unintentional linguistic biases in the dataset, and therefore the ability of expressive models to take advantage of them to solve the task. Analysis of the data shows that the rich visual input supports diverse language, and that the task requires joint reasoning over the two inputs, including about sets, counts, comparisons, and spatial relations.

Scalable curation of semantically-diverse sentences that describe images requires addressing two key challenges. First, we must identify images that are visually diverse enough to support the type of language desired. For example, a photo of a single beetle with a uniform background (Table 2, bottom left) is likely to elicit only relatively simple sentences about the existence of the beetle and its properties. Second, we need a scalable process to collect a large set of captions that demonstrate diverse semantics and visual reasoning.

We use a search engine with queries designed to yield sets of similar, visually complex photographs, including of sets of objects and activities, which display real-world scenes. We annotate the data through a sequence of crowdsourcing tasks, including filtering for interesting images, writing captions, and validating their truth values. To elicit interesting captions, rather than presenting workers with single images, we ask workers for descriptions that compare and contrast four pairs of similar images. The description must be True for two pairs, and False for the other two pairs. Using pairs of images encourages language that composes properties shared between or contrasted among the two images. The four pairs are used to create four examples, each comprising an image pair and the description. This setup ensures that each sentence appears multiple times with both labels, resulting in a balanced dataset robust to linguistic biases, where a sentence's truth value cannot be determined from the sentence alone, and generalization can be measured using multiple image-pair examples.

This paper includes four main contributions: (1) a procedure for collecting visually rich images paired with semantically-diverse language descriptions; (2) NLVR2, which contains 107,292 examples of captions and image pairs, including 29,680 unique sentences and 127,502 images; (3) a qualitative linguistically-driven data analysis showing that our process achieves a broader representation of linguistic phenomena compared to other resources; and (4) an evaluation with several baselines and state-of-the-art visual reasoning methods on NLVR2. The relatively low performance we observe shows that NLVR2 presents a significant challenge, even for methods that perform well on existing visual reasoning tasks. NLVR2 is available at http://lil.nlp.cornell.edu/nlvr/.

2 Related Work and Datasets
Language understanding in the context of images has been studied within various tasks, including visual question answering (e.g., Zitnick and Parikh, 2013; Antol et al., 2015), caption generation (Chen et al., 2016), referring expression resolution (e.g., Mitchell et al., 2010; Kazemzadeh et al., 2014; Mao et al., 2016), visual entailment (Xie et al., 2019), and binary image selection (Hu et al., 2019).
Recently, the relatively simple language and reasoning in existing resources motivated datasets that focus on compositional language, mostly using synthetic data for language and vision (Andreas et al., 2016; Johnson et al., 2017a; Kuhnle and Copestake, 2017; Kahou et al., 2018; Yang et al., 2018). (A tabular summary of the comparison of NLVR2 to existing resources is available in Table 7, Appendix A.) Three exceptions are CLEVR-Humans (Johnson et al., 2017b), which includes human-written paraphrases of generated questions for synthetic images; NLVR (Suhr et al., 2017), which uses human-written captions that compare and contrast sets of synthetic images; and GQA (Hudson and Manning, 2019), which uses synthetic language grounded in real-world photographs. In contrast, we focus on both human-written language and web photographs.

Several methods have been proposed for compositional visual reasoning, including modular neural networks (e.g., Andreas et al., 2016; Johnson et al., 2017b; Perez et al., 2018; Hu et al., 2017; Suarez et al., 2018; Hu et al., 2018; Yao et al., 2018; Yi et al., 2018) and attention- or memory-based methods (e.g., Santoro et al., 2017; Hudson and Manning, 2018; Tan and Bansal, 2018). We use FiLM (Perez et al., 2018), N2NMN (Hu et al., 2017), and MAC (Hudson and Manning, 2018) for our empirical analysis.

In our data, we use each sentence in multiple examples, but with different labels. This is related to recent visual question answering datasets that aim to require models to consider both image and question to perform well (Zhang et al., 2016; Goyal et al., 2017; Li et al., 2017; Agrawal et al., 2017, 2018). Our approach is inspired by the collection of NLVR, where workers were shown a set of similar images and asked to write a sentence True for some images, but False for the others (Suhr et al., 2017). We adapt this method to web photos, including introducing a process to identify images that support complex reasoning and designing incentives for the more challenging writing task.

3 Data Collection
Each example in NLVR2 includes a pair of images and a natural language sentence. The task is to determine whether the sentence is True or False about the pair of images. Our goal is to collect a large corpus of grounded semantically rich descriptions that require diverse types of reasoning, including about sets, counts, and comparisons. We design a process to identify images that enable such types of reasoning, collect grounded natural language descriptions, and label them as True or False. While we use image pairs, we do not explicitly set the task of describing the differences between the images or identifying which image matches the sentence better (Hu et al., 2019). We use pairs to enable comparisons and set reasoning between the objects that appear in the two images. Figure 2 illustrates our data collection procedure. For further discussion on the design decisions for our task and data collection implementation, please see appendices A and B.

3.1 Image Collection
We require sets of images where the images in each set are detailed but similar enough such that comparison will require use of a diverse set of reasoning skills, more than just object or property identification. Because existing image resources, such as ImageNet (Russakovsky et al., 2015) or COCO (Lin et al., 2014), do not provide such grouping and mostly include relatively simple object-focused scenes, we collect a new set of images.
We retrieve sets of images with similar content using search queries generated from synsets from the ILSVRC2014 ImageNet challenge (Russakovsky et al., 2015). This correspondence to ImageNet synsets allows researchers to use pre-trained image featurization models, and focuses the challenges of the task not on object detection, but compositional reasoning challenges.

ImageNet Synsets Correspondence. We identify a subset of the 1,000 synsets in ILSVRC2014 that often appear in rich contexts. For example, an acorn often appears in images with other acorns, while a seawall almost always appears alone. For each synset, we issue five queries to the Google Images search engine (https://images.google.com/) using query expansion heuristics. The heuristics are designed to retrieve images that support complex reasoning, including images with groups of entities, rich environments, or entities participating in activities. For example, the expansions for the synset acorn will include two acorns and acorn fruit. The heuristics are specified in Table 1. For each query, we use the Google similar images tool for each of the first five images to retrieve the seven non-duplicate most similar images. (At the time of publication, the similar images tool is available at the "View more" link in the list of related images after expanding the results for each image. Images are ranked by similarity, where more similar images appear higher.) This results in five sets of eight similar images per query, 25 sets in total. If at least half of the images in a set were labeled as interesting according to the criteria in Table 2, the synset is awarded one point. We choose the 124 synsets with the most points. (We pick 125 and remove one set due to a high image pruning rate in later stages.) The 124 synsets are distributed evenly among animals and objects. This annotation was performed by the first two authors and student volunteers, is only used for identifying synsets, and is separate from the image search described below.

Table 1: The four heuristics used to generate search queries from synsets.
Quantities (cup → group of cups): Add numerical phrases or manually-identified collective nouns to the synonym. These queries result in images containing multiple examples of the synset.
Hypernyms (flute → flute woodwind): Add direct or indirect hypernyms from WordNet (Miller, 1993). Applied only to the non-animal synsets. This heuristic increases the diversity of images retrieved for the synset (Deng et al., 2014).
Similar words (banana → banana pear): Add concrete nouns whose cosine similarity with the synonym is greater than 0.35 in the embedding space of Google News word2vec embeddings (Mikolov et al., 2013). Applied only to non-animal synsets. These queries result in images containing a variety of different but related object types.
Activities (beagle → beagles eating): Add manually-identified verbs describing common activities of animal synsets. Applied only to animal synsets. This heuristic results in images of animals participating in activities, which encourages captions with a diversity of entity properties.

Image Search. We use the Google Images search engine to find sets of similar images (Figure 2a). We apply the query generation heuristics to the 124 synsets. We use all synonyms in each synset (Deng et al., 2014; Russakovsky et al., 2015). For example, for the synset timber wolf, we use the synonym set {timber wolf, grey wolf, gray wolf, canis lupus}. For each generated query, we download sets containing at most 16 related images.
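As a rough illustration of the query-expansion heuristics in Table 1, the sketch below generates quantity, hypernym, and similar-word queries; the word2vec file path, the quantity phrases, the naive pluralization, and the omission of the activities heuristic are all simplifying assumptions.

```python
# Illustrative sketch of synonym-based query expansion (assumptions noted above).
from nltk.corpus import wordnet as wn
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def expand_queries(synonym, is_animal, sim_threshold=0.35):
    queries = []
    # Quantities: numerical phrases or collective nouns (pluralization is naive).
    for phrase in ("two", "three", "group of"):
        queries.append(f"{phrase} {synonym}s")
    if not is_animal:
        # Hypernyms from WordNet; multi-word synonyms need underscores.
        for synset in wn.synsets(synonym.replace(" ", "_"), pos=wn.NOUN):
            for hypernym in synset.hypernyms():
                name = hypernym.lemma_names()[0].replace("_", " ")
                queries.append(f"{synonym} {name}")
        # Similar words: cosine similarity above the threshold in word2vec space.
        if synonym in w2v:
            for word, score in w2v.most_similar(synonym, topn=50):
                if score > sim_threshold and word.isalpha():
                    queries.append(f"{synonym} {word.lower()}")
    return sorted(set(queries))
```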
Image Pruning. We use two crowdsourcing tasks to (1) prune the sets of images, and (2) construct sets of eight images to use in the sentence-writing phase. In the first task, we remove low-quality images from each downloaded set of similar images (Figure 2b). We display the image set and the synset name, and ask a worker to remove any images that do not load correctly; images that contain inappropriate content, non-realistic artwork, or collages; or images that do not contain an instance of the corresponding synset. This results in sets of sixteen or fewer similar images. We discard all sets with fewer than eight images.

The second task further prunes these sets by removing duplicates and down-ranking non-interesting images (Figure 2c). The goal of this stage is to collect sets that contain enough interesting images. Workers are asked to remove duplicate images, and mark images that are not interesting. An image is interesting if it fits any of the criteria in Table 2. We ask workers not to mark an image if they consider it interesting for any other reason. We discard sets with fewer than three interesting images. We sort the images in descending order according to first interestingness, and second similarity, and keep the top eight.

Figure 2: Diagram of the data collection process, showing how a single example from the training set is constructed. Steps (a)–(c) are described in Section 3.1; step (d) in Section 3.2; and step (e) in Section 3.3. (a) Find Sets of Images: The query two acorns is issued to the search engine. The leftmost image appears in the list of results. The Similar Images tool is used to find a set of images, shown on the right, similar to this image. (b) Image Pruning: Crowdworkers are given the synset name and identify low-quality images to be removed. In this example, one image is removed because it does not show an instance of the synset acorn. (c) Set Construction: Crowdworkers decide whether each of the remaining images is interesting. In this example, three images are marked as non-interesting (top row) because they contain only a single instance of the synset. The images are re-ordered (bottom row) so that interesting images appear before non-interesting images, and the top eight images are used to form the set. In this example, the set is formed using the leftmost eight images. (d) Sentence Writing: The images in the set are randomly paired and shown to the worker. The worker selects two pairs, and writes a sentence that is True for the two selected pairs but False for the other two pairs. (e) Validation: Each pair forms an example with the written sentence. Each example is shown to a worker to re-label.

Table 2: Positive and negative examples of interesting images. The positive criteria are: contains more than one instance of the synset; shows an instance of the synset interacting with other objects; shows an instance of the synset performing an activity; displays a set of diverse objects or features. (The example images are not reproduced here.)

3.2 Sentence Writing
Each set of eight images is used for a sentence-writing task. We randomly split the set into four pairs of images. Using pairs encourages comparison and set reasoning within the pairs. Workers are asked to select two of the four pairs and write a sentence that is True for the selected pairs, but False for the unselected pairs.
Workers are asked to select two of the four pairs and write a sentence that is True for the selected pairs, but Heuristic Examples (synset synonym →query) Description Quantities cup →group of cups Add numerical phrases or manually-identified collective nouns to the synonym. These queries result in images containing multiple examples of the synset. Hypernyms flute →flute woodwind Add direct or indirect hypernyms from WordNet (Miller, 1993). Applied only to the non-animal synsets. This heuristic increases the diversity of images retrieved for the synset (Deng et al., 2014). Similar words banana →banana pear Add concrete nouns whose cosine similarity with the synonym is greater than 0.35 in the embedding space of Google News word2vec embeddings (Mikolov et al., 2013). Applied only to nonanimal synsets. These queries result in images containing a variety of different but related object types. Activities beagle →beagles eating Add manually-identified verbs describing common activities of animal synsets. Applied only to animal synsets. This heuristic results in images of animals participating in activities, which encourages captions with a diversity of entity properties. Table 1: The four heuristics used to generate search queries from synsets. Positive Examples and Criteria Contains more than one instance of the synset. Shows an instance of the synset interacting with other objects. Shows an instance of the synset performing an activity. Displays a set of diverse objects or features. Negative Examples Table 2: Positive and negative examples of interesting images. False for the unselected pairs. Allowing workers to select pairs themselves makes the sentencewriting task easier than with random selection, which may create tasks that are impossible to complete. Writing requires finding similarities and differences between the pairs, which encourages compositional language (Suhr et al., 2017). In contrast to the collection process of NLVR, using real images does not allow for as much control over their content, in some cases permitting workers to write simple sentences. For example, a worker could write a sentence stating the existence of a single object if it was only present in both selected pairs, which is avoided in NLVR by controlling for the objects in the images. Instead, we define more specific guidelines for the workers for writing sentences, including asking to avoid subjective opinions, discussion of properties of photograph, mentions of text, and simple object identification. We include more details and examples of these guidelines in Appendix B. 3.3 Validation We split each sentence-writing task into four examples, where the sentence is paired with each pair of images. Validation ensures that the selection of each image pair reflects its truth value. We show each example independently to a worker, and ask them to label it as True or False. The worker may also report the sentence as nonsensical. We keep all non-reported examples where the validation label is the same as the initial label indicated by the sentence-writer’s selection. For example, if the image pair is initially selected during sentencewriting, the sentence-writer intends the sentence to be True for the pair, so if the validation label is False, this example is removed. 3.4 Splitting the Dataset We assign a random 20% of the examples passing validation to development and testing, ensuring that examples from the same initial set of eight images do not appear across the split. 
For these examples, we collect four additional validation judgments to estimate agreement and human performance. We remove from this set examples where two or more of the extra judgments disagreed with the existing label (Section 3.3). Finally, we create equal-sized splits for a development set and two test sets, ensuring that original image sets do not appear in multiple splits of the data (Table 4).

3.5 Data Collection Management
We use a tiered system with bonuses to encourage workers to write linguistically diverse sentences. After every round of annotation, we sample examples for each worker and give bonuses to workers that follow our writing guidelines well. Once workers perform at a sufficient level, we allow them access to a larger pool of tasks. We also use qualification tasks to train workers. The mean cost per unique sentence in our dataset is $0.65; the mean cost per example is $0.18. Appendix B provides additional details about our bonus system, qualification tasks, and costs.

3.6 Collection Statistics
We collect 27,678 sets of related images and a total of 387,426 images (Section 3.1). Pruning low-quality images leaves 19,500 sets and 250,862 images. Most images are removed for not containing an instance of the corresponding synset or for being non-realistic artwork or a collage of images. We construct 17,685 sets of eight images each.

We crowdsource 31,418 sentences (Section 3.2). We create two writing tasks for each set of eight images. Workers may flag sets of images if they should have been removed in earlier stages; for example, if they contain duplicate images. Sentence-writing tasks that remain without annotation after three days are removed. During validation, 1,875 sentences are reported as nonsensical. 108,516 examples pass validation; i.e., the validation label matches the initial selection for the pair of images (Section 3.3). Removing low-agreement examples in the development and test sets yields a dataset of 107,292 examples, 127,502 unique images, and 29,680 unique sentences. Each unique sentence is paired with an average of 3.6 pairs of images. Table 3 shows examples of three unique sentences from NLVR2. Table 4 shows the sizes of the data splits, including train, development, a public test set (Test-P), and an unreleased test set (Test-U).

Table 3: Six examples with three different sentences from NLVR2. For each sentence, we show two examples using different image-pairs, each with a different label. The three sentences are: "One image contains a single vulture in a standing pose with its head and body facing leftward, and the other image contains a group of at least eight vultures."; "There are two trains in total traveling in the same direction."; "There are more birds in the image on the left than in the image on the right."

Table 4: NLVR2 data splits (unique sentences / examples). Train: 23,671 / 86,373. Development: 2,018 / 6,982. Test-P: 1,995 / 6,967. Test-U: 1,996 / 6,970. Total: 29,680 / 107,292.

4 Data Analysis
We perform quantitative and qualitative analysis using the training and development sets.

Agreement. Following validation, 8.5% of the examples not reported during validation are removed due to disagreement between the validator's label and the initial selection of the image pair (Section 3.3). (The validator is the same worker as the sentence-writer for 11.5% of examples. In these cases, the validator agrees with themselves 96.7% of the time. For examples where the sentence-writer and validator were not the same person, they agree in 90.8% of examples.) We use the five validation labels we collect for the development and test sets to compute Krippendorff's α and Fleiss' κ to measure agreement (Cocos et al., 2015; Suhr et al., 2017).
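A minimal sketch of this agreement computation is shown below, assuming five binary validation labels per example; it uses statsmodels for Fleiss' kappa and the krippendorff package for alpha, and the data layout is an assumption.

```python
# Sketch of computing Krippendorff's alpha and Fleiss' kappa for the labels.
import numpy as np
import krippendorff
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def agreement_scores(labels):
    """labels: array of shape (num_examples, 5) with entries 0 (False) or 1 (True),
    one column per validation judgment."""
    counts, _ = aggregate_raters(labels)   # (num_examples, num_categories)
    kappa = fleiss_kappa(counts)
    # krippendorff expects raters in rows and units in columns.
    alpha = krippendorff.alpha(reliability_data=labels.T,
                               level_of_measurement="nominal")
    return alpha, kappa
```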
Before removing low-agreement examples (Section 3.4), α = 0.906 and κ = 0.814. After removal, α = 0.912 and κ = 0.889, indicating almost perfect agreement (Landis and Koch, 1977).

Synsets. Each synset is associated with µ = 752.9 ± 205.7 examples. The five most common synsets are gorilla, bookcase, bookshop, pug, and water buffalo. The five least common synsets are orange, acorn, ox, dining table, and skunk. Synsets appear in equal proportions across the four splits.

Language. NLVR2's vocabulary contains 7,457 word types, significantly larger than NLVR, which has 262 word types. Sentences in NLVR2 are on average 14.8 tokens long, whereas NLVR has a mean sentence length of 11.2. Figure 3 shows the distribution of sentence lengths compared to related corpora. NLVR2 shows a similar distribution to NLVR, but with a longer tail. NLVR2 contains longer sentences than the questions of VQA (Antol et al., 2015), GQA (Hudson and Manning, 2019), and CLEVR-Humans (Johnson et al., 2017b). Its distribution is similar to MSCOCO (Chen et al., 2015), which also contains captions, and CLEVR (Johnson et al., 2017a), where the language is synthetically generated.

Figure 3: Distribution of sentence lengths (x-axis: sentence length; y-axis: percentage of sentences) for VQA (real), MSCOCO, VQA (abstract), GQA, CLEVR, NLVR, CLEVR-Humans, and NLVR2. Dotted curves represent datasets with synthetic images.

We analyze 800 sentences from the development set for occurrences of semantic and syntactic phenomena (Table 5). We compare with the 200-example analysis of VQA and NLVR from Suhr et al. (2017), and 200 examples from the balanced split of GQA. Generally, NLVR2 has similar linguistic diversity to NLVR, showing broader representation of linguistic phenomena than VQA and GQA. One noticeable difference from NLVR is less use of hard cardinality. This is possibly due to how NLVR is designed to use a very limited set of object attributes, which encourages writers to rely on accurate counting for discrimination more often. We include further analysis in Appendix C.

Table 5: Linguistic analysis of sentences from NLVR2, GQA, VQA, and NLVR. We analyze 800 development sentences from NLVR2 and 200 from each of the other datasets for the presence of semantic and syntactic phenomena described in Suhr et al. (2017). We report the proportion of examples containing each phenomenon. Each row lists the percentages for VQA (real), GQA, NLVR, and NLVR2, followed by an example from NLVR2.
Semantics:
Cardinality (hard): 11.5 / 0 / 66 / 41.1. Example: "Six rolls of paper towels are enclosed in a plastic package with the brand name on it."
Cardinality (soft): 1 / 0 / 16 / 23.6. Example: "No more than two cheetahs are present."
Existential: 11.5 / 16.5 / 88 / 23.6. Example: "There are at most 3 water buffalos in the image pair."
Universal: 1 / 4.5 / 7.5 / 16.8. Example: "In one image there is a line of fence posts with one large darkly colored bird on top of each post."
Coordination: 5 / 21.5 / 17 / 33.3. Example: "Each image contains only one wolf, and all images include snowy backdrops."
Coreference: 6.5 / 0.5 / 3 / 14.6. Example: "there are four or more animals very close to each other on the grass in the image to the left."
Spatial Relations: 42.5 / 43 / 66 / 49. Example: "A stylus is near a laptop in one of the images."
Comparative: 1 / 2 / 3 / 8. Example: "There are more birds in the image on the right than in the image on the left."
Presupposition: 80 / 79 / 19.5 / 20.6. Example: "A cookie sits in the dessert in the image on the left."
Negation: 1 / 2.5 / 9.5 / 9.6. Example: "The front paws of the dog in the image on the left are not touching the ground."
Syntactic Ambiguity:
CC Attachment: 0 / 2.5 / 4.5 / 3.8. Example: "The left image shows a cream-layered dessert in a footed clear glass which includes sliced peanut butter cups and brownie chunks."
PP Attachment: 3 / 6.5 / 23 / 11.5. Example: "At least one panda is sitting near a fallen branch on the ground."
SBAR Attachment: 0 / 5 / 2 / 1.9. Example: "Balloons float in a blue sky with dappled clouds on strings that angle rightward, in the right image."

5 Estimating Human Performance
We use the additional labels of the development and test examples to estimate human performance. We group these labels according to workers. We do not consider cases where the worker labels a sentence written by themselves. For each worker, we measure their performance as the proportion of their judgements that matches the gold-standard label, which is the original validation label. We compute the average and standard deviation performance over workers with at least 100 such additional validation judgments, a total of 68 unique workers. Before pruning low-agreement examples (Section 3.4), the average performance over workers in the development and both test sets is 93.1±3.1. After pruning, it increases to 96.1±2.6. Table 6 shows human performance for each data split that has extra validations. Because this process does not include the full dataset for each worker, it is not fully comparable to our evaluation results. However, it provides an estimate by balancing between averaging over many workers and having enough samples for each worker.
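The sketch below illustrates the per-worker estimate described above; the judgment record fields are assumptions about how the extra validation labels might be stored.

```python
# Sketch of the per-worker human performance estimate.
import numpy as np
from collections import defaultdict

def human_performance(judgments, min_judgments=100):
    """judgments: iterable of dicts with 'worker_id', 'label', 'gold_label',
    and 'wrote_sentence' keys (hypothetical field names)."""
    per_worker = defaultdict(list)
    for j in judgments:
        if j["wrote_sentence"]:   # skip workers labeling their own sentences
            continue
        per_worker[j["worker_id"]].append(j["label"] == j["gold_label"])
    accuracies = [float(np.mean(v)) for v in per_worker.values()
                  if len(v) >= min_judgments]
    return float(np.mean(accuracies)), float(np.std(accuracies))
```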
6 Evaluation Systems
We evaluate several baselines and existing visual reasoning approaches using NLVR2. For all systems, we optimize for example-level accuracy. (System and learning details are available in Appendix E.) We measure the biases in the data using three baselines: (a) MAJORITY: assign the most common label (True) to each example; (b) TEXT: encode the caption using a recurrent neural network (RNN; Elman, 1990), and use a multilayer perceptron to predict the truth value; and (c) IMAGE: encode the pair of images using a convolutional neural network (CNN), and use a multilayer perceptron to predict the truth value. The latter two estimate the potential of solving the task using only one of the two modalities.

We use two baselines that consider both language and vision inputs. The CNN+RNN baseline concatenates the encoding of the text and images, computed similar to the TEXT and IMAGE baselines, and applies a multilayer perceptron to predict a truth value. The MAXENT baseline computes features from the sentence and objects detected in the paired images. We detect the objects in the images using a Mask R-CNN model (He et al., 2017; Girshick et al., 2018) pre-trained on the COCO detection task (Lin et al., 2014). We use a detection threshold of 0.5. For each n-gram with a numerical phrase in the caption and object class detected in the images, we compute features based on the number present in the n-gram and the detected object count. We create features for each image and for both together, and use these features in a maximum entropy classifier.
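The sketch below gives count-based features in the spirit of the MAXENT baseline: numbers mentioned in the caption near an object class are compared with per-image detected object counts. The exact feature templates are not specified here, so the features shown are illustrative rather than the baseline's actual feature set.

```python
# Illustrative count-comparison features (not the baseline's exact templates).
from collections import Counter

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "six": 6}

def count_features(caption, detected_left, detected_right):
    """detected_*: lists of class names from the detector (score >= 0.5)."""
    counts = {"left": Counter(detected_left), "right": Counter(detected_right)}
    tokens = caption.lower().split()
    features = {}
    for i, tok in enumerate(tokens):
        if tok in NUMBER_WORDS:
            n = NUMBER_WORDS[tok]
        elif tok.isdigit():
            n = int(tok)
        else:
            continue
        # Look in a small window after the number for a detected class name.
        # (No lemmatization here, so plural forms are not matched.)
        for obj in set(tokens[i + 1:i + 4]):
            for side, cnt in counts.items():
                if obj in cnt:
                    features[f"{side}:{obj}:eq"] = float(cnt[obj] == n)
                    features[f"{side}:{obj}:ge"] = float(cnt[obj] >= n)
            total = counts["left"][obj] + counts["right"][obj]
            if total > 0:
                features[f"both:{obj}:eq"] = float(total == n)
                features[f"both:{obj}:ge"] = float(total >= n)
    return features
```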
The layout predictor may be trained: (a) using the formal programs used to generate synthetic sentences (e.g., in CLEVR), (b) using heuristically generated layouts from syntactic structures, or (c) jointly with the neural modules with latent layouts. Because sentences in NLVR2 are human-written, no supervised formal programs are available at training time. We use two methods that do not require such formal programs: end-to-end neural module networks (N2NMN; Hu et al., 2017) and feature-wise linear modulation (FiLM; Perez et al., 2018). For N2NMN, we evaluate three learning methods: (a) N2NMN-CLONING: using supervised learning with gold layouts; (b) N2NMN-TUNE: using policy search after cloning; and (c) N2NMN-RL: using policy search from scratch. For N2NMN-CLONING, we construct layouts from constituency trees (Cirik et al., 2018). Finally, we evaluate the Memory, Attention, and Composition approach (MAC; Hudson and Manning, 2018), which uses a sequence of attention-based steps. We modify N2NMN, FiLM, and MAC to process a pair of images by extracting image features from the concatenation of the pair.
7 Experiments and Results
We use two metrics: accuracy and consistency. Accuracy measures the per-example prediction accuracy. Consistency measures the proportion of unique sentences for which predictions are correct for all paired images (Goldman et al., 2018). For training and development results, we report mean and standard deviation of accuracy and consistency over three trials as µacc±σacc/µcons±σcons. The results on the test sets are generated by evaluating the model that achieved the highest accuracy on the development set. For the N2NMN methods, we report test results only for the best of the three variants on the development set.10
Table 6: Performance (accuracy/consistency) on NLVR2.
System | Train | Dev | Test-P | Test-U
MAJORITY (assign True) | 50.8/2.1 | 50.9/3.9 | 51.1/4.2 | 51.4/4.6
TEXT | 50.8±0.0/2.1±0.0 | 50.9±0.0/3.9±0.0 | 51.1/4.2 | 51.4/4.6
IMAGE | 60.1±2.9/14.2±4.2 | 51.6±0.2/8.4±0.8 | 51.9/7.4 | 51.9/7.1
CNN+RNN | 94.3±3.3/84.5±10.2 | 53.4±0.4/12.2±0.7 | 52.4/11.0 | 53.2/11.2
MAXENT | 89.4/73.4 | 54.1/11.4 | 54.8/11.5 | 53.5/12.0
N2NMN (Hu et al., 2017):
N2NMN-CLONING | 65.7±25.8/30.8±49.7 | 50.2±1.0/5.7±3.1 | – | –
N2NMN-TUNE | 96.5±1.6/94.9±0.4 | 50.0±0.7/9.8±0.5 | – | –
N2NMN-RL | 50.8±0.3/2.3±0.3 | 51.0±0.1/4.1±0.3 | 51.1/5.0 | 51.5/5.0
FiLM (Perez et al., 2018) | 69.0±16.9/32.4±29.6 | 51.0±0.4/10.3±1.0 | 52.1/9.8 | 53.0/10.6
MAC (Hudson and Manning, 2018) | 87.4±0.8/64.0±1.7 | 50.8±0.6/11.0±0.2 | 51.4/11.4 | 51.2/11.2
HUMAN | – | 96.2±2.1/– | 96.3±2.9/– | 96.1±3.1/–
Table 6 shows results for NLVR2. MAJORITY results demonstrate the data is fairly balanced. The results are slightly higher than perfect balance due to pruning (Sections 3.3 and 3.4). The TEXT and IMAGE baselines perform similar to MAJORITY, showing that both modalities are required to solve the task. TEXT shows identical performance to MAJORITY because of how the data is balanced. The best performing system is the feature-based MAXENT with the highest accuracy and consistency. FiLM performs best of the visual reasoning methods. Both FiLM and MAC show relatively high consistency. While almost all visual reasoning methods are able to fit the data, an indication of their high learning capacity, all generalize poorly. An exception is N2NMN-RL, which fails to fit the data, most likely due to the difficult task of policy learning from scratch. We also experimented with recent contextualized word embeddings to study the potential of stronger language models.
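The two metrics just defined are straightforward to compute from per-example correctness grouped by sentence. The minimal sketch below is an illustration; the input format is assumed.

```python
from collections import defaultdict

def accuracy_and_consistency(results):
    """results: iterable of (sentence_id, correct) pairs, one per
    (sentence, image-pair) example."""
    by_sentence = defaultdict(list)
    for sentence_id, correct in results:
        by_sentence[sentence_id].append(bool(correct))
    n_examples = sum(len(v) for v in by_sentence.values())
    accuracy = sum(sum(v) for v in by_sentence.values()) / n_examples
    # A sentence is consistent only if every image pair it appears with is correct.
    consistency = sum(all(v) for v in by_sentence.values()) / len(by_sentence)
    return accuracy, consistency

# One sentence paired with four image pairs, three predicted correctly:
print(accuracy_and_consistency([("s1", True), ("s1", True), ("s1", True), ("s1", False)]))
# -> (0.75, 0.0)
```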
We used a 12-layer uncased pre-trained BERT model (Devlin et al., 2019) with FiLM. We observed BERT provides no benefit, and therefore use the default embedding method for each model. 8 Conclusion We introduce the NLVR2 corpus for studying semantically-rich joint reasoning about photographs and natural language captions. Our fo10For reference, we also provide NLVR results in Table 11, Appendix D. cus on visually complex, natural photographs and human-written captions aims to reflect the challenges of compositional visual reasoning better than existing corpora. Our analysis shows that the language contains a wide range of linguistic phenomena including numerical expressions, quantifiers, coreference, and negation. This demonstrates how our focus on complex visual stimuli and data collection procedure result in compositional and diverse language. We experiment with baseline approaches and several methods for visual reasoning, which result in relatively low performance on NLVR2. These results and our analysis exemplify the challenge that NLVR2 introduces to methods for visual reasoning. We release training, development, and public test sets, and provide scripts to break down performance on the 800 examples we manually analyzed (Section 4) according to the analysis categories. Procedures for evaluating on the unreleased test set and a leaderboard are available at http://lic.nlp.cornell.edu/nlvr/. Acknowledgments This research was supported by the NSF (CRII1656998), a Google Faculty Award, a Facebook ParlAI Research Award, an AI2 Key Scientific Challenge Award, Amazon Cloud Credits Grant, and support from Women in Technology New York. This material is based on work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1650441. We thank Mark Yatskar, Noah Snavely, and Valts Blukis for their comments and suggestions, the workers who participated in our data collection for their contributions, and the anonymous reviewers for their feedback. References Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don’t just assume; look and answer: Overcoming priors for visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4971–4980. Aishwarya Agrawal, Aniruddha Kembhavi, Dhruv Batra, and Devi Parikh. 2017. C-VQA: A compositional split of the visual question answering (VQA) v1.0 dataset. CoRR, abs/1704.08243. Malihe Alikhani and Matthew Stone. 2019. "Caption" as a coherence relation: Evidence and implications. In Proceedings of the Workshop on Shortcomings in Vision and Language. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 39–48. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In IEEE International Conference on Computer Vision, pages 2425–2433. Wenhu Chen, Aurélien Lucchi, and Thomas Hofmann. 2016. Bootstrap, review, decode: Using out-ofdomain textual data to improve image captioning. CoRR, abs/1611.05321. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325. Volkan Cirik, Taylor Berg-Kirkpatrick, and LouisPhilippe Morency. 2018. Using syntax to ground referring expressions in natural images. 
In AAAI Conference on Artificial Intelligence. Anne Cocos, Aaron Masino, Ting Qian, Ellie Pavlick, and Chris Callison-Burch. 2015. Effectively crowdsourcing radiology report annotations. In Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis, pages 109– 114. Jia Deng, Olga Russakovsky, Jonathan Krause, Michael S. Bernstein, Alex Berg, and Li Fei-Fei. 2014. Scalable multi-label annotation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 3099–3102. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14:179–211. Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, and Margaret Mitchell. 2015. A survey of current datasets for vision and language research. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 207– 213. Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Dollár, and Kaiming He. 2018. Detectron. https://github.com/ facebookresearch/detectron. Omer Goldman, Veronica Latcinnik, Ehud Nave, Amir Globerson, and Jonathan Berant. 2018. Weakly supervised semantic parsing with abstract examples. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1809– 1819. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pages 6325–6334. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. 2017. Mask R-CNN. In IEEE International Conference on Computer Vision, pages 2980–2988. Hexiang Hu, Ishan Misra, and Laurens van der Maaten. 2019. Binary image selection (BISON): Interpretable evaluation of visual grounding. CoRR, abs/1901.06595. Ronghang Hu, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2018. Explainable neural computation via stack neural module networks. In European Conference on Computer Vision. Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In IEEE International Conference on Computer Vision, pages 804–813. Drew A. Hudson and Christopher D. Manning. 2018. Compositional attention networks for machine reasoning. In Proceedings of the International Conference on Learning Representations. Drew A. Hudson and Christopher D. Manning. 2019. GQA: a new dataset for compositional question answering over real-world images. In IEEE Conference on Computer Vision and Pattern Recognition. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2017a. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1988–1997. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2017b. Inferring and executing programs for visual reasoning. In IEEE International Conference on Computer Vision, pages 3008–3017. 
Samira Ebrahimi Kahou, Adam Atkinson, Vincent Michalski, Ákos Kádár, Adam Trischler, and Yoshua Bengio. 2018. FigureQA: An annotated figure dataset for visual reasoning. In Proceedings of the International Conference on Learning Representations. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 787–798. Alexander Kuhnle and Ann A. Copestake. 2017. ShapeWorld a new test methodology for multimodal language understanding. CoRR, abs/1704.04517. J. Richard Landis and Gary Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33 1:159–74. Yining Li, Chen Huang, Xiaoou Tang, and Chen Change Loy. 2017. Learning to disambiguate by asking discriminative questions. In IEEE International Conference on Computer Vision, pages 3439–3448. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In IEEE Conference on Computer Vision and Pattern Recognition, pages 11–20. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. George A. Miller. 1993. WordNet: A lexical database for English. In Proceedings of the Workshop on Human Language Technology, pages 409–409. Margaret Mitchell, Kees van Deemter, and Ehud Reiter. 2010. Natural reference to objects in a visual domain. In Proceedings of the International Natural Language Generation Conference. Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. 2018. FiLM: Visual reasoning with a general conditioning layer. In AAAI Conference on Artificial Intelligence. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3):211–252. Adam Santoro, David Raposo, David G.T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems, pages 4967–4976. Joseph Suarez, Justin Johnson, and Fei-Fei Li. 2018. DDRprog: A CLEVR differentiable dynamic reasoning programmer. CoRR, abs/1803.11361. Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 217–223. Hao Tan and Mohit Bansal. 2018. Object ordering with bidirectional matchings for visual reasoning. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 444–451. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. CoRR, abs/1901.06706. Robert Guangyu Yang, Igor Ganichev, Xiao Jing Wang, Jonathon Shlens, and David Sussillo. 2018. 
A dataset and architecture for visual reasoning with a working memory. In European Conference on Computer Vision. Yiqun Yao, Jiaming Xu, Feng Wang, and Bo Xu. 2018. Cascaded mutual modulation for visual reasoning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 975–980. Association for Computational Linguistics. Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. In Advances in Neural Information Processing Systems, pages 1031–1042. Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and yang: Balancing and answering binary visual questions. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5014–5022. C. Lawrence Zitnick and Devi Parikh. 2013. Bringing semantics into focus using visual abstraction. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3009–3016.
2019
644
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6429–6441 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6429 Learning to Discover, Ground and Use Words with Segmental Neural Language Models Kazuya Kawakami♠♣Chris Dyer♣Phil Blunsom♠♣ ♠Department of Computer Science, University of Oxford, Oxford, UK ♣DeepMind, London, UK {kawakamik,cdyer,pblunsom}@google.com Abstract We propose a segmental neural language model that combines the generalization power of neural networks with the ability to discover word-like units that are latent in unsegmented character sequences. In contrast to previous segmentation models that treat word segmentation as an isolated task, our model unifies word discovery, learning how words fit together to form sentences, and, by conditioning the model on visual context, how words’ meanings ground in representations of nonlinguistic modalities. Experiments show that the unconditional model learns predictive distributions better than character LSTM models, discovers words competitively with nonparametric Bayesian word segmentation models, and that modeling language conditional on visual context improves performance on both. 1 Introduction How infants discover words that make up their first language is a long-standing question in developmental psychology (Saffran et al., 1996). Machine learning has contributed much to this discussion by showing that predictive models of language are capable of inferring the existence of word boundaries solely based on statistical properties of the input (Elman, 1990; Brent and Cartwright, 1996; Goldwater et al., 2009). However, there are two serious limitations of current models of word learning in the context of the broader problem of language acquisition. First, language acquisition involves not only learning what words there are (“the lexicon”), but also how they fit together (“the grammar”). Unfortunately, the best language models, measured in terms of their ability to predict language (i.e., those which seem acquire grammar best), segment quite poorly (Chung et al., 2017; Wang et al., 2017; Kádár et al., 2018), while the strongest models in terms of word segmentation (Goldwater et al., 2009; Berg-Kirkpatrick et al., 2010) do not adequately account for the long-range dependencies that are manifest in language and that are easily captured by recurrent neural networks (Mikolov et al., 2010). Second, word learning involves not only discovering what words exist and how they fit together grammatically, but also determining their non-linguistic referents, that is, their grounding. The work that has looked at modeling acquisition of grounded language from character sequences— usually in the context of linking words to a visually experienced environment—has either explicitly avoided modeling word units (Gelderloos and Chrupała, 2016) or relied on high-level representations of visual context that overly simplify the richness and ambiguity of the visual signal (Johnson et al., 2010; Räsänen and Rasilo, 2015). In this paper, we introduce a single model that discovers words, learns how they fit together (not just locally, but across a complete sentence), and grounds them in learned representations of naturalistic non-linguistic visual contexts. 
We argue that such a unified model is preferable to a pipeline model of language acquisition (e.g., a model where words are learned by one character-aware model, and then a full-sentence grammar is acquired by a second language model using the words predicted by the first). Our preference for the unified model may be expressed in terms of basic notions of simplicity (we require one model rather than two), and in terms of the Continuity Hypothesis of Pinker (1984), which argues that we should assume, absent strong evidence to the contrary, that children have the same cognitive systems as adults, and differences are due to them having set their parameters differently/immaturely. In §2 we introduce a neural model of sentences that explicitly discovers and models word-like units from completely unsegmented sequences of characters. Since it is a model of complete sentences 6430 (rather than just a word discovery model), and it can incorporate multimodal conditioning context (rather than just modeling language unconditionally), it avoids the two continuity problems identified above. Our model operates by generating text as a sequence of segments, where each segment is generated either character-by-character from a sequence model or as a single draw from a lexical memory of multi-character units. The segmentation decisions and decisions about how to generate words are not observed in the training data and marginalized during learning using a dynamic programming algorithm (§3). Our model depends crucially on two components. The first is, as mentioned, a lexical memory. This lexicon stores pairs of a vector (key) and a string (value) the strings in the lexicon are contiguous sequences of characters encountered in the training data; and the vectors are randomly initialized and learned during training. The second component is a regularizer (§4) that prevents the model from overfitting to the training data by overusing the lexicon to account for the training data.1 Our evaluation (§5–§7) looks at both language modeling performance and the quality of the induced segmentations, in both unconditional (sequence-only) contexts and when conditioning on a related image. First, we look at the segmentations induced by our model. We find that these correspond closely to human intuitions about word segments, competitive with the best existing models for unsupervised word discovery. Importantly, these segments are obtained in models whose hyperparameters are tuned to optimize validation (held-out) likelihood, whereas tuning the hyperparameters of our benchmark models using held-out likelihood produces poor segmentations. Second, we confirm findings (Kawakami et al., 2017; Mielke and Eisner, 2018) that show that word segmentation information leads to better language models compared to pure character models. However, in contrast to previous work, we realize this performance improvement without having to observe the segment boundaries. Thus, our model may be applied straightforwardly to Chinese, where word boundaries are not part of the orthography. 1Since the lexical memory stores strings that appear in the training data, each sentence could, in principle, be generated as a single lexical unit, thus the model could fit the training data perfectly while generalizing poorly. The regularizer penalizes based on the expectation of the powered length of each segment, preventing this degenerate solution from being optimal. 
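Concretely, the lexical memory's string values are later fixed (§2.1) to frequent substrings of the training data. A minimal sketch of collecting such a candidate set is shown below; it is an illustration under that description, not the released code, and L and F are the hyperparameters tuned in the paper.

```python
from collections import Counter

def lexicon_values(training_sentences, max_len=10, min_freq=100):
    """Collect substrings of length 2..max_len occurring at least min_freq
    times in the unsegmented training corpus; these become the memory values."""
    counts = Counter()
    for sentence in training_sentences:
        chars = sentence.strip()
        for i in range(len(chars)):
            for j in range(i + 2, min(i + max_len, len(chars)) + 1):
                counts[chars[i:j]] += 1
    return [s for s, c in counts.items() if c >= min_freq]
```

Each surviving string is then paired with a randomly initialized key vector that is learned during training.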
Ablation studies demonstrate that both the lexicon and the regularizer are crucial for good performance, particularly in word segmentation—removing either or both significantly harms performance. In a final experiment, we learn to model language that describes images, and we find that conditioning on visual context improves segmentation performance in our model (compared to the performance when the model does not have access to the image). On the other hand, in a baseline model that predicts boundaries based on entropy spikes in a character-LSTM, making the image available to the model has no impact on the quality of the induced segments, demonstrating again the value of explicitly including a word lexicon in the language model.
2 Model
We now describe the segmental neural language model (SNLM). Refer to Figure 1 for an illustration.
[Figure 1: Fragment of the segmental neural language model while evaluating the marginal likelihood of a sequence. At the indicated time, the model has generated the sequence Canyou, and four possible continuations are shown.]
The SNLM generates a character sequence x = x1, . . . , xn, where each xi is a character in a finite character set Σ. Each sequence x is the concatenation of a sequence of segments s = s1, . . . , s|s| where |s| ≤ n measures the length of the sequence in segments and each segment si ∈ Σ+ is a sequence of characters, si,1, . . . , si,|si|. Intuitively, each si corresponds to one word. Let π(s1, . . . , si) represent the concatenation of the characters of the segments s1 to si, discarding segmentation information; thus x = π(s). For example, if x = anapple, the underlying segmentation might be s = an apple (with s1 = an and s2 = apple), or s = a nap ple, or any of the 2^{|x|−1} segmentation possibilities for x. The SNLM defines the distribution over x as the marginal distribution over all segmentations that give rise to x, i.e.,
p(x) = Σ_{s : π(s)=x} p(s).   (1)
To define the probability of p(s), we use the chain rule, rewriting this in terms of a product of the series of conditional probabilities, p(st | s<t). The process stops when a special end-sequence segment ⟨/S⟩ is generated. To ensure that the summation in Eq. 1 is tractable, we assume the following:
p(st | s<t) ≈ p(st | π(s<t)) = p(st | x<t),   (2)
which amounts to a conditional semi-Markov assumption—i.e., non-Markovian generation happens inside each segment, but the segment generation probability does not depend on memory of the previous segmentation decisions, only upon the sequence of characters π(s<t) corresponding to the prefix character sequence x<t. This assumption has been employed in a number of related models to permit the use of LSTMs to represent rich history while retaining the convenience of dynamic programming inference algorithms (Wang et al., 2017; Ling et al., 2017; Graves, 2012).
2.1 Segment generation
We model p(st | x<t) as a mixture of two models, one that generates the segment using a sequence model and the other that generates multi-character sequences as a single event. Both are conditional on a common representation of the history, as is the mixture proportion.
Representing history To represent x<t, we use an LSTM encoder to read the sequence of characters, where each character type σ ∈ Σ has a learned vector embedding vσ. Thus the history representation at time t is ht = LSTMenc(vx1, . . . , vxt).
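A minimal PyTorch sketch of the history encoder just described is given below; the embedding and hidden dimensions are placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Character-level LSTM producing the history representation h_t."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # v_sigma for each character type
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (batch, t) integer ids for the prefix x_1 ... x_t
        outputs, _ = self.lstm(self.embed(char_ids))
        return outputs[:, -1, :]                         # h_t = LSTM_enc(v_x1, ..., v_xt)

h_t = HistoryEncoder(vocab_size=30)(torch.randint(0, 30, (1, 6)))
print(h_t.shape)   # torch.Size([1, 128])
```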
This corresponds to the standard history representation for a character-level language model, although in general, we assume that our modelled data is not delimited by whitespace.
Character-by-character generation The first component model, pchar(st | ht), generates st by sampling a sequence of characters from an LSTM language model over Σ and two extra special symbols, an end-of-word symbol ⟨/W⟩ ∉ Σ and the end-of-sequence symbol ⟨/S⟩ discussed above. The initial state of the LSTM is a learned transformation of ht, the initial cell is 0, and different parameters than the history encoding LSTM are used. During generation, each letter that is sampled (i.e., each st,i) is fed back into the LSTM in the usual way and the probability of the character sequence decomposes according to the chain rule. The end-of-sequence symbol can never be generated in the initial position.
Lexical generation The second component model, plex(st | ht), samples full segments from lexical memory. Lexical memory is a key-value memory containing M entries, where each key, ki, a vector, is associated with a value vi ∈ Σ+. The generation probability of st is defined as
h′t = MLP(ht)
m = softmax(K h′t + b)
plex(st | ht) = Σ_{i=1}^{M} mi [vi = st],
where [vi = st] is 1 if the ith value in memory is st and 0 otherwise, and K is a matrix obtained by stacking the k⊤i ’s. This generation process assigns zero probability to most strings, but the alternate character model can generate all of Σ+. In this work, we fix the vi’s to be subsequences of at least length 2, and up to a maximum length L, that are observed at least F times in the training data. These values are tuned as hyperparameters (see Appendix C for details of the experiments).
Mixture proportion The mixture proportion, gt, determines how likely the character generator is to be used at time t (the lexicon is used with probability 1 − gt). It is defined as gt = σ(MLP(ht)).
Total segment probability The total generation probability of st is thus
p(st | x<t) = gt pchar(st | ht) + (1 − gt) plex(st | ht).
3 Inference
We are interested in two inference questions: first, given a sequence x, evaluate its (log) marginal likelihood; second, given x, find the most likely decomposition into segments s∗.
Marginal likelihood To efficiently compute the marginal likelihood, we use a variant of the forward algorithm for semi-Markov models (Yu, 2010), which incrementally computes a sequence of probabilities, αi, where αi is the marginal likelihood of generating x≤i and concluding a segment at time i. Although there are an exponential number of segmentations of x, these values can be computed using O(|x|) space and O(|x|^2) time as:
α0 = 1,   αt = Σ_{j=t−L}^{t−1} αj p(s = xj:t | x<j).   (3)
By letting xt+1 = ⟨/S⟩, then p(x) = αt+1.
Most probable segmentation The most probable segmentation of a sequence x can be computed by replacing the summation with a max operator in Eq. 3 and maintaining backpointers.
4 Expected length regularization
When the lexical memory contains all the substrings in the training data, the model easily overfits by copying the longest continuation from the memory. To prevent overfitting, we introduce a regularizer that penalizes based on the expectation of the exponentiated (by a hyperparameter β) length of each segment:
R(x, β) = Σ_{s : π(s)=x} p(s | x) Σ_{s∈s} |s|^β.
This can be understood as a regularizer based on the double exponential prior identified to be effective in previous work (Liang and Klein, 2009; Berg-Kirkpatrick et al., 2010).
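The recursion in Eq. 3 and its Viterbi variant are compact enough to sketch directly. In the toy code below, segment_prob stands in for the mixture p(st | x<t) defined in §2.1 (here a hypothetical fixed lexicon with a weak character-level fallback, purely for illustration); the same forward recursion is what the expectation semiring extends to compute R(x, β).

```python
def marginal_likelihood(x, segment_prob, max_len):
    """Forward algorithm of Eq. 3; alpha[t] is the marginal probability of
    generating x[:t] and ending a segment at position t."""
    alpha = [1.0] + [0.0] * len(x)
    for t in range(1, len(x) + 1):
        for j in range(max(0, t - max_len), t):
            alpha[t] += alpha[j] * segment_prob(x[:j], x[j:t])
    return alpha[len(x)]          # the full model also multiplies in p(</S> | x)

def viterbi_segmentation(x, segment_prob, max_len):
    """Same recursion with max instead of sum, keeping backpointers."""
    best = [1.0] + [0.0] * len(x)
    back = [0] * (len(x) + 1)
    for t in range(1, len(x) + 1):
        for j in range(max(0, t - max_len), t):
            score = best[j] * segment_prob(x[:j], x[j:t])
            if score > best[t]:
                best[t], back[t] = score, j
    segments, t = [], len(x)
    while t > 0:
        segments.append(x[back[t]:t])
        t = back[t]
    return segments[::-1]

# Toy stand-in for the segment model: a tiny lexicon plus a character fallback.
LEXICON = {"an": 0.2, "apple": 0.1, "a": 0.1, "nap": 0.05}
def toy_segment_prob(history, segment):
    return LEXICON.get(segment, 0.0) + 0.01 ** len(segment)

print(marginal_likelihood("anapple", toy_segment_prob, max_len=6))
print(viterbi_segmentation("anapple", toy_segment_prob, max_len=6))   # ['an', 'apple']
```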
This expectation is a differentiable function of the model parameters. Because of the linearity of the penalty across segments, it can be computed efficiently using the above dynamic programming algorithm under the expectation semiring (Eisner, 2002). This is particularly efficient since the expectation semiring jointly computes the expectation and marginal likelihood in a single forward pass. For more details about computing gradients of expectations under distributions over structured objects with dynamic programs and semirings, see Li and Eisner (2009).
4.1 Training Objective
The model parameters are trained by minimizing the penalized log likelihood of a training corpus D of unsegmented sentences,
L = Σ_{x∈D} [−log p(x) + λ R(x, β)].
5 Datasets
We evaluate our model on both English and Chinese segmentation. For both languages, we used standard datasets for word segmentation and language modeling. We also use MS-COCO to evaluate how the model can leverage conditioning context information. For all datasets, we used train, validation and test splits.2 Since our model assumes a closed character set, we removed validation and test samples which contain characters that do not appear in the training set. In the English corpora, whitespace characters are removed. In Chinese, they are not present to begin with. Refer to Appendix A for dataset statistics.
2 The data and splits used are available at https://s3.eu-west-2.amazonaws.com/k-kawakami/seg.zip.
5.1 English
Brent Corpus The Brent corpus is a standard corpus used in statistical modeling of child language acquisition (Brent, 1999; Venkataraman, 2001).3 The corpus contains transcriptions of utterances directed at 13- to 23-month-old children. The corpus has two variants: an orthographic one (BR-text) and a phonemic one (BR-phono), where each character corresponds to a single English phoneme. As the Brent corpus does not have a standard train and test split, and we want to tune the parameters by measuring the fit to held-out data, we used the first 80% of the utterances for training and the next 10% for validation and the rest for test.
3 https://childes.talkbank.org/derived
English Penn Treebank (PTB) We use the commonly used version of the PTB prepared by Mikolov et al. (2010). However, since we removed space symbols from the corpus, our cross entropy results cannot be compared to those usually reported on this dataset.
5.2 Chinese
Since Chinese orthography does not mark spaces between words, there have been a number of efforts to annotate word boundaries. We evaluate against two corpora that have been manually segmented according to different segmentation standards.
Beijing University Corpus (PKU) The Beijing University Corpus was one of the corpora used for the International Chinese Word Segmentation Bakeoff (Emerson, 2005).
Chinese Penn Treebank (CTB) We use the Penn Chinese Treebank Version 5.1 (Xue et al., 2005). It generally has a coarser segmentation than PKU (e.g., in CTB a full name, consisting of a given name and family name, is a single token), and it is a larger corpus.
5.3 Image Caption Dataset
To assess whether jointly learning about meanings of words from non-linguistic context affects segmentation performance, we use image and caption pairs from the COCO caption dataset (Lin et al., 2014). We use 10,000 examples for both training and testing and we only use one reference per image. The images are used as conditioning context to predict the captions. Refer to Appendix B for the dataset construction process.
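The corpus preparation described in §5 (a closed character vocabulary and, for English, whitespace removal) is easy to reproduce. The sketch below is an illustration of that preprocessing, not the released scripts.

```python
def prepare_corpora(train_lines, eval_lines, remove_whitespace=True):
    """Strip whitespace (for the English corpora) and drop evaluation sentences
    containing characters that never appear in the training set."""
    def clean(line):
        return "".join(line.split()) if remove_whitespace else line.strip()

    train = [clean(l) for l in train_lines if clean(l)]
    charset = set("".join(train))
    evaluation = [clean(l) for l in eval_lines if clean(l) and set(clean(l)) <= charset]
    return train, evaluation, charset
```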
6 Experiments
We compare our model to benchmark Bayesian models, which are currently the best known unsupervised word discovery models, as well as to a simple deterministic segmentation criterion based on surprisal peaks (Elman, 1990), on language modeling and segmentation performance. Although the Bayesian models have been shown to be able to discover plausible word-like units, we found that a set of hyperparameters that provides the best performance with such models on language modeling does not produce good structures as reported in previous works. This is problematic since there are no objective criteria for finding hyperparameters in a fully unsupervised manner when the model is applied to completely unknown languages or domains. Thus, our experiments are designed to assess how well the models infer word segmentations of unsegmented inputs when they are trained and tuned to maximize the likelihood of the held-out text.
DP/HDP Benchmarks Among the most effective existing word segmentation models are those based on hierarchical Dirichlet process (HDP) models (Goldwater et al., 2009; Teh et al., 2006) and hierarchical Pitman–Yor processes (Mochihashi et al., 2009). As a representative of these, we use a simple bigram HDP model:
θ· ∼ DP(α0, p0)
θ·|s ∼ DP(α1, θ·)   ∀s ∈ Σ∗
st+1 | st ∼ Categorical(θ·|st).
The base distribution, p0, is defined over strings in Σ∗ ∪ {⟨/S⟩} by deciding with a specified probability to end the utterance, a geometric length model, and a uniform probability over Σ at each position. Intuitively, it captures the preference for having short words in the lexicon. In addition to the HDP model, we also evaluate a simpler single Dirichlet process (DP) version of the model, in which the st’s are generated directly as draws from Categorical(θ·). We use an empirical Bayesian approach to select hyperparameters based on the likelihood assigned by the inferred posterior to a held-out validation set. Refer to Appendix D for details on inference.
Deterministic Baselines Incremental word segmentation is inherently ambiguous (e.g., the letters the might be a single word, or they might be the beginning of the longer word theater). Nevertheless, several deterministic functions of prefixes have been proposed in the literature as strategies for discovering rudimentary word-like units hypothesized to be useful for bootstrapping the lexical acquisition process or for improving a model’s predictive accuracy. These range from surprisal criteria (Elman, 1990) to sophisticated language models that switch between models that capture intra- and inter-word dynamics based on deterministic functions of prefixes of characters (Chung et al., 2017; Shen et al., 2018). In our experiments, we also include such deterministic segmentation results using (1) the surprisal criterion of Elman (1990) and (2) a two-level hierarchical multiscale LSTM (Chung et al., 2017), which has been shown to predict boundaries in whitespace-containing character sequences at positions corresponding to word boundaries. As with all experiments in this paper, the BR-corpora for this experiment do not contain spaces.
SNLM Model configurations and Evaluation LSTMs had 512 hidden units with parameters learned using the Adam update rule (Kingma and Ba, 2015). We evaluated our models with bits-per-character (bpc) and segmentation accuracy (Brent, 1999; Venkataraman, 2001; Goldwater et al., 2009). Refer to Appendices C–F for details of model configurations and evaluation metrics.
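As a reference point for the deterministic baselines above, the surprisal criterion can be stated in a few lines: hypothesize a boundary where the per-character surprisal of a language model spikes. The exact peak definition and any thresholding are not specified here and are assumptions for illustration.

```python
import math

def surprisal_boundaries(char_probs):
    """char_probs[i] = LM probability of character x[i] given x[:i].
    Return positions i where a boundary is hypothesized before x[i],
    here taken to be local rises in surprisal."""
    surprisal = [-math.log2(p) for p in char_probs]
    return [i for i in range(1, len(surprisal)) if surprisal[i] > surprisal[i - 1]]
```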
For the image caption dataset, we extend the model with a standard attention mechanism in the backbone LSTM (LSTMenc) to incorporate image context. For every character input, the model calculates attention over image features and uses them to predict the next characters. As for image representations, we use features from the last convolution layer of a pre-trained VGG19 model (Simonyan and Zisserman, 2014).
7 Results
In this section, we first do a careful comparison of segmentation performance on the phonemic Brent corpus (BR-phono) across several different segmentation baselines, and we find that our model obtains competitive segmentation performance. Additionally, ablation experiments demonstrate that both lexical memory and the proposed expected length regularization are necessary for inferring good segmentations. We then show that also on other corpora, we likewise obtain segmentations better than baseline models. Finally, we also show that our model has superior performance, in terms of held-out perplexity, compared to a character-level LSTM language model. Thus, overall, our results show that we can obtain good segmentations on a variety of tasks, while still having very good language modeling performance.
Word Segmentation (BR-phono) Table 1 summarizes the segmentation results on the widely used BR-phono corpus, comparing it to a variety of baselines. Unigram DP, Bigram HDP, LSTM surprisal and HMLSTM refer to the benchmark models explained in §6. The ablated versions of our model show that without the lexicon (−memory), without the expected length penalty (−length), and without either, our model fails to discover good segmentations. Furthermore, we draw attention to the difference in the performance of the HDP and DP models when using subjective settings of the hyperparameters and the empirical settings (likelihood). Finally, the deterministic baselines are interesting in two ways. First, LSTM surprisal is a remarkably good heuristic for segmenting text (although we will see below that its performance is much less good on other datasets). Second, despite careful tuning, the HMLSTM of Chung et al. (2017) fails to discover good segments, although in their paper they show that when spaces are present between words, HMLSTMs learn to switch between their internal models in response to them. Furthermore, the priors used in the DP/HDP models were tuned to maximize the likelihood assigned to the validation set by the inferred posterior predictive distribution, in contrast to previous papers which either set them subjectively or inferred them (Johnson and Goldwater, 2009). For example, the DP and HDP models with subjective priors obtained 53.8 and 72.3 F1 scores, respectively (Goldwater et al., 2009). However, when the hyperparameters are set to maximize held-out likelihood, they obtained 56.1 and 56.9. Another result on this dataset is the feature unigram model of Berg-Kirkpatrick et al. (2010), which obtains an 88.0 F1 score with hand-crafted features and by selecting the regularization strength to optimize segmentation performance. Once the features are removed, the model achieved a 71.5 F1 score when it is tuned on segmentation performance and only 11.5 when it is tuned on held-out likelihood.
Table 1: Summary of segmentation performance on the phoneme version of the Brent Corpus (BR-phono).
Model | P | R | F1
LSTM surprisal (Elman, 1990) | 54.5 | 55.5 | 55.0
HMLSTM (Chung et al., 2017) | 8.1 | 13.3 | 10.1
Unigram DP | 63.3 | 50.4 | 56.1
Bigram HDP | 53.0 | 61.4 | 56.9
SNLM (−memory, −length) | 54.3 | 34.9 | 42.5
SNLM (+memory, −length) | 52.4 | 36.8 | 43.3
SNLM (−memory, +length) | 57.6 | 43.4 | 49.5
SNLM (+memory, +length) | 81.3 | 77.5 | 79.3
Word Segmentation (other corpora) Table 2 summarizes results on the BR-text (orthographic Brent corpus) and Chinese corpora. As in the previous section, all the models were trained to maximize held-out likelihood. Here we observe a similar pattern, with the SNLM outperforming the baseline models, despite the tasks being quite different from each other and from the BR-phono task.
Table 2: Summary of segmentation performance on other corpora.
Corpus | Model | P | R | F1
BR-text | LSTM surprisal | 36.4 | 49.0 | 41.7
BR-text | Unigram DP | 64.9 | 55.7 | 60.0
BR-text | Bigram HDP | 52.5 | 63.1 | 57.3
BR-text | SNLM | 68.7 | 78.9 | 73.5
PTB | LSTM surprisal | 27.3 | 36.5 | 31.2
PTB | Unigram DP | 51.0 | 49.1 | 50.0
PTB | Bigram HDP | 34.8 | 47.3 | 40.1
PTB | SNLM | 54.1 | 60.1 | 56.9
CTB | LSTM surprisal | 41.6 | 25.6 | 31.7
CTB | Unigram DP | 61.8 | 49.6 | 55.0
CTB | Bigram HDP | 67.3 | 67.7 | 67.5
CTB | SNLM | 78.1 | 81.5 | 79.8
PKU | LSTM surprisal | 38.1 | 23.0 | 28.7
PKU | Unigram DP | 60.2 | 48.2 | 53.6
PKU | Bigram HDP | 66.8 | 67.1 | 66.9
PKU | SNLM | 75.0 | 71.2 | 73.1
Word Segmentation Qualitative Analysis We show some representative examples of segmentations inferred by various models on the BR-text and PKU corpora in Table 3. As reported in Goldwater et al. (2009), we observe that the DP models tend to undersegment, keeping long frequent sequences together (e.g., they failed to separate articles). HDPs do successfully prevent oversegmentation; however, we find that when trained to optimize held-out likelihood, they often insert unnecessary boundaries between words, such as yo u. Our model’s performance is better, but it likewise shows a tendency to oversegment. Interestingly, we can observe a tendency to put boundaries between morphemes in morphologically complex lexical items such as dumpty ’s, and go ing. Since morphemes are the minimal units that carry meaning in language, this segmentation, while incorrect, is at least plausible. Turning to the Chinese examples, we see that both baseline models fail to discover basic words such as 山间 (mountain) and 人们 (human). Finally, we observe that none of the models successfully segment dates or numbers containing multiple digits (all oversegment). Since number types tend to be rare, they are usually not in the lexicon, meaning our model (and the H/DP baselines) must generate them as character sequences.
Language Modeling Performance The above results show that the SNLM infers good word segmentations. We now turn to the question of how well it predicts held-out data. Table 4 summarizes the results of the language modeling experiments. Again, we see that SNLM outperforms the Bayesian models and a character LSTM. Although there are numerous extensions to LSTMs to improve language modeling performance, LSTMs remain a strong baseline (Melis et al., 2018). One might object that because of the lexicon, the SNLM has many more parameters than the character-level LSTM baseline model. However, unlike parameters in LSTM recurrence which are used every timestep, our memory parameters are accessed very sparsely. Furthermore, we observed that an LSTM with twice the hidden units did not improve the baseline with 512 hidden units on both phonemic and orthographic versions of Brent corpus but the lexicon could.
This result suggests more hidden units are useful if the model does not have enough capacity to fit larger datasets, but that the memory structure adds other dynamics which are not captured by large recurrent networks.
Table 3: Examples of predicted segmentations on English and Chinese.
BR-text:
Reference: are you going to make him pretty this morning
Unigram DP: areyou goingto makehim pretty this morning
Bigram HDP: areyou go ingto make him p retty this mo rn ing
SNLM: are you go ing to make him pretty this morning
Reference: would you like to do humpty dumpty’s button
Unigram DP: wouldyoul iketo do humpty dumpty ’s button
Bigram HDP: would youlike to do humptyd umpty ’s butt on
SNLM: would you like to do humpty dumpty ’s button
PKU:
Reference: 笑声、掌声、欢呼声,在山间回荡,勾起了人们对往事的回忆。
Unigram DP: 笑声、掌声、欢呼声,在山间回荡,勾起了人们对往事的回忆。
Bigram HDP: 笑声、掌声、欢呼声,在山间回荡,勾起了人们对往事的回忆。
SNLM: 笑声、掌声、欢呼声,在山间回荡,勾起了人们对往事的回忆。
Reference: 不得在江河电缆保护区内抛锚、拖锚、炸鱼、挖沙。
Unigram DP: 不得在江河电缆保护区内抛锚、拖锚、炸鱼、挖沙。
Bigram HDP: 不得在江河电缆保护区内抛锚、拖锚、炸鱼、挖沙。
SNLM: 不得在江河电缆保护区内抛锚、拖锚、炸鱼、挖沙。
Table 4: Test language modeling performance (bpc).
Model | BR-text | BR-phono | PTB | CTB | PKU
Unigram DP | 2.33 | 2.93 | 2.25 | 6.16 | 6.88
Bigram HDP | 1.96 | 2.55 | 1.80 | 5.40 | 6.42
LSTM | 2.03 | 2.62 | 1.65 | 4.94 | 6.20
SNLM | 1.94 | 2.54 | 1.56 | 4.84 | 5.89
Multimodal Word Segmentation Finally, we discuss results on word discovery with non-linguistic context (image). Although there is much evidence that neural networks can reliably learn to exploit additional relevant context to improve language modeling performance (e.g. machine translation and image captioning), it is still unclear whether the conditioning context helps to discover structure in the data. We turn to this question here. Table 5 summarizes language modeling and segmentation performance of our model and a baseline character-LSTM language model on the COCO image caption dataset. We use the Elman Entropy criterion to infer the segmentation points from the baseline LM, and the MAP segmentation under our model. Again, we find our model outperforms the baseline model in terms of both language modeling and word segmentation accuracy. Interestingly, we find that while conditioning on image context leads to reductions in perplexity in both models, in our model the presence of the image further improves segmentation accuracy. This suggests that our model and its learning mechanism interact with the conditional context differently than the LSTM does.
Table 5: Language modeling (bpc) and segmentation accuracy on the COCO dataset. +image indicates that the model has access to image context.
Model | bpc↓ | P↑ | R↑ | F1↑
Unigram DP | 2.23 | 44.0 | 40.0 | 41.9
Bigram HDP | 1.68 | 30.9 | 40.8 | 35.1
LSTM (−image) | 1.55 | 31.3 | 38.2 | 34.4
SNLM (−image) | 1.52 | 39.8 | 55.3 | 46.3
LSTM (+image) | 1.42 | 31.7 | 39.1 | 35.0
SNLM (+image) | 1.38 | 46.4 | 62.0 | 53.1
To understand what kind of improvements in segmentation performance the image context leads to, we annotated the tokens in the references with part-of-speech (POS) tags and compared relative improvements on recall between SNLM (−image) and SNLM (+image) among the five POS tags which appear more than 10,000 times. We observed improvements on ADJ (+4.5%), NOUN (+4.1%), and VERB (+3.1%). The improvements on the categories ADP (+0.5%) and DET (+0.3%) were more limited. The categories where we see the largest improvement in recall correspond to those that are likely a priori to correlate most reliably with observable features. Thus, this result is consistent with a hypothesis that the lexicon is successfully acquiring knowledge about how words idiosyncratically link to visual features.
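The part-of-speech breakdown above boils down to measuring token recall separately per POS tag. The sketch below matches tokens by character span; the input format and the choice of tagger are assumptions for illustration, not details from the paper.

```python
from collections import defaultdict

def spans(tokens):
    out, pos = set(), 0
    for tok in tokens:
        out.add((pos, pos + len(tok)))
        pos += len(tok)
    return out

def recall_by_pos(references, predictions, pos_tags):
    """references/predictions: lists of token lists over the same characters;
    pos_tags[i][j] is the POS tag of references[i][j]."""
    hit, total = defaultdict(int), defaultdict(int)
    for ref, pred, tags in zip(references, predictions, pos_tags):
        predicted = spans(pred)
        pos = 0
        for tok, tag in zip(ref, tags):
            span = (pos, pos + len(tok))
            pos += len(tok)
            total[tag] += 1
            hit[tag] += int(span in predicted)
    return {tag: hit[tag] / total[tag] for tag in total}
```

Differencing the dictionaries returned for SNLM (−image) and SNLM (+image) gives per-category improvements of the kind quoted above.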
Segmentation State-of-the-Art The results reported are not the best-reported numbers on the English phoneme or Chinese segmentation tasks. As we discussed in the introduction, previous work has focused on segmentation in isolation from language modeling performance. Models that obtain better segmentations include the adaptor grammars (F1: 87.0) of Johnson and Goldwater (2009) and the feature-unigram model (88.0) of Berg-Kirkpatrick et al. (2010). While these results are better in terms of segmentation, they are weak language models (the feature unigram model is effectively a unigram word model; the adaptor grammar model is effectively a phrasal unigram model; both are incapable of generalizing about substantially non-local dependencies). Additionally, the features and grammars used in prior work reflect certain English-specific design considerations (e.g., syllable structure in the case of adaptor grammars and phonotactic equivalence classes in the feature unigram model), which make them questionable models if the goal is to explore what models and biases enable word discovery in general. For Chinese, the best nonparametric models perform better at segmentation (Zhao and Kit, 2008; Mochihashi et al., 2009), but again they are weaker language models than neural models. The neural model of Sun and Deng (2018) is similar to our model without lexical memory or length regularization; it obtains 80.2 F1 on the PKU dataset; however, it uses gold segmentation data during training and hyperparameter selection,4 whereas our approach requires no gold standard segmentation data.
8 Related Work
Learning to discover and represent temporally extended structures in a sequence is a fundamental problem in many fields. For example, in language processing, unsupervised learning of multiple levels of linguistic structures such as morphemes (Snyder and Barzilay, 2008), words (Goldwater et al., 2009; Mochihashi et al., 2009; Wang et al., 2014) and phrases (Klein and Manning, 2001) has been investigated. Recently, speech recognition has benefited from techniques that enable the discovery of subword units (Chan et al., 2017; Wang et al., 2017); however, in that work, the optimally discovered character sequences look quite unlike orthographic words. In fact, the model proposed by Wang et al. (2017) is essentially our model without a lexicon or the expected length regularization, i.e., (−memory, −length), which we have shown performs quite poorly in terms of segmentation accuracy. Finally, some prior work has also sought to discover lexical units directly from speech based on speech-internal statistical regularities (Kamper et al., 2016), as well as jointly with grounding (Chrupała et al., 2017).
9 Conclusion
Word discovery is a fundamental problem in language acquisition. While work studying the problem in isolation has provided valuable insights (showing both what data is sufficient for word discovery with which models), this paper shows that neural models offer the flexibility and performance to productively study the various facets of the problem in a more unified model.
While this work unifies several components that had previously been 4https://github.com/ Edward-Sun/SLM/blob/ d37ad735a7b1d5af430b96677c2ecf37a65f59b7/ codes/run.py#L329 studied in isolation, our model assumes access to phonetic categories. The development of these categories likely interact with the development of the lexicon and acquisition of semantics (Feldman et al., 2013; Fourtassi and Dupoux, 2014), and thus subsequent work should seek to unify more aspects of the acquisition problem. Acknowledgments We thank Mark Johnson, Sharon Goldwater, and Emmanuel Dupoux, as well as many colleagues at DeepMind, for their insightful comments and suggestions for improving this work and the resulting paper. References Taylor Berg-Kirkpatrick, Alexandre Bouchard-Côté, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proc. NAACL. Michael R Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34(1):71–105. Michael R Brent and Timothy A Cartwright. 1996. Distributional regularity and phonotactic constraints are useful for segmentation. Cognition, 61(1):93–125. William Chan, Yu Zhang, Quoc Le, and Navdeep Jaitly. 2017. Latent sequence decompositions. In Proc. ICLR. Grzegorz Chrupała, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2017. Hierarchical multiscale recurrent neural networks. In Proc. ICLR. Jason Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In Proc. ACL. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proc. SIGHAN Workshop. Naomi H. Feldman, Thomas L. Griffiths, Sharon Goldwater, and James L. Morgan. 2013. A role for the developing lexicon in phonetic category acquisition. Psychological Review, 120(4):751–778. Abdellah Fourtassi and Emmanuel Dupoux. 2014. A rudimentary lexicon and semantics help bootstrap phoneme acquisition. In Proc. EMNLP. 6438 Lieke Gelderloos and Grzegorz Chrupała. 2016. From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning. In Proc. COLING. Sharon Goldwater, Thomas L Griffiths, and Mark Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21–54. Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. Mark Johnson, Katherine Demuth, Michael Frank, and Bevan K. Jones. 2010. Synergies in learning words and their referents. In Proc. NIPS. Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proc. NAACL, pages 317–325. Ákos Kádár, Marc-Alexandre Côté, Grzegorz Chrupała, and Afra Alishahi. 2018. Revisiting the hierarchical multiscale LSTM. In Proc. COLING. Herman Kamper, Aren Jansen, and Sharon Goldwater. 2016. Unsupervised word segmentation and lexicon induction discovery using acoustic word embeddings. IEEE Transactions on Audio, Speech, and Language Processing, 24(4):669–679. Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2017. Learning to create and reuse words in openvocabulary neural language modeling. In Proc. ACL. 
Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR. Dan Klein and Christopher D Manning. 2001. Distributional phrase structure induction. In Workshop Proc. ACL. Zhifei Li and Jason Eisner. 2009. First-and secondorder expectation semirings with applications to minimum-risk training on translation forests. In Proc. EMNLP. Percy Liang and Dan Klein. 2009. Online EM for unsupervised models. In Proc. NAACL. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Proc. ECCV, pages 740–755. Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Koˇciský, Andrew Senior, Fumin Wang, and Phil Blunsom. 2017. Latent predictor networks for code generation. In Proc. ACL. Gábor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. In Proc. ICLR. Sebastian J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. In Proc. NAACL. Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. Interspeech. Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested Pitman–Yor language modeling. Steven Pinker. 1984. Language learnability and language development. Harvard University Press. Okko Räsänen and Heikki Rasilo. 2015. A joint model of word segmentation and meaning acquisition through cross-situational learning. Psychological Review, 122(4):792–829. Jenny R Saffran, Richard N Aslin, and Elissa L Newport. 1996. Statistical learning by 8-month-old infants. Science, 274(5294):1926–1928. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. Neural language modeling by jointly learning syntax and lexicon. In Proc. ICLR. K. Simonyan and A. Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556. Benjamin Snyder and Regina Barzilay. 2008. Unsupervised multilingual learning for morphological segmentation. In Proc. ACL. Zhiqing Sun and Zhi-Hong Deng. 2018. Unsupervised neural word segmentation for Chinese via segmental language modeling. Yee-Whye Teh, Michael I. Jordan, Matthew J. Beal, and Daivd M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. Anand Venkataraman. 2001. A statistical model for word discovery in transcribed speech. Computational Linguistics, 27(3):351–372. Chong Wang, Yining Wan, Po-Sen Huang, Abdelrahman Mohammad, Dengyong Zhou, and Li Deng. 2017. Sequence modeling via segmentations. In Proc. ICML. Xiaolin Wang, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2014. Empirical study of unsupervised Chinese word segmentation methods for SMT on large-scale corpora. 6439 Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The Penn Chinese treebank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. Shun-Zheng Yu. 2010. Hidden semi-Markov models. Artificial Intelligence, 174(2):215–243. Hai Zhao and Chunyu Kit. 2008. An empirical comparison of goodness measures for unsupervised Chinese word segmentation with a unified framework. In Proc. IJCNLP. 6440 A Dataset statistics Table 6 summarizes dataset statistics. 
B Image Caption Dataset Construction We use 8000, 2000 and 10000 images for the train, development and test sets, in order of the integer ids specifying images in cocoapi,5 and use the first annotation provided for each image. We will make pairs of image id and annotation id available from https://s3.eu-west-2.amazonaws.com/k-kawakami/seg.zip.
5 https://github.com/cocodataset/cocoapi
C SNLM Model Configuration For each RNN based model we used 512 dimensions for the character embeddings and the LSTMs have 512 hidden units. All the parameters, including character projection parameters, are randomly sampled from a uniform distribution from −0.08 to 0.08. The initial hidden and memory states of the LSTMs are initialized with zero. A dropout rate of 0.5 was used for all but the recurrent connections. To restrict the size of memory, we stored substrings which appeared F times in the training corpora and tuned F with grid search. The maximum length of subsequences L was tuned on the held-out likelihood using a grid search. Table 7 summarizes the parameters for each dataset. Note that we did not tune the hyperparameters on segmentation quality, to ensure that the models are trained in a purely unsupervised manner assuming no reference segmentations are available.
D DP/HDP Inference By integrating out the draws from the DPs, it is possible to do inference using Gibbs sampling directly in the space of segmentation decisions. We use 1,000 iterations with annealing to find an approximation of the MAP segmentation and then use the corresponding posterior predictive distribution to estimate the held-out likelihood assigned by the model, marginalizing the segmentations using appropriate dynamic programs. The evaluated segmentation was the most probable segmentation according to the posterior predictive distribution. In the original Bayesian segmentation work, the hyperparameters (i.e., α0, α1, and the components of p0) were selected subjectively. To make comparison with our neural models fairer, we instead used an empirical approach and set them using the held-out likelihood of the validation set. However, since this disadvantages the DP/HDP models in terms of segmentation, we also report the original results on the BR corpora.
E Learning The models were trained with the Adam update rule (Kingma and Ba, 2015) with a learning rate of 0.01. The learning rate is divided by 4 if there is no improvement on development data. The maximum norm of the gradients was clipped at 1.0.
F Evaluation Metrics
Language Modeling We evaluated our models with bits-per-character (bpc), a standard evaluation metric for character-level language models. Following the definition in Graves (2013), bits-per-character is the average value of −log2 p(xt | x<t) over the whole test set,
bpc = −(1/|x|) log2 p(x),
where |x| is the length of the corpus in characters. The bpc is reported on the test set.
Segmentation We also evaluated segmentation quality in terms of precision, recall, and F1 of word tokens (Brent, 1999; Venkataraman, 2001; Goldwater et al., 2009). To get credit for a word, the models must correctly identify both the left and right boundaries. For example, if there is a pair of a reference segmentation and a prediction,
Reference: do you see a boy
Prediction: doyou see a boy
then 4 words are discovered in the prediction where the reference has 5 words. 3 words in the prediction match with the reference. In this case, we report scores as precision = 75.0 (3/4), recall = 60.0 (3/5), and F1, the harmonic mean of precision and recall, 66.7 (2/3).
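The worked example corresponds directly to the following scoring function, which credits a predicted token only when both of its boundaries match the reference; this is a standard implementation of the metric rather than the exact evaluation script.

```python
def token_spans(tokens):
    spans, pos = set(), 0
    for tok in tokens:
        spans.add((pos, pos + len(tok)))
        pos += len(tok)
    return spans

def segmentation_prf(reference, prediction):
    """reference and prediction are token lists whose concatenations are equal."""
    ref, pred = token_spans(reference), token_spans(prediction)
    matched = len(ref & pred)
    precision, recall = matched / len(pred), matched / len(ref)
    f1 = 2 * precision * recall / (precision + recall) if matched else 0.0
    return precision, recall, f1

print(segmentation_prf("do you see a boy".split(), "doyou see a boy".split()))
# -> (0.75, 0.6, 0.666...), matching the 75.0 / 60.0 / 66.7 above.
```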
To facilitate comparison with previous work, segmentation results are reported on the union of the training, validation, and test sets.

            Sentences                Char. Types         Word Types              Characters           Average Word Length
           Train  Valid   Test    Train Valid Test    Train  Valid   Test     Train  Valid   Test     Train  Valid  Test
BR-text     7832    979    979       30    30    29    1237    473    475      129k    16k    16k      3.82   4.06  3.83
BR-phono    7832    978    978       51    51    50    1183    457    462      104k    13k    13k      2.86   2.97  2.83
PTB        42068   3370   3761       50    50    48   10000   6022   6049      5.1M   400k   450k      4.44   4.37  4.41
CTB        50734    349    345      160    76    76   60095   1769   1810      3.1M    18k    22k      4.84   5.07  5.14
PKU        17149   1841   1790       90    84    87   52539  13103  11665      2.6M   247k   241k      4.93   4.94  4.85
COCO        8000   2000  10000       50    42    48    4390   2260   5072      417k   104k   520k      4.00   3.99  3.99

Table 6: Summary of Dataset Statistics.

           max len (L)   min freq (F)        λ
BR-text         10             10        7.5e-4
BR-phono        10             10        9.5e-4
PTB             10            100        5.0e-5
CTB              5             25        1.0e-2
PKU              5             25        9.0e-3
COCO            10            100        2.0e-4

Table 7: Hyperparameter values used.
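For concreteness, the sketch below shows one way the memory lexicon implied by these hyperparameters could be assembled: all character substrings up to length L that occur at least F times in the training text. This is only an illustration of the stated restriction, not the authors' implementation; the counting conventions (stripping spaces, counting within sentences) and the variable ptb_train_sentences in the usage comment are assumptions.

```python
from collections import Counter

def build_lexicon(sentences, max_len, min_freq):
    """Collect character substrings of length <= max_len that occur at least
    min_freq times; these are the segments kept in the memory lexicon."""
    counts = Counter()
    for sent in sentences:
        chars = sent.replace(" ", "")   # assumption: count over the raw character stream
        for i in range(len(chars)):
            for n in range(1, max_len + 1):
                if i + n <= len(chars):
                    counts[chars[i:i + n]] += 1
    return {s for s, c in counts.items() if c >= min_freq}

# e.g. the PTB setting from Table 7 (L = 10, F = 100):
# lexicon = build_lexicon(ptb_train_sentences, max_len=10, min_freq=100)
```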
2019
645
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6442–6451 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6442 What Should I Ask? Using Conversationally Informative Rewards for Goal-Oriented Visual Dialogue Pushkar Shuklar1, Carlos Elmadjian2, Richika Sharan1, Vivek Kulkarni3, William Yang Wang1, Matthew Turk1 1University of California, Santa Barbara 2University of S˜ao Paulo 3Stanford University 1{pushkarshukla,richikasharan,wangwilliamyang,mturk}@ucsb.edu 2 [email protected] 3 [email protected] Abstract The ability to engage in goal-oriented conversations has allowed humans to gain knowledge, reduce uncertainty, and perform tasks more efficiently. Artificial agents, however, are still far behind humans in having goaldriven conversations. In this work, we focus on the task of goal-oriented visual dialogue, aiming to automatically generate a series of questions about an image with a single objective. This task is challenging, since these questions must not only be consistent with a strategy to achieve a goal, but also consider the contextual information in the image. We propose an end-to-end goal-oriented visual dialogue system, that combines reinforcement learning with regularized information gain. Unlike previous approaches that have been proposed for the task, our work is motivated by the Rational Speech Act framework, which models the process of human inquiry to reach a goal. We test the two versions of our model on the GuessWhat?! dataset, obtaining significant results that outperform the current state-of-the-art models in the task of generating questions to find an undisclosed object in an image. 1 Introduction Building natural language models that are able to converse towards a specific goal is an active area of research that has attracted a lot of attention in recent years. These models are vital for efficient human-machine collaboration, such as when interacting with personal assistants. In this paper, we focus on the task of goal-oriented visual dialogue, which requires an agent to engage in conversations about an image with a predefined objective. The task presents some unique challenges. Firstly, the conversations should be consistent with the goals of the agent. Secondly, the conversations between two agents must be coherent with the common viFigure 1: An example of goal-oriented visual dialogue for finding an undisclosed object in an image through a series of questions. On the left, we ask a human to guess the unknown object in the image. On the right, we use the baseline model proposed by Strub et al. (Strub et al., 2017). While the human is able to narrow down the search space relatively faster, the artificial agent is not able to adopt a clear strategy for guessing the object. sual feedback. Finally, the agents should come up with a strategy to achieve the objective in the shortest possible way. This is different from a normal dialogue system where there is no constraint on the length of a conversation. Inspired by the success of Deep Reinforcement Learning, many recent works have also used it for building models for goal-oriented visual dialogue (Bordes et al., 2017). The choice makes sense, as reinforcement learning is well suited for tasks that require a set of actions to reach a goal. However, the performance of these models have been sub-optimal when compared to the average human performance on the same task. For example, consider the two conversations shown in Figure 1. 
The figure draws a comparison between possible ques6443 tions asked by humans and an autonomous agent proposed by Strub et al. (Strub et al., 2017) to locate an undisclosed object in the image. While humans tend to adopt strategies to narrow down the search space, bringing them closer to the goal, it is not clear whether an artificial agent is capable of learning a similar behavior only by looking at a set of examples. This leads us to pose two questions: What strategies do humans adopt while coming up with a series of questions with respect to a goal?; and Can these strategies be used to build models that are suited for goal-oriented visual dialogue? With this challenge in mind, we directed our attention to contemporary works in the field of cognitive science, linguistics and psychology for modelling human inquiry (Groenendijk et al., 1984; Nelson, 2005; Van Rooy, 2003). More specifically, our focus lies on how humans come up with a series of questions in order to reach a particular goal. One popular theory suggests that humans try to maximize the expected regularized information gain while asking questions (Hawkins et al., 2015; Coenen et al., 2017). Motivated by that, we evaluate the utility of using information gain for goal-oriented visual question generation with a reinforcement learning paradigm. In this paper, we propose two different approaches for training an end-to-end architecture: first, a novel reward function that is a trade-off between the expected information gain of a question and the cost of asking it; and second, a loss function that uses regularized information gain with a step-based reward function. Our architecture is able to generate goal-oriented questions without using any prior templates. Our experiments are performed on the GuessWhat?! dataset (De Vries et al., 2017), a standard dataset for goal-oriented visual dialogue that focuses on identifying an undisclosed object in the image through a series of questions. Thus, our contribution is threefold: • An end-to-end architecture for goal-oriented visual dialogue combining Information Gain with Reinforcement Learning. • A novel reward function for goal-oriented visual question generation to model long-term dependencies in dialogue. • Both versions of our model outperform the current baselines on the GuessWhat?! dataset for the task of identifying an undisclosed object in an image by asking a series of questions. 2 Related Work 2.1 Models for Human Inquiry There have been several works in the area of cognitive science that focus on models for question generation. Groenendijk et al. (Groenendijk et al., 1984) proposed a theory stating that meaningful questions are propositions conditioned by the quality of its answers. Van Rooy (Van Rooy, 2003) suggested that the value of a question is proportional to the questioner’s interest and the answer that is likely to be provided. Many recent related models take into consideration the optimal experimental design (OED) (Nelson, 2005; Gureckis and Markant, 2012), which considers that humans perform intuitive experiments to gain information, while others resort to Bayesian inference. Coenen et al. (Coenen et al., 2017), for instance, came up with nine important questions about human inquiry, while one recent model called Rational Speech Act (RSA) (Hawkins et al., 2015) considers questions as a distribution that is proportional to the trade-off between the expected information gain and the cost of asking a question. 
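As a concrete reading of the RSA account sketched above, the snippet below scores candidate questions by exponentiating the difference between their expected information gain and their cost, and normalizes the result into a distribution. The candidate questions and the gain and cost estimates are placeholder assumptions; this illustrates only the trade-off, not the implementation of Hawkins et al. (2015) or of the model proposed later in this paper.

```python
import math

def question_distribution(questions, expected_info_gain, cost):
    """P(q | g) proportional to exp(EIG(q) - C(q)) over a candidate set."""
    scores = [math.exp(expected_info_gain(q) - cost(q)) for q in questions]
    total = sum(scores)
    return [s / total for s in scores]

# Toy usage with hypothetical estimates for three candidate questions;
# the cost here simply grows with question length.
candidates = ["is it a person?", "is it red?",
              "is it the red thing to the left of the person?"]
eig = {q: g for q, g in zip(candidates, [0.9, 0.5, 1.0])}.get
cst = {q: 0.1 * len(q.split()) for q in candidates}.get
print(question_distribution(candidates, eig, cst))
```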
2.2 Dialogue Generation and Visual Dialogue Dialogue generation is an important research topic in NLP, thus many approaches have been proposed to address this task. Most earlier works made use of a predefined template (Lemon et al., 2006; Wang and Lemon, 2013) to generate dialogues. More recently, deep neural networks have been used for building end-to-end architectures capable of generating questions (Vinyals and Le, 2015; Sordoni et al., 2015) and also for the task of goal-oriented dialogue generation (Rajendran et al., 2018; Bordes et al., 2017). Visual dialogue focuses on having a conversation about an image with either one or both of the agents being a machine. Since its inception (Das et al., 2017), different approaches have been proposed to address this problem (Massiceti et al., 2018; Lu et al., 2017; Das et al., 2017). Goaloriented Visual Dialogue, on the other hand, is an area that has only been introduced fairly recently. De Vries et al. (De Vries et al., 2017) proposed the GuessWhat?! dataset for goal-oriented visual dialogue while Strub et al. (Strub et al., 2017) developed a reinforcement learning approach for 6444 Figure 2: A block diagram of our model. The framework is trained on top of three individual models: the questioner (QGen), the guesser, and the oracle. The guesser returns an object distribution given a history of question-answer pairs that are generated by the questioner and the oracle respectively. These distributions are used for calculating the information gain of the question-answer pair. The information gain and distribution of probabilities given by the Guesser are used either as a reward or optimized as a loss function with global rewards for training the questioner. goal-oriented visual question generation. More recently, Zhang et al. (Zhang et al., 2018) used intermediate rewards for training a model on this task. 2.3 Sampling Questions with Information Gain Information gain has been used before to build question-asking agents, but most of these models resort to it to sample questions. Rothe et al. (Rothe et al., 2017) proposed a model that generates questions in a Battleship game scenario. Their model uses Expected Information Gain to come up with questions akin to what humans would ask. Lee et al. (Lee et al., 2018) used information gain alone to sample goal-oriented questions on the GuessWhat?! task in a non-generative fashion. The most similar work to ours was proposed by Lipton et al. (Lipton et al., 2017), who used information gain and Q-learning to generate goal-oriented questions for movie recommendations. However, they generated questions using a template-based question generator. 3 The GuessWhat?! framework We built our model based on the GuessWhat?! framework (De Vries et al., 2017). GuessWhat?! is a two-player game in which both players are given access to an image containing multiple objects. One of the players – the oracle – chooses an object in the image. The goal of the other player – the questioner – is to identify this object by asking a series of questions to the oracle, who can only give three possible answers: ”yes,” ”no,” or ”not applicable.” Once enough evidence is collected, the questioner has to choose the correct object from a set of possibilities – which, in the case of an artificial agent, are evaluated by a guesser module. If this final guess is correct, the questioner is declared the winner. The GuessWhat?! dataset comprises 155,280 games on 66,537 images from the MS-COCO dataset, with 831,889 question-answer pairs. 
The dataset has 134,074 unique objects and 4,900 words in the vocabulary. A game is comprised of an image I with height H and width W, a dialogue D = {(q1, a1), (q2, a3), ...(qn, an)}, where qj ∈Q denotes a question from a list of questions and aj ∈ A denotes an answer from a list of answers, which can either be ⟨yes⟩, ⟨no⟩or ⟨N/A⟩. The total number of objects in the image is denoted by O and the target is denoted by o∗. The term V indicates the vocabulary that comprises all the words that are employed to train the question generation module (QGen). Each question can be represented by q = {wi}, where wi denotes the ith word in the vocabulary. The set of segmentation masks of objects is denoted by S. These notations are similar to those of Strub et al. (Strub et al., 2017). An example of a game can be seen in Figure 1, where the questioner generates a series of questions to guess the undisclosed object. In the end, the guesser tries to predict the object with the image and the given 6445 set of question-answer pairs. 3.1 Learning Environment We now describe the preliminary models for the questioner, the guesser, and the oracle. Before using them for the GuessWhat?! task, we pre-train all three models in a supervised manner. During the final training of the Guesswhat?! task our focus is on building a new model for the questioner and we use the existing pre-trained models for the oracle and the guesser. 3.1.1 The Questioner The questioner’s job is to generate a new question qj+1 given the previous j question-answer pairs and the image I. Our model has a similar architecture to the VQG model proposed by Strub et al. (Strub et al., 2017). It consists of an LSTM whose inputs are the representations of the corresponding image I and the input sequence corresponds to the previous dialogue history. The representations of the image are extracted from the fc-8 layer of the VGG16 network (Simonyan and Zisserman, 2014). The output of the LSTM is a probability distribution over all words in the vocabulary. The questioner is trained in a supervised fashion by minimizing the following negative loglikelihood loss function: Lques = −logpq(q1:J|I, a1:J) = − J X j=1 Ij X i=1 logpq(wj i |wj 1:i−1, (q, a)1:j−1, I) (1) Samples are generated in the following manner during testing: given an initial state s0 and new token wj 0, a word is sampled from the vocabulary. The sampled word along with the previous state is given as the input to the next state of the LSTM. The process is repeated until the output of the LSTM is the ⟨end⟩token. 3.1.2 The Oracle The job of the oracle is to come up with an answer to each question that is posed. In our case, the three possible outcomes are ⟨yes⟩, ⟨no⟩, or ⟨N/A⟩. The architecture of the oracle model is similar to the one proposed by De Vries et al. (De Vries et al., 2017). The input to the oracle is an image, a category vector, and the question that is encoded using an LSTM. The model then returns a distribution over the possible set of answers. 3.1.3 The Guesser The job of the guesser is to return a distribution of probabilities over all set of objects given the input image and the dialogue history. We convert the entire dialogue history into a single encoded vector using an LSTM. All objects are embedded into vectors, and the dot product of these embeddings are performed with the encoded vector containing the dialogue history. The dot product is then passed through an MLP layer that returns the distribution over all objects. 
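The guesser described above can be summarized in a short PyTorch sketch: the dialogue history is encoded with an LSTM, each candidate object is embedded from its category and spatial features, and a dot product followed by a softmax yields the distribution over objects. This is a simplified reading of the description, not the configuration used in the paper; the layer sizes and the exact object featurization are assumptions.

```python
import torch
import torch.nn as nn

class Guesser(nn.Module):
    """Return a distribution over candidate objects given the dialogue history."""
    def __init__(self, vocab_size, n_categories, emb=512, hidden=512, obj_feat=8):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        self.dialogue_rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.cat_emb = nn.Embedding(n_categories, hidden - obj_feat)
        self.obj_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))

    def forward(self, dialogue_tokens, obj_categories, obj_spatial):
        # Encode the question-answer history into a single vector.
        _, (h, _) = self.dialogue_rnn(self.word_emb(dialogue_tokens))
        dialogue = h[-1]                              # (batch, hidden)
        # Embed each candidate object from its category and spatial features.
        objs = torch.cat([self.cat_emb(obj_categories), obj_spatial], dim=-1)
        objs = self.obj_mlp(objs)                     # (batch, n_objects, hidden)
        # Dot product with the dialogue vector, then softmax over objects.
        scores = torch.bmm(objs, dialogue.unsqueeze(-1)).squeeze(-1)
        return torch.softmax(scores, dim=-1)
```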
4 Regularized Information Gain The motivation behind using Regularized Information Gain (RIG) for goal-oriented questionasking comes from the Rational Speech Act Model (RSA) (Hawkins et al., 2015). RSA tries to mathematically model the process of human questioning and answering. According to this model, when selecting a question from a set of questions, the questioner considers a goal g ∈G with respect to the world state G and returns a probability distribution of questions such that: P(q|g) ∝eDKL( ∧p(q|g)|| ∼p(q|g))−C(q) (2) where P(q|g) represents probability of selecting a question q from a set of questions Q. The probability is directly proportional to the trade-off between the cost of asking a question C(q) and the expected information gain DKL( ∼p(q|g)|| ∧p(q|g)). The cost may depend on several factors such as the length of the question, the similarity with previously asked questions, or the number of questions that may have been asked before. The information gain is defined as the KL divergence between the prior distribution of the world with respect to the goal, ∼p(q|g), and the posterior distribution that the questioner would expect after asking a question, ∧p(q|g). Similar to Equation 2, in our model we make use of the trade-off between expected information gain and the cost of asking a question for goal-oriented question generation. Since the cost term regularizes the expected information gain, we denote this trade-off as Regularized Information Gain. For a given question q, the Regularized Information Gain is given as: RIG(q) = τ(q) −C(q)) (3) 6446 where τ(q) is the expected information gain associated with asking a question and C(q) is the cost of asking a question q ∈Q in a given game. Thus, the information gain is measured as the KL divergence between the prior and posterior likelihoods of the scene objects before and after a certain question is made, weighted by a skewness coefficient β(q) over the same posterior. τ(q) = DKL( ∼p(qj|I, (q, a)1:j−1)|| ∧p(qj|I, (q, a)1:j−1))β(q) (4) The prior distribution before the start of the game is assumed to be 1 N , where N is the total number of objects in the game. After a question is asked, the prior distribution is updated and it is equal to the output distribution of the guesser: ∼p(qj|I, (q, a)1:j−1 = ( pguess(I, (q, a)1:j−1), if i ≥1 1 N , if i = 0 (5) We define the posterior to be the output of the guesser once the answer has been given by the oracle: ∧p(qj|I, (q, a))1:j−1) = A X a∈A pguess(qj|I, (q, a)1:j−1) (6) The idea behind using skewness is to reward questions that lead to a more skewed distribution at each round. The implication is that a smaller group of objects with higher probabilities lowers the chances of making a wrong guess by the end of the game. Additionally, the measure of skewness also works as a counterweight to certain scenarios where KL divergence itself should not reward the outcome of a question, such as when there is a significant information gain from a previous state but the distribution of likely objects, according to the guesser, becomes mostly homogeneous after the question. Since we assume that initially all objects are equally likely to be the target, the skewness approach is only applied after the first question. We use the posterior distribution provided by the guesser to extract the Pearson’s second skewness coefficient (i.e., the median skewness) and create the β component. 
Therefore, assuming a sample mean µ, median m, and standard deviation σ, the skewness coefficient is simply given by: β(q) = 3(µ −m) σ (7) Some questions might have a high information gain, but at a considerable cost. The term C(q) acts as a regularizing component to information gain and controls what sort of questions should be asked by the questioner. The cost of asking a question can be defined in many ways and may differ from one scenario to another. In our case, we are only considering whether a question is being asked more than once, since a repeated question cannot provide any new evidence that will help get closer to the target, despite a high information gain from one state to another during a complete dialogue. The cost for a repeated question is defined as: C(q) = ( τ(q), if qj ∈{qj−1, ..., q1} 0, otherwise (8) The cost for a question is equal to the negative information gain. This sets the value of an intermediate reward to 0 for a repeated question, ensuring that the net RIG is zero when the question is repeated. 5 Our Model We view the task of generating goal-oriented visual questions as a Markov Decision Process (MDP), and we optimize it using the Policy Gradient algorithm. In this section, we describe some of the basic terminology employed in our model before moving into the specific aspects of it. At any time instance t, the state of the agent can be written as ut = ((wj 1, ..., wj m), (q, a)1:j−1, I), where I is the image of interest, (q, a)1:j−1 is the question-answer history, and (wj 1, ..., wj m) is the previously generated sequence of words for the current question qj. The action vt denotes the selection of the next output token from all the tokens in the vocabulary. All actions can lead to one of the following outcomes: 1. The selected token is ⟨stop⟩, marking the end of the dialogue. This shows that it is now the turn of the guesser to make the guess. 2. The selected token is ⟨end⟩, marking the end of a question. 3. The selected token is another word from the vocabulary. The word is then appended to the current sequence (wj 1, ..., wj m). This marks the start of the next state. Our approach models the task of goal-oriented questioning as an optimal stochastic policy πθ(v|u) over the possible set of state-action pairs. 6447 Algorithm 1 Training the question generator using REINFORCE with the proposed rewards Require: Pretrained QGen, Oracle and Guesser Require: Batch size K 1: for Each update do 2: for k = 1 to K do 3: Pick image Ik and the target object o∗ k ∈Ok 4: N ←|Ok| 5: ∼p(ok1:N ) ← 1 N 6: for j ←1 to Jmax do 7: qk j ←QGen((q, a)k 1:j−1, Ik) 8: ak j ←Oracle(qk j , o∗ k, Ik) 9: ∧p(ok1:N ) ←Guesser((q, a)k 1:j, Ik, Ok) 10: β(qk j ) ←Skewness( ∧p(ok1:N )) 11: τ(qk j ) ←DKL( ∼p(ok1:N )|| ∧p(ok1:N ))β(qk j ) 12: C(qk j ) ← ( τ(qk j ) if qj ∈{qj−1, ..., q1} 0 Otherwise 13: R ←PJmax j=1 τ(qk j ) −C(qk j ) 14: p(ok|·) ←Guesser((q, a)k 1:j, Ik, Ok) 15: r(ut, vt) ← ( R If argmax p(ok|·) = o∗ k 0 Otherwise 16: Define Th ←((q, a)k 1:jk, Ik, rk)1:K 17: Evaluate ∇J(θh) with Eq.13 with Th 18: SGD update of QGen parameters θ using ∇J(θh) 19: Evaluate ∇L(φh) with Eq.15 with Th 20: SGD update of baseline parameters using ∇L(φh) Here θ represents the parameters present in our architecture for question generation. In this work, we experiment with two different settings to train our model with Regularized Information Gain and policy gradients. In the first setting, we use Regularized Information Gain as an additional term in the loss function of the questioner. 
We then train it using policy gradients with a 0-1 reward function. In the second setting, we use Regularized Information Gain to reward our model. Both methods are described below. 5.1 Regularized Information Gain loss minimization with 0-1 rewards During the training of the GuessWhat?! game we introduce Regularized Information Gain as an additional term in the loss function. The goal is to minimize the negative log-likelihood and maximize the Regularized Information Gain. The loss function for the questioner is given by: L(θ) = −logpq(q1:J|I, a1:J) + τ(q) −C(q) = − J X j=1 Ij X i=1 logpq(wj i |wj 1:i−1, (q, a)1:j−1, I) +DKL( ∧p(qj|I, (q, a))|| ∼p(qj|I, (q, a)))β(q) (9) We adopt a reinforcement learning paradigm on top of the proposed loss function. We use a zeroone reward function similar to Strub et al. (Strub et al., 2017) for training our model. The reward function is given as: r(ut, vt) = ( 1, if argmaxpguess = o∗ 0, otherwise (10) Thus, we give a reward of 1 if the guesser is able to guess the right object and 0 otherwise. 5.2 Using Regularized Information Gain as a reward Defining a valuable reward function is a crucial aspect for any Reinforcement Learning problem. There are several factors that should be considered while designing a good reward function for asking goal-oriented questions. First, the reward function should help the questioner achieve its goal. Second, the reward function should optimize the search space, allowing the questioner to come up with relevant questions. The idea behind using regularized information gain as a reward function is to take into account the long term dependencies in dialogue. Regularized information gain as a reward function can help the questioner to come up with an efficient strategy to narrow down a large search space. The reward function is given by: r(ut, vt) = (P|Q| j=1(τ(qj) −C(qj)), if argmax pguess = o∗ 0, otherwise (11) Thus, the reward function is the sum of the tradeoff between the information gain τ(q) and the cost of asking a question C(q) for all questions Q in a given game. Our function only rewards the agent if it is able to correctly predict the oracle’s initial choice. 5.3 Policy Gradients Once the reward function is defined, we train our model using the policy gradient algorithm. For a given policy πθ, the objective function of the policy gradient is given by: J(θ) = Eπθ " T X t=1 r(ut, vt) # (12) According to Sutton et al. (Sutton et al., 2000), the gradient of J(θ) can be written as: ∇J(θ) ≈ * T X t=1 X vt∈V ∇θlogπθ(ut, vt)(Qπθ(ut, vt)−bφ) + (13) 6448 New Image New Object Approach Greedy Beam Sampling Best Greedy Beam Sampling Best Baseline.(Strub et al., 2017) 46.9% 53.0% 45.0% 53.0% 53.4% 46.4% 46.44% 53.4% Strub et al. (Strub et al., 2017) 58.6% 54.3% 63.2% 63.2% 57.5% 53.2% 62.0% 62.0% Zhang et al. (Zhang et al., 2018) 56.1% 54.9% 55.6% 55.6% 56.51% 56.53% 49.2% 56.53% TPG1(ZhaoandTresp, 2018) 62.6% GDSE-C (Venkatesh et al., 2018) 60.7% 63.3% ISM1(Abbasnejadet al., 2018) 62.1% 64.2% RIG as rewards 59.0% 60.21% 64.06% 64.06% 63.00% 63.08% 65.20% 65.20% RIG as a loss with 0-1 rewards 61.18% 59.79% 65.79% 65.79% 63.19% 62.57% 67.19% 67.19% Table 1: A comparison of the recognition accuracy of our model with the state of the art model (Strub et al., 2017) and other concurrent models on the GuessWhat?! task for guessing an object in the images from the test set. 
where Qπθ(ut, vt) is the state value function given by the sum of the expected cumulative rewards: Qπθ(ut, vt) = Eπθ " T X t′=t r(ut, vt) # (14) Here bφ is the baseline function used for reducing the variance. The baseline function is a single-layered MLP that is trained by minimizing the squared loss error function given by: min Lφ = D [bφ −r(ut, vt)]2E (15) 6 Results The model was trained under the same settings of (Strub et al., 2017). This was done in order to obtain a more reliable comparison with the preexisting models in terms of accuracy. After a supervised training of the question generator, we ran our reinforcement procedure using the policy gradient for 100 epochs on a batch size of 64 with a learning rate of 0.001. The maximum number of questions was 8. The baseline model, the oracle, and the guesser were also trained with the same settings described by (De Vries et al., 2017), in order to compare the performance of the two reward functions. The error obtained by the guesser and the oracle were 35.8% and 21.1%, respectively. 1 Table 1 shows our primary results along with the baseline model trained on the standard crossentropy loss for the task of guessing a new object in the test dataset. We compare our model with the one presented by (Strub et al., 2017) and other concurrent approaches. Table 1 also compares our model with others when objects are sampled using a uniform distribution (right column). 1In order to have a fair comparison, the results reported for TPG (Zhao and Tresp, 2018) and (Abbasnejad et al., 2018) only take into consideration the performance of the question generator. We do not report the scores that were generated after employing memory network to the guesser. 6.1 Ablation Study We performed an ablation analysis over RIG in order to identify its main learning components. The results of the experiments with the reward function based on RIG are presented in Table 2, whereas Table 3 compares the different components of RIG when used as a loss function. The results mentioned under New Images refer to images in the test set, while the results shown under New Objects refer to the analysis made on the training dataset with different undisclosed objects from the ones used during training time. For the first set of experiments, we compared the performance of information gain vs. RIG with the skewness coefficient for goal-oriented visual question generation. It is possible to observe that RIG is able to achieve an absolute improvement of 10.57% over information gain when used as a reward function and a maximum absolute improvement of 2.8% when it is optimized in the loss function. Adding the skewness term results in a maximum absolute improvement of 0.9% for the first case and an improvement of 2.3% for the second case. Furthermore, we compared the performance of the model when trained using RIG but without policy gradients. The model then achieves an improvement of 10.35% when information gain is used as a loss function. 6.2 Qualitative Analysis In order to further analyze the performance of our model, we assess it in terms repetitive questions, since they compromise the framework’s efficiency. We compare our model with the one proposed by (Strub et al., 2017) and calculate the average number of repetitive questions generated for each dialogue. The model by Strub et al. achieved a score of 0.82, whereas ours scored 0.36 repeated questions per dialogue and 0.27 using RIG as a reward function. 
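To make the quantity behind these rewards concrete, the snippet below computes the regularized information gain of Equations (3)–(8) for a single question from the guesser's prior and posterior distributions over objects. It is a simplified reading of those equations, not the training code used in the experiments: zeroing the gain for a repeated question follows Equation (8), while the epsilon smoothing and the toy numbers are assumptions.

```python
import numpy as np

def skewness(p):
    """Pearson's second skewness coefficient of a distribution, as in Eq. (7)."""
    mu, med, sigma = p.mean(), np.median(p), p.std()
    return 3.0 * (mu - med) / (sigma + 1e-12)

def regularized_information_gain(prior, posterior, question, history):
    """RIG(q) = tau(q) - C(q): KL(prior || posterior) weighted by the skewness
    of the posterior, with the cost cancelling the gain for repeated questions."""
    prior = np.asarray(prior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    kl = np.sum(prior * np.log((prior + 1e-12) / (posterior + 1e-12)))
    tau = kl * skewness(posterior)
    cost = tau if question in history else 0.0
    return tau - cost

# Toy example: 4 candidate objects, uniform prior, sharper posterior after an answer.
prior = np.full(4, 0.25)
posterior = np.array([0.7, 0.1, 0.1, 0.1])
print(regularized_information_gain(prior, posterior, "is it a person?", []))
```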
6449 Figure 3: A qualitative comparison of our model with the model proposed by Strub et al. (Strub et al., 2017). Rewards New New Images Objects I.G. (greedy) 51.6% 52.4% I.G. + skewness (greedy) 57.5% 62.4% R.I.G. (greedy) 58.8% 63.03% Table 2: An ablation analysis using Regularized Information Gain as a reward on the GuessWhat?! dataset. Approach New New Images Objects I.G. as a loss function 51.2% 52.8% with no rewards I.G. as a loss function 57.3% 61.9% with 0-1 rewards (greedy) I.G. + skewness as a loss function 59.47% 62.44% with 0-1 rewards (greedy) R.I.G. as a loss function 60.18% 63.15% with 0-1 rewards (greedy) Table 3: An ablation analysis of using Regularized Information Gain as a loss function with 0-1 rewards. The figures presented in the table indicate the accuracy of the model on the GuessWhat?! dataset. 7 Discussion Our model was able to achieve an accuracy of 67.19% for the task of asking goal-oriented questions on the GuessWhat?! dataset. This result is the highest obtained so far among existing approaches on this problem, albeit still far from human-level performance on the same task, reportedly of 84.4%. Our gains can be explained in part by how RIG with the skewness component for goal-oriented VQG constrains the process of generating relevant questions and, at the same time, allows the agent to reduce the search space significantly, similarly to decision trees and reinforcement learning, but in a very challenging scenario, since the search space in generative models can be significantly large. Our qualitative results also demonstrate that our approach is able to display certain levels of strategic behavior and mutual consistency between questions in this scenario, as shown in Figure 3. The same cannot be said about previous approaches, as the majority of them fail to avoid redundant or other sorts of expendable questions. We argue that our cost function and the skewness coefficient both play an important role here, as the former penalizes synonymic questions and the latter narrows down the set of optimal questions. Our ablation analysis showed that information gain alone is not the determinant factor that leads to improved learning, as hypothesized by Lee et al. (Lee et al., 2018). However, Regularized Information Gain does have a significant effect, which indicates that a set of constraints, especially regarding the cost of making a question, cannot be taken lightly in the context of goal-oriented VQG. 8 Conclusion In this paper we propose a model for goal-oriented visual question generation using two different approaches that leverage information gain with reinforcement learning. Our algorithm achieves improved accuracy and qualitative results in comparison to existing state-of-the-art models on the GuessWhat?! dataset. We also discuss the innovative aspects of our model and how performance could be increased. Our results indicate that RIG is a more promising approach to build betterperforming agents capable of displaying strategy and coherence in an end-to-end architecture for Visual Dialogue. Acknowledgments We acknowledge partial support of this work by the S˜ao Paulo Research Foundation (FAPESP), grant 2015/26802-1. 6450 References Ehsan Abbasnejad, Qi Wu, Iman Abbasnejad, Javen Shi, and Anton van den Hengel. 2018. An active information seeking model for goaloriented vision-and-language tasks. arXiv preprint arXiv:1812.06398. Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. 
International Conference on Learning Representations. Anna Coenen, Jonathan D Nelson, and Todd M Gureckis. 2017. Asking the right questions about human inquiry. OpenCoenen, Anna, Jonathan D Nelson, and Todd M Gureckis.Asking the Right Questions About Human Inquiry. PsyArXiv, 13. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2. Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In CVPR, volume 1, page 3. Jeroen AG Groenendijk, MJB Stokhof, et al. 1984. On the semantics of questions and the pragmatics of answers. DordrechtForis90676500809789067650083. Todd M Gureckis and Douglas B Markant. 2012. Selfdirected learning: A cognitive and computational perspective. Perspectives on Psychological Science, 7(5):464–481. Robert XD Hawkins, Andreas Stuhlm¨uller, Judith Degen, and Noah D Goodman. 2015. Why do you ask? good questions provoke informative answers. In CogSci. Citeseer. Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang. 2018. Answerer in questioner’s mind for goaloriented visual dialogue. NeurIPS. Oliver Lemon, Kallirroi Georgila, James Henderson, and Matthew Stuttle. 2006. An isu dialogue system exhibiting reinforcement learning of dialogue policies: generic slot-filling in the talk in-car system. In Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics: Posters & Demonstrations, pages 119– 122. Association for Computational Linguistics. Zachary Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed, and Li Deng. 2017. Bbq-networks: Efficient exploration in deep reinforcement learning for task-oriented dialogue systems. AAAI. Jiasen Lu, Anitha Kannan, Jianwei Yang, Devi Parikh, and Dhruv Batra. 2017. Best of both worlds: Transferring knowledge from discriminative learning to a generative visual dialog model. In Advances in Neural Information Processing Systems, pages 313–323. Daniela Massiceti, N Siddharth, Puneet K Dokania, and Philip HS Torr. 2018. Flipdial: A generative model for two-way visual dialogue. image (referred to as captioning), 13:15. Jonathan D Nelson. 2005. Finding useful questions: on bayesian diagnosticity, probability, impact, and information gain. Psychological review, 112(4):979. Janarthanan Rajendran, Jatin Ganhotra, Satinder Singh, and Lazaros Polymenakos. 2018. Learning endto-end goal-oriented dialog with multiple answers. EMNLP. Anselm Rothe, Brenden M Lake, and Todd Gureckis. 2017. Question asking as program generation. In Advances in Neural Information Processing Systems, pages 1046–1055. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CVPR. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714. Florian Strub, Harm De Vries, Jeremie Mary, Bilal Piot, Aaron Courville, and Olivier Pietquin. 2017. End-to-end optimization of goal-driven and visually grounded dialogue systems. International Joint Conference on Artificial Intelligence. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. 
Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063. Robert Van Rooy. 2003. Questioning to resolve decision problems. Linguistics and Philosophy, 26(6):727–763. Aashish Venkatesh, Ravi Shekhar, Tim Baumg¨artner, Elia Bruni, Raffaella Bernardi, and Raquel Fern´andez. 2018. Jointly learning to see, ask, and guesswhat. NAACL. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. ICML Deep Learning Workshop 2015. Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference, pages 423–432. Junjie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, and Anton Van Den Hengel. 2018. Goaloriented visual question generation via intermediate rewards. In European Conference on Computer Vision, pages 189–204. Springer. 6451 Rui Zhao and Volker Tresp. 2018. Learning goaloriented visual dialog via tempered policy gradient. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 868–875. IEEE.
2019
646
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6452–6462 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6452 Symbolic inductive bias for visually grounded learning of spoken language Grzegorz Chrupała Tilburg University [email protected] Abstract A widespread approach to processing spoken language is to first automatically transcribe it into text. An alternative is to use an end-to-end approach: recent works have proposed to learn semantic embeddings of spoken language from images with spoken captions, without an intermediate transcription step. We propose to use multitask learning to exploit existing transcribed speech within the end-to-end setting. We describe a three-task architecture which combines the objectives of matching spoken captions with corresponding images, speech with text, and text with images. We show that the addition of the SPEECH/TEXT task leads to substantial performance improvements on image retrieval when compared to training the SPEECH/IMAGE task in isolation. We conjecture that this is due to a strong inductive bias transcribed speech provides to the model, and offer supporting evidence for this. 1 Introduction Understanding spoken language is one of the key capabilities of intelligent systems which need to interact with humans. Applications include personal assistants, search engines, vehicle navigation systems and many others. The standard approach to understanding spoken language both in industry and in research has been to decompose the problem into two components arranged in a pipeline: Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU). The audio signal representing a spoken utterance is first transcribed into written text, which is subsequently processed to extract some semantic representation of the utterance. Recent works have proposed to learn semantic embeddings of spoken language by using photographic images of everyday situations matched with their spoken captions, without an intermediate transcription step (Harwath et al., 2016; Chrupała et al., 2017). The weak and noisy supervision in these approaches is closer to how humans learn to understand speech by grounding it in perception and thus more useful as a cognitive model. It can also have some practical advantages: in certain circumstances it may be easier to find or collect speech associated with images rather than transcribed speech – for example when dealing with language whose speakers are illiterate, or for languages with no standard writing system (note that even some languages with many millions of speakers, like Cantonese, may not have a standardized writing system). On the other hand, the learning problem in this type of framework is less constrained, and harder, than standard ASR. In order to alleviate this shortcoming, we propose to use multitask learning (MTL) and exploit transcribed speech within the end-to-end visuallygrounded setting, and thus combine some features of both the pipeline and end-to-end approaches. Incorporating speech transcriptions into the endto-end architecture via multi-task learning measn that the amount of transcribed speech and its quality do not need to be as high as needed for training an ASR system within the pipeline architecture, since the role of this data is only to guide the endto-end model via an auxiliary task. 
We describe a three-task architecture which combines the main objective of matching speech with images with two auxiliary objectives: matching speech with text, and matching text with images. The plain end-to-end SPEECH/IMAGE matching task, modeled via standard architectures such as recurrent neural networks, lacks a languagespecific learning bias. This type of model may discover in the course of learning that speech can be represented as a sequence of symbols (such as for example phonemes or graphemes), but it is in no way predisposed to make this discovery. Hu6453 man learners may be more efficient at least in part thanks to their innate inductive bias whereby they assume that language is symbolic. They arguably acquired such bias via the process of evolution by natural selection. In the context of machine learning, inductive bias can instead be injected via multi-task learning, where supervision from the secondary task guides the model towards appropriately biased representations. Specifically, our motivation for the SPEECH/TEXT task is to encourage the model to learn speech representations which are correlated with the encoding of spoken language as a sequence of characters. Additionally, and for completeness, we also consider a second auxiliary task matching text to images. Our contribution consists in formulating and answering the following questions: • Do the auxiliary tasks improve the main SPEECH/IMAGE task? The SPEECH/TEXT task helps but we have no evidence of the TEXT/IMAGE task improving performance. • If so, is this mainly because MTL allows us to exploit extra data, or because the additional task injects an appropriate inductive bias into the model? The inductive bias is key to the performance gains of MTL, while extra data makes no impact. • Which parameters should be shared between tasks and which should be task specific? Best performance is achieved by sharing only the lower layers of the speech encoder. • What are the specific effects of the symbolic inductive bias on the learned representations? SPEECH/TEXT contributes to make the encoded speech more speaker invariant, and more strongly correlated to the written or phonetically represented form of the utterances. 2 Related work 2.1 Visually grounded semantic embeddings of spoken language The most relevant strand of related work is on visually-grounded learning of (spoken) language. It dates back at least to Roy and Pentland (2002), but has recently attracted further interest due to better-performing modeling tools based on neural networks. Harwath and Glass (2015) collect spoken descriptions for the Flick8K captioned image dataset and present a model which is able to map presegmented spoken words to aspects of visual context. Harwath et al. (2016) describe a larger dataset of images paired with spoken captions (Places Audio Caption Corpus) and present an architecture that learns to project images and unsegmented spoken captions to the same embedding space. The sentence representation is obtained by feeding the spectrogram to a convolutional network. Further elaborations on this setting include Harwath and Glass (2017), which shows a clustering-based method to identify grounded words in the speech-image pairs, and Harwath et al. (2018b) which constructs a three-dimensional tensor encoding affinities between image regions and speech segments. The work of Chrupała et al. 
(2017) is similar in that it exploits datasets of images with spoken captions, but their grounded speech model is based around multi-layer Recurrent Highway Networks, and focuses on quantitative analyses of the learned representations. They show that the encoding of meaning tends to become richer in higher layers, whereas encoding of form tends to initially increase and then stay constant or decrease. Alishahi et al. (2017a) further analyze the representations of the same model and show that phonological form is reliably encoded in the lower recurrent layers of the network but becomes substantially attenuated in the higher layers. Drexler and Glass (2017) also analyze the representations of a visually grounded speech model with view of using such representations for unsupervised speech recognition, and show that they contain more linguistic and less speaker information than filterbank features. Kamper et al. (2017) use images as a pivot to learn to associate textual labels with spoken utterances, by mapping utterances and images into joint semantic space. After labeling the images with an object classifier, these labels can be further associated with utterances, providing bag-ofwords representation of spoken language which can be useful in speech retrieval. 2.2 Multi-task learning for speech and language The concept of multi-task learning (MTL) was introduced by Caruana (1997). Neural architectures 6454 widely used in the fields of speech and language processing make it easy to define parametersharing architectures and exploit MTL, and thus there has been a recent spurt of reports on its impact. Within Natural Language Processing (NLP), Luong et al. (2016) explore sharing encoders and decoders in a sequence-to-sequence architecture for translation, syntactic parsing, and image captioning, and show gains on some configurations. Bingel and Søgaard (2017) investigate which particular pairs of NLP tasks lead to gains, concluding that learning curves and label entropy of the tasks may be used as predictors. McCann et al. (2018) propose a 10-task NLP challenge, and a single MTL model which performs reasonably well on all tasks. Søgaard and Goldberg (2016) show that which parameters are shared in a multi-task architecture matters a lot: they find that when sharing parameters between syntactic chunking or supertagging and POS tagging as an auxiliary task, it was consistently better to only share the lower-layers of the model. Relatedly, Hashimoto et al. (2017) propose a method of training NLP tasks at multiple levels of complexity by growing the depth of the model to solve increasingly more difficult tasks. Swayamdipta et al. (2018) use similar ideas and show that syntactic information can be incorporated in a semantic task with MTL, using auxiliary syntactic tasks without building full-fledged syntactic structure at prediction time. MTL can lead to a bewildering number of choices regarding which tasks to combine, which parameters to share and how to schedule and weight the tasks. Some recent works have suggested specific approaches to deal with this complexity: Ruder et al. (2017) propose to learn from data which parameters to share in MTL with sluice networks and show some gains on NLP tasks. Kiperwasser and Ballesteros (2018) investigate how to interleave learning syntax and translation and how to schedule these tasks. Several works show that exploiting MTL via the use of multiple language versions of the same or comparable data leads to performance gains (e.g. 
Lee et al., 2017; Johnson et al., 2017; de Lhoneux et al., 2018). Gella et al. (2017) and Kádár et al. (2018) learn visual semantic embeddings from textual-visual datasets and show gains from additional languages which reuse the same encoder. Kádár et al. (2018) additionally show that an extra objective linking the languages directly rather than only via the visual modality provides additional performance gains. In the context of audio-visual data, Harwath et al. (2018a) applies a type of MTL in the setting where there are images paired with descriptions in English and Hindi. They project the images, English speech and Hindi speech into a joint semantic space, and show that training on multiple tasks matching both languages to images works better compared to only using a single monolingual task. MTL has also recently seen some success in speech processing. Similar to what we see in machine translation, in ASR parameter sharing between different languages is also beneficial (Heigold et al., 2013). More recently, Dalmia et al. (2018) show that exploiting this effect is especially useful for low-resource languages. Seltzer and Droppo (2013) apply MTL for phone recognition with three lower-level auxiliary tasks and show noticeable reductions in error rates. Toshniwal et al. (2017) use MTL for conversational speech recognition with lower-level tasks (e.g. phoneme recognition) in an encoder-decoder model for direct character transcription. Rao and Sak (2017) learn to align utterances with phonetic transcriptions in a lower layer and graphemic transcriptions in the final layer, exploiting again the relation between task level of complexity and levels of neural architecture in a MTL setting. They also show a benefit of sharing model parameters between different varieties of the same language, specifically US, British, Indian and Australian English. McMahan and Rao (2017) demonstrate the effectiveness of transfer from generic audio classification to speech command recognition, which can also be considered a particular instance of MTL. How our work fits in. The current paper uses an intuition also present in several of the works mentioned above: namely that an end-to-end model which needs to induce several levels of intermediate latent representations should be guided to find useful ones by including auxiliary prediction tasks at the intermediate layers. These auxiliary prediction tasks typically use lower-level linguisticallymotivated structures such as phonemes for end-toend ASR, or syntactic trees for semantic parsing. The present study extends this setting to a full speech-to-semantics setup: the main task is to take 6455 spoken language as input and learn a semantic representation based on feedback from the visual modality, while an ASR-like task (SPEECH/TEXT MATCHING) is merely auxiliary. The lower-level linguistic structures in our case are the sequences of phoneme-like units approximated by the written form of the language. 3 Methods 3.1 Models S2T S S2I T T2S T2I Speech/Text Text/Image A bird walks on a beam I2T I I2S Speech/Image Figure 1: Overview of the task architecture. T: shared text encoder, S: shared speech encoder, I: shared image encoder. The notation X2Y stands for an encoder for input type X which is only used for the loss between encoded input types X and Y. The modeling framework uses a multi-task setup. The core model is a three-task architecture depicted in Figure 1: there are three encoders, one for each modality: speech, image, and text. 
Each modality has a shared encoder which works directly on the input modality, and two specialized encoders which take as input the encoded data from the shared encoder. The three tasks correspond to three losses (depicted with circles in the figure): each loss works with a pair of modalities and attempts to minimize the distance between matching encoded items, while maximizing the distance between mismatching ones. For a pair of modalities with encoded objects u and i, the loss is defined as follows (1) X u,i X u′ max[0, α+d(u, i)−d(u′, i)] + X i′ max[0, α + d(u, i) −d(u, i′)] ! where (u, i) are matching objects (for example an utterance and a matching image), and (u′, i) and (u, i′) are mismatched objects within a batch, while d(·, ·) is the cosine distance between encoded objects. The SPEECH/IMAGE part of the architecture is based on the grounded speech model from Chrupała et al. (2017), with the main difference being that these authors used Recurrent Highway Networks (Zilly et al., 2017) for the recurrent layers, while we chose the simpler Gated Recurrent Unit networks (Chung et al., 2014), because they have optimized low-level CUDA support which makes them much faster to run and enables us to carry out an at least somewhat comprehensive set of experiments. 3.2 Image Encoders The shared image encoder I is a pretrained, fixed Convolutional Neural Network which outputs a vector with image features; specifically, the activations of the pre-classification layer. The modalityspecific encoders I2S and I2T are linear mappings which take the output of I. 3.3 Speech Encoders The shared encoder S consists of a 1-dimensional convolutional layer which subsamples the input, followed by a stack of recurrent layers. The modality specific encoders S2T and S2I consist of a stack of recurrent layers, followed by an attention operator. The encoder S is defined as follows: S(x) = GRUℓ(Convs,d,z(x)) (2) where Conv is a convolutional layer with kernel size s, d channels, and stride z, and GRUℓis a stack of ℓGRU layers. An encoder of modality X is defined as S2X(x) = unit(Attn(GRUℓ(x))) (3) where Attn is the attention operator and unit is L2-normalization. Note that for the case ℓ= 0 6456 GRUℓis simply the identity function. The attention operator computes a weighted sum of the RNN activations at all timesteps: Attn(x) = X t αtxt (4) where the weights αt are determined by an MLP with learned parameters U and W, and passed through the timewise softmax function: αt = exp(U tanh(Wxt)) P t′ exp(U tanh(Wxt′)) (5) 3.4 Text Encoders The text encoders are defined in the same way as the speech encoders, with the only difference being that the convolutional layer is replaced by an embedding layer, i.e. a lookup table mapping characters to embedding vectors. 3.5 Multi-tasking The model is trained by alternating between the tasks, and updating the parameters of each task in turn. Note that the input data for the three tasks can be the same, but can also be partly or completely disjoint. We report two conditions • ALIGNED: all tasks use the same parallel data; • NON-ALIGNED: the data for the SPEECH/TEXT task is disjoint from the data for the other two tasks. We consider the NON-ALIGNED condition somewhat more realistic, in that it is easier to find separate datasets for each pair of modalities than it is to to find a single dataset with all three modalities. 
However the main reason to including both conditions is that it allows us to disentangle via which mechanism MTL contributes: by enabling the use of extra data, or by enforcing an inductive bias. 3.6 Architecture variants There is a multitude of ways in which the details of the core architecture can be varied. in order to reduce them to a manageable number we made the following choices: • Keep the image encoder simple and fixed. • Keep the architecture of the encoders fixed, and only vary encoder depth and the degree of sharing. In addition to variants of the full three-task model, we also have single-task and two-task baselines which are the three-task model with the SPEECH/TEXT and TEXT/IMAGE tasks completely ablated, or with only the TEXT/IMAGE task ablated. Note that we do not include a condition with only the SPEECH/TEXT task ablated, as the two remaining tasks do not share any learnable parameters (since I is fixed). 3.7 Evaluation metrics Below we introduce metrics evaluating performance on the image retrieval task, as well as additional analytical metrics which quantify some aspects of the internal representation learned by the encoders. Evaluating image retrieval In order to evaluate how well the main SPEECH/IMAGE task performs we report the recall at 10 (R@10) and median rank (Medr) for the SPEECH/IMAGE task: utterances in the development set are encoded via S2I and images via I2S. For each utterance the images are ranked in order of cosine distance; R@10 counts the mean proportion of correct images among top 10 ranked images, while Medr gives the median of the ranks of the correct image (where correct image counts as image originally paired with the utterance). Invariance to speaker We measure how invariant the utterance encoding is to the identity of the speaker; in principle it is expected and desirable that the utterance encoding captures the meaning of the spoken language rather than other aspects of it such as who spoke it. To quantify this invariance we report the accuracy of an L2-penalized logistic regression model on the task of decoding the identity of the speaker from the output of the S2I encoder. The logistic model is trained on 2 3 of the development data and tested on the remaining 1 3. Representational similarity Representational Similarity Analysis (Kriegeskorte et al., 2008) gauges the correlation between two sets of pairwise similarity measurements. Here we use it to quantify the correlation of the learned representation space with the written text space and with the image space. For the encoder representations, the pairwise similarities between utterances are given by the cosine similarities. For the written form, the similarities are the inverse of the normalized Levenshtein distance between 6457 the character sequences encoding each pair of utterances: simtext(a, b) = 1 − D(a, b) max(|a|, |b|) (6) where D(a, b) is the Levenshtein distance and |·| is string length. We compute the Pearson correlation coefficient between two similarity matrices on the upper triangulars of the each matrix, excluding the diagonal. Phoneme decoding A direct way of measuring whether neural representations of speech are biased towards encoding symbols is to try to decode the phonemes from the activation patterns aligned with a phonetic transcription of the utterance. We follow the methodology of Alishahi et al. 
(2017b) and train an L2-penalized logistic regression model on the output of the S encoders for phonemes from 2,500 utterances and report classification accuracies on data from 2,500 heldout utterances. 3.8 Experimental settings Data The SPEECH/IMAGE and TEXT/IMAGE tasks are always trained on the Flickr8K Audio Caption Corpus (Harwath et al., 2016), which is based on the original Flickr8K dataset (Hodosh et al., 2013). Flickr8K consists of 8,000 photographic images depicting everyday situations. Each image is accompanied by five brief English descriptions produced by crowd workers. Flickr8K Audio Caption Corpus enriches this data with spoken versions of these descriptions, read aloud and recorded by crowd workers. The total amount of speech in this dataset is approximately 34 hours. One thousand images are held out for validation, and another one thousand for the test set, using the splits provided by Karpathy and Fei-Fei (2015). In the ALIGNED condition the SPEECH/TEXT task is also trained on this data. In the NON-ALIGNED condition, we train the SPEECH/TEXT task on the Libri dataset (Panayotov et al., 2015) which consists of approximately 1,000 hours of read English speech, derived from read audiobooks. There are 291,630 sentences in the corpus, of which 1,000 are held out for validation. We preprocess the audio by extracting 12dimensional mel-frequency cepstral coefficients (MFCC) plus log of the total energy. We use 25 millisecond windows, sampled every 10 milliseconds. The shared image encoder is fixed and consists of 4096 dimensional activations of the pre-classification layer of VGG-16 (Simonyan and Zisserman, 2014) pre-trained on Imagenet (Russakovsky et al., 2015). Hyperparameters Most of the hyperparameters are based from especially Chrupała et al. (2017). The models are trained for a maximum of 25 epochs with Adam, with learning rate 0.0002, and gradient clipping at 2.0. The loss function’s margin parameter is α = 0.2. The GRUs have 1024 dimensions. The convolutional layer has 64 channels, kernel size of 6 and stride 2. The hidden layer of the attention MLP is 128. The linear mappings I2S and I2T project 4096 dimensions down to 1024. We apply early stopping and pick the results of each run after the epoch for which it scored best on R@10. We run three random initializations of each configuration. Multi-task training We use a simple roundrobin training scheme: we alternate between tasks, and for each task update the parameters of that task as well as the shared parameters based on supervision from one batch of data. The data ordering for each task is independent, both in the ALIGNED and NON-ALIGNED condition: for each epoch we reshuffle the dataset associated to each task and iterate through the batches until the smallest dataset runs out. This procedure makes sure that the only difference between the ALIGNED and NON-ALIGNED conditions is the actual data and not other aspects of training. Repository The code needed to reproduce our results and analyses is available at https://github.com/gchrupala/symbolic-bias. 4 Results Table 1 shows the evaluation results on the validation data, on the image retrieval task of 13 configurations of the model, including three versions with one or two tasks ablated. Table 2 shows the results on the test set with the 1-task baseline model and the best performing configuration compared to previously reported results on this dataset. 
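To make the round-robin multi-task scheme described above concrete, here is a minimal sketch of one training epoch, assuming each task object bundles a DataLoader, a loss function, an optimizer defined over its task-specific plus shared parameters, and that parameter list itself. The attribute names (task.loader, task.loss_fn, task.optimizer, task.parameters) are hypothetical placeholders, not the interface of the released repository.

```python
import torch

def train_epoch(tasks):
    # Fresh iterators reshuffle each task's data independently
    # (e.g. a DataLoader constructed with shuffle=True).
    iterators = [iter(task.loader) for task in tasks]
    while True:
        try:
            batches = [next(it) for it in iterators]
        except StopIteration:
            break                                  # stop when the smallest dataset runs out
        for task, batch in zip(tasks, batches):    # fixed round-robin order over tasks
            task.optimizer.zero_grad()
            loss = task.loss_fn(*batch)            # e.g. the margin loss of Eq. (1)
            loss.backward()
            # Gradient clipping at 2.0, as in the reported hyperparameters.
            torch.nn.utils.clip_grad_norm_(task.parameters, max_norm=2.0)
            task.optimizer.step()                  # updates task-specific and shared parameters
```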
As can be seen the baseline model is a bit worse than the best reported result on this data, while the 3-task model is much better. 5 Discussion Below we discuss and interpret the patterns in performance on image retrieval as measured by Re6458 Data Tasks S T S2I S2T T2S T2I R@10 Medr 1 NA 1 2 . 2 . . . 0.218 63.8 2 Aligned 2 2 1 2 0 0 . 0.279 42.3 3 Non-aligned 2 2 1 2 0 0 . 0.280 41.3 4 Aligned 3 2 1 1 0 0 1 0.280 43.0 5 2 1 1 1 1 1 0.266 44.3 6 2 1 2 0 0 1 0.281 39.7 7 2 1 2 1 1 1 0.270 44.3 8 4 1 0 0 0 0 0.255 48.3 9 Non-aligned 3 2 1 1 0 0 1 0.275 42.8 10 2 1 1 1 1 1 0.257 49.8 11 2 1 2 0 0 1 0.280 41.7 12 2 1 2 1 1 1 0.252 50.7 13 4 1 0 0 0 0 0.223 59.3 Table 1: Results on the validation set with varying model configuration. R@10 is recall at 10 for the Speech/Image task, Medr is the median rank for the same task. All scores are averages over 3 runs with different random initializations; models were run for 25 epochs with early stopping with R@10 as a criterion. The numbers (1, 2) in the columns corresponding to encoders specify the number of RNN layers in each encoder; zero (0) indicates the encoder only consists of the self-attention with no RNN layers; dot (.) indicates the whole task in which the encoder participates is ablated. Data Tasks S T S2I S2T T2S T2I R@10 Medr NA 1 Harwath and Glass (2015) 0.179 NA 1 Chrupała et al. (2017) 0.253 48 NA 1 2 . 2 . . . 0.244 51 Aligned 3 2 1 2 0 0 1 0.296 34 Table 2: Results on the test set, obtained by using the best run/epoch determined on the validation data. The first two rows show the numbers reported in previous work. call@10 and median rank. Impact of tasks The most striking result is the large gap in performance between the 1-task condition (row 1) and most of the other rows. Comparing row 1 versus rows 2 and 3 we see that adding the SPEECH/TEXT task leads to a substantial improvement. However, comparing rows 2 and 3 versus rows 6 and 11, it seems that the addition of TEXT/IMAGE task does not seem to have a major impact on performance, at least to the extent that can be gleaned from the experiments we carried out. It is possible that with more effort put into engineering this component of the model we would see a better result. Role of data vs inductive bias The other major finding is that whether we use the same or different data to train the main and auxiliary task has overall little impact: this is indicated by relatively small differences between configurations in the ALIGNED vs NON-ALIGNED condition. The differences that are there tend to favor the ALIGNED setting. This lends supports to the conclusion that the SPEECH/TEXT auxiliary task contributes to improved performance on the main task via a strong inductive bias rather than merely via enabling the use of extra data. This is in contrast to many other applications of MTL. Impact of parameter sharing design The third important effect is about how parameters between the tasks are shared, specifically how the shared and task-specific parts of the speech encoder are apportioned. The configuration with maximum sharing of parameters among the tasks (rows 8 and 13) performs poorly compared to sharing only the lower layers of the encoders for speech and text (i.e. rows 6 and 11). Additionally, we see that the inclusion of a text-specific speech encoder S2T degrades performance: compare for exam6459 ple row 6 to 7, and row 11 to 12. 
Thus it is best to have a shared speech encoder whose output is directly used by the SPEECH/TEXT task, while the SPEECH/IMAGE task carries out further transformations of the input via an image-specific speech encoder S2I. We can interpret this as the MTL emulating a pipeline architecture to some extent: direct connection of the SPEECH/TEXT task to the shared encoder forces it to come up with a representation closely correlated with a written transcription, and then the image-specific speech encoder takes this as input and maps it to a more meaning-related representation, useful for the SPEECH/IMAGE task. In addition to the above patterns of performance on image retrieval we now address our further research questions by investigating selected aspects of the encoder representations. Speaker invariance Table 3 shows the accuracy of speaker identification from the activation patterns of the output of encoder S2I for the single task model, the 2-task model, and for the 3-task model which achieved the highest recall@10. The accuracy of the 2 task model is almost three times worse than for the single task model, indicating that the inclusion of SPEECH/TEXT strongly drives the learned representations towards speaker invariance. The TEXT/IMAGE task has only a minor impact. Model Accuracy Model 1, S2I 0.297 Model 2, S2I 0.101 Model 6, S2I 0.085 Table 3: Speaker identification accuracy for three model configurations. Model numbers refer to rows in Table 1. RSA with regard to textual and visual spaces Table 4 shows the RSA scores between the encoder representations of utterances and their representations in the spoken, written and visual modalities. Comparing the RSA scores between the S2I encoder of model 1 (single task) and model 6 (3 task) we see that the correlations with the textual modality and the visual modality are enhanced while the correlation with the input audio modality drops. This can be interpreted as the SPEECH/TEXT task nudging the model to align more closely with the text, which also ends up MFCC Text Image Model 1, S2I 0.043 0.194 0.187 Model 6, S2I 0.030 0.212 0.222 Model 6, S2T 0.099 0.243 0.105 Image 0.008 0.083 1.000 Table 4: Pearson correlation between pairwise utterance similarity matrices, for utterances represented by Mean MFCC features, written text, three encoders, and the features of the image corresponding to the utterance. Model numbers refer to rows in Table 1. Analysis carried out on the single best seed/epoch for each configuration, according to Recall@10. contributing to the correlation with the image space. For model 6 but using the output of the S2T encoder, we see the correlation with the text space is even higher while the correlation with the image space is low. These patterns are what we would expect if SPEECH/TEXT does indeed inject a symbolic inductive bias to the model. Finally, while the RSA score between the textual and visual modalities is low (0.083), nevertheless model 6’s encoder S2I is moderately correlated with both of these (0.212 and 0.222 respectively). Phoneme decoding Table 5 shows how well phonemes can be decoded from time-aligned slices of four types of representations: input MFCC features, the activation patterns of a randomly initialized S encoder, and the activations of the S encoder for two trained models (1-task and 3-task). Phonemes are most decodable from the 3-task activation patterns, corroborating that the SPEECH/TEXT task biases the representations towards a symbolic encoding of speech. 
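For completeness, the RSA score analyzed in Table 4 can be computed as sketched below, following Eq. (6) and the Pearson correlation over the upper triangulars excluding the diagonal. The plain dynamic-programming Levenshtein function is included only to keep the sketch self-contained and is not taken from the released code.

```python
import numpy as np

def levenshtein(a, b):
    """Standard edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def rsa_score(emb, texts):
    """RSA between an (N, d) array of encoder activations and the written forms of N utterances."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim_enc = e @ e.T                                # pairwise cosine similarities of encodings
    n = len(texts)
    sim_txt = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = levenshtein(texts[i], texts[j]) / max(len(texts[i]), len(texts[j]))
            sim_txt[i, j] = sim_txt[j, i] = 1.0 - d  # Eq. (6)
    iu = np.triu_indices(n, k=1)                     # upper triangular, diagonal excluded
    return np.corrcoef(sim_enc[iu], sim_txt[iu])[0, 1]
```

Comparing against the visual modality works the same way, with sim_txt replaced by pairwise cosine similarities between the image feature vectors.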
Representation Accuracy MFCC 0.284 Random init, S 0.486 Model 1, S 0.528 Model 6, S 0.578 Table 5: Phoneme decoding accuracy for the four representations. Model numbers refer to rows in Table 1. 6 Conclusion We show that the SPEECH/TEXT task leads to substantial performance improvements when compared to training the SPEECH/IMAGE task in isolation. Via controlled experiments and analyses we 6460 show evidence that this is due to the role of inductive bias on the learned encoder representations. Limitations and future work Our current model does not include an explicit speech-to-text decoder, which limits the types of analyses we can perform. For one, it makes it infeasible to carry out an apples-to-apples comparison with a pipeline architecture. Going forward we would like to go beyond matching tasks and evaluate the impact of an explicit speech-to-text decoder as an auxiliary task. We are also planning to investigate how sensitive our approach is to amount of data for the auxiliary task. This would be especially interesting given that one motivation for a visually-supervised end-to-end approach is the un-availability of large amounts of transcribed speech in certain circumstances. Acknowledgements I would like to thank Afra Alishahi, Lieke Gelderloos and Ákos Kádár, as well as several anonymous reviewers, for helpful comments and discussion about this work. References Afra Alishahi, Marie Barking, and Grzegorz Chrupała. 2017a. Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 368–378. Association for Computational Linguistics. Afra Alishahi, Marie Barking, and Grzegorz Chrupała. 2017b. Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 368–378. Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164–169. Association for Computational Linguistics. Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41–75. Grzegorz Chrupała, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Deep Learning and Representation Learning Workshop. Siddharth Dalmia, Ramon Sanabria, Florian Metze, and Alan W. Black. 2018. Sequence-based multilingual low resource speech recognition. CoRR, abs/1802.07420. Jennifer Drexler and James Glass. 2017. Analysis of audio-visual features for unsupervised speech recognition. In Proceedings of Grounded Language Understanding Workshop. Spandana Gella, Rico Sennrich, Frank Keller, and Mirella Lapata. 2017. Image pivoting for learning multilingual multimodal representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2839– 2845. David Harwath, Galen Chuang, and James Glass. 2018a. Vision as an interlingua: Learning multilingual semantic embeddings of untranscribed speech. 
In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). David Harwath and James Glass. 2015. Deep multimodal semantic embeddings for speech and images. In IEEE Automatic Speech Recognition and Understanding Workshop. David Harwath and James Glass. 2017. Learning word-like units from joint audio-visual analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 506–517. Association for Computational Linguistics. David Harwath, Adrià Recasens, Dídac Surís, Galen Chuang, Antonio Torralba, and James Glass. 2018b. Jointly discovering visual objects and spoken words from raw sensory input. arXiv preprint arXiv:1804.01452. David Harwath, Antonio Torralba, and James Glass. 2016. Unsupervised learning of spoken language with visual context. In Advances in Neural Information Processing Systems, pages 1858–1866. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923–1933. Association for Computational Linguistics. Georg Heigold, Vincent Vanhoucke, Andrew Senior, Patrick Nguyen, Marc’Aurelio Ranzato, Matthieu Devin, and Jeff Dean. 2013. Multilingual acoustic models using distributed deep neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8619–8623. 6461 Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, 47:853–899. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Ákos Kádár, Desmond Elliott, Marc-Alexandre Côté, Grzegorz Chrupała, and Afra Alishahi. 2018. Lessons learned in multilingual grounded language learning. In Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018). Herman Kamper, Shane Settle, Gregory Shakhnarovich, and Karen Livescu. 2017. Visually grounded learning of keyword prediction from untranscribed speech. In Proc. Interspeech 2017, pages 3677–3681. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137. Eliyahu Kiperwasser and Miguel Ballesteros. 2018. Scheduled multi-task learning: From syntax to translation. Transactions of the Association of Computational Linguistics, 6:225–240. Simon Kirby, Mike Dowman, and Thomas L. Griffiths. 2007. Innateness and culture in the evolution of language. Proceedings of the National Academy of Sciences, 104(12):5241–5245. Nikolaus Kriegeskorte, Marieke Mur, and Peter A Bandettini. 2008. Representational similarity analysisconnecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2:4. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365–378. 
Miryam de Lhoneux, Johannes Bjerva, Isabelle Augenstein, and Anders Søgaard. 2018. Parameter sharing between dependency parsers for related languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In International Conference on Learning Representations. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language Decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Brian McMahan and Delip Rao. 2017. Listening to the world improves speech command recognition. CoRR, abs/1710.08377. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. Kanishka Rao and Ha¸sim Sak. 2017. Multi-accent speech recognition with hierarchical grapheme based models. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4815–4819. Deb K Roy and Alex P Pentland. 2002. Learning words from sights and sounds: a computational model. Cognitive Science, 26(1):113 – 146. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2017. Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252. Michael L. Seltzer and Jasha Droppo. 2013. Multitask learning in deep neural networks for improved phoneme recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6965–6969. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–235. Association for Computational Linguistics. Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke S. Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. CoRR, abs/1808.10485. Shubham Toshniwal, Hao Tang, Liang Lu, and Karen Livescu. 2017. Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017, pages 3532–3536. 6462 Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. 2017. Recurrent highway networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 4189–4198, International Convention Centre, Sydney, Australia. PMLR.
2019
647
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6463–6474 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6463 Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog Zhe Gan1, Yu Cheng1, Ahmed El Kholy1, Linjie Li1, Jingjing Liu1, Jianfeng Gao2 1Microsoft Dynamics 365 AI Research, 2Microsoft Research {zhe.gan, yu.cheng, ahmed.eikholy, lindsey.li, jingjl, jfgao}@microsoft.com Abstract This paper presents a new model for visual dialog, Recurrent Dual Attention Network (ReDAN), using multi-step reasoning to answer a series of questions about an image. In each question-answering turn of a dialog, ReDAN infers the answer progressively through multiple reasoning steps. In each step of the reasoning process, the semantic representation of the question is updated based on the image and the previous dialog history, and the recurrently-refined representation is used for further reasoning in the subsequent step. On the VisDial v1.0 dataset, the proposed ReDAN model achieves a new state-ofthe-art of 64.47% NDCG score. Visualization on the reasoning process further demonstrates that ReDAN can locate context-relevant visual and textual clues via iterative refinement, which can lead to the correct answer step-bystep. 1 Introduction There has been a recent surge of interest in developing neural network models capable of understanding both visual information and natural language, with applications ranging from image captioning (Fang et al., 2015; Vinyals et al., 2015; Xu et al., 2015) to visual question answering (VQA) (Antol et al., 2015; Fukui et al., 2016; Anderson et al., 2018). Unlike VQA, where the model can answer a single question about an image, a visual dialog system (Das et al., 2017a; De Vries et al., 2017; Das et al., 2017b) is designed to answer a series of questions regarding an image, which requires a comprehensive understanding of both the image and previous dialog history. Most previous work on visual dialog rely on attention mechanisms (Bahdanau et al., 2015; Xu et al., 2015) to identify specific regions of the image and dialog-history snippets that are relevant to the question. These attention models measure the relevance between the query and the attended image, as well as the dialog context. To generate an answer, either a discriminative decoder is used for ranking answer candidates, or a generative decoder is trained for synthesizing an answer (Das et al., 2017a; Lu et al., 2017). Though promising results have been reported, these models often fail to provide accurate answers, especially in cases where answers are confined to particular image regions or dialog-history snippets. One hypothesis for the cause of failure is the inherent limitation of single-step reasoning approach. Intuitively, after taking a first glimpse of the image and the dialog history, readers often revisit specific sub-areas of both image and text to obtain a better understanding of the multimodal context. Inspired by this, we propose a Recurrent Dual Attention Network (ReDAN) that exploits multi-step reasoning for visual dialog. Figure 1a provides an overview of the model architecture of ReDAN. First, a set of visual and textual memories are created to store image features and dialog context, respectively. 
In each step, a semantic representation of the question is used to attend to both memories, in order to obtain a question-aware image representation and question-aware dialog representation, both of which subsequently contribute to updating the question representation via a recurrent neural network. Later reasoning steps typically provide a sharper attention distribution than earlier steps, aiming at narrowing down the regions most relevant to the answer. Finally, after several iterations of reasoning steps, the refined question vector and the garnered visual/textual clues are fused to obtain a final multimodal context vector, which is fed to the decoder for answer generation. This multistep reasoning process is performed in each turn of the dialog. 6464 R-CNN BiLSTM BiLSTM … Image I Question: “is he wearing shorts?” Dialog History: … Textual Memory Multimodal Fusion Visual Memory C: the young boy is playing tennis at the court Q: Is the young boy a toddler ? A: no Q: What color is his hair ? A: It ‘s black Decoder Answer: “yes” Visual features Textual features (a) Overview of the proposed ReDAN framework. 0.569 0.149 0.282 0.447 0.204 0.349 Original image 1st step reasoning 2nd step reasoning the young boy is playing tennis at the court Is the young boy a toddler ? no What color is his hair ? It ‘s black Dialog history 1st step reasoning 2nd step reasoning Snippet-level attention weights Q: “is he wearing shorts ?” A: “yes” 0.05 0.08 0.38 0.03 0.92 0.01 (b) An example of multi-step reasoning in ReDAN. Figure 1: Model architecture and visualization of the learned multi-step reasoning strategies. In the first step, ReDAN first focuses on all relevant objects in the image (e.g., “boy”, “shorts”), and all relevant facts in the dialog history (e.g., “young boy”, “playing tennis”, “black hair”). In the second step, the model narrows down to more context-relevant regions and dialog context (i.e., the attention maps become sharper) which lead to the final correct answer (“yes”). The numbers in the bounding boxes and in the histograms are the attention weights of the corresponding objects or dialog history snippets. Figure 1b provides an illustration of the iterative reasoning process. In the current dialog turn for the question “is he wearing shorts?”, in the initial reasoning step, the system needs to draw knowledge from previous dialog history to know who “he” refers to (i.e., “the young boy”), as well as interpreting the image to rule out objects irrelevant to the question (i.e., “net”, “racket” and “court”). After this, the system conducts a second round of reasoning to pinpoint the image region (i.e., “shorts”, whose attention weight increases from 0.38 to 0.92 from the 1st step to the 2nd step) and the dialog-history snippet (i.e., “playing tennis at the court”, whose attention weight increased from 0.447 to 0.569), which are most indicative of the correct answer (“yes”). The main contributions of this paper are threefold. (i) We propose a ReDAN framework that supports multi-step reasoning for visual dialog. (ii) We introduce a simple rank aggregation method to combine the ranking results of discriminative and generative models to further boost the performance. (iii) Comprehensive evaluation and visualization analysis demonstrate the effectiveness of our model in inferring answers progressively through iterative reasoning steps. Our proposed model achieves a new state-of-the-art of 64.47% NDCG score on the VisDial v1.0 dataset. 
2 Related Work Visual Dialog The visual dialog task was recently proposed by Das et al. (2017a) and De Vries et al. (2017). Specifically, Das et al. (2017a) released the VisDial dataset, which contains freeform natural language questions and answers. And De Vries et al. (2017) introduced the GuessWhat?! dataset, where the dialogs provided are more goal-oriented and aimed at object discovery within an image, through a series of yes/no questions between two dialog agents. For the VisDial task, a typical system follows the encoder-decoder framework proposed in Sutskever et al. (2014). Different encoder models have been explored in previous studies, including late fusion, hierarchical recurrent network, memory network (all three proposed in Das et al. (2017a)), early answer fusion (Jain et al., 2018), history-conditional image attention (Lu et al., 2017), and sequential co-attention (Wu et al., 2018). The decoder model usually falls into two categories: (i) generative decoder to synthesize the answer with a Recurrent Neural Network (RNN) (Das et al., 2017a); and (ii) discriminative decoder to rank answer candidates via a softmaxbased cross-entropy loss (Das et al., 2017a) or a ranking-based multi-class N-pair loss (Lu et al., 2017). Reinforcement Learning (RL) was used in Das et al. (2017b); Chattopadhyay et al. (2017) to train two agents to play image guessing games. Lu et al. (2017) proposed a training schema to effectively transfer knowledge from a pre-trained discriminative model to a generative dialog model. Generative Adversarial Network (Goodfellow et al., 2014; Yu et al., 2017b; Li et al., 2017) was also 6465 used in Wu et al. (2018) to generate answers indistinguishable from human-generated answers, and a conditional variational autoencoder (Kingma and Welling, 2014; Sohn et al., 2015) was developed in Massiceti et al. (2018) to promote answer diversity. There were also studies investigating visual coreference resolution, either via attention memory implicitly (Seo et al., 2017) or using a more explicit reasoning procedure (Kottur et al., 2018) based on neural module networks (Andreas et al., 2016). In addition to answering questions, question sequence generation is also investigated in Jain et al. (2018); Massiceti et al. (2018). For the GuessWhat?! task, various methods (such as RL) have been proposed to improve the performance of dialog agents, measured by task completion rate as in goal-oriented dialog system (Strub et al., 2017; Shekhar et al., 2018; Strub et al., 2018; Lee et al., 2018; Zhang et al., 2018). Other related work includes imagegrounded chitchat (Mostafazadeh et al., 2017), dialog-based image retrieval (Guo et al., 2018), and text-only conversational question answering (Reddy et al., 2018; Choi et al., 2018). A recent survey on neural approaches to dialog modeling can be found in Gao et al. (2018). In this work, we focus on the VisDial task. Different from previous approaches to visual dialog, which all used a single-step reasoning strategy, we propose a novel multi-step reasoning framework that can boost the performance of visual dialog systems by inferring context-relevant information from the image and the dialog history iteratively. 
Multi-step Reasoning The idea of multi-step reasoning has been explored in many tasks, including image classification (Mnih et al., 2014), text classification (Yu et al., 2017a), image generation (Gregor et al., 2015), language-based image editing (Chen et al., 2018), Visual Question Answering (VQA) (Yang et al., 2016; Nam et al., 2017; Hudson and Manning, 2018), and Machine Reading Comprehension (MRC) (Cui et al., 2017; Dhingra et al., 2017; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2017; Liu et al., 2018). Specifically, Mnih et al. (2014) introduced an RNN for image classification, by selecting a sequence of regions adaptively and only processing the selected regions. Yu et al. (2017a) used an RNN for text classification, by learning to skip irrelevant information when reading the text input. A recurrent variational autoencoder termed DRAW was proposed in Gregor et al. (2015) for multi-step image generation. A recurrent attentive model for image editing was also proposed in Chen et al. (2018) to fuse image and language features via multiple steps. For VQA, Stacked Attention Network (SAN) (Yang et al., 2016) was proposed to attend the question to relevant image regions via multiple attention layers. For MRC, ReasoNet (Shen et al., 2017) was developed to perform multi-step reasoning to infer the answer span based on a given passage and a question, where the number of steps can be dynamically determined via a termination gate. Different from SAN for VQA (Yang et al., 2016) and ReasoNet for MRC (Shen et al., 2017), which reason over a single type of input (either image or text), our proposed ReDAN model incorporates multimodal context that encodes both visual information and textual dialog. This multimodal reasoning approach presents a mutual enhancement between image and text for a better understanding of both: on the one hand, the attended image regions can provide additional information for better dialog interpretation; on the other hand, the attended history snippets can be used for better image understanding (see the dotted red lines in Figure 2). Concurrent Work We also include some concurrent work for visual dialog that has not been discussed above, including image-questionanswer synergistic network (Guo et al., 2019), recursive visual attention (Niu et al., 2018), factor graph attention (Schwartz et al., 2019), dual attention network (Kang et al., 2019), graph neural network (Zheng et al., 2019), history-advantage sequence training (Yang et al., 2019), and weighted likelihood estimation (Zhang et al., 2019). 3 Recurrent Dual Attention Network The visual dialog task (Das et al., 2017a) is formulated as follows: given a question Qℓ grounded in an image I, and previous dialog history (including the image caption C) Hℓ= {C, (Q1, A1), · · · , (Qℓ−1, Aℓ−1)} (ℓis the current dialog turn) as additional context, the goal is to generate an answer by ranking a list of N candidate answers Aℓ= {A(1) ℓ, . . . , A(N) ℓ }. Figure 2 provides an overview of the Recurrent Dual Attention Network (ReDAN). 
Specifically, 6466 R-CNN BiLSTM BiLSTM BiLSTM BiLSTM … … Ranking … Image I Question Q: “what color are the glasses?” Dialog History H: Attention Attention Attention Attention … Visual Reasoning Textual Memory Multimodal Fusion LSTM Answer A: “black frame” Candidate 1: “red and white” Candidate 100: “brown” Discriminative Decoder Generative Decoder Multi-step Reasoning via Recurrent Dual Attention Network Visual Memory Textual Reasoning a person sitting on a red bench with a laptop is the person male or female ? male how old is the male ? looks to be late 20s does he wear glasses ? yes Figure 2: Model Architecture of Recurrent Dual Attention Network for visual dialog. Please see Sec. 3 for details. ReDAN consists of three components: (i) Memory Generation Module (Sec. 3.1), which generates a set of visual and textual memories to provide grounding for reasoning; (ii) Multi-step Reasoning Module (Sec. 3.2), where recurrent dual attention is applied to jointly encode question, image and dialog history into a multimodal context vector for decoding; and (iii) Answer Decoding Module (Sec. 3.3), which derives the final answer for each question based on the multimodal context vector. The following sub-sections describe the details of these components. 3.1 Memory Generation Module In this module, the image I and the dialog history Hℓare transformed into a set of memory vectors (visual and textual). Visual Memory We use a pre-trained Faster RCNN (Ren et al., 2015; Anderson et al., 2018) to extract image features, in order to enable attention on both object-level and salient region-level, each associated with a feature vector. Compared to image features extracted from VGG-Net (Simonyan and Zisserman, 2014) and ResNet (He et al., 2016), this type of features from Faster RCNN has achieved state-of-the-art performance in both image captioning and VQA (Anderson et al., 2018; Teney et al., 2018) tasks. Specifically, the image features FI for a raw image I are represented by: FI = R-CNN(I) ∈Rnf×M , (1) where M = 36 is the number of detected objects in an image1, and nf = 2048 is the dimension of the feature vector. A single-layer perceptron is used to transform each feature into a new vector that has the same dimension as the query vector (described in Sec. 3.2): Mv = tanh(WIFI) ∈Rnh×M , (2) where WI ∈Rnh×nf . All the bias terms in this paper are omitted for simplicity. Mv is the visual memory, and its m-th column corresponds to the visual feature vector for the region of the object indexed by m. Textual Memory In the ℓ-th dialogue turn, the dialog history Hℓconsists of the caption C and ℓ−1 rounds of QA pairs (Qj, Aj) (j = 1, . . . , ℓ−1). For each dialog-history snippet j (the caption is considered as the first one with j = 0), it is first represented as a matrix M(j) h = [h(j) 0 , . . . , h(j) K−1] ∈Rnh×K via a bidirectional Long Short-Term Memory (BiLSTM) network (Hochreiter and Schmidhuber, 1997), where K is the maximum length of the dialog-history snippet. Then, a self-attention mechanism is applied to learn the attention weight of every word in the snippet, identifying the key words and ruling out irrelevant information. Specifically, ωj = softmax(pT ω · tanh(WhM(j) h )) , uj = ωj · (M(j) h )T , (3) 1We have also tried using an adaptive number of detected objects for an image. Results are very similar to the results with M = 36. 6467 where ωj ∈ R1×K, pω ∈ Rnh×1, Wh ∈ Rnh×nh, and uj ∈R1×nh. 
After applying the same BiLSTM to each dialog-history snippet, the textual memory is then represented as Md = [uT 0 , . . . , uT ℓ−1] ∈Rnh×ℓ. 3.2 Multi-step Reasoning Module The multi-step reasoning framework is implemented via an RNN, where the hidden state st represents the current representation of the question, and acts as a query to retrieve visual and textual memories. The initial state s0 is a selfattended question vector q. Let vt and dt denote the attended image representation and dialoghistory representation in the t-th step, respectively. A one-step reasoning pathway can be illustrated as st →vt →dt →st+1, which is performed T times. Details are described below. Self-attended Question Similar to textual memory construction, a question Q (the subscript ℓfor Qℓis omitted to reduce confusion) is first represented as a matrix Mq = [q0, . . . , qK′−1] ∈Rnh×K′ via a BiLSTM, where K′ is the maximum length of the question. Then, self attention is applied, α = softmax(pT α · tanh(WqMq)) , q = αMT q , where α ∈R1×K′, pα ∈Rnh×1, and Wq ∈ Rnh×nh. q ∈R1×nh then serves as the initial hidden state of the RNN, i.e., s0 = q. The reasoning pathway st →vt →dt →st+1 includes the following steps: (i) (st, dt−1) →vt; (ii) (st, vt) →dt; and (iii) (vt, dt) →st+1. Query and History Attending to Image Given st and the previous attended dialog history representation dt−1 ∈R1×nh, we update vt as follows: β = softmax(pT β · tanh(WvMv + WssT t + WddT t−1)) , vt = β · MT v , (4) where β ∈ R1×M, pβ ∈ Rnh×1, Wv ∈ Rnh×nh, Ws ∈Rnh×nh and Wd ∈Rnh×nh. The updated vt, together with st, is used to attend to the dialog history. Query and Image Attending to History Given st ∈R1×nh and the attended image representation vt ∈R1×nh, we update dt as follows: γ = softmax(pT γ · tanh(W ′ dMd + W ′ ssT t + W ′ vvT t )) , dt = γ · MT d , (5) where γ ∈ R1×ℓ, pγ ∈ Rnh×1, W ′ v ∈ Rnh×nh, W ′ s ∈Rnh×nh and W ′ d ∈Rnh×nh. The updated dt is fused with vt and then used to update the RNN query state. Multimodal Fusion Given the query vector st, we have thus far obtained the updated image representation vt and the dialog-history representation dt. Now, we use Multimodal Factorized Bilinear pooling (MFB) (Yu et al., 2017c) to fuse vt and dt together. Specifically, zt = SumPooling(UvvT t ◦UddT t , k) , (6) zt = sign(zt)|zt|0.5, zt = zT t /||zt|| , (7) where Uv ∈Rnhk×nh, Ud ∈Rnhk×nh. The function SumPooling(x, k) in (6) means using a onedimensional non-overlapped window with the size k to perform sum pooling over x. (7) performs power normalization and ℓ2 normalization. The whole process is denoted in short as: zt = MFB(vt, dt) ∈R1×nh . (8) There are also other methods for multimodal fusion, such as MCB (Fukui et al., 2016) and MLB (Kim et al., 2017). We use MFB in this paper due to its superior performance in VQA. Image and History Updating RNN State The initial state s0 is set to q, which represents the initial understanding of the question. The question representation is then updated based on the current dialogue history and the image, via an RNN with Gated Recurrent Unit (GRU) (Cho et al., 2014): st+1 = GRU(st, zt) . (9) This process forms a cycle completing one reasoning step. After performing T steps of reasoning, multimodal fusion is then used to obtain the final context vector: c = [MFB(sT , vT ), MFB(sT , dT ), MFB(vT , dT )] . (10) 3.3 Answer Decoding Module Discriminative Decoder The context vector c is used to rank answers from a pool of candidates A (the subscript ℓfor Aℓis omitted). 
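As an aside before the decoder details, the reasoning pathway of Eqs. (4)–(9) can be summarized in a short PyTorch sketch. The module below is an illustrative, batched restatement of one step s_t → v_t → d_t → s_{t+1}; the layer names, the added bias terms, and the use of nn.GRUCell are implementation choices for the sketch, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReasoningStep(nn.Module):
    def __init__(self, h=512, k=5):
        super().__init__()
        self.k = k
        # Attention over the visual memory (Eq. 4).
        self.Wv, self.Ws, self.Wd = nn.Linear(h, h), nn.Linear(h, h), nn.Linear(h, h)
        self.p_beta = nn.Linear(h, 1)
        # Attention over the textual memory (Eq. 5).
        self.Wd2, self.Ws2, self.Wv2 = nn.Linear(h, h), nn.Linear(h, h), nn.Linear(h, h)
        self.p_gamma = nn.Linear(h, 1)
        # MFB fusion (Eqs. 6-7) and the recurrent query update (Eq. 9).
        self.Uv, self.Ud = nn.Linear(h, h * k), nn.Linear(h, h * k)
        self.gru = nn.GRUCell(h, h)

    def mfb(self, v, d):
        z = self.Uv(v) * self.Ud(d)                          # (B, h*k) elementwise product
        z = z.view(z.size(0), -1, self.k).sum(dim=2)         # non-overlapping sum pooling -> (B, h)
        z = torch.sign(z) * torch.sqrt(torch.abs(z) + 1e-8)  # power normalization
        return F.normalize(z, dim=1)                         # l2 normalization

    def attend(self, mem, queries, proj_mem, proj_qs, p):
        # scores = p^T tanh(W_mem * mem + sum_q W_q * q), softmax over memory slots.
        scores = proj_mem(mem)                               # (B, slots, h)
        for q, proj in zip(queries, proj_qs):
            scores = scores + proj(q).unsqueeze(1)
        alpha = F.softmax(p(torch.tanh(scores)).squeeze(-1), dim=1)   # (B, slots)
        return torch.bmm(alpha.unsqueeze(1), mem).squeeze(1)          # weighted sum -> (B, h)

    def forward(self, s, Mv, Md, d_prev):
        # s: (B, h) query state; Mv: (B, M, h) visual memory; Md: (B, L, h) textual memory.
        v = self.attend(Mv, [s, d_prev], self.Wv, [self.Ws, self.Wd], self.p_beta)   # Eq. 4
        d = self.attend(Md, [s, v], self.Wd2, [self.Ws2, self.Wv2], self.p_gamma)    # Eq. 5
        s_next = self.gru(self.mfb(v, d), s)                                         # Eqs. 8-9
        return s_next, v, d
```

Applying the module T times and fusing (s_T, v_T, d_T) pairwise with the same MFB operator gives the final context vector of Eq. (10).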
Similar to how we obtain the self-attended question vector in Sec. 3.2, a BiLSTM, together with the selfattention mechanism, is used to obtain a vector representation for each candidate Aj ∈A, resulting in aj ∈R1×nh, for j = 1, . . . , N. Based 6468 on this, a probability vector p is computed as p = softmax(s), where s ∈RN, and s[j] = caT j . During training, ReDAN is optimized by minimizing the cross-entropy loss2 between the one-hotencoded ground-truth label vector and the probability distribution p. During evaluation, the answer candidates are simply ranked based on the probability vector p. Generative Decoder Besides the discriminative decoder, following Das et al. (2017a), we also consider a generative decoder, where another LSTM is used to decode the context vector into an answer. During training, we maximize the log-likelihood of the ground-truth answers. During evaluation, we use the log-likelihood scores to rank answer candidates. Rank Aggregation Empirically, we found that combining the ranking results of discriminative and generative decoders boosts the performance a lot. Two different rank aggregation methods are explored here: (i) average over ranks; and (ii) average over reciprocal ranks. Specifically, in a dialog session, assuming r1, . . . , rK represents the ranking results obtained from K trained models (either discriminative, or generative). In the first method, the average ranks 1 K PK k=1 rk are used to re-rank the candidates. In the second one, we use the average of the reciprocal ranks of each individual model 1 K PK k=1 1/rk for re-ranking. 4 Experiments In this section, we explain in details our experiments on the VisDial dataset. We compare our ReDAN model with state-of-the-art baselines, and conduct detailed analysis to validate the effectiveness of our proposed model. 4.1 Experimental Setup Dataset We evaluate our proposed approach on the recently released VisDial v1.0 dataset3. Specifically, the training and validation splits from v0.9 are combined together to form the new training data in v1.0, which contains dialogs on 123, 287 images from COCO dataset (Lin et al., 2014). Each dialog is equipped with 10 turns, resulting in a total of 1.2M question-answer pairs. 2We have also tried the N-pair ranking loss used in Lu et al. (2017). Results are very similar to each other. 3As suggested in https://visualdialog.org/ data, results should be reported on v1.0, instead of v0.9. An additional 10, 064 COCO-like images are further collected from Flickr, of which 2, 064 images are used as the validation set (val v1.0), and the rest 8K are used as the test set (test-std v1.0), hosted on an evaluation server4 (the ground-truth answers for this split are not publicly available). Each image in the val v1.0 split is associated with a 10-turn dialog, while a dialog with a flexible number of turns is provided for each image in teststd v1.0. Each question-answer pair in the VisDial dataset is accompanied by a list of 100 answer candidates, and the goal is to find the correct answer among all the candidates. Preprocessing We truncate captions/questions/ answers that are longer than 40/20/20 words, respectively. And we build a vocabulary of words that occur at least 5 times in train v1.0, resulting in 11, 319 words in the vocabulary. For word embeddings, we use pre-trained GloVe vectors (Pennington et al., 2014) for all the captions, questions and answers, concatenated with the learned word embedding from the BiLSTM encoders to further boost the performance. 
For image representation, we use bottom-up-attention features (Anderson et al., 2018) extracted from Faster R-CNN (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2017). A set of 36 features is created for each image. Each feature is a 2048dimentional vector. Evaluation Following Das et al. (2017a), we use a set of ranking metrics (Recall@k for k = {1, 5, 10}, mean rank, and mean reciprocal rank (MRR)), to measure the performance of retrieving the ground-truth answer from a pool of 100 candidates. Normalized Discounted Cumulative Gain (NDCG) score is also used for evaluation in the visual dialog challenge 2018 and 2019, based on which challenge winners are picked. Since this requires dense human annotations, the calculation of NDCG is only available on val v1.0, test-std v1.0, and a small subset of 2000 images from train v1.0. Training details All three BiLSTMs used in the model are single-layer with 512 hidden units. The number of factors used in MFB is set to 5, and we use mini-batches of size 100. The maximum number of epochs is set to 20. No datasetspecific tuning or regularization is conducted except dropout (Srivastava et al., 2014) and early 4https://evalai.cloudcv.org/web/ challenges/challenge-page/161/overview 6469 Model NDCG MRR R@1 R@5 R@10 Mean MN-D (Das et al., 2017a) 55.13 60.42 46.09 78.14 88.05 4.63 HCIAE-D (Lu et al., 2017) 57.65 62.96 48.94 80.50 89.66 4.24 CoAtt-D (Wu et al., 2018) 57.72 62.91 48.86 80.41 89.83 4.21 ReDAN-D (T=1) 58.49 63.35 49.47 80.72 90.05 4.19 ReDAN-D (T=2) 59.26 63.46 49.61 80.75 89.96 4.15 ReDAN-D (T=3) 59.32 64.21 50.60 81.39 90.26 4.05 Ensemble of 4 60.53 65.30 51.67 82.40 91.09 3.82 Table 1: Comparison of ReDAN with a discriminative decoder to state-of-the-art methods on VisDial v1.0 validation set. Higher score is better for NDCG, MRR and Recall@k, while lower score is better for mean rank. All these baselines are re-implemented with bottom-up features and incorporated with GloVe vectors for fair comparison. Model NDCG MRR R@1 R@5 R@10 Mean MN-G (Das et al., 2017a) 56.99 47.83 38.01 57.49 64.08 18.76 HCIAE-G (Lu et al., 2017) 59.70 49.07 39.72 58.23 64.73 18.43 CoAtt-G (Wu et al., 2018) 59.24 49.64 40.09 59.37 65.92 17.86 ReDAN-G (T=1) 59.41 49.60 39.95 59.32 65.97 17.79 ReDAN-G (T=2) 60.11 49.96 40.36 59.72 66.57 17.53 ReDAN-G (T=3) 60.47 50.02 40.27 59.93 66.78 17.40 Ensemble of 4 61.43 50.41 40.85 60.08 67.17 17.38 Table 2: Comparison of ReDAN with a generative decoder to state-of-the-art generative methods on VisDial val v1.0. All the baseline models are re-implemented with bottom-up features and incorporated with GloVe vectors for fair comparison. stopping on validation sets. The dropout ratio is 0.2. The Adam algorithm (Kingma and Ba, 2014) with learning rate 4 × 10−4 is used for optimization. The learning rate is halved every 10 epochs. 4.2 Quantitative Results Baselines We compare our proposed approach with state-of-the-art models, including Memory Network (MN) (Das et al., 2017a), History-Conditioned Image Attentive Encoder (HCIAE) (Lu et al., 2017) and Sequential CoAttention model (CoAtt) (Wu et al., 2018). In their original papers, all these models used VGGNet (Simonyan and Zisserman, 2014) for image feature extraction, and reported results on VisDial v0.9. Since bottom-up-attention features have proven to achieve consistently better performance than VGG-Net in other tasks, we re-implemented all these models with bottom-up-attention features, and used the same cross-entropy loss for training. 
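As a brief aside on evaluation, the ranking metrics listed in the Evaluation paragraph above reduce to a few lines once the rank of the ground-truth answer among the 100 candidates is known for every question. The sketch below is for illustration only, not the official evaluation script; NDCG additionally needs the dense relevance annotations and is therefore omitted.

```python
import numpy as np

def ranking_metrics(gt_ranks):
    """gt_ranks: iterable of 1-based ranks of the ground-truth answer per question."""
    r = np.asarray(gt_ranks, dtype=np.float64)
    return {
        "R@1":  float(np.mean(r <= 1)),
        "R@5":  float(np.mean(r <= 5)),
        "R@10": float(np.mean(r <= 10)),
        "MRR":  float(np.mean(1.0 / r)),   # mean reciprocal rank
        "Mean": float(np.mean(r)),         # mean rank (lower is better)
    }
```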
Further, unidirectional LSTMs are used in these previous baselines, which are replaced by bidirectional LSTMs with self-attention mechanisms for fair comparison. All the baselines are also further incorporated with pre-trained GloVe vectors. We choose the best three models on VisDial v0.9 as the baselines: • MN (Das et al., 2017a): (i) mean pooling is performed over the bottom-up-attention features for image representation; (ii) image and question attend to the dialog history. • HCIAE (Lu et al., 2017): (i) question attends to dialog history; (ii) then, question and the attended history attend to the image. • CoAtt (Wu et al., 2018): (i) question attends to the image; (ii) question and image attend to the history; (iii) image and history attend to the question; (iv) question and history attend to the image again. Results on VisDial val v1.0 Experimental results on val v1.0 are shown in Table 1. “-D” denotes that a discriminative decoder is used. With only one reasoning step, our ReDAN model already achieves better performance than CoAtt, which is the previous best-performing model. Using two or three reasoning steps further increases the performance. Further increasing the number of reasoning steps does not help, thus results are not shown. We also report results on an ensemble of 4 ReDAN-D models. Significant improvement was observed, boosting NDCG from 59.32 to 60.53, and MRR from 64.21 to 65.30. In addition to discriminative decoders, we also evaluate our model with a generative decoder. Results are summarized in Table 2. Similar to Table 1, ReDAN-G with T=3 also achieves the best performance. It is intuitive to observe that ReDAN-D achieves much better results than ReDAN-G on MRR, R@k and Mean Rank, since ReDAN-D is a discriminative model, and utilizes much more information than ReDAN-G. For ex6470 Q: is she wearing sneakers? A: yes Q: what is the woman wearing? A: a white light jacket, white t shirt, shorts Q: what color is his hat? A: white Q: is the dog sleeping? A: no (Left) 2 step reasoning (Right) 3 step reasoning Q: can you see both laptops ? A: yes Q: what color is the stove? A: white Figure 3: Visualization of learned attention maps in multiple reasoning steps. Model Ens. Method NDCG MRR R@1 R@5 R@10 Mean 4 Dis. Average 60.53 65.30 51.67 82.40 91.09 3.82 4 Gen. Average 61.43 50.41 40.85 60.08 67.17 17.38 1 Dis. + 1 Gen. Average 63.85 53.53 42.16 65.43 74.36 9.00 1 Dis. + 1 Gen. Reciprocal 63.18 59.03 42.33 78.71 88.13 4.88 4 Dis. + 4 Gen. Average 65.13 54.19 42.92 66.25 74.88 8.74 4 Dis. + 4 Gen. Reciprocal 64.75 61.33 45.52 80.67 89.55 4.41 ReDAN+ (Diverse Ens.) Average 67.12 56.77 44.65 69.47 79.90 5.96 Table 3: Results of different rank aggregation methods. Dis. and Gen. is short for discriminative and generative model, respectively. ample, ReDAN-D uses both positive and negative answer candidates for ranking/classification, while ReDAN-G only uses positive answer candidates for generation. However, interestingly, ReDAN-G achieves better NDCG scores than ReDAN-D (61.43 vs 60.53). We provide some detailed analysis in the question-type analysis section below. 4.3 Qualitative Analysis In addition to the examples illustrated in Figure 1b, Figure 3 provide six more examples to visualize the learned attention maps. The associated dialog histories are omitted for simplicity. Typically, the attention maps become sharper and more focused throughout the reasoning process. 
During multiple steps, the model gradually learns to narrow down to the image regions of key objects relevant to the questions (“laptops”, “stove”, “sneakers”, “hat”, “dog’s eyes” and “woman’s clothes”). For instance, in the top-right example, the model focuses on the wrong region (“man”) in the 1st step, but gradually shifts its focus to the correct regions (“dog’s eyes”) in the later steps. 4.4 Visual Dialog Challenge 2019 Now, we discuss how we further boost the performance of ReDAN for participating Visual Dialog Challenge 20195. Rank Aggregation As shown in Table 1 and 2, ensemble of discriminative or generative models increase the NDCG score to some extent. Empirically, we found that aggregating the ranking results of both discriminative and generative models readily boost the performance. Results are summarized in Table 3. Combining one discriminative and one generative model already shows much better NDCG results than ensemble of 4 discriminative models. The ensemble of 4 discriminative and 4 generative models further boosts the performance. It is interesting to note that using average of the ranks results in better NDCG than using reciprocal of the ranks, though the reciprocal method achieves better results on the other metrics. Since NDCG is the metric we mostly care about, the method of averaging ranking results from different models is adopted. Finally, we have tried using different image feature inputs, and incorporating relation-aware encoders (Li et al., 2019) into ReDAN to further boost the performance. By this diverse set of ensembles (called ReDAN+), we achieve an NDCG score of 67.12% on the val v1.0 set. 5https://visualdialog.org/challenge/ 2019 6471 Model NDCG MRR R@1 R@5 R@10 Mean ReDAN+ (Diverse Ens.) 64.47 53.73 42.45 64.68 75.68 6.63 ReDAN (1 Dis. + 1 Gen.) 61.86 53.13 41.38 66.07 74.50 8.91 DAN (Kang et al., 2019) 59.36 64.92 51.28 81.60 90.88 3.92 NMN (Kottur et al., 2018) 58.10 58.80 44.15 76.88 86.88 4.81 Sync (Guo et al., 2019) 57.88 63.42 49.30 80.77 90.68 3.97 HACAN (Yang et al., 2019) 57.17 64.22 50.88 80.63 89.45 4.20 FGA† 57.13 69.25 55.65 86.73 94.05 3.14 USTC-YTH‡ 56.47 61.44 47.65 78.13 87.88 4.65 RvA (Niu et al., 2018) 55.59 63.03 49.03 80.40 89.83 4.18 MS ConvAI‡ 55.35 63.27 49.53 80.40 89.60 4.15 CorefNMN (Kottur et al., 2018) 54.70 61.50 47.55 78.10 88.80 4.40 FGA (Schwartz et al., 2019) 54.46 67.25 53.40 85.28 92.70 3.54 GNN (Zheng et al., 2019) 52.82 61.37 47.33 77.98 87.83 4.57 LF-Att w/ bottom-up† 51.63 60.41 46.18 77.80 87.30 4.75 LF-Att‡ 49.76 57.07 42.08 74.83 85.05 5.41 MN-Att‡ 49.58 56.90 42.43 74.00 84.35 5.59 MN‡ 47.50 55.49 40.98 72.30 83.30 5.92 HRE‡ 45.46 54.16 39.93 70.45 81.50 6.41 LF‡ 45.31 55.42 40.95 72.45 82.83 5.95 Table 4: Comparison of ReDAN to state-of-the-art visual dialog models on the blind test-std v1.0 set, as reported by the test server. (†) taken from https://evalai.cloudcv.org/web/challenges/challenge-page/161/ leaderboard/483. (‡) taken from https://evalai.cloudcv.org/web/challenges/challenge-page/ 103/leaderboard/298. Question Type All Yes/no Number Color Others Percentage 100% 75% 3% 11% 11% Dis. 59.32 60.89 44.47 58.13 52.68 Gen. 60.42 63.49 41.09 52.16 51.45 4 Dis. + 4 Gen. 65.13 68.04 46.61 57.49 57.50 ReDAN+ 67.12 69.49 50.10 62.70 58.50 Table 5: Question-type analysis of the NDCG score achieved by different models on the val v1.0 set. Results on VisDial test-std v1.0 We also evaluate the proposed ReDAN on the blind test-std v1.0 set, by submitting results to the online evaluation server. 
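The two rank-aggregation strategies compared in Table 3 can likewise be written compactly. The sketch below assumes a (K, N) array of 1-based candidate ranks from K models for a single question; it is an illustration of the averaging rules stated above, not the authors' code.

```python
import numpy as np

def aggregate(ranks, method="average"):
    """ranks[k, j]: rank that model k assigns to candidate j (1 = best)."""
    if method == "average":
        score = -ranks.mean(axis=0)          # lower average rank = better candidate
    elif method == "reciprocal":
        score = (1.0 / ranks).mean(axis=0)   # higher average reciprocal rank = better candidate
    else:
        raise ValueError(method)
    order = np.argsort(-score)               # candidates sorted from best to worst
    new_ranks = np.empty_like(order)
    new_ranks[order] = np.arange(1, len(order) + 1)
    return new_ranks                          # re-ranked positions, 1 = best
```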
Table 4 shows the comparison between our model and state-of-the-art visual dialog models. By using a diverse set of ensembles, ReDAN+ outperforms the state of the art method, DAN (Kottur et al., 2018), by a significant margin, lifting NDCG from 59.36% to 64.47%. Question-Type Analysis We further perform a question-type analysis of the NDCG scores achieved by different models. We classify questions into 4 categories: Yes/no, Number, Color, and Others. As illustrated in Table 5, in terms of the NDCG score, generative models performed better on Yes/no questions, while discriminative models performed better on all the other types of questions. We hypothesize that this is due to that generative models tend to ranking short answers higher, thus is beneficial for Yes/no questions. Since Yes/no questions take a majority of all the questions (75%), the better performance of generative models on the Yes/no questions translated into an overall better performance of generative models. Aggregating the ranking results of both discriminative and generative models results in the mutual enhancement of each other, and therefore boosting the final NDCG score by a large margin. Also, we observe that the Number questions are most difficult to answer, since training a model to count is a challenging research problem. 5 Conclusion We have presented Recurrent Dual Attention Network (ReDAN), a new multimodal framework for visual dialog, by incorporating image and dialog history context via a recurrently-updated query vector for multi-step reasoning. This iterative reasoning process enables model to achieve a finegrained understanding of multimodal context, thus boosting question answering performance over state-of-the-art methods. Experiments on the VisDial dataset validate the effectiveness of the proposed approach. Acknowledgements We thank Yuwei Fang, Huazheng Wang and Junjie Hu for helpful discussions. We thank anonymous reviewers for their constructive feedbacks. 6472 References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In CVPR. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In ICCV. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Prithvijit Chattopadhyay, Deshraj Yadav, Viraj Prabhu, Arjun Chandrasekaran, Abhishek Das, Stefan Lee, Dhruv Batra, and Devi Parikh. 2017. Evaluating visual conversational agents via cooperative human-ai games. In HCOMP. Jianbo Chen, Yelong Shen, Jianfeng Gao, Jingjing Liu, and Xiaodong Liu. 2018. Language-based image editing with recurrent attentive models. In CVPR. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In EMNLP. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In ACL. 
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. In CVPR. Abhishek Das, Satwik Kottur, Jos´e MF Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learning. In ICCV. Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In CVPR. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2017. Gated-attention readers for text comprehension. In ACL. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In CVPR. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP. Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. arXiv preprint arXiv:1809.08267. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. Draw: A recurrent neural network for image generation. In ICML. Dalu Guo, Chang Xu, and Dacheng Tao. 2019. Imagequestion-answer synergistic network for visual dialog. arXiv preprint arXiv:1902.09774. Xiaoxiao Guo, Hui Wu, Yu Cheng, Steven Rennie, and Rogerio Schmidt Feris. 2018. Dialog-based interactive image retrieval. In NIPS. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children’s books with explicit memory representations. In ICLR. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. Drew A Hudson and Christopher D Manning. 2018. Compositional attention networks for machine reasoning. In ICLR. Unnat Jain, Svetlana Lazebnik, and Alexander G Schwing. 2018. Two can play this game: visual dialog with discriminative question generation and answering. In CVPR. Gi-Cheon Kang, Jaeseo Lim, and Byoung-Tak Zhang. 2019. Dual attention networks for visual reference resolution in visual dialog. arXiv preprint arXiv:1902.09368. Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. 2017. Hadamard product for low-rank bilinear pooling. In ICLR. 6473 Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In ICLR. Satwik Kottur, Jos´e MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module networks. In ECCV. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV. Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang. 2018. Answerer in questioner’s mind for goaloriented visual dialogue. In NIPS. 
Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In EMNLP. Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Relation-aware graph attention network for visual question answering. arXiv preprint arXiv:1903.12314. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV. Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for machine reading comprehension. In ACL. Jiasen Lu, Anitha Kannan, Jianwei Yang, Devi Parikh, and Dhruv Batra. 2017. Best of both worlds: Transferring knowledge from discriminative learning to a generative visual dialog model. In NIPS. Daniela Massiceti, N Siddharth, Puneet K Dokania, and Philip HS Torr. 2018. Flipdial: A generative model for two-way visual dialogue. In CVPR. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. 2014. Recurrent models of visual attention. In NIPS. Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios P Spithourakis, and Lucy Vanderwende. 2017. Imagegrounded conversations: Multimodal context for natural question and response generation. arXiv preprint arXiv:1701.08251. Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. 2017. Dual attention networks for multimodal reasoning and matching. In CVPR. Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, and Ji-Rong Wen. 2018. Recursive visual attention in visual dialog. arXiv preprint arXiv:1812.02664. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. In EMNLP. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS. Idan Schwartz, Seunghak Yu, Tamir Hazan, and Alexander Schwing. 2019. Factor graph attention. arXiv preprint arXiv:1904.05880. Paul Hongsuck Seo, Andreas Lehrmann, Bohyung Han, and Leonid Sigal. 2017. Visual reference resolution using attention memory for visual dialog. In NIPS. Ravi Shekhar, Tim Baumgartner, Aashish Venkatesh, Elia Bruni, Raffaella Bernardi, and Raquel Fernandez. 2018. Ask no more: Deciding when to guess in referential visual dialogue. In COLING. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In KDD. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In NIPS. Alessandro Sordoni, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR. Florian Strub, Harm De Vries, Jeremie Mary, Bilal Piot, Aaron Courville, and Olivier Pietquin. 2017. End-to-end optimization of goal-driven and visually grounded dialogue systems. In IJCAI. Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Philippe Preux, Aaron Courville, Olivier Pietquin, et al. 2018. 
Visual reasoning with multihop feature modulation. In ECCV. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS. 6474 Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2018. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In CVPR. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In CVPR. Qi Wu, Peng Wang, Chunhua Shen, Ian Reid, and Anton van den Hengel. 2018. Are you talking to me? reasoned visual dialog generation through adversarial learning. In CVPR. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML. Tianhao Yang, Zheng-Jun Zha, and Hanwang Zhang. 2019. Making history matter: Gold-critic sequence training for visual dialog. arXiv preprint arXiv:1902.09326. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR. Adams Wei Yu, Hongrae Lee, and Quoc V Le. 2017a. Learning to skim text. arXiv preprint arXiv:1704.06877. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017b. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI. Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017c. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In ICCV. Heming Zhang, Shalini Ghosh, Larry Heck, Stephen Walsh, Junting Zhang, Jie Zhang, and C-C Jay Kuo. 2019. Generative visual dialogue system via adaptive reasoning and weighted likelihood estimation. arXiv preprint arXiv:1902.09818. Junjie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, and Anton Van Den Hengel. 2018. Goaloriented visual question generation via intermediate rewards. In ECCV. Zilong Zheng, Wenguan Wang, Siyuan Qi, and SongChun Zhu. 2019. Reasoning visual dialogs with structural and partial observations. arXiv preprint arXiv:1904.05548.
2019
648
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6475–6484 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 6475 Lattice Transformer for Speech Translation Pei Zhang∗, Boxing Chen∗, Niyu Ge∗, Kai Fan∗† Alibaba Group Inc. {xiaoyi.zp,boxing.cbx,niyu.ge,k.fan}@alibaba-inc.com Abstract Recent advances in sequence modeling have highlighted the strengths of the transformer architecture, especially in achieving state-of-theart machine translation results. However, depending on the up-stream systems, e.g., speech recognition, or word segmentation, the input to translation system can vary greatly. The goal of this work is to extend the attention mechanism of the transformer to naturally consume the lattice in addition to the traditional sequential input. We first propose a general lattice transformer for speech translation where the input is the output of the automatic speech recognition (ASR) which contains multiple paths and posterior scores. To leverage the extra information from the lattice structure, we develop a novel controllable lattice attention mechanism to obtain latent representations. On the LDC SpanishEnglish speech translation corpus, our experiments show that lattice transformer generalizes significantly better and outperforms both a transformer baseline and a lattice LSTM. Additionally, we validate our approach on the WMT 2017 Chinese-English translation task with lattice inputs from different BPE segmentations. In this task, we also observe the improvements over strong baselines. 1 Introduction Transformer based encoder-decoder framework (Vaswani et al., 2017) for Neural Machine Translation (NMT) has currently become the state-ofthe-art in many translation tasks, significantly improving translation quality in text (Bojar et al., 2018; Fan et al., 2018) as well as in speech (Jan et al., 2018). Most NMT systems fall into the category of Sequence-to-Sequence (Seq2Seq) model (Sutskever et al., 2014), because both the input and ∗indicates equal contribution. †corresponding author. 0: <s> 1: iban 2: ivan 6: esquinas 5: así 4: esquinas 3: espinas 7: así 8:entonces 9: </s> 0.87 0.87 1 0.13 0.13 1 0.13 0.11 1 0.87 0.76 0.87 1 0.11 0.11 1 0.13 0.13 1 0.89 0.89 1 1 1 1 1 1 1 1 1 x0 x1 x2 x6 x5 x4 x3 x7 x8 x9 0: <s> 1: iban 2: esquinas 3: así 4:entonces 5: </s> x0 x1 x2 x3 x4 x5 Attention in Standard Transformer Encoder Attention in Lattice Transformer Encoder Figure 1: Illustration of our proposed attention mechanism (best viewed in color). Our attention depends on the tokens of common paths and forward (blue) / marginal (grey) / backward (orange) probability scores. output consist of sequential tokens. Therefore, in most neural speech translation, such as that of (Bojar et al., 2018), the input to the translation system is usually the 1-best hypothesis from the ASR instead of the word lattice output with its corresponding probability scores. How to consume word lattice rather than sequential input has been substantially researched in several natural language processing (NLP) tasks, such as language modeling (Buckman and Neubig, 2018), Chinese Named Entity Recognition (NER) (Zhang and Yang, 2018), and NMT (Su et al., 2017). Additionally, some pioneering works (Adams et al., 2016; Sperber et al., 2017; Osamura et al., 2018) demonstrated the potential improvements in speech translation by leveraging the additional information and uncertainty of the packed lattice structure produced by ASR acoustic model. 
Efforts have since continued to push the boundaries of long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) models. More precisely, most previous works are in line with the existing method Tree-LSTMs (Tai et al., 6476 2015), adapting to task-specific variant LatticeLSTMs that can successfully handle lattices and robustly establish better performance than the original models. However, the inherently sequential nature still remains in Lattice-LSTMs due to the topological representation of the lattice graph, precluding long-path dependencies (Khandelwal et al., 2018) and parallelization within training examples that are the fundamental constraint of LSTMs. In this work, we introduce a generalization of the standard transformer architecture to accept lattice-structured network topologies. The standard transformer is a transduction model relying entirely on attention modules to compute latent representations, e.g., the self-attention requires to calculate the intra-attention of every two tokens for each sequence example. Latest works such as (Yu et al., 2018; Devlin et al., 2018; Lample et al., 2018; Su et al., 2018) empirically find that transformer can outperform LSTMs by a large margin, and the success is mainly attributed to selfattention. In our lattice transformer, we propose a lattice relative positional attention mechanism that can incorporate the probability scores of ASR word lattices. The major difference with the selfattention in transformer encoder is illustrated in Figure 1. We first borrow the idea from the relative positional embedding (Shaw et al., 2018) to maximally encode the information of the lattice graph into its corresponding relative positional matrix. This design essentially does not allow a token to pay attention to any token that has not appeared in a shared path. Secondly, the attention weights depend not only on the query and key representations in the standard attention module, but also on the marginal / forward / backward probability scores (Rabiner, 1989; Post et al., 2013) derived from the upstream systems (such as ASR). Instead of 1-best hypothesis alone (though it is based on forward scores), the additional probability scores have rich information about the distribution of each path (Sperber et al., 2017). It is in principle possible to use them, for example in attention weights reweighing, to increase the uncertainty of the attention for other alternative tokens. Our lattice attention is controllable and flexible enough for the utilization of each score. The lattice transformer can readily consume the lattice input alone if the scores are unavailable. A common application is found in the Chinese NER task, in which a Chinese sentence could possibly have multiple word segmentation possibilities (Zhang and Yang, 2018). Furthermore, different BPE operations (Sennrich et al., 2016) or probabilistic subwords (Kudo, 2018) can also bring similar uncertainty to subword candidates and form a compact lattice structure. In summary, this paper makes the following main contributions. i) To our best knowledge, we are the first to propose a novel attention mechanism that consumes a word lattice and the probability scores from the ASR system. ii) The proposed approach is naturally applied to both the encoder self-attention and encoder-decoder attention. iii) Another appealing feature is that the lattice transformer can be reduced to standard latticeto-sequence model without probability scores, fitting the text translation task. 
iv) Extensive experiments on speech translation datasets demonstrate that our method outperforms the previous transformer and Lattice-LSTMs. The experiment on the WMT 2017 Chinese-English translation task shows that the reduced model can improve many strong baselines such as the transformer.
2 Background
We first briefly describe the standard transformer that our model is built upon, and then elaborate on our proposed approach in the next section.
2.1 Transformer
The Transformer follows the typical encoder-decoder architecture using stacked self-attention, point-wise fully connected layers, and encoder-decoder attention layers. Each layer is in principle wrapped by a residual connection (He et al., 2016) and a post-processing layer normalization (Ba et al., 2016). Although in principle it is not necessary to mask the self-attention in the encoder, in practical implementations the padding positions must be masked. Self-attention in the decoder, however, only allows positions up to the current one to be attended to, preventing information flow from the left and preserving the auto-regressive property. The illegal connections are masked out by setting their logits to $-10^9$ before the softmax operation.
2.2 Dot-product Attention
Suppose that for each attention layer in the transformer encoder and decoder we have two input sequences represented as two matrices $X \in \mathbb{R}^{n \times d}$ and $Y \in \mathbb{R}^{m \times d}$, where $n, m$ are the lengths of the source and target sentences respectively, and $d$ is the hidden size (usually equal to the embedding size). The output is $h$ new sequences $Z_i \in \mathbb{R}^{n \times d/h}$ or $\mathbb{R}^{m \times d/h}$, where $h$ is the number of attention heads. In general, the result of multi-head attention is calculated according to the following procedure:
$$Q = XW^Q \;\text{or}\; YW^Q \;\text{or}\; YW^Q \quad (1)$$
$$K = XW^K \;\text{or}\; YW^K \;\text{or}\; XW^K \quad (2)$$
$$V = XW^V \;\text{or}\; YW^V \;\text{or}\; XW^V \quad (3)$$
$$Z_i = \mathrm{Softmax}\!\left(\frac{QK^\top}{\sqrt{d/h}} + \mathbb{I}_d M\right) V \quad (4)$$
$$Z = \mathrm{Concat}(Z_1, \ldots, Z_h)\, W^O \quad (5)$$
where the matrices $W^Q, W^K, W^V \in \mathbb{R}^{d \times d/h}$ and $W^O \in \mathbb{R}^{d \times d}$ are the learnable projection parameters, and the masking matrix $M \in \mathbb{R}^{m \times m}$ is an upper triangular matrix with zero on the diagonal and a large negative value ($-10^9$) everywhere else. Note that i) the three alternatives on the right-hand side of Eqs. (1, 2, 3) are used to compute the encoder self-attention, the decoder self-attention, and the encoder-decoder attention, respectively; ii) $\mathbb{I}_d$ is the indicator function that returns 1 when computing decoder self-attention and 0 otherwise; iii) the projection parameters are unique per layer and head; iv) the Softmax in Eq. (4) is a row-wise matrix operation, computing the attention weights by scaled dot product and yielding a point on the simplex $\Delta^n$ for each row.
3 Lattice Transformer
As motivated in the introduction, our goal is to enhance the standard transformer architecture, which is limited to sequential inputs, so that it can consume lattice inputs with additional information from upstream ASR systems.
3.1 Lattice Representation
Without loss of generality, we assume a word lattice from an ASR system to be a directed, connected and acyclic graph following a topological ordering, such that a child node comes after its parent nodes.
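To make this representation concrete, the sketch below builds the relative positional matrix that is defined formally in Eq. (6) and illustrated in Figure 2 just after: for two nodes on a common path the entry is the minimum difference of their distances to the source node, and node pairs that never share a path receive a large negative sentinel. For clarity the code enumerates all source-to-sink paths (a real implementation would use a dynamic program over the DAG); all names are our own.

NEG_INF = -1e9  # sentinel standing in for -infinity, used later for masking

def lattice_position_matrix(children, n):
    """children[i]: list of successor node ids; node 0 is <s>, the single source,
    and the end node </s> has no successors.
    Returns the n x n relative position matrix L described in Section 3.1."""
    L = [[None] * n for _ in range(n)]

    def visit(node, path):
        path = path + [node]
        if not children[node]:                         # reached the end node </s>
            dist = {v: k for k, v in enumerate(path)}  # distance to the source on this path
            for i in path:
                for j in path:
                    d = dist[i] - dist[j]
                    if L[i][j] is None or d < L[i][j]:
                        L[i][j] = d                    # keep the minimum over common paths
            return
        for child in children[node]:
            visit(child, path)

    visit(0, [])
    return [[NEG_INF if v is None else v for v in row] for row in L]

Running this on the lattice of Figure 1 reproduces the kind of matrix sketched in Figure 2, with entries equal to 1 marking direct parent-child links.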
[Figure 2 here: a 10 × 10 relative position matrix over the nodes x0–x9 of the Figure 1 lattice; matrix values omitted.]
Figure 2: An example of the lattice relative position matrix, where “-inf” in the matrix is a special number denoting that no relative position exists between the corresponding two tokens.
We add two special tokens to each path of the lattice, representing the start of sentence and the end of sentence (e.g., Figure 1), so that the graph has a single source node and a single end node, and each node is assigned a token. Given the definition and property described above, we propose to use a relative positional lattice matrix $L \in \mathbb{N}^{n \times n}$ to encode the graph information, where $n$ is the number of nodes in the graph. For any two nodes $i, j$ in the lattice graph, the matrix entry $L_{ij}$ is the minimum relative distance between them. In other words, if the nodes $i, j$ share at least one path, then we have
$$L_{ij} = \min_{p \,\in\, \text{common paths for } i,j} \left( L^p_{i0} - L^p_{j0} \right), \quad (6)$$
where $L^p_{\cdot 0}$ is the distance to the source node in path $p$. If no common path exists for two nodes, we denote the relative distance as $-\infty$ ($-10^9$ in practice) for subsequent masking in the lattice attention. The reason for choosing the “min” in Eq. (6) is that in our dataset about 70% of the $L_{ij}$ entries computed by “min” and “max” are identical, and about 20% of the entries differ by just 1. Empirically, our experiments also show no significant difference in performance between the two choices. An illustration of the lattice matrix for the example in the introduction is shown in Figure 2. Since the lattice graph can be deterministically reconstructed from the matrix elements that are equal to 1, the matrix also encodes the relation between parent and child nodes.
3.2 Controllable Lattice Attention
Besides the lattice graph representation, posterior probability scores can be simultaneously produced by the acoustic model and language model in most ASR systems. We deliberately design a controllable lattice attention mechanism to incorporate such information so that the attention encodes more of this uncertainty. In general, we denote the posterior probability of a node $i$ as the forward score $f_i$, where the forward scores of its child nodes sum to 1. Following the recursion rule in (Rabiner, 1989), we can further derive two other useful probabilities, the marginal score $m_i = f_i \sum_{j \in \mathrm{Pa}(i)} m_j$ and the backward score $b_i = m_i / \sum_{k \in \mathrm{Ch}(i)} m_k$, where $\mathrm{Pa}(i)$ and $\mathrm{Ch}(i)$ denote node $i$'s predecessor and successor sets, respectively. Intuitively, the marginal score measures the global importance of the current token compared with its substitutes given all predecessors; the backward score is analogous to the forward score and is only locally associated with the importance of different parents to their children, where the scores of a node's parents sum to 1. Therefore, our controllable attention aims to employ marginal scores and forward / backward scores.
3.2.1 Lattice Embedding
We first construct the latent representations of the relative positional lattice matrix $L$. The matrix $L$ can be straightforwardly decomposed into two matrices: one is the mask $L^M$ with only 0 and $-\infty$ values, and the other is the matrix of regular values, i.e., $L^R = L - L^M$.
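Before turning to the embedding of $L^R$, note that the forward, marginal and backward scores introduced in Section 3.2 can be computed with a single pass over the topologically ordered nodes. The sketch below follows the recursions just given; the base case for the source node and the handling of the end node are our own assumptions, and all names are ours.

def lattice_scores(parents, children, forward):
    """parents[i] / children[i]: predecessor / successor id lists; forward[i] = f_i from ASR.
    Nodes are assumed to be numbered in topological order, with node 0 the source <s>.
    Returns (marginal, backward) following m_i = f_i * sum_{j in Pa(i)} m_j and
    b_i = m_i / sum_{k in Ch(i)} m_k."""
    n = len(forward)
    marginal = [0.0] * n
    marginal[0] = 1.0                                   # the source node lies on every path
    for i in range(1, n):
        marginal[i] = forward[i] * sum(marginal[j] for j in parents[i])
    backward = [0.0] * n
    for i in range(n):
        denom = sum(marginal[k] for k in children[i])
        backward[i] = marginal[i] / denom if denom > 0 else 1.0   # end node has no children
    return marginal, backward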
Given a 2D embedding matrix W L, the embedded vector of LR ij can be written as W L[LR ij, :] with the NumPy style indexing. In order to prevent the the lattice embedding from dynamically changing, we have to clip every entry of LR with a positive integer c1, such that W L ∈R(2c+1)×d/h has a fixed dimensionality and becomes learnable parameters. 3.2.2 Attention with Probability Scores Our proposed controllable lattice attention is depicted in the left panel of Figure 3. It shows the computational graph with detailed network modules. More concretely, we first denote the lattice embedding for LR as a 3D array E ∈Rn×n×d/h. Then, the attention weights adapted from traditional transformer are integrated with marginal 1clip(l, c) = max(−c, min(l, c)) scores that capture the distribution of each path in the lattice. The logits in Eq (4) will become the addition of three individual terms (if we temporarily omit the mask matrix), QK⊤+ einsum(’ik,ijk->ij’, Q, E) p d/h + wmm . (7) The original QK⊤will remain since the word embeddings have the majority of significant semantic information. The difficult part in Eq (7) is the new dot product term involving the lattice embedding by einsum2 operation, where einsum is a multi-dimensional linear algebraic array operation in Einstein summation convention. In our case, it tries to sum out the dimension of the hidden size, resulting in a new 2D array ∈Rn×n, which is further be scaled by 1 √ d/h as well. In addition, we aggregate the scaled marginal score vector m ∈Rn together to obtain the logits. With the new parameterization, each term has an intuitive meaning: term i) represents semantic information, term ii) governs the lattice-dependent positional relation, term iii) encodes the global uncertainty of the ASR output. The attention logits associated with the forward or backward scores are much different from marginal scores, since they govern the local information between the parent and child nodes. They are represented as a matrix rather than a vector, where the matrix has only non-zero values if nodes i, j have a parent-child relation in the lattice graph. First, an upper or lower triangular mask matrix is used to enforce every token’s attention to the forward scores of its successors or the backward scores of its predecessors. It seems counterintuitive but the reason is that the summation of the forward scores for each token’s child nodes is 1. So is the backward scores of each token’s parent nodes. Secondly, before applying the softmax operation, the lattice mask matrix LM is added to each logits to prevent attention from crossing paths. Eventually, the final attention vector used to multiply the value representation V is a weighed averaging of the three proposed attention vectors 2This op is available in NumPy, TensorFlow, or PyTorch. In our example, Q and E are 2D and 3D arrays, and the result of this op is a 2D array, with the element in ith row, jth column is P k QikEijk. 6479 MatMul Q K MatMul Q Lattice Embedding Add Lattice Mask SoftMax Marginal Scores Scale Add Forward / Backward Scores MatMul V Upper triangular Lower triangular Lattice Mask Lattice Mask SoftMax SoftMax !" 
!# !$ Scale Scale Input Embedding Lattice Word Inputs Output Embedding Outputs Lattice Matrix Lattice Embedding Controllable Lattice Masked Multi-Head Attention Scale marginal scores Add & Norm Feed Froward Add & Norm F/B scores Split & Mask Multi-Head Attention Masked Multi-Head Attention Add & Norm Positional Encoding Add & Norm Feed Froward Add & Norm Linear Softmax Output Probabilities Scale N x N x Figure 3: Left panel: the controllable lattice attention, where sm, sf, sb are learnable scalars and sm +sf +sb = 1. Right panel: the overall model architecture of lattice transformer. A· with different probability scores s·, Afinal =smAm + sfAf + sbAb, (8) s.t. sm + sf + sb = 1 . In summary, the overall architecture of lattice transformer is illustrated in the right of Figure 3. 3.2.3 Discussion A critical point for the lattice transformer is whether the model can generalize to other common lattice-based inputs. More specifically, how does the model apply to the lattice input without probability scores? And to what extent can we train the lattice model on a regular sequential input? If probability scores are unavailable, we can use the lattice graph representations alone by setting the scalar wm = 0 in Eq (7) and sf = sb = 0, sm = 1 in Eq (8) as non-trainable constants. We validate this viewpoint on the Chinese-English translation task, where the Chinese input is a pure lattice structure derived from different tokenizations. As to sequential inputs, it is just a special case of the lattice graph with only one path. An interesting point to mention is that our encoder-decoder attention also takes the key and value representations from the lattice input and aggregates the marginal scores, though the sequential target forbids us to use lattice self-attention in the decoder. However, we can still visualize how the sequential target attends to the lattice input. A practical point for the lattice transformer is whether the training or inference time for such a seemingly complicated architecture is acceptable. In our implementation, we first preprocess the lattice input to obtain the position matrix for the whole dataset, thus the one-time preprocessing will bring almost no over-head to our training and inference. In addition, the extra enisum operation in controllable lattice attention is the most time-consuming computation, but remaining the same computational complexity as QK⊤. Empirically, in the ASR experiments, we found that the training and inference of the most complicated lattice transformer (last row in the ablation study) take about 100% and 40% more time than standard transformer; in the text translation task, our algorithm takes about 30% and 20% more time during training and inference. 4 Experiments We mainly validate our model in two scenarios, speech translation with word lattices and posterior scores, and Chinese to English text translation with different BPE operations on the source side. 4.1 Speech Translation For the speech translation experiment, we use the Fisher and Callhome Spanish-English Speech Translation Corpus from LDC (Post et al., 2013), which is produced from telephone conversations. Our baseline models are the vanilla Transformer with relative positional embeddings (Vaswani et al., 2017; Shaw et al., 2018), and LatticeLSTMs (Sperber et al., 2017). 4.1.1 Datasets The Fisher corpus includes the contents between strangers, while the Callhome corpus is primarily 6480 between friends and family members. 
The numbers of sentence pairs of the two datasets are respectively 138,819 and 15,080. The source side Spanish corpus consists of four data types: reference (human transcripts), oracle of ASR lattices (the optimal path with the lowest word error rate (WER)), ASR 1-best hypothesis, and ASR lattice. For the data processing, we make caseinsensitive tokenization with the standard moses3 tokenizer for both the source and target transcripts, and remove the punctuation in source sides. The sentences of the other three types have been already been lowercased and punctuation-removed. To keep consistent with the lattices, we add a token “<s>” at the beginning for all cases. Setting Description R baseline, trained with human transcripts only R+1 fine-tuned on 1-best hypothesis R+L fine-tuned on lattices without probability scores R+L+S fine-tuned on lattices with probability scores Table 1: 4 systems for comparison 4.1.2 Training and Cross-Evaluation 4 systems in Table 1 are trained for both LatticeLSTMs and Lattice Transformer. For fair and comprehensive comparison, we also evaluate all algorithms on the inputs of four types. We initially train the baseline of our lattice transformer with the human transcripts on Fisher/Train data alone, which is equivalent to the modified transformer (Shaw et al., 2018). Then we fine-tune the pre-trained model with 1-best hypothesis or word lattices (and probability scores) for either Fisher or Callhome dataset. The source and target vocabularies are built respectively from the transcripts of Fisher/Train and Callhome/Train corpus, with vocabulary sizes 32000 and 20391. The hyper-parameters of our model are the same as Transformer-base with 512 hidden size, 6 attention layers, 8 attention heads and beam size 4. We use the same optimization strategy as (Vaswani et al., 2017) for pre-training with 4 GPU cards, and apply SGD with constant learning rate 0.15 for finetuning. We select the best performed model based on Fisher/Dev or Callhome/Dev, and test on Fisher/Dev2, Fisher/Test or Callhome/Test. To better analyze the performance of our approach, we use an intensive cross-evaluation 3https://github.com/moses-smt/mosesdecoder method, i.e., we feed 4 possible inputs to test different models. The cross-evaluation results are put into several 4 × 4 blocks in Table 2 and 3. As the aforementioned discussion, if the input is not ASR lattice, the evaluation on the model R+L+S needs to set wm = sf = sb = 0, sm = 1. If the input is an ASR lattice but fed into the other three models, the probability scores are in fact discarded. 4.1.3 Results on Fisher and Callhome We mainly compare our architecture with the previous Lattice-LSTMs (Sperber et al., 2017) and the transformer (Shaw et al., 2018) in Table 2. Since the transformer itself is a powerful architecture for sequence modeling, the BLEU scores of the baseline (R) have significant improvement on test sets. In addition, fine-tuning without scores hasn’t outperformed the 1-best hypothesis finetuning, but has about 0.5 BLEU improvement on oracle and transcript inputs. We suspect this may be due to the high ASR WER and if the ASR system has a lower WER, the lattice without score fine-tuning may get a better translation. We will leave this as a future research direction on other datasets from better ASR systems. For now, we just validate this argument in the BPE lattice experiments, and detailed discussion sees next section. 
As to fine-tuning with both lattices and probability scores, it increases the BLEU with a relatively large margin of 0.9/1.0/0.7 on Fisher Dev/Dev2/Test sets. Besides, for ASR 1-best inputs, it is still comparable with the R+1 systems, while for oracle and transcript inputs, there are about 0.5-0.9 BLEU score improvements. The results of Callhome dataset are all finetuned from the pre-trained model based on Fisher/Train corpus, since the data size of Callhome is too small to train a large deep learning model. This is the reason why we adopt the strategy for domain adaption. We use the same method for model selection and test. The detailed results in Table 3 show the consistent performance improvement. 4.1.4 Inference Analysis On the test datasets of Fisher and Callhome, we make an inference for predicting the translations, and some examples are shown in Table 4. We also visualize the alignment for both encoder selfattention and encoder-decoder attention for the input and predicted translation. Two examples are illustrated in Figure 4 and 5. As expected, the to6481 Architecture Inference Inputs Fisher dev Fisher dev2 Fisher test R R+1 R+L R+L+S R R+1 R+L R+L+S R R+1 R+L R+L+S Lattice LSTM reference 53.9 53.8 53.7 54 52.2 51.8 52.2 52.7 oracle 44.9 45.6 45.2 45.2 44.4 44.6 44.6 44.8 ASR 1-best 35.8 37.1 36.2 36.2 35.9 36.6 36.2 36.4 ASR Lattice 25.9 25.8 36.9 38.5 26.2 25.8 36.1 38 Lattice Transformer reference 57.1 55.0 55.5 55.5 58.0 56.1 56.4 56.6 56.0 53.7 54.1 54.2 oracle 46.3 46.2 46.8 46.7 47.1 47.0 47.5 47.9 46.8 46.4 46.9 46.9 ASR 1-best 36.5 37.4 37.6 37.4 37.4 38.4 38.3 38.6 37.7 38.5 38.2 38.4 ASR Lattice 32.9 33.8 37.7 38.3 33.4 34.0 38.6 39.4 33.5 33.7 37.9 39.2 Table 2: Cross-Evaluation of BLEU on Fisher. Note that for the lattice transformer architecture with R or R+1 setting, the resulted model is equivalent to a standard transformer with relative positional embeddings. The evaluation of oracle inputs is similar to ASR 1-best, but it can indicate an upper bound of the performance. The evaluation results of Lattice LSTM on Fisher dev are not reported in (Sperber et al., 2017). Architecture Inference Inputs Callhome devtest R R+1 R+L R+L+S Lattice Transformer reference 28.3 29.6 30.0 30.4 oracle 17.7 19.7 19.5 19.6 ASR 1-best 13.4 15.2 14.8 15.1 ASR Lattice 13.4 13.4 15.6 15.7 Callhome evltest R R+1 R+L R+L+S Lattice LSTM reference 24.7 24.3 24.8 24.4 oracle 15.8 16.8 16.3 15.9 ASR 1-best 11.8 13.3 12.4 12.0 ASR Lattice 9.3 7.1 13.7 14.1 Lattice Transformer reference 27.1 28.6 28.9 29.1 oracle 16.5 18.1 17.7 18.0 ASR 1-best 12.7 14.5 13.6 14.1 ASR Lattice 12.7 13.0 14.2 14.9 Table 3: Cross-Evaluation of BLEU on Callhome. kens from different paths will not attend to each other, e.g., “pero” and “perd´on” in Figure 4 or “hoy” and “y” in Figure 5. In Figure 4, we observe that the 1-best hypothesis can even result in erroneous translation “sorry, sorry”, which is supposed to be “but in peru”. In Figure 5, the translation from 1-best hypothesis obviously misses the important information “i heard it”. We primarily attribute such errors to the insufficient information within 1-best hypothesis, but if the lattice transformer is appropriately trained, the translations from lattice inputs can possibly correct them. Due to limited space, more visualization examples can be found in supplementary material. 4.1.5 Model Ablation Study We conduct an ablation study to examine the effectiveness of every module in the lattice transformer. 
We gradually add one module from a standard transformer model to our most complicated lattice transformer. From the results in Table 5, we can see that the application of marginal scores in encoder or decoder has the most influential impact on the lattice fine-tuning. Furthermore, the superimposed application of marginal scores in both encoder and decoder can gain an additional promotion, compared to individual applications. However, the use of forward and backward scores has no significantly extra rewards in this situation. Perhaps due to overfitting, the most complicated lattice transformer on the Callhome of smaller data size cannot achieve better BLEUs than simpler models. 4.2 Chinese English Text Translation In this experiment, we demonstrate the performance of our lattice transformer when the probability scores are unavailable. The comparison baseline method is the vanilla transformer (Vaswani et al., 2017) in both base and big settings. 4.2.1 Datasets and Settings The Chinese to English parallel corpus for WMT 2017 news task contains about 20 million sentences after deduplication. For Chinese word segmentation, we use Jieba4 as the baseline (Zhang et al., 2018; Hassan et al., 2018), while the English sentences are tokenized by moses tokenizer. Some data filtering tricks have been applied, such as the ratio within [1/3, 3] of lengths between source and target sentence pairs and the count of tokens in both sides (≤200). Then for the Chinese source corpus, we learn the BPE tokenization with 16K / 32K / 48K operations, while for the English target corpus, we only learn the BPE tokenization with 32K operations. In this way, each Chinese input can be represented as three different tokenized results, thus being ready to construct a word lattice. The hyper-parameters of our model are the 4https://github.com/fxsjy/jieba 6482 src transcript qu´e tal , eh , yo soy guillermo , ¿ c´omo est´as ? porque como esto tiene que ir avanzando ¿ no ? pues , ¿ y llevas muchos a˜nos aqu`ı en atlanta ? quererlo y tener fe . tgt reference how are you , eh i ’m guillermo , how are you ? because like this has to be moving forward , no ? well . and you ’ve been many years here in atlanta ? to love him and have faith . ASR 1-best quedar eh yo soy guillermo c´omo est´as porque como esto tiene que ir avanzando no pas lleva muchos aos aqu en atlanta quieren lo y tener fe mt from R+1 stay . eh , i ’m guillermo . how are you ? why do you have to move forward or not ? country has been many years here in atlanta they want to have faith ASR lattice quedar que qu´e eh yo soy dar eh yo tal eh yo soy guillermo cmo comprar con como est´a est´as porque como esto tiene que ir avanzando no pa´ıs well lleva lleva muchos a˜nos aqu´ı en atlanta quieren quererlo lo y tener tenerse fe y tener tenerse fe mt from R+L+S how are you ? i ’m guillermo . how are you ? because since this has to move forward , right ? well , you ’ve been here many years in atlanta loving him and having faith Table 4: Translation examples on test sets. Note that the presented ASR lattice does not include lattice information. Figure 4: Visualization of Lattice Transformer encoder self-attention and encoder-decoder attention for inference. Top panel: ASR 1-best. Bottom panel: ASR lattice. Target reference: “But in Peru, I’ve heard there are parts where it really gets cold.” same as the setting with the speech translation in previous experiments. We follow the optimization convention in (Vaswani et al., 2017) to use ADAM optimizer with Noam invert squared decay. 
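The Noam schedule referenced above (Vaswani et al., 2017) warms the learning rate up linearly and then decays it with the inverse square root of the step count. A minimal sketch follows; the warm-up value and the optimizer usage are illustrative defaults from Vaswani et al. (2017), not necessarily the exact settings used here.

def noam_learning_rate(step, d_model=512, warmup_steps=4000):
    """Learning-rate multiplier of Vaswani et al. (2017):
    lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)."""
    step = max(step, 1)
    return (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)

# Used together with Adam, e.g. (hypothetical training loop):
# optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.98), eps=1e-9)
# for group in optimizer.param_groups:
#     group["lr"] = noam_learning_rate(step)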
All of our lattice transformers are trained on 4 P-100 GPU cards. Similar to our comparison method, detokenized cased-sensitive BLEU is reported in our experiment. 4.2.2 Results For our lattice transformer, we have three models trained for comparison. First we use the 32K BPE Chinese corpus alone to train our lattice transformer, which is equivalent to the standard transFigure 5: Left panel: ASR 1-best. Right panel: ASR lattice. Target reference: “Yes, yes, I heard it.” Model Fisher dev2 Fisher test Callhome evltest LSTM (1-best input) 37.1 36.6 13.3 Lattice LSTM (lattice input) 36.9 36.1 13.7 +lattice prob scores 38.5 38 14.1 Transformer (1-best input) 38.4 38.5 14.5 Lattice Transformer (lattice input) 38.6 37.9 14.2 + marginal scores in decoder 39.0 38.7 14.4 + marginal scores in encoder 38.8 38.2 14.7 + marginal scores in encoder and decoder 39.5 39.0 14.8 + marginal scores in encoder and decoder, and forward / backward scores only in encoder self-attention layer 0 and layer 1 39.6 39.1 14.9 + marginal scores in encoder and decoder, and forward / backward scores in all encoder self-attention layers 39.4 39.2 14.7 Table 5: Ablation Experiment BLEU Results. The rows of the Lattice LSTM and the Lattice Transformer represent the 1-best hypothesis fine-tuning, and the BLEUs are evaluated on 1-best inputs and on lattice inputs for the others. The colored BLEU values come from Table 2 and 3. former with relative positional embeddings. Secondly, we train another lattice transformer with the word lattice corpus from scratch. In addition, we follow the convention of the speech translation task in previous experiments by fine-tuning the first model with word lattice corpus. For each setting, the model evaluated on test 2017 dataset is selected from the best model performed on the dev2017 data. The fine-tuning of Lattice Model 3 starts from the best checkpoint of Lattice Model 1. The BLEU evaluation is shown in Table 6, and two examples of attention visualization are shown in Figure 6. Notice that the first two results of transformer-base and -big are directly copied from the relevant references. From the result, we can 6483 Figure 6: Attention visualization for Chinese English translation task. see that our Model 1 can be comparable with the vanilla transformer-big model in a base setting, and significantly better than the transformer-base model. We also validate the argument that training from scratch can also achieve a better result than most baselines. Empirically, we find an interesting phenomena that training from scratch converges faster than other settings. 5 Conclusions In this paper, we propose a novel lattice transformer architecture with a controllable lattice attention mechanism that can consume a word lattice and probability scores from the ASR system. The proposed approach is naturally applied to both Architecture Inference Inputs test2017 Transformer (Zhang et al., 2018) BPE 32K 23.01 Transformer-big (Hassan et al., 2018) BPE 32K 24.20 1. Transformer with BPE 32K BPE 32K 24.26 2. Lattice Transformer from scratch lattice 24.71 3. Lattice Transformer with fine-tuning lattice 24.81 Table 6: BLEU on WMT 2017 Chinese English the encoder self-attention and encoder-decoder attention. We mainly validate our lattice transformer on speech translation task, and additionally demonstrate its generalization to text translation on the WMT 2017 Chinese-English translation task. 
In general, the lattice transformer can increase the metric BLEU for translation tasks by a significant margin over many baselines. Acknowledgments We thank Nguyen Bach to provide the script for attention visualization. References Oliver Adams, Graham Neubig, Trevor Cohn, and Steven Bird. 2016. Learning a translation model from word lattices. In Interspeech. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303. Jacob Buckman and Graham Neubig. 2018. Neural lattice language models. Transactions of the Association for Computational Linguistics, 6:529–541. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Kai Fan, Bo Li, Fengming Zhou, and Jiayi Wang. 2018. ” bilingual expert” can find translation errors. arXiv preprint arXiv:1807.09433. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567. 6484 Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Niehues Jan, Roldano Cattoni, St¨uker Sebastian, Mauro Cettolo, Marco Turchi, and Marcello Federico. 2018. The iwslt 2018 evaluation campaign. In International Workshop on Spoken Language Translation, pages 2–6. Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. arXiv preprint arXiv:1805.04623. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 66–75. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Kaho Osamura, Takatomo Kano, Sakriani Sakti, Katsuhito Sudoh, and Satoshi Nakamura. 2018. Using spoken word posterior features in neural machine translation. architecture, 21:22. Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khudanpur. 2013. Improved speech-to-text translation with the fisher and callhome spanish–english speech translation corpus. In International Workshop on Spoken Language Translation. Lawrence R Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 286. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. 
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 464–468. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural lattice-to-sequence models for uncertain inputs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1380–1389. Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Lattice-based recurrent neural network encoders for neural machine translation. In Thirty-First AAAI Conference on Artificial Intelligence. Yuanhang Su, Kai Fan, Nguyen Bach, C-C Jay Kuo, and Fei Huang. 2018. Unsupervised multimodal neural machine translation. arXiv preprint arXiv:1811.11365. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1556–1566. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1554– 1564. Zhirui Zhang, Shuangzhi Wu, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Regularizing neural machine translation by target-bidirectional agreement. arXiv preprint arXiv:1808.04064.
2019
649
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 666–672 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 666 Implicit Discourse Relation Identification for Open-domain Dialogues Mingyu Derek Ma1, Kevin K. Bowden2, Jiaqi Wu2, Wen Cui2 and Marilyn Walker2 1Human-Computer Communications Laboratory & Stanley Ho Big Data Decision Analytics Research Centre The Chinese University of Hong Kong [email protected] 2Natural Language and Dialogue Systems Lab University of California, Santa Cruz {kkbowden, jwu64, wcui7, mawalker}@ucsc.edu Abstract Discourse relation identification has been an active area of research for many years, and the challenge of identifying implicit relations remains largely an unsolved task, especially in the context of an open-domain dialogue system. Previous work primarily relies on a corpora of formal text which is inherently nondialogic, i.e., news and journals. This data however is not suitable to handle the nuances of informal dialogue nor is it capable of navigating the plethora of valid topics present in open-domain dialogue. In this paper, we designed a novel discourse relation identification pipeline specifically tuned for opendomain dialogue systems. We firstly propose a method to automatically extract the implicit discourse relation argument pairs and labels from a dataset of dialogic turns, resulting in a novel corpus of discourse relation pairs; the first of its kind to attempt to identify the discourse relations connecting the dialogic turns in open-domain discourse. Moreover, we have taken the first steps to leverage the dialogue features unique to our task to further improve the identification of such relations by performing feature ablation and incorporating dialogue features to enhance the state-of-the-art model. 1 Introduction Discourse analysis considering relations between clauses has received increasing attention from the field, and implicit discourse relation identification is one of the most challenging problems in discourse parsing since it is purely based on textual features. Previous work has defined four widely accepted major classes of discourse relation “Comparison”, “Expansion”, “Contingency” and “Temporal” (Miltsakaki et al., 2008; Prasad et al., 2008). These four relations can either be explicitly or implicitly realized. When explicitly realized, there are often clear connective words between clauses which result in an associated discourse relation, while implicit realizations are often much harder to detect. For example, people can imply there is a “Comparison” relation between the following two sentences by understanding the meaning. Without clear keywords like “but” however, it is hard for machines to recognize such implicit relations. Arg 1: it’s a great album. Arg 2: it’s probably not their best. Since the development of the Penn Discourse Treebank (PDTB)1, discourse relation identification has been treated as a supervised learning problem. For explicit discourse relation pairs, simple classification methods based on connective cues achieve more than 90% accuracy (Pitler et al., 2008). For implicit discourse relations however, where there is no discourse clue, relations needs to be inferred on the basis of textual features, making this a challenging problem in discourse parsing (Li and Nenkova, 2014; Lin et al., 2009). 
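To make the contrast concrete, explicit relations can largely be read off a small table of connective cues, which is exactly what is missing in the implicit case. The toy sketch below is illustrative only; the connective lists are a subset of those used later in Section 3.

# Toy connective-cue lookup for *explicit* relations; implicit pairs have no such cue.
CONNECTIVE_CUES = {
    "but": "Comparison", "however": "Comparison", "although": "Comparison",
    "because": "Contingency", "so": "Contingency", "therefore": "Contingency",
    "also": "Expansion", "for example": "Expansion", "in addition": "Expansion",
    "then": "Temporal", "after": "Temporal", "before": "Temporal",
}

def classify_explicit(connective):
    """Return the level-1 relation signalled by an explicit connective, if any."""
    return CONNECTIVE_CUES.get(connective.lower())

# classify_explicit("but") -> "Comparison"
# For the implicit example above ("it's a great album." / "it's probably not their
# best.") there is no connective to look up, so the relation must be inferred from
# the arguments themselves.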
While previous work has suggested that discourse relations may hold between dialogue turns, this idea is relatively unexplored (Stent, 2000; Tonelli et al., 2010). We posit that discourse relation identification could have wide application in dialogue systems, by cultivating a more aware state space in order to improve the continuity between an extended sequence of turns. The detected discourse relation could additionally serve as a query or ranking parameter for possible next turns, retrieved from a database of content, or generated by natural language generation. Adding this additional natural language understanding component might be especially useful when navigating open-domain dialogue where user input is unpredictable and the model must be topic-robust. 1More details about Penn Discourse Treebank can be found at https://www.seas.upenn.edu/˜pdtb/ 667 There are many fundamental challenges with identifying and utilizing discourse relations in an open-domain dialogue system. All existing datasets for discourse relation identification are based on monologic text such as news; these datasets are unlikely to provide good training material for dialogue. Moreover there is no previous work investigating the feasibility of applying a machine learning model developed on formal text to dialogic content, where turns in are normally short, informal text. Thus, the lack of labeled dialogue data for implicit discourse relation pairs in open-domain dialogue is the first challenge that must be addressed. To tackle these two problems and utilize the unexplored benefits of features unique to dialogue systems, we carry out two steps. First, we construct a discourse relation pair dataset from a large corpus of open-domain dialogue, which to our knowledge is the first of its kind. Second, we investigated a feature-based model with different dialogue feature combinations and enhanced a deep learning model by incorporating dialogue features that utilize aspects unique to dialogue. The dataset and related code are publicly available.2 2 Related Work The release of the Penn Discourse Treebank (PDTB) (Prasad et al., 2008) makes research on machine learning based implicit discourse relation recognition possible. Most previous work is based on linguistic and semantic features such as word pairs and brown cluster pair representation (Pitler et al., 2008; Lin et al., 2009) or rule-based systems (Wellner et al., 2006). Recent work has proposed neural network based models with attention or advanced representations, such as CNN (Qin et al., 2016), attention on neural tensor network (Guo et al., 2018), and memory networks (Jia et al., 2018). Advanced representations may help to achieve higher performance (Bai and Zhao, 2018). Some methods also consider context paragraphs and inter-paragraph dependency (Dai and Huang, 2018). To utilize machine learning models for this task, larger datasets would provide a bigger optimization space (Li and Nenkova, 2014). Marcu and Echihabi (2002) is the first work to generate artificial samples to extend the dataset by using rules to 2https://github.com/derekmma/ dialogue-discourse-relation convert explicit discourse relation pairs into implicit pairs by dropping the connectives. This work is further extended by methods for selecting high-quality samples (Rutherford and Xue, 2015; Xu et al., 2018; Braud and Denis, 2014; Wang et al., 2012). 
Most of the existing work discussed so far is based on the PDTB dataset, which targets formal texts like news, making it less suitable for our task which is centered around informal dialogue. Related work on discourse relation annotation in a dialogue corpus is limited (Stent, 2000; Tonelli et al., 2010). For example Tonelli et al. (2010) annotated the Luna corpus,3 which does not include English annotations. To our knowledge there is no English dialogue-based corpus with implicit discourse relation labels, as such research specifically targeting a discourse relation identification model for social open-domain dialogue remains unexplored. 3 Dataset Construction Previous work on discourse relation identification suggests that the most effective approach is supervised learning, but limited amounts of annotated data constrain the application of such algorithms. Previous work has additionally proven that weakly labeled data, which contains a small number of false labels and can be generated automatically, helps improve classifier performance with implicit relations (Rutherford and Xue, 2015). We therefore constructed Edina-DR, the novel dataset of discourse relation pairs based on the publicly available self-dialogue Edina corpus which contains 24,165 multi-turn social conversations across 23 topics (Fainberg et al., 2018; Krause et al., 2017).4 To the best of our knowledge, this is the first English discourse relation dataset based on open-domain dialogues. The Edina dataset initially contains no discourse relation labels. Inspired by the approaches taken to automatically extend PDTB, we designed a pipeline to extract discourse relation argument pairs through utilizing the connective words which are known as clear relation indicators. The pipeline automatically extracts argument pairs and assign discourse relation labels to each of the utterances. We 3EU FP6 contract No. 33549, http://www. ist-luna.eu/ 4The Edina dataset is publicly available at https://github.com/jfainberg/self_ dialogue_corpus 668 then have humans annotate a small sample of the data in order to validate the automated pipeline. Our pipeline targets the four level-1 discourse relations, i.e., “Comparison”, “Expansion”, “Contingency” and “Temporal”. We obtained this initial connectives pool according to statistical analysis of connective frequencies in PDTB conducted by Pitler et al. (2008), in which we only consider connectives which are strongly associated (probability > 95%) with only one class of relation.5 For example, we exclude the connective word “since” because it may often appear as an indicator of either a “Temporal” or “Contingency” relation. Secondly, some connectives cannot be removed without changing the original meaning (Sporleder and Lascarides, 2008). We follow the method proposed by Rutherford and Xue (2015) to identify the connectives which are freely omissible by measuring the Omissible Rate and Context Differential. Since we need some manually labeled connectives for this task, we implement the connective selection on the PDTB dataset and generalize the selection result to the dialogue dataset. 
The selected connectives include:

• Comparison: but, however, although, by contrast
• Contingency: because, so, thus, as a result, consequently, therefore
• Expansion: also, for example, in addition, instead, indeed, moreover, for instance, in fact, furthermore, or, and
• Temporal: then, previously, earlier, later, after, before

The third step is to select conversations that match predefined patterns built around the selected connective words shown above. Inspired by Braud and Denis (2014) and Marcu and Echihabi (2002), we use two patterns: (Arg 1) (connective) (Arg 2), and (Arg 1). (Connective), (Arg 2). In other words, we have one pattern for when a connective appears in the middle of an utterance, and another for when a connective links two arguments in adjacent utterances across separate turns. Finally, we define several heuristic rules, also applied in previous work (Braud and Denis, 2014), to filter out low-quality pairs. The program only accepts full-sentence arguments, and we use certain POS tags for particular connectives to make sure the connectives function as relation indicators. A segment window is defined so that our method only picks the closest phrases or sub-sentences if the whole conversation contains several sentences.

For example, in the sentence “they had a $5 off the price, so i bought it.”, the connective “so” is identified in the list of connective words for the “Contingency” relation and the sentence matches our pattern 1. We therefore convert this sentence into a “Contingency” discourse relation pair whose two arguments are “they had a $5 off the price” and “i bought it”.

                              Edina-DR    PDTB
 # pairs of all relations        27998   11734
 avg # words of arg 1              7.1    18.8
 avg # words of arg 2              7.3    19.4
 # pairs of ‘Comparison’         20823    1799
 # pairs of ‘Contingency’         5080    2243
 # pairs of ‘Expansion’           1580    6933
 # pairs of ‘Temporal’             452     759

Table 1: Statistics of the extracted dataset Edina-DR

The statistics of the extracted dialogue discourse relation pair dataset Edina-DR are shown in Table 1. The new dataset contains more than twice as many pairs as PDTB, which should prove useful for machine learning. We note that the distribution of discourse relations in the Edina-DR dataset is different from PDTB. Most of the pairs belong to the “Comparison” relation, which is a natural way to structure dialogue. The number of “Temporal” pairs, however, is smaller; one possible explanation is that people do not often use connective words in dialogue when talking about time-related events. These differences highlight the need for this work, as it is clear that human dialogue is in fact structured differently from more formal non-dialogic text.

We had an expert annotator label discourse relations for 400 samples from the extracted dataset. 12% of the samples do not form a discourse relation, which is probably due to failures of the automatic extraction program to catch particular linguistic structures. 88% of the samples that do hold relations match the human-annotated relation labels, which demonstrates the reliability of our proposed extraction method.
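To make the extraction step concrete, the sketch below applies pattern 1 to a single utterance using a heavily abridged connective map. The POS checks, full-sentence constraint, and segment window described above are omitted, so this is an illustration of the idea rather than the actual pipeline code.

```python
import re

# Abridged connective-to-relation map; the full lists appear above.
CONNECTIVE_RELATIONS = {
    "but": "Comparison",
    "so": "Contingency",
    "also": "Expansion",
    "then": "Temporal",
}

def extract_pair(utterance):
    """Apply pattern 1, (Arg 1) (connective) (Arg 2), to one utterance.

    Returns (relation, arg1, arg2) or None. The real pipeline additionally
    checks POS tags, requires full-sentence arguments, and restricts the
    search to a segment window; those filters are omitted here.
    """
    for connective, relation in CONNECTIVE_RELATIONS.items():
        pattern = (r"^(?P<arg1>.+?)[,;]?\s+" + re.escape(connective)
                   + r"\s+(?P<arg2>.+?)[.!?]?$")
        match = re.match(pattern, utterance.strip(), flags=re.IGNORECASE)
        if match:
            return relation, match.group("arg1").strip(), match.group("arg2").strip()
    return None

print(extract_pair("they had a $5 off the price, so i bought it."))
# -> ('Contingency', 'they had a $5 off the price', 'i bought it')
```

Run on the worked example above, the sketch recovers the same “Contingency” pair and the same two arguments as the described pipeline.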
4 Model

We propose the novel approach of applying the unique dialogue features encapsulated in the state space of a real, deployed dialogue system to enhance discourse relation identification. We first use a feature-based classifier for feature selection, and then explore the feasibility of utilizing an existing deep learning model for the dialogue discourse relation identification task.

4.1 Feature-based Classifier

We extract dialogue features using the Natural Language Understanding (NLU) capabilities in SlugBot, a deployed open-domain dialogue system (Bowden et al., 2018a). These features are normally used for dialogue management and content retrieval. We input raw argument pairs into the NLU pipeline and obtain dialogue features, which are then fed as one-hot vectors to a logistic regression classifier. A full dialogue feature vector contains 448 features. The dialogue features include:

Dialogue Act: The act of a dialogue utterance is obtained using the NPS dialogue act classifier (Forsyth and Martell, 2007). There are 15 different dialogue acts, including GREET, CLARIFY, and STATEMENT. The full list of dialogue acts is described in Forsyth and Martell (2007).

Sentiment: The sentiment of a dialogue utterance is obtained from the Stanford CoreNLP Toolkit (Manning et al., 2014); there are five possible sentiment values: VERY POSITIVE, POSITIVE, NEUTRAL, NEGATIVE, and VERY NEGATIVE.

Intent: An utterance intent ontology consisting of 33 discrete intents is developed and recognized using heuristics and a trained model. It is designed to capture utterance intent without conversational context, so only the input utterance is considered for intent detection. Some sample intents are REQUEST OPINION, REQUEST SERVICE, and REQUEST CHANGE TOPIC. The model is trained on a subset of the Common Alexa Prize Chats (CAPC) dataset with roughly 50K utterances, and it ensembles a Recurrent Neural Network and a Convolutional Neural Network (Ram et al., 2018).

Topic: The topic of the utterance is obtained using the CoBot (Conversational Bot) toolkit topic classification model (Khatri et al., 2018), which is a Deep Average network BiLSTM model. The model is trained on over 120,000 utterances labeled across 22 topics, including commonly discussed topics such as POLITICS, FASHION, SPORTS, SCIENCE AND TECHNOLOGY, and MUSIC.

Core Entities Types: We use SlugNERDS to detect named entities (Bowden et al., 2018b, 2017). SlugNERDS is specialized for open-domain dialogue interactions: it can sift through noisy user data, and it uses the constantly updated Google Knowledge Graph6 to remain aware of even the latest named entities. Both of these points are vital for understanding social chit-chat. We only consider the types of the detected entities as features rather than the entities themselves. We use standard schema.org types, of which there are 614 in total. For example, if SlugNERDS detects “Cam Newton”, which is an entity with type PERSON, then PERSON is used as the feature.

6 https://developers.google.com/knowledge-graph/
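A minimal sketch of this setup is shown below, assuming a placeholder `nlu` object whose method names stand in for SlugBot's NLU components (they are hypothetical, not the system's real API); DictVectorizer performs the one-hot encoding of the categorical dialogue features before logistic regression.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def dialogue_features(arg1, arg2, nlu):
    """Map an argument pair to categorical dialogue features.

    `nlu` stands in for the SlugBot NLU pipeline; the keys mirror the
    feature types described above, but the call names are hypothetical.
    """
    text = arg1 + " " + arg2
    return {
        "dialogue_act": nlu.dialogue_act(text),  # e.g. "STATEMENT"
        "sentiment": nlu.sentiment(text),        # e.g. "NEGATIVE"
        "intent": nlu.intent(text),              # e.g. "REQUEST_OPINION"
        "topic": nlu.topic(text),                # e.g. "SPORTS"
        "entity_type": nlu.entity_type(text),    # e.g. "Person"
    }

# DictVectorizer one-hot encodes the categorical features (448 dimensions
# in the full system); logistic regression predicts the level-1 relation.
model = make_pipeline(DictVectorizer(sparse=True),
                      LogisticRegression(max_iter=1000))
# model.fit([dialogue_features(a1, a2, nlu) for a1, a2 in train_pairs],
#           train_labels)
```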
4.2 Deep Learning Model with Dialogue Features

To investigate the adaptability of existing discourse relation identification models to dialogue data and our proposed features, we build on the Deep Enhanced Representation (DER) model of Bai and Zhao (2018),7 which demonstrated its effectiveness by achieving the current state-of-the-art performance on the PDTB dataset. It utilizes text representations of different granularities, including character, sub-word, word, sentence, and sentence-pair levels, with embeddings obtained by ELMo (Peters et al., 2018). The model first generates representations for the argument pairs using an encoder and a bi-attention module; these are then sent to the classifier, consisting of multi-layer perceptrons with softmax, to predict the discourse relation. We take the DER design and architecture and train it on the Edina-DR dataset to evaluate the adaptability of an existing model in the dialogue setting. We then explore a variant of this model that concatenates the dialogue feature vectors with the argument-pair representation vector to extend the representation. We encode all dialogue features with the same method as the feature-based classifier. Informed by the preceding experiments, we use the best feature combination for the dialogue feature vectors.

7 The authors' original implementation can be found at https://github.com/hxbai/Deep_Enhanced_Repr_for_IDRR

5 Evaluation and Analysis

For the following experiments, we randomly selected 400 samples to be used as the test set, with discourse relation labels annotated by an expert. We repeat the experiments five times and take the average score as the final reported result.

5.1 Feature-based Classifier and Dialogue Feature Selection

We first analyze the performance of the feature-based model with different feature combinations, shown in Table 2.

Features           Precision   Recall    F1
DIALOGUE ACT            0.64     0.69   0.66
INTENT                  0.63     0.74   0.68
TOPICS                  0.62     0.71   0.66
SENTIMENT               0.56     0.74   0.64
ENTITIES TYPES          0.63     0.74   0.68
All                     0.63     0.65   0.64
All - SENTIMENT         0.64     0.73   0.68

Table 2: Feature-based Model Evaluation

Among single dialogue features, INTENT and ENTITIES TYPES provide the largest performance boost, which demonstrates the effectiveness of using intent and entity types for discourse relation identification. The other three features maintain a similar level of performance, except for a large drop in precision with SENTIMENT. One possible explanation is that our sentiment classification results are obtained using the Sentiment Annotator from the Stanford CoreNLP Toolkit, which is trained on a movie review corpus (Manning et al., 2014; Socher et al., 2013); such training data is not well suited to our dialogue corpus for this task. From Table 2, we see that the best configuration includes all of our dialogue features except SENTIMENT.

5.2 Deep Learning Models

Table 3 reports the results of our experiments, where DER represents our baseline model. We use the default parameters for the DER models. We also show the result of the DER model trained and tested on the PDTB dataset for comparison, marked as “DER (PDTB)”.

Model                        Acc.     F1
DER (PDTB)                   0.61   0.51
Logistic Reg. (Edina-DR)     0.64   0.68
DER (Edina-DR)               0.80   0.76
DER+Dialogue (Edina-DR)      0.81   0.77

Table 3: Performance of Deep Learning Models (dataset name is shown in parentheses)

The first observation is that the DER model performs surprisingly well on the new dialogue discourse relation dataset Edina-DR, with an F1 score of 0.76 (p-value of 0.008), which demonstrates its strong adaptability to the task of discourse relation identification in dialogues. Compared to the same DER model on PDTB, the large gap in F1 score shows the difference between formal and informal data. We also find that the model with dialogue features improves performance by 1% in F1 score (p-value 0.006), which indicates the potential of using dialogue features to further enhance discourse relation identification models.
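As an illustration of the DER+Dialogue variant evaluated above, the sketch below concatenates a dialogue feature vector with an argument-pair representation before the final classification layers. The dimensions and layer sizes are placeholders chosen for the sketch, not those of the original DER implementation.

```python
import torch
import torch.nn as nn

class RelationClassifierWithDialogueFeatures(nn.Module):
    """Sketch of extending the argument-pair representation with dialogue
    features: the vector produced by the DER encoder/bi-attention stack is
    concatenated with the one-hot dialogue feature vector, then classified."""

    def __init__(self, pair_repr_dim=512, dialogue_feat_dim=448, num_relations=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(pair_repr_dim + dialogue_feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_relations),
        )

    def forward(self, pair_repr, dialogue_feats):
        # Extend the representation, then predict the level-1 relation.
        extended = torch.cat([pair_repr, dialogue_feats], dim=-1)
        return self.classifier(extended)  # logits over the 4 relations
```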
6 Conclusion and Future Work

In this paper, we proposed a novel pipeline specifically designed for implicit discourse relation identification in open-domain dialogue. We constructed a novel dataset of discourse relation pairs for dialogue conversations, and utilized unique dialogue features to enhance the performance of a state-of-the-art classifier. Our experiments show that dialogue intent and entity types play important roles, and that dialogue features can increase the performance of the discourse relation identification model. Since implicit discourse relation identification is a key task for dialogue systems, there are still many approaches worth investigating in future work. More sophisticated dialogue features and classification algorithms are needed for the discourse relation identification task, in addition to a larger, more balanced corpus.

References

Hongxiao Bai and Hai Zhao. 2018. Deep enhanced representation for implicit discourse relation recognition. In Proceedings of the 27th International Conference on Computational Linguistics, pages 571–583.

Kevin K. Bowden, Shereen Oraby, Jiaqi Wu, Amita Misra, and Marilyn Walker. 2017. Combining search with structured data to create a more engaging user experience in open domain dialogue. arXiv preprint arXiv:1709.05411.

Kevin K. Bowden, Jiaqi Wu, Wen Cui, Juraj Juraska, Vrindavan Harrison, Brian Schwarzmann, Nick Santer, and Marilyn Walker. Slugbot: Developing a computational model and framework of a novel dialogue genre.

Kevin K. Bowden, Jiaqi Wu, Shereen Oraby, Amita Misra, and Marilyn Walker. 2018a. Slugbot: An application of a novel and scalable open domain socialbot framework. arXiv preprint arXiv:1801.01531.

Kevin K. Bowden, Jiaqi Wu, Shereen Oraby, Amita Misra, and Marilyn Walker. 2018b. Slugnerds: A named entity recognition tool for open domain dialogue systems. arXiv preprint arXiv:1805.03784.

Chloé Braud and Pascal Denis. 2014. Combining natural and artificial examples to improve implicit discourse relation identification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1694–1705.

Zeyu Dai and Ruihong Huang. 2018. Improving implicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 141–151.

Joachim Fainberg, Ben Krause, Mihai Dobre, Marco Damonte, Emmanuel Kahembwe, Daniel Duma, Bonnie Webber, and Federico Fancellu. 2018. Talking to myself: Self-dialogues as data for conversational agents. arXiv preprint arXiv:1809.06641.

Eric N. Forsyth and Craig H. Martell. 2007. Lexical and discourse analysis of online chat dialog. In International Conference on Semantic Computing (ICSC 2007), pages 19–26. IEEE.

Fengyu Guo, Ruifang He, Di Jin, Jianwu Dang, Longbiao Wang, and Xiangang Li. 2018. Implicit discourse relation recognition using neural tensor network with interactive attention and sparse learning. In Proceedings of the 27th International Conference on Computational Linguistics, pages 547–558.

Yanyan Jia, Yuan Ye, Yansong Feng, Yuxuan Lai, Rui Yan, and Dongyan Zhao. 2018. Modeling discourse cohesion for discourse parsing via memory network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 438–443.
Chandra Khatri, Behnam Hedayatnia, Anu Venkatesh, Jeff Nunn, Yi Pan, Qing Liu, Han Song, Anna Gottardi, Sanjeev Kwatra, Sanju Pancholi, et al. 2018. Advancing the state of the art in open domain dialog systems through the Alexa Prize. arXiv preprint arXiv:1812.10757.

Ben Krause, Marco Damonte, Mihai Dobre, Daniel Duma, Joachim Fainberg, Federico Fancellu, Emmanuel Kahembwe, Jianpeng Cheng, and Bonnie Webber. 2017. Edina: Building an open domain socialbot with self-dialogues. arXiv preprint arXiv:1709.09816.

Junyi Jessy Li and Ani Nenkova. 2014. Addressing class imbalance for improved recognition of implicit discourse relations. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 142–150.

Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 343–351. Association for Computational Linguistics.

Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60.

Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.

Eleni Miltsakaki, Livio Robaldo, Alan Lee, and Aravind Joshi. 2008. Sense annotation in the Penn Discourse Treebank. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 275–286. Springer.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL.

Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind K. Joshi. 2008. Easily identifiable discourse relations. Technical Reports (CIS), page 884.

Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation.

Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. A stacking gated neural architecture for implicit discourse relation classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2263–2270.

Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational AI: The science behind the Alexa Prize. arXiv preprint arXiv:1801.03604.

Attapol Rutherford and Nianwen Xue. 2015. Improving the inference of implicit discourse relations via classifying explicit discourse connectives. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 799–808.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642.
Caroline Sporleder and Alex Lascarides. 2008. Using automatically labelled examples to classify rhetorical relations: An assessment. Natural Language Engineering, 14(3):369–416.

Amanda Stent. 2000. Rhetorical structure in dialog. In INLG'2000, Proceedings of the First International Conference on Natural Language Generation.

Sara Tonelli, Giuseppe Riccardi, Rashmi Prasad, and Aravind K. Joshi. 2010. Annotation of discourse relations for conversational spoken dialogs. In LREC.

Xun Wang, Sujian Li, Jiwei Li, and Wenjie Li. 2012. Implicit discourse relation recognition by selecting typical training examples. In Proceedings of COLING 2012, pages 2757–2772.

Ben Wellner, James Pustejovsky, Catherine Havasi, Anna Rumshisky, and Roser Sauri. 2006. Classification of discourse coherence relations: An exploratory study using multiple knowledge sources. In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, pages 117–125.

Yang Xu, Yu Hong, Huibin Ruan, Jianmin Yao, Min Zhang, and Guodong Zhou. 2018. Using active learning to expand training data for implicit discourse relation recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 725–731.